Real-time simulation of the TF30-P-3 turbofan engine using a hybrid computer
NASA Technical Reports Server (NTRS)
Szuch, J. R.; Bruton, W. M.
1974-01-01
A real-time, hybrid-computer simulation of the TF30-P-3 turbofan engine was developed. The simulation was primarily analog in nature but used the digital portion of the hybrid computer to perform bivariate function generation associated with the performance of the engine's rotating components. FORTRAN listings and analog patching diagrams are provided. The hybrid simulation was controlled by a digital computer programmed to simulate the engine's standard hydromechanical control. Both steady-state and dynamic data obtained from the digitally controlled engine simulation are presented. Hybrid simulation data are compared with data obtained from a digital simulation provided by the engine manufacturer. The comparisons indicate that the real-time hybrid simulation adequately matches the baseline digital simulation.
Protocols for Handling Messages Between Simulation Computers
NASA Technical Reports Server (NTRS)
Balcerowski, John P.; Dunnam, Milton
2006-01-01
Practical Simulator Network (PSimNet) is a set of data-communication protocols designed especially for use in handling messages between computers that are engaging cooperatively in real-time or nearly-real-time training simulations. In a typical application, computers that provide individualized training at widely dispersed locations would communicate, by use of PSimNet, with a central host computer that would provide a common computational-simulation environment and common data. Originally intended for use in supporting interfaces between training computers and computers that simulate the responses of spacecraft scientific payloads, PSimNet could be especially well suited for a variety of other applications -- for example, group automobile-driver training in a classroom. Another potential application might lie in networking of automobile-diagnostic computers at repair facilities to a central computer that would compile the expertise of numerous technicians and engineers and act as an expert consulting technician.
NeuroManager: a workflow analysis based simulation management engine for computational neuroscience
Stockton, David B.; Santamaria, Fidel
2015-01-01
We developed NeuroManager, an object-oriented simulation management software engine for computational neuroscience. NeuroManager automates the workflow of simulation job submissions when using heterogeneous computational resources, simulators, and simulation tasks. The object-oriented approach (1) provides flexibility to adapt to a variety of neuroscience simulators, (2) simplifies the use of heterogeneous computational resources, from desktops to supercomputer clusters, and (3) improves tracking of simulator/simulation evolution. We implemented NeuroManager in MATLAB, a widely used engineering and scientific language, for its signal and image processing tools, prevalence in electrophysiology analysis, and increasing use in college biology education. To design and develop NeuroManager we analyzed the workflow of simulation submission for a variety of simulators, operating systems, and computational resources, including the handling of input parameters, data, models, results, and analyses. This resulted in 22 stages of simulation submission workflow. The software incorporates progress notification, automatic organization, labeling, and time-stamping of data and results, and integrated access to MATLAB's analysis and visualization tools. NeuroManager provides users with the tools to automate daily tasks, and assists principal investigators in tracking and recreating the evolution of research projects performed by multiple people. Overall, NeuroManager provides the infrastructure needed to improve workflow, manage multiple simultaneous simulations, and maintain provenance of the potentially large amounts of data produced during the course of a research project. PMID:26528175
Software for Brain Network Simulations: A Comparative Study
Tikidji-Hamburyan, Ruben A.; Narayana, Vikram; Bozkus, Zeki; El-Ghazawi, Tarek A.
2017-01-01
Numerical simulations of brain networks are a critical part of our efforts in understanding brain functions under pathological and normal conditions. For several decades, the community has developed many software packages and simulators to accelerate research in computational neuroscience. In this article, we select the three most popular simulators, as determined by the number of models in the ModelDB database -- NEURON, GENESIS, and BRIAN -- and perform an independent evaluation of these simulators. In addition, we study NEST, one of the lead simulators of the Human Brain Project. First, we study them based on one of the most important characteristics, the range of supported models. Our investigation reveals that brain network simulators may be biased toward supporting a specific set of models. However, all simulators tend to expand the supported range of models by providing a universal environment for the computational study of individual neurons and brain networks. Next, our investigations on the characteristics of computational architecture and efficiency indicate that all simulators compile the most computationally intensive procedures into binary code, with the aim of maximizing their computational performance. However, not all simulators provide the simplest method for module development and/or guarantee efficient binary code. Third, a study of their amenability for high-performance computing reveals that NEST can almost transparently map an existing model on a cluster or multicore computer, while NEURON requires code modification if the model developed for a single computer has to be mapped on a computational cluster. Interestingly, parallelization is the weakest characteristic of BRIAN, which provides no support for cluster computations and limited support for multicore computers. Fourth, we identify the level of user support and frequency of usage for all simulators. Finally, we carry out an evaluation using two case studies: a large network with simplified neural and synaptic models and a small network with detailed models. These two case studies allow us to avoid any bias toward a particular software package. The results indicate that BRIAN provides the most concise language for both cases considered. Furthermore, as expected, NEST mostly favors large network models, while NEURON is better suited for detailed models. Overall, the case studies reinforce our general observation that simulators have a bias in computational performance toward specific types of brain network models. PMID:28775687
Computer simulation of on-orbit manned maneuvering unit operations
NASA Technical Reports Server (NTRS)
Stuart, G. M.; Garcia, K. D.
1986-01-01
Simulation of spacecraft on-orbit operations is discussed in reference to Martin Marietta's Space Operations Simulation laboratory's use of computer software models to drive a six-degree-of-freedom moving base carriage and two target gimbal systems. In particular, key simulation issues and related computer software models associated with providing real-time, man-in-the-loop simulations of the Manned Maneuvering Unit (MMU) are addressed with special attention given to how effectively these models and motion systems simulate the MMU's actual on-orbit operations. The weightless effects of the space environment require the development of entirely new devices for locomotion. Since the access to space is very limited, it is necessary to design, build, and test these new devices within the physical constraints of earth using simulators. The simulation method that is discussed here is the technique of using computer software models to drive a Moving Base Carriage (MBC) that is capable of providing simultaneous six-degree-of-freedom motions. This method, utilized at Martin Marietta's Space Operations Simulation (SOS) laboratory, provides the ability to simulate the operation of manned spacecraft, provides the pilot with proper three-dimensional visual cues, and allows training of on-orbit operations. The purpose here is to discuss significant MMU simulation issues, the related models that were developed in response to these issues, and how effectively these models simulate the MMU's actual on-orbit operations.
Simulating Laboratory Procedures.
ERIC Educational Resources Information Center
Baker, J. E.; And Others
1986-01-01
Describes the use of computer assisted instruction in a medical microbiology course. Presents examples of how computer assisted instruction can present case histories in which the laboratory procedures are simulated. Discusses an authoring system used to prepare computer simulations and provides one example of a case history dealing with fractured…
[Animal experimentation, computer simulation and surgical research].
Carpentier, Alain
2009-11-01
We live in a digital world. In medicine, computers are providing new tools for data collection, imaging, and treatment. During research and development of complex technologies and devices such as artificial hearts, computer simulation can provide more reliable information than experimentation on large animals. In these specific settings, animal experimentation should serve more to validate computer models of complex devices than to demonstrate their reliability.
Symplectic molecular dynamics simulations on specially designed parallel computers.
Borstnik, Urban; Janezic, Dusanka
2005-01-01
We have developed a computer program for molecular dynamics (MD) simulation that implements the Split Integration Symplectic Method (SISM) and is designed to run on specialized parallel computers. The MD integration is performed by the SISM, which analytically treats high-frequency vibrational motion and thus enables the use of longer simulation time steps. The low-frequency motion is treated numerically on specially designed parallel computers, which decreases the computational time of each simulation time step. The combination of these approaches means that fewer time steps are needed and each step requires less time, enabling fast MD simulations. We study the computational performance of MD simulation of molecular systems on specialized computers and provide a comparison to standard personal computers. The combination of the SISM with two specialized parallel computers is an effective way to increase the speed of MD simulations up to 16-fold over a single PC processor.
Monte Carlo Computer Simulation of a Rainbow.
ERIC Educational Resources Information Center
Olson, Donald; And Others
1990-01-01
Discusses making a computer-simulated rainbow using principles of physics, such as reflection and refraction. Provides BASIC program for the simulation. Appends a program illustrating the effects of dispersion of the colors. (YP)
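The BASIC listing itself is not reproduced in this record; purely as an illustration of the underlying calculation, here is a minimal Monte Carlo sketch in Python for the primary bow (the refractive index, sample size, and binning are assumptions, not values from the original program).

import numpy as np

n_water = 1.333                      # assumed refractive index of water
rng = np.random.default_rng(0)

# Sample impact parameters uniformly over the drop's cross-section (radius = 1)
b = np.sqrt(rng.random(100_000))
theta_i = np.arcsin(b)                               # angle of incidence
theta_r = np.arcsin(np.sin(theta_i) / n_water)       # refraction (Snell's law)

# Total deviation for rays with one internal reflection (primary rainbow)
deviation = np.degrees(np.pi + 2 * theta_i - 4 * theta_r)
scatter = 180.0 - deviation                          # angle from the antisolar point

# Rays pile up near the minimum-deviation angle (~42 degrees): the primary bow
hist, edges = np.histogram(scatter, bins=90, range=(0.0, 60.0))
print(f"brightest scattering angle is near {edges[hist.argmax()]:.1f} degrees")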
2007-11-05
limits of what is considered practical when applying all-atom molecular-dynamics simulation methods. Lattice models provide computationally robust... of expectation values from the density of states. All-atom molecular-dynamics simulations provide the most rigorous sampling method to generate con... molecular-dynamics simulations of protein folding [6-9], reported studies of computing a heat capacity or other calorimetric observables have been limited to
NASA Astrophysics Data System (ADS)
Smetana, Lara Kathleen; Bell, Randy L.
2012-06-01
Researchers have explored the effectiveness of computer simulations for supporting science teaching and learning during the past four decades. The purpose of this paper is to provide a comprehensive, critical review of the literature on the impact of computer simulations on science teaching and learning, with the goal of summarizing what is currently known and providing guidance for future research. We report on the outcomes of 61 empirical studies dealing with the efficacy of, and implications for, computer simulations in science instruction. The overall findings suggest that simulations can be as effective, and in many ways more effective, than traditional (i.e. lecture-based, textbook-based and/or physical hands-on) instructional practices in promoting science content knowledge, developing process skills, and facilitating conceptual change. As with any other educational tool, the effectiveness of computer simulations is dependent upon the ways in which they are used. Thus, we outline specific research-based guidelines for best practice. Computer simulations are most effective when they (a) are used as supplements; (b) incorporate high-quality support structures; (c) encourage student reflection; and (d) promote cognitive dissonance. Used appropriately, computer simulations involve students in inquiry-based, authentic science explorations. Additionally, as educational technologies continue to evolve, advantages such as flexibility, safety, and efficiency deserve attention.
Problem-Solving in the Pre-Clinical Curriculum: The Uses of Computer Simulations.
ERIC Educational Resources Information Center
Michael, Joel A.; Rovick, Allen A.
1986-01-01
Promotes the use of computer-based simulations in the pre-clinical medical curriculum as a means of providing students with opportunities for problem solving. Describes simple simulations of skeletal muscle loads, complex simulations of major organ systems and comprehensive simulation models of the entire human body. (TW)
Computer Simulation of the Circulation Subsystem of a Library
ERIC Educational Resources Information Center
Shaw, W. M., Jr.
1975-01-01
When circulation data are used as input parameters for a computer simulation of a library's circulation subsystem, the results of the simulation provide information on book availability and delays. The model may be used to simulate alternative loan policies. (Author/LS)
Performance Analysis of Cloud Computing Architectures Using Discrete Event Simulation
NASA Technical Reports Server (NTRS)
Stocker, John C.; Golomb, Andrew M.
2011-01-01
Cloud computing offers the economic benefit of on-demand resource allocation to meet changing enterprise computing needs. However, compared with traditional hosting, cloud computing is at a disadvantage in providing predictable application and service performance. Cloud computing relies on resource scheduling in a virtualized, network-centric server environment, which makes static performance analysis infeasible. We developed a discrete event simulation model to evaluate the overall effectiveness of organizations in executing their workflow in traditional and cloud computing architectures. The two-part model framework characterizes both the demand, using a probability distribution for each type of service request, and the enterprise computing resource constraints. Our simulations provide quantitative analysis to design and provision computing architectures that maximize overall mission effectiveness. We share our analysis of key resource constraints in cloud computing architectures and findings on the appropriateness of cloud computing in various applications.
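As a rough, hypothetical illustration of this kind of discrete event simulation (not the authors' model), a toy Python sketch of service requests arriving at a fixed pool of virtualized servers follows; the arrival and service distributions and the server count are placeholders.

import heapq
import random

random.seed(0)
ARRIVAL_MEAN, SERVICE_MEAN, SERVERS = 2.0, 5.0, 3   # assumed parameters (minutes)

def simulate(horizon=10_000.0):
    # Event list holds (time, kind) tuples; kind is "arrival" or "departure"
    events = [(random.expovariate(1 / ARRIVAL_MEAN), "arrival")]
    busy, queue, completed, total_wait = 0, [], 0, 0.0
    while events:
        t, kind = heapq.heappop(events)
        if t > horizon:
            break
        if kind == "arrival":
            heapq.heappush(events, (t + random.expovariate(1 / ARRIVAL_MEAN), "arrival"))
            if busy < SERVERS:
                busy += 1
                heapq.heappush(events, (t + random.expovariate(1 / SERVICE_MEAN), "departure"))
            else:
                queue.append(t)          # request waits for a free server
        else:
            completed += 1
            if queue:
                total_wait += t - queue.pop(0)
                heapq.heappush(events, (t + random.expovariate(1 / SERVICE_MEAN), "departure"))
            else:
                busy -= 1
    return completed, total_wait / max(completed, 1)

print(simulate())                        # (requests completed, mean wait per request)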
ERIC Educational Resources Information Center
Prince, Amber T.
Computer assisted instruction, and especially computer simulations, can help to ensure that preservice and inservice teachers learn from the right experiences. In the past, colleges of education used large mainframe computer systems to store student registration, provide simulation lessons on diagnosing reading difficulties, construct informal…
BeeSim: Leveraging Wearable Computers in Participatory Simulations with Young Children
ERIC Educational Resources Information Center
Peppler, Kylie; Danish, Joshua; Zaitlen, Benjamin; Glosson, Diane; Jacobs, Alexander; Phelps, David
2010-01-01
New technologies have enabled students to become active participants in computational simulations of dynamic and complex systems (called Participatory Simulations), providing a "first-person" perspective on complex systems. However, most existing Participatory Simulations have targeted older children, teens, and adults, assuming that such concepts…
NASA Astrophysics Data System (ADS)
Valasek, Lukas; Glasa, Jan
2017-12-01
Current fire simulation systems are capable of utilizing the advantages of available high-performance computing (HPC) platforms and of modeling fires efficiently in parallel. In this paper, the efficiency of a corridor fire simulation on an HPC computer cluster is discussed. The parallel MPI version of the Fire Dynamics Simulator is used for testing the efficiency of selected strategies for allocating the computational resources of the cluster using a greater number of computational cores. Simulation results indicate that if the number of cores used is not equal to a multiple of the total number of cluster node cores, there are allocation strategies which provide more efficient calculations.
Faster Aerodynamic Simulation With Cart3D
NASA Technical Reports Server (NTRS)
2003-01-01
A NASA-developed aerodynamic simulation tool is ensuring the safety of future space operations while providing designers and engineers with an automated, highly accurate computer simulation suite. Cart3D, co-winner of NASA's 2002 Software of the Year award, is the result of over 10 years of research and software development conducted by Michael Aftosmis and Dr. John Melton of Ames Research Center and Professor Marsha Berger of the Courant Institute at New York University. Cart3D offers a revolutionary approach to computational fluid dynamics (CFD), the computer simulation of how fluids and gases flow around an object of a particular design. By fusing technological advancements in diverse fields such as mineralogy, computer graphics, computational geometry, and fluid dynamics, the software provides a new industrial geometry processing and fluid analysis capability with unsurpassed automation and efficiency.
ERIC Educational Resources Information Center
Smetana, Lara Kathleen; Bell, Randy L.
2012-01-01
Researchers have explored the effectiveness of computer simulations for supporting science teaching and learning during the past four decades. The purpose of this paper is to provide a comprehensive, critical review of the literature on the impact of computer simulations on science teaching and learning, with the goal of summarizing what is…
A Computer Simulation of Community Pharmacy Practice for Educational Use.
Bindoff, Ivan; Ling, Tristan; Bereznicki, Luke; Westbury, Juanita; Chalmers, Leanne; Peterson, Gregory; Ollington, Robert
2014-11-15
To provide a computer-based learning method for pharmacy practice that is as effective as paper-based scenarios, but more engaging and less labor-intensive. We developed a flexible and customizable computer simulation of community pharmacy. Using it, the students would be able to work through scenarios which encapsulate the entirety of a patient presentation. We compared the traditional paper-based teaching method to our computer-based approach using equivalent scenarios. The paper-based group had 2 tutors while the computer group had none. Both groups were given a prescenario and postscenario clinical knowledge quiz and survey. Students in the computer-based group had generally greater improvements in their clinical knowledge score, and third-year students using the computer-based method also showed more improvements in history taking and counseling competencies. Third-year students also found the simulation fun and engaging. Our simulation of community pharmacy provided an educational experience as effective as the paper-based alternative, despite the lack of a human tutor.
ATLAS@Home: Harnessing Volunteer Computing for HEP
NASA Astrophysics Data System (ADS)
Adam-Bourdarios, C.; Cameron, D.; Filipčič, A.; Lancon, E.; Wu, W.; ATLAS Collaboration
2015-12-01
A recent common theme in HEP computing is the exploitation of opportunistic resources in order to provide the maximum statistics possible for Monte Carlo simulation. Volunteer computing has been used over the last few years in many other scientific fields and by CERN itself to run simulations of the LHC beams. The ATLAS@Home project was started to allow volunteers to run simulations of collisions in the ATLAS detector. So far many thousands of members of the public have signed up to contribute their spare CPU cycles for ATLAS, and there is potential for volunteer computing to provide a significant fraction of ATLAS computing resources. Here we describe the design of the project, the lessons learned so far and the future plans.
The Use of Computer Simulation Techniques in Educational Planning.
ERIC Educational Resources Information Center
Wilson, Charles Z.
Computer simulations provide powerful models for establishing goals, guidelines, and constraints in educational planning. They are dynamic models that allow planners to examine logical descriptions of organizational behavior over time as well as permitting consideration of the large and complex systems required to provide realistic descriptions of…
ERIC Educational Resources Information Center
Njoo, Melanie; de Jong, Ton
This paper contains the results of a study on the importance of discovery learning using computer simulations. The purpose of the study was to identify what constitutes discovery learning and to assess the effects of instructional support measures. College students were observed working with an assignment and a computer simulation in the domain of…
NASA Astrophysics Data System (ADS)
Develaki, Maria
2017-11-01
Scientific reasoning is particularly pertinent to science education since it is closely related to the content and methodologies of science and contributes to scientific literacy. Much of the research in science education investigates the appropriate framework and teaching methods and tools needed to promote students' ability to reason and evaluate in a scientific way. This paper aims (a) to contribute to an extended understanding of the nature and pedagogical importance of model-based reasoning and (b) to exemplify how using computer simulations can support students' model-based reasoning. We provide first a background for both scientific reasoning and computer simulations, based on the relevant philosophical views and the related educational discussion. This background suggests that the model-based framework provides an epistemologically valid and pedagogically appropriate basis for teaching scientific reasoning and for helping students develop sounder reasoning and decision-taking abilities and explains how using computer simulations can foster these abilities. We then provide some examples illustrating the use of computer simulations to support model-based reasoning and evaluation activities in the classroom. The examples reflect the procedure and criteria for evaluating models in science and demonstrate the educational advantages of their application in classroom reasoning activities.
NASA Technical Reports Server (NTRS)
Walker, Carrie K.
1991-01-01
A technique has been developed for combining features of a systems architecture design and assessment tool and a software development tool. This technique reduces simulation development time and expands simulation detail. The Architecture Design and Assessment System (ADAS), developed at the Research Triangle Institute, is a set of computer-assisted engineering tools for the design and analysis of computer systems. The ADAS system is based on directed graph concepts and supports the synthesis and analysis of software algorithms mapped to candidate hardware implementations. Greater simulation detail is provided by the ADAS functional simulator. With the functional simulator, programs written in either Ada or C can be used to provide a detailed description of graph nodes. A Computer-Aided Software Engineering tool developed at the Charles Stark Draper Laboratory (CSDL CASE) automatically generates Ada or C code from engineering block diagram specifications designed with an interactive graphical interface. A technique to use the tools together has been developed, which further automates the design process.
Computer modeling and simulation of human movement. Applications in sport and rehabilitation.
Neptune, R R
2000-05-01
Computer modeling and simulation of human movement plays an increasingly important role in sport and rehabilitation, with applications ranging from sport equipment design to understanding pathologic gait. The complex dynamic interactions within the musculoskeletal and neuromuscular systems make analyzing human movement with existing experimental techniques difficult, but computer modeling and simulation allows for the identification of these complex interactions and causal relationships between input and output variables. This article provides an overview of computer modeling and simulation and presents an example application in the field of rehabilitation.
Computer Simulation of a Hardwood Processing Plant
D. Earl Kline; Philip A. Araman
1990-01-01
The overall purpose of this paper is to introduce computer simulation as a decision support tool that can be used to provide managers with timely information. A simulation/animation modeling procedure is demonstrated for wood products manufacturing systems. Simulation modeling techniques are used to assist in identifying and solving problems. Animation is used for...
A Review of Computer Simulations in Teacher Education
ERIC Educational Resources Information Center
Bradley, Elizabeth Gates; Kendall, Brittany
2014-01-01
Computer simulations can provide guided practice for a variety of situations that pre-service teachers would not frequently experience during their teacher education studies. Pre-service teachers can use simulations to turn the knowledge they have gained in their coursework into real experience. Teacher simulation training has come a long way over…
Radio Frequency Mass Gauging of Propellants
NASA Technical Reports Server (NTRS)
Zimmerli, Gregory A.; Vaden, Karl R.; Herlacher, Michael D.; Buchanan, David A.; VanDresar, Neil T.
2007-01-01
A combined experimental and computer simulation effort was conducted to measure radio frequency (RF) tank resonance modes in a dewar partially filled with liquid oxygen, and compare the measurements with numerical simulations. The goal of the effort was to demonstrate that computer simulations of a tank's electromagnetic eigenmodes can be used to accurately predict ground-based measurements, thereby providing a computational tool for predicting tank modes in a low-gravity environment. Matching the measured resonant frequencies of several tank modes with computer simulations can be used to gauge the amount of liquid in a tank, thus providing a possible method to gauge cryogenic propellant tanks in low-gravity. Using a handheld RF spectrum analyzer and a small antenna in a 46 liter capacity dewar for experimental measurements, we have verified that the four lowest transverse magnetic eigenmodes can be accurately predicted as a function of liquid oxygen fill level using computer simulations. The input to the computer simulations consisted of tank dimensions, and the dielectric constant of the fluid. Without using any adjustable parameters, the calculated and measured frequencies agree such that the liquid oxygen fill level was gauged to within 2 percent full scale uncertainty. These results demonstrate the utility of using electromagnetic simulations to form the basis of an RF mass gauging technology with the power to simulate tank resonance frequencies from arbitrary fluid configurations.
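For orientation, the transverse magnetic resonances of an ideal closed cylindrical cavity of radius a and height d, uniformly filled with a dielectric of relative permittivity \epsilon_r, follow the standard cavity result (quoted here only for context, not taken from the report):

f_{mnp} = \frac{c}{2\pi\sqrt{\epsilon_r}} \sqrt{\left(\frac{x_{mn}}{a}\right)^2 + \left(\frac{p\pi}{d}\right)^2}

where x_{mn} is the n-th zero of the Bessel function J_m. A partially filled tank has no such closed-form expression, which is why the eigenmodes are computed numerically and matched against the measured frequencies.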
Launch Site Computer Simulation and its Application to Processes
NASA Technical Reports Server (NTRS)
Sham, Michael D.
1995-01-01
This paper provides an overview of computer simulation, the Lockheed developed STS Processing Model, and the application of computer simulation to a wide range of processes. The STS Processing Model is an icon driven model that uses commercial off the shelf software and a Macintosh personal computer. While it usually takes one year to process and launch 8 space shuttles, with the STS Processing Model this process is computer simulated in about 5 minutes. Facilities, orbiters, or ground support equipment can be added or deleted and the impact on launch rate, facility utilization, or other factors measured as desired. This same computer simulation technology can be used to simulate manufacturing, engineering, commercial, or business processes. The technology does not require an 'army' of software engineers to develop and operate, but instead can be used by the layman with only a minimal amount of training. Instead of making changes to a process and realizing the results after the fact, with computer simulation, changes can be made and processes perfected before they are implemented.
NASA Technical Reports Server (NTRS)
Lawrence, Charles; Putt, Charles W.
1997-01-01
The Visual Computing Environment (VCE) is a NASA Lewis Research Center project to develop a framework for intercomponent and multidisciplinary computational simulations. Many current engineering analysis codes simulate various aspects of aircraft engine operation. For example, existing computational fluid dynamics (CFD) codes can model the airflow through individual engine components such as the inlet, compressor, combustor, turbine, or nozzle. Currently, these codes are run in isolation, making intercomponent and complete system simulations very difficult to perform. In addition, management and utilization of these engineering codes for coupled component simulations is a complex, laborious task, requiring substantial experience and effort. To facilitate multicomponent aircraft engine analysis, the CFD Research Corporation (CFDRC) is developing the VCE system. This system, which is part of NASA's Numerical Propulsion Simulation System (NPSS) program, can couple various engineering disciplines, such as CFD, structural analysis, and thermal analysis. The objectives of VCE are to (1) develop a visual computing environment for controlling the execution of individual simulation codes that are running in parallel and are distributed on heterogeneous host machines in a networked environment, (2) develop numerical coupling algorithms for interchanging boundary conditions between codes with arbitrary grid matching and different levels of dimensionality, (3) provide a graphical interface for simulation setup and control, and (4) provide tools for online visualization and plotting. VCE was designed to provide a distributed, object-oriented environment. Mechanisms are provided for creating and manipulating objects, such as grids, boundary conditions, and solution data. This environment includes parallel virtual machine (PVM) for distributed processing. Users can interactively select and couple any set of codes that have been modified to run in a parallel distributed fashion on a cluster of heterogeneous workstations. A scripting facility allows users to dictate the sequence of events that make up the particular simulation.
Users guide for guidance and control Launch and Abort Simulation for Spacecraft (LASS), volume 1
NASA Technical Reports Server (NTRS)
Havig, T. F.; Backman, H. D.
1972-01-01
The mathematical models and computer program which are used to implement LASS are described. The computer program provides for a simulation of boost to orbit and abort capability from boost trajectories to a prescribed target. The abort target provides a decision point for engine shutdown from which the vehicle coasts to the vicinity of the selected abort recovery site. The simulation is a six-degree-of-freedom simulation of a rigid body. The vehicle is influenced by forces and moments from nondistributed aerodynamics. An adaptive autopilot is provided to control vehicle attitudes during powered and unpowered flight. A conventional autopilot is provided for study of the vehicle during powered flight.
Computer Simulation in Social Science.
ERIC Educational Resources Information Center
Garson, G. David
From a base in military models, computer simulation has evolved to provide a wide variety of applications in social science. General purpose simulation packages and languages such as FIRM, DYNAMO, and others have made significant contributions toward policy discussion in the social sciences and have well-documented efficacy in instructional…
Exact and efficient simulation of concordant computation
NASA Astrophysics Data System (ADS)
Cable, Hugo; Browne, Daniel E.
2015-11-01
Concordant computation is a circuit-based model of quantum computation for mixed states which assumes that all correlations within the register are discord-free (i.e. the correlations are essentially classical) at every step of the computation. The question of whether concordant computation always admits efficient simulation by a classical computer was first considered by Eastin in arXiv:quant-ph/1006.4402v1, where an answer in the affirmative was given for circuits consisting only of one- and two-qubit gates. Building on this work, we develop the theory of classical simulation of concordant computation. We present a new framework for understanding such computations, argue that a larger class of concordant computations admit efficient simulation, and provide alternative proofs for the main results of arXiv:quant-ph/1006.4402v1 with an emphasis on the exactness of simulation, which is crucial for this model. We include detailed analysis of the arithmetic complexity for solving equations in the simulation, as well as extensions to larger gates and qudits. We explore the limitations of our approach, and discuss the challenges faced in developing efficient classical simulation algorithms for all concordant computations.
Cardiovascular Physiology Teaching: Computer Simulations vs. Animal Demonstrations.
ERIC Educational Resources Information Center
Samsel, Richard W.; And Others
1994-01-01
At the introductory level, the computer provides an effective alternative to using animals for laboratory teaching. Computer software can simulate the operation of multiple organ systems. Advantages of software include alteration of variables that are not easily changed in vivo, repeated interventions, and cost-effective hands-on student access.…
Computer simulation of space charge
NASA Astrophysics Data System (ADS)
Yu, K. W.; Chung, W. K.; Mak, S. S.
1991-05-01
Using the particle-mesh (PM) method, a one-dimensional simulation of the well-known Langmuir-Child's law is performed on an INTEL 80386-based personal computer system. The program is coded in Turbo Basic (trademark of Borland International, Inc.). The numerical results obtained were in excellent agreement with theoretical predictions, and the computational time required is quite modest. This simulation exercise demonstrates that some simple computer simulations using particles may be implemented successfully on the PCs that are available today, and hopefully this will provide the necessary incentive for newcomers to the field who wish to acquire a flavor of the elementary aspects of the practice.
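The analytic benchmark behind this exercise is the space-charge-limited (Child-Langmuir) current density between parallel plates a distance d apart with potential difference V:

J = \frac{4\epsilon_0}{9}\sqrt{\frac{2e}{m}}\,\frac{V^{3/2}}{d^2}

so the simulated particle current can be checked directly against this analytic prediction.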
Computer Simulation Performed for Columbia Project Cooling System
NASA Technical Reports Server (NTRS)
Ahmad, Jasim
2005-01-01
This demo shows a high-fidelity simulation of the air flow in the main computer room housing the Columbia (10,024 Intel Itanium processors) system. The simulation assesses the performance of the cooling system, identifies deficiencies, and recommends modifications to eliminate them. It used two in-house software packages on NAS supercomputers: Chimera Grid Tools to generate a geometric model of the computer room, and the OVERFLOW-2 code for fluid and thermal simulation. This state-of-the-art technology can be easily extended to provide a general capability for air flow analyses on any modern computer room.
Development of a Computer Simulation Game Using a Reverse Engineering Approach
ERIC Educational Resources Information Center
Ozkul, Ahmet
2012-01-01
Business simulation games are widely used in the classroom to provide students with experiential learning opportunities on business situations in a dynamic fashion. When properly designed and implemented, the computer simulation game can be a useful educational tool by integrating separate theoretical concepts and demonstrating the nature of…
Developing Simulations in Multi-User Virtual Environments to Enhance Healthcare Education
ERIC Educational Resources Information Center
Rogers, Luke
2011-01-01
Computer-based clinical simulations are a powerful teaching and learning tool because of their ability to expand healthcare students' clinical experience by providing practice-based learning. Despite the benefits of traditional computer-based clinical simulations, there are significant issues that arise when incorporating them into a flexible,…
NASA Astrophysics Data System (ADS)
Mann, Christopher; Narasimhamurthi, Natarajan
1998-08-01
This paper discusses a specific implementation of a web- and component-based simulation system. The overall simulation container is implemented within a web page viewed with Microsoft's Internet Explorer 4.0 web browser. Microsoft's ActiveX/Distributed Component Object Model object interfaces are used in conjunction with the Microsoft DirectX graphics APIs to provide visualization functionality for the simulation. The MathWorks' Matlab computer-aided control system design program is used as an ActiveX automation server to provide the compute engine for the simulations.
Numerical propulsion system simulation
NASA Technical Reports Server (NTRS)
Lytle, John K.; Remaklus, David A.; Nichols, Lester D.
1990-01-01
The cost of implementing new technology in aerospace propulsion systems is becoming prohibitively expensive. One of the major contributors to the high cost is the need to perform many large scale system tests. Extensive testing is used to capture the complex interactions among the multiple disciplines and the multiple components inherent in complex systems. The objective of the Numerical Propulsion System Simulation (NPSS) is to provide insight into these complex interactions through computational simulations. This will allow for comprehensive evaluation of new concepts early in the design phase before a commitment to hardware is made. It will also allow for rapid assessment of field-related problems, particularly in cases where operational problems were encountered during conditions that would be difficult to simulate experimentally. The tremendous progress taking place in computational engineering and the rapid increase in computing power expected through parallel processing make this concept feasible within the near future. However, it is critical that the framework for such simulations be put in place now to serve as a focal point for the continued developments in computational engineering and computing hardware and software. The NPSS concept which is described will provide that framework.
A study of workstation computational performance for real-time flight simulation
NASA Technical Reports Server (NTRS)
Maddalon, Jeffrey M.; Cleveland, Jeff I., II
1995-01-01
With recent advances in microprocessor technology, some have suggested that modern workstations provide enough computational power to properly operate a real-time simulation. This paper presents the results of a computational benchmark, based on actual real-time flight simulation code used at Langley Research Center, which was executed on various workstation-class machines. The benchmark was executed on different machines from several companies including: CONVEX Computer Corporation, Cray Research, Digital Equipment Corporation, Hewlett-Packard, Intel, International Business Machines, Silicon Graphics, and Sun Microsystems. The machines are compared by their execution speed, computational accuracy, and porting effort. The results of this study show that the raw computational power needed for real-time simulation is now offered by workstations.
Massively parallel multicanonical simulations
NASA Astrophysics Data System (ADS)
Gross, Jonathan; Zierenberg, Johannes; Weigel, Martin; Janke, Wolfhard
2018-03-01
Generalized-ensemble Monte Carlo simulations such as the multicanonical method and similar techniques are among the most efficient approaches for simulations of systems undergoing discontinuous phase transitions or with rugged free-energy landscapes. As Markov chain methods, they are inherently serial computationally. It was demonstrated recently, however, that a combination of independent simulations that communicate weight updates at variable intervals allows for the efficient utilization of parallel computational resources for multicanonical simulations. Implementing this approach for the many-thread architecture provided by current generations of graphics processing units (GPUs), we show how it can be efficiently employed with of the order of 10^4 parallel walkers and beyond, thus constituting a versatile tool for Monte Carlo simulations in the era of massively parallel computing. We provide the fully documented source code for the approach applied to the paradigmatic example of the two-dimensional Ising model as starting point and reference for practitioners in the field.
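For readers unfamiliar with the method, the following is a serial, minimal Python sketch of multicanonical sampling for the 2D Ising model using a naive histogram-based weight recursion; it is illustrative only and is not the parallel GPU scheme of the paper, which instead runs many such walkers concurrently and merges their histograms at each weight update.

import numpy as np

rng = np.random.default_rng(1)
L = 16                                    # lattice size (assumed)
N = L * L
spins = rng.choice([-1, 1], size=(L, L))

def total_energy(s):
    return -int(np.sum(s * (np.roll(s, 1, 0) + np.roll(s, 1, 1))))

# Energies of the periodic 2D Ising model lie on a grid of spacing 4 in [-2N, 2N]
def idx(E):
    return (E + 2 * N) // 4

logw = np.zeros(N + 1)                    # multicanonical log-weights, initially flat
E = total_energy(spins)

for iteration in range(20):               # weight-update iterations
    hist = np.zeros_like(logw)
    for _ in range(200 * N):              # Metropolis updates with current weights
        i, j = rng.integers(L, size=2)
        dE = 2 * spins[i, j] * (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                                + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        if np.log(rng.random()) < logw[idx(E + dE)] - logw[idx(E)]:
            spins[i, j] *= -1
            E += dE
        hist[idx(E)] += 1
    visited = hist > 0                    # naive recursion: W(E) <- W(E) / H(E)
    logw[visited] -= np.log(hist[visited])
    logw -= logw.max()                    # fix the overall normalization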
Computer simulation modeling of recreation use: Current status, case studies, and future directions
David N. Cole
2005-01-01
This report compiles information about recent progress in the application of computer simulation modeling to planning and management of recreation use, particularly in parks and wilderness. Early modeling efforts are described in a chapter that provides an historical perspective. Another chapter provides an overview of modeling options, common data input requirements,...
Computer Simulation for Pain Management Education: A Pilot Study.
Allred, Kelly; Gerardi, Nicole
2017-10-01
Effective pain management is an elusive concept in acute care. Inadequate knowledge has been identified as a barrier to providing optimal pain management. This study aimed to determine student perceptions of an interactive computer simulation as a potential method for learning pain management, as a motivator to read and learn more about pain management, its preference over traditional lecture, and its potential to change nursing practice. A post-computer-simulation survey with a mixed-methods descriptive design was used in this study. The setting was a college of nursing in a large metropolitan university in the Southeast United States, with a convenience sample of 30 nursing students in a Bachelor of Science nursing program. An interactive computer simulation was developed as a potential alternative method of teaching pain management to nursing students. Increases in educational gain as well as its potential to change practice were explored. Each participant was asked to complete a survey consisting of 10 standard 5-point Likert scale items and 5 open-ended questions. The survey was used to evaluate the students' perception of the simulation, specifically related to educational benefit, preference compared with traditional teaching methods, and perceived potential to change nursing practice. Data provided descriptive statistics for initial evaluation of the computer simulation. The responses on the survey suggest nursing students perceive the computer simulation to be entertaining, fun, educational, occasionally preferred over regular lecture, and with potential to change practice. Preliminary data support the use of computer simulation in educating nursing students about pain management.
Efficient scatter model for simulation of ultrasound images from computed tomography data
NASA Astrophysics Data System (ADS)
D'Amato, J. P.; Lo Vercio, L.; Rubi, P.; Fernandez Vera, E.; Barbuzza, R.; Del Fresno, M.; Larrabide, I.
2015-12-01
Background and motivation: Real-time ultrasound simulation refers to the process of computationally creating fully synthetic ultrasound images instantly. Due to the high value of specialized low-cost training for healthcare professionals, there is a growing interest in the use of this technology and the development of high-fidelity systems that simulate the acquisition of echographic images. The objective is to create an efficient and reproducible simulator that can run either on notebooks or desktops using low-cost devices. Materials and methods: We present an interactive ultrasound simulator based on CT data. This simulator is based on ray-casting and provides real-time interaction capabilities. The simulation of scattering that is coherent with the transducer position in real time is also introduced. Such noise is produced using a simplified model of multiplicative noise and convolution with point spread functions (PSF) tailored for this purpose. Results: The computational efficiency of scattering-map generation was revised, with improved performance. This allowed a more efficient simulation of coherent scattering in the synthetic echographic images while providing highly realistic results. We describe quality and performance metrics to validate these results, where a performance of up to 55 fps was achieved. Conclusion: The proposed technique for real-time scattering modeling provides realistic yet computationally efficient scatter distributions. The error between the original image and the simulated scattering image was compared for the proposed method and the state of the art, showing negligible differences in its distribution.
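A rough sketch of the noise pipeline described (multiplicative noise followed by convolution with a point spread function) might look like the following in Python; the Gaussian PSF and the use of a raw CT-derived map as the scattering source are simplifying assumptions, not the authors' exact model.

import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(42)

def simulate_scatter(scatter_map, axial_sigma=1.0, lateral_sigma=2.5):
    # Multiplicative speckle-like noise shaped by an anisotropic Gaussian
    # standing in for the transducer point spread function (PSF)
    noise = rng.standard_normal(scatter_map.shape)
    return gaussian_filter(scatter_map * noise, sigma=(axial_sigma, lateral_sigma))

ct_slice = rng.random((256, 256))        # placeholder for a CT-derived scattering map
echo_texture = simulate_scatter(ct_slice)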
Parallel stochastic simulation of macroscopic calcium currents.
González-Vélez, Virginia; González-Vélez, Horacio
2007-06-01
This work introduces MACACO, a macroscopic calcium currents simulator. It provides a parameter-sweep framework which computes macroscopic Ca(2+) currents from the individual aggregation of unitary currents, using a stochastic model for L-type Ca(2+) channels. MACACO uses a simplified 3-state Markov model to simulate the response of each Ca(2+) channel to different voltage inputs to the cell. In order to provide an accurate systematic view for the stochastic nature of the calcium channels, MACACO is composed of an experiment generator, a central simulation engine and a post-processing script component. Due to the computational complexity of the problem and the dimensions of the parameter space, the MACACO simulation engine employs a grid-enabled task farm. Having been designed as a computational biology tool, MACACO heavily borrows from the way cell physiologists conduct and report their experimental work.
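To illustrate the kind of computation each task in such a farm performs, here is a minimal Python sketch that aggregates unitary currents from a 3-state Markov channel model; the state scheme, rate constants, and unitary current amplitude are placeholders rather than MACACO's actual parameters.

import numpy as np

rng = np.random.default_rng(7)

def transition_matrix(dt, k_co=0.5, k_oc=0.3, k_oi=0.1, k_io=0.05):
    # Hypothetical closed <-> open <-> inactivated scheme; rates in 1/ms
    Q = np.array([[-k_co,           k_co,        0.0],
                  [ k_oc, -(k_oc + k_oi),       k_oi],
                  [  0.0,           k_io,      -k_io]])
    return np.eye(3) + Q * dt            # first-order transition probabilities

def macroscopic_current(n_channels=1000, n_steps=2000, dt=0.01, i_unit=-0.3):
    P = transition_matrix(dt)
    state = np.zeros(n_channels, dtype=int)                 # all channels start closed
    current = np.zeros(n_steps)
    for t in range(n_steps):
        u = rng.random(n_channels)[:, None]
        state = (u > np.cumsum(P[state], axis=1)).sum(axis=1)   # stochastic update
        current[t] = i_unit * np.count_nonzero(state == 1)      # sum open-channel currents
    return current

trace = macroscopic_current()            # one macroscopic current trace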
Fiber Composite Sandwich Thermostructural Behavior: Computational Simulation
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Aiello, R. A.; Murthy, P. L. N.
1986-01-01
Several computational levels of progressive sophistication/simplification are described to computationally simulate composite sandwich hygral, thermal, and structural behavior. The computational levels of sophistication include: (1) three-dimensional detailed finite element modeling of the honeycomb, the adhesive and the composite faces; (2) three-dimensional finite element modeling of the honeycomb assumed to be an equivalent continuous, homogeneous medium, the adhesive and the composite faces; (3) laminate theory simulation where the honeycomb (metal or composite) is assumed to consist of plies with equivalent properties; and (4) derivations of approximate, simplified equations for thermal and mechanical properties by simulating the honeycomb as an equivalent homogeneous medium. The approximate equations are combined with composite hygrothermomechanical and laminate theories to provide a simple and effective computational procedure for simulating the thermomechanical/thermostructural behavior of fiber composite sandwich structures.
Frontiers in the Teaching of Physiology. Computer Literacy and Simulation.
ERIC Educational Resources Information Center
Tidball, Charles S., Ed.; Shelesnyak, M. C., Ed.
Provided is a collection of papers on computer literacy and simulation originally published in The Physiology Teacher, supplemented by additional papers and a glossary of terms relevant to the field. The 12 papers are presented in five sections. An affirmation of conventional physiology laboratory exercises, coping with computer terminology, and…
A Digital Computer Simulation of Cardiovascular and Renal Physiology.
ERIC Educational Resources Information Center
Tidball, Charles S.
1979-01-01
Presents the physiological MACPEE, one of a family of digital computer simulations used in Canada and Great Britain. A general description of the model is provided, along with a sample of computer output format, options for making interventions, advanced capabilities, an evaluation, and technical information for running a MAC model. (MA)
Cane Toad or Computer Mouse? Real and Computer-Simulated Laboratory Exercises in Physiology Classes
ERIC Educational Resources Information Center
West, Jan; Veenstra, Anneke
2012-01-01
Traditional practical classes in many countries are being rationalised to reduce costs. The challenge for university educators is to provide students with the opportunity to reinforce theoretical concepts by running something other than a traditional practical program. One alternative is to replace wet labs with comparable computer simulations.…
Quantum chemistry simulation on quantum computers: theories and experiments.
Lu, Dawei; Xu, Boruo; Xu, Nanyang; Li, Zhaokai; Chen, Hongwei; Peng, Xinhua; Xu, Ruixue; Du, Jiangfeng
2012-07-14
It has been claimed that quantum computers can mimic quantum systems efficiently, with polynomial resources. Traditionally, those simulations are carried out numerically on classical computers, which are inevitably confronted with the exponential growth of required resources as the size of the quantum system increases. Quantum computers avoid this problem, and thus provide a possible solution for large quantum systems. In this paper, we first discuss the ideas of quantum simulation, the background of quantum simulators, their categories, and the development in both theories and experiments. We then present a brief introduction to quantum chemistry evaluated via classical computers, followed by typical procedures of quantum simulation towards quantum chemistry. Reviewed are not only theoretical proposals but also proof-of-principle experimental implementations, via a small quantum computer, which include the evaluation of the static molecular eigenenergy and the simulation of chemical reaction dynamics. Although the experimental development is still behind the theory, we give prospects and suggestions for future experiments. We anticipate that in the near future quantum simulation will become a powerful tool for quantum chemistry over classical computations.
Learning English with "The Sims": Exploiting Authentic Computer Simulation Games for L2 Learning
ERIC Educational Resources Information Center
Ranalli, Jim
2008-01-01
With their realistic animation, complex scenarios and impressive interactivity, computer simulation games might be able to provide context-rich, cognitively engaging virtual environments for language learning. However, simulation games designed for L2 learners are in short supply. As an alternative, could games designed for the mass-market be…
YASS: A System Simulator for Operating System and Computer Architecture Teaching and Learning
ERIC Educational Resources Information Center
Mustafa, Besim
2013-01-01
A highly interactive, integrated and multi-level simulator has been developed specifically to support both the teachers and the learners of modern computer technologies at undergraduate level. The simulator provides a highly visual and user configurable environment with many pedagogical features aimed at facilitating deep understanding of concepts…
Computer-Aided Engineering Tools | Water Power | NREL
Computer-aided engineering tools for energy converters will provide a full range of simulation capabilities for single devices and arrays. Simulation of water power technologies on high-performance computers enables the study of complex systems and experimentation. Such simulation is critical to accelerate progress in energy programs within the U.S. Department of Energy.
Technology survey of computer software as applicable to the MIUS project
NASA Technical Reports Server (NTRS)
Fulbright, B. E.
1975-01-01
Existing computer software, available from either governmental or private sources, applicable to modular integrated utility system program simulation is surveyed. Several programs and subprograms are described to provide a consolidated reference, and a bibliography is included. The report covers the two broad areas of design simulation and system simulation.
ERIC Educational Resources Information Center
Felts, Renee R.
2013-01-01
As increasing numbers of students enroll in introductory computer application courses, instructors have difficulty providing the needed assistance in the traditional laboratory setting. Simulators have been used to facilitate college instruction, but the effectiveness of using a simulator in an introductory computer application course had not yet…
A Computer Model for Red Blood Cell Chemistry
1996-10-01
There is a growing need for interactive computational tools for medical education and research. The most exciting... paradigm for interactive education is simulation. Fluid Mod is a simulation-based computational tool developed in the late sixties and early seventies at... to a modern Windows, object-oriented interface. This development will provide students with a useful computational tool for learning. More important...
Condor-COPASI: high-throughput computing for biochemical networks
2012-01-01
Background: Mathematical modelling has become a standard technique to improve our understanding of complex biological systems. As models become larger and more complex, simulations and analyses require increasing amounts of computational power. Clusters of computers in a high-throughput computing environment can help to provide the resources required for computationally expensive model analysis. However, exploiting such a system can be difficult for users without the necessary expertise. Results: We present Condor-COPASI, a server-based software tool that integrates COPASI, a biological pathway simulation tool, with Condor, a high-throughput computing environment. Condor-COPASI provides a web-based interface, which makes it extremely easy for a user to run a number of model simulation and analysis tasks in parallel. Tasks are transparently split into smaller parts, and submitted for execution on a Condor pool. Result output is presented to the user in a number of formats, including tables and interactive graphical displays. Conclusions: Condor-COPASI can effectively use a Condor high-throughput computing environment to provide significant gains in performance for a number of model simulation and analysis tasks. Condor-COPASI is free, open source software, released under the Artistic License 2.0, and is suitable for use by any institution with access to a Condor pool. Source code is freely available for download at http://code.google.com/p/condor-copasi/, along with full instructions on deployment and usage. PMID:22834945
Simulation of DKIST solar adaptive optics system
NASA Astrophysics Data System (ADS)
Marino, Jose; Carlisle, Elizabeth; Schmidt, Dirk
2016-07-01
Solar adaptive optics (AO) simulations are a valuable tool to guide the design and optimization process of current and future solar AO and multi-conjugate AO (MCAO) systems. Solar AO and MCAO systems rely on extended-object cross-correlating Shack-Hartmann wavefront sensors to measure the wavefront. Accurate solar AO simulations require computationally intensive operations, which have until recently presented a prohibitive computational cost. We present an update on the status of a solar AO and MCAO simulation tool being developed at the National Solar Observatory. The simulation tool is a multi-threaded application written in the C++ language that takes advantage of current large multi-core CPU computer systems and fast Ethernet connections to provide accurate full simulation of solar AO and MCAO systems. It interfaces with KAOS, state-of-the-art solar AO control software developed by the Kiepenheuer-Institut fuer Sonnenphysik, which provides reliable AO control. We report on the latest results produced by the solar AO simulation tool.
Drawert, Brian; Trogdon, Michael; Toor, Salman; Petzold, Linda; Hellander, Andreas
2016-01-01
Computational experiments using spatial stochastic simulations have led to important new biological insights, but they require specialized tools and a complex software stack, as well as large and scalable compute and data analysis resources due to the large computational cost associated with Monte Carlo computational workflows. The complexity of setting up and managing a large-scale distributed computation environment to support productive and reproducible modeling can be prohibitive for practitioners in systems biology. This results in a barrier to the adoption of spatial stochastic simulation tools, effectively limiting the type of biological questions addressed by quantitative modeling. In this paper, we present PyURDME, a new, user-friendly spatial modeling and simulation package, and MOLNs, a cloud computing appliance for distributed simulation of stochastic reaction-diffusion models. MOLNs is based on IPython and provides an interactive programming platform for development of sharable and reproducible distributed parallel computational experiments.
Cloud-based simulations on Google Exacycle reveal ligand modulation of GPCR activation pathways
NASA Astrophysics Data System (ADS)
Kohlhoff, Kai J.; Shukla, Diwakar; Lawrenz, Morgan; Bowman, Gregory R.; Konerding, David E.; Belov, Dan; Altman, Russ B.; Pande, Vijay S.
2014-01-01
Simulations can provide tremendous insight into the atomistic details of biological mechanisms, but micro- to millisecond timescales are historically only accessible on dedicated supercomputers. We demonstrate that cloud computing is a viable alternative that brings long-timescale processes within reach of a broader community. We used Google's Exacycle cloud-computing platform to simulate two milliseconds of dynamics of a major drug target, the G-protein-coupled receptor β2AR. Markov state models aggregate independent simulations into a single statistical model that is validated by previous computational and experimental results. Moreover, our models provide an atomistic description of the activation of a G-protein-coupled receptor and reveal multiple activation pathways. Agonists and inverse agonists interact differentially with these pathways, with profound implications for drug design.
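To make the aggregation step concrete, the sketch below (a generic illustration, not the authors' actual pipeline) builds a Markov state model from several short trajectories that are assumed to be already discretized into integer state labels: transitions are counted at a fixed lag time, row-normalized into a transition matrix, and the stationary distribution is read off the leading left eigenvector.

# Minimal Markov state model construction from short, independent trajectories.
# Toy data and lag time are assumptions; real MSMs also involve clustering and
# validation steps not shown here.
import numpy as np

def build_msm(discrete_trajs, n_states, lag=1):
    """Count transitions at the given lag and row-normalize."""
    counts = np.zeros((n_states, n_states))
    for traj in discrete_trajs:
        for t in range(len(traj) - lag):
            counts[traj[t], traj[t + lag]] += 1
    # Row-normalize to obtain transition probabilities.
    return counts / counts.sum(axis=1, keepdims=True)

# Toy data: three short "simulations" over 3 conformational states.
trajs = [[0, 0, 1, 1, 2, 2, 2], [0, 1, 1, 2, 2, 1], [2, 2, 2, 1, 0, 0]]
T = build_msm(trajs, n_states=3)

# Stationary distribution: left eigenvector of T with eigenvalue 1.
evals, evecs = np.linalg.eig(T.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi /= pi.sum()
print(T, pi)

The point of the construction is that many short trajectories, run independently in the cloud, contribute transition counts to one statistical model of the slow dynamics.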
Inflight IFR procedures simulator
NASA Technical Reports Server (NTRS)
Parker, L. C. (Inventor)
1984-01-01
An inflight IFR procedures simulator for generating signals and commands to conventional instruments provided in an airplane is described. The simulator includes a signal synthesizer which, upon being activated, generates predetermined simulated signals corresponding to signals normally received from remote sources. A computer is connected to the signal synthesizer and causes it to produce simulated signals responsive to programs fed into the computer. A switching network is connected to the signal synthesizer, the antenna of the aircraft, and the navigational instruments and communication devices for selectively connecting the instruments and devices to the synthesizer and disconnecting the antenna from the navigational instruments and communication devices. Pressure transducers are connected to the altimeter and speed indicator for supplying electrical signals to the computer indicating the altitude and speed of the aircraft. A compass is connected to supply electrical signals to the computer indicating the heading of the airplane. The computer, upon receiving signals from the pressure transducers and compass, computes the signals that are fed to the signal synthesizer which, in turn, generates simulated navigational signals.
Advanced Simulation & Computing FY15 Implementation Plan Volume 2, Rev. 0.5
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCoy, Michel; Archer, Bill; Matzen, M. Keith
2014-09-16
The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources that support annual stockpile assessment and certification, study advanced nuclear weapons design and manufacturing processes, analyze accident scenarios and weapons aging, and provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balance of resources, including technical staff, hardware, simulation software, and computer science solutions. As the program approaches the end of its second decade, ASC is intently focused on increasing predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (sufficient resolution, dimensionality, and scientific details), quantify critical margins and uncertainties, and resolve increasingly difficult analyses needed for the SSP. Where possible, the program also enables the use of high-performance simulation and computing tools to address broader national security needs, such as foreign nuclear weapon assessments and counternuclear terrorism.
Dispersion in Spherical Water Drops.
ERIC Educational Resources Information Center
Eliason, John C., Jr.
1989-01-01
Discusses a laboratory exercise simulating the paths of light rays through spherical water drops by applying principles of ray optics and geometry. Describes four parts: determining the output angles, computer simulation, explorations, model testing, and solutions. Provides a computer program and some diagrams. (YP)
DISCRETE EVENT SIMULATION OF OPTICAL SWITCH MATRIX PERFORMANCE IN COMPUTER NETWORKS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Imam, Neena; Poole, Stephen W
2013-01-01
In this paper, we present application of a Discrete Event Simulator (DES) for performance modeling of optical switching devices in computer networks. Network simulators are valuable tools in situations where one cannot investigate the system directly. This situation may arise if the system under study does not exist yet or the cost of studying the system directly is prohibitive. Most available network simulators are based on the paradigm of discrete-event-based simulation. As computer networks become increasingly larger and more complex, sophisticated DES tool chains have become available for both commercial and academic research. Some well-known simulators are NS2, NS3, OPNET, and OMNEST. For this research, we have applied OMNEST for the purpose of simulating multi-wavelength performance of optical switch matrices in computer interconnection networks. Our results suggest that the application of DES to computer interconnection networks provides valuable insight into device performance and aids in topology and system optimization.
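The core mechanism underlying such discrete-event simulators can be shown in a few lines of Python (a generic sketch, not OMNEST code, which is a commercial C++ framework): a future event list ordered by timestamp drives a single-port switch model forward event by event, here with assumed Poisson arrivals and a fixed service time.

# Minimal discrete-event simulation of a single-port switch (FIFO queue).
# Arrival rate, service time, and the single-port abstraction are assumptions
# chosen only to illustrate the event-list mechanism.
import heapq
import random

def simulate_switch(arrival_rate, service_time, n_packets, seed=1):
    random.seed(seed)
    events, t = [], 0.0
    for _ in range(n_packets):                 # pre-schedule Poisson arrivals
        t += random.expovariate(arrival_rate)
        heapq.heappush(events, (t, "arrival"))

    queue, port_free_at, delays = [], 0.0, []
    while events:
        now, kind = heapq.heappop(events)
        if kind == "arrival":
            queue.append(now)
            if port_free_at <= now:            # port idle: start service now
                heapq.heappush(events, (now + service_time, "departure"))
                port_free_at = now + service_time
        else:                                  # departure: head-of-line packet leaves
            delays.append(now - queue.pop(0))
            if queue:                          # start serving the next packet
                heapq.heappush(events, (now + service_time, "departure"))
                port_free_at = now + service_time
    return sum(delays) / len(delays)

print("mean packet delay:", simulate_switch(arrival_rate=0.8, service_time=1.0, n_packets=10000))

With arrival rate 0.8 and unit service time, the measured mean delay approaches the M/D/1 queueing prediction of roughly 3 time units, which is the kind of device-level insight such simulations are used to extract.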
Computer simulation of functioning of elements of security systems
NASA Astrophysics Data System (ADS)
Godovykh, A. V.; Stepanov, B. P.; Sheveleva, A. A.
2017-01-01
The article addresses the development of an information complex for simulating the functioning of security system elements. The complex is described in terms of its main objectives, its design concept, and the interrelation of its main elements. The proposed computer simulation concept provides an opportunity to simulate the operation of security system elements for training security staff under normal and emergency conditions.
ESIM_DSN Web-Enabled Distributed Simulation Network
NASA Technical Reports Server (NTRS)
Bedrossian, Nazareth; Novotny, John
2002-01-01
In this paper, the eSim DSN approach to achieving distributed simulation capability using the Internet is presented. With this approach a complete simulation can be assembled from component subsystems that run on different computers. The subsystems interact with each other via the Internet. The distributed simulation uses a hub-and-spoke network topology. It provides the ability to dynamically link simulation subsystem models to different computers as well as the ability to assign a particular model to each computer. A proof-of-concept demonstrator is also presented. The eSim DSN demonstrator can be accessed at http://www.jsc.draper.com/esim, which hosts various examples of Web-enabled simulations.
Simulation framework for intelligent transportation systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ewing, T.; Doss, E.; Hanebutte, U.
1996-10-01
A simulation framework has been developed for a large-scale, comprehensive, scalable simulation of an Intelligent Transportation System (ITS). The simulator is designed for running on parallel computers and distributed (networked) computer systems, but can run on standalone workstations for smaller simulations. The simulator currently models instrumented smart vehicles with in-vehicle navigation units capable of optimal route planning and Traffic Management Centers (TMC). The TMC has probe vehicle tracking capabilities (display position and attributes of instrumented vehicles), and can provide two-way interaction with traffic to provide advisories and link times. Both the in-vehicle navigation module and the TMC feature detailed graphical user interfaces to support human-factors studies. Realistic modeling of variations of the posted driving speed is based on human factors studies that take into consideration weather, road conditions, driver personality and behavior, and vehicle type. The prototype has been developed on a distributed system of networked UNIX computers but is designed to run on parallel computers, such as ANL's IBM SP-2, for large-scale problems. A novel feature of the approach is that vehicles are represented by autonomous computer processes which exchange messages with other processes. The vehicles have a behavior model which governs route selection and driving behavior, and can react to external traffic events much like real vehicles. With this approach, the simulation is scalable to take advantage of emerging massively parallel processor (MPP) systems.
Modelling the spread of innovation in wild birds.
Shultz, Thomas R; Montrey, Marcel; Aplin, Lucy M
2017-06-01
We apply three plausible algorithms in agent-based computer simulations to recent experiments on social learning in wild birds. Although some of the phenomena are simulated by all three learning algorithms, several manifestations of social conformity bias are simulated by only the approximate majority (AM) algorithm, which has roots in chemistry, molecular biology and theoretical computer science. The simulations generate testable predictions and provide several explanatory insights into the diffusion of innovation through a population. The AM algorithm's success raises the possibility of its usefulness in studying group dynamics more generally, in several different scientific domains. Our differential-equation model matches simulation results and provides mathematical insights into the dynamics of these algorithms. © 2017 The Author(s).
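The approximate majority protocol itself can be stated in a few lines; the sketch below is a minimal agent-based rendering of its canonical three-state form (a simplification, not the authors' simulation code): when agents holding opposing options meet, one becomes undecided, and undecided agents adopt the option of a decided partner, which drives the whole population toward the initial majority.

# Canonical three-state approximate majority protocol (illustrative only).
# Population size, initial split, and number of interactions are assumptions.
import random

def am_step(agents):
    i, j = random.sample(range(len(agents)), 2)    # random interacting pair
    a, b = agents[i], agents[j]
    if {a, b} == {"A", "B"}:
        agents[j] = "U"                            # conflict: responder becomes undecided
    elif a == "U" and b in ("A", "B"):
        agents[i] = b                              # undecided adopts partner's option
    elif b == "U" and a in ("A", "B"):
        agents[j] = a

random.seed(1)
agents = ["A"] * 60 + ["B"] * 40                   # 60/40 initial split
for _ in range(20000):
    am_step(agents)
print(agents.count("A"), agents.count("B"), agents.count("U"))

Run repeatedly, the population almost always converges on the option that started in the majority, which is the conformity-like behaviour the paper exploits.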
Using Computer Games to Train Information Warfare Teams
2004-01-01
...responses they will experience on real missions is crucial. 3D computer games have proved themselves to be highly effective in engaging players... motivationally and emotionally. This effort, therefore, uses gaming technology to provide realistic simulations. These games are augmented with...
Computationally efficient multibody simulations
NASA Technical Reports Server (NTRS)
Ramakrishnan, Jayant; Kumar, Manoj
1994-01-01
Computationally efficient approaches to the solution of the dynamics of multibody systems are presented in this work. The computational efficiency is derived from both the algorithmic and implementational standpoint. Order(n) approaches provide a new formulation of the equations of motion eliminating the assembly and numerical inversion of a system mass matrix as required by conventional algorithms. Computational efficiency is also gained in the implementation phase by the symbolic processing and parallel implementation of these equations. Comparison of this algorithm with existing multibody simulation programs illustrates the increased computational efficiency.
Computer simulation of the NASA water vapor electrolysis reactor
NASA Technical Reports Server (NTRS)
Bloom, A. M.
1974-01-01
The water vapor electrolysis (WVE) reactor is a spacecraft waste reclamation system for extended-mission manned spacecraft. The WVE reactor's raw material is water, its product oxygen. A computer simulation of the WVE operational processes provided the data required for an optimal design of the WVE unit. The simulation process was implemented with the aid of a FORTRAN IV routine.
Simulated Sustainable Societies: Students' Reflections on Creating Future Cities in Computer Games
ERIC Educational Resources Information Center
Nilsson, Elisabet M.; Jakobsson, Anders
2011-01-01
The empirical study, in this article, involved 42 students (ages 14-15), who used the urban simulation computer game SimCity 4 to create models of sustainable future cities. The aim was to explore in what ways the simulated "real" worlds provided by this game could be a potential facilitator for science learning contexts. The topic investigated is…
Computer animation challenges for computational fluid dynamics
NASA Astrophysics Data System (ADS)
Vines, Mauricio; Lee, Won-Sook; Mavriplis, Catherine
2012-07-01
Computer animation requirements differ from those of traditional computational fluid dynamics (CFD) investigations in that visual plausibility and rapid frame update rates trump physical accuracy. We present an overview of the main techniques for fluid simulation in computer animation, starting with Eulerian grid approaches, the Lattice Boltzmann method, Fourier transform techniques and Lagrangian particle introduction. Adaptive grid methods, precomputation of results for model reduction, parallelisation and computation on graphical processing units (GPUs) are reviewed in the context of accelerating simulation computations for animation. A survey of current specific approaches for the application of these techniques to the simulation of smoke, fire, water, bubbles, mixing, phase change and solid-fluid coupling is also included. Adding plausibility to results through particle introduction, turbulence detail and concentration on regions of interest by level set techniques has elevated the degree of accuracy and realism of recent animations. Basic approaches are described here. Techniques to control the simulation to produce a desired visual effect are also discussed. Finally, some references to rendering techniques and haptic applications are mentioned to provide the reader with a complete picture of the challenges of simulating fluids in computer animation.
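As one concrete example of the Eulerian grid techniques surveyed, the sketch below implements a single semi-Lagrangian advection step of the kind favoured in animation because it stays stable at large time steps; the grid size, velocity field, and boundary handling are illustrative assumptions, not code from the surveyed papers.

# One semi-Lagrangian advection step on a 2D Eulerian grid (illustrative).
import numpy as np

def advect(q, u, v, dt, dx):
    """Advect scalar field q through velocity field (u, v) by tracing each
    grid point backwards and bilinearly interpolating q at the departure point."""
    ny, nx = q.shape
    jj, ii = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    # Backtrace departure points (in grid units), clamped to the domain.
    x = np.clip(ii - dt * u / dx, 0, nx - 1.001)
    y = np.clip(jj - dt * v / dx, 0, ny - 1.001)
    x0, y0 = x.astype(int), y.astype(int)
    fx, fy = x - x0, y - y0
    # Bilinear interpolation of q at the departure points.
    return ((1 - fx) * (1 - fy) * q[y0, x0] + fx * (1 - fy) * q[y0, x0 + 1]
            + (1 - fx) * fy * q[y0 + 1, x0] + fx * fy * q[y0 + 1, x0 + 1])

# Toy use: a blob of smoke density carried by a uniform rightward flow.
q = np.zeros((64, 64)); q[28:36, 10:18] = 1.0
u = np.full_like(q, 1.0); v = np.zeros_like(q)
for _ in range(30):
    q = advect(q, u, v, dt=0.5, dx=1.0)

The scheme trades some numerical diffusion for unconditional stability, which is exactly the plausibility-over-accuracy trade-off described above.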
Fast Learning for Immersive Engagement in Energy Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bush, Brian W; Bugbee, Bruce; Gruchalla, Kenny M
The fast computation that is critical for immersive engagement with, and learning from, energy simulations would be furthered by developing a general method for creating rapidly computed simplified versions of NREL's computation-intensive energy simulations. Created using machine learning techniques, these 'reduced form' simulations can provide statistically sound estimates of the results of the full simulations at a fraction of the computational cost, with response times (typically less than one minute of wall-clock time) suitable for real-time human-in-the-loop design and analysis. Additionally, uncertainty quantification techniques can document the accuracy of the approximate models and their domain of validity. Approximation methods are applicable to a wide range of computational models, including supply-chain models, electric power grid simulations, and building models. These reduced-form representations cannot replace or re-implement existing simulations, but instead supplement them by enabling rapid scenario design and quality assurance for large sets of simulations. We present an overview of the framework and methods we have implemented for developing these reduced-form representations.
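The general pattern, fitting a fast statistical emulator to input/output pairs harvested from a limited number of full simulation runs, can be sketched as follows; the toy function standing in for an expensive NREL simulation and the choice of a random-forest regressor are illustrative assumptions, since the abstract does not tie the framework to one specific learner.

# Hedged sketch of a "reduced form" emulator fitted to full-simulation samples.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def full_simulation(x):
    """Stand-in for an expensive simulation: minutes in reality, instant here."""
    return np.sin(x[0]) + 0.5 * x[1] ** 2 + 0.1 * x[0] * x[1]

rng = np.random.default_rng(0)
X_train = rng.uniform(-2, 2, size=(200, 2))            # sampled scenario inputs
y_train = np.array([full_simulation(x) for x in X_train])

emulator = RandomForestRegressor(n_estimators=200, random_state=0)
emulator.fit(X_train, y_train)

# The emulator now answers "what-if" queries in milliseconds; its accuracy can
# be checked against held-out full-simulation runs, in the spirit of the
# uncertainty quantification mentioned above.
X_test = rng.uniform(-2, 2, size=(50, 2))
y_test = np.array([full_simulation(x) for x in X_test])
rmse = np.sqrt(np.mean((emulator.predict(X_test) - y_test) ** 2))
print("emulator RMSE on held-out runs:", rmse)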
Recent advances in computational methodology for simulation of mechanical circulatory assist devices
Marsden, Alison L.; Bazilevs, Yuri; Long, Christopher C.; Behr, Marek
2014-01-01
Ventricular assist devices (VADs) provide mechanical circulatory support to offload the work of one or both ventricles during heart failure. They are used in the clinical setting as destination therapy, as bridge to transplant, or more recently as bridge to recovery to allow for myocardial remodeling. Recent developments in computational simulation allow for detailed assessment of VAD hemodynamics for device design and optimization for both children and adults. Here, we provide a focused review of the recent literature on finite element methods and optimization for VAD simulations. As VAD designs typically fall into two categories, pulsatile and continuous flow devices, we separately address computational challenges of both types of designs, and the interaction with the circulatory system with three representative case studies. In particular, we focus on recent advancements in finite element methodology that has increased the fidelity of VAD simulations. We outline key challenges, which extend to the incorporation of biological response such as thrombosis and hemolysis, as well as shape optimization methods and challenges in computational methodology. PMID:24449607
ASC FY17 Implementation Plan, Rev. 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamilton, P. G.
The Stockpile Stewardship Program (SSP) is an integrated technical program for maintaining the safety, surety, and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational capabilities to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources that support annual stockpile assessment and certification, study advanced nuclear weapons design and manufacturing processes, analyze accident scenarios and weapons aging, and provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balance of resources, including technical staff, hardware, simulation software, and computer science solutions.
NASA Technical Reports Server (NTRS)
Fijany, Amir (Inventor); Bejczy, Antal K. (Inventor)
1993-01-01
This is a real-time robotic controller and simulator (RRCS), a MIMD-SIMD parallel architecture for interfacing with an external host computer and providing a high degree of parallelism in computations for robotic control and simulation. It includes a host processor for receiving instructions from the external host computer and for transmitting answers to the external host computer. There are a plurality of SIMD microprocessors, each SIMD processor being a SIMD parallel processor capable of exploiting fine-grain parallelism and further being able to operate asynchronously to form a MIMD architecture. Each SIMD processor comprises a SIMD architecture capable of performing two matrix-vector operations in parallel while fully exploiting parallelism in each operation. There is a system bus connecting the host processor to the plurality of SIMD microprocessors and a common clock providing a continuous sequence of clock pulses. There is also a ring structure interconnecting the plurality of SIMD microprocessors and connected to the clock for providing the clock pulses to the SIMD microprocessors and for providing a path for the flow of data and instructions between the SIMD microprocessors. The host processor includes logic for controlling the RRCS by interpreting instructions sent by the external host computer, decomposing the instructions into a series of computations to be performed by the SIMD microprocessors, using the system bus to distribute associated data among the SIMD microprocessors, and initiating activity of the SIMD microprocessors to perform the computations on the data by procedure call.
TH-E-18A-01: Developments in Monte Carlo Methods for Medical Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Badal, A; Zbijewski, W; Bolch, W
Monte Carlo simulation methods are widely used in medical physics research and are starting to be implemented in clinical applications such as radiation therapy planning systems. Monte Carlo simulations offer the capability to accurately estimate quantities of interest that are challenging to measure experimentally while taking into account the realistic anatomy of an individual patient. Traditionally, practical application of Monte Carlo simulation codes in diagnostic imaging was limited by the need for large computational resources or long execution times. However, recent advancements in high-performance computing hardware, combined with a new generation of Monte Carlo simulation algorithms and novel postprocessing methods, are allowing for the computation of relevant imaging parameters of interest such as patient organ doses and scatter-to-primary ratios in radiographic projections in just a few seconds using affordable computational resources. Programmable Graphics Processing Units (GPUs), for example, provide a convenient, affordable platform for parallelized Monte Carlo executions that yield simulation rates on the order of 10^7 x-rays/s. Even with GPU acceleration, however, Monte Carlo simulation times can be prohibitive for routine clinical practice. To reduce simulation times further, variance reduction techniques can be used to alter the probabilistic models underlying the x-ray tracking process, resulting in lower variance in the results without biasing the estimates. Other complementary strategies for further reductions in computation time are denoising of the Monte Carlo estimates and estimating (scoring) the quantity of interest at a sparse set of sampling locations (e.g. at a small number of detector pixels in a scatter simulation) followed by interpolation. Beyond reduction of the computational resources required for performing Monte Carlo simulations in medical imaging, the use of accurate representations of patient anatomy is crucial to the virtual generation of medical images and accurate estimation of radiation dose and other imaging parameters. For this, detailed computational phantoms of the patient anatomy must be utilized and implemented within the radiation transport code. Computational phantoms presently come in one of three format types, and in one of four morphometric categories. Format types include stylized (mathematical equation-based), voxel (segmented CT/MR images), and hybrid (NURBS and polygon mesh surfaces). Morphometric categories include reference (small library of phantoms by age at 50th height/weight percentile), patient-dependent (larger library of phantoms at various combinations of height/weight percentiles), patient-sculpted (phantoms altered to match the patient's unique outer body contour), and finally, patient-specific (an exact representation of the patient with respect to both body contour and internal anatomy). The existence and availability of these phantoms represent a very important advance for the simulation of realistic medical imaging applications using Monte Carlo methods. New Monte Carlo simulation codes need to be thoroughly validated before they can be used to perform novel research. Ideally, the validation process would involve comparison of results with those of an experimental measurement, but accurate replication of experimental conditions can be very challenging. It is very common to validate new Monte Carlo simulations by replicating previously published simulation results of similar experiments.
This process, however, is commonly problematic due to the lack of sufficient information in published reports of previous work to replicate the simulation in detail. To aid in this process, the AAPM Task Group 195 prepared a report in which six different imaging research experiments commonly performed using Monte Carlo simulations are described and their results provided. The simulation conditions of all six cases are provided in full detail, with all necessary data on material composition, source, geometry, scoring and other parameters provided. The results of these simulations when performed with the four most common publicly available Monte Carlo packages are also provided in tabular form. The Task Group 195 Report will be useful for researchers needing to validate their Monte Carlo work, and for trainees needing to learn Monte Carlo simulation methods. In this symposium we will review the recent advancements in high-performance computing hardware enabling the reduction in computational resources needed for Monte Carlo simulations in medical imaging. We will review variance reduction techniques commonly applied in Monte Carlo simulations of medical imaging systems and present implementation strategies for efficient combination of these techniques with GPU acceleration. Trade-offs involved in Monte Carlo acceleration by means of denoising and "sparse sampling" will be discussed. A method for rapid scatter correction in cone-beam CT (<5 min/scan) will be presented as an illustration of the simulation speeds achievable with optimized Monte Carlo simulations. We will also discuss the development, availability, and capability of the various combinations of computational phantoms for Monte Carlo simulation of medical imaging systems. Finally, we will review some examples of experimental validation of Monte Carlo simulations and will present the AAPM Task Group 195 Report. Learning Objectives: (1) Describe the advances in hardware available for performing Monte Carlo simulations in high performance computing environments. (2) Explain variance reduction, denoising and sparse sampling techniques available for reduction of computational time needed for Monte Carlo simulations of medical imaging. (3) List and compare the computational anthropomorphic phantoms currently available for more accurate assessment of medical imaging parameters in Monte Carlo simulations. (4) Describe experimental methods used for validation of Monte Carlo simulations in medical imaging. (5) Describe the AAPM Task Group 195 Report and its use for validation and teaching of Monte Carlo simulations in medical imaging.
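A toy example of the variance-reduction principle mentioned above, altering the sampling distribution without biasing the estimate, is sketched below using importance sampling on a one-dimensional integrand that stands in for a rarely scored tally; the integrand and proposal density are illustrative assumptions, not taken from the Task Group 195 cases.

# Importance sampling as a minimal variance-reduction illustration.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Quantity of interest: E[f(x)] for x ~ Uniform(0, 1), with f peaked near
# x = 1 (analogous to histories that rarely reach the scoring region).
f = lambda x: np.exp(5.0 * (x - 1.0))

# Plain Monte Carlo estimator.
x = rng.uniform(0.0, 1.0, N)
plain = f(x)

# Importance sampling: draw from q(x) = 4 e^(4x) / (e^4 - 1), which places
# most samples near x = 1, and reweight by p/q to keep the estimator unbiased.
u = rng.uniform(0.0, 1.0, N)
xq = np.log(1.0 + u * (np.e ** 4 - 1.0)) / 4.0     # inverse-CDF sampling from q
w = (np.e ** 4 - 1.0) / (4.0 * np.exp(4.0 * xq))   # weight p(x)/q(x)
weighted = f(xq) * w

for name, est in (("plain MC", plain), ("importance sampling", weighted)):
    print(f"{name}: mean {est.mean():.5f}, estimator std {est.std() / np.sqrt(N):.6f}")

Both estimators target the same expected value (about 0.199 here), but the reweighted one reaches it with a markedly smaller standard error for the same number of histories.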
Computer simulation: A modern day crystal ball?
NASA Technical Reports Server (NTRS)
Sham, Michael; Siprelle, Andrew
1994-01-01
It has long been the desire of managers to be able to look into the future and predict the outcome of decisions. With the advent of computer simulation and the tremendous capability provided by personal computers, that desire can now be realized. This paper presents an overview of computer simulation and modeling, and discusses the capabilities of Extend. Extend is an iconic-driven Macintosh-based software tool that brings the power of simulation to the average computer user. An example of an Extend based model is presented in the form of the Space Transportation System (STS) Processing Model. The STS Processing Model produces eight shuttle launches per year, yet it takes only about ten minutes to run. In addition, statistical data such as facility utilization, wait times, and processing bottlenecks are produced. The addition or deletion of resources, such as orbiters or facilities, can be easily modeled and their impact analyzed. Through the use of computer simulation, it is possible to look into the future to see the impact of today's decisions.
Verifying a computational method for predicting extreme ground motion
Harris, R.A.; Barall, M.; Andrews, D.J.; Duan, B.; Ma, S.; Dunham, E.M.; Gabriel, A.-A.; Kaneko, Y.; Kase, Y.; Aagaard, Brad T.; Oglesby, D.D.; Ampuero, J.-P.; Hanks, T.C.; Abrahamson, N.
2011-01-01
In situations where seismological data is rare or nonexistent, computer simulations may be used to predict ground motions caused by future earthquakes. This is particularly practical in the case of extreme ground motions, where engineers of special buildings may need to design for an event that has not been historically observed but which may occur in the far-distant future. Once the simulations have been performed, however, they still need to be tested. The SCEC-USGS dynamic rupture code verification exercise provides a testing mechanism for simulations that involve spontaneous earthquake rupture. We have performed this examination for the specific computer code that was used to predict maximum possible ground motion near Yucca Mountain. Our SCEC-USGS group exercises have demonstrated that the specific computer code that was used for the Yucca Mountain simulations produces similar results to those produced by other computer codes when tackling the same science problem. We also found that the 3D ground motion simulations produced smaller ground motions than the 2D simulations.
Drawert, Brian; Trogdon, Michael; Toor, Salman; Petzold, Linda; Hellander, Andreas
2017-01-01
Computational experiments using spatial stochastic simulations have led to important new biological insights, but they require specialized tools and a complex software stack, as well as large and scalable compute and data analysis resources due to the large computational cost associated with Monte Carlo computational workflows. The complexity of setting up and managing a large-scale distributed computation environment to support productive and reproducible modeling can be prohibitive for practitioners in systems biology. This results in a barrier to the adoption of spatial stochastic simulation tools, effectively limiting the type of biological questions addressed by quantitative modeling. In this paper, we present PyURDME, a new, user-friendly spatial modeling and simulation package, and MOLNs, a cloud computing appliance for distributed simulation of stochastic reaction-diffusion models. MOLNs is based on IPython and provides an interactive programming platform for development of sharable and reproducible distributed parallel computational experiments. PMID:28190948
A School Finance Computer Simulation Model
ERIC Educational Resources Information Center
Boardman, Gerald R.
1974-01-01
Presents a description of the computer simulation model developed by the National Educational Finance Project for use by States in planning and evaluating alternative approaches for State support programs. Provides a general introduction to the model, a program operation overview, a sample run, and some conclusions. (Author/WM)
Advanced Simulation and Computing Fiscal Year 14 Implementation Plan, Rev. 0.5
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meisner, Robert; McCoy, Michel; Archer, Bill
2013-09-11
The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources that support annual stockpile assessment and certification, study advanced nuclear weapons design and manufacturing processes, analyze accident scenarios and weapons aging, and provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balance of resources, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is now focused on increasing predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (sufficient resolution, dimensionality, and scientific details), quantify critical margins and uncertainties, and resolve increasingly difficult analyses needed for the SSP. Moreover, ASC's business model is integrated and focused on requirements-driven products that address long-standing technical questions related to enhanced predictive capability in the simulation tools.
The Australian Computational Earth Systems Simulator
NASA Astrophysics Data System (ADS)
Mora, P.; Muhlhaus, H.; Lister, G.; Dyskin, A.; Place, D.; Appelbe, B.; Nimmervoll, N.; Abramson, D.
2001-12-01
Numerical simulation of the physics and dynamics of the entire earth system offers an outstanding opportunity for advancing earth system science and technology but represents a major challenge due to the range of scales and physical processes involved, as well as the magnitude of the software engineering effort required. However, new simulation and computer technologies are bringing this objective within reach. Under a special competitive national funding scheme to establish new Major National Research Facilities (MNRF), the Australian government together with a consortium of Universities and research institutions have funded construction of the Australian Computational Earth Systems Simulator (ACcESS). The Simulator or computational virtual earth will provide the research infrastructure to the Australian earth systems science community required for simulations of dynamical earth processes at scales ranging from microscopic to global. It will consist of thematic supercomputer infrastructure and an earth systems simulation software system. The Simulator models and software will be constructed over a five year period by a multi-disciplinary team of computational scientists, mathematicians, earth scientists, civil engineers and software engineers. The construction team will integrate numerical simulation models (3D discrete elements/lattice solid model, particle-in-cell large deformation finite-element method, stress reconstruction models, multi-scale continuum models etc) with geophysical, geological and tectonic models, through advanced software engineering and visualization technologies. When fully constructed, the Simulator aims to provide the software and hardware infrastructure needed to model solid earth phenomena including global scale dynamics and mineralisation processes, crustal scale processes including plate tectonics, mountain building, interacting fault system dynamics, and micro-scale processes that control the geological, physical and dynamic behaviour of earth systems. ACcESS represents a part of Australia's contribution to the APEC Cooperation for Earthquake Simulation (ACES) international initiative. Together with other national earth systems science initiatives including the Japanese Earth Simulator and US General Earthquake Model projects, ACcESS aims to provide a driver for scientific advancement and technological breakthroughs including: quantum leaps in understanding of earth evolution at global, crustal, regional and microscopic scales; new knowledge of the physics of crustal fault systems required to underpin the grand challenge of earthquake prediction; new understanding and predictive capabilities of geological processes such as tectonics and mineralisation.
NASA's Information Power Grid: Large Scale Distributed Computing and Data Management
NASA Technical Reports Server (NTRS)
Johnston, William E.; Vaziri, Arsi; Hinke, Tom; Tanner, Leigh Ann; Feiereisen, William J.; Thigpen, William; Tang, Harry (Technical Monitor)
2001-01-01
Large-scale science and engineering are done through the interaction of people, heterogeneous computing resources, information systems, and instruments, all of which are geographically and organizationally dispersed. The overall motivation for Grids is to facilitate the routine interactions of these resources in order to support large-scale science and engineering. Multi-disciplinary simulations provide a good example of a class of applications that are very likely to require aggregation of widely distributed computing, data, and intellectual resources. Such simulations (e.g., whole-system aircraft simulation and whole-system living cell simulation) require integrating applications and data that are developed by different teams of researchers, frequently in different locations. The research teams are the only ones that have the expertise to maintain and improve the simulation code and/or the body of experimental data that drives the simulations. This results in an inherently distributed computing and data management environment.
Understanding Islamist political violence through computational social simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watkins, Jennifer H; Mackerrow, Edward P; Patelli, Paolo G
Understanding the process that enables political violence is of great value in reducing the future demand for and support of violent opposition groups. Methods are needed that allow alternative scenarios and counterfactuals to be scientifically researched. Computational social simulation shows promise in developing 'computer experiments' that would be unfeasible or unethical in the real world. Additionally, the process of modeling and simulation reveals and challenges assumptions that may not be noted in theories, exposes areas where data is not available, and provides a rigorous, repeatable, and transparent framework for analyzing the complex dynamics of political violence. This paper demonstrates the computational modeling process using two simulation techniques: system dynamics and agent-based modeling. The benefits and drawbacks of both techniques are discussed. In developing these social simulations, we discovered that the social science concepts and theories needed to accurately simulate the associated psychological and social phenomena were lacking.
LOS selective fading and AN/FRC-170(V) radio hybrid computer simulation phase A report
NASA Astrophysics Data System (ADS)
Klukis, M. K.; Lyon, T. I.; Walker, R.
1981-09-01
This report documents results of the first phase of modeling, simulation, and study of the dual-diversity AN/FRC-170(V) radio and the frequency-selective fading line-of-sight channel. Both hybrid computer and circuit technologies were used to develop a fast, accurate, and flexible simulation tool to investigate changes and proposed improvements to the design of the AN/FRC-170(V) radio. In addition to the simulation study, a remote hybrid computer terminal was provided to DCEC for interactive study of the modeled radio and channel. Simulated performance of the radio for Rayleigh channels, line-of-sight two-ray channels, and additive noise is included in the report.
Computer simulation of space station computer steered high gain antenna
NASA Technical Reports Server (NTRS)
Beach, S. W.
1973-01-01
The mathematical modeling and programming of a complete simulation program for a space station computer-steered high gain antenna are described. The program provides for reading input data cards, numerically integrating up to 50 first order differential equations, and monitoring up to 48 variables on printed output and on plots. The program system consists of a high gain antenna, an antenna gimbal control system, an on board computer, and the environment in which all are to operate.
NASA Technical Reports Server (NTRS)
Gibson, Jim; Jordan, Joe; Grant, Terry
1990-01-01
Local Area Network Extensible Simulator (LANES) computer program provides method for simulating performance of high-speed local-area-network (LAN) technology. Developed as design and analysis software tool for networking computers on board proposed Space Station. Load, network, link, and physical layers of layered network architecture all modeled. Mathematically models according to different lower-layer protocols: Fiber Distributed Data Interface (FDDI) and Star*Bus. Written in FORTRAN 77.
Efficient Computation Of Manipulator Inertia Matrix
NASA Technical Reports Server (NTRS)
Fijany, Amir; Bejczy, Antal K.
1991-01-01
Improved method for computation of manipulator inertia matrix developed, based on concept of spatial inertia of composite rigid body. Required for implementation of advanced dynamic-control schemes as well as dynamic simulation of manipulator motion. Motivated by increasing demand for fast algorithms to provide real-time control and simulation capability and, particularly, need for faster-than-real-time simulation capability, required in many anticipated space teleoperation applications.
NASA Technical Reports Server (NTRS)
Smith, Jeffrey
2003-01-01
The Bio-Visualization, Imaging and Simulation (BioVIS) Technology Center at NASA's Ames Research Center is dedicated to developing and applying advanced visualization, computation and simulation technologies to support NASA Space Life Sciences research and the objectives of the Fundamental Biology Program. Research ranges from high-resolution 3D cell imaging and structure analysis, virtual environment simulation of fine sensory-motor tasks, computational neuroscience and biophysics to biomedical/clinical applications. Computer simulation research focuses on the development of advanced computational tools for astronaut training and education. Virtual Reality (VR) and Virtual Environment (VE) simulation systems have become important training tools in many fields from flight simulation to, more recently, surgical simulation. The type and quality of training provided by these computer-based tools ranges widely, but the value of real-time VE computer simulation as a method of preparing individuals for real-world tasks is well established. Astronauts routinely use VE systems for various training tasks, including Space Shuttle landings, robot arm manipulations and extravehicular activities (space walks). Currently, there are no VE systems to train astronauts for basic and applied research experiments which are an important part of many missions. The Virtual Glovebox (VGX) is a prototype VE system for real-time physically-based simulation of the Life Sciences Glovebox where astronauts will perform many complex tasks supporting research experiments aboard the International Space Station. The VGX consists of a physical display system utilizing dual LCD projectors and circular polarization to produce a desktop-sized 3D virtual workspace. Physically-based modeling tools (Arachi Inc.) provide real-time collision detection, rigid body dynamics, physical properties and force-based controls for objects. The human-computer interface consists of two magnetic tracking devices (Ascension Inc.) attached to instrumented gloves (Immersion Inc.) which co-locate the user's hands with hand/forearm representations in the virtual workspace. Force-feedback is possible in a work volume defined by a Phantom Desktop device (SensAble Inc.). Graphics are written in OpenGL. The system runs on a 2.2 GHz Pentium 4 PC. The prototype VGX provides astronauts and support personnel with a real-time physically-based VE system to simulate basic research tasks both on Earth and in the microgravity of Space. The immersive virtual environment of the VGX also makes it a useful tool for virtual engineering applications including CAD development, procedure design and simulation of human-system systems in a desktop-sized work volume.
Bibliography. Computer-Oriented Projects, 1987.
ERIC Educational Resources Information Center
Smith, Richard L., Comp.
1988-01-01
Provides an annotated list of references on computer-oriented projects. Includes information on computers; hands-on versus simulations; games; instruction; students' attitudes and learning styles; artificial intelligence; tutoring; and application of spreadsheets. (RT)
Evaluation of Complex Human Performance: The Promise of Computer-Based Simulation
ERIC Educational Resources Information Center
Newsom, Robert S.; And Others
1978-01-01
For the training and placement of professional workers, multiple-choice instruments are the norm for wide-scale measurement and evaluation efforts. These instruments contain fundamental problems. Computer-based management simulations may provide solutions to these problems, appear scoreable and reliable, offer increased validity, and are better…
Modelling rollover behaviour of excavator-based forest machines
M.W. Veal; S.E. Taylor; Robert B. Rummer
2003-01-01
This poster presentation provides results from analytical and computer simulation models of rollover behaviour of hydraulic excavators. These results are being used as input to the operator protective structure standards development process. Results from rigid body mechanics and computer simulation methods agree well with field rollover test data. These results show...
Computer-Simulated Psychotherapy as an Aid in Teaching Clinical Psychology.
ERIC Educational Resources Information Center
Suler, John R.
1987-01-01
Describes how Eliza, a widely known computer program that simulates the responses of a psychotherapist, can be used as a teaching aid in undergraduate clinical psychology classes. Provides information on conducting the exercise, integrating it into the course syllabus, and evaluating its impact on students. (JDH)
Student Learning Opportunities in Traditional and Computer-Mediated Internships
ERIC Educational Resources Information Center
Bayerlein, Leopold; Jeske, Debora
2018-01-01
Purpose: The purpose of this paper is to provide a student learning outcome focussed assessment of the benefits and limitations of traditional internships, e-internships, and simulated internships to evaluate the potential of computer-mediated internships (CMIs) (e-internships and simulated internships) within higher education from a student…
Shen, Wenfeng; Wei, Daming; Xu, Weimin; Zhu, Xin; Yuan, Shizhong
2010-10-01
Biological computations like electrocardiological modelling and simulation usually require high-performance computing environments. This paper introduces an implementation of parallel computation for computer simulation of electrocardiograms (ECGs) in a personal computer environment with an Intel Core 2 Quad Q6600 CPU and a GeForce 8800GT GPU, with software support from OpenMP and CUDA. It was tested in three parallelization device setups: (a) a four-core CPU without a general-purpose GPU, (b) a general-purpose GPU plus one CPU core, and (c) a four-core CPU plus a general-purpose GPU. To effectively take advantage of a multi-core CPU and a general-purpose GPU, an algorithm based on load-prediction dynamic scheduling was developed and applied to setup (c). In the simulation with 1600 time steps, the speedup of the parallel computation as compared to the serial computation was 3.9 in setup (a), 16.8 in setup (b), and 20.0 in setup (c). This study demonstrates that a current PC with a multi-core CPU and a general-purpose GPU provides a good environment for parallel computations in biological modelling and simulation studies. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.
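The load-prediction idea can be illustrated independently of the ECG model: each block of work is split between devices in proportion to the throughput each achieved on the previous block. The sketch below is a schematic Python rendering under assumed device speeds; the paper's actual scheduler lives inside an OpenMP/CUDA code and runs the devices concurrently rather than in sequence as here.

# Schematic load-prediction dynamic scheduling (device names and speeds assumed).
import time

def predict_split(history, total_work):
    """Split total_work proportionally to each device's measured rate."""
    rates = {dev: work / max(elapsed, 1e-9) for dev, (work, elapsed) in history.items()}
    total_rate = sum(rates.values())
    return {dev: int(round(total_work * r / total_rate)) for dev, r in rates.items()}

def run_on_device(dev, work):
    """Stand-in for integrating `work` cells on a device; sleep imitates compute."""
    per_item = {"cpu_4core": 4e-6, "gpu": 1e-6}[dev]   # assumed per-cell cost
    t0 = time.perf_counter()
    time.sleep(per_item * work)
    return work, time.perf_counter() - t0

# Warm-up block: split work evenly and measure actual throughput.
history = {dev: run_on_device(dev, 50_000) for dev in ("cpu_4core", "gpu")}

# Subsequent blocks: re-balance using the latest measurements.
for block in range(3):
    split = predict_split(history, total_work=100_000)
    history = {dev: run_on_device(dev, n) for dev, n in split.items()}
    print(f"block {block}: {split}")

Because the split tracks measured rates, a device that slows down (or speeds up) automatically receives a smaller (or larger) share of the next block.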
Parallel, Asynchronous Executive (PAX): System concepts, facilities, and architecture
NASA Technical Reports Server (NTRS)
Jones, W. H.
1983-01-01
The Parallel, Asynchronous Executive (PAX) is a software operating system simulation that allows many computers to work on a single problem at the same time. PAX is currently implemented on a UNIVAC 1100/42 computer system. Independent UNIVAC runstreams are used to simulate independent computers. Data are shared among independent UNIVAC runstreams through shared mass-storage files. PAX has achieved the following: (1) applied several computing processes simultaneously to a single, logically unified problem; (2) resolved most parallel processor conflicts by careful work assignment; (3) resolved by means of worker requests to PAX all conflicts not resolved by work assignment; (4) provided fault isolation and recovery mechanisms to meet the problems of an actual parallel, asynchronous processing machine. Additionally, one real-life problem has been constructed for the PAX environment. This is CASPER, a collection of aerodynamic and structural dynamic problem simulation routines. CASPER is not discussed in this report except to provide examples of parallel-processing techniques.
A heterogeneous computing environment for simulating astrophysical fluid flows
NASA Technical Reports Server (NTRS)
Cazes, J.
1994-01-01
In the Concurrent Computing Laboratory in the Department of Physics and Astronomy at Louisiana State University we have constructed a heterogeneous computing environment that permits us to routinely simulate complicated three-dimensional fluid flows and to readily visualize the results of each simulation via three-dimensional animation sequences. An 8192-node MasPar MP-1 computer with 0.5 GBytes of RAM provides 250 MFlops of execution speed for our fluid flow simulations. Utilizing the parallel virtual machine (PVM) language, at periodic intervals data is automatically transferred from the MP-1 to a cluster of workstations where individual three-dimensional images are rendered for inclusion in a single animation sequence. Work is underway to replace executions on the MP-1 with simulations performed on the 512-node CM-5 at NCSA and to simultaneously gain access to more potent volume rendering workstations.
Adjoint-Based Aerodynamic Design of Complex Aerospace Configurations
NASA Technical Reports Server (NTRS)
Nielsen, Eric J.
2016-01-01
An overview of twenty years of adjoint-based aerodynamic design research at NASA Langley Research Center is presented. Adjoint-based algorithms provide a powerful tool for efficient sensitivity analysis of complex large-scale computational fluid dynamics (CFD) simulations. Unlike alternative approaches for which computational expense generally scales with the number of design parameters, adjoint techniques yield sensitivity derivatives of a simulation output with respect to all input parameters at the cost of a single additional simulation. With modern large-scale CFD applications often requiring millions of compute hours for a single analysis, the efficiency afforded by adjoint methods is critical in realizing a computationally tractable design optimization capability for such applications.
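A toy linear example makes the cost argument concrete: for a "simulation" A(p)u = b with scalar output J = g^T u, a single additional (adjoint) solve yields the derivative of J with respect to every parameter, whereas finite differencing needs one extra solve per parameter. The matrices and parameter dependence below are invented for illustration and are unrelated to any NASA code.

# Adjoint sensitivity for a small parameterized linear system (illustrative).
import numpy as np

n, n_params = 5, 3
rng = np.random.default_rng(0)
A0 = rng.random((n, n)) + n * np.eye(n)              # baseline operator
dA = [rng.random((n, n)) for _ in range(n_params)]   # dA/dp_k, fixed matrices
b = rng.random(n)
g = rng.random(n)

def A(p):
    return A0 + sum(pk * dAk for pk, dAk in zip(p, dA))

p = np.array([0.3, -0.2, 0.1])
u = np.linalg.solve(A(p), b)                         # forward "simulation"
J = g @ u

# One adjoint solve A(p)^T lambda = g gives dJ/dp_k = -lambda^T (dA/dp_k) u
# for all parameters at once.
lam = np.linalg.solve(A(p).T, g)
grad_adjoint = np.array([-lam @ dAk @ u for dAk in dA])

# Finite-difference check: one extra forward solve per parameter.
eps = 1e-6
grad_fd = np.array([
    (g @ np.linalg.solve(A(p + eps * np.eye(n_params)[k]), b) - J) / eps
    for k in range(n_params)
])
print(grad_adjoint)
print(grad_fd)   # the two gradients agree closely

With millions of design parameters and expensive forward solves, this one-extra-solve property is what makes adjoint-based optimization tractable for large CFD problems.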
NASA Technical Reports Server (NTRS)
Yanosy, James L.
1988-01-01
This manual describes how to use the Emulation Simulation Computer Model (ESCM). Based on G189A, ESCM computes the transient performance of a Space Station atmospheric revitalization subsystem (ARS) with CO2 removal provided by a solid amine, water-desorbed subsystem called SAWD. Many performance parameters are computed, some of which are cabin CO2 partial pressure, relative humidity, temperature, O2 partial pressure, and dew point. The program allows the user to simulate various possible combinations of man loading, metabolic profiles, cabin volumes, and certain hypothesized failures that could occur.
Mohiuddin, Syed; Busby, John; Savović, Jelena; Richards, Alison; Northstone, Kate; Hollingworth, William; Donovan, Jenny L; Vasilakis, Christos
2017-01-01
Objectives: Overcrowding in the emergency department (ED) is common in the UK as in other countries worldwide. Computer simulation is one approach used for understanding the causes of ED overcrowding and assessing the likely impact of changes to the delivery of emergency care. However, little is known about the usefulness of computer simulation for analysis of ED patient flow. We undertook a systematic review to investigate the different computer simulation methods and their contribution for analysis of patient flow within EDs in the UK. Methods: We searched eight bibliographic databases (MEDLINE, EMBASE, COCHRANE, WEB OF SCIENCE, CINAHL, INSPEC, MATHSCINET and ACM DIGITAL LIBRARY) from date of inception until 31 March 2016. Studies were included if they used a computer simulation method to capture patient progression within the ED of an established UK National Health Service hospital. Studies were summarised in terms of simulation method, key assumptions, input and output data, conclusions drawn and implementation of results. Results: Twenty-one studies met the inclusion criteria. Of these, 19 used discrete event simulation and 2 used system dynamics models. The purpose of many of these studies (n=16; 76%) centred on service redesign. Seven studies (33%) provided no details about the ED being investigated. Most studies (n=18; 86%) used specific hospital models of ED patient flow. Overall, the reporting of underlying modelling assumptions was poor. Nineteen studies (90%) considered patient waiting or throughput times as the key outcome measure. Twelve studies (57%) reported some involvement of stakeholders in the simulation study. However, only three studies (14%) reported on the implementation of changes supported by the simulation. Conclusions: We found that computer simulation can provide a means to pretest changes to ED care delivery before implementation in a safe and efficient manner. However, the evidence base is small and poorly developed. There are some methodological, data, stakeholder, implementation and reporting issues, which must be addressed by future studies. PMID:28487459
Construction of dynamic stochastic simulation models using knowledge-based techniques
NASA Technical Reports Server (NTRS)
Williams, M. Douglas; Shiva, Sajjan G.
1990-01-01
Over the past three decades, computer-based simulation models have proven themselves to be cost-effective alternatives to the more structured deterministic methods of systems analysis. During this time, many techniques, tools and languages for constructing computer-based simulation models have been developed. More recently, advances in knowledge-based system technology have led many researchers to note the similarities between knowledge-based programming and simulation technologies and to investigate the potential application of knowledge-based programming techniques to simulation modeling. The integration of conventional simulation techniques with knowledge-based programming techniques is discussed to provide a development environment for constructing knowledge-based simulation models. A comparison of the techniques used in the construction of dynamic stochastic simulation models and those used in the construction of knowledge-based systems provides the requirements for the environment. This leads to the design and implementation of a knowledge-based simulation development environment. These techniques were used in the construction of several knowledge-based simulation models including the Advanced Launch System Model (ALSYM).
R.B. Ferguson; V. Clark Baldwin
1987-01-01
Complete instructions for user operation of COMPUTE_P-LOB, a growth and yield prediction system providing volume and weight yields in stand and stock table format, are given, including detailed examples of computer input and output. A complete program listing is provided.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muller, U.A.; Baumle, B.; Kohler, P.
1992-10-01
Music, a DSP-based system with a parallel distributed-memory architecture, provides enormous computing power yet retains the flexibility of a general-purpose computer. Reaching a peak performance of 2.7 Gflops at a significantly lower cost, power consumption, and space requirement than conventional supercomputers, Music is well suited to computationally intensive applications such as neural network simulation. 12 refs., 9 figs., 2 tabs.
Parallel Optimization of 3D Cardiac Electrophysiological Model Using GPU
Xia, Yong; Zhang, Henggui
2015-01-01
Large-scale 3D virtual heart model simulations are highly demanding in computational resources. This imposes a major challenge for traditional CPU-based computing resources, which either cannot meet the demands of whole-heart computation or are not easily available due to their high cost. GPU-based parallel computing therefore provides an alternative for solving the large-scale computational problems of whole-heart modeling. In this study, using a 3D sheep atrial model as a test bed, we developed a GPU-based simulation algorithm to simulate the conduction of electrical excitation waves in the 3D atria. In the GPU algorithm, the multicellular tissue model was split into two components: one is the single-cell model (ordinary differential equations) and the other is the diffusion term of the monodomain model (partial differential equation). This decoupling enabled realization of the GPU parallel algorithm. Furthermore, several optimization strategies were proposed based on the features of the virtual heart model, which enabled a 200-fold speedup compared to a CPU implementation. In conclusion, an optimized GPU algorithm has been developed that provides an economical and powerful platform for 3D whole-heart simulations. PMID:26581957
Parallel Optimization of 3D Cardiac Electrophysiological Model Using GPU.
Xia, Yong; Wang, Kuanquan; Zhang, Henggui
2015-01-01
Large-scale 3D virtual heart model simulations are highly demanding in computational resources. This imposes a major challenge for traditional CPU-based computing resources, which either cannot meet the demands of whole-heart computation or are not easily available due to their high cost. GPU-based parallel computing therefore provides an alternative for solving the large-scale computational problems of whole-heart modeling. In this study, using a 3D sheep atrial model as a test bed, we developed a GPU-based simulation algorithm to simulate the conduction of electrical excitation waves in the 3D atria. In the GPU algorithm, the multicellular tissue model was split into two components: one is the single-cell model (ordinary differential equations) and the other is the diffusion term of the monodomain model (partial differential equation). This decoupling enabled realization of the GPU parallel algorithm. Furthermore, several optimization strategies were proposed based on the features of the virtual heart model, which enabled a 200-fold speedup compared to a CPU implementation. In conclusion, an optimized GPU algorithm has been developed that provides an economical and powerful platform for 3D whole-heart simulations.
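The decoupling described above, with the cell ODEs handled independently at each node and the diffusion term handled as a PDE over the grid, is what makes per-cell parallelization straightforward. The sketch below illustrates the idea on a CPU with NumPy, using a generic two-variable excitable-cell model as a stand-in for the detailed atrial ionic model; the grid size, time step, and parameter values are illustrative assumptions, not those of the published study.

```python
import numpy as np

# Illustrative 2D monodomain grid with a FitzHugh-Nagumo-like cell model
# standing in for a detailed atrial ionic model.
N, dt, dx, D = 128, 0.05, 0.25, 0.1
v = np.zeros((N, N))          # transmembrane potential
w = np.zeros((N, N))          # recovery variable
v[:, :4] = 1.0                # stimulate one edge of the tissue

def laplacian(f):
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f) / dx**2

for step in range(2000):
    # 1) cell model update: independent per grid point (maps onto GPU threads)
    dv = v * (v - 0.1) * (1.0 - v) - w
    dw = 0.01 * (0.5 * v - w)
    v += dt * dv
    w += dt * dw
    # 2) diffusion update: coupling between neighbouring points
    v += dt * D * laplacian(v)
```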
Loudos, George K; Papadimitroulas, Panagiotis G; Kagadis, George C
2014-01-01
Monte Carlo (MC) simulations play a crucial role in nuclear medical imaging since they can provide the ground truth for clinical acquisitions by integrating and quantifying all physical parameters that affect image quality. Over the last decade, a number of realistic computational anthropomorphic models have been developed to serve imaging as well as other biomedical engineering applications. The combination of MC techniques with realistic computational phantoms can provide a powerful tool for pre- and post-processing in imaging, data analysis and dosimetry. This work aims to create a global database of simulated Single Photon Emission Computed Tomography (SPECT) and Positron Emission Tomography (PET) exams; the methodology and the first elements are presented. Simulations are performed using the well-validated GATE open-source toolkit, standard anthropomorphic phantoms and activity distributions of various radiopharmaceuticals derived from the literature. The resulting images, projections and sinograms of each study are provided in the database and can be further exploited to evaluate processing and reconstruction algorithms. Patient studies with different characteristics are included in the database, and different computational phantoms were tested for the same acquisitions. These include the XCAT, Zubal and Virtual Family phantoms, some of which are used for the first time in nuclear imaging. The created database will be freely available, and our current work is directed towards its extension by simulating additional clinical pathologies.
Introducing Computer Simulation into the High School: An Applied Mathematics Curriculum.
ERIC Educational Resources Information Center
Roberts, Nancy
1981-01-01
A programming language called DYNAMO, developed especially for writing simulation models, is promoted. Details of six self-teaching curriculum packages recently developed for simulation-oriented instruction are provided. (MP)
Projective simulation for artificial intelligence
NASA Astrophysics Data System (ADS)
Briegel, Hans J.; de Las Cuevas, Gemma
2012-05-01
We propose a model of a learning agent whose interaction with the environment is governed by a simulation-based projection, which allows the agent to project itself into future situations before it takes real action. Projective simulation is based on a random walk through a network of clips, which are elementary patches of episodic memory. The network of clips changes dynamically, both due to new perceptual input and due to certain compositional principles of the simulation process. During simulation, the clips are screened for specific features which trigger factual action of the agent. The scheme is different from other, computational, notions of simulation, and it provides a new element in an embodied cognitive science approach to intelligent action and learning. Our model provides a natural route for generalization to quantum-mechanical operation and connects the fields of reinforcement learning and quantum computation.
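A minimal two-layer version of the clip-network idea can be written in a few lines: percept clips connect to action clips through weighted edges (h-values), an action is drawn by a one-step random walk, and rewarded transitions are strengthened while all edges slowly relax. The damping constant, reward scheme, and toy environment below are illustrative assumptions, not the specific examples analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_percepts, n_actions = 2, 2
h = np.ones((n_percepts, n_actions))   # edge strengths of the clip network
gamma, reward = 0.02, 1.0              # damping and reward (illustrative values)

def step(percept, correct_action):
    probs = h[percept] / h[percept].sum()        # hopping probabilities
    action = rng.choice(n_actions, p=probs)      # one-step random walk
    r = reward if action == correct_action else 0.0
    h[:] = h - gamma * (h - 1.0)                 # damp all edges toward 1
    h[percept, action] += r                      # reinforce the traversed edge
    return action

# toy task: the correct action simply equals the percept label
for t in range(500):
    p = rng.integers(n_percepts)
    step(p, correct_action=p)
print(h)  # rewarded edges grow, giving higher selection probability
```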
Projective simulation for artificial intelligence
Briegel, Hans J.; De las Cuevas, Gemma
2012-01-01
We propose a model of a learning agent whose interaction with the environment is governed by a simulation-based projection, which allows the agent to project itself into future situations before it takes real action. Projective simulation is based on a random walk through a network of clips, which are elementary patches of episodic memory. The network of clips changes dynamically, both due to new perceptual input and due to certain compositional principles of the simulation process. During simulation, the clips are screened for specific features which trigger factual action of the agent. The scheme is different from other, computational, notions of simulation, and it provides a new element in an embodied cognitive science approach to intelligent action and learning. Our model provides a natural route for generalization to quantum-mechanical operation and connects the fields of reinforcement learning and quantum computation. PMID:22590690
DOE Office of Scientific and Technical Information (OSTI.GOV)
Potter, Kristin C; Brunhart-Lupo, Nicholas J; Bush, Brian W
We have developed a framework for the exploration, design, and planning of energy systems that combines interactive visualization with machine-learning-based approximations of simulations through a general-purpose dataflow API. Our system provides a visual interface allowing users to explore an ensemble of energy simulations representing a subset of the complex input parameter space and to spawn new simulations to 'fill in' input regions corresponding to new energy system scenarios. Unfortunately, many energy simulations are far too slow to provide interactive responses. To support interactive feedback, we are developing reduced-form models via machine learning techniques, which provide statistically sound estimates of the full simulations at a fraction of the computational cost and which are used as proxies for the full-form models. Fast computation and an agile dataflow enhance engagement with energy simulations, and allow researchers to better allocate computational resources to capture informative relationships within the system and provide a low-cost method for validating and quality-checking large-scale modeling efforts.
Toward real-time Monte Carlo simulation using a commercial cloud computing infrastructure.
Wang, Henry; Ma, Yunzhi; Pratx, Guillem; Xing, Lei
2011-09-07
Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in a heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach for high performance computing and is implemented to perform ultra-fast MC calculation in radiation therapy. We deployed the EGS5 MC package in a commercial cloud environment. Launched from a single local computer with Internet access, a Python script allocates a remote virtual cluster. A handshaking protocol designates master and worker nodes. The EGS5 binaries and the simulation data are initially loaded onto the master node. The simulation is then distributed among independent worker nodes via the message passing interface, and the results aggregated on the local computer for display and data analysis. The described approach is evaluated for pencil beams and broad beams of high-energy electrons and photons. The output of cloud-based MC simulation is identical to that produced by single-threaded implementation. For 1 million electrons, a simulation that takes 2.58 h on a local computer can be executed in 3.3 min on the cloud with 100 nodes, a 47× speed-up. Simulation time scales inversely with the number of parallel nodes. The parallelization overhead is also negligible for large simulations. Cloud computing represents one of the most important recent advances in supercomputing technology and provides a promising platform for substantially improved MC simulation. In addition to the significant speed up, cloud computing builds a layer of abstraction for high performance parallel computing, which may change the way dose calculations are performed and radiation treatment plans are completed.
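The master/worker pattern described above, splitting histories across nodes and summing the results on the master, can be sketched with mpi4py. The toy "simulation" below is a placeholder estimate of photon survival through an attenuating slab, not the EGS5 transport code; the history count and physics are illustrative assumptions.

```python
# Run with e.g.: mpiexec -n 4 python mc_cloud_sketch.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

TOTAL_HISTORIES = 1_000_000              # split evenly across ranks
local_n = TOTAL_HISTORIES // size
rng = np.random.default_rng(seed=rank)   # independent random stream per node

# placeholder physics: photons surviving a slab of optical depth tau
tau = 2.0
path = rng.exponential(1.0, local_n)
local_survived = int(np.count_nonzero(path > tau))

# aggregate partial tallies on the master node (rank 0)
survived = comm.reduce(local_survived, op=MPI.SUM, root=0)
if rank == 0:
    total = local_n * size
    print(f"survival fraction ~ {survived / total:.4f}")
```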
Two-step simulation of velocity and passive scalar mixing at high Schmidt number in turbulent jets
NASA Astrophysics Data System (ADS)
Rah, K. Jeff; Blanquart, Guillaume
2016-11-01
Simulation of a passive scalar in a high Schmidt number turbulent mixing process requires a higher computational cost than simulation of the velocity field, because the scalar is associated with smaller length scales than the velocity. Thus, full simulation of both velocity and a passive scalar at high Sc for a practical configuration is difficult to perform. In this work, a new approach to simulating velocity and passive scalar mixing at high Sc is suggested to reduce the computational cost. First, the velocity fields are resolved by Large Eddy Simulation (LES). Then, using the velocity information extracted from the LES, the scalar inside a moving fluid blob is simulated by Direct Numerical Simulation (DNS). This two-step simulation method is applied to a turbulent jet and provides a new way to examine a scalar mixing process in a practical application at smaller computational cost. NSF, Samsung Scholarship.
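The cost argument rests on the standard scaling of the smallest scalar scale with Schmidt number. For Sc > 1 the scalar field develops structure down to the Batchelor scale, which is smaller than the Kolmogorov scale by a factor of the square root of Sc, so a full DNS of the scalar needs roughly Sc^{3/2} more grid points in three dimensions. This is the textbook scaling estimate, not a result quoted from the abstract:

```latex
\lambda_B = \frac{\eta}{\sqrt{Sc}}, \qquad
\frac{N_{\mathrm{scalar}}}{N_{\mathrm{velocity}}} \sim
\left(\frac{\eta}{\lambda_B}\right)^{3} = Sc^{3/2}.
```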
Stone, John E; Hallock, Michael J; Phillips, James C; Peterson, Joseph R; Luthey-Schulten, Zaida; Schulten, Klaus
2016-05-01
Many of the continuing scientific advances achieved through computational biology are predicated on the availability of ongoing increases in computational power required for detailed simulation and analysis of cellular processes on biologically-relevant timescales. A critical challenge facing the development of future exascale supercomputer systems is the development of new computing hardware and associated scientific applications that dramatically improve upon the energy efficiency of existing solutions, while providing increased simulation, analysis, and visualization performance. Mobile computing platforms have recently become powerful enough to support interactive molecular visualization tasks that were previously only possible on laptops and workstations, creating future opportunities for their convenient use for meetings, remote collaboration, and as head mounted displays for immersive stereoscopic viewing. We describe early experiences adapting several biomolecular simulation and analysis applications for emerging heterogeneous computing platforms that combine power-efficient system-on-chip multi-core CPUs with high-performance massively parallel GPUs. We present low-cost power monitoring instrumentation that provides sufficient temporal resolution to evaluate the power consumption of individual CPU algorithms and GPU kernels. We compare the performance and energy efficiency of scientific applications running on emerging platforms with results obtained on traditional platforms, identify hardware and algorithmic performance bottlenecks that affect the usability of these platforms, and describe avenues for improving both the hardware and applications in pursuit of the needs of molecular modeling tasks on mobile devices and future exascale computers.
NASA Technical Reports Server (NTRS)
Tweedt, Daniel L.
2014-01-01
Computational aerodynamic simulations of an 840 ft/sec tip speed, Advanced Ducted Propulsor fan system were performed at five different operating points on the fan operating line, in order to provide detailed internal flow field information for use with fan acoustic prediction methods presently being developed, assessed and validated. The fan system is a sub-scale, low-noise research fan/nacelle model that has undergone extensive experimental testing in the 9- by 15-Foot Low-Speed Wind Tunnel at the NASA Glenn Research Center, resulting in high-quality, detailed aerodynamic and acoustic measurement data. Details of the fan geometry, the computational fluid dynamics methods, the computational grids, and various computational parameters relevant to the numerical simulations are discussed. Flow field results for three of the five operating conditions simulated are presented in order to provide a representative look at the computed solutions. Each of the five fan aerodynamic simulations involved the entire fan system, excluding a long core duct section downstream of the core inlet guide vane. As a result, only fan rotational speed and system bypass ratio, set by specifying static pressure downstream of the core inlet guide vane row, were adjusted in order to set the fan operating point, leading to operating points that lie on a fan operating line and making mass flow rate a fully dependent parameter. The resulting mass flow rates are in good agreement with measurement values. The computed blade row flow fields for all five fan operating points are, in general, aerodynamically healthy. Rotor blade and fan exit guide vane flow characteristics are good, including incidence and deviation angles, chordwise static pressure distributions, blade surface boundary layers, secondary flow structures, and blade wakes. Examination of the computed flow fields reveals no excessive boundary layer separations or related secondary-flow problems. A few spanwise comparisons between computational and measurement data in the bypass duct show that they are in good agreement, thus providing a partial validation of the computational results.
Computer-Assisted Scheduling of Army Unit Training: An Application of Simulated Annealing.
ERIC Educational Resources Information Center
Hart, Roland J.; Goehring, Dwight J.
This report of an ongoing research project intended to provide computer assistance to Army units for the scheduling of training focuses on the feasibility of simulated annealing, a heuristic approach for solving scheduling problems. Following an executive summary and brief introduction, the document is divided into three sections. First, the Army…
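Simulated annealing itself is easy to state: repeatedly propose a small change to the schedule, always accept improvements, and accept worsening moves with a probability that shrinks as a "temperature" parameter is lowered. The toy cost function, event counts, and cooling schedule below are illustrative assumptions and are not taken from the Army scheduling report.

```python
import math
import random

random.seed(1)
N_EVENTS, N_SLOTS = 12, 6   # hypothetical training events and time slots

def cost(schedule):
    # toy objective: penalize time slots holding more than two events
    load = [schedule.count(s) for s in range(N_SLOTS)]
    return sum(max(0, l - 2) for l in load)

schedule = [random.randrange(N_SLOTS) for _ in range(N_EVENTS)]
T, cooling = 5.0, 0.995

for it in range(5000):
    cand = list(schedule)
    cand[random.randrange(N_EVENTS)] = random.randrange(N_SLOTS)  # small move
    delta = cost(cand) - cost(schedule)
    # accept improvements outright, and occasionally accept worse schedules
    if delta <= 0 or random.random() < math.exp(-delta / T):
        schedule = cand
    T *= cooling   # cool down

print(cost(schedule), schedule)
```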
ERIC Educational Resources Information Center
Arumi, Francisco N.
Computer programs capable of describing the thermal behavior of buildings are used to help architectural students understand environmental systems. The Numerical Simulation Laboratory at the Architectural School of the University of Texas at Austin was developed to provide the necessary software capable of simulating the energy transactions…
NASA Technical Reports Server (NTRS)
Arnold, J. O.
1987-01-01
With the advent of supercomputers and modern computational chemistry algorithms and codes, a powerful tool was created to help fill NASA's continuing need for information on the properties of matter in hostile or unusual environments. Computational resources provided under the National Aerodynamics Simulator (NAS) program were a cornerstone for recent advancements in this field. Properties of gases, materials, and their interactions can be determined from solutions of the governing equations. In the case of gases, for example, radiative transition probabilities per particle, bond-dissociation energies, and rates of simple chemical reactions can be determined computationally as reliably as from experiment. The data are proving to be quite valuable in providing inputs to real-gas flow simulation codes used to compute aerothermodynamic loads on NASA's aeroassist orbital transfer vehicles and a host of problems related to the National Aerospace Plane Program. Although more approximate, similar solutions can be obtained for ensembles of atoms simulating small particles of materials with and without the presence of gases. Computational chemistry has application in studying catalysis and the properties of polymers, both of interest to various NASA missions, including those previously mentioned. In addition to discussing these applications of computational chemistry within NASA, the governing equations and the need for supercomputers for their solution are outlined.
Salipur, Zdravko; Bertocci, Gina
2010-01-01
It has been shown that ANSI WC19 transit wheelchairs that are crashworthy in frontal impact exhibit catastrophic failures in rear impact and may not be able to provide stable seating support and thus protection for the wheelchair occupant. Thus far only limited sled test and computer simulation data have been available to study rear impact wheelchair safety. Computer modeling can be used as an economic and comprehensive tool to gain critical knowledge regarding wheelchair integrity and occupant safety. This study describes the development and validation of a computer model simulating an adult wheelchair-seated occupant subjected to a rear impact event. The model was developed in MADYMO and validated rigorously using the results of three similar sled tests conducted to specifications provided in the draft ISO/TC 173 standard. Outcomes from the model can provide critical wheelchair loading information to wheelchair and tiedown manufacturers, resulting in safer wheelchair designs for rear impact conditions.
ANOPP/VMS HSCT ground contour system
NASA Technical Reports Server (NTRS)
Rawls, John, Jr.; Glaab, Lou
1992-01-01
This viewgraph shows the integration of the Visual Motion Simulator with ANOPP. ANOPP is an acronym for the Aircraft NOise Prediction Program. It is a computer code consisting of dedicated noise prediction modules for jet, propeller, and rotor powered aircraft along with flight support and noise propagation modules, all executed under the control of an executive system. The Visual Motion Simulator (VMS) is a ground based motion simulator with six degrees of freedom. The transport-type cockpit is equipped with conventional flight and engine-thrust controls and with flight instrument displays. Control forces on the wheel, column, and rudder pedals are provided by a hydraulic system coupled with an analog computer. The simulator provides variable-feel characteristics of stiffness, damping, coulomb friction, breakout forces, and inertia. The VMS provides a wide range of realistic flight trajectories necessary for computing accurate ground contours. The NASA VMS will be discussed in detail later in this presentation. An equally important part of the system for both ANOPP and VMS is the engine performance. This will also be discussed in the presentation.
NASA Technical Reports Server (NTRS)
Kavi, K. M.
1984-01-01
There have been a number of simulation packages developed for the purpose of designing, testing and validating computer systems, digital systems and software systems. Complex analytical tools based on Markov and semi-Markov processes have been designed to estimate the reliability and performance of simulated systems. Petri nets have received wide acceptance for modeling complex and highly parallel computers. In this research, data flow models for computer systems are investigated. Data flow models can be used to simulate both software and hardware in a uniform manner. Data flow simulation techniques provide the computer systems designer with a CAD environment which enables highly parallel complex systems to be defined, evaluated at all levels and finally implemented in either hardware or software. Inherent in the data flow concept is the hierarchical handling of complex systems. In this paper we describe how data flow can be used to model computer systems.
NASA Technical Reports Server (NTRS)
Moin, Parviz; Spalart, Philippe R.
1987-01-01
The use of simulation data bases for the examination of turbulent flows is an effective research tool. Studies of the structure of turbulence have been hampered by the limited number of probes and the impossibility of measuring all desired quantities. Also, flow visualization is confined to the observation of passive markers with limited field of view and contamination caused by time-history effects. Computer flow fields are a new resource for turbulence research, providing all the instantaneous flow variables in three-dimensional space. Simulation data bases also provide much-needed information for phenomenological turbulence modeling. Three dimensional velocity and pressure fields from direct simulations can be used to compute all the terms in the transport equations for the Reynolds stresses and the dissipation rate. However, only a few, geometrically simple flows have been computed by direct numerical simulation, and the inventory of simulation does not fully address the current modeling needs in complex turbulent flows. The availability of three-dimensional flow fields also poses challenges in developing new techniques for their analysis, techniques based on experimental methods, some of which are used here for the analysis of direct-simulation data bases in studies of the mechanics of turbulent flows.
Particle-In-Cell simulations of high pressure plasmas using graphics processing units
NASA Astrophysics Data System (ADS)
Gebhardt, Markus; Atteln, Frank; Brinkmann, Ralf Peter; Mussenbrock, Thomas; Mertmann, Philipp; Awakowicz, Peter
2009-10-01
Particle-In-Cell (PIC) simulations are widely used to understand the fundamental phenomena in low-temperature plasmas. Particularly plasmas at very low gas pressures are studied using PIC methods. The inherent drawback of these methods is that they are very time consuming -- certain stability conditions have to be satisfied. This holds even more for the PIC simulation of high pressure plasmas due to the very high collision rates. The simulations take a very long time to run on standard computers and require the help of computer clusters or supercomputers. Recent advances in the field of graphics processing units (GPUs) provide every personal computer with a highly parallel multiprocessor architecture for very little money. This architecture is freely programmable and can be used to implement a wide class of problems. In this paper we present the concepts of a fully parallel PIC simulation of high pressure plasmas using the benefits of GPU programming.
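The core of any electrostatic PIC step is a gather of the grid field to each particle followed by a leapfrog push, and this per-particle work is what maps so naturally onto GPU threads. The one-dimensional NumPy sketch below shows only that push; charge deposition, the field solve, and the Monte Carlo collisions that dominate at high pressure are omitted, and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_part, n_grid, L = 10_000, 64, 1.0
dx, dt, qm = L / n_grid, 1e-3, -1.0       # illustrative units, q/m = -1

x = rng.uniform(0, L, n_part)             # particle positions
v = rng.normal(0.0, 0.1, n_part)          # particle velocities
E_grid = np.sin(2 * np.pi * np.arange(n_grid) * dx / L)  # placeholder field

for step in range(100):
    # gather: linear interpolation of the grid field to particle positions
    j = np.floor(x / dx).astype(int) % n_grid
    w = x / dx - np.floor(x / dx)
    E_p = (1 - w) * E_grid[j] + w * E_grid[(j + 1) % n_grid]
    # leapfrog push (each particle independent -> one GPU thread each)
    v += qm * E_p * dt
    x = (x + v * dt) % L
```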
DOE Office of Scientific and Technical Information (OSTI.GOV)
Demeure, I.M.
The research presented here is concerned with representation techniques and tools to support the design, prototyping, simulation, and evaluation of message-based parallel, distributed computations. The author describes ParaDiGM-Parallel, Distributed computation Graph Model-a visual representation technique for parallel, message-based distributed computations. ParaDiGM provides several views of a computation depending on the aspect of concern. It is made of two complementary submodels, the DCPG-Distributed Computing Precedence Graph-model, and the PAM-Process Architecture Model-model. DCPGs are precedence graphs used to express the functionality of a computation in terms of tasks, message-passing, and data. PAM graphs are used to represent the partitioning of a computation into schedulable units or processes, and the pattern of communication among those units. There is a natural mapping between the two models. He illustrates the utility of ParaDiGM as a representation technique by applying it to various computations (e.g., an adaptive global optimization algorithm, the client-server model). ParaDiGM representations are concise. They can be used in documenting the design and the implementation of parallel, distributed computations, in describing such computations to colleagues, and in comparing and contrasting various implementations of the same computation. He then describes VISA-VISual Assistant, a software tool to support the design, prototyping, and simulation of message-based parallel, distributed computations. VISA is based on the ParaDiGM model. In particular, it supports the editing of ParaDiGM graphs to describe the computations of interest, and the animation of these graphs to provide visual feedback during simulations. The graphs are supplemented with various attributes, simulation parameters, and interpretations which are procedures that can be executed by VISA.
O'Donnell, Michael
2015-01-01
State-and-transition simulation modeling relies on knowledge of vegetation composition and structure (states) that describe community conditions, mechanistic feedbacks such as fire that can affect vegetation establishment, and ecological processes that drive community conditions as well as the transitions between these states. However, as the need for modeling larger and more complex landscapes increases, a more advanced awareness of computing resources becomes essential. The objectives of this study include identifying challenges of executing state-and-transition simulation models, identifying common bottlenecks of computing resources, developing a workflow and software that enable parallel processing of Monte Carlo simulations, and identifying the advantages and disadvantages of different computing resources. To address these objectives, this study used the ApexRMS® SyncroSim software and embarrassingly parallel tasks of Monte Carlo simulations on a single multicore computer and on distributed computing systems. The results demonstrated that state-and-transition simulation models scale best in distributed computing environments, such as high-throughput and high-performance computing, because these environments disseminate the workloads across many compute nodes, thereby supporting analysis of larger landscapes, higher spatial resolution vegetation products, and more complex models. Using a case study and five different computing environments, the top result (high-throughput computing versus serial computations) indicated a decrease of approximately 96.6% in computing time. With a single, multicore compute node (bottom result), the computing time decreased by 81.8% relative to serial computations. These results provide insight into the tradeoffs of using different computing resources when research necessitates advanced integration of ecoinformatics incorporating large and complicated data inputs and models.
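The "embarrassingly parallel" structure mentioned above is simply that each Monte Carlo replicate of a state-and-transition run is independent, so replicates can be farmed out to cores or cluster nodes and collected afterwards. The sketch below shows that pattern on a single multicore machine with Python's multiprocessing; the replicate function is a made-up placeholder, not the SyncroSim workflow used in the study.

```python
from multiprocessing import Pool
import random

def run_replicate(seed):
    # placeholder for one state-and-transition Monte Carlo realization
    rng = random.Random(seed)
    state, fires = "shrubland", 0
    for year in range(100):
        if state == "shrubland" and rng.random() < 0.02:   # illustrative fire rate
            state, fires = "grassland", fires + 1
        elif state == "grassland" and rng.random() < 0.05:  # illustrative recovery
            state = "shrubland"
    return fires

if __name__ == "__main__":
    seeds = range(1000)                  # 1000 independent replicates
    with Pool() as pool:                 # uses all available cores
        results = pool.map(run_replicate, seeds)
    print("mean fires per replicate:", sum(results) / len(results))
```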
NASA Technical Reports Server (NTRS)
Veres, Joseph P.
2002-01-01
A high-fidelity simulation of a commercial turbofan engine has been created as part of the Numerical Propulsion System Simulation Project. The high-fidelity computer simulation utilizes computer models that were developed at NASA Glenn Research Center in cooperation with turbofan engine manufacturers. The average-passage (APNASA) Navier-Stokes based viscous flow computer code is used to simulate the 3D flow in the compressors and turbines of the advanced commercial turbofan engine. The 3D National Combustion Code (NCC) is used to simulate the flow and chemistry in the advanced aircraft combustor. The APNASA turbomachinery code and the NCC combustor code exchange boundary conditions at the interface planes at the combustor inlet and exit. This computer simulation technique can evaluate engine performance at steady operating conditions. The 3D flow models provide detailed knowledge of the airflow within the fan and compressor, the high and low pressure turbines, and the flow and chemistry within the combustor. The models simulate the performance of the engine at operating conditions that include sea level takeoff and the altitude cruise condition.
Realistic natural atmospheric phenomena and weather effects for interactive virtual environments
NASA Astrophysics Data System (ADS)
McLoughlin, Leigh
Clouds and the weather are important aspects of any natural outdoor scene, but existing dynamic techniques within computer graphics only offer the simplest of cloud representations. The problem that this work looks to address is how to provide a means of simulating clouds and weather features, such as precipitation, in a way that is suitable for virtual environments. Techniques for cloud simulation are available within the area of meteorology, but numerical weather prediction systems are computationally expensive, give more numerical accuracy than we require for graphics and are restricted to the laws of physics. Within computer graphics, we often need to direct and adjust physical features or to bend reality to meet artistic goals, which is a key difference between the subjects of computer graphics and physical science. Pure physically-based simulations, however, evolve their solutions according to pre-set rules and are notoriously difficult to control. The challenge then is for the solution to be computationally lightweight and able to be directed in some measure while at the same time producing believable results. This work presents a lightweight physically-based cloud simulation scheme that simulates the dynamic properties of cloud formation and weather effects. The system simulates water vapour, cloud water, cloud ice, rain, snow and hail. The water model incorporates control parameters and the cloud model uses an arbitrary vertical temperature profile, with a tool described to allow the user to define this. The result of this work is that clouds can now be simulated in near real-time complete with precipitation. The temperature profile and tool then provide a means of directing the resulting formation.
Yang, Chaowei; Wu, Huayi; Huang, Qunying; Li, Zhenlong; Li, Jing
2011-01-01
Contemporary physical science studies rely on the effective analyses of geographically dispersed spatial data and simulations of physical phenomena. Single computers and generic high-end computing are not sufficient to process the data for complex physical science analysis and simulations, which can be successfully supported only through distributed computing, best optimized through the application of spatial principles. Spatial computing, the computing aspect of a spatial cyberinfrastructure, refers to a computing paradigm that utilizes spatial principles to optimize distributed computers to catalyze advancements in the physical sciences. Spatial principles govern the interactions between scientific parameters across space and time by providing the spatial connections and constraints to drive the progression of the phenomena. Therefore, spatial computing studies could better position us to leverage spatial principles in simulating physical phenomena and, by extension, advance the physical sciences. Using geospatial science as an example, this paper illustrates through three research examples how spatial computing could (i) enable data intensive science with efficient data/services search, access, and utilization, (ii) facilitate physical science studies with enabling high-performance computing capabilities, and (iii) empower scientists with multidimensional visualization tools to understand observations and simulations. The research examples demonstrate that spatial computing is of critical importance to design computing methods to catalyze physical science studies with better data access, phenomena simulation, and analytical visualization. We envision that spatial computing will become a core technology that drives fundamental physical science advancements in the 21st century. PMID:21444779
Yang, Chaowei; Wu, Huayi; Huang, Qunying; Li, Zhenlong; Li, Jing
2011-04-05
Contemporary physical science studies rely on the effective analyses of geographically dispersed spatial data and simulations of physical phenomena. Single computers and generic high-end computing are not sufficient to process the data for complex physical science analysis and simulations, which can be successfully supported only through distributed computing, best optimized through the application of spatial principles. Spatial computing, the computing aspect of a spatial cyberinfrastructure, refers to a computing paradigm that utilizes spatial principles to optimize distributed computers to catalyze advancements in the physical sciences. Spatial principles govern the interactions between scientific parameters across space and time by providing the spatial connections and constraints to drive the progression of the phenomena. Therefore, spatial computing studies could better position us to leverage spatial principles in simulating physical phenomena and, by extension, advance the physical sciences. Using geospatial science as an example, this paper illustrates through three research examples how spatial computing could (i) enable data intensive science with efficient data/services search, access, and utilization, (ii) facilitate physical science studies with enabling high-performance computing capabilities, and (iii) empower scientists with multidimensional visualization tools to understand observations and simulations. The research examples demonstrate that spatial computing is of critical importance to design computing methods to catalyze physical science studies with better data access, phenomena simulation, and analytical visualization. We envision that spatial computing will become a core technology that drives fundamental physical science advancements in the 21st century.
GATE Monte Carlo simulation in a cloud computing environment
NASA Astrophysics Data System (ADS)
Rowedder, Blake Austin
The GEANT4-based GATE is a unique and powerful Monte Carlo (MC) platform, which provides a single code library allowing the simulation of specific medical physics applications, e.g. PET, SPECT, CT, radiotherapy, and hadron therapy. However, this rigorous yet flexible platform is used only sparingly in the clinic due to its lengthy calculation time. By accessing the powerful computational resources of a cloud computing environment, GATE's runtime can be significantly reduced to clinically feasible levels without the sizable investment of a local high performance cluster. This study investigated a reliable and efficient execution of GATE MC simulations using a commercial cloud computing service. Amazon's Elastic Compute Cloud was used to launch several nodes equipped with GATE. Job data was initially broken up on the local computer, then uploaded to the worker nodes on the cloud. The results were automatically downloaded and aggregated on the local computer for display and analysis. Five simulations were repeated for every cluster size between 1 and 20 nodes. Ultimately, increasing cluster size resulted in a decrease in calculation time that could be expressed with an inverse power model. Comparing the benchmark results to the published values and error margins indicated that the simulation results were not affected by the cluster size and thus that the integrity of a calculation is preserved in a cloud computing environment. The runtime of a 53-minute simulation was decreased to 3.11 minutes when run on a 20-node cluster. The ability to improve the speed of simulation suggests that fast MC simulations are viable for imaging and radiotherapy applications. As high-performance computing continues to fall in price and grow in accessibility, implementing Monte Carlo techniques with cloud computing for clinical applications will become increasingly attractive.
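The statement that runtime followed an inverse power of cluster size can be checked by fitting t(n) = a * n^(-b) to the measured times. The sketch below does this with SciPy on made-up timing data; the numbers are illustrative, not the values reported in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def runtime_model(n, a, b):
    return a * n ** (-b)          # inverse power model t(n) = a * n^-b

# hypothetical (cluster size, runtime in minutes) measurements
nodes = np.array([1, 2, 5, 10, 20], dtype=float)
times = np.array([53.0, 27.5, 11.8, 6.1, 3.1])

(a, b), _ = curve_fit(runtime_model, nodes, times, p0=[50.0, 1.0])
print(f"t(n) ~ {a:.1f} * n^-{b:.2f}")
print("predicted 20-node runtime:", runtime_model(20, a, b))
```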
Free-electron laser simulations on the MPP
NASA Technical Reports Server (NTRS)
Vonlaven, Scott A.; Liebrock, Lorie M.
1987-01-01
Free electron lasers (FELs) are of interest because they provide high power, high efficiency, and broad tunability. FEL simulations can make efficient use of computers of the Massively Parallel Processor (MPP) class because most of the processing consists of applying a simple equation to a set of identical particles. A test version of the KMS Fusion FEL simulation, which resides mainly in the MPP's host computer and only partially in the MPP, has run successfully.
Experimental Identification of Non-Abelian Topological Orders on a Quantum Simulator.
Li, Keren; Wan, Yidun; Hung, Ling-Yan; Lan, Tian; Long, Guilu; Lu, Dawei; Zeng, Bei; Laflamme, Raymond
2017-02-24
Topological orders can be used as media for topological quantum computing-a promising quantum computation model due to its invulnerability against local errors. Conversely, a quantum simulator, often regarded as a quantum computing device for special purposes, also offers a way of characterizing topological orders. Here, we show how to identify distinct topological orders via measuring their modular S and T matrices. In particular, we employ a nuclear magnetic resonance quantum simulator to study the properties of three topologically ordered matter phases described by the string-net model with two string types, including the Z_{2} toric code, doubled semion, and doubled Fibonacci. The third one, non-Abelian Fibonacci order is notably expected to be the simplest candidate for universal topological quantum computing. Our experiment serves as the basic module, built on which one can simulate braiding of non-Abelian anyons and ultimately, topological quantum computation via the braiding, and thus provides a new approach of investigating topological orders using quantum computers.
A PC-based simulation of the National Transonic Facility's safety microprocessor
NASA Technical Reports Server (NTRS)
Thibodeaux, J. J.; Kilgore, W. A.; Balakrishna, S.
1993-01-01
A brief study was undertaken to demonstrate the feasibility of using a state-of-the-art off-the-shelf high-speed personal computer for simulating a microprocessor presently used for wind tunnel safety purposes at Langley Research Center's National Transonic Facility (NTF). Currently, there is no active display of tunnel alarm/alert safety information provided to the tunnel operators, but rather such information is periodically recorded on a process monitoring computer printout. This does not provide on-line situational information nor permit rapid identification of operational safety violations that can halt tunnel operations. It was therefore decided to simulate the existing algorithms and briefly evaluate a real-time display which could provide both position and troubleshooting information.
Bistatic passive radar simulator with spatial filtering subsystem
NASA Astrophysics Data System (ADS)
Hossa, Robert; Szlachetko, Boguslaw; Lewandowski, Andrzej; Górski, Maksymilian
2009-06-01
The purpose of this paper is to briefly introduce the structure and features of a virtual passive FM radar developed in the Matlab numerical computing system and to present alternative modes of its operation. The idea of the proposed solution is based on analytic representations of the transmitted direct signals and the reflected echo signals. As a spatial filtering subsystem, a beamforming network of ULA and UCA dipole configurations dedicated to the bistatic radar concept is considered, and computationally efficient procedures are presented in detail. Finally, exemplary results of computer simulations with the developed virtual simulator are provided and discussed.
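For the ULA part of the spatial filtering subsystem, the basic operation is forming a steering vector for a chosen look direction and applying it as a delay-and-sum beamformer to the array snapshots. The NumPy sketch below shows that operation in baseband; the element count, spacing, and synthetic signal are illustrative assumptions rather than parameters of the described simulator.

```python
import numpy as np

n_elem, d_over_lambda = 8, 0.5        # 8-element ULA, half-wavelength spacing
theta_sig = np.deg2rad(20.0)          # direction of an illustrative echo

def steering_vector(theta):
    n = np.arange(n_elem)
    return np.exp(-1j * 2 * np.pi * d_over_lambda * n * np.sin(theta))

# synthetic snapshots: one narrowband source plus noise
rng = np.random.default_rng(0)
s = np.exp(1j * 2 * np.pi * 0.01 * np.arange(1000))      # baseband signal
x = np.outer(steering_vector(theta_sig), s)
x += 0.1 * (rng.standard_normal(x.shape) + 1j * rng.standard_normal(x.shape))

# delay-and-sum beamformer scanned over look angles
angles = np.deg2rad(np.linspace(-90, 90, 181))
power = [np.mean(np.abs(steering_vector(a).conj() @ x) ** 2) for a in angles]
print("peak at", np.rad2deg(angles[int(np.argmax(power))]), "degrees")
```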
NASA Astrophysics Data System (ADS)
Larsen, J. D.; Schaap, M. G.
2013-12-01
Recent advances in computing technology and experimental techniques have made it possible to observe and characterize fluid dynamics at the micro-scale. Many computational methods exist that can adequately simulate fluid flow in porous media. Lattice Boltzmann methods provide the distinct advantage of tracking particles at the microscopic level and returning macroscopic observations. While experimental methods can accurately measure macroscopic fluid dynamics, computational efforts can be used to predict and gain insight into fluid dynamics by utilizing thin sections or computed micro-tomography (CMT) images of core sections. Although substantial efforts have been made to advance non-invasive imaging methods such as CMT, fluid dynamics simulations, and microscale analysis, a true three-dimensional image segmentation technique has not been developed until recently. Many competing segmentation techniques are utilized in industry and research settings with varying results. In this study, the lattice Boltzmann method is used to simulate Stokes flow in a macroporous soil column. Two-dimensional CMT images were used to reconstruct a three-dimensional representation of the original sample. Six competing segmentation standards were used to binarize the CMT volumes, providing the distinction between solid phase and pore space. The permeability of the reconstructed samples was calculated, with Darcy's Law, from lattice Boltzmann simulations of fluid flow in the samples. We compare the simulated permeability from the differing segmentation algorithms to experimental findings.
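Once the lattice Boltzmann run has converged, permeability follows from Darcy's law: the superficial (volume-averaged) velocity in the flow direction, the fluid viscosity, and the imposed pressure gradient give k directly. The sketch below shows only that final step on placeholder arrays; the field values and lattice units are illustrative assumptions, not results of the study.

```python
import numpy as np

# placeholder outputs of a converged lattice Boltzmann simulation (lattice units)
ux = np.full((64, 64, 64), 1.2e-4)          # x-velocity field from the LB solver
pore = np.ones((64, 64, 64), dtype=bool)    # binarized pore space (True = pore)
mu = 1.0 / 6.0                              # dynamic viscosity (lattice units)
dp_dx = 1.0e-5                              # imposed pressure gradient

# superficial (Darcy) velocity averages over the whole volume, pores and solids
q = (ux * pore).sum() / ux.size

# Darcy's law: q = -(k / mu) dP/dx  =>  k = q * mu / |dP/dx|
k = q * mu / abs(dp_dx)
print("permeability (lattice units^2):", k)
```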
A Computational Framework for Efficient Low Temperature Plasma Simulations
NASA Astrophysics Data System (ADS)
Verma, Abhishek Kumar; Venkattraman, Ayyaswamy
2016-10-01
Over the past years, scientific computing has emerged as an essential tool for the investigation and prediction of low temperature plasma (LTP) applications, which include electronics, nanomaterial synthesis, metamaterials, etc. To further explore LTP behavior with greater fidelity, we present a computational toolbox developed to perform LTP simulations. This framework will allow us to enhance our understanding of multiscale plasma phenomena using high performance computing tools mainly based on the OpenFOAM FVM distribution. Although aimed at microplasma simulations, the modular framework is able to perform multiscale, multiphysics simulations of physical systems comprising LTPs. Salient introductory features include the capability to perform parallel, 3D simulations of LTP applications on unstructured meshes. Performance of the solver is assessed with numerical results on the accuracy and efficiency of benchmark problems in microdischarge devices. Numerical simulation of a microplasma reactor at atmospheric pressure with hemispherical dielectric-coated electrodes will be discussed, providing an overview of the applicability and future scope of this framework.
High performance hybrid functional Petri net simulations of biological pathway models on CUDA.
Chalkidis, Georgios; Nagasaki, Masao; Miyano, Satoru
2011-01-01
Hybrid functional Petri nets are a widespread tool for representing and simulating biological models. Due to their potential for providing virtual drug testing environments, biological simulations have a growing impact on pharmaceutical research. Continuous research advancements in biology and medicine lead to exponentially increasing simulation times, thus raising the demand for performance accelerations by efficient and inexpensive parallel computation solutions. Recent developments in the field of general-purpose computation on graphics processing units (GPGPU) enabled the scientific community to port a variety of compute-intensive algorithms onto the graphics processing unit (GPU). This work presents the first scheme for mapping biological hybrid functional Petri net models, which can handle both discrete and continuous entities, onto compute unified device architecture (CUDA) enabled GPUs. GPU-accelerated simulations are observed to run up to 18 times faster than sequential implementations. Simulating the cell boundary formation by Delta-Notch signaling on a CUDA-enabled GPU results in a speedup of approximately 7x for a model containing 1,600 cells.
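Hybrid functional Petri nets combine discrete places and transitions with continuous ones whose firing speeds are functions of the current markings, so simulation reduces to repeated evaluation of rate functions plus numerical integration, an operation that parallelizes well. The two-place sketch below integrates a continuous mRNA/protein fragment with Euler steps; the rate expressions are illustrative assumptions, and the discrete machinery of full HFPNs is omitted.

```python
# minimal continuous fragment of a hybrid functional Petri net:
# place m (mRNA) and place p (protein), with marking-dependent speeds
dt, t_end = 0.01, 50.0
m, p = 0.0, 0.0

def speeds(m, p):
    transcription = 1.0              # constant source (illustrative)
    translation = 0.5 * m            # speed depends on the mRNA marking
    deg_m, deg_p = 0.2 * m, 0.1 * p  # degradation speeds
    return transcription, translation, deg_m, deg_p

t = 0.0
while t < t_end:
    v_tc, v_tl, v_dm, v_dp = speeds(m, p)
    m += dt * (v_tc - v_dm)          # Euler update of continuous markings
    p += dt * (v_tl - v_dp)
    t += dt

print(f"steady markings: m ~ {m:.2f}, p ~ {p:.2f}")
```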
ERIC Educational Resources Information Center
Kay, Jack G.; And Others
1988-01-01
Describes two applications of the microcomputer for laboratory exercises. Explores radioactive decay using the Bateman equations on a Macintosh computer. Provides examples and screen dumps of data. Investigates polymer configurations using a Monte Carlo simulation on an IBM personal computer. (MVL)
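For a two-member decay chain (parent 1 decaying into daughter 2) with no daughter present initially, the Bateman solution such an exercise would have students explore is the standard textbook result:

```latex
N_2(t) = N_1(0)\,\frac{\lambda_1}{\lambda_2 - \lambda_1}
         \left(e^{-\lambda_1 t} - e^{-\lambda_2 t}\right),
\qquad \lambda_1 \neq \lambda_2 .
```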
Computer-Based Training: Will it Replace You?
ERIC Educational Resources Information Center
Hudson, William J.
1982-01-01
Examines myths and fears about computer-based training (displaces trainers, dehumanizes learners), lists what computers cannot do (analyze needs, formulate objectives, act as subject experts), and what they can do effectively (handle knowledge transfer, provide simulation). (SK)
NASA Technical Reports Server (NTRS)
Seldner, K.
1976-01-01
The development of control systems for jet engines requires a real-time computer simulation. The simulation provides an effective tool for evaluating control concepts and problem areas prior to actual engine testing. The development and use of a real-time simulation of the Pratt and Whitney F100-PW100 turbofan engine are described. The simulation was used in a multi-variable optimal control research program using linear quadratic regulator theory. The simulation is used to generate linear engine models at selected operating points and evaluate the control algorithm. To reduce the complexity of the design, it is desirable to reduce the order of the linear model. A technique to reduce the order of the model is discussed. Selected results for the high- and low-order models are compared. The LQR control algorithms can be programmed on a digital computer. This computer will control the engine simulation over the desired flight envelope.
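Linear quadratic regulator design of the kind described reduces, for each linearized engine model, to solving the continuous algebraic Riccati equation and forming the state-feedback gain K = R^-1 B' P. The SciPy sketch below shows that computation on a small made-up state-space model; the matrices are illustrative assumptions, not an F100 linearization.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# illustrative 2-state, 1-input linear model (not an actual F100 linearization)
A = np.array([[-1.0, 0.5],
              [ 0.0, -2.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])     # state weighting
R = np.array([[1.0]])        # control weighting

P = solve_continuous_are(A, B, Q, R)     # solve A'P + PA - PBR^-1B'P + Q = 0
K = np.linalg.solve(R, B.T @ P)          # LQR gain: u = -K x
print("gain K =", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```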
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clouse, C. J.; Edwards, M. J.; McCoy, M. G.
2015-07-07
Through its Advanced Scientific Computing (ASC) and Inertial Confinement Fusion (ICF) code development efforts, Lawrence Livermore National Laboratory (LLNL) provides a world-leading numerical simulation capability for the National HED/ICF program in support of the Stockpile Stewardship Program (SSP). In addition, the ASC effort provides high performance computing platform capabilities upon which these codes are run. LLNL remains committed to, and will work with, the national HED/ICF program community to help ensure numerical simulation needs are met and to make those capabilities available, consistent with programmatic priorities and available resources.
Universe creation on a computer
NASA Astrophysics Data System (ADS)
McCabe, Gordon
The purpose of this paper is to provide an account of the epistemology and metaphysics of universe creation on a computer. The paper begins with F.J. Tipler's argument that our experience is indistinguishable from the experience of someone embedded in a perfect computer simulation of our own universe, hence we cannot know whether or not we are part of such a computer program ourselves. Tipler's argument is treated as a special case of epistemological scepticism, in a similar vein to 'brain-in-a-vat' arguments. It is argued that Tipler's hypothesis that our universe is a program running on a digital computer in another universe, generates empirical predictions, and is therefore a falsifiable hypothesis. The computer program hypothesis is also treated as a hypothesis about what exists beyond the physical world, and is compared with Kant's metaphysics of noumena. It is argued that if our universe is a program running on a digital computer, then our universe must have compact spatial topology, and the possibilities of observationally testing this prediction are considered. The possibility of testing the computer program hypothesis with the value of the density parameter Ω0 is also analysed. The informational requirements for a computer to represent a universe exactly and completely are considered. Consequent doubt is thrown upon Tipler's claim that if a hierarchy of computer universes exists, we would not be able to know which 'level of implementation' our universe exists at. It is then argued that a digital computer simulation of a universe, or any other physical system, does not provide a realisation of that universe or system. It is argued that a digital computer simulation of a physical system is not objectively related to that physical system, and therefore cannot exist as anything else other than a physical process occurring upon the components of the computer. It is concluded that Tipler's sceptical hypothesis, and a related hypothesis from Bostrom, cannot be true: it is impossible that our own experience is indistinguishable from the experience of somebody embedded in a digital computer simulation because it is impossible for anybody to be embedded in a digital computer simulation.
NASA Technical Reports Server (NTRS)
Reller, J. O., Jr.
1976-01-01
Data handling, communications, and documentation aspects of the ASSESS mission are described. Most experiments provided their own data handling equipment, although some used the airborne computer for backup, and one experiment required real-time computations. Communications facilities were set up to simulate those to be provided between Spacelab and the ground, including a downlink TV system. Mission documentation was kept to a minimum and proved sufficient. Examples are given of the basic documents of the mission.
NASA Technical Reports Server (NTRS)
Burgin, G. H.; Owens, A. J.
1975-01-01
A detailed description is presented of the computer programs in order to provide an understanding of the mathematical and geometrical relationships as implemented in the programs. The individual sbbroutines and their underlying mathematical relationships are described, and the required input data and the output provided by the program are explained. The relationship of the adaptive maneuvering logic program with the program to drive the differential maneuvering simulator is discussed.
System analysis for the Huntsville Operational Support Center distributed computer system
NASA Technical Reports Server (NTRS)
Ingels, F. M.; Mauldin, J.
1984-01-01
The Huntsville Operations Support Center (HOSC) is a distributed computer system used to provide real-time data acquisition, analysis and display during NASA space missions and to perform simulation and study activities during non-mission times. The primary purpose is to provide a HOSC system simulation model that is used to investigate the effects of various HOSC system configurations. Such a model would be valuable in planning the future growth of HOSC and in ascertaining the effects of data rate variations, update table broadcasting and smart display terminal data requirements on the HOSC HYPERchannel network system. A simulation model was developed in PASCAL and results of the simulation model for various system configurations were obtained. A tutorial on the model is presented and the results of simulation runs are given. Some very high data rate situations were simulated to observe the effects of the HYPERchannel switch over from contention to priority mode under high channel loading.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wong, Michael K.; Davidson, Megan
As part of Sandia’s nuclear deterrence mission, the B61-12 Life Extension Program (LEP) aims to modernize the aging weapon system. Modernization requires requalification and Sandia is using high performance computing to perform advanced computational simulations to better understand, evaluate, and verify weapon system performance in conjunction with limited physical testing. The Nose Bomb Subassembly (NBSA) of the B61-12 is responsible for producing a fuzing signal upon ground impact. The fuzing signal is dependent upon electromechanical impact sensors producing valid electrical fuzing signals at impact. Computer generated models were used to assess the timing between the impact sensor’s response to the deceleration of impact and damage to major components and system subassemblies. The modeling and simulation team worked alongside the physical test team to design a large-scale reverse ballistic test to not only assess system performance, but to also validate their computational models. The reverse ballistic test conducted at Sandia’s sled test facility sent a rocket sled with a representative target into a stationary B61-12 (NBSA) to characterize the nose crush and functional response of NBSA components. Data obtained from data recorders and high-speed photometrics were integrated with previously generated computer models in order to refine and validate the model’s ability to reliably simulate real-world effects. Large-scale tests are impractical to conduct for every single impact scenario. By creating reliable computer models, we can perform simulations that identify trends and produce estimates of outcomes over the entire range of required impact conditions. Sandia’s HPCs enable geometric resolution that was unachievable before, allowing for more fidelity and detail, and creating simulations that can provide insight to support evaluation of requirements and performance margins. As computing resources continue to improve, researchers at Sandia are hoping to improve these simulations so they provide increasingly credible analysis of the system response and performance over the full range of conditions.
Application of computer virtual simulation technology in 3D animation production
NASA Astrophysics Data System (ADS)
Mo, Can
2017-11-01
With the continuous development of computer technology, virtual simulation technology has been further optimized and improved, and it has been widely applied in many fields, such as city construction, interior design, industrial simulation and tourism teaching. This paper mainly introduces the use of virtual simulation technology in 3D animation. Based on an analysis of the characteristics of virtual simulation technology, the ways and means of applying this technology in 3D animation are investigated. The aim is to provide a reference for improving 3D effects in future work.
Integrating interactive computational modeling in biology curricula.
Helikar, Tomáš; Cutucache, Christine E; Dahlquist, Lauren M; Herek, Tyler A; Larson, Joshua J; Rogers, Jim A
2015-03-01
While the use of computer tools to simulate complex processes such as computer circuits is normal practice in fields like engineering, the majority of life sciences/biological sciences courses continue to rely on the traditional textbook and memorization approach. To address this issue, we explored the use of the Cell Collective platform as a novel, interactive, and evolving pedagogical tool to foster student engagement, creativity, and higher-level thinking. Cell Collective is a Web-based platform used to create and simulate dynamical models of various biological processes. Students can create models of cells, diseases, or pathways themselves or explore existing models. This technology was implemented in both undergraduate and graduate courses as a pilot study to determine the feasibility of such software at the university level. First, a new (In Silico Biology) class was developed to enable students to learn biology by "building and breaking it" via computer models and their simulations. This class and technology also provide a non-intimidating way to incorporate mathematical and computational concepts into a class with students who have a limited mathematical background. Second, we used the technology to mediate the use of simulations and modeling modules as a learning tool for traditional biological concepts, such as T cell differentiation or cell cycle regulation, in existing biology courses. Results of this pilot application suggest that there is promise in the use of computational modeling and software tools such as Cell Collective to provide new teaching methods in biology and contribute to the implementation of the "Vision and Change" call to action in undergraduate biology education by providing a hands-on approach to biology.
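Models of the kind students build in Cell Collective are essentially logical (Boolean) networks: each component's next state is a logic function of its regulators, and simulation is repeated synchronous updating. The three-node sketch below illustrates the idea in Python with made-up regulatory logic; it is not an export of any Cell Collective model.

```python
# toy logical model: each component's next state is a logic function of its regulators
rules = {
    "signal":    lambda s: s["signal"],                      # external input, held fixed
    "receptor":  lambda s: s["signal"],                      # activated by the signal
    "inhibitor": lambda s: s["receptor"],                    # delayed negative feedback
    "response":  lambda s: s["receptor"] and not s["inhibitor"],
}

def step(state):
    # synchronous update: evaluate every rule on the current state
    return {node: bool(rule(state)) for node, rule in rules.items()}

state = {"signal": True, "receptor": False, "inhibitor": False, "response": False}
for t in range(6):
    print(t, state)
    state = step(state)   # the response shows a transient pulse, then inhibition
```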
Platform-Independence and Scheduling In a Multi-Threaded Real-Time Simulation
NASA Technical Reports Server (NTRS)
Sugden, Paul P.; Rau, Melissa A.; Kenney, P. Sean
2001-01-01
Aviation research often relies on real-time, pilot-in-the-loop flight simulation as a means to develop new flight software, flight hardware, or pilot procedures. Often these simulations become so complex that a single processor is incapable of performing the necessary computations within a fixed time-step. Threads are an elegant means to distribute the computational work-load when running on a symmetric multi-processor machine. However, programming with threads often requires operating system specific calls that reduce code portability and maintainability. While a multi-threaded simulation allows a significant increase in the simulation complexity, it also increases the workload of a simulation operator by requiring that the operator determine which models run on which thread. To address these concerns an object-oriented design was implemented in the NASA Langley Standard Real-Time Simulation in C++ (LaSRS++) application framework. The design provides a portable and maintainable means to use threads and also provides a mechanism to automatically load balance the simulation models.
Trends in Programming Languages for Neuroscience Simulations
Davison, Andrew P.; Hines, Michael L.; Muller, Eilif
2009-01-01
Neuroscience simulators allow scientists to express models in terms of biological concepts, without having to concern themselves with low-level computational details of their implementation. The expressiveness, power and ease-of-use of the simulator interface is critical in efficiently and accurately translating ideas into a working simulation. We review long-term trends in the development of programmable simulator interfaces, and examine the benefits of moving from proprietary, domain-specific languages to modern dynamic general-purpose languages, in particular Python, which provide neuroscientists with an interactive and expressive simulation development environment and easy access to state-of-the-art general-purpose tools for scientific computing. PMID:20198154
Huntington II Simulation Program - MALAR. Student Workbook, Teacher's Guide, and Resource Handbook.
ERIC Educational Resources Information Center
Friedland, James; Frishman, Austin
Described is the computer model "MALAR" which deals with malaria and its eradication. A computer program allows the tenth- to twelfth-grade student to attempt to control a malaria epidemic. This simulation provides a context within which to study the biological, economic, social, political, and ecological aspects of a classic world health problem.…
Modeling a Wireless Network for International Space Station
NASA Technical Reports Server (NTRS)
Alena, Richard; Yaprak, Ece; Lamouri, Saad
2000-01-01
This paper describes the application of wireless local area network (LAN) simulation modeling methods to the hybrid LAN architecture designed for supporting crew-computing tools aboard the International Space Station (ISS). These crew-computing tools, such as wearable computers and portable advisory systems, will provide crew members with real-time vehicle and payload status information and access to digital technical and scientific libraries, significantly enhancing human capabilities in space. A wireless network, therefore, will provide wearable computer and remote instruments with the high performance computational power needed by next-generation 'intelligent' software applications. Wireless network performance in such simulated environments is characterized by the sustainable throughput of data under different traffic conditions. This data will be used to help plan the addition of more access points supporting new modules and more nodes for increased network capacity as the ISS grows.
Dataflow computing approach in high-speed digital simulation
NASA Technical Reports Server (NTRS)
Ercegovac, M. D.; Karplus, W. J.
1984-01-01
New computational tools and methodologies for the digital simulation of continuous systems were explored, with emphasis on programmability and cost-effective performance in multiprocessor organizations for real-time simulation. The approach is based on functional-style languages and dataflow computing principles, which allow the natural representation of parallelism in algorithms and provide a suitable basis for the design of cost-effective, high-performance distributed systems. The objectives of this research are to: (1) perform a comparative evaluation of several existing dataflow languages and develop an experimental dataflow language suitable for real-time simulation using multiprocessor systems; (2) investigate the main issues that arise in the architecture and organization of dataflow multiprocessors for real-time simulation; and (3) develop and apply performance evaluation models in typical applications.
An efficient formulation of robot arm dynamics for control and computer simulation
NASA Astrophysics Data System (ADS)
Lee, C. S. G.; Nigam, R.
This paper describes an efficient formulation of the dynamic equations of motion of industrial robots based on the Lagrange formulation of d'Alembert's principle. This formulation, as applied to a PUMA robot arm, results in a set of closed form second order differential equations with cross product terms. They are not as efficient in computation as those formulated by the Newton-Euler method, but provide a better analytical model for control analysis and computer simulation. Computational complexities of this dynamic model together with other models are tabulated for discussion.
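As a rough illustration of the kind of closed-form Lagrangian dynamics described above, the following Python sketch evaluates tau = M(q)*qdd + C(q, qd)*qd + G(q) for a planar two-link arm using the standard textbook result; the masses, lengths, and inertias are illustrative assumptions and do not correspond to the PUMA arm analyzed in the paper.

import numpy as np

def two_link_dynamics(q, qd, qdd, m=(1.0, 1.0), l=(0.5, 0.5), lc=(0.25, 0.25),
                      I=(0.02, 0.02), g=9.81):
    """Closed-form joint torques tau = M(q)*qdd + C(q,qd)*qd + G(q) for a
    planar two-link arm (standard textbook Lagrangian result, not the PUMA)."""
    q1, q2 = q
    qd1, qd2 = qd
    m1, m2 = m
    l1, _ = l
    lc1, lc2 = lc
    I1, I2 = I

    # Inertia matrix M(q): configuration-dependent through cos(q2)
    M11 = m1*lc1**2 + m2*(l1**2 + lc2**2 + 2*l1*lc2*np.cos(q2)) + I1 + I2
    M12 = m2*(lc2**2 + l1*lc2*np.cos(q2)) + I2
    M22 = m2*lc2**2 + I2
    M = np.array([[M11, M12], [M12, M22]])

    # Coriolis/centrifugal matrix: the velocity "cross product" terms
    h = -m2*l1*lc2*np.sin(q2)
    C = np.array([[h*qd2, h*(qd1 + qd2)],
                  [-h*qd1, 0.0]])

    # Gravity vector (joint angles measured from the horizontal)
    G = np.array([(m1*lc1 + m2*l1)*g*np.cos(q1) + m2*lc2*g*np.cos(q1 + q2),
                  m2*lc2*g*np.cos(q1 + q2)])

    return M @ np.asarray(qdd, dtype=float) + C @ np.asarray(qd, dtype=float) + G

# Example: torques needed to hold the arm stationary at 30 and 45 degrees
tau = two_link_dynamics(q=np.radians([30.0, 45.0]), qd=[0.0, 0.0], qdd=[0.0, 0.0])
print(tau)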
Flight code validation simulator
NASA Astrophysics Data System (ADS)
Sims, Brent A.
1996-05-01
An End-To-End Simulation capability for software development and validation of missile flight software on the actual embedded computer has been developed utilizing a 486 PC, i860 DSP coprocessor, embedded flight computer and custom dual port memory interface hardware. This system allows real-time interrupt driven embedded flight software development and checkout. The flight software runs in a Sandia Digital Airborne Computer and reads and writes actual hardware sensor locations in which Inertial Measurement Unit data resides. The simulator provides six degree of freedom real-time dynamic simulation, accurate real-time discrete sensor data and acts on commands and discretes from the flight computer. This system was utilized in the development and validation of the successful premier flight of the Digital Miniature Attitude Reference System in January of 1995 at the White Sands Missile Range on a two stage attitude controlled sounding rocket.
Human operator identification model and related computer programs
NASA Technical Reports Server (NTRS)
Kessler, K. M.; Mohr, J. N.
1978-01-01
Four computer programs which provide computational assistance in the analysis of man/machine systems are reported. The programs are: (1) Modified Transfer Function Program (TF); (2) Time Varying Response Program (TVSR); (3) Optimal Simulation Program (TVOPT); and (4) Linear Identification Program (SCIDNT). The TF program converts the time-domain state-variable system representation to a frequency-domain transfer-function representation. The TVSR program computes time histories of the input/output responses of the human operator model. The TVOPT program is an optimal simulation program and is similar to TVSR in that it produces time histories of system states associated with an operator-in-the-loop system. The differences between the two programs are presented. The SCIDNT program is an open loop identification code which operates on the simulated data from TVOPT (or TVSR) or real operator data from motion simulators.
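As a hedged illustration of the state-variable-to-transfer-function conversion attributed to the TF program, the sketch below evaluates H(jw) = C (jwI - A)^(-1) B + D over a set of frequencies for an arbitrary second-order system; the matrices are invented for the example and are not the human operator model of the report.

import numpy as np

def frequency_response(A, B, C, D, omegas):
    """Evaluate H(jw) = C (jwI - A)^{-1} B + D, i.e. the frequency-domain view
    of a time-domain state-variable model, at each frequency in omegas."""
    A, B, C, D = map(np.atleast_2d, (A, B, C, D))
    n = A.shape[0]
    H = []
    for w in omegas:
        H.append(C @ np.linalg.solve(1j * w * np.eye(n) - A, B) + D)
    return np.array(H)

# Illustrative lightly damped second-order system (not the report's model)
A = np.array([[0.0, 1.0], [-4.0, -0.8]])
B = np.array([[0.0], [4.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

omegas = np.logspace(-1, 1, 5)
print(np.abs(frequency_response(A, B, C, D, omegas)).squeeze())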
Liu, Jun; Zhang, Liqun; Cao, Dapeng; Wang, Wenchuan
2009-12-28
Polymer nanocomposites (PNCs) often exhibit excellent mechanical, thermal, electrical and optical properties, because they combine the performances of both polymers and inorganic or organic nanoparticles. Recently, computer modeling and simulation are playing an important role in exploring the reinforcement mechanism of the PNCs and even the design of functional PNCs. This report provides an overview of the progress made in past decades in the investigation of the static, rheological and mechanical properties of polymer nanocomposites studied by computer modeling and simulation. Emphases are placed on exploring the mechanisms at the molecular level for the dispersion of nanoparticles in nanocomposites, the effects of nanoparticles on chain conformation and glass transition temperature (T(g)), as well as viscoelastic and mechanical properties. Finally, some future challenges and opportunities in computer modeling and simulation of PNCs are addressed.
Efficient parallelization of analytic bond-order potentials for large-scale atomistic simulations
NASA Astrophysics Data System (ADS)
Teijeiro, C.; Hammerschmidt, T.; Drautz, R.; Sutmann, G.
2016-07-01
Analytic bond-order potentials (BOPs) provide a way to compute atomistic properties with controllable accuracy. For large-scale computations of heterogeneous compounds at the atomistic level, both the computational efficiency and memory demand of BOP implementations have to be optimized. Since the evaluation of BOPs is a local operation within a finite environment, the parallelization concepts known from short-range interacting particle simulations can be applied to improve the performance of these simulations. In this work, several efficient parallelization methods for BOPs that use three-dimensional domain decomposition schemes are described. The schemes are implemented into the bond-order potential code BOPfox, and their performance is measured in a series of benchmarks. Systems of up to several millions of atoms are simulated on a high performance computing system, and parallel scaling is demonstrated for up to thousands of processors.
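The locality that such parallelization exploits can be illustrated with a serial cell-list (linked-cell) neighbor search, sketched below in Python; the actual BOPfox implementation uses MPI-based three-dimensional domain decomposition, which is not shown, and the box size, cutoff, and atom count here are arbitrary.

import numpy as np
from collections import defaultdict

def cell_list_pairs(positions, box, rcut):
    """Linked-cell neighbor search: because the interaction is short-ranged,
    each atom only needs testing against atoms in its own and the 26 adjacent
    cells -- the same locality a 3D domain decomposition exploits in parallel."""
    ncell = np.maximum((box // rcut).astype(int), 1)
    cell_size = box / ncell
    cells = defaultdict(list)
    for i, r in enumerate(positions):
        cells[tuple((r // cell_size).astype(int) % ncell)].append(i)

    pairs = []
    offsets = [(dx, dy, dz) for dx in (-1, 0, 1) for dy in (-1, 0, 1) for dz in (-1, 0, 1)]
    for c, atoms in cells.items():
        for off in offsets:
            nb = tuple((np.array(c) + off) % ncell)
            for i in atoms:
                for j in cells.get(nb, []):
                    if j <= i:
                        continue
                    d = positions[j] - positions[i]
                    d -= box * np.round(d / box)   # minimum-image periodic boundaries
                    if np.dot(d, d) < rcut**2:
                        pairs.append((i, j))
    return pairs

rng = np.random.default_rng(0)
box = np.array([10.0, 10.0, 10.0])
pos = rng.uniform(0.0, 10.0, size=(200, 3))
print("neighbor pairs within cutoff:", len(cell_list_pairs(pos, box, rcut=2.0)))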
AceCloud: Molecular Dynamics Simulations in the Cloud.
Harvey, M J; De Fabritiis, G
2015-05-26
We present AceCloud, an on-demand service for molecular dynamics simulations. AceCloud is designed to facilitate the secure execution of large ensembles of simulations on an external cloud computing service (currently Amazon Web Services). The AceCloud client, integrated into the ACEMD molecular dynamics package, provides an easy-to-use interface that abstracts all aspects of interaction with the cloud services. This gives the user the experience that all simulations are running on their local machine, minimizing the learning curve typically associated with the transition to using high performance computing services.
Monte Carlo Methodology Serves Up a Software Success
NASA Technical Reports Server (NTRS)
2003-01-01
Widely used for the modeling of gas flows through the computation of the motion and collisions of representative molecules, the Direct Simulation Monte Carlo method has become the gold standard for producing research and engineering predictions in the field of rarefied gas dynamics. Direct Simulation Monte Carlo was first introduced in the early 1960s by Dr. Graeme Bird, a professor at the University of Sydney, Australia. It has since proved to be a valuable tool to the aerospace and defense industries in providing design and operational support data, as well as flight data analysis. In 2002, NASA brought to the forefront a software product that maintains the same basic physics formulation of Dr. Bird's method, but provides effective modeling of complex, three-dimensional, real vehicle simulations and parallel processing capabilities to handle additional computational requirements, especially in areas where computational fluid dynamics (CFD) is not applicable. NASA's Direct Simulation Monte Carlo Analysis Code (DAC) software package is now considered the Agency s premier high-fidelity simulation tool for predicting vehicle aerodynamics and aerothermodynamic environments in rarified, or low-density, gas flows.
Computational Evaluation of the Traceback Method
ERIC Educational Resources Information Center
Kol, Sheli; Nir, Bracha; Wintner, Shuly
2014-01-01
Several models of language acquisition have emerged in recent years that rely on computational algorithms for simulation and evaluation. Computational models are formal and precise, and can thus provide mathematically well-motivated insights into the process of language acquisition. Such models are amenable to robust computational evaluation,…
NASA Astrophysics Data System (ADS)
Shegog, Ross; Lazarus, Melanie M.; Murray, Nancy G.; Diamond, Pamela M.; Sessions, Nathalie; Zsigmond, Eva
2012-10-01
The transgenic mouse model is useful for studying the causes and potential cures for human genetic diseases. Exposing high school biology students to laboratory experience in developing transgenic animal models is logistically prohibitive. Computer-based simulation, however, offers this potential in addition to advantages of fidelity and reach. This study describes and evaluates a computer-based simulation that trains advanced placement high school science students in the laboratory protocols used to produce a transgenic mouse model. A simulation module on preparing a gene construct in the molecular biology lab was evaluated using a randomized clinical control design with advanced placement high school biology students in Mercedes, Texas (n = 44). Pre-post tests assessed procedural and declarative knowledge, time on task, and attitudes toward computers for learning and toward science careers. Students who used the simulation increased their procedural and declarative knowledge regarding molecular biology compared to those in the control condition (both p < 0.005). Significant increases continued to occur with additional use of the simulation (p < 0.001). Students in the treatment group became more positive toward using computers for learning (p < 0.001). The simulation did not significantly affect attitudes toward science in general. Computer simulations of complex transgenic protocols have the potential to provide a "virtual" laboratory experience as an adjunct to conventional educational approaches.
The employment of a spoken language computer applied to an air traffic control task.
NASA Technical Reports Server (NTRS)
Laveson, J. I.; Silver, C. A.
1972-01-01
Assessment of the merits of a limited spoken language (56 words) computer in a simulated air traffic control (ATC) task. An airport zone approximately 60 miles in diameter with a traffic flow simulation ranging from single-engine to commercial jet aircraft provided the workload for the controllers. This research determined that, under the circumstances of the experiments carried out, the use of a spoken-language computer would not improve the controller performance.
NASA Astrophysics Data System (ADS)
Gonczi, Amanda L.; Chiu, Jennifer L.; Maeng, Jennifer L.; Bell, Randy L.
2016-07-01
This investigation sought to identify patterns in elementary science teachers' computer simulation use, particularly implementation structures and instructional supports commonly employed by teachers. Data included video-recorded science lessons of 96 elementary teachers who used computer simulations in one or more science lessons. Results indicated teachers used a one-to-one student-to-computer ratio most often either during class-wide individual computer use or during a rotating station structure. Worksheets, general support, and peer collaboration were the most common forms of instructional support. The least common instructional support forms included lesson pacing, initial play, and a closure discussion. Students' simulation use was supported in the fewest ways during a rotating station structure. Results suggest that simulation professional development with elementary teachers needs to explicitly focus on implementation structures and instructional support to enhance participants' pedagogical knowledge and improve instructional simulation use. In addition, research is needed to provide theoretical explanations for the observed patterns that should subsequently be addressed in supporting teachers' instructional simulation use during professional development or in teacher preparation programs.
Advanced Techniques for Simulating the Behavior of Sand
NASA Astrophysics Data System (ADS)
Clothier, M.; Bailey, M.
2009-12-01
Computer graphics and visualization techniques continue to provide untapped research opportunities, particularly when working with earth science disciplines. Through collaboration with the Oregon Space Grant and IGERT Ecosystem Informatics programs we are developing new techniques for simulating sand. In addition, through collaboration with the Oregon Space Grant, we’ve been communicating with the Jet Propulsion Laboratory (JPL) to exchange ideas and gain feedback on our work. More specifically, JPL’s DARTS Laboratory specializes in planetary vehicle simulation, such as the Mars rovers. This simulation utilizes a virtual "sand box" to test how planetary rovers respond to different terrains while traversing them. Unfortunately, this simulation is unable to fully mimic the harsh, sandy environments of those found on Mars. Ideally, these simulations should allow a rover to interact with the sand beneath it, particularly for different sand granularities and densities. In particular, there may be situations where a rover may become stuck in sand due to lack of friction between the sand and wheels. In fact, in May 2009, the Spirit rover became stuck in the Martian sand and has provided additional motivation for this research. In order to develop a new sand simulation model, high performance computing will play a very important role in this work. More specifically, graphics processing units (GPUs) are useful due to their ability to run general purpose algorithms and ability to perform massively parallel computations. In prior research, simulating vast quantities of sand has been difficult to compute in real-time due to the computational complexity of many colliding particles. With the use of GPUs however, each particle collision will be parallelized, allowing for a dramatic performance increase. In addition, spatial partitioning will also provide a speed boost as this will help limit the number of particle collision calculations. However, since the goal of this research is to simulate the look and behavior of sand, this work will go beyond simple particle collision. In particular, we can continue to use our parallel algorithms not only on single particles but on particle “clumps” that consist of multiple combined particles. Since sand is typically not spherical in nature, these particle “clumps” help to simulate the coarse nature of sand. In a simulation environment, multiple combined particles could be used to simulate the polygonal and granular nature of sand grains. Thus, a diversity of sand particles can be generated. The interaction between these particles can then be parallelized using GPU hardware. As such, this research will investigate different graphics and physics techniques and determine the tradeoffs in performance and visual quality for sand simulation. An enhanced sand model through the use of high performance computing and GPUs has great potential to impact research for both earth and space scientists. Interaction with JPL has provided an opportunity for us to refine our simulation techniques that can ultimately be used for their vehicle simulator. As an added benefit of this work, advancements in simulating sand can also benefit scientists here on earth, especially in regard to understanding landslides and debris flows.
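A minimal sketch of the particle-clump idea mentioned above: each grain is approximated by three overlapping spheres on a rigid axis, and two grains are in contact if any of their spheres overlap. The shapes, radii, and counts are illustrative assumptions, and the exhaustive pair loop stands in for the GPU-parallel, spatially partitioned collision tests described in the abstract.

import numpy as np

def make_clump(center, angle, spacing=0.04, radius=0.05):
    """A sand grain approximated as a rigid 'clump' of three overlapping
    spheres along a random axis, giving a coarse, non-spherical shape."""
    axis = np.array([np.cos(angle), np.sin(angle), 0.0])
    offsets = np.array([-spacing, 0.0, spacing])[:, None] * axis
    return center + offsets, radius

def clumps_collide(c1, c2):
    """Two clumps collide if any sphere of one overlaps any sphere of the other.
    On a GPU each sphere-sphere test could be evaluated by its own thread."""
    (p1, r1), (p2, r2) = c1, c2
    d2 = ((p1[:, None, :] - p2[None, :, :]) ** 2).sum(-1)
    return bool(np.any(d2 < (r1 + r2) ** 2))

rng = np.random.default_rng(1)
centers = rng.uniform(0.0, 1.0, size=(100, 3))
angles = rng.uniform(0.0, np.pi, size=100)
clumps = [make_clump(c, a) for c, a in zip(centers, angles)]

contacts = sum(clumps_collide(clumps[i], clumps[j])
               for i in range(len(clumps)) for j in range(i + 1, len(clumps)))
print("contacting grain pairs:", contacts)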
The Numerical Propulsion System Simulation: An Overview
NASA Technical Reports Server (NTRS)
Lytle, John K.
2000-01-01
Advances in computational technology and in physics-based modeling are making large-scale, detailed simulations of complex systems possible within the design environment. For example, the integration of computing, communications, and aerodynamics has reduced the time required to analyze major propulsion system components from days and weeks to minutes and hours. This breakthrough has enabled the detailed simulation of major propulsion system components to become a routine part of designing systems, providing the designer with critical information about the components early in the design process. This paper describes the development of the numerical propulsion system simulation (NPSS), a modular and extensible framework for the integration of multicomponent and multidisciplinary analysis tools using geographically distributed resources such as computing platforms, data bases, and people. The analysis is currently focused on large-scale modeling of complete aircraft engines. This will provide the product developer with a "virtual wind tunnel" that will reduce the number of hardware builds and tests required during the development of advanced aerospace propulsion systems.
Active and Cooperative Learning Using Web-Based Simulations.
ERIC Educational Resources Information Center
Schmidt, Stephen J.
2003-01-01
Cites advantages of using computers and the World Wide Web in classroom simulations. Provides a sample simulation that teaches the basic economic principles of trade, investment, and public goods in the context of U.S. economic history. (JEH)
Application of the rapid update cycle (RUC) to aircraft flight simulation.
DOT National Transportation Integrated Search
2008-01-01
An aircraft flight simulation model under development aims : to provide a computer simulation tool to investigate aircraft flight : performance during en route flight and landing under various : atmospherical conditions [1]. Within this model, the ai...
RESTSIM: A Simulation Model That Highlights Decision Making under Conditions of Uncertainty.
ERIC Educational Resources Information Center
Zinkhan, George M.; Taylor, James R.
1983-01-01
Describes RESTSIM, an interactive computer simulation program for graduate and upper-level undergraduate management, marketing, and retailing courses, which introduces naive users to simulation as a decision support technique, and provides a vehicle for studying various statistical procedures for evaluating simulation output. (MBR)
Computer Simulations Improve University Instructional Laboratories
2004-01-01
Laboratory classes are commonplace and essential in biology departments but can sometimes be cumbersome, unreliable, and a drain on time and resources. As university intakes increase, pressure on budgets and staff time can often lead to reduction in practical class provision. Frequently, the ability to use laboratory equipment, mix solutions, and manipulate test animals are essential learning outcomes, and “wet” laboratory classes are thus appropriate. In others, however, interpretation and manipulation of the data are the primary learning outcomes, and here, computer-based simulations can provide a cheaper, easier, and less time- and labor-intensive alternative. We report the evaluation of two computer-based simulations of practical exercises: the first in chromosome analysis, the second in bioinformatics. Simulations can provide significant time savings to students (by a factor of four in our first case study) without affecting learning, as measured by performance in assessment. Moreover, under certain circumstances, performance can be improved by the use of simulations (by 7% in our second case study). We concluded that the introduction of these simulations can significantly enhance student learning where consideration of the learning outcomes indicates that it might be appropriate. In addition, they can offer significant benefits to teaching staff. PMID:15592599
Stone, John E.; Hallock, Michael J.; Phillips, James C.; Peterson, Joseph R.; Luthey-Schulten, Zaida; Schulten, Klaus
2016-01-01
Many of the continuing scientific advances achieved through computational biology are predicated on the availability of ongoing increases in computational power required for detailed simulation and analysis of cellular processes on biologically-relevant timescales. A critical challenge facing the development of future exascale supercomputer systems is the development of new computing hardware and associated scientific applications that dramatically improve upon the energy efficiency of existing solutions, while providing increased simulation, analysis, and visualization performance. Mobile computing platforms have recently become powerful enough to support interactive molecular visualization tasks that were previously only possible on laptops and workstations, creating future opportunities for their convenient use for meetings, remote collaboration, and as head mounted displays for immersive stereoscopic viewing. We describe early experiences adapting several biomolecular simulation and analysis applications for emerging heterogeneous computing platforms that combine power-efficient system-on-chip multi-core CPUs with high-performance massively parallel GPUs. We present low-cost power monitoring instrumentation that provides sufficient temporal resolution to evaluate the power consumption of individual CPU algorithms and GPU kernels. We compare the performance and energy efficiency of scientific applications running on emerging platforms with results obtained on traditional platforms, identify hardware and algorithmic performance bottlenecks that affect the usability of these platforms, and describe avenues for improving both the hardware and applications in pursuit of the needs of molecular modeling tasks on mobile devices and future exascale computers. PMID:27516922
NASA Technical Reports Server (NTRS)
Reed, John A.; Afjeh, Abdollah A.
1995-01-01
A major difficulty in designing aeropropulsion systems is that of identifying and understanding the interactions between the separate engine components and disciplines (e.g., fluid mechanics, structural mechanics, heat transfer, material properties, etc.). The traditional analysis approach is to decompose the system into separate components with the interaction between components being evaluated by the application of each of the single disciplines in a sequential manner. Here, one discipline uses information from the calculation of another discipline to determine the effects of component coupling. This approach, however, may not properly identify the consequences of these effects during the design phase, leaving the interactions to be discovered and evaluated during engine testing. This contributes to the time and cost of developing new propulsion systems as, typically, several design-build-test cycles are needed to fully identify multidisciplinary effects and reach the desired system performance. The alternative to sequential isolated component analysis is to use multidisciplinary coupling at a more fundamental level. This approach has been made more plausible due to recent advancements in computational simulation along with application of concurrent engineering concepts. Computer simulation systems designed to provide an environment which is capable of integrating the various disciplines into a single simulation system have been proposed and are currently being developed. One such system is being developed by the Numerical Propulsion System Simulation (NPSS) project. The NPSS project, being developed at the Interdisciplinary Technology Office at the NASA Lewis Research Center, is a 'numerical test cell' designed to provide for comprehensive computational design and analysis of aerospace propulsion systems. It will provide multi-disciplinary analyses on a variety of computational platforms, and a user-interface consisting of expert systems, data base management and visualization tools, to allow the designer to investigate the complex interactions inherent in these systems. An interactive programming software system, known as the Application Visualization System (AVS), was utilized for the development of the propulsion system simulation. The modularity of this system provides the ability to couple propulsion system components, as well as disciplines, and provides for the ability to integrate existing, well established analysis codes into the overall system simulation. This feature allows the user to customize the simulation model by inserting desired analysis codes. The prototypical simulation environment for multidisciplinary analysis, called Turbofan Engine System Simulation (TESS), which incorporates many of the characteristics of the simulation environment proposed herein, is detailed.
NASA Technical Reports Server (NTRS)
Tweedt, Daniel L.
2014-01-01
Computational Aerodynamic simulations of a 1215 ft/sec tip speed transonic fan system were performed at five different operating points on the fan operating line, in order to provide detailed internal flow field information for use with fan acoustic prediction methods presently being developed, assessed and validated. The fan system is a sub-scale, low-noise research fan/nacelle model that has undergone extensive experimental testing in the 9- by 15-foot Low Speed Wind Tunnel at the NASA Glenn Research Center. Details of the fan geometry, the computational fluid dynamics methods, the computational grids, and various computational parameters relevant to the numerical simulations are discussed. Flow field results for three of the five operating points simulated are presented in order to provide a representative look at the computed solutions. Each of the five fan aerodynamic simulations involved the entire fan system, which for this model did not include a split flow path with core and bypass ducts. As a result, it was only necessary to adjust fan rotational speed in order to set the fan operating point, leading to operating points that lie on a fan operating line and making mass flow rate a fully dependent parameter. The resulting mass flow rates are in good agreement with measurement values. Computed blade row flow fields at all fan operating points are, in general, aerodynamically healthy. Rotor blade and fan exit guide vane flow characteristics are good, including incidence and deviation angles, chordwise static pressure distributions, blade surface boundary layers, secondary flow structures, and blade wakes. Examination of the flow fields at all operating conditions reveals no excessive boundary layer separations or related secondary-flow problems.
Computer simulation of surface and film processes
NASA Technical Reports Server (NTRS)
Tiller, W. A.; Halicioglu, M. T.
1983-01-01
Adequate computer methods, based on interactions between discrete particles, provide information leading to an atomic level understanding of various physical processes. The success of these simulation methods, however, is related to the accuracy of the potential energy function representing the interactions among the particles. The development of a potential energy function for crystalline SiO2 forms that can be employed in lengthy computer modelling procedures was investigated. In many of the simulation methods which deal with discrete particles, semiempirical two body potentials were employed to analyze energy and structure related properties of the system. Many body interactions are required for a proper representation of the total energy for many systems. Many body interactions for simulations based on discrete particles are discussed.
Halloran, J. P.; Sibole, S.; van Donkelaar, C. C.; van Turnhout, M. C.; Oomens, C. W. J.; Weiss, J. A.; Guilak, F.; Erdemir, A.
2012-01-01
Articular cartilage experiences significant mechanical loads during daily activities. Healthy cartilage provides the capacity for load bearing and regulates the mechanobiological processes for tissue development, maintenance, and repair. Experimental studies at multiple scales have provided a fundamental understanding of macroscopic mechanical function, evaluation of the micromechanical environment of chondrocytes, and the foundations for mechanobiological response. In addition, computational models of cartilage have offered a concise description of experimental data at many spatial levels under healthy and diseased conditions, and have served to generate hypotheses for the mechanical and biological function. Further, modeling and simulation provides a platform for predictive risk assessment, management of dysfunction, as well as a means to relate multiple spatial scales. Simulation-based investigation of cartilage comes with many challenges including both the computational burden and often insufficient availability of data for model development and validation. This review outlines recent modeling and simulation approaches to understand cartilage function from a mechanical systems perspective, and illustrates pathways to associate mechanics with biological function. Computational representations at single scales are provided from the body down to the microstructure, along with attempts to explore multiscale mechanisms of load sharing that dictate the mechanical environment of the cartilage and chondrocytes. PMID:22648577
NASA Astrophysics Data System (ADS)
Alsadoon, Abeer; Prasad, P. W. C.; Beg, Azam
2017-09-01
Helping students understand the theoretical concepts of digital logic design is one of the major challenges faced by academics; teachers have therefore tried different techniques to link theoretical information to practical knowledge. The use of software simulations is a technique for learning and practice that can be applied to many different disciplines. Experimenting with different computer hardware components and integrated circuits through simulators enhances student learning. The simulators can be rather simplistic or quite complex. This paper reports our evaluation of different simulators available for use in higher education institutions. We also describe our experience incorporating selected tools into introductory courses in computer systems. We assessed the effectiveness of incorporating the simulators into these courses using a student survey and final grade results.
A Cloud-Based Simulation Architecture for Pandemic Influenza Simulation
Eriksson, Henrik; Raciti, Massimiliano; Basile, Maurizio; Cunsolo, Alessandro; Fröberg, Anders; Leifler, Ola; Ekberg, Joakim; Timpka, Toomas
2011-01-01
High-fidelity simulations of pandemic outbreaks are resource consuming. Cluster-based solutions have been suggested for executing such complex computations. We present a cloud-based simulation architecture that utilizes computing resources both locally available and dynamically rented online. The approach uses the Condor framework for job distribution and management of the Amazon Elastic Computing Cloud (EC2) as well as local resources. The architecture has a web-based user interface that allows users to monitor and control simulation execution. In a benchmark test, the best cost-adjusted performance was recorded for the EC2 H-CPU Medium instance, while a field trial showed that the job configuration had significant influence on the execution time and that the network capacity of the master node could become a bottleneck. We conclude that it is possible to develop a scalable simulation environment that uses cloud-based solutions, while providing an easy-to-use graphical user interface. PMID:22195089
Argonne simulation framework for intelligent transportation systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ewing, T.; Doss, E.; Hanebutte, U.
1996-04-01
A simulation framework has been developed which defines a high-level architecture for a large-scale, comprehensive, scalable simulation of an Intelligent Transportation System (ITS). The simulator is designed to run on parallel computers and distributed (networked) computer systems; however, a version for a stand-alone workstation is also available. The ITS simulator includes an Expert Driver Model (EDM) of instrumented "smart" vehicles with in-vehicle navigation units. The EDM is capable of performing optimal route planning and communicating with Traffic Management Centers (TMC). A dynamic road map data base is used for optimum route planning, where the data is updated periodically to reflect any changes in road or weather conditions. The TMC has probe vehicle tracking capabilities (display position and attributes of instrumented vehicles), and can provide 2-way interaction with traffic to provide advisories and link times. Both the in-vehicle navigation module and the TMC feature detailed graphical user interfaces that include human-factors studies to support safety and operational research. Realistic modeling of variations of the posted driving speed is based on human factor studies that take into consideration weather, road conditions, driver's personality and behavior, and vehicle type. The simulator has been developed on a distributed system of networked UNIX computers, but is designed to run on ANL's IBM SP-X parallel computer system for large scale problems. A novel feature of the developed simulator is that vehicles will be represented by autonomous computer processes, each with a behavior model which performs independent route selection and reacts to external traffic events much like real vehicles. Vehicle processes interact with each other and with ITS components by exchanging messages. With this approach, one will be able to take advantage of emerging massively parallel processor (MPP) systems.
Computer program for a four-cylinder-Stirling-engine controls simulation
NASA Technical Reports Server (NTRS)
Daniels, C. J.; Lorenzo, C. F.
1982-01-01
A transient engine simulation computer program for a four-cylinder Stirling engine is presented. The program is intended for controls analysis. The associated engine model was simplified to shorten computer calculation time. The model includes engine mechanical drive dynamics and vehicle load effects. The computer program also includes subroutines that allow: (1) acceleration of the engine by addition of hydrogen to the system, and (2) braking of the engine by short-circuiting of the working spaces. Subroutines to calculate degraded engine performance (e.g., due to piston ring and piston rod leakage) are provided. Input data required to run the program are described and flow charts are provided. The program is modular to allow easy modification of individual routines. Examples of steady state and transient results are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCoy, Michel; Archer, Bill; Hendrickson, Bruce
The Stockpile Stewardship Program (SSP) is an integrated technical program for maintaining the safety, surety, and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational capabilities to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources that support annual stockpile assessment and certification, study advanced nuclear weapons design and manufacturing processes, analyze accident scenarios and weapons aging, and provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balance of resources, including technical staff, hardware, simulation software, and computer science solutions. ASC is now focused on increasing predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (sufficient resolution, dimensionality, and scientific details), and quantifying critical margins and uncertainties. Resolving each issue requires increasingly difficult analyses because the aging process has progressively moved the stockpile further away from the original test base. Where possible, the program also enables the use of high performance computing (HPC) and simulation tools to address broader national security needs, such as foreign nuclear weapon assessments and counter nuclear terrorism.
Interactive Simulated Patient: Experiences with Collaborative E-Learning in Medicine
ERIC Educational Resources Information Center
Bergin, Rolf; Youngblood, Patricia; Ayers, Mary K.; Boberg, Jonas; Bolander, Klara; Courteille, Olivier; Dev, Parvati; Hindbeck, Hans; Edward, Leonard E., II; Stringer, Jennifer R.; Thalme, Anders; Fors, Uno G. H.
2003-01-01
Interactive Simulated Patient (ISP) is a computer-based simulation tool designed to provide medical students with the opportunity to practice their clinical problem solving skills. The ISP system allows students to perform most clinical decision-making procedures in a simulated environment, including history taking in natural language, many…
Computer simulation of a single pilot flying a modern high-performance helicopter
NASA Technical Reports Server (NTRS)
Zipf, Mark E.; Vogt, William G.; Mickle, Marlin H.; Hoelzeman, Ronald G.; Kai, Fei; Mihaloew, James R.
1988-01-01
Presented is a computer simulation of a human response pilot model able to execute operational flight maneuvers and vehicle stabilization of a modern high-performance helicopter. Low-order, single-variable, human response mechanisms, integrated to form a multivariable pilot structure, provide a comprehensive operational control over the vehicle. Evaluations of the integrated pilot were performed by direct insertion into a nonlinear, total-force simulation environment provided by NASA Lewis. Comparisons between the integrated pilot structure and single-variable pilot mechanisms are presented. Static and dynamically alterable configurations of the pilot structure are introduced to simulate pilot activities during vehicle maneuvers. These configurations, in conjunction with higher level, decision-making processes, are considered for use where guidance and navigational procedures, operational mode transfers, and resource sharing are required.
A direct-execution parallel architecture for the Advanced Continuous Simulation Language (ACSL)
NASA Technical Reports Server (NTRS)
Carroll, Chester C.; Owen, Jeffrey E.
1988-01-01
A direct-execution parallel architecture for the Advanced Continuous Simulation Language (ACSL) is presented which overcomes the traditional disadvantages of simulations executed on a digital computer. The incorporation of parallel processing allows the mapping of simulations into a digital computer to be done in the same inherently parallel manner as they are currently mapped onto an analog computer. The direct-execution format maximizes the efficiency of the executed code since the need for a high level language compiler is eliminated. Resolution is greatly increased over that which is available with an analog computer without the sacrifice in execution speed normally expected with digital computer simulations. Although this report covers all aspects of the new architecture, key emphasis is placed on the processing element configuration and the microprogramming of the ACSL constructs. The execution times for all ACSL constructs are computed using a model of a processing element based on the AMD 29000 CPU and the AMD 29027 FPU. The increase in execution speed provided by parallel processing is exemplified by comparing the derived execution times of two ACSL programs with the execution times for the same programs executed on a similar sequential architecture.
Toward real-time Monte Carlo simulation using a commercial cloud computing infrastructure
NASA Astrophysics Data System (ADS)
Wang, Henry; Ma, Yunzhi; Pratx, Guillem; Xing, Lei
2011-09-01
Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in a heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach for high performance computing and is implemented to perform ultra-fast MC calculation in radiation therapy. We deployed the EGS5 MC package in a commercial cloud environment. Launched from a single local computer with Internet access, a Python script allocates a remote virtual cluster. A handshaking protocol designates master and worker nodes. The EGS5 binaries and the simulation data are initially loaded onto the master node. The simulation is then distributed among independent worker nodes via the message passing interface, and the results aggregated on the local computer for display and data analysis. The described approach is evaluated for pencil beams and broad beams of high-energy electrons and photons. The output of cloud-based MC simulation is identical to that produced by single-threaded implementation. For 1 million electrons, a simulation that takes 2.58 h on a local computer can be executed in 3.3 min on the cloud with 100 nodes, a 47× speed-up. Simulation time scales inversely with the number of parallel nodes. The parallelization overhead is also negligible for large simulations. Cloud computing represents one of the most important recent advances in supercomputing technology and provides a promising platform for substantially improved MC simulation. In addition to the significant speed up, cloud computing builds a layer of abstraction for high performance parallel computing, which may change the way dose calculations are performed and radiation treatment plans are completed. This work was presented in part at the 2010 Annual Meeting of the American Association of Physicists in Medicine (AAPM), Philadelphia, PA.
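The split-and-aggregate pattern described above can be sketched with a toy Monte Carlo problem: N photon histories are divided among worker processes and the partial tallies are summed, with Python's multiprocessing standing in for the cloud worker nodes; the attenuation coefficient, slab thickness, and batch sizes are illustrative, and this is not the EGS5/MPI implementation.

import numpy as np
from multiprocessing import Pool

MU = 0.2          # linear attenuation coefficient (1/cm), illustrative
THICKNESS = 5.0   # slab thickness (cm), illustrative

def run_histories(args):
    """Worker task: simulate a batch of photon histories and return how many
    cross the slab without a collision (free paths drawn from an exponential)."""
    n, seed = args
    rng = np.random.default_rng(seed)
    path_lengths = rng.exponential(1.0 / MU, size=n)
    return int(np.sum(path_lengths > THICKNESS))

if __name__ == "__main__":
    n_total, n_workers = 1_000_000, 4
    batches = [(n_total // n_workers, seed) for seed in range(n_workers)]
    with Pool(n_workers) as pool:
        transmitted = sum(pool.map(run_histories, batches))   # aggregate tallies
    print("transmitted fraction :", transmitted / n_total)
    print("analytic exp(-mu*t)  :", np.exp(-MU * THICKNESS))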
NASA Technical Reports Server (NTRS)
1984-01-01
The Spacelab Payload Development Support System (PDSS) Image Motion Compensator (IMC) computer interface simulation (CIS) user's manual is given. The software provides a real-time interface simulation for the following IMC subsystems: the Dry Rotor Reference Unit, the Advanced Star/Target Reference Optical sensor, the Ultraviolet Imaging Telescope, the Wisconsin Ultraviolet Photopolarimetry Experiment, the Cruciform Power Distributor, and the Spacelab Experiment Computer Operating System.
NASA Technical Reports Server (NTRS)
Madore, B. F.; Freedman, W. L.
1994-01-01
Based on both empirical data for nearby galaxies, and on computer simulations, we show that measuring the position of the tip of the first-ascent red-giant branch (TRGB) provides a means of obtaining the distances to nearby galaxies with a precision and accuracy comparable to using Cepheids and/or RR Lyrae variables.
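A common way to locate the TRGB is to apply an edge filter to the binned I-band luminosity function; the sketch below does this on synthetic magnitudes with an assumed tip at I = 24.0 and is not the authors' procedure or data.

import numpy as np

def detect_trgb(magnitudes, bin_width=0.05):
    """Locate the TRGB as the largest jump in the binned luminosity function,
    i.e. a simple two-point edge filter applied to the star counts."""
    bins = np.arange(magnitudes.min(), magnitudes.max() + bin_width, bin_width)
    counts, edges = np.histogram(magnitudes, bins=bins)
    rise = np.diff(counts)            # counts[i+1] - counts[i], rise toward faint
    i = int(np.argmax(rise))
    return edges[i + 1]               # bin boundary where the counts jump up

# Synthetic example: a sparse brighter (AGB) population above an assumed tip,
# and a much richer RGB population just below it
rng = np.random.default_rng(2)
tip = 24.0
rgb = rng.uniform(tip, tip + 2.0, size=5000)
agb = rng.uniform(tip - 1.5, tip, size=300)
stars = np.concatenate([rgb, agb])
print("recovered tip magnitude:", round(float(detect_trgb(stars)), 2))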
ERIC Educational Resources Information Center
Shifflet, Mark; Brown, Jane
2006-01-01
The purpose of this study was to investigate how exposure to classroom instruction affected the use of a computer simulation that was designed to provide students an opportunity to apply material presented in class. The study involved an analysis of a computer-based crisis communication case study designed for a college-level public relations…
NASA Technical Reports Server (NTRS)
Frank, Andreas O.; Twombly, I. Alexander; Barth, Timothy J.; Smith, Jeffrey D.; Dalton, Bonnie P. (Technical Monitor)
2001-01-01
We have applied the linear elastic finite element method to compute haptic force feedback and domain deformations of soft tissue models for use in virtual reality simulators. Our results show that, for virtual object models of high-resolution 3D data (>10,000 nodes), haptic real time computations (>500 Hz) are not currently possible using traditional methods. Current research efforts are focused in the following areas: 1) efficient implementation of fully adaptive multi-resolution methods and 2) multi-resolution methods with specialized basis functions to capture the singularity at the haptic interface (point loading). To achieve real time computations, we propose parallel processing of a Jacobi preconditioned conjugate gradient method applied to a reduced system of equations resulting from surface domain decomposition. This can effectively be achieved using reconfigurable computing systems such as field programmable gate arrays (FPGA), thereby providing a flexible solution that allows for new FPGA implementations as improved algorithms become available. The resulting soft tissue simulation system would meet NASA Virtual Glovebox requirements and, at the same time, provide a generalized simulation engine for any immersive environment application, such as biomedical/surgical procedures or interactive scientific applications.
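A minimal serial sketch of the Jacobi-preconditioned conjugate gradient solver proposed above, applied to a small random symmetric positive-definite system standing in for a reduced stiffness matrix; the FPGA/parallel decomposition aspects are not represented.

import numpy as np

def jacobi_pcg(A, b, tol=1e-8, max_iter=1000):
    """Conjugate gradient with a Jacobi (diagonal) preconditioner; A must be
    symmetric positive definite."""
    M_inv = 1.0 / np.diag(A)          # Jacobi preconditioner: inverse diagonal
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Small SPD test system standing in for a reduced soft-tissue stiffness matrix
rng = np.random.default_rng(3)
Q = rng.standard_normal((50, 50))
A = Q @ Q.T + 50.0 * np.eye(50)
b = rng.standard_normal(50)
x = jacobi_pcg(A, b)
print("residual norm:", np.linalg.norm(A @ x - b))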
Computational simulation of composite structures with and without damage. M.S. Thesis Final Report
NASA Technical Reports Server (NTRS)
Wilt, Thomas F.
1994-01-01
A methodology is described which uses finite element analysis of various laminates to computationally simulate the effects of delamination damage initiation and growth on the structural behavior of laminated composite structures. The delamination area is expanded according to a set pattern. As the delamination area increases, how the structural response of the laminate changes with respect to buckling and strain energy release rate are investigated. Rules are presented for laminates of different configurations, materials and thickness. These results demonstrate that computational simulation methods can provide alternate methods to investigate the complex delamination damage mechanisms found in composite structures.
The LHCb Grid Simulation: Proof of Concept
NASA Astrophysics Data System (ADS)
Hushchyn, M.; Ustyuzhanin, A.; Arzymatov, K.; Roiser, S.; Baranov, A.
2017-10-01
The Worldwide LHC Computing Grid provides researchers at different geographical locations with access to data and the computational resources to analyze it. The grid has a hierarchical topology, with multiple sites distributed over the world that vary in number of CPUs, amount of disk storage, and connection bandwidth. Job scheduling and data distribution strategy are key elements of grid performance. Optimizing the algorithms for those tasks requires testing them on the real grid, which is hard to achieve. A grid simulator would simplify this task and could therefore lead to better scheduling and data placement algorithms. In this paper we demonstrate a grid simulator for the LHCb distributed computing software.
Computer simulation of the metastatic progression.
Wedemann, Gero; Bethge, Anja; Haustein, Volker; Schumacher, Udo
2014-01-01
A novel computer model based on a discrete event simulation procedure describes quantitatively the processes underlying the metastatic cascade. Analytical functions describe the size of the primary tumor and the metastases, while a rate function models the intravasation events of the primary tumor and metastases. Events describe the behavior of the malignant cells until the formation of new metastases. The results of the computer simulations are in quantitative agreement with clinical data determined from a patient with hepatocellular carcinoma in the liver. The model provides a more detailed view on the process than a conventional mathematical model. In particular, the implications of interventions on metastasis formation can be calculated.
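A minimal discrete-event sketch in the same spirit: metastasis-founding events are kept in a priority queue, tumours grow exponentially from a single cell, and seeding is treated as an inhomogeneous Poisson process sampled by inverting its integrated rate. The growth rate, seeding rate, and time horizon are illustrative assumptions, not the published model parameters.

import heapq
import numpy as np

rng = np.random.default_rng(4)

GROWTH = 0.006      # tumour growth rate per day (illustrative assumption)
SEED_RATE = 1e-9    # seeding events per cell per day (illustrative assumption)

def next_seeding(t_now, t_founded):
    """Time of the next seeding (intravasation) event from a tumour founded at
    t_founded, with rate SEED_RATE * exp(GROWTH*(t - t_founded)); sampled by
    inverting the integrated rate of the inhomogeneous Poisson process."""
    e = rng.exponential(1.0)
    s_now = np.exp(GROWTH * (t_now - t_founded))
    return t_founded + np.log(s_now + GROWTH * e / SEED_RATE) / GROWTH

t_end = 10 * 365.0                           # simulate ten years (days)
lesions = [0.0]                              # founding times; primary starts at t = 0
events = [(next_seeding(0.0, 0.0), 0.0)]     # (event time, founding time of source)

while events:
    t, src = heapq.heappop(events)
    if t > t_end:
        break
    lesions.append(t)                        # a new metastasis is founded at time t
    # schedule the next seeding event from the source and from the new lesion
    heapq.heappush(events, (next_seeding(t, src), src))
    heapq.heappush(events, (next_seeding(t, t), t))

print("lesions after ten years:", len(lesions))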
Bio-inspired algorithms applied to molecular docking simulations.
Heberlé, G; de Azevedo, W F
2011-01-01
Nature as a source of inspiration has been shown to have a great beneficial impact on the development of new computational methodologies. In this scenario, analyses of the interactions between a protein target and a ligand can be simulated by biologically inspired algorithms (BIAs). These algorithms mimic biological systems to create new paradigms for computation, such as neural networks, evolutionary computing, and swarm intelligence. This review provides a description of the main concepts behind BIAs applied to molecular docking simulations. Special attention is devoted to evolutionary algorithms, guided-directed evolutionary algorithms, and Lamarckian genetic algorithms. Recent applications of these methodologies to protein targets identified in the Mycobacterium tuberculosis genome are described.
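For orientation, a bare-bones genetic algorithm of the kind these BIAs build on is sketched below, minimizing a toy "score" over six pose-like parameters; a real docking code would replace the score with protein-ligand interaction terms, and a Lamarckian variant would add a local search whose result is written back into the genome.

import numpy as np

rng = np.random.default_rng(5)

def score(pose):
    """Toy 'binding energy' with a known optimum; purely illustrative."""
    target = np.array([1.0, -2.0, 0.5, 0.3, 1.2, -0.7])
    return float(np.sum((pose - target) ** 2))

def genetic_algorithm(pop_size=60, n_genes=6, generations=200,
                      mutation_scale=0.2, elite=10):
    """Basic GA: truncation selection, uniform crossover, Gaussian mutation,
    with the elite individuals carried over unchanged each generation."""
    pop = rng.uniform(-5.0, 5.0, size=(pop_size, n_genes))
    for _ in range(generations):
        order = np.argsort([score(p) for p in pop])
        parents = pop[order[:elite]]                       # keep the best poses
        children = []
        while len(children) < pop_size - elite:
            a, b = parents[rng.integers(elite, size=2)]
            mask = rng.random(n_genes) < 0.5               # uniform crossover
            child = np.where(mask, a, b)
            child = child + rng.normal(0.0, mutation_scale, n_genes)  # mutation
            children.append(child)
        pop = np.vstack([parents, np.array(children)])
    best = pop[np.argmin([score(p) for p in pop])]
    return best, score(best)

best, best_score = genetic_algorithm()
print("best pose:", np.round(best, 2), " score:", round(best_score, 4))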
Development and validation of real-time simulation of X-ray imaging with respiratory motion.
Vidal, Franck P; Villard, Pierre-Frédéric
2016-04-01
We present a framework that combines evolutionary optimisation, soft tissue modelling and ray tracing on GPU to simultaneously compute the respiratory motion and X-ray imaging in real-time. Our aim is to provide validated building blocks with high fidelity to closely match both the human physiology and the physics of X-rays. A CPU-based set of algorithms is presented to model organ behaviours during respiration. Soft tissue deformation is computed with an extension of the Chain Mail method. Rigid elements move according to kinematic laws. A GPU-based surface rendering method is proposed to compute the X-ray image using the Beer-Lambert law. It is provided as an open-source library. A quantitative validation study is provided to objectively assess the accuracy of both components: (i) the respiration against anatomical data, and (ii) the X-ray against the Beer-Lambert law and the results of Monte Carlo simulations. Our implementation can be used in various applications, such as interactive medical virtual environment to train percutaneous transhepatic cholangiography in interventional radiology, 2D/3D registration, computation of digitally reconstructed radiograph, simulation of 4D sinograms to test tomography reconstruction tools.
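The Beer-Lambert computation at the heart of the X-ray component reduces, per ray, to I = I0 * exp(-sum_i mu_i * d_i); the sketch below evaluates this for a single ray with illustrative attenuation coefficients and path lengths, and is unrelated to the GPU implementation in the library.

import numpy as np

# Linear attenuation coefficients (1/cm) at a fixed energy -- illustrative values
MU = {"soft_tissue": 0.20, "bone": 0.50, "air": 0.0001}

def transmitted_intensity(i0, path_segments):
    """Beer-Lambert law along one ray: I = I0 * exp(-sum_i mu_i * d_i), where
    d_i is the length (cm) the ray travels through material i."""
    total = sum(MU[material] * length for material, length in path_segments)
    return i0 * np.exp(-total)

# A ray crossing 3 cm of soft tissue, 1 cm of bone, then 2 cm of soft tissue
ray = [("soft_tissue", 3.0), ("bone", 1.0), ("soft_tissue", 2.0)]
print("transmitted fraction:", transmitted_intensity(1.0, ray))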
Computer modeling of human decision making
NASA Technical Reports Server (NTRS)
Gevarter, William B.
1991-01-01
Models of human decision making are reviewed, including models that treat just the cognitive aspects of human behavior as well as models that include motivation. Both models that have associated computer programs and those that do not are considered. Since flow diagrams that assist in constructing computer simulations of such models were not generally available, such diagrams were constructed and are presented. The result provides a rich source of information that can aid in the construction of more realistic future simulations of human decision making.
BioNetFit: a fitting tool compatible with BioNetGen, NFsim and distributed computing environments
Thomas, Brandon R.; Chylek, Lily A.; Colvin, Joshua; ...
2015-11-09
Rule-based models are analyzed with specialized simulators, such as those provided by the BioNetGen and NFsim open-source software packages. In this paper, we present BioNetFit, a general-purpose fitting tool that is compatible with BioNetGen and NFsim. BioNetFit is designed to take advantage of distributed computing resources. This feature facilitates fitting (i.e., optimization of parameter values for consistency with data) when simulations are computationally expensive.
[Computer simulation by passenger wound analysis of vehicle collision].
Zou, Dong-Hua; Liu, Nning-Guo; Shen, Jie; Zhang, Xiao-Yun; Jin, Xian-Long; Chen, Yi-Jiu
2006-08-15
To reconstruct the course of a vehicle collision and thereby provide a reference for forensic identification and the disposal of traffic accidents. By analyzing evidence left on both passengers and vehicles, a momentum-impulse technique combined with multi-body dynamics was applied to simulate the motion and injury of passengers as well as the tracks of the vehicles. The computer simulation model reconstructed the phases of the traffic collision, which coincided with details found by the forensic investigation. Computer simulation is helpful and feasible for forensic identification in traffic accidents.
Trace contaminant control simulation computer program, version 8.1
NASA Technical Reports Server (NTRS)
Perry, J. L.
1994-01-01
The Trace Contaminant Control Simulation computer program is a tool for assessing the performance of various process technologies for removing trace chemical contamination from a spacecraft cabin atmosphere. Included in the simulation are chemical and physical adsorption by activated charcoal, chemical adsorption by lithium hydroxide, absorption by humidity condensate, and low- and high-temperature catalytic oxidation. Means are provided for simulating regenerable as well as nonregenerable systems. The program provides an overall mass balance of chemical contaminants in a spacecraft cabin given specified generation rates. Removal rates are based on device flow rates specified by the user and calculated removal efficiencies based on cabin concentration and removal technology experimental data. Versions 1.0 through 8.0 are documented in NASA TM-108409. TM-108409 also contains a source file listing for version 8.0. Changes to version 8.0 are documented in this technical memorandum and a source file listing for the modified version, version 8.1, is provided. Detailed descriptions for the computer program subprograms are extracted from TM-108409 and modified as necessary to reflect version 8.1. Version 8.1 supersedes version 8.0. Information on a separate user's guide is available from the author.
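The overall cabin mass balance described above can be pictured for a single contaminant as dC/dt = (G - Σ η_i Q_i C) / V, where G is the generation rate, V the cabin free volume, and each removal device i contributes a removal rate equal to its flow rate Q_i times its efficiency η_i times the cabin concentration C. The sketch below integrates that balance with forward Euler; the device flow rates and efficiencies are illustrative assumptions, not values from the program.

    def simulate_cabin_concentration(volume_m3, generation_mg_per_hr, devices,
                                     hours=24.0, dt=0.01):
        """Integrate dC/dt = (G - sum(eta_i * Q_i * C)) / V with forward Euler.
        devices: list of (flow_m3_per_hr, removal_efficiency) pairs.
        Returns the cabin concentration (mg/m3) at the end of the run."""
        c = 0.0
        t = 0.0
        while t < hours:
            removal = sum(eta * q * c for q, eta in devices)
            c += dt * (generation_mg_per_hr - removal) / volume_m3
            t += dt
        return c

    # Illustrative: a charcoal bed and a catalytic oxidizer in a 100 m3 cabin.
    print(simulate_cabin_concentration(100.0, 5.0, [(15.0, 0.6), (4.0, 0.95)]))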
Automated inverse computer modeling of borehole flow data in heterogeneous aquifers
NASA Astrophysics Data System (ADS)
Sawdey, J. R.; Reeve, A. S.
2012-09-01
A computer model has been developed to simulate borehole flow in heterogeneous aquifers where the vertical distribution of permeability may vary significantly. In crystalline fractured aquifers, flow into or out of a borehole occurs at discrete locations of fracture intersection. Under these circumstances, flow simulations are defined by independent variables of transmissivity and far-field heads for each flow contributing fracture intersecting the borehole. The computer program, ADUCK (A Downhole Underwater Computational Kit), was developed to automatically calibrate model simulations to collected flowmeter data providing an inverse solution to fracture transmissivity and far-field head. ADUCK has been tested in variable borehole flow scenarios, and converges to reasonable solutions in each scenario. The computer program has been created using open-source software to make the ADUCK model widely available to anyone who could benefit from its utility.
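The calibration problem ADUCK addresses can be illustrated with a much simplified forward model in which each fracture contributes a flow Q_i = T_i (h_i - h_borehole), and the transmissivities T_i and far-field heads h_i are adjusted until the modeled flows match the flowmeter data. The sketch below fits such a model with SciPy's generic least-squares solver; the inflow model, the two flow-logging conditions and all numbers are illustrative assumptions, not the actual ADUCK formulation.

    import numpy as np
    from scipy.optimize import least_squares

    def forward_flow(params, borehole_head):
        """Simplified inflow model: Q_i = T_i * (h_i - h_borehole).
        params holds [T_1..T_n, h_1..h_n]."""
        n = len(params) // 2
        T, h = params[:n], params[n:]
        return T * (h - borehole_head)

    def residuals(params, observed, borehole_heads):
        """Misfit against flowmeter profiles logged under two borehole heads
        (e.g. ambient and pumped), which together constrain T and h."""
        return np.concatenate([forward_flow(params, hb) - obs
                               for obs, hb in zip(observed, borehole_heads)])

    observed = [np.array([1.2, -0.4, 0.9]),    # ambient logging, m3/day
                np.array([2.0, 0.3, 1.6])]     # pumped logging, m3/day
    heads = [10.0, 8.5]                        # borehole heads, m
    x0 = np.concatenate([np.full(3, 0.5), np.full(3, 11.0)])
    fit = least_squares(residuals, x0, args=(observed, heads))
    print(fit.x[:3], fit.x[3:])  # calibrated transmissivities and far-field heads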
User's guide to resin infusion simulation program in the FORTRAN language
NASA Technical Reports Server (NTRS)
Weideman, Mark H.; Hammond, Vince H.; Loos, Alfred C.
1992-01-01
RTMCL is a user-friendly computer code which simulates the manufacture of fabric composites by the resin infusion process. The computer code is based on the process simulation model described in reference 1. Included in the user's guide is a detailed step-by-step description of how to run the program and enter and modify the input data set. Sample input and output files are included along with an explanation of the results. Finally, a complete listing of the program is provided.
Computer simulation as a teaching aid in pharmacy management--Part 1: Principles of accounting.
Morrison, D J
1987-06-01
The need for pharmacists to develop management expertise through participation in formal courses is now widely acknowledged. Many schools of pharmacy lay the foundations for future management training by providing introductory courses as an integral or elective part of the undergraduate syllabus. The benefit of such courses may, however, be limited by the lack of opportunity for the student to apply the concepts and procedures in a practical working environment. Computer simulations provide a means to overcome this problem, particularly in the field of resource management. In this, the first of two articles, the use of a computer model to demonstrate basic accounting principles is described.
NASA Technical Reports Server (NTRS)
Mital, Subodh K.; Murthy, Pappu L. N.; Chamis, Christos C.
1994-01-01
A computational simulation procedure is presented for nonlinear analyses which incorporates microstress redistribution due to progressive fracture in ceramic matrix composites. This procedure facilitates an accurate simulation of the stress-strain behavior of ceramic matrix composites up to failure. The nonlinearity in the material behavior is accounted for at the constituent (fiber/matrix/interphase) level. This computational procedure is a part of recent upgrades to CEMCAN (Ceramic Matrix Composite Analyzer) computer code. The fiber substructuring technique in CEMCAN is used to monitor the damage initiation and progression as the load increases. The room-temperature tensile stress-strain curves for SiC fiber reinforced reaction-bonded silicon nitride (RBSN) matrix unidirectional and angle-ply laminates are simulated and compared with experimentally observed stress-strain behavior. Comparison between the predicted stress/strain behavior and experimental stress/strain curves is good. Collectively the results demonstrate that CEMCAN computer code provides the user with an effective computational tool to simulate the behavior of ceramic matrix composites.
LHC@Home: a BOINC-based volunteer computing infrastructure for physics studies at CERN
NASA Astrophysics Data System (ADS)
Barranco, Javier; Cai, Yunhai; Cameron, David; Crouch, Matthew; Maria, Riccardo De; Field, Laurence; Giovannozzi, Massimo; Hermes, Pascal; Høimyr, Nils; Kaltchev, Dobrin; Karastathis, Nikos; Luzzi, Cinzia; Maclean, Ewen; McIntosh, Eric; Mereghetti, Alessio; Molson, James; Nosochkov, Yuri; Pieloni, Tatiana; Reid, Ivan D.; Rivkin, Lenny; Segal, Ben; Sjobak, Kyrre; Skands, Peter; Tambasco, Claudia; Veken, Frederik Van der; Zacharov, Igor
2017-12-01
The LHC@Home BOINC project has provided computing capacity for numerical simulations to researchers at CERN since 2004, and has since 2011 been expanded with a wider range of applications. The traditional CERN accelerator physics simulation code SixTrack enjoys continuing volunteer support, and thanks to virtualisation a number of applications from the LHC experiment collaborations and particle theory groups have joined the consolidated LHC@Home BOINC project. This paper addresses the challenges related to traditional and virtualized applications in the BOINC environment, and how volunteer computing has been integrated into the overall computing strategy of the laboratory through the consolidated LHC@Home service. Thanks to the computing power provided by volunteers joining LHC@Home, numerous accelerator beam physics studies have been carried out, yielding an improved understanding of charged particle dynamics in the CERN Large Hadron Collider (LHC) and its future upgrades. The main results are highlighted in this paper.
TOPICAL REVIEW: Advances and challenges in computational plasma science
NASA Astrophysics Data System (ADS)
Tang, W. M.; Chan, V. S.
2005-02-01
Scientific simulation, which provides a natural bridge between theory and experiment, is an essential tool for understanding complex plasma behaviour. Recent advances in simulations of magnetically confined plasmas are reviewed in this paper, with illustrative examples, chosen from associated research areas such as microturbulence, magnetohydrodynamics and other topics. Progress has been stimulated, in particular, by the exponential growth of computer speed along with significant improvements in computer technology. The advances in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics have produced increasingly good agreement between experimental observations and computational modelling. This was enabled by two key factors: (a) innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning widely disparate temporal and spatial scales and (b) access to powerful new computational resources. Excellent progress has been made in developing codes for which computer run-time and problem-size scale well with the number of processors on massively parallel processors (MPPs). Examples include the effective usage of the full power of multi-teraflop (multi-trillion floating point computations per second) MPPs to produce three-dimensional, general geometry, nonlinear particle simulations that have accelerated advances in understanding the nature of turbulence self-regulation by zonal flows. These calculations, which typically utilized billions of particles for thousands of time-steps, would not have been possible without access to powerful present generation MPP computers and the associated diagnostic and visualization capabilities. In looking towards the future, the current results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. This should produce the scientific excitement which will help to (a) stimulate enhanced cross-cutting collaborations with other fields and (b) attract the bright young talent needed for the future health of the field of plasma science.
NASA Astrophysics Data System (ADS)
Xue, Bo; Mao, Bingjing; Chen, Xiaomei; Ni, Guoqiang
2010-11-01
This paper presents a configurable distributed high-performance computing (HPC) framework for TDI-CCD imaging simulation. It uses the strategy pattern to accommodate multiple algorithms, helping to decrease simulation time at low expense. Imaging simulation for a TDI-CCD mounted on a satellite comprises four processes: 1) atmosphere-induced degradation, 2) optical-system-induced degradation, 3) degradation and re-sampling in the TDI-CCD electronics, and 4) data integration. Processes 1) to 3) rely on data-intensive algorithms such as FFT, convolution and Lagrange interpolation, which require substantial CPU power. Even on an Intel Xeon X5550 processor, a conventional serial processing method takes more than 30 hours for a simulation whose result image size is 1500 x 1462. A literature survey found no mature distributed HPC framework in this field. We therefore developed a distributed computing framework for TDI-CCD imaging simulation based on WCF [1], which uses a client/server (C/S) architecture and harvests the free CPU resources in the LAN. The server pushes the tasks of processes 1) to 3) to this free computing capacity, yielding HPC at low cost. In a computing experiment with four symmetric nodes and one server, the framework reduced simulation time by about 74%; adding further asymmetric nodes to the computing network decreased the time correspondingly. In conclusion, this framework can provide essentially unlimited computation capacity provided the network and the task-management server are affordable, and it offers a new HPC solution for TDI-CCD imaging simulation and similar applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
HOLM, ELIZABETH A.; BATTAILE, CORBETT C.; BUCHHEIT, THOMAS E.
2000-04-01
Computational materials simulations have traditionally focused on individual phenomena: grain growth, crack propagation, plastic flow, etc. However, real materials behavior results from a complex interplay between phenomena. In this project, the authors explored methods for coupling mesoscale simulations of microstructural evolution and micromechanical response. In one case, massively parallel (MP) simulations for grain evolution and microcracking in alumina stronglink materials were dynamically coupled. In the other, codes for domain coarsening and plastic deformation in CuSi braze alloys were iteratively linked. This program provided the first comparison of two promising ways to integrate mesoscale computer codes. Coupled microstructural/micromechanical codes were applied to experimentally observed microstructures for the first time. In addition to the coupled codes, this project developed a suite of new computational capabilities (PARGRAIN, GLAD, OOF, MPM, polycrystal plasticity, front tracking). The problem of plasticity length scale in continuum calculations was recognized and a solution strategy was developed. The simulations were experimentally validated on stockpile materials.
Positive Wigner functions render classical simulation of quantum computation efficient.
Mari, A; Eisert, J
2012-12-07
We show that quantum circuits where the initial state and all the following quantum operations can be represented by positive Wigner functions can be classically efficiently simulated. This is true both for continuous-variable as well as discrete variable systems in odd prime dimensions, two cases which will be treated on entirely the same footing. Noting the fact that Clifford and Gaussian operations preserve the positivity of the Wigner function, our result generalizes the Gottesman-Knill theorem. Our algorithm provides a way of sampling from the output distribution of a computation or a simulation, including the efficient sampling from an approximate output distribution in the case of sampling imperfections for initial states, gates, or measurements. In this sense, this work highlights the role of the positive Wigner function as separating classically efficiently simulable systems from those that are potentially universal for quantum computing and simulation, and it emphasizes the role of negativity of the Wigner function as a computational resource.
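The classical simulation strategy can be pictured, in a deliberately toy setting, by sampling phase-space points from a positive (here Gaussian) Wigner function, pushing them through a Gaussian operation, and reading the measurement statistics off the transformed samples. The sketch below does this for a single continuous-variable mode; the conventions and the choice of a phase rotation as the Gaussian gate are illustrative and are not the general algorithm of the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_coherent_wigner(alpha, n_samples):
        """Draw phase-space points from the positive, Gaussian Wigner function
        of a coherent state |alpha>, with x = sqrt(2) Re(alpha),
        p = sqrt(2) Im(alpha) and quadrature variance 1/2 (hbar = 1)."""
        mean = np.array([np.sqrt(2) * alpha.real, np.sqrt(2) * alpha.imag])
        cov = 0.5 * np.eye(2)
        return rng.multivariate_normal(mean, cov, size=n_samples)

    def apply_phase_rotation(points, theta):
        """A Gaussian operation acts on phase space as a symplectic map;
        here a simple rotation stands in for a phase-shifter gate."""
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s], [s, c]])
        return points @ R.T

    points = sample_coherent_wigner(1.0 + 0.5j, 100_000)
    points = apply_phase_rotation(points, np.pi / 4)
    # The x-coordinates of the samples now reproduce the homodyne outcome
    # distribution of the rotated state, i.e. the output of the simulation.
    print(points[:, 0].mean(), points[:, 0].var())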
Application of real-time engine simulations to the development of propulsion system controls
NASA Technical Reports Server (NTRS)
Szuch, J. R.
1975-01-01
The use of real-time computer simulations of turbojet and turbofan engines in the development of digital engine controls is presented. The engine simulation provides a test-bed for evaluating new control laws and for checking and debugging control software and hardware prior to engine testing. The development and use of real-time, hybrid computer simulations of the Pratt and Whitney TF30-P-3 and F100-PW-100 augmented turbofans are described in support of a number of controls research programs at the Lewis Research Center. The role of engine simulations in solving the propulsion systems integration problem is also discussed.
Advances in free-energy-based simulations of protein folding and ligand binding.
Perez, Alberto; Morrone, Joseph A; Simmerling, Carlos; Dill, Ken A
2016-02-01
Free-energy-based simulations are increasingly providing the narratives about the structures, dynamics and biological mechanisms that constitute the fabric of protein science. Here, we review two recent successes. It is becoming practical: first, to fold small proteins with free-energy methods without knowing substructures and second, to compute ligand-protein binding affinities, not just their binding poses. Over the past 40 years, the timescales that can be simulated by atomistic MD are doubling every 1.3 years--which is faster than Moore's law. Thus, these advances are not simply due to the availability of faster computers. Force fields, solvation models and simulation methodology have kept pace with computing advancements, and are now quite good. At the tip of the spear recently are GPU-based computing, improved fast-solvation methods, continued advances in force fields, and conformational sampling methods that harness external information. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Advanced Supercomputing Facility Expansion
NASA Technical Reports Server (NTRS)
Thigpen, William W.
2017-01-01
The NASA Advanced Supercomputing (NAS) Division enables advances in high-end computing technologies and in modeling and simulation methods to tackle some of the toughest science and engineering challenges facing NASA today. The name "NAS" has long been associated with leadership and innovation throughout the high-end computing (HEC) community. We play a significant role in shaping HEC standards and paradigms, and provide leadership in the areas of large-scale InfiniBand fabrics, Lustre open-source filesystems, and hyperwall technologies. We provide an integrated high-end computing environment to accelerate NASA missions and make revolutionary advances in science. Pleiades, a petaflop-scale supercomputer, is used by scientists throughout the U.S. to support NASA missions, and is ranked among the most powerful systems in the world. One of our key focus areas is in modeling and simulation to support NASA's real-world engineering applications and make fundamental advances in modeling and simulation methods.
NASA Technical Reports Server (NTRS)
Lansing, F. L.; Chai, V. W.; Lascu, D.; Urbenajo, R.; Wong, P.
1978-01-01
The engineering manual provides a complete companion documentation about the structure of the main program and subroutines, the preparation of input data, the interpretation of output results, access and use of the program, and the detailed description of all the analytic, logical expressions and flow charts used in computations and program structure. A numerical example is provided and solved completely to show the sequence of computations followed. The program is carefully structured to reduce both user's time and costs without sacrificing accuracy. The user would expect a cost of CPU time of approximately $5.00 per building zone excluding printing costs. The accuracy, on the other hand, measured by deviation of simulated consumption from watt-hour meter readings, was found by many simulation tests not to exceed a ±10 percent margin.
Program optimizations: The interplay between power, performance, and energy
Leon, Edgar A.; Karlin, Ian; Grant, Ryan E.; ...
2016-05-16
Practical considerations for future supercomputer designs will impose limits on both instantaneous power consumption and total energy consumption. Working within these constraints while providing the maximum possible performance, application developers will need to optimize their code for speed alongside power and energy concerns. This paper analyzes the effectiveness of several code optimizations including loop fusion, data structure transformations, and global allocations. A per component measurement and analysis of different architectures is performed, enabling the examination of code optimizations on different compute subsystems. Using an explicit hydrodynamics proxy application from the U.S. Department of Energy, LULESH, we show how code optimizations impact different computational phases of the simulation. This provides insight for simulation developers into the best optimizations to use during particular simulation compute phases when optimizing code for future supercomputing platforms. Here, we examine and contrast both x86 and Blue Gene architectures with respect to these optimizations.
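One of the optimizations studied, loop fusion, can be illustrated generically: two element-wise passes over the same mesh arrays are merged into one, so each element is loaded and stored once instead of twice. The update formulas below are made up for illustration and are not LULESH code.

    def update_unfused(pressure, energy, volume_rate, dt):
        """Two separate passes over the mesh; each pass re-reads the arrays."""
        for i in range(len(pressure)):
            energy[i] -= pressure[i] * volume_rate[i] * dt
        for i in range(len(pressure)):
            pressure[i] = max(pressure[i] + 0.4 * energy[i] * volume_rate[i] * dt, 0.0)

    def update_fused(pressure, energy, volume_rate, dt):
        """Fused loop: one pass touches each element once, improving cache
        locality and reducing memory traffic, which is the kind of
        per-subsystem effect measured in the study."""
        for i in range(len(pressure)):
            energy[i] -= pressure[i] * volume_rate[i] * dt
            pressure[i] = max(pressure[i] + 0.4 * energy[i] * volume_rate[i] * dt, 0.0)

Because each element's update is independent of the others, the fused and unfused versions give identical results; only the memory behavior, and hence the power and energy profile, changes.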
SFU Hacking for Non-Hackers v. 1.005
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carter, David James
The program provides a limited virtual environment for exploring some concepts of computer hacking. It simulates a simple computer system with intentional vulnerabilities, allowing the user to issue commands and observe their results. It does not affect the computer on which it runs.
The GeantV project: Preparing the future of simulation
Amadio, G.; Apostolakis, J.; Bandieramonte, M.; ...
2015-12-23
Detector simulation is consuming at least half of the HEP computing cycles, and even so, experiments have to take hard decisions on what to simulate, as their needs greatly surpass the availability of computing resources. New experiments still in the design phase such as FCC, CLIC and ILC as well as upgraded versions of the existing LHC detectors will push further the simulation requirements. Since the increase in computing resources is not likely to keep pace with our needs, it is therefore necessary to explore innovative ways of speeding up simulation in order to sustain the progress of High Energy Physics. The GeantV project aims at developing a high performance detector simulation system integrating fast and full simulation that can be ported on different computing architectures, including CPU accelerators. After more than two years of R&D the project has produced a prototype capable of transporting particles in complex geometries exploiting micro-parallelism, SIMD and multithreading. Portability is obtained via C++ template techniques that allow the development of machine-independent computational kernels. Furthermore, a set of tables derived from Geant4 for cross sections and final states provides a realistic shower development and, having been ported into a Geant4 physics list, can be used as a basis for a direct performance comparison.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hillis, D.R.
A computer-based simulation with an artificial intelligence component and discovery learning was investigated as a method to formulate training needs for new or unfamiliar technologies. Specifically, the study examined if this simulation method would provide for the recognition of applications and knowledge/skills which would be the basis for establishing training needs. The study also examined the effect of field-dependence/independence on recognition of applications and knowledge/skills. A pretest-posttest control group experimental design involving fifty-eight college students from an industrial technology program was used. The study concluded that the simulation was effective in developing recognition of applications and the knowledge/skills for a new or unfamiliar technology. And, the simulation's effectiveness for providing this recognition was not limited by an individual's field-dependence/independence.
The Numerical Propulsion System Simulation: A Multidisciplinary Design System for Aerospace Vehicles
NASA Technical Reports Server (NTRS)
Lytle, John K.
1999-01-01
Advances in computational technology and in physics-based modeling are making large scale, detailed simulations of complex systems possible within the design environment. For example, the integration of computing, communications, and aerodynamics has reduced the time required to analyze major propulsion system components from days and weeks to minutes and hours. This breakthrough has enabled the detailed simulation of major propulsion system components to become a routine part of the design process and to provide the designer with critical information about the components early in the design process. This paper describes the development of the Numerical Propulsion System Simulation (NPSS), a multidisciplinary system of analysis tools that is focused on extending the simulation capability from components to the full system. This will provide the product developer with a "virtual wind tunnel" that will reduce the number of hardware builds and tests required during the development of advanced aerospace propulsion systems.
An Approach to Experimental Design for the Computer Analysis of Complex Phenomenon
NASA Technical Reports Server (NTRS)
Rutherford, Brian
2000-01-01
The ability to make credible system assessments, predictions and design decisions related to engineered systems and other complex phenomena is key to a successful program for many large-scale investigations in government and industry. Recently, many of these large-scale analyses have turned to computational simulation to provide much of the required information. Addressing specific goals in the computer analysis of these complex phenomena is often accomplished through the use of performance measures that are based on system response models. The response models are constructed using computer-generated responses together with physical test results where possible. They are often based on probabilistically defined inputs and generally require estimation of a set of response modeling parameters. As a consequence, the performance measures are themselves distributed quantities reflecting these variabilities and uncertainties. Uncertainty in the values of the performance measures leads to uncertainties in predicted performance and can cloud the decisions required of the analysis. A specific goal of this research has been to develop methodology that will reduce this uncertainty in an analysis environment where limited resources and system complexity together restrict the number of simulations that can be performed. An approach has been developed that is based on evaluation of the potential information provided for each "intelligently selected" candidate set of computer runs. Each candidate is evaluated by partitioning the performance measure uncertainty into two components - one component that could be explained through the additional computational simulation runs and a second that would remain uncertain. The portion explained is estimated using a probabilistic evaluation of likely results for the additional computational analyses based on what is currently known about the system. The set of runs indicating the largest potential reduction in uncertainty is then selected and the computational simulations are performed. Examples are provided to demonstrate this approach on small scale problems. These examples give encouraging results. Directions for further research are indicated.
Design for progressive fracture in composite shell structures
NASA Technical Reports Server (NTRS)
Minnetyan, Levon; Murthy, Pappu L. N.
1992-01-01
The load carrying capability and structural behavior of composite shell structures and stiffened curved panels are investigated to provide accurate early design loads. An integrated computer code is utilized for the computational simulation of composite structural degradation under practical loading for realistic design. Damage initiation, growth, accumulation, and propagation to structural fracture are included in the simulation. Progressive fracture investigations providing design insight for several classes of composite shells are presented. Results demonstrate the significance of local defects, interfacial regions, and stress concentrations on the structural durability of composite shells.
Biocellion: accelerating computer simulation of multicellular biological system models
Kang, Seunghwa; Kahan, Simon; McDermott, Jason; Flann, Nicholas; Shmulevich, Ilya
2014-01-01
Motivation: Biological system behaviors are often the outcome of complex interactions among a large number of cells and their biotic and abiotic environment. Computational biologists attempt to understand, predict and manipulate biological system behavior through mathematical modeling and computer simulation. Discrete agent-based modeling (in combination with high-resolution grids to model the extracellular environment) is a popular approach for building biological system models. However, the computational complexity of this approach forces computational biologists to resort to coarser resolution approaches to simulate large biological systems. High-performance parallel computers have the potential to address the computing challenge, but writing efficient software for parallel computers is difficult and time-consuming. Results: We have developed Biocellion, a high-performance software framework, to solve this computing challenge using parallel computers. To support a wide range of multicellular biological system models, Biocellion asks users to provide their model specifics by filling the function body of pre-defined model routines. Using Biocellion, modelers without parallel computing expertise can efficiently exploit parallel computers with less effort than writing sequential programs from scratch. We simulate cell sorting, microbial patterning and a bacterial system in soil aggregate as case studies. Availability and implementation: Biocellion runs on x86 compatible systems with the 64 bit Linux operating system and is freely available for academic use. Visit http://biocellion.com for additional information. Contact: seunghwa.kang@pnnl.gov PMID:25064572
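The callback structure described above, in which the user fills in the bodies of pre-defined model routines while the framework owns the time loop and the parallelization, can be pictured with the following greatly simplified Python analogue. The real framework is written in C++ and its routine names differ; everything here is illustrative.

    import random

    class CellModel:
        """User-supplied model: the framework calls these routines each step."""

        def divide_probability(self, cell):
            # Model-specific rule: divide once the cell has enough biomass.
            return 1.0 if cell["biomass"] > 2.0 else 0.0

        def update_state(self, cell, local_nutrient, dt):
            # Take up nutrient and grow; purely illustrative kinetics.
            uptake = min(local_nutrient, 0.1 * dt)
            cell["biomass"] += uptake
            return uptake

    def run_framework(model, cells, nutrient, steps, dt=1.0):
        """Stand-in for the framework's time loop (the part users do NOT write)."""
        for _ in range(steps):
            new_cells = []
            for cell in cells:
                nutrient -= model.update_state(cell, nutrient, dt)
                if random.random() < model.divide_probability(cell):
                    cell["biomass"] /= 2.0
                    new_cells.append({"biomass": cell["biomass"]})
            cells.extend(new_cells)
        return cells, nutrient

    random.seed(0)
    cells, nutrient = run_framework(CellModel(), [{"biomass": 1.0}], 50.0, steps=100)
    print(len(cells), nutrient)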
About Distributed Simulation-based Optimization of Forming Processes using a Grid Architecture
NASA Astrophysics Data System (ADS)
Grauer, Manfred; Barth, Thomas
2004-06-01
The continually increasing complexity of products and their manufacturing processes, combined with a shorter "time-to-market", leads to more and more use of simulation and optimization software systems for product design. Finding a "good" design of a product implies the solution of computationally expensive optimization problems based on the results of simulation. Due to the computational load caused by the solution of these problems, the requirements on the Information & Telecommunication (IT) infrastructure of an enterprise or research facility are shifting from stand-alone resources towards the integration of software and hardware resources in a distributed environment for high-performance computing. Resources can either comprise software systems, hardware systems, or communication networks. An appropriate IT-infrastructure must provide the means to integrate all these resources and enable their use even across a network to cope with requirements from geographically distributed scenarios, e.g. in computational engineering and/or collaborative engineering. Integrating expert knowledge into the optimization process is inevitable in order to reduce the complexity caused by the number of design variables and the high dimensionality of the design space. Hence, utilization of knowledge-based systems must be supported by providing data management facilities as a basis for knowledge extraction from product data. In this paper, the focus is put on a distributed problem solving environment (PSE) capable of providing access to a variety of necessary resources and services. A distributed approach integrating simulation and optimization on a network of workstations and cluster systems is presented. For geometry generation the CAD-system CATIA is used which is coupled with the FEM-simulation system INDEED for simulation of sheet-metal forming processes and the problem solving environment OpTiX for distributed optimization.
Error Estimation and Uncertainty Propagation in Computational Fluid Mechanics
NASA Technical Reports Server (NTRS)
Zhu, J. Z.; He, Guowei; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
Numerical simulation has now become an integral part of engineering design process. Critical design decisions are routinely made based on the simulation results and conclusions. Verification and validation of the reliability of the numerical simulation is therefore vitally important in the engineering design processes. We propose to develop theories and methodologies that can automatically provide quantitative information about the reliability of the numerical simulation by estimating numerical approximation error, computational model induced errors and the uncertainties contained in the mathematical models so that the reliability of the numerical simulation can be verified and validated. We also propose to develop and implement methodologies and techniques that can control the error and uncertainty during the numerical simulation so that the reliability of the numerical simulation can be improved.
The Distributed Diagonal Force Decomposition Method for Parallelizing Molecular Dynamics Simulations
Boršnik, Urban; Miller, Benjamin T.; Brooks, Bernard R.; Janežič, Dušanka
2011-01-01
Parallelization is an effective way to reduce the computational time needed for molecular dynamics simulations. We describe a new parallelization method, the distributed-diagonal force decomposition method, with which we extend and improve the existing force decomposition methods. Our new method requires less data communication during molecular dynamics simulations than replicated data and current force decomposition methods, increasing the parallel efficiency. It also dynamically load-balances the processors' computational load throughout the simulation. The method is readily implemented in existing molecular dynamics codes and it has been incorporated into the CHARMM program, allowing its immediate use in conjunction with the many molecular dynamics simulation techniques that are already present in the program. We also present the design of the Force Decomposition Machine, a cluster of personal computers and networks that is tailored to running molecular dynamics simulations using the distributed diagonal force decomposition method. The design is expandable and provides various degrees of fault resilience. This approach is easily adaptable to computers with Graphics Processing Units because it is independent of the processor type being used. PMID:21793007
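The force-decomposition idea, assigning blocks of the pairwise force matrix to different processors and summing the partial force arrays afterwards, can be sketched serially as below. This is a generic block decomposition for illustration only, not the distributed-diagonal variant described in the paper, and the Lennard-Jones parameters are arbitrary.

    import itertools
    import numpy as np

    def lennard_jones_force(ri, rj, epsilon=1.0, sigma=1.0):
        """Force on atom i due to atom j for a simple Lennard-Jones pair."""
        rij = ri - rj
        r2 = np.dot(rij, rij)
        inv6 = (sigma * sigma / r2) ** 3
        return 24.0 * epsilon * (2.0 * inv6 * inv6 - inv6) / r2 * rij

    def block_forces(positions, i_block, j_block):
        """Forces from interactions between two blocks of atoms: the unit of
        work that a force-decomposition method assigns to one processor."""
        forces = np.zeros_like(positions)
        for i in i_block:
            for j in j_block:
                if i < j:
                    f = lennard_jones_force(positions[i], positions[j])
                    forces[i] += f
                    forces[j] -= f
        return forces

    positions = np.random.default_rng(1).random((8, 3)) * 3.0
    blocks = [range(0, 4), range(4, 8)]
    # Each (i_block, j_block) pair would run on a different processor; the
    # partial force arrays are then summed, which is the communication step.
    total = sum(block_forces(positions, bi, bj)
                for bi, bj in itertools.product(blocks, blocks))
    print(total.sum(axis=0))  # approximately zero: Newton's third law check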
Waltemath, Dagmar; Adams, Richard; Bergmann, Frank T; Hucka, Michael; Kolpakov, Fedor; Miller, Andrew K; Moraru, Ion I; Nickerson, David; Sahle, Sven; Snoep, Jacky L; Le Novère, Nicolas
2011-12-15
The increasing use of computational simulation experiments to inform modern biological research creates new challenges to annotate, archive, share and reproduce such experiments. The recently published Minimum Information About a Simulation Experiment (MIASE) proposes a minimal set of information that should be provided to allow the reproduction of simulation experiments among users and software tools. In this article, we present the Simulation Experiment Description Markup Language (SED-ML). SED-ML encodes in a computer-readable exchange format the information required by MIASE to enable reproduction of simulation experiments. It has been developed as a community project and it is defined in a detailed technical specification and additionally provides an XML schema. The version of SED-ML described in this publication is Level 1 Version 1. It covers the description of the most frequent type of simulation experiments in the area, namely time course simulations. SED-ML documents specify which models to use in an experiment, modifications to apply on the models before using them, which simulation procedures to run on each model, what analysis results to output, and how the results should be presented. These descriptions are independent of the underlying model implementation. SED-ML is a software-independent format for encoding the description of simulation experiments; it is not specific to particular simulation tools. Here, we demonstrate that with the growing software support for SED-ML we can effectively exchange executable simulation descriptions. With SED-ML, software can exchange simulation experiment descriptions, enabling the validation and reuse of simulation experiments in different tools. Authors of papers reporting simulation experiments can make their simulation protocols available for other scientists to reproduce the results. Because SED-ML is agnostic about exact modeling language(s) used, experiments covering models from different fields of research can be accurately described and combined.
Application of shell elements in simulation of cans ironing
NASA Astrophysics Data System (ADS)
Andrianov, A. V.; Erisov, Y. A.; Aryshensky, E. V.; Aryshensky, V. Y.
2017-01-01
In the present study, special shell finite elements are used to simulate the drawing with a high ironing ratio of aluminum beverage cans. These elements are implemented in the commercial software package PAM-STAMP 2G by means of the T.T.S. normal-stress option, which is used in ironing to properly describe the normal stress. Comparison of simulation and experimental data shows that shell elements with the T.T.S. option are capable of providing accurate simulation of deep drawing and ironing. The error in the computed can thickness and height is consistent with engineering computation accuracy.
Influence of Computational Drop Representation in LES of a Droplet-Laden Mixing Layer
NASA Technical Reports Server (NTRS)
Bellan, Josette; Radhakrishnan, Senthilkumaran
2013-01-01
Multiphase turbulent flows are encountered in many practical applications including turbine engines or natural phenomena involving particle dispersion. Numerical computations of multiphase turbulent flows are important because they provide a cheaper alternative to performing experiments during an engine design process or because they can provide predictions of pollutant dispersion, etc. Two-phase flows contain millions and sometimes billions of particles. For flows with volumetrically dilute particle loading, the most accurate method of numerically simulating the flow is based on direct numerical simulation (DNS) of the governing equations in which all scales of the flow including the small scales that are responsible for the overwhelming amount of dissipation are resolved. DNS, however, requires high computational cost and cannot be used in engineering design applications where iterations among several design conditions are necessary. Because of high computational cost, numerical simulations of such flows cannot track all these drops. The objective of this work is to quantify the influence of the number of computational drops and grid spacing on the accuracy of predicted flow statistics, and to possibly identify the minimum number, or, if not possible, the optimal number of computational drops that provide minimal error in flow prediction. For this purpose, several Large Eddy Simulation (LES) of a mixing layer with evaporating drops have been performed by using coarse, medium, and fine grid spacings and computational drops, rather than physical drops. To define computational drops, an integer NR is introduced that represents the ratio of the number of existing physical drops to the desired number of computational drops; for example, if NR=8, this means that a computational drop represents 8 physical drops in the flow field. The desired number of computational drops is determined by the available computational resources; the larger NR is, the less computationally intensive is the simulation. A set of first order and second order flow statistics, and of drop statistics are extracted from LES predictions and are compared to results obtained by filtering a DNS database. First order statistics such as Favre averaged stream-wise velocity, Favre averaged vapor mass fraction, and the drop stream-wise velocity, are predicted accurately independent of the number of computational drops and grid spacing. Second order flow statistics depend both on the number of computational drops and on grid spacing. The scalar variance and turbulent vapor flux are predicted accurately by the fine mesh LES only when NR is less than 32, and by the coarse mesh LES reasonably accurately for all NR values. This is attributed to the fact that when the grid spacing is coarsened, the number of drops in a computational cell must not be significantly lower than that in the DNS.
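The bookkeeping behind the drop-number ratio NR can be illustrated very simply: every NR physical drops are represented by one computational drop that carries the group's combined liquid mass, so the source terms seen by the gas phase are preserved while far fewer drops are tracked. The grouping rule below (placing each computational drop at its group's mean position) is an illustrative simplification, not the procedure of the study.

    import numpy as np

    def make_computational_drops(physical_positions, physical_masses, nr):
        """Group every `nr` physical drops into one computational drop placed
        at the group's mean position and carrying its total mass."""
        n_phys = len(physical_positions)
        n_comp = int(np.ceil(n_phys / nr))
        positions = np.empty((n_comp, 3))
        masses = np.empty(n_comp)
        for k in range(n_comp):
            group = slice(k * nr, min((k + 1) * nr, n_phys))
            positions[k] = physical_positions[group].mean(axis=0)
            masses[k] = physical_masses[group].sum()
        return positions, masses

    rng = np.random.default_rng(0)
    phys_pos = rng.random((1_000_000, 3))
    phys_mass = np.full(1_000_000, 1.0e-9)            # kg, illustrative
    comp_pos, comp_mass = make_computational_drops(phys_pos, phys_mass, nr=8)
    print(len(comp_pos), comp_mass.sum())             # 125000 drops, same total mass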
Computational Intelligence for Medical Imaging Simulations.
Chang, Victor
2017-11-25
This paper describes how medical imaging can be simulated with computational intelligence to explore areas that cannot easily be reached by traditional means, including gene and protein simulations related to cancer development and immunity. The paper presents simulations and virtual inspections of BIRC3, BIRC6, CCL4, KLKB1 and CYP2A6 with their outputs and explanations, as well as brain segment intensity due to dancing. Our proposed MapReduce framework with the fusion algorithm can simulate medical imaging. The concept is very similar to digital surface theories that simulate how biological units come together to form bigger units, until the formation of the entire biological subject. The M-Fusion and M-Update functions of the fusion algorithm achieve good performance, processing and visualizing up to 40 GB of data within 600 s. We conclude that computational intelligence can provide effective and efficient healthcare research through simulation and visualization.
Simulating Microbial Community Patterning Using Biocellion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kang, Seung-Hwa; Kahan, Simon H.; Momeni, Babak
2014-04-17
Mathematical modeling and computer simulation are important tools for understanding complex interactions between cells and their biotic and abiotic environment: similarities and differences between modeled and observed behavior provide the basis for hypothesis formation. Momeni et al. [5] investigated pattern formation in communities of yeast strains engaging in different types of ecological interactions, comparing the predictions of mathematical modeling and simulation to actual patterns observed in wet-lab experiments. However, simulations of millions of cells in a three-dimensional community are extremely time-consuming. One simulation run in MATLAB may take a week or longer, inhibiting exploration of the vast space of parameter combinations and assumptions. Improving the speed, scale, and accuracy of such simulations facilitates hypothesis formation and expedites discovery. Biocellion is a high performance software framework for accelerating discrete agent-based simulation of biological systems with millions to trillions of cells. Simulations of comparable scale and accuracy to those taking a week of computer time using MATLAB require just hours using Biocellion on a multicore workstation. Biocellion further accelerates large scale, high resolution simulations using cluster computers by partitioning the work to run on multiple compute nodes. Biocellion targets computational biologists who have mathematical modeling backgrounds and basic C++ programming skills. This chapter describes the necessary steps to adapt Momeni et al.'s original model to the Biocellion framework as a case study.
Evaluation of a grid based molecular dynamics approach for polypeptide simulations.
Merelli, Ivan; Morra, Giulia; Milanesi, Luciano
2007-09-01
Molecular dynamics is very important for biomedical research because it makes possible the simulation of the behavior of a biological macromolecule in silico. However, molecular dynamics is computationally rather expensive: the simulation of a few nanoseconds of dynamics for a large macromolecule such as a protein takes a very long time, due to the high number of operations needed to solve Newton's equations for a system of thousands of atoms. In order to obtain biologically significant data, it is desirable to use high-performance computation resources to perform these simulations. Recently, a distributed computing approach based on replacing a single long simulation with many independent short trajectories has been introduced, which in many cases provides valuable results. This study concerns the development of an infrastructure to run molecular dynamics simulations on a grid platform in a distributed way. The implemented software allows the parallel submission of different simulations that are individually short but together bring important biological information. Moreover, each simulation is divided into a chain of jobs to avoid data loss in case of system failure and to contain the size of each data transfer from the grid. The results confirm that the distributed approach on grid computing is particularly suitable for molecular dynamics simulations thanks to its high scalability.
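The job-chaining strategy, splitting each trajectory into a sequence of short, restartable segments so that a failure loses at most one segment and each transfer to or from the grid stays small, can be sketched as follows. The function names are hypothetical and no real grid middleware is invoked; a real implementation would submit each segment as a separate grid job reading and writing restart files.

    def run_md_segment(restart_state, segment_steps):
        """Stand-in for one short MD job submitted to a grid node."""
        return {"step": restart_state["step"] + segment_steps,
                "coords": restart_state["coords"]}   # coordinates would evolve here

    def run_chained_trajectory(initial_state, total_steps, segment_steps):
        """Split one long trajectory into a chain of dependent short jobs."""
        state = initial_state
        checkpoints = []
        for start in range(0, total_steps, segment_steps):
            state = run_md_segment(state, min(segment_steps, total_steps - start))
            checkpoints.append(state["step"])        # checkpoint after each segment
        return state, checkpoints

    final, checkpoints = run_chained_trajectory({"step": 0, "coords": None},
                                                total_steps=1_000_000,
                                                segment_steps=100_000)
    print(final["step"], checkpoints)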
Description and performance of the Langley differential maneuvering simulator
NASA Technical Reports Server (NTRS)
Ashworth, B. R.; Kahlbaum, W. M., Jr.
1973-01-01
The differential maneuvering simulator for simulating two aircraft or spacecraft operating in a differential mode is described. Tests made to verify that the system could provide the required simulated aircraft motions are given. The mathematical model which converts computed aircraft motions into the required motions of the various projector gimbals is described.
The Integrated Plasma Simulator: A Flexible Python Framework for Coupled Multiphysics Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Foley, Samantha S; Elwasif, Wael R; Bernholdt, David E
2011-11-01
High-fidelity coupled multiphysics simulations are an increasingly important aspect of computational science. In many domains, however, there has been very limited experience with simulations of this sort, therefore research in coupled multiphysics often requires computational frameworks with significant flexibility to respond to the changing directions of the physics and mathematics. This paper presents the Integrated Plasma Simulator (IPS), a framework designed for loosely coupled simulations of fusion plasmas. The IPS provides users with a simple component architecture into which a wide range of existing plasma physics codes can be inserted as components. Simulations can take advantage of multiple levels of parallelism supported in the IPS, and can be controlled by a high-level "driver" component, or by other coordination mechanisms, such as an asynchronous event service. We describe the requirements and design of the framework, and how they were implemented in the Python language. We also illustrate the flexibility of the framework by providing examples of different types of simulations that utilize various features of the IPS.
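The component-and-driver structure described above can be pictured with a much simplified Python skeleton: physics codes are wrapped as components exposing a few standard entry points, and a driver decides the order in which they advance and exchange data each time step. The class and method names here are illustrative and are not the actual IPS API.

    class Component:
        """Base class: each wrapped physics code exposes these entry points."""
        def init(self, time):
            pass
        def step(self, time):
            raise NotImplementedError
        def finalize(self):
            pass

    class EquilibriumSolver(Component):
        def step(self, time):
            print(f"equilibrium solve at t={time}")

    class TurbulenceCode(Component):
        def step(self, time):
            print(f"turbulence update at t={time}")

    class Driver:
        """High-level driver: coordinates the coupled simulation."""
        def __init__(self, components):
            self.components = components
        def run(self, times):
            for c in self.components:
                c.init(times[0])
            for t in times:
                for c in self.components:
                    c.step(t)
            for c in self.components:
                c.finalize()

    Driver([EquilibriumSolver(), TurbulenceCode()]).run([0.0, 0.1, 0.2])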
Computer Simulation for Emergency Incident Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, D L
2004-12-03
This report describes the findings and recommendations resulting from the Department of Homeland Security (DHS) Incident Management Simulation Workshop held by the DHS Advanced Scientific Computing Program in May 2004. This workshop brought senior representatives of the emergency response and incident-management communities together with modeling and simulation technologists from Department of Energy laboratories. The workshop provided an opportunity for incident responders to describe the nature and substance of the primary personnel roles in an incident response, to identify current and anticipated roles of modeling and simulation in support of incident response, and to begin a dialog between the incident response and simulation technology communities that will guide and inform planned modeling and simulation development for incident response. This report provides a summary of the discussions at the workshop as well as a summary of simulation capabilities that are relevant to incident-management training, and recommendations for the use of simulation in both incident management and in incident management training, based on the discussions at the workshop. In addition, the report discusses areas where further research and development will be required to support future needs in this area.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCaskey, Alexander J.
There is a lack of state-of-the-art HPC simulation tools for simulating general quantum computing. Furthermore, there are no real software tools that integrate current quantum computers into existing classical HPC workflows. This product, the Quantum Virtual Machine (QVM), solves this problem by providing an extensible framework for pluggable virtual, or physical, quantum processing units (QPUs). It enables the execution of low level quantum assembly codes and returns the results of such executions.
Teaching Ionic Solvation Structure with a Monte Carlo Liquid Simulation Program
ERIC Educational Resources Information Center
Serrano, Agostinho; Santos, Flavia M. T.; Greca, Ileana M.
2004-01-01
The use of molecular dynamics and Monte Carlo methods has provided efficient means to simulate the behavior of molecular liquids and solutions. A Monte Carlo simulation program is used to compute the structure of liquid water and of water as a solvent to Na+, Cl-, and Ar on a personal computer to show that it is easily feasible to…
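The core of such a Monte Carlo liquid simulation is the Metropolis step: a randomly chosen molecule is given a small random displacement, and the move is accepted with probability min(1, exp(-ΔE/kT)). The sketch below shows that step for point particles interacting through a Lennard-Jones potential; it is a teaching illustration in the same spirit, not the classroom program cited above.

    import math
    import random

    def pair_energy(r2, epsilon=1.0, sigma=1.0):
        """Lennard-Jones pair energy as a function of squared separation."""
        s6 = (sigma * sigma / r2) ** 3
        return 4.0 * epsilon * (s6 * s6 - s6)

    def energy_with_others(particle, others):
        return sum(pair_energy(sum((a - b) ** 2 for a, b in zip(particle, o)))
                   for o in others)

    def metropolis_move(particles, beta, max_disp=0.1):
        """Attempt one single-particle displacement and accept it with the
        Metropolis criterion min(1, exp(-beta * dE))."""
        i = random.randrange(len(particles))
        old = particles[i]
        others = particles[:i] + particles[i + 1:]
        trial = tuple(x + random.uniform(-max_disp, max_disp) for x in old)
        d_e = energy_with_others(trial, others) - energy_with_others(old, others)
        if d_e <= 0.0 or random.random() < math.exp(-beta * d_e):
            particles[i] = trial          # accept the move
            return True
        return False                      # reject: keep the old configuration

    random.seed(0)
    particles = [(5 * random.random(), 5 * random.random(), 5 * random.random())
                 for _ in range(20)]
    accepted = sum(metropolis_move(particles, beta=1.0) for _ in range(1000))
    print("acceptance ratio:", accepted / 1000)

Sampling structural quantities such as ion-oxygen distances over many accepted configurations is what yields the solvation structure discussed in the article.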
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCaskey, Alexander J.
There is a lack of state-of-the-art quantum computing simulation software that scales on heterogeneous systems like Titan. Tensor Network Quantum Virtual Machine (TNQVM) provides a quantum simulator that leverages a distributed network of GPUs to simulate quantum circuits in a manner that leverages recent results from tensor network theory.
Electric/Hybrid Vehicle Simulation
NASA Technical Reports Server (NTRS)
Slusser, R. A.; Chapman, C. P.; Brennand, J. P.
1985-01-01
ELVEC computer program provides vehicle designer with simulation tool for detailed studies of electric and hybrid vehicle performance and cost. ELVEC simulates performance of user-specified electric or hybrid vehicle under user specified driving schedule profile or operating schedule. ELVEC performs vehicle design and life cycle cost analysis.
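A greatly simplified version of this kind of driving-schedule simulation is sketched below: the vehicle follows a prescribed speed-versus-time profile and the battery energy drawn is integrated from a simple road-load power model. The power model, all parameter values and the neglect of regeneration are illustrative assumptions, not ELVEC's formulation.

    def road_load_power(speed, accel, mass=1500.0, cd_a=0.6, crr=0.01,
                        rho=1.2, g=9.81):
        """Tractive power (W) = aerodynamic drag + rolling resistance + inertia."""
        drag = 0.5 * rho * cd_a * speed ** 3
        rolling = crr * mass * g * speed
        inertia = mass * accel * speed
        return drag + rolling + inertia

    def simulate_schedule(speeds, dt=1.0, drivetrain_eff=0.85):
        """speeds: target speed (m/s) at each second of the driving schedule.
        Returns battery energy used in kWh (regeneration ignored)."""
        energy_j = 0.0
        for k in range(1, len(speeds)):
            v = speeds[k]
            a = (speeds[k] - speeds[k - 1]) / dt
            p = road_load_power(v, a)
            if p > 0.0:
                energy_j += p / drivetrain_eff * dt
        return energy_j / 3.6e6

    # Illustrative stop-and-go profile: accelerate to 15 m/s, cruise, stop.
    profile = ([min(k, 15) for k in range(30)] + [15] * 60
               + [max(15 - k, 0) for k in range(20)])
    print(round(simulate_schedule(profile), 3), "kWh")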
CatSim: a new computer assisted tomography simulation environment
NASA Astrophysics Data System (ADS)
De Man, Bruno; Basu, Samit; Chandra, Naveen; Dunham, Bruce; Edic, Peter; Iatrou, Maria; McOlash, Scott; Sainath, Paavana; Shaughnessy, Charlie; Tower, Brendon; Williams, Eugene
2007-03-01
We present a new simulation environment for X-ray computed tomography, called CatSim. CatSim provides a research platform for GE researchers and collaborators to explore new reconstruction algorithms, CT architectures, and X-ray source or detector technologies. The main requirements for this simulator are accurate physics modeling, low computation times, and geometrical flexibility. CatSim allows simulating complex analytic phantoms, such as the FORBILD phantoms, including boxes, ellipsoids, elliptical cylinders, cones, and cut planes. CatSim incorporates polychromaticity, realistic quantum and electronic noise models, finite focal spot size and shape, finite detector cell size, detector cross-talk, detector lag or afterglow, bowtie filtration, finite detector efficiency, non-linear partial volume, scatter (variance-reduced Monte Carlo), and absorbed dose. We present an overview of CatSim along with a number of validation experiments.
Delivering Insight The History of the Accelerated Strategic Computing Initiative
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larzelere II, A R
2007-01-03
The history of the Accelerated Strategic Computing Initiative (ASCI) tells of the development of computational simulation into a third fundamental piece of the scientific method, on a par with theory and experiment. ASCI did not invent the idea, nor was it alone in bringing it to fruition. But ASCI provided the wherewithal - hardware, software, environment, funding, and, most of all, the urgency - that made it happen. On October 1, 2005, the Initiative completed its tenth year of funding. The advances made by ASCI over its first decade are truly incredible. Lawrence Livermore, Los Alamos, and Sandia National Laboratories, along with leadership provided by the Department of Energy's Defense Programs Headquarters, fundamentally changed computational simulation and how it is used to enable scientific insight. To do this, astounding advances were made in simulation applications, computing platforms, and user environments. ASCI dramatically changed existing - and forged new - relationships, both among the Laboratories and with outside partners. By its tenth anniversary, despite daunting challenges, ASCI had accomplished all of the major goals set at its beginning. The history of ASCI is about the vision, leadership, endurance, and partnerships that made these advances possible.
POLLUX: a program for simulated cloning, mutagenesis and database searching of DNA constructs.
Dayringer, H E; Sammons, S A
1991-04-01
Computer support for research in biotechnology has developed rapidly and has provided several tools to aid the researcher. This report describes the capabilities of new computer software developed in this laboratory to aid in the documentation and planning of experiments in molecular biology. The program, POLLUX, provides a graphical medium for the entry, edit and manipulation of DNA constructs and a textual format for display and edit of construct descriptive data. Program operation and procedures are designed to mimic the actual laboratory experiments with respect to capability and the order in which they are performed. Flexible control over the content of the computer-generated displays and program facilities is provided by a mouse-driven menu interface. Programmed facilities for mutagenesis, simulated cloning and searching of the database from networked workstations are described.
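Operations such as simulated mutagenesis and the planning of cloning steps ultimately reduce to manipulations of a sequence string and searches for recognition sites. The toy functions below illustrate a single-base substitution and a restriction-site search of the kind such a program records; they are generic illustrations, not POLLUX code.

    def point_mutate(sequence, position, new_base):
        """Return a copy of the construct with a single-base substitution
        (position is 0-based)."""
        if new_base not in "ACGT":
            raise ValueError("new_base must be one of A, C, G, T")
        return sequence[:position] + new_base + sequence[position + 1:]

    def find_restriction_sites(sequence, site):
        """Return the 0-based start positions of every occurrence of a
        recognition site, e.g. 'GAATTC' for EcoRI."""
        positions, start = [], 0
        while True:
            idx = sequence.find(site, start)
            if idx == -1:
                return positions
            positions.append(idx)
            start = idx + 1

    construct = "TTGACAGAATTCGGCTAGCAGAATTCAAGCTT"
    print(find_restriction_sites(construct, "GAATTC"))   # two EcoRI sites
    mutant = point_mutate(construct, 10, "G")
    print(find_restriction_sites(mutant, "GAATTC"))      # one site destroyed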
Diesel Bus Performance Simulation Program
DOT National Transportation Integrated Search
1979-04-01
A diesel bus performance computer simulation program was developed. This program provides information on acceleration, velocity, horsepower, distance traveled, and fuel consumption as a function of time from the originating station. The program was w...
Coupling MD Simulations and X-ray Absorption Spectroscopy to Study Ions in Solution
NASA Astrophysics Data System (ADS)
Marcos, E. Sánchez; Beret, E. C.; Martínez, J. M.; Pappalardo, R. R.; Ayala, R.; Muñoz-Páez, A.
2007-12-01
The structure of ionic solutions is a key point in understanding the physicochemical properties of electrolyte solutions. Among the small number of experimental techniques that can supply direct information on the ion environment, X-ray Absorption techniques (XAS) have gained importance during the last decades, although they are not free of difficulties in the data analysis required to provide reliable structures. Computer simulation of ions in solution is a theoretical alternative that provides information on the solvation structure. Thus, the use of computational chemistry can increase the understanding of these systems, although an accurate description of ionic solvation phenomena still represents a significant challenge to theoretical chemistry. We present: (a) the assignment of features in the XANES spectrum to well defined structural motifs in the ion environment, (b) MD-based evaluation of the EXAFS parameters used in the fitting procedure to ease the structural resolution, and (c) the use of the agreement between experimental and simulated XANES spectra to help in the choice of a given intermolecular potential for computer simulations. The chemical problems examined are: (a) the identification of the second hydration shell in dilute aqueous solutions of highly-charged cations, such as Cr3+, Rh3+, Ir3+, (b) the invisibility by XAS of certain structures characterized by computer simulations but exhibiting highly dynamical behavior, and (c) the solvation of Br- in acetonitrile.
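One quantity commonly passed from the simulations to the EXAFS analysis is the coordination number of each solvation shell, obtained by counting solvent atoms within radial cutoffs around the ion in the simulation snapshots. The sketch below performs that counting for one snapshot; the cutoff radii and the random "snapshot" are illustrative placeholders.

    import numpy as np

    def shell_coordination(ion_pos, oxygen_pos, r_first, r_second):
        """Count water oxygens in the first shell (r < r_first) and in the
        second shell (r_first <= r < r_second) around an ion."""
        d = np.linalg.norm(oxygen_pos - ion_pos, axis=1)
        first = int(np.sum(d < r_first))
        second = int(np.sum((d >= r_first) & (d < r_second)))
        return first, second

    rng = np.random.default_rng(2)
    oxygens = rng.uniform(-8.0, 8.0, size=(500, 3))   # placeholder snapshot, in Angstrom
    ion = np.zeros(3)
    print(shell_coordination(ion, oxygens, r_first=2.3, r_second=4.5))

Averaging such counts over many snapshots gives the shell populations that enter the EXAFS fitting parameters mentioned above.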
Parallel Simulation of Unsteady Turbulent Flames
NASA Technical Reports Server (NTRS)
Menon, Suresh
1996-01-01
Time-accurate simulation of turbulent flames in high Reynolds number flows is a challenging task since both fluid dynamics and combustion must be modeled accurately. To numerically simulate this phenomenon, very large computer resources (both time and memory) are required. Although current vector supercomputers are capable of providing adequate resources for simulations of this nature, their high cost and limited availability make practical use of such machines less than satisfactory. At the same time, the explicit time integration algorithms used in unsteady flow simulations often possess a very high degree of parallelism, making them very amenable to efficient implementation on large-scale parallel computers. Under these circumstances, distributed memory parallel computers offer an excellent near-term solution for greatly increased computational speed and memory, at a cost that may render the unsteady simulations of the type discussed above more feasible and affordable. This paper discusses the study of unsteady turbulent flames using a simulation algorithm that is capable of retaining high parallel efficiency on distributed memory parallel architectures. Numerical studies are carried out using large-eddy simulation (LES). In LES, the scales larger than the grid are computed using a time- and space-accurate scheme, while the unresolved small scales are modeled using eddy viscosity based subgrid models. This is acceptable for the moment/energy closure since the small scales primarily provide a dissipative mechanism for the energy transferred from the large scales. However, for combustion to occur, the species must first undergo mixing at the small scales and then come into molecular contact. Therefore, global models cannot be used. Recently, a new model for turbulent combustion was developed, in which combustion is modeled within the subgrid (small scales) using a methodology that simulates the mixing, the molecular transport, and the chemical kinetics within each LES grid cell. Finite-rate kinetics can be included without any closure, and this approach actually provides a means to predict the turbulent rates and the turbulent flame speed. The subgrid combustion model requires resolution of the local time scales associated with small-scale mixing, molecular diffusion and chemical kinetics and, therefore, within each grid cell, a significant amount of computation must be carried out before the large-scale (LES resolved) effects are incorporated. Therefore, this approach is uniquely suited for parallel processing and has been implemented on various systems such as the Intel Paragon, IBM SP-2, Cray T3D and SGI Power Challenge (PC) using the system-independent Message Passing Interface (MPI) library. In this paper, timing data on these machines are reported along with some characteristic results.
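To make the parallel structure concrete, the following is a minimal sketch (not taken from the NGA/LES code described above) of the domain-decomposition pattern such explicit time-integration solvers rely on: each MPI rank advances its own slab of the field and exchanges ghost cells with its neighbors before every step. It is written in Python with mpi4py purely for illustration; the grid size, the simple diffusion update, and the variable names are invented.

    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # Illustrative local slab of a scalar field, plus one ghost cell on each side.
    nx_local = 64
    u = np.zeros(nx_local + 2)
    u[1:-1] = np.random.rand(nx_local)

    left = rank - 1 if rank > 0 else MPI.PROC_NULL
    right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

    dt, dx, nu = 1e-3, 1e-2, 1e-3
    for step in range(100):
        # Exchange ghost cells with neighbors before each explicit update.
        comm.Sendrecv(u[1:2], dest=left, recvbuf=u[-1:], source=right)
        comm.Sendrecv(u[-2:-1], dest=right, recvbuf=u[0:1], source=left)
        # Explicit update; each rank works only on its own slab.
        u[1:-1] += nu * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])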
Evaluating the Influence of the Client Behavior in Cloud Computing.
Souza Pardo, Mário Henrique; Centurion, Adriana Molina; Franco Eustáquio, Paulo Sérgio; Carlucci Santana, Regina Helena; Bruschi, Sarita Mazzini; Santana, Marcos José
2016-01-01
This paper proposes a novel approach for the implementation of simulation scenarios, providing a client entity for cloud computing systems. The client entity allows the creation of scenarios in which the client behavior has an influence on the simulation, making the results more realistic. The proposed client entity is based on several characteristics that affect the performance of a cloud computing system, including different modes of submission and their behavior when the waiting time between requests (think time) is considered. The proposed characterization of the client enables the sending of either individual requests or groups of Web services in scenarios where the workload takes the form of bursts. The client entity is included in CloudSim, a framework for modelling and simulation of cloud computing. Experimental results show the influence of the client behavior on the performance of the services executed in a cloud computing system.
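CloudSim itself is a Java framework, and its actual API is not reproduced here; the short Python sketch below only illustrates the workload idea the abstract describes, namely a client that submits requests either individually or in bursts and waits a think time between submissions. The function name, burst size, and exponential think-time distribution are illustrative assumptions.

    import random

    def client_workload(n_requests, burst_size=1, mean_think_time=2.0, seed=0):
        """Generate (submit_time, request_id) pairs for a simulated client.

        burst_size=1 models individual submissions; larger values model bursts.
        Think time between submissions is drawn from an exponential distribution
        (an assumption; any empirical distribution could be substituted)."""
        rng = random.Random(seed)
        clock, schedule, rid = 0.0, [], 0
        while rid < n_requests:
            for _ in range(min(burst_size, n_requests - rid)):
                schedule.append((clock, rid))   # all requests in a burst share a submit time
                rid += 1
            clock += rng.expovariate(1.0 / mean_think_time)  # client "thinks" before the next burst
        return schedule

    # Example: 10 requests submitted in bursts of 3 with a 2 s mean think time.
    for t, r in client_workload(10, burst_size=3):
        print(f"t={t:6.2f}s  submit request {r}")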
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lowery, P.S.; Lessor, D.L.
Waste glass melter and in situ vitrification (ISV) processes represent the combination of electrical, thermal, and fluid flow phenomena to produce a stable waste-form product. Computational modeling of the thermal and fluid flow aspects of these processes provides a useful tool for assessing the potential performance of proposed system designs. These computations can be performed at a fraction of the cost of experiment. Consequently, computational modeling of vitrification systems can also provide an economical means for assessing the suitability of a proposed process application. The computational model described in this paper employs finite difference representations of the basic continuum conservation laws governing the thermal, fluid flow, and electrical aspects of the vitrification process -- i.e., conservation of mass, momentum, energy, and electrical charge. The resulting code is a member of the TEMPEST family of codes developed at the Pacific Northwest Laboratory (operated by Battelle for the US Department of Energy). This paper provides an overview of the numerical approach employed in TEMPEST. In addition, results from several TEMPEST simulations of sample waste glass melter and ISV processes are provided to illustrate the insights to be gained from computational modeling of these processes. 3 refs., 13 figs.
Fast Acceleration of 2D Wave Propagation Simulations Using Modern Computational Accelerators
Wang, Wei; Xu, Lifan; Cavazos, John; Huang, Howie H.; Kay, Matthew
2014-01-01
Recent developments in modern computational accelerators like Graphics Processing Units (GPUs) and coprocessors provide great opportunities for making scientific applications run faster than ever before. However, efficient parallelization of scientific code using new programming tools like CUDA requires a high level of expertise that is not available to many scientists. This, plus the fact that parallelized code is usually not portable to different architectures, creates major challenges for exploiting the full capabilities of modern computational accelerators. In this work, we sought to overcome these challenges by studying how to achieve both automated parallelization using OpenACC and enhanced portability using OpenCL. We applied our parallelization schemes using GPUs as well as the Intel Many Integrated Core (MIC) coprocessor to reduce the run time of wave propagation simulations. We used a well-established 2D cardiac action potential model as a specific case study. To the best of our knowledge, we are the first to study auto-parallelization of 2D cardiac wave propagation simulations using OpenACC. Our results identify several approaches that provide substantial speedups. The OpenACC-generated GPU code achieved a substantial speedup over the sequential implementation and required the addition of only a few OpenACC pragmas to the code. An OpenCL implementation provided speedups on GPUs over both the sequential implementation and a parallelized OpenMP implementation. An implementation of OpenMP on the Intel MIC coprocessor provided speedups with only a few code changes to the sequential implementation. We highlight that OpenACC provides an automatic, efficient, and portable approach to achieve parallelization of 2D cardiac wave simulations on GPUs. Our approach of using OpenACC, OpenCL, and OpenMP to parallelize this particular model on modern computational accelerators should be applicable to other computational models of wave propagation in multi-dimensional media. PMID:24497950
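The paper's code is not shown in the abstract; as a hedged illustration, the NumPy fragment below sketches the kind of explicit 2D stencil update (here a bare diffusion step of a potential field, the costliest part of simple monodomain cardiac models) whose loop nest is the natural target for an OpenACC pragma or an OpenCL/OpenMP kernel. Grid size, coefficients, and boundary treatment are invented for the example.

    import numpy as np

    def step(v, d, dt, dx):
        """One explicit diffusion update of the potential field v (illustrative only).

        In the accelerated versions described above, this update is the region
        that would be offloaded, e.g. wrapped in an OpenACC parallel-loop pragma
        or rewritten as an OpenCL/OpenMP kernel."""
        lap = (np.roll(v, 1, 0) + np.roll(v, -1, 0) +
               np.roll(v, 1, 1) + np.roll(v, -1, 1) - 4.0 * v) / dx**2   # periodic boundaries for brevity
        return v + dt * d * lap

    v = np.zeros((256, 256))
    v[120:136, 120:136] = 1.0            # initial stimulus patch
    for _ in range(1000):
        v = step(v, d=0.001, dt=0.01, dx=0.025)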
DAKOTA Design Analysis Kit for Optimization and Terascale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, Brian M.; Dalbey, Keith R.; Eldred, Michael S.
2010-02-24
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes (computational models) and iterative analysis methods. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and analysis of computational models on high performance computers. A user provides a set of DAKOTA commands in an input file and launches DAKOTA. DAKOTA invokes instances of the computational models, collects their results, and performs systems analyses. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, polynomial chaos, stochastic collocation, and epistemic methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as hybrid optimization, surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. Services for parallel computing, simulation interfacing, approximation modeling, fault tolerance, restart, and graphics are also included.
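DAKOTA's own input-file syntax is deliberately not reproduced here; instead, the generic Python sketch below shows the pattern the toolkit automates when it invokes instances of a computational model, collects results, and performs a systems analysis, in this case a simple full-factorial parameter study over a stand-in response function. All names and values are illustrative.

    import itertools

    def simulation(x1, x2):
        """Stand-in for an external simulation code (illustrative response function)."""
        return (x1 - 1.0) ** 2 + 10.0 * (x2 - x1 ** 2) ** 2

    # A full-factorial parameter study: one of the study types DAKOTA automates,
    # alongside optimization, UQ sampling, and calibration strategies.
    levels = [-1.0, -0.5, 0.0, 0.5, 1.0]
    results = [((x1, x2), simulation(x1, x2)) for x1, x2 in itertools.product(levels, levels)]
    best = min(results, key=lambda r: r[1])
    print(f"best point {best[0]} -> response {best[1]:.3f}")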
Nonadiabatic holonomic quantum computation using Rydberg blockade
NASA Astrophysics Data System (ADS)
Kang, Yi-Hao; Chen, Ye-Hong; Shi, Zhi-Cheng; Huang, Bi-Hua; Song, Jie; Xia, Yan
2018-04-01
In this paper, we propose a scheme for realizing nonadiabatic holonomic computation assisted by two atoms and the shortcuts to adiabaticity (STA). The blockade effect induced by the strong Rydberg-mediated interaction between two Rydberg atoms provides the possibility of simplifying the dynamics of the system, and the STA helps us design pulses for implementing the holonomic computation with high fidelity. Numerical simulations show the scheme is noise immune and decoherence resistant. Therefore, the current scheme may provide some useful perspectives for realizing nonadiabatic holonomic computation.
Computational studies of physical properties of Nb-Si based alloys
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ouyang, Lizhi
2015-04-16
The overall goal is to provide physical properties data supplementing experiments for thermodynamic modeling and other simulations, such as phase field simulations for microstructure and continuum simulations for mechanical properties. These predictive computational modeling and simulation efforts may yield insights that can be used to guide materials design, processing, and manufacture. Ultimately, they may lead to usable Nb-Si based alloys, which could play an important role in the current push towards greener energy. The main objectives of the proposed projects are: (1) developing a first principles method based supercell approach for calculating thermodynamic and mechanical properties of ordered crystals and disordered lattices including solid solutions; (2) application of the supercell approach to Nb-Si based alloys to compute physical properties data that can be used for thermodynamic modeling and other simulations to guide the optimal design of Nb-Si based alloys.
NASA Astrophysics Data System (ADS)
van Poppel, Bret; Owkes, Mark; Nelson, Thomas; Lee, Zachary; Sowell, Tyler; Benson, Michael; Vasquez Guzman, Pablo; Fahrig, Rebecca; Eaton, John; Kurman, Matthew; Kweon, Chol-Bum; Bravo, Luis
2014-11-01
In this work, we present high-fidelity Computational Fluid Dynamics (CFD) results of liquid fuel injection from a pressure-swirl atomizer and compare the simulations to experimental results obtained using both shadowgraphy and phase-averaged X-ray computed tomography (CT) scans. The CFD and experimental results focus on the dense near-nozzle region to identify the dominant mechanisms of breakup during primary atomization. Simulations are performed using the NGA code of Desjardins et al (JCP 227 (2008)) and employ the volume of fluid (VOF) method proposed by Owkes and Desjardins (JCP 270 (2013)), a second order accurate, un-split, conservative, three-dimensional VOF scheme providing second order density fluxes and capable of robust and accurate high density ratio simulations. Qualitative features and quantitative statistics are assessed and compared for the simulation and experimental results, including the onset of atomization, spray cone angle, and drop size and distribution.
Souris, Kevin; Lee, John Aldo; Sterpin, Edmond
2016-04-01
Accuracy in proton therapy treatment planning can be improved using Monte Carlo (MC) simulations. However, the long computation time of such methods hinders their use in clinical routine. This work aims to develop a fast multipurpose Monte Carlo simulation tool for proton therapy using massively parallel central processing unit (CPU) architectures. A new Monte Carlo code, called MCsquare (many-core Monte Carlo), has been designed and optimized for the last generation of Intel Xeon processors and Intel Xeon Phi coprocessors. These massively parallel architectures offer the flexibility and the computational power suitable to MC methods. The class-II condensed history algorithm of MCsquare provides a fast and yet accurate method of simulating heavy charged particles such as protons, deuterons, and alphas inside voxelized geometries. Hard ionizations, with energy losses above a user-specified threshold, are simulated individually, while soft events are regrouped in a multiple scattering theory. Elastic and inelastic nuclear interactions are sampled from ICRU 63 differential cross sections, thereby allowing for the computation of prompt gamma emission profiles. MCsquare has been benchmarked with the gate/geant4 Monte Carlo application for homogeneous and heterogeneous geometries. Comparisons with gate/geant4 for various geometries show deviations within 2%-1 mm. In spite of the limited memory bandwidth of the coprocessor, simulation time is below 25 s for 10^7 primary 200 MeV protons in average soft tissues using all Xeon Phi and CPU resources embedded in a single desktop unit. MCsquare exploits the flexibility of CPU architectures to provide a multipurpose MC simulation tool. Optimized code enables the use of accurate MC calculation within a reasonable computation time, adequate for clinical practice. MCsquare also simulates prompt gamma emission and can thus be used for in vivo range verification.
NASA Astrophysics Data System (ADS)
Yoon, Kyungho; Lee, Wonhye; Croce, Phillip; Cammalleri, Amanda; Yoo, Seung-Schik
2018-05-01
Transcranial focused ultrasound (tFUS) is emerging as a non-invasive brain stimulation modality. Complicated interactions between acoustic pressure waves and osseous tissue introduce many challenges in the accurate targeting of an acoustic focus through the cranium. Image-guidance accompanied by a numerical simulation is desired to predict the intracranial acoustic propagation through the skull; however, such simulations typically demand heavy computation, which warrants an expedited processing method to provide on-site feedback for the user in guiding the acoustic focus to a particular brain region. In this paper, we present a multi-resolution simulation method based on the finite-difference time-domain formulation to model the transcranial propagation of acoustic waves from a single-element transducer (250 kHz). The multi-resolution approach improved computational efficiency by providing the flexibility in adjusting the spatial resolution. The simulation was also accelerated by utilizing parallelized computation through the graphic processing unit. To evaluate the accuracy of the method, we measured the actual acoustic fields through ex vivo sheep skulls with different sonication incident angles. The measured acoustic fields were compared to the simulation results in terms of focal location, dimensions, and pressure levels. The computational efficiency of the presented method was also assessed by comparing simulation speeds at various combinations of resolution grid settings. The multi-resolution grids consisting of 0.5 and 1.0 mm resolutions gave acceptable accuracy (under 3 mm in terms of focal position and dimension, less than 5% difference in peak pressure ratio) with a speed compatible with semi real-time user feedback (within 30 s). The proposed multi-resolution approach may serve as a novel tool for simulation-based guidance for tFUS applications.
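As a rough illustration of the finite-difference time-domain formulation named above, the following Python fragment advances a one-dimensional acoustic pressure/velocity pair on a staggered grid with a 250 kHz source at one end. It is a uniform-resolution, 1D toy, not the authors' multi-resolution 3D skull model; the material constants and grid settings are illustrative only.

    import numpy as np

    # 1D acoustic FDTD (pressure p, particle velocity u); illustrative values only.
    c, rho = 1500.0, 1000.0          # sound speed (m/s) and density (kg/m^3) of water
    dx = 0.5e-3                      # 0.5 mm grid, matching the fine grid mentioned above
    dt = 0.5 * dx / c                # CFL-limited time step
    n, steps, f0 = 400, 2000, 250e3  # grid points, time steps, 250 kHz source

    p = np.zeros(n)                  # pressure at cell centers
    u = np.zeros(n + 1)              # velocity at cell faces (staggered)
    for it in range(steps):
        p[0] = np.sin(2.0 * np.pi * f0 * it * dt)        # single-element source at one end
        u[1:-1] -= dt / (rho * dx) * (p[1:] - p[:-1])    # update velocity from pressure gradient
        p -= rho * c**2 * dt / dx * (u[1:] - u[:-1])     # update pressure from velocity divergence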
Recent achievements in real-time computational seismology in Taiwan
NASA Astrophysics Data System (ADS)
Lee, S.; Liang, W.; Huang, B.
2012-12-01
Real-time computational seismology is now achievable, but it requires a tight connection between seismic databases and high-performance computing. We have developed a real-time moment tensor monitoring system (RMT) that uses continuous BATS records and a moment tensor inversion (CMT) technique. A real-time online earthquake simulation service (ROS) is also open to researchers and to public earthquake science education. Combining RMT with ROS, an earthquake report based on computational seismology can be provided within 5 minutes after an earthquake occurs (RMT obtains point-source information in less than 120 s; ROS completes a 3D simulation in less than 3 minutes). All of these computational results are now posted on the internet in real time. For more information, visit the real-time computational seismology earthquake report webpage (RCS).
System analysis for the Huntsville Operation Support Center distributed computer system
NASA Technical Reports Server (NTRS)
Ingels, F. M.
1986-01-01
A simulation model of the NASA Huntsville Operational Support Center (HOSC) was developed. This simulation model emulates the HYPERchannel Local Area Network (LAN) that ties together the various computers of HOSC. The HOSC system is a large installation of mainframe computers such as the Perkin Elmer 3200 series and the DEC VAX series. A series of six simulation exercises of the HOSC model is described using data sets provided by NASA. An analytical analysis of the ETHERNET LAN and the video terminal (VT) distribution system is presented. An interface analysis of the smart terminal network model, which allows the data flow requirements due to VTs on the ETHERNET LAN to be estimated, is also presented.
NASA Technical Reports Server (NTRS)
Yanosy, J. L.; Rowell, L. F.
1985-01-01
Efforts to make increasing use of suitable computer programs in the design of hardware have the potential to reduce expenditures. In this context, NASA has evaluated the benefits provided by software tools through an application to the Environmental Control and Life Support (ECLS) system. The present paper is concerned with the benefits obtained from the use of simulation tools in the case of the Air Revitalization System (ARS) of a Space Station life support system. Attention is given to the ARS functions and components, a computer program overview, a SAWD (solid amine water desorbed) bed model description, a model validation, and details regarding the simulation benefits.
PROTO-PLASM: parallel language for adaptive and scalable modelling of biosystems.
Bajaj, Chandrajit; DiCarlo, Antonio; Paoluzzi, Alberto
2008-09-13
This paper discusses the design goals and the first developments of PROTO-PLASM, a novel computational environment to produce libraries of executable, combinable and customizable computer models of natural and synthetic biosystems, aiming to provide a supporting framework for predictive understanding of structure and behaviour through multiscale geometric modelling and multiphysics simulations. Admittedly, the PROTO-PLASM platform is still in its infancy. Its computational framework--language, model library, integrated development environment and parallel engine--intends to provide patient-specific computational modelling and simulation of organs and biosystems, exploiting novel functionalities resulting from the symbolic combination of parametrized models of parts at various scales. PROTO-PLASM may define the model equations, but it is currently focused on the symbolic description of model geometry and on the parallel support of simulations. Conversely, CellML and SBML could be viewed as defining the behavioural functions (the model equations) to be used within a PROTO-PLASM program. Here we exemplify the basic functionalities of PROTO-PLASM, by constructing a schematic heart model. We also discuss multiscale issues with reference to the geometric and physical modelling of neuromuscular junctions.
Theory and Simulation of Multicomponent Osmotic Systems
Karunaweera, Sadish; Gee, Moon Bae; Weerasinghe, Samantha; Smith, Paul E.
2012-01-01
Most cellular processes occur in systems containing a variety of components many of which are open to material exchange. However, computer simulations of biological systems are almost exclusively performed in systems closed to material exchange. In principle, the behavior of biomolecules in open and closed systems will be different. Here, we provide a rigorous framework for the analysis of experimental and simulation data concerning open and closed multicomponent systems using the Kirkwood-Buff (KB) theory of solutions. The results are illustrated using computer simulations for various concentrations of the solutes Gly, Gly2 and Gly3 in both open and closed systems, and in the absence or presence of NaCl as a cosolvent. In addition, KB theory is used to help rationalize the aggregation properties of the solutes. Here one observes that the picture of solute association described by the KB integrals, which are directly related to the solution thermodynamics, and that provided by more physical clustering approaches are different. It is argued that the combination of KB theory and simulation data provides a simple and powerful tool for the analysis of complex multicomponent open and closed systems. PMID:23329894
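For readers unfamiliar with the notation, the central quantity of the KB analysis referred to above is the Kirkwood-Buff integral over the radial distribution function in an open (grand-canonical) system; in standard form it reads (this is the textbook definition, not an equation quoted from the paper):

    G_{ij} = 4\pi \int_0^{\infty} \left[ g^{\mu VT}_{ij}(r) - 1 \right] r^{2}\,\mathrm{d}r
           = V\left( \frac{\langle N_i N_j\rangle - \langle N_i\rangle\langle N_j\rangle}{\langle N_i\rangle\langle N_j\rangle} - \frac{\delta_{ij}}{\langle N_i\rangle} \right)

In closed (NVT or NpT) simulations the integral is necessarily truncated at a finite distance, which is one reason the open-versus-closed comparison made in the abstract matters.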
Providing a parallel and distributed capability for JMASS using SPEEDES
NASA Astrophysics Data System (ADS)
Valinski, Maria; Driscoll, Jonathan; McGraw, Robert M.; Meyer, Bob
2002-07-01
The Joint Modeling And Simulation System (JMASS) is a Tri-Service simulation environment that supports engineering and engagement-level simulations. As JMASS is expanded to support other Tri-Service domains, the current set of modeling services must be expanded for High Performance Computing (HPC) applications by adding support for advanced time-management algorithms, parallel and distributed topologies, and high speed communications. By providing support for these services, JMASS can better address modeling domains requiring parallel, computationally intense calculations such as clutter, vulnerability and lethality calculations, and underwater-based scenarios. A risk reduction effort implementing some HPC services for JMASS using the SPEEDES (Synchronous Parallel Environment for Emulation and Discrete Event Simulation) Simulation Framework has recently concluded. As an artifact of the JMASS-SPEEDES integration, not only can HPC functionality be brought to the JMASS program through SPEEDES, but an additional HLA-based capability can be demonstrated that further addresses interoperability issues. The JMASS-SPEEDES integration provided a means of adding HLA capability to preexisting JMASS scenarios through an implementation of the standard JMASS port communication mechanism that allows players to communicate.
Tactical Action Officer Intelligent Tutoring System (TAO ITS)
2006-01-01
scenario. As well as the intrinsic feedback that free-play simulations naturally provide a student, the TAO ITS provides detailed, useful extrinsic feedback ... incorporate use of free-play simulators into their curriculum, affordably. This is a major shortcoming of conventional CBT as student manipulation of ... tutoring systems are ideal for incorporating desktop free-play simulators into computer-based training since the software can stand in for a human
Computational Cosmology at the Bleeding Edge
NASA Astrophysics Data System (ADS)
Habib, Salman
2013-04-01
Large-area sky surveys are providing a wealth of cosmological information to address the mysteries of dark energy and dark matter. Observational probes based on tracking the formation of cosmic structure are essential to this effort, and rely crucially on N-body simulations that solve the Vlasov-Poisson equation in an expanding Universe. As statistical errors from survey observations continue to shrink, and cosmological probes increase in number and complexity, simulations are entering a new regime in their use as tools for scientific inference. Changes in supercomputer architectures provide another rationale for developing new parallel simulation and analysis capabilities that can scale to computational concurrency levels measured in the millions to billions. In this talk I will outline the motivations behind the development of the HACC (Hardware/Hybrid Accelerated Cosmology Code) extreme-scale cosmological simulation framework and describe its essential features. By exploiting a novel algorithmic structure that allows flexible tuning across diverse computer architectures, including accelerated and many-core systems, HACC has attained a performance of 14 PFlops on the IBM BG/Q Sequoia system at 69% of peak, using more than 1.5 million cores.
Paracousti-UQ: A Stochastic 3-D Acoustic Wave Propagation Algorithm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Preston, Leiph
Acoustic full waveform algorithms, such as Paracousti, provide deterministic solutions in complex, 3-D variable environments. In reality, environmental and source characteristics are often only known in a statistical sense. Thus, to fully characterize the expected sound levels within an environment, this uncertainty in environmental and source factors should be incorporated into the acoustic simulations. Performing Monte Carlo (MC) simulations is one method of assessing this uncertainty, but it can quickly become computationally intractable for realistic problems. An alternative method, using the technique of stochastic partial differential equations (SPDE), allows computation of the statistical properties of output signals at a fraction of the computational cost of MC. Paracousti-UQ solves the SPDE system of 3-D acoustic wave propagation equations and provides estimates of the uncertainty of the output simulated wave field (e.g., amplitudes, waveforms) based on estimated probability distributions of the input medium and source parameters. This report describes the derivation of the stochastic partial differential equations, their implementation, and comparison of Paracousti-UQ results with MC simulations using simple models.
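As a toy illustration of the Monte Carlo baseline that Paracousti-UQ is designed to avoid, the Python fragment below propagates uncertainty in a single input (sound speed) through a stand-in, closed-form transmission-loss "solver" and reports the resulting spread in output level. The one-line solver and its coefficients are invented; a real study would replace it with a full Paracousti run per sample, which is exactly what makes MC expensive.

    import numpy as np

    rng = np.random.default_rng(42)

    def toy_transmission_loss(c, r=1000.0, f=100.0):
        """Stand-in for a full wave-propagation solve: spherical spreading plus a
        crude, frequency- and sound-speed-dependent absorption term (made up)."""
        alpha = 0.05 * f / c                      # invented absorption coefficient (Np/m)
        return 20.0 * np.log10(r) + alpha * r * 8.686

    # Treat sound speed as uncertain: mean 1500 m/s, 1% standard deviation.
    samples = rng.normal(1500.0, 15.0, size=10_000)
    tl = toy_transmission_loss(samples)
    print(f"mean TL = {tl.mean():.2f} dB, std = {tl.std():.2f} dB")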
Simulators and virtual reality in surgical education.
Chou, Betty; Handa, Victoria L
2006-06-01
This article explores the pros and cons of virtual reality simulators, their abilities to train and assess surgical skills, and their potential future applications. Computer-based virtual reality simulators and more conventional box trainers are compared and contrasted. The virtual reality simulator provides objective assessment of surgical skills and immediate feedback to further enhance training. With this ability to provide standardized, unbiased assessment of surgical skills, the virtual reality trainer has the potential to be a tool for selecting, instructing, certifying, and recertifying gynecologists.
Simulations in a Science and Society Course.
ERIC Educational Resources Information Center
Maier, Mark H.; Venanzi, Thomas
1984-01-01
Provides a course outline which includes simulation exercises designed as in-class activities related to science and society interactions. Simulations focus on the IQ debate, sociobiology, nuclear weapons and nuclear strategy, nuclear power and radiation, the computer explosion, and cosmology. Indicates that learning improves when students take active…
Computational Flow Modeling of Human Upper Airway Breathing
NASA Astrophysics Data System (ADS)
Mylavarapu, Goutham
Computational modeling of biological systems has gained a lot of interest in biomedical research in the recent past. This thesis focuses on the application of computational simulations to study airflow dynamics in the human upper respiratory tract. With advancements in medical imaging, patient-specific geometries of anatomically accurate respiratory tracts can now be reconstructed from Magnetic Resonance Images (MRI) or Computed Tomography (CT) scans, with better and more accurate detail than traditional cadaver cast models. Computational studies using these individualized geometrical models have the advantages of non-invasiveness, ease, and minimal patient interaction, with improved accuracy over experimental and clinical studies. Numerical simulations can provide detailed flow fields including velocities, flow rates, airway wall pressure, shear stresses, and turbulence in an airway. Interpretation of these physical quantities will enable the development of efficient treatment procedures, medical devices, targeted drug delivery, etc. The hypothesis for this research is that computational modeling can predict the outcomes of a surgical intervention or a treatment plan prior to its application and will guide the physician in providing better treatment to the patients. In the current work, three different computational approaches, Computational Fluid Dynamics (CFD), Flow-Structure Interaction (FSI), and particle flow simulations, were used to investigate flow in airway geometries. The CFD approach assumes the airway wall to be rigid and is relatively easy to simulate, compared to the more challenging FSI approach, where interactions of airway wall deformations with the flow are also accounted for. The CFD methodology using different turbulence models is validated against experimental measurements in an airway phantom. Two case studies using CFD are demonstrated: one to quantify a pre- and post-operative airway, and another to perform virtual surgery to determine the best possible surgery in a constricted airway. The unsteady Large Eddy Simulation (LES) and steady Reynolds Averaged Navier Stokes (RANS) approaches in CFD modeling are discussed. The more challenging FSI approach is modeled first in a simple two-dimensional anatomical geometry, then extended to a simplified three-dimensional geometry, and finally applied to three-dimensionally accurate geometries. The concepts of virtual surgery and the differences from CFD are discussed. Finally, the influence of various drug delivery parameters on particle deposition efficiency in the airway anatomy is investigated through particle-flow simulations in a nasal airway model.
Research in parallel computing
NASA Technical Reports Server (NTRS)
Ortega, James M.; Henderson, Charles
1994-01-01
This report summarizes work on parallel computations for NASA Grant NAG-1-1529 for the period 1 Jan. - 30 June 1994. Short summaries on highly parallel preconditioners, target-specific parallel reductions, and simulation of delta-cache protocols are provided.
ERIC Educational Resources Information Center
Jackett, Dwane
1990-01-01
Described is a science activity which illustrates the principle of uncertainty using a computer simulation of bacterial reproduction. Procedures and results are discussed. Several illustrations of results are provided. The availability of a computer program is noted. (CW)
NASA Technical Reports Server (NTRS)
Badler, N. I.; Fishwick, P.; Taft, N.; Agrawala, M.
1985-01-01
The use of computer graphics to simulate the movement of articulated animals and mechanisms has a number of uses ranging over many fields. Human motion simulation systems can be useful in education, medicine, anatomy, physiology, and dance. In biomechanics, computer displays help to understand and analyze performance. Simulations can be used to help understand the effect of external or internal forces. Similarly, zero-gravity simulation systems should provide a means of designing and exploring the capabilities of hypothetical zero-gravity situations before actually carrying out such actions. The advantage of using a simulation of the motion is that one can experiment with variations of a maneuver before attempting to teach it to an individual. The zero-gravity motion simulation problem can be divided into two broad areas: human movement and behavior in zero-gravity, and simulation of articulated mechanisms.
The application of virtual reality systems as a support of digital manufacturing and logistics
NASA Astrophysics Data System (ADS)
Golda, G.; Kampa, A.; Paprocka, I.
2016-08-01
Modern trends in the development of computer-aided techniques are heading toward the integration of the design of competitive products with so-called "digital manufacturing and logistics", supported by computer simulation software. All phases of the product lifecycle, starting from the design of a new product, through planning and control of manufacturing, assembly, internal logistics and repairs, quality control, distribution to customers and after-sale service, up to its recycling or utilization, should be aided and managed by advanced product lifecycle management software packages. Important problems in providing an efficient flow of materials in the supply chain management of the whole product lifecycle, using computer simulation, are described in this paper. The authors pay attention to the processes of acquiring relevant information and correct data, necessary for virtual modeling and computer simulation of integrated manufacturing and logistics systems. The article describes possibilities of using virtual reality software applications for modeling and simulating production and logistics processes in an enterprise in different aspects of product lifecycle management. The authors demonstrate an effective method of creating computer simulations for digital manufacturing and logistics and show modeled and programmed examples and solutions. They pay attention to development trends and show options of the applications that go beyond the enterprise.
Computational Methods Development at Ames
NASA Technical Reports Server (NTRS)
Kwak, Dochan; Smith, Charles A. (Technical Monitor)
1998-01-01
This viewgraph presentation outlines the development at Ames Research Center of advanced computational methods to provide appropriate-fidelity computational analysis/design capabilities. Current thrusts of the Ames research include: 1) methods to enhance/accelerate viscous flow simulation procedures, and the development of hybrid/polyhedral-grid procedures for viscous flow; 2) the development of real-time transonic flow simulation procedures for a production wind tunnel, and intelligent data management technology; and 3) the validation of methods and the study of flow physics. The presentation gives historical precedents for the above research and speculates on its future course.
NASA Technical Reports Server (NTRS)
Yanosy, James L.
1988-01-01
Emulation/Simulation Computer Model (ESCM) computes the transient performance of a Space Station air revitalization subsystem with carbon dioxide removal provided by a solid amine water desorbed subsystem called SAWD. This manual describes the mathematical modeling and equations used in the ESCM. For the system as a whole and for each individual component, the fundamental physical and chemical laws which govern their operations are presented. Assumptions are stated, and when necessary, data is presented to support empirically developed relationships.
A Queue Simulation Tool for a High Performance Scientific Computing Center
NASA Technical Reports Server (NTRS)
Spear, Carrie; McGalliard, James
2007-01-01
The NASA Center for Computational Sciences (NCCS) at the Goddard Space Flight Center provides high performance highly parallel processors, mass storage, and supporting infrastructure to a community of computational Earth and space scientists. Long running (days) and highly parallel (hundreds of CPUs) jobs are common in the workload. NCCS management structures batch queues and allocates resources to optimize system use and prioritize workloads. NCCS technical staff use a locally developed discrete event simulation tool to model the impacts of evolving workloads, potential system upgrades, alternative queue structures and resource allocation policies.
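The NCCS tool itself is locally developed and not documented in this abstract; the Python sketch below only illustrates the kind of discrete-event batch-queue model it embodies: jobs with submit times, CPU counts, and runtimes compete for a fixed CPU pool under first-come-first-served scheduling, and the model reports queue waits. The workload numbers are invented, and the sketch assumes no job requests more CPUs than the pool holds.

    import heapq, random

    def simulate_queue(jobs, total_cpus):
        """jobs: list of (submit_time, cpus, runtime) tuples. Returns the mean wait.

        First-come-first-served: each job starts once enough CPUs are free.
        Assumes no single job requests more CPUs than total_cpus."""
        events = []                        # min-heap of (finish_time, cpus_released)
        free, now, waits = total_cpus, 0.0, []
        for submit, cpus, runtime in sorted(jobs):
            now = max(now, submit)
            while free < cpus:             # wait for running jobs to finish
                finish, released = heapq.heappop(events)
                now = max(now, finish)
                free += released
            waits.append(now - submit)
            free -= cpus
            heapq.heappush(events, (now + runtime, cpus))
        return sum(waits) / len(waits)

    random.seed(1)
    jobs = [(i * 5.0, random.choice([64, 128, 256]), random.uniform(60.0, 600.0))
            for i in range(200)]
    print(f"average wait: {simulate_queue(jobs, total_cpus=512):.1f} s")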
Temporal acceleration of spatially distributed kinetic Monte Carlo simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chatterjee, Abhijit; Vlachos, Dionisios G.
The computational intensity of kinetic Monte Carlo (KMC) simulation is a major impediment to simulating large length and time scales. In recent work, an approximate method for KMC simulation of spatially uniform systems, termed the binomial τ-leap method, was introduced [A. Chatterjee, D.G. Vlachos, M.A. Katsoulakis, Binomial distribution based τ-leap accelerated stochastic simulation, J. Chem. Phys. 122 (2005) 024112], where molecular bundles instead of individual processes are executed over coarse-grained time increments. This temporal coarse-graining can lead to significant computational savings, but its generalization to spatial lattice KMC simulation has not been realized yet. Here we extend the binomial τ-leap method to lattice KMC simulations by combining it with spatially adaptive coarse-graining. Absolute stability and computational speed-up analyses for spatial systems, along with simulations, provide insights into the conditions where accuracy and substantial acceleration of the new spatio-temporal coarse-graining method are ensured. Model systems demonstrate that the r-time increment criterion of Chatterjee et al. obeys the absolute stability limit for values of r up to near 1.
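As a minimal, hedged illustration of the τ-leap idea (not the authors' spatially adaptive lattice algorithm), the Python fragment below advances a single reversible adsorption/desorption process on N sites by coarse time increments, drawing the number of events of each type per increment from a binomial distribution instead of executing them one at a time.

    import numpy as np

    rng = np.random.default_rng(0)

    def binomial_tau_leap_step(n_occupied, n_sites, k_ads, k_des, tau):
        """Advance site occupancy by one coarse time increment tau.

        The number of events of each type fired during tau is drawn from a
        binomial distribution over the available candidates, instead of
        executing events one at a time as in exact KMC."""
        n_empty = n_sites - n_occupied
        p_ads = min(1.0, k_ads * tau)          # per-site firing probabilities over tau
        p_des = min(1.0, k_des * tau)
        adsorbed = rng.binomial(n_empty, p_ads)
        desorbed = rng.binomial(n_occupied, p_des)
        return n_occupied + adsorbed - desorbed

    n, t = 0, 0.0
    for _ in range(1000):
        n = binomial_tau_leap_step(n, n_sites=10_000, k_ads=1.0, k_des=0.5, tau=0.01)
        t += 0.01
    print(f"coverage after t={t:.1f}: {n / 10_000:.3f}")  # expected ~ k_ads/(k_ads+k_des) = 0.667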
Simulation and analysis of a geopotential research mission
NASA Technical Reports Server (NTRS)
Schutz, B. E.
1987-01-01
Computer simulations were performed for a Geopotential Research Mission (GRM) to enable the study of the gravitational sensitivity of the range-rate measurements between the two satellites and to provide a set of simulated measurements to assist in the evaluation of techniques developed for the determination of the gravity field. The simulations were conducted with two satellites in near circular, frozen orbits at 160 km altitude separated by 300 km. High precision numerical integration of the polar orbits was used with a gravitational field complete to degree and order 360. The set of simulated data for a mission duration of about 32 days was generated on a Cray X-MP computer. The results presented cover the most recent simulation, S8703, and include a summary of the numerical integration of the simulated trajectories, a summary of the requirements to compute nominal reference trajectories to meet the initial orbit determination requirements for the recovery of the geopotential, an analysis of the nature of the one-way integrated Doppler measurements associated with the simulation, and a discussion of the data set to be made available.
JETSPIN: A specific-purpose open-source software for simulations of nanofiber electrospinning
NASA Astrophysics Data System (ADS)
Lauricella, Marco; Pontrelli, Giuseppe; Coluzza, Ivan; Pisignano, Dario; Succi, Sauro
2015-12-01
We present the open-source computer program JETSPIN, specifically designed to simulate the electrospinning process of nanofibers. Its capabilities are shown with proper reference to the underlying model, as well as a description of the relevant input variables and associated test-case simulations. The various interactions included in the electrospinning model implemented in JETSPIN are discussed in detail. The code is designed to exploit different computational architectures, from single to parallel processor workstations. This paper provides an overview of JETSPIN, focusing primarily on its structure, parallel implementations, functionality, performance, and availability.
Interactive computer simulations of knee-replacement surgery.
Gunther, Stephen B; Soto, Gabriel E; Colman, William W
2002-07-01
Current surgical training programs in the United States are based on an apprenticeship model. This model is outdated because it does not provide conceptual scaffolding, promote collaborative learning, or offer constructive reinforcement. Our objective was to create a more useful approach by preparing students and residents for operative cases using interactive computer simulations of surgery. Total-knee-replacement surgery (TKR) is an ideal procedure to model on the computer because there is a systematic protocol for the procedure. Also, this protocol is difficult to learn by the apprenticeship model because of the multiple instruments that must be used in a specific order. We designed an interactive computer tutorial to teach medical students and residents how to perform knee-replacement surgery. We also aimed to reinforce the specific protocol of the operative procedure. Our final goal was to provide immediate, constructive feedback. We created a computer tutorial by generating three-dimensional wire-frame models of the surgical instruments. Next, we applied a surface to the wire-frame models using three-dimensional modeling. Finally, the three-dimensional models were animated to simulate the motions of an actual TKR. The tutorial is a step-by-step tutorial that teaches and tests the correct sequence of steps in a TKR. The student or resident must select the correct instruments in the correct order. The learner is encouraged to learn the stepwise surgical protocol through repetitive use of the computer simulation. Constructive feedback is acquired through a grading system, which rates the student's or resident's ability to perform the task in the correct order. The grading system also accounts for the time required to perform the simulated procedure. We evaluated the efficacy of this teaching technique by testing medical students who learned by the computer simulation and those who learned by reading the surgical protocol manual. Both groups then performed TKR on manufactured bone models using real instruments. Their technique was graded with the standard protocol. The students who learned on the computer simulation performed the task in a shorter time and with fewer errors than the control group. They were also more engaged in the learning process. Surgical training programs generally lack a consistent approach to preoperative education related to surgical procedures. This interactive computer tutorial has allowed us to make a quantum leap in medical student and resident teaching in our orthopedic department because the students actually participate in the entire process. Our technique provides a linear, sequential method of skill acquisition and direct feedback, which is ideally suited for learning stepwise surgical protocols. Since our initial evaluation has shown the efficacy of this program, we have implemented this teaching tool into our orthopedic curriculum. Our plans for future work with this simulator include modeling procedures involving other anatomic areas of interest, such as the hip and shoulder.
Simulator for neural networks and action potentials.
Baxter, Douglas A; Byrne, John H
2007-01-01
A key challenge for neuroinformatics is to devise methods for representing, accessing, and integrating vast amounts of diverse and complex data. A useful approach to represent and integrate complex data sets is to develop mathematical models [Arbib (The Handbook of Brain Theory and Neural Networks, pp. 741-745, 2003); Arbib and Grethe (Computing the Brain: A Guide to Neuroinformatics, 2001); Ascoli (Computational Neuroanatomy: Principles and Methods, 2002); Bower and Bolouri (Computational Modeling of Genetic and Biochemical Networks, 2001); Hines et al. (J. Comput. Neurosci. 17, 7-11, 2004); Shepherd et al. (Trends Neurosci. 21, 460-468, 1998); Sivakumaran et al. (Bioinformatics 19, 408-415, 2003); Smolen et al. (Neuron 26, 567-580, 2000); Vadigepalli et al. (OMICS 7, 235-252, 2003)]. Models of neural systems provide quantitative and modifiable frameworks for representing data and analyzing neural function. These models can be developed and solved using neurosimulators. One such neurosimulator is simulator for neural networks and action potentials (SNNAP) [Ziv (J. Neurophysiol. 71, 294-308, 1994)]. SNNAP is a versatile and user-friendly tool for developing and simulating models of neurons and neural networks. SNNAP simulates many features of neuronal function, including ionic currents and their modulation by intracellular ions and/or second messengers, and synaptic transmission and synaptic plasticity. SNNAP is written in Java and runs on most computers. Moreover, SNNAP provides a graphical user interface (GUI) and does not require programming skills. This chapter describes several capabilities of SNNAP and illustrates methods for simulating neurons and neural networks. SNNAP is available at http://snnap.uth.tmc.edu .
Novel 3D/VR interactive environment for MD simulations, visualization and analysis.
Doblack, Benjamin N; Allis, Tim; Dávila, Lilian P
2014-12-18
The increasing development of computing (hardware and software) in the last decades has impacted scientific research in many fields including materials science, biology, chemistry and physics among many others. A new computational system for the accurate and fast simulation and 3D/VR visualization of nanostructures is presented here, using the open-source molecular dynamics (MD) computer program LAMMPS. This alternative computational method uses modern graphics processors, NVIDIA CUDA technology and specialized scientific codes to overcome processing speed barriers common to traditional computing methods. In conjunction with a virtual reality system used to model materials, this enhancement allows the addition of accelerated MD simulation capability. The motivation is to provide a novel research environment which simultaneously allows visualization, simulation, modeling and analysis. The research goal is to investigate the structure and properties of inorganic nanostructures (e.g., silica glass nanosprings) under different conditions using this innovative computational system. The work presented outlines a description of the 3D/VR Visualization System and basic components, an overview of important considerations such as the physical environment, details on the setup and use of the novel system, a general procedure for the accelerated MD enhancement, technical information, and relevant remarks. The impact of this work is the creation of a unique computational system combining nanoscale materials simulation, visualization and interactivity in a virtual environment, which is both a research and teaching instrument at UC Merced.
Combining high performance simulation, data acquisition, and graphics display computers
NASA Technical Reports Server (NTRS)
Hickman, Robert J.
1989-01-01
Issues involved in the continuing development of an advanced simulation complex are discussed. This approach provides the capability to perform the majority of tests on advanced systems non-destructively. The controlled test environments can be replicated to examine the response of the systems under test to alternative treatments of the system control design, or to test the function and qualification of specific hardware. Field tests verify that the elements simulated in the laboratories are sufficient. The digital computer is hosted by a Digital Equipment Corp. MicroVAX computer with an Aptec Computer Systems Model 24 I/O computer performing the communication function. An Applied Dynamics International AD100 performs the high speed simulation computing and an Evans and Sutherland PS350 performs on-line graphics display. A Scientific Computer Systems SCS40 acts as a high performance FORTRAN program processor to support the complex by generating, from programs coded in FORTRAN, the numerous large files that are required for the real-time processing. Four programming languages are involved in the process: FORTRAN, ADSIM, ADRIO, and STAPLE. FORTRAN is employed on the MicroVAX host to initialize and terminate the simulation runs on the system. The generation of the data files on the SCS40 is also performed with FORTRAN programs. ADSIM and ADRIO are used to program the processing elements of the AD100 and its IOCP processor. STAPLE is used to program the Aptec DIP and DIA processors.
The Politics of City Planning Simulations.
ERIC Educational Resources Information Center
Kolson, Kenneth
This research paper presents an analysis of the computer simulation, SimCity, used for an urban city planning class. The data were gathered by actual use of the simulation and an electronic mail network was employed to secure impressions from users of the simulation. SimCity (developed by Maxis) provides the player with rules of human factors,…
Hygrothermal Simulation: A Tool for Building Envelope Design Analysis
Samuel V. Glass; Anton TenWolde; Samuel L. Zelinka
2013-01-01
Is it possible to gauge the risk of moisture problems while designing the building envelope? This article provides a brief introduction to computer-based hygrothermal (heat and moisture) simulation, shows how simulation can be useful as a design tool, and points out a number of important considerations regarding model inputs and limitations. Hygrothermal simulation...
Simulation: an evolving methodology for health administration education.
Taylor, J K; Moore, J A; Holland, M G
1985-01-01
Simulation provides a valuable addition to a university's teaching methods. Computer-assisted gaming is especially effective in teaching advanced business strategy and corporate policy when the nature and complexity of the simulation permit. The potential for using simulation techniques in postgraduate professional education and in managerial self-assessment appears to be significant over the next several years.
Computer-intensive simulation of solid-state NMR experiments using SIMPSON.
Tošner, Zdeněk; Andersen, Rasmus; Stevensson, Baltzar; Edén, Mattias; Nielsen, Niels Chr; Vosegaard, Thomas
2014-09-01
Conducting large-scale solid-state NMR simulations requires fast computer software, potentially in combination with efficient computational resources, to complete within a reasonable time frame. Such simulations may involve large spin systems, multiple-parameter fitting of experimental spectra, or multiple-pulse experiment design using parameter scan, non-linear optimization, or optimal control procedures. To efficiently accommodate such simulations, we present here an improved version of the widely distributed open-source SIMPSON NMR simulation software package adapted to contemporary high performance hardware setups. The software is optimized for fast performance on standard stand-alone computers, multi-core processors, and large clusters of identical nodes. We describe the novel features for fast computation including internal matrix manipulations, propagator setups and acquisition strategies. For efficient calculation of powder averages, we implemented the interpolation method of Alderman, Solum, and Grant, as well as the recently introduced fast Wigner transform interpolation technique. The potential of the optimal control toolbox is greatly enhanced by higher precision gradients in combination with the efficient optimization algorithm known as limited memory Broyden-Fletcher-Goldfarb-Shanno. In addition, advanced parallelization can be used in all types of calculations, providing significant time reductions. SIMPSON thus reflects current knowledge in the field of numerical simulations of solid-state NMR experiments. The efficiency and novel features are demonstrated on representative simulations. Copyright © 2014 Elsevier Inc. All rights reserved.
Hayes, Kathryn J; Eljiz, Kathy; Dadich, Ann; Fitzgerald, Janna-Anneke; Sloan, Terry
2015-01-01
The purpose of this paper is to provide a retrospective analysis of computer simulation's role in accelerating individual innovation adoption decisions. The process innovation examined is Lean Systems Thinking, and the organizational context is the imaging department of an Australian public hospital. Intrinsic case study methods including observation, interviews with radiology and emergency personnel about scheduling procedures, mapping patient appointment processes and document analysis were used over three years and then complemented with retrospective interviews with key hospital staff. The multiple data sources and methods were combined in a pragmatic and reflexive manner to explore an extreme case that provides potential to act as an instructive template for effective change. Computer simulation of process change ideas offered by staff to improve patient-flow accelerated the adoption of the process changes, largely because animated computer simulation permitted experimentation (trialability), provided observable predictions of change results (observability) and minimized perceived risk. The difficulty of making accurate comparisons between time periods in a health care setting is acknowledged. This work has implications for policy, practice and theory, particularly for inducing the rapid diffusion of process innovations to address challenges facing health service organizations and national health systems. Originality/value - The research demonstrates the value of animated computer simulation in presenting the need for change, identifying options, and predicting change outcomes and is the first work to indicate the importance of trialability, observability and risk reduction in individual adoption decisions in health services.
CARES/LIFE Software Commercialization
NASA Technical Reports Server (NTRS)
1995-01-01
The NASA Lewis Research Center has entered into a letter agreement with BIOSYM Technologies Inc. (now merged with Molecular Simulations Inc. (MSI)). Under this agreement, NASA will provide a developmental copy of the CARES/LIFE computer program to BIOSYM for evaluation. This computer code predicts the time-dependent reliability of a thermomechanically loaded component. BIOSYM will become familiar with CARES/LIFE, provide results of computations useful in validating the code, evaluate it for potential commercialization, and submit suggestions for improvements or extensions to the code or its documentation. If BIOSYM/Molecular Simulations reaches a favorable evaluation of CARES/LIFE, NASA will enter into negotiations for a cooperative agreement with BIOSYM/Molecular Simulations to further develop the code--adding features such as a user-friendly interface and other improvements. This agreement would give BIOSYM intellectual property rights in the modified codes, which they could protect and then commercialize. NASA would provide BIOSYM with the NASA-developed source codes and would agree to cooperate with BIOSYM in further developing the code. In return, NASA would receive certain use rights in the modified CARES/LIFE program. Presently BIOSYM Technologies Inc. has been involved with integration issues concerning its merger with Molecular Simulations Inc., since both companies used to compete in the computational chemistry market, and to some degree, in the materials market. Consequently, evaluation of the CARES/LIFE software is on hold for a month or two while the merger is finalized. Their interest in CARES continues, however, and they expect to get back to the evaluation by early November 1995.
Biocellion: accelerating computer simulation of multicellular biological system models.
Kang, Seunghwa; Kahan, Simon; McDermott, Jason; Flann, Nicholas; Shmulevich, Ilya
2014-11-01
Biological system behaviors are often the outcome of complex interactions among a large number of cells and their biotic and abiotic environment. Computational biologists attempt to understand, predict and manipulate biological system behavior through mathematical modeling and computer simulation. Discrete agent-based modeling (in combination with high-resolution grids to model the extracellular environment) is a popular approach for building biological system models. However, the computational complexity of this approach forces computational biologists to resort to coarser resolution approaches to simulate large biological systems. High-performance parallel computers have the potential to address the computing challenge, but writing efficient software for parallel computers is difficult and time-consuming. We have developed Biocellion, a high-performance software framework, to solve this computing challenge using parallel computers. To support a wide range of multicellular biological system models, Biocellion asks users to provide their model specifics by filling the function body of pre-defined model routines. Using Biocellion, modelers without parallel computing expertise can efficiently exploit parallel computers with less effort than writing sequential programs from scratch. We simulate cell sorting, microbial patterning and a bacterial system in soil aggregate as case studies. Biocellion runs on x86 compatible systems with the 64 bit Linux operating system and is freely available for academic use. Visit http://biocellion.com for additional information. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Educationally and Cost Effective: Computers in the Classroom.
ERIC Educational Resources Information Center
Agee, Roy
1986-01-01
The author states that the educational community must provide programs that assure students they will be able to learn how to use and control computers. He discusses micro labs, prerequisites to computer literacy, curriculum development, teaching methods, simulation projects, a systems analysis project, new job titles, and primary basic skills…
GATE Monte Carlo simulation of dose distribution using MapReduce in a cloud computing environment.
Liu, Yangchuan; Tang, Yuguo; Gao, Xin
2017-12-01
The GATE Monte Carlo simulation platform has good application prospects in treatment planning and quality assurance. However, accurate dose calculation using GATE is time consuming. The purpose of this study is to implement a novel cloud computing method for accurate GATE Monte Carlo simulation of dose distribution using MapReduce. An Amazon Machine Image installed with Hadoop and GATE is created to set up Hadoop clusters on Amazon Elastic Compute Cloud (EC2). Macros, the input files for GATE, are split into a number of self-contained sub-macros. Through Hadoop Streaming, the sub-macros are executed by GATE in Map tasks and the sub-results are aggregated into final outputs in Reduce tasks. As an evaluation, GATE simulations were performed in a cubical water phantom for X-ray photons of 6 and 18 MeV. The parallel simulation on the cloud computing platform is as accurate as the single-threaded simulation on a local server, and the cloud-based simulation time is approximately inversely proportional to the number of worker nodes. For the simulation of 10 million photons on a cluster with 64 worker nodes, speed-ups of 41× and 32× were achieved relative to the single-worker-node case and the single-threaded case, respectively. A test of Hadoop's fault tolerance showed that the simulation correctness was not affected by the failure of some worker nodes. The results verify that the proposed method provides a feasible cloud computing solution for GATE.
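The Map/Reduce split described in the abstract can be illustrated with a Hadoop Streaming mapper/reducer pair. The sketch below is illustrative only: it assumes each input line names one self-contained GATE sub-macro and that each run writes a simple (voxel, dose) tally file, which are simplifications rather than the authors' actual scripts.

```python
#!/usr/bin/env python
# Sketch of a Hadoop Streaming mapper/reducer pair in the spirit of the
# approach above: each map task runs GATE on one self-contained sub-macro
# and emits (voxel, dose) pairs; the reduce task sums doses per voxel.
# The "Gate" call and the tally file format are simplified placeholders.
import subprocess
import sys


def mapper():
    for line in sys.stdin:                      # each line names one sub-macro
        macro = line.strip()
        if not macro:
            continue
        subprocess.run(["Gate", macro], check=True)   # run the sub-simulation
        with open(macro + ".dose.txt") as tally:      # hypothetical tally file
            for record in tally:
                voxel, dose = record.split()
                print(f"{voxel}\t{dose}")


def reducer():
    totals = {}
    for line in sys.stdin:                      # Hadoop sorts by key before reduce
        voxel, dose = line.split("\t")
        totals[voxel] = totals.get(voxel, 0.0) + float(dose)
    for voxel, dose in sorted(totals.items()):
        print(f"{voxel}\t{dose}")


if __name__ == "__main__":
    mapper() if sys.argv[1] == "map" else reducer()
```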
Beyond standard model calculations with Sherpa
Höche, Stefan; Kuttimalai, Silvan; Schumann, Steffen; Siegert, Frank
2015-03-24
We present a fully automated framework as part of the Sherpa event generator for the computation of tree-level cross sections in beyond Standard Model scenarios, making use of model information given in the Universal FeynRules Output format. Elementary vertices are implemented into C++ code automatically and provided to the matrix-element generator Comix at runtime. Widths and branching ratios for unstable particles are computed from the same building blocks. The corresponding decays are simulated with spin correlations. Parton showers, QED radiation and hadronization are added by Sherpa, providing a full simulation of arbitrary BSM processes at the hadron level.
DIATOM (Data Initialization and Modification) Library Version 7.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crawford, David A.; Schmitt, Robert G.; Hensinger, David M.
DIATOM is a library that provides numerical simulation software with a computational geometry front end that can be used to build up complex problem geometries from collections of simpler shapes. The library provides a parser which allows for application-independent geometry descriptions to be embedded in simulation software input decks. Descriptions take the form of collections of primitive shapes and/or CAD input files and material properties that can be used to describe complex spatial and temporal distributions of numerical quantities (often called “database variables” or “fields”) to help define starting conditions for numerical simulations. The capability is designed to be general purpose, robust and computationally efficient. By using a combination of computational geometry and recursive divide-and-conquer approximation techniques, a wide range of primitive shapes are supported to arbitrary degrees of fidelity, controllable through user input and limited only by machine resources. Through the use of call-back functions, numerical simulation software can request the value of a field at any time or location in the problem domain. Typically, this is used only for defining initial conditions, but the capability is not limited to just that use. The most recent version of DIATOM provides the ability to import the solution field from one numerical solution as input for another.
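The call-back pattern described above can be sketched generically as follows; the class and function names are hypothetical illustrations of the idea and are not the DIATOM API.

```python
# Hypothetical sketch of the general pattern: build a geometry from primitive
# shapes, attach field values (e.g., density), and expose a call-back that a
# host code can query at any point. Names are illustrative, not the DIATOM API.
from dataclasses import dataclass


@dataclass
class Sphere:
    center: tuple
    radius: float
    fields: dict                      # e.g., {"density": 2.7}

    def contains(self, x, y, z):
        cx, cy, cz = self.center
        return (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2 <= self.radius ** 2


@dataclass
class Box:
    lo: tuple
    hi: tuple
    fields: dict

    def contains(self, x, y, z):
        return all(l <= v <= h for v, l, h in zip((x, y, z), self.lo, self.hi))


def make_field_callback(shapes, background):
    """Return f(name, x, y, z): last listed shape containing the point wins."""
    def field_at(name, x, y, z):
        value = background.get(name)
        for shape in shapes:
            if shape.contains(x, y, z):
                value = shape.fields.get(name, value)
        return value
    return field_at


# Usage: the host simulation calls the call-back while filling initial conditions.
geometry = [Box((0, 0, 0), (10, 10, 10), {"density": 1.0}),
            Sphere((5, 5, 5), 2.0, {"density": 8.9})]
density_at = make_field_callback(geometry, {"density": 0.0})
print(density_at("density", 5.0, 5.0, 5.0))   # inside the sphere -> 8.9
```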
STEPS: A Simulated, Tutorable Physics Student.
ERIC Educational Resources Information Center
Ur, Sigalit; VanLehn, Kurt
1995-01-01
Describes a simulated student that learns by interacting with a human tutor. Tests suggest that simulated students, when developed past the prototype stage, could be valuable for training human tutors. Provides a computational cognitive task analysis of the skill of learning from a tutor that is useful for designing intelligent tutoring systems.…
The Energy-Environment Simulator as a Classroom Aid.
ERIC Educational Resources Information Center
Sell, Nancy J.; Van Koevering, Thomas E.
1981-01-01
Energy-Environment Simulators, provided by the U.S. Department of Energy, can be used to help individuals experience the effects of unbridled energy consumption for the next century on a national or worldwide scale. The simulator described is a specially designed analog computer which models the real-world energy situation. (MP)
Reinforce Networking Theory with OPNET Simulation
ERIC Educational Resources Information Center
Guo, Jinhua; Xiang, Weidong; Wang, Shengquan
2007-01-01
As networking systems have become more complex and expensive, hands-on experiments based on networking simulation have become essential for teaching the key computer networking topics to students. The simulation approach is the most cost effective and highly useful because it provides a virtual environment for an assortment of desirable features…
Analyzing Interaction Patterns to Verify a Simulation/Game Model
ERIC Educational Resources Information Center
Myers, Rodney Dean
2012-01-01
In order for simulations and games to be effective for learning, instructional designers must verify that the underlying computational models being used have an appropriate degree of fidelity to the conceptual models of their real-world counterparts. A simulation/game that provides incorrect feedback is likely to promote misunderstanding and…
Parameter Sweep and Optimization of Loosely Coupled Simulations Using the DAKOTA Toolkit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elwasif, Wael R; Bernholdt, David E; Pannala, Sreekanth
2012-01-01
The increasing availability of large scale computing capabilities has accelerated the development of high-fidelity coupled simulations. Such simulations typically involve the integration of models that implement various aspects of the complex phenomena under investigation. Coupled simulations are playing an integral role in fields such as climate modeling, earth systems modeling, rocket simulations, computational chemistry, fusion research, and many other computational fields. Model coupling provides scientists with systematic ways to virtually explore the physical, mathematical, and computational aspects of the problem. Such exploration is rarely done using a single execution of a simulation, but rather by aggregating the results from many simulation runs that, together, serve to bring to light novel knowledge about the system under investigation. Furthermore, it is often the case (particularly in engineering disciplines) that the study of the underlying system takes the form of an optimization regime, where the control parameter space is explored to optimize an objective function that captures system realizability, cost, performance, or a combination thereof. Novel and flexible frameworks that facilitate the integration of the disparate models into a holistic simulation are used to perform this research, while making efficient use of the available computational resources. In this paper, we describe the integration of the DAKOTA optimization and parameter sweep toolkit with the Integrated Plasma Simulator (IPS), a component-based framework for loosely coupled simulations. The integration allows DAKOTA to exploit the internal task and resource management of the IPS to dynamically instantiate simulation instances within a single IPS instance, allowing for greater control over the trade-off between efficiency of resource utilization and time to completion. We present a case study showing the use of the combined DAKOTA-IPS system to aid in the design of a lithium ion battery (LIB) cell, by studying a coupled system involving the electrochemistry and ion transport at the lower length scales and thermal energy transport at the device scales. The DAKOTA-IPS system provides a flexible tool for use in optimization and parameter sweep studies involving loosely coupled simulations that is suitable for use in situations where changes to the constituent components in the coupled simulation are impractical due to intellectual property or code heritage issues.
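The driver pattern described above (sweep a control parameter space, launch loosely coupled simulation instances, collect an objective) can be sketched generically. This is not the DAKOTA or IPS interface; run_simulation is a stand-in for a real coupled run, and the parameter names are invented for illustration.

```python
# Generic sketch of a parameter-sweep/optimization driver: launch simulation
# instances concurrently and collect an objective value from each run.
# Illustrative only; does not use the actual DAKOTA or IPS interfaces.
from concurrent.futures import ProcessPoolExecutor
from itertools import product


def run_simulation(params):
    """Stand-in for one coupled-simulation instance; returns an objective."""
    electrode_thickness, porosity = params
    # A real driver would write an input deck, submit the run, and parse output.
    return (electrode_thickness - 80.0) ** 2 + 10.0 * (porosity - 0.35) ** 2


def sweep(thicknesses, porosities, max_workers=4):
    cases = list(product(thicknesses, porosities))
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        objectives = list(pool.map(run_simulation, cases))
    return min(zip(objectives, cases))          # best (objective, parameters)


if __name__ == "__main__":
    best = sweep(thicknesses=[60, 80, 100], porosities=[0.25, 0.35, 0.45])
    print("best objective %.3f at parameters %s" % best)
```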
Cosmic Reionization on Computers: Properties of the Post-reionization IGM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gnedin, Nickolay Y.; Becker, George D.; Fan, Xiaohui
2017-05-19
Here, we present a comparison between several observational tests of the post-reionization IGM and the numerical simulations of reionization completed under the Cosmic Reionization On Computers (CROC) project. The CROC simulations match the gap distribution reasonably well, and also provide a good match for the distribution of peak heights, but there is a notable lack of wide peaks in the simulated spectra and the flux PDFs are poorly matched in the narrow redshift interval 5.5 < z < 5.7, with the match at other redshifts being significantly better, albeit not exact. Both discrepancies are related: simulations show more opacity than the data.
Parallelization of a Fully-Distributed Hydrologic Model using Sub-basin Partitioning
NASA Astrophysics Data System (ADS)
Vivoni, E. R.; Mniszewski, S.; Fasel, P.; Springer, E.; Ivanov, V. Y.; Bras, R. L.
2005-12-01
A primary obstacle towards advances in watershed simulations has been the limited computational capacity available to most models. The growing trend of model complexity, data availability and physical representation has not been matched by adequate developments in computational efficiency. This situation has created a serious bottleneck which limits existing distributed hydrologic models to small domains and short simulations. In this study, we present novel developments in the parallelization of a fully-distributed hydrologic model. Our work is based on the TIN-based Real-time Integrated Basin Simulator (tRIBS), which provides continuous hydrologic simulation using a multiple resolution representation of complex terrain based on a triangulated irregular network (TIN). While the use of TINs reduces computational demand, the sequential version of the model is currently limited over large basins (>10,000 km2) and long simulation periods (>1 year). To address this, a parallel MPI-based version of the tRIBS model has been implemented and tested using high performance computing resources at Los Alamos National Laboratory. Our approach utilizes domain decomposition based on sub-basin partitioning of the watershed. A stream reach graph based on the channel network structure is used to guide the sub-basin partitioning. Individual sub-basins or sub-graphs of sub-basins are assigned to separate processors to carry out internal hydrologic computations (e.g. rainfall-runoff transformation). Routed streamflow from each sub-basin forms the major hydrologic data exchange along the stream reach graph. Individual sub-basins also share subsurface hydrologic fluxes across adjacent boundaries. We demonstrate how the sub-basin partitioning provides computational feasibility and efficiency for a set of test watersheds in northeastern Oklahoma. We compare the performance of the sequential and parallelized versions to highlight the efficiency gained as the number of processors increases. We also discuss how the coupled use of TINs and parallel processing can lead to feasible long-term simulations in regional watersheds while preserving basin properties at high-resolution.
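A minimal sketch of the sub-basin decomposition idea, using mpi4py rather than the tRIBS code itself: each rank integrates its assigned sub-basins and routed streamflow is exchanged along a toy stream-reach graph. The graph, ownership rule, and runoff values are placeholders.

```python
# Illustrative sketch of sub-basin partitioning (not the tRIBS implementation).
# Each rank computes the hydrology of its assigned sub-basins and sends routed
# streamflow downstream along the stream-reach graph.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Toy reach graph: sub-basin id -> downstream sub-basin id (None = outlet).
downstream = {0: 2, 1: 2, 2: None}
owner = {basin: basin % size for basin in downstream}      # simple assignment
mine = [b for b, r in owner.items() if r == rank]

local_outflow = {}
for basin in mine:
    # Stand-in for the internal rainfall-runoff computation of one sub-basin.
    local_outflow[basin] = 10.0 * (basin + 1)

# Exchange routed streamflow: send each sub-basin's outflow to the rank that
# owns its downstream neighbour, then sum the inflows received locally.
sendbuf = [[] for _ in range(size)]
for basin, flow in local_outflow.items():
    if downstream[basin] is not None:
        sendbuf[owner[downstream[basin]]].append((downstream[basin], flow))
received = comm.alltoall(sendbuf)

inflow = {b: 0.0 for b in mine}
for messages in received:
    for basin, flow in messages:
        inflow[basin] += flow
print(f"rank {rank}: inflow to my sub-basins = {inflow}")
```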
Workflow Management Systems for Molecular Dynamics on Leadership Computers
NASA Astrophysics Data System (ADS)
Wells, Jack; Panitkin, Sergey; Oleynik, Danila; Jha, Shantenu
Molecular Dynamics (MD) simulations play an important role in a range of disciplines from Material Science to Biophysical systems and account for a large fraction of cycles consumed on computing resources. Increasingly science problems require the successful execution of ''many'' MD simulations as opposed to a single MD simulation. There is a need to provide scalable and flexible approaches to the execution of the workload. We present preliminary results on the Titan computer at the Oak Ridge Leadership Computing Facility that demonstrate a general capability to manage workload execution agnostic of a specific MD simulation kernel or execution pattern, and in a manner that integrates disparate grid-based and supercomputing resources. Our results build upon our extensive experience of distributed workload management in the high-energy physics ATLAS project using PanDA (Production and Distributed Analysis System), coupled with recent conceptual advances in our understanding of workload management on heterogeneous resources. We will discuss how we will generalize these initial capabilities towards a more production level service on DOE leadership resources. This research is sponsored by US DOE/ASCR and used resources of the OLCF computing facility.
REVEAL: An Extensible Reduced Order Model Builder for Simulation and Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agarwal, Khushbu; Sharma, Poorva; Ma, Jinliang
2013-04-30
Many science domains need to build computationally efficient and accurate representations of high fidelity, computationally expensive simulations. These computationally efficient versions are known as reduced-order models. This paper presents the design and implementation of a novel reduced-order model (ROM) builder, the REVEAL toolset. This toolset generates ROMs based on science- and engineering-domain specific simulations executed on high performance computing (HPC) platforms. The toolset encompasses a range of sampling and regression methods that can be used to generate a ROM, automatically quantifies the ROM accuracy, and provides support for an iterative approach to improve ROM accuracy. REVEAL is designed to be extensible in order to utilize the core functionality with any simulator that has published input and output formats. It also defines programmatic interfaces to include new sampling and regression techniques so that users can ‘mix and match’ mathematical techniques to best suit the characteristics of their model. In this paper, we describe the architecture of REVEAL and demonstrate its usage with a computational fluid dynamics model used in carbon capture.
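The sample-regress-quantify loop that such a ROM builder automates can be sketched in a few lines; this generic example uses a stub in place of the expensive simulator and is not the REVEAL toolset.

```python
# Generic sketch of the sample -> regress -> quantify-accuracy loop that a
# reduced-order-model builder automates (illustrative; not the REVEAL toolset).
# The "expensive_simulation" stub stands in for an HPC run of the full model.
import numpy as np


def expensive_simulation(x):
    """Stand-in for the high-fidelity simulator (one scalar output)."""
    return np.sin(3.0 * x[0]) + 0.5 * x[1] ** 2


def build_rom(n_samples=50, degree=3, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(0.0, 1.0, size=(n_samples, 2))        # sampling plan
    y = np.array([expensive_simulation(x) for x in X])    # run the simulator

    # Regression: fit a polynomial response surface by least squares.
    def features(X):
        cols = [np.ones(len(X))]
        for d in range(1, degree + 1):
            cols += [X[:, 0] ** d, X[:, 1] ** d]
        return np.column_stack(cols)

    coef, *_ = np.linalg.lstsq(features(X), y, rcond=None)
    rom = lambda X: features(np.atleast_2d(X)) @ coef

    # Accuracy quantification on an independent validation set.
    Xv = rng.uniform(0.0, 1.0, size=(200, 2))
    yv = np.array([expensive_simulation(x) for x in Xv])
    rmse = np.sqrt(np.mean((rom(Xv) - yv) ** 2))
    return rom, rmse


if __name__ == "__main__":
    rom, rmse = build_rom()
    print(f"validation RMSE of the reduced-order model: {rmse:.4f}")
```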
Computational modeling of cardiac hemodynamics: Current status and future outlook
NASA Astrophysics Data System (ADS)
Mittal, Rajat; Seo, Jung Hee; Vedula, Vijay; Choi, Young J.; Liu, Hang; Huang, H. Howie; Jain, Saurabh; Younes, Laurent; Abraham, Theodore; George, Richard T.
2016-01-01
The proliferation of four-dimensional imaging technologies, increasing computational speeds, improved simulation algorithms, and the widespread availability of powerful computing platforms is enabling simulations of cardiac hemodynamics with unprecedented speed and fidelity. Since cardiovascular disease is intimately linked to cardiovascular hemodynamics, accurate assessment of the patient's hemodynamic state is critical for the diagnosis and treatment of heart disease. Unfortunately, while a variety of invasive and non-invasive approaches for measuring cardiac hemodynamics are in widespread use, they still only provide an incomplete picture of the hemodynamic state of a patient. In this context, computational modeling of cardiac hemodynamics presents as a powerful non-invasive modality that can fill this information gap, and significantly impact the diagnosis as well as the treatment of cardiac disease. This article reviews the current status of this field as well as the emerging trends and challenges in cardiovascular health, computing, modeling and simulation and that are expected to play a key role in its future development. Some recent advances in modeling and simulations of cardiac flow are described by using examples from our own work as well as the research of other groups.
Computer image generation: Reconfigurability as a strategy in high fidelity space applications
NASA Technical Reports Server (NTRS)
Bartholomew, Michael J.
1989-01-01
The demand for realistic, high fidelity, computer image generation systems to support space simulation is well established. However, as the number and diversity of space applications increase, the complexity and cost of computer image generation systems also increase. One strategy used to harmonize cost with varied requirements is establishment of a reconfigurable image generation system that can be adapted rapidly and easily to meet new and changing requirements. The reconfigurability strategy through the life cycle of system conception, specification, design, implementation, operation, and support for high fidelity computer image generation systems are discussed. The discussion is limited to those issues directly associated with reconfigurability and adaptability of a specialized scene generation system in a multi-faceted space applications environment. Examples and insights gained through the recent development and installation of the Improved Multi-function Scene Generation System at Johnson Space Center, Systems Engineering Simulator are reviewed and compared with current simulator industry practices. The results are clear; the strategy of reconfigurability applied to space simulation requirements provides a viable path to supporting diverse applications with an adaptable computer image generation system.
NASA Technical Reports Server (NTRS)
Kim, B. F.; Moorjani, K.; Phillips, T. E.; Adrian, F. J.; Bohandy, J.; Dolecek, Q. E.
1993-01-01
A method for characterization of granular superconducting thin films has been developed which encompasses both the morphological state of the sample and its fabrication process parameters. The broad scope of this technique is due to the synergism between experimental measurements and their interpretation using numerical simulation. Two novel technologies form the substance of this system: the magnetically modulated resistance method for characterizing superconductors; and a powerful new computer peripheral, the Parallel Information Processor card, which provides enhanced computing capability for PC computers. This enhancement allows PC computers to operate at speeds approaching that of supercomputers. This makes atomic scale simulations possible on low cost machines. The present development of this system involves the integration of these two technologies using mesoscale simulations of thin film growth. A future stage of development will incorporate atomic scale modeling.
Development of a fourth generation predictive capability maturity model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hills, Richard Guy; Witkowski, Walter R.; Urbina, Angel
2013-09-01
The Predictive Capability Maturity Model (PCMM) is an expert elicitation tool designed to characterize and communicate completeness of the approaches used for computational model definition, verification, validation, and uncertainty quantification associated with an intended application. The primary application of this tool at Sandia National Laboratories (SNL) has been for physics-based computational simulations in support of nuclear weapons applications. The two main goals of a PCMM evaluation are 1) the communication of computational simulation capability, accurately and transparently, and 2) the development of input for effective planning. As a result of the increasing importance of computational simulation to SNL's mission, the PCMM has evolved through multiple generations with the goal to provide more clarity, rigor, and completeness in its application. This report describes the approach used to develop the fourth generation of the PCMM.
NASA Technical Reports Server (NTRS)
Welp, D. W.; Brown, R. A.; Ullman, D. G.; Kuhner, M. B.
1974-01-01
A computer simulation program which models a commercial short-haul aircraft operating in the civil air system was developed. The purpose of the program is to evaluate the effect of a given aircraft avionics capability on the ability of the aircraft to perform on-time carrier operations. The program outputs consist primarily of those quantities which can be used to determine direct operating costs. These include: (1) schedule reliability or delays, (2) repairs/replacements, (3) fuel consumption, and (4) cancellations. More comprehensive models of the terminal area environment were added and a simulation of an existing airline operation was conducted to obtain a form of model verification. The capability of the program to provide comparative results (sensitivity analysis) was then demonstrated by modifying the aircraft avionics capability for additional computer simulations.
NASA Astrophysics Data System (ADS)
Painter, S.; Moulton, J. D.; Berndt, M.; Coon, E.; Garimella, R.; Lewis, K. C.; Manzini, G.; Mishra, P.; Travis, B. J.; Wilson, C. J.
2012-12-01
The frozen soils of the Arctic and subarctic regions contain vast amounts of stored organic carbon. This carbon is vulnerable to release to the atmosphere as temperatures warm and permafrost degrades. Understanding the response of the subsurface and surface hydrologic system to degrading permafrost is key to understanding the rate, timing, and chemical form of potential carbon releases to the atmosphere. Simulating the hydrologic system in degrading permafrost regions is challenging because of the potential for topographic evolution and associated drainage network reorganization as permafrost thaws and massive ground ice melts. The critical process models required for simulating hydrology include subsurface thermal hydrology of freezing/thawing soils, thermal processes within ice wedges, mechanical deformation processes, overland flow, and surface energy balances including snow dynamics. A new simulation tool, the Arctic Terrestrial Simulator (ATS), is being developed to simulate these coupled processes. The computational infrastructure must accommodate fully unstructured grids that track evolving topography, allow accurate solutions on distorted grids, provide robust and efficient solutions on highly parallel computer architectures, and enable flexibility in the strategies for coupling among the various processes. The ATS is based on Amanzi (Moulton et al. 2012), an object-oriented multi-process simulator written in C++ that provides much of the necessary computational infrastructure. Status and plans for the ATS including major hydrologic process models and validation strategies will be presented. Highly parallel simulations of overland flow using high-resolution digital elevation maps of polygonal patterned ground landscapes demonstrate the feasibility of the approach. Simulations coupling three-phase subsurface thermal hydrology with a simple thaw-induced subsidence model illustrate the strong feedbacks among the processes. D. Moulton, M. Berndt, M. Day, J. Meza, et al., High-Level Design of Amanzi, the Multi-Process High Performance Computing Simulator, Technical Report ASCEM-HPC-2011-03-1, DOE Environmental Management, 2012.
Simulation tools for robotics research and assessment
NASA Astrophysics Data System (ADS)
Fields, MaryAnne; Brewer, Ralph; Edge, Harris L.; Pusey, Jason L.; Weller, Ed; Patel, Dilip G.; DiBerardino, Charles A.
2016-05-01
The Robotics Collaborative Technology Alliance (RCTA) program focuses on four overlapping technology areas: Perception, Intelligence, Human-Robot Interaction (HRI), and Dexterous Manipulation and Unique Mobility (DMUM). In addition, the RCTA program has a requirement to assess progress of this research in standalone as well as integrated form. Since the research is evolving and the robotic platforms with unique mobility and dexterous manipulation are in the early development stage and very expensive, an alternate approach is needed for efficient assessment. Simulation of robotic systems, platforms, sensors, and algorithms, is an attractive alternative to expensive field-based testing. Simulation can provide insight during development and debugging unavailable by many other means. This paper explores the maturity of robotic simulation systems for applications to real-world problems in robotic systems research. Open source (such as Gazebo and Moby), commercial (Simulink, Actin, LMS), government (ANVEL/VANE), and the RCTA-developed RIVET simulation environments are examined with respect to their application in the robotic research domains of Perception, Intelligence, HRI, and DMUM. Tradeoffs for applications to representative problems from each domain are presented, along with known deficiencies and disadvantages. In particular, no single robotic simulation environment adequately covers the needs of the robotic researcher in all of the domains. Simulation for DMUM poses unique constraints on the development of physics-based computational models of the robot, the environment and objects within the environment, and the interactions between them. Most current robot simulations focus on quasi-static systems, but dynamic robotic motion places an increased emphasis on the accuracy of the computational models. In order to understand the interaction of dynamic multi-body systems, such as limbed robots, with the environment, it may be necessary to build component-level computational models to provide the necessary simulation fidelity for accuracy. However, the Perception domain remains the most problematic for adequate simulation performance due to the often cartoon nature of computer rendering and the inability to model realistic electromagnetic radiation effects, such as multiple reflections, in real-time.
Conservative boundary conditions for 3D gas dynamics problems
NASA Technical Reports Server (NTRS)
Gerasimov, B. P.; Karagichev, A. B.; Semushin, S. A.
1986-01-01
A method is described for 3D-gas dynamics computer simulation in regions of complicated shape by means of nonadjusted rectangular grids providing unified treatment of various problems. Some test problem computation results are given.
ERIC Educational Resources Information Center
Moore, John W., Ed.
1981-01-01
Provides short descriptions of chemists' applications of computers in instruction: an interactive instructional program for Instrumental-Qualitative Organic Analysis; question-and-answer exercises in organic chemistry; computerized organic nomenclature drills; integration of theoretical and descriptive materials; acid-base titration simulation;…
Program For Analysis Of Metal-Matrix Composites
NASA Technical Reports Server (NTRS)
Murthy, P. L. N.; Mital, S. K.
1994-01-01
METCAN (METal matrix Composite ANalyzer) is a computer program used to computationally simulate the nonlinear behavior of high-temperature metal-matrix composite structural components in specific applications, providing comprehensive analyses of thermal and mechanical performance. Written in FORTRAN 77.
Quantum Simulation of Helium Hydride Cation in a Solid-State Spin Register.
Wang, Ya; Dolde, Florian; Biamonte, Jacob; Babbush, Ryan; Bergholm, Ville; Yang, Sen; Jakobi, Ingmar; Neumann, Philipp; Aspuru-Guzik, Alán; Whitfield, James D; Wrachtrup, Jörg
2015-08-25
Ab initio computation of molecular properties is one of the most promising applications of quantum computing. While this problem is widely believed to be intractable for classical computers, efficient quantum algorithms exist which have the potential to vastly accelerate research throughput in fields ranging from material science to drug discovery. Using a solid-state quantum register realized in a nitrogen-vacancy (NV) defect in diamond, we compute the bond dissociation curve of the minimal basis helium hydride cation, HeH+. Moreover, we report an energy uncertainty (given our model basis) of the order of 10^-14 hartree, which is 10 orders of magnitude below the desired chemical precision. As NV centers in diamond provide a robust and straightforward platform for quantum information processing, our work provides an important step toward a fully scalable solid-state implementation of a quantum chemistry simulator.
An immersed boundary method for modeling a dirty geometry data
NASA Astrophysics Data System (ADS)
Onishi, Keiji; Tsubokura, Makoto
2017-11-01
We present a robust, fast, and low-preparation-cost immersed boundary method (IBM) for simulating incompressible high-Reynolds-number flow around highly complex geometries. The method is achieved by dispersing the momentum via an axial linear projection and by an approximate-domain assumption that satisfies mass conservation in the cells containing the wall. The methodology has been verified against analytical theory and wind tunnel experiment data. Next, we simulate the problem of flow around a rotating object and demonstrate the applicability of the methodology to moving-geometry problems. The approach offers a route to obtaining quick solutions on next-generation large-scale supercomputers. This research was supported by MEXT as ``Priority Issue on Post-K computer'' (Development of innovative design and production processes) and used computational resources of the K computer provided by the RIKEN Advanced Institute for Computational Science.
NASA Technical Reports Server (NTRS)
Ferzali, Wassim; Zacharakis, Vassilis; Upadhyay, Triveni; Weed, Dennis; Burke, Gregory
1995-01-01
The ICAO Aeronautical Mobile Communications Panel (AMCP) completed the drafting of the Aeronautical Mobile Satellite Service (AMSS) Standards and Recommended Practices (SARP's) and the associated Guidance Material and submitted these documents to the ICAO Air Navigation Commission (ANC) for ratification in May 1994. This effort encompassed an extensive, multi-national SARP's validation. As part of this activity, the US Federal Aviation Administration (FAA) sponsored an effort to validate the SARP's via computer simulation. This paper provides a description of this effort. Specifically, it describes: (1) the approach selected for the creation of a high-fidelity AMSS computer model; (2) the test traffic generation scenarios; and (3) the resultant AMSS performance assessment. More recently, the AMSS computer model was also used to provide AMSS performance statistics in support of the RTCA standardization activities. This paper describes this effort as well.
A parallel implementation of an off-lattice individual-based model of multicellular populations
NASA Astrophysics Data System (ADS)
Harvey, Daniel G.; Fletcher, Alexander G.; Osborne, James M.; Pitt-Francis, Joe
2015-07-01
As computational models of multicellular populations include ever more detailed descriptions of biophysical and biochemical processes, the computational cost of simulating such models limits their ability to generate novel scientific hypotheses and testable predictions. While developments in microchip technology continue to increase the power of individual processors, parallel computing offers an immediate increase in available processing power. To make full use of parallel computing technology, it is necessary to develop specialised algorithms. To this end, we present a parallel algorithm for a class of off-lattice individual-based models of multicellular populations. The algorithm divides the spatial domain between computing processes and comprises communication routines that ensure the model is correctly simulated on multiple processors. The parallel algorithm is shown to accurately reproduce the results of a deterministic simulation performed using a pre-existing serial implementation. We test the scaling of computation time, memory use and load balancing as more processes are used to simulate a cell population of fixed size. We find approximate linear scaling of both speed-up and memory consumption on up to 32 processor cores. Dynamic load balancing is shown to provide speed-up for non-regular spatial distributions of cells in the case of a growing population.
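A highly simplified sketch of the spatial decomposition with boundary (ghost) exchange described above, using mpi4py; it is illustrative only and not the authors' implementation. The strip layout, cell counts, and interaction radius are invented for the example.

```python
# Illustrative mpi4py sketch (not the authors' code): the domain is split into
# strips along x, each process owns the cells in its strip, and cell positions
# near a strip boundary are exchanged with the neighbouring process so that
# local force calculations can see cells owned elsewhere.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

domain_length, interaction_radius = 100.0, 1.5
x_lo = domain_length * rank / size
x_hi = domain_length * (rank + 1) / size

# Each process generates the cells that fall inside its own strip.
rng = np.random.default_rng(rank)
cells = rng.uniform([x_lo, 0.0], [x_hi, 10.0], size=(50, 2))

# Select cells within one interaction radius of each strip boundary.
left_halo = cells[cells[:, 0] < x_lo + interaction_radius]
right_halo = cells[cells[:, 0] > x_hi - interaction_radius]

left, right = rank - 1, rank + 1
ghosts = []
if left >= 0:
    ghosts.append(comm.sendrecv(left_halo, dest=left, source=left))
if right < size:
    ghosts.append(comm.sendrecv(right_halo, dest=right, source=right))

n_ghost = sum(len(g) for g in ghosts)
print(f"rank {rank}: {len(cells)} owned cells, {n_ghost} ghost cells received")
```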
Improving operational plume forecasts
NASA Astrophysics Data System (ADS)
Balcerak, Ernie
2012-04-01
Forecasting how plumes of particles, such as radioactive particles from a nuclear disaster, will be transported and dispersed in the atmosphere is an important but computationally challenging task. During the Fukushima nuclear disaster in Japan, operational plume forecasts were produced each day, but as the emissions continued, previous emissions were not included in the simulations used for forecasts because it became impractical to rerun the simulations each day from the beginning of the accident. Draxler and Rolph examine whether it is possible to improve plume simulation speed and flexibility as conditions and input data change. The authors use a method known as a transfer coefficient matrix approach that allows them to simulate many radionuclides using only a few generic species for the computation. Their simulations work faster by dividing the computation into separate independent segments in such a way that the most computationally time consuming pieces of the calculation need to be done only once. This makes it possible to provide real-time operational plume forecasts by continuously updating the previous simulations as new data become available. They tested their method using data from the Fukushima incident to show that it performed well. (Journal of Geophysical Research-Atmospheres, doi:10.1029/2011JD017205, 2012)
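The transfer coefficient matrix idea can be illustrated with a toy example: the expensive dispersion runs populate a matrix of unit-emission dilution factors once, after which any revised emission time series reduces to a matrix-vector product. All numbers below are invented for illustration.

```python
# Toy illustration of the transfer coefficient matrix (TCM) idea: dilution
# factors for unit emissions in each source period are computed once (the
# expensive dispersion runs), after which any emission scenario is just a
# matrix-vector product that can be updated as new source estimates arrive.
import numpy as np

# tcm[i, j]: concentration at receptor i per unit emission during period j,
# normally produced by one dispersion-model run per source period.
tcm = np.array([[2.0e-9, 1.0e-9, 0.0],
                [5.0e-9, 3.0e-9, 1.0e-9],
                [1.0e-9, 4.0e-9, 2.0e-9],
                [0.0,    1.0e-9, 5.0e-9]])

emissions = np.array([1.0e15, 5.0e14, 2.0e14])   # activity emitted per period
concentration = tcm @ emissions                  # concentration at each receptor
print(concentration)

# When the source-term estimate for period 2 is revised, only the product is
# recomputed -- the dispersion simulations themselves are not rerun.
emissions[2] = 8.0e14
print(tcm @ emissions)
```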
Software systems for modeling articulated figures
NASA Technical Reports Server (NTRS)
Phillips, Cary B.
1989-01-01
Research in computer animation and simulation of human task performance requires sophisticated geometric modeling and user interface tools. The software for a research environment should present the programmer with a powerful but flexible substrate of facilities for displaying and manipulating geometric objects, yet insure that future tools have a consistent and friendly user interface. Jack is a system which provides a flexible and extensible programmer and user interface for displaying and manipulating complex geometric figures, particularly human figures in a 3D working environment. It is a basic software framework for high-performance Silicon Graphics IRIS workstations for modeling and manipulating geometric objects in a general but powerful way. It provides a consistent and user-friendly interface across various applications in computer animation and simulation of human task performance. Currently, Jack provides input and control for applications including lighting specification and image rendering, anthropometric modeling, figure positioning, inverse kinematics, dynamic simulation, and keyframe animation.
HYDES: A generalized hybrid computer program for studying turbojet or turbofan engine dynamics
NASA Technical Reports Server (NTRS)
Szuch, J. R.
1974-01-01
This report describes HYDES, a hybrid computer program capable of simulating one-spool turbojet, two-spool turbojet, or two-spool turbofan engine dynamics. HYDES is also capable of simulating two- or three-stream turbofans with or without mixing of the exhaust streams. The program is intended to reduce the time required for implementing dynamic engine simulations. HYDES was developed for running on the Lewis Research Center's Electronic Associates (EAI) 690 Hybrid Computing System and satisfies the 16384-word core-size and hybrid-interface limits of that machine. The program could be modified for running on other computing systems. The use of HYDES to simulate a single-spool turbojet and a two-spool, two-stream turbofan engine is demonstrated. The form of the required input data is shown and samples of output listings (teletype) and transient plots (x-y plotter) are provided. HYDES is shown to be capable of performing both steady-state design and off-design analyses and transient analyses.
Computer Simulation Shows the Effect of Communication on Day of Surgery Patient Flow.
Taaffe, Kevin; Fredendall, Lawrence; Huynh, Nathan; Franklin, Jennifer
2015-07-01
To improve patient flow in a surgical environment, practitioners and academicians often use process mapping and simulation as tools to evaluate and recommend changes. We used simulations to help staff visualize the effect of communication and coordination delays that occur on the day of surgery. Perioperative services staff participated in tabletop exercises in which they chose the delays that were most important to eliminate. Using a day-of-surgery computer simulation model, the elimination of delays was tested and the results were shared with the group. This exercise, repeated for multiple groups of staff, provided an understanding of not only the dynamic events taking place, but also how small communication delays can contribute to a significant loss in efficiency and the ability to provide timely care. Survey results confirmed these understandings. Copyright © 2015 AORN, Inc. Published by Elsevier Inc. All rights reserved.
Shakhawath Hossain, Md; Bergstrom, D J; Chen, X B
2015-12-01
The in vitro chondrocyte cell culture for cartilage tissue regeneration in a perfusion bioreactor is a complex process. Mathematical modeling and computational simulation can provide important insights into the culture process, which would be helpful for selecting culture conditions to improve the quality of the developed tissue constructs. However, simulation of the cell culture process is a challenging task due to the complicated interaction between the cells and local fluid flow and nutrient transport inside the complex porous scaffolds. In this study, a mathematical model and computational framework has been developed to simulate the three-dimensional (3D) cell growth in a porous scaffold placed inside a bi-directional flow perfusion bioreactor. The model was developed by taking into account the two-way coupling between the cell growth and local flow field and associated glucose concentration, and then used to perform a resolved-scale simulation based on the lattice Boltzmann method (LBM). The simulation predicts the local shear stress, glucose concentration, and 3D cell growth inside the porous scaffold for a period of 30 days of cell culture. The predicted cell growth rate was in good overall agreement with the experimental results available in the literature. This study demonstrates that the bi-directional flow perfusion culture system can enhance the homogeneity of the cell growth inside the scaffold. The model and computational framework developed is capable of providing significant insight into the culture process, thus providing a powerful tool for the design and optimization of the cell culture process. © 2015 Wiley Periodicals, Inc.
Study of Wind Effects on Unique Buildings
NASA Astrophysics Data System (ADS)
Olenkov, V.; Puzyrev, P.
2017-11-01
The article deals with a numerical simulation of wind effects on the building of the Church of the Intercession of the Holy Virgin in the village of Bulzi in the Chelyabinsk region. We present a calculation algorithm and the resulting pressure fields, velocity fields, fields of kinetic energy of the wind stream, and streamlines. Computational fluid dynamics (CFD) emerged three decades ago at the interface of numerical mathematics and theoretical hydromechanics and has become a separate branch of science whose subject is the numerical simulation of fluid and gas flows and the solution of the associated problems with methods that rely on computer systems. This scientific field, which is of great practical value, is developing intensively. The growth in CFD calculations is driven by improvements in computer technology and by the creation of multipurpose, easy-to-use CFD packages that are available to a wide group of researchers and cope with various tasks. Such programs are not only competitive with physical experiments but sometimes provide the only opportunity to answer the research questions. The following advantages of computer simulation can be pointed out: a) reduction in the time spent on the design and development of a model in comparison with a real experiment (variation of boundary conditions); b) a numerical experiment allows the simulation of conditions that are not reproducible in environmental tests (use of an ideal gas as the medium); c) computational gas dynamics methods provide a researcher with the complete and ample information needed to fully describe the processes in the experiment; d) computer calculations are more economical than experiments; e) the possibility of modifying a computational model, which ensures efficient timing (changing the size of wall-layer cells in accordance with the chosen turbulence model).
Algorithms and architecture for multiprocessor based circuit simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deutsch, J.T.
Accurate electrical simulation is critical to the design of high performance integrated circuits. Logic simulators can verify function and give first-order timing information. Switch level simulators are more effective at dealing with charge sharing than standard logic simulators, but cannot provide accurate timing information or discover DC problems. Delay estimation techniques and cell level simulation can be used in constrained design methods, but must be tuned for each application, and circuit simulation must still be used to generate the cell models. None of these methods has the guaranteed accuracy that many circuit designers desire, and none can provide detailed waveform information. Detailed electrical-level simulation can predict circuit performance if devices and parasitics are modeled accurately. However, the computational requirements of conventional circuit simulators make it impractical to simulate current large circuits. In this dissertation, the implementation of Iterated Timing Analysis (ITA), a relaxation-based technique for accurate circuit simulation, on a special-purpose multiprocessor is presented. The ITA method is an SOR-Newton, relaxation-based method which uses event-driven analysis and selective trace to exploit the temporal sparsity of the electrical network. Because event-driven selective trace techniques are employed, this algorithm lends itself to implementation on a data-driven computer.
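A greatly simplified sketch of event-driven, selective-trace relaxation on a small linear resistive network (plain Gauss-Seidel rather than the SOR-Newton iteration used by ITA); the network topology, conductances, and injected currents are hypothetical. Only nodes whose neighbours changed are re-evaluated, which is what exploits the temporal sparsity of large circuits.

```python
# Event-driven, selective-trace relaxation on a toy linear resistive network.
# Gauss-Seidel stand-in for ITA's SOR-Newton iteration; values are invented.
from collections import deque

# Node -> list of (neighbour, conductance); plus a conductance to ground.
conductance = {
    "a": [("b", 1.0), ("c", 2.0)],
    "b": [("a", 1.0), ("c", 1.0)],
    "c": [("a", 2.0), ("b", 1.0)],
}
to_ground = {"a": 0.5, "b": 0.0, "c": 1.0}
injected = {"a": 1.0, "b": 0.0, "c": -1.0}       # current injected at each node

v = {n: 0.0 for n in conductance}
queue, queued = deque(conductance), set(conductance)
tol, evaluations = 1e-9, 0

while queue:
    node = queue.popleft()
    queued.discard(node)
    g_sum = to_ground[node] + sum(g for _, g in conductance[node])
    v_new = (injected[node] + sum(g * v[m] for m, g in conductance[node])) / g_sum
    if abs(v_new - v[node]) > tol:               # an "event": schedule the fan-out
        v[node] = v_new
        for neighbour, _ in conductance[node]:
            if neighbour not in queued:
                queue.append(neighbour)
                queued.add(neighbour)
    evaluations += 1

print(f"converged node voltages after {evaluations} node evaluations: {v}")
```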
Lang, Alon; Melzer, Ehud; Bar-Meir, Simon; Eliakim, Rami; Ziv, Amitai
2006-11-01
The continuing development of computer-based medical simulators provides an ideal platform for simulator-assisted training programs for medical trainees. Computer-based endoscopic simulators provide a virtual reality environment for training endoscopic procedures. This study illustrates a comprehensive training model combining endoscopic simulators with simulated (actor) patients (SPs). The aim was to evaluate the effectiveness of a comprehensive simulation workshop from the trainee perspective. Four case studies were developed with emphasis on communication skills. Three workshops with 10 fellows in each were conducted. During each workshop the trainees spent half of the time in SP case studies and the remaining half working with computerized endoscopic simulators with continuous guidance by an expert endoscopist. Questionnaires were completed by the fellows at the end of the workshop. Seventy percent of the fellows felt that the endoscopic simulator was close or very close to reality for gastroscopy and 63% for colonoscopy. Eighty-eight percent thought the close guidance was important for the learning process with the simulator. Eighty percent felt that the case studies were an important learning experience for risk management. Further evaluation of multi-modality simulation workshops in gastroenterologist training is needed to identify how best to incorporate this form of instruction into training for gastroenterologists.
Terascale Visualization: Multi-resolution Aspirin for Big-Data Headaches
NASA Astrophysics Data System (ADS)
Duchaineau, Mark
2001-06-01
Recent experience on the Accelerated Strategic Computing Initiative (ASCI) computers shows that computational physicists are successfully producing a prodigious collection of numbers on several thousand processors. But with this wealth of numbers comes an unprecedented difficulty in processing and moving them to provide useful insight and analysis. In this talk, a few simulations are highlighted where recent advancements in multiple-resolution mathematical representations and algorithms have provided some hope of seeing most of the physics of interest while keeping within the practical limits of the post-simulation storage and interactive data-exploration resources. A whole host of visualization research activities was spawned by the 1999 Gordon Bell Prize-winning computation of a shock-tube experiment showing Richtmyer-Meshkov turbulent instabilities. This includes efforts for the entire data pipeline from running simulation to interactive display: wavelet compression of field data, multi-resolution volume rendering and slice planes, out-of-core extraction and simplification of mixing-interface surfaces, shrink-wrapping to semi-regularize the surfaces, semi-structured surface wavelet compression, and view-dependent display-mesh optimization. More recently on the 12 TeraOps ASCI platform, initial results from a 5120-processor, billion-atom molecular dynamics simulation showed that 30-to-1 reductions in storage size can be achieved with no human-observable errors for the analysis required in simulations of supersonic crack propagation. This made it possible to store the 25 trillion bytes worth of simulation numbers in the available storage, which was under 1 trillion bytes. While multi-resolution methods and related systems are still in their infancy, for the largest-scale simulations there is often no other choice should the science require detailed exploration of the results.
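The compress-then-check trade-off described above can be illustrated with a small wavelet-thresholding example using PyWavelets; this is a toy 1-D case, not the ASCI visualization pipeline, and the signal and threshold are invented.

```python
# Small illustration of the idea behind wavelet compression of field data:
# transform, discard coefficients below a threshold, and compare the storage
# ratio against the reconstruction error (toy 1-D example using PyWavelets).
import numpy as np
import pywt

x = np.linspace(0.0, 10.0, 4096)
field = np.sin(x) + 0.05 * np.sin(40.0 * x)           # smooth signal + fine detail

coeffs = pywt.wavedec(field, "db4", level=6)          # multi-resolution transform
flat, slices = pywt.coeffs_to_array(coeffs)

threshold = 0.01 * np.max(np.abs(flat))
kept = np.where(np.abs(flat) >= threshold, flat, 0.0) # discard small coefficients

reconstructed = pywt.waverec(
    pywt.array_to_coeffs(kept, slices, output_format="wavedec"), "db4")
ratio = flat.size / np.count_nonzero(kept)
error = np.max(np.abs(reconstructed[: field.size] - field)) / np.ptp(field)
print(f"compression ratio ~{ratio:.0f}:1, max relative error {error:.2e}")
```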
Study of the Imaging Capabilities of SPIRIT/SPECS Concept Interferometers
NASA Technical Reports Server (NTRS)
Allen, Ronald J.
2002-01-01
Several new space science mission concepts under development at NASA-GSFC for astronomy are intended to carry out synthetic imaging using Michelson interferometers or direct (Fizeau) imaging with sparse apertures. Examples of these mission concepts include the Stellar Imager (SI), the Space Infrared Interferometric Telescope (SPIRIT), the Submillimeter Probe of the Evolution of Cosmic Structure (SPECS), and the Fourier-Kelvin Stellar Interferometer (FKSI). We have been developing computer-based simulators for these missions. These simulators are aimed at providing a quantitative evaluation of the imaging capabilities of the mission by modeling the performance on different realistic targets in terms of sensitivity, angular resolution, and dynamic range. Both Fizeau and Michelson modes of operation can be considered. Our work is based on adapting a computer simulator called imSIM which was initially written for the Space Interferometer Mission in order to simulate the imaging mode of new missions such as those listed. This report covers the activities we have undertaken to provide a preliminary version of a simulator for the SPIRIT mission concept.
Atomistic and coarse-grained computer simulations of raft-like lipid mixtures.
Pandit, Sagar A; Scott, H Larry
2007-01-01
Computer modeling can provide insights into the existence, structure, size, and thermodynamic stability of localized raft-like regions in membranes. However, the challenges in the construction and simulation of accurate models of heterogeneous membranes are great. The primary obstacle in modeling the lateral organization within a membrane is the relatively slow lateral diffusion rate for lipid molecules. Microsecond or longer time-scales are needed to fully model the formation and stability of a raft in a membrane. Atomistic simulations currently are not able to reach this scale, but they do provide quantitative information on the intermolecular forces and correlations that are involved in lateral organization. In this chapter, the steps needed to carry out and analyze atomistic simulations of hydrated lipid bilayers having heterogeneous composition are outlined. It is then shown how the data from a molecular dynamics simulation can be used to construct a coarse-grained model for the heterogeneous bilayer that can predict the lateral organization and stability of rafts at up to millisecond time-scales.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Slepoy, Alexander; Mitchell, Scott A.; Backus, George A.
2008-09-01
Sandia National Laboratories is investing in projects that aim to develop computational modeling and simulation applications that explore human cognitive and social phenomena. While some of these modeling and simulation projects are explicitly research oriented, others are intended to support or provide insight for people involved in high consequence decision-making. This raises the issue of how to evaluate computational modeling and simulation applications in both research and applied settings where human behavior is the focus of the model: when is a simulation 'good enough' for the goals its designers want to achieve? In this report, we discuss two years' worth of review and assessment of the ASC program's approach to computational model verification and validation, uncertainty quantification, and decision making. We present a framework that extends the principles of the ASC approach into the area of computational social and cognitive modeling and simulation. In doing so, we argue that the potential for evaluation is a function of how the modeling and simulation software will be used in a particular setting. In making this argument, we move from strict, engineering and physics oriented approaches to V&V to a broader project of model evaluation, which asserts that the systematic, rigorous, and transparent accumulation of evidence about a model's performance under conditions of uncertainty is a reasonable and necessary goal for model evaluation, regardless of discipline. How to achieve the accumulation of evidence in areas outside physics and engineering is a significant research challenge, but one that requires addressing as modeling and simulation tools move out of research laboratories and into the hands of decision makers. This report provides an assessment of our thinking on ASC Verification and Validation, and argues for further extending V&V research in the physical and engineering sciences toward a broader program of model evaluation in situations of high consequence decision-making.
Genetic data simulators and their applications: an overview
Peng, Bo; Chen, Huann-Sheng; Mechanic, Leah E.; Racine, Ben; Clarke, John; Gillanders, Elizabeth; Feuer, Eric J.
2016-01-01
Computer simulations have played an indispensable role in the development and application of statistical models and methods for genetic studies across multiple disciplines. The need to simulate complex evolutionary scenarios and pseudo-datasets for various studies has fueled the development of dozens of computer programs with varying reliability, performance, and application areas. To help researchers compare and choose the most appropriate simulators for their studies, we have created the Genetic Simulation Resources (GSR) website, which allows authors of simulation software to register their applications and describe them with more than 160 defined attributes. This article summarizes the properties of 93 simulators currently registered at GSR and provides an overview of the development and applications of genetic simulators. Unlike other review articles that address technical issues or compare simulators for particular application areas, we focus on software development, maintenance, and features of simulators, often from a historical perspective. Publications that cite these simulators are used to summarize both the applications of genetic simulations and the utilization of simulators. PMID:25504286
NASA Technical Reports Server (NTRS)
Tweedt, Daniel L.
2014-01-01
Computational Aerodynamic simulations of a 1484 ft/sec tip speed quiet high-speed fan system were performed at five different operating points on the fan operating line, in order to provide detailed internal flow field information for use with fan acoustic prediction methods presently being developed, assessed and validated. The fan system is a sub-scale, low-noise research fan/nacelle model that has undergone experimental testing in the 9- by 15-foot Low Speed Wind Tunnel at the NASA Glenn Research Center. Details of the fan geometry, the computational fluid dynamics methods, the computational grids, and various computational parameters relevant to the numerical simulations are discussed. Flow field results for three of the five operating points simulated are presented in order to provide a representative look at the computed solutions. Each of the five fan aerodynamic simulations involved the entire fan system, which includes a core duct and a bypass duct that merge upstream of the fan system nozzle. As a result, only fan rotational speed and the system bypass ratio, set by means of a translating nozzle plug, were adjusted in order to set the fan operating point, leading to operating points that lie on a fan operating line and making mass flow rate a fully dependent parameter. The resulting mass flow rates are in good agreement with measurement values. Computed blade row flow fields at all fan operating points are, in general, aerodynamically healthy. Rotor blade and fan exit guide vane flow characteristics are good, including incidence and deviation angles, chordwise static pressure distributions, blade surface boundary layers, secondary flow structures, and blade wakes. Examination of the computed flow fields reveals no excessive or critical boundary layer separations or related secondary-flow problems, with the exception of the hub boundary layer at the core duct entrance. At that location a significant flow separation is present. The region of local flow recirculation extends through a mixing plane, however, which for the particular mixing-plane model used is now known to exaggerate the recirculation. In any case, the flow separation has relatively little impact on the computed rotor and FEGV flow fields.
Sun, Peng; Zhou, Haoyin; Ha, Seongmin; Hartaigh, Bríain ó; Truong, Quynh A.; Min, James K.
2016-01-01
In clinical cardiology, both anatomy and physiology are needed to diagnose cardiac pathologies. CT imaging and computer simulations provide valuable and complementary data for this purpose. However, it remains challenging to gain useful information from the large amount of high-dimensional, diverse data. The current tools are not adequately integrated to visualize anatomic and physiologic data from a complete yet focused perspective. We introduce a new computer-aided diagnosis framework, which allows for comprehensive modeling and visualization of cardiac anatomy and physiology from CT imaging data and computer simulations, with a primary focus on ischemic heart disease. The following visual information is presented: (1) Anatomy from CT imaging: geometric modeling and visualization of cardiac anatomy, including four heart chambers, left and right ventricular outflow tracts, and coronary arteries; (2) Function from CT imaging: motion modeling, strain calculation, and visualization of four heart chambers; (3) Physiology from CT imaging: quantification and visualization of myocardial perfusion and contextual integration with coronary artery anatomy; (4) Physiology from computer simulation: computation and visualization of hemodynamics (e.g., coronary blood velocity, pressure, shear stress, and fluid forces on the vessel wall). Feedback from cardiologists has confirmed the practical utility of integrating these features for the purpose of computer-aided diagnosis of ischemic heart disease. PMID:26863663
Funaki, Ayumu; Ohkubo, Masaki; Wada, Shinichi; Murao, Kohei; Matsumoto, Toru; Niizuma, Shinji
2012-07-01
With the wide dissemination of computed tomography (CT) screening for lung cancer, measuring the nodule volume accurately with computer-aided volumetry software is increasingly important. Many studies for determining the accuracy of volumetry software have been performed using a phantom with artificial nodules. These phantom studies are limited, however, in their ability to reproduce nodules accurately and in the variety of sizes and densities required. Therefore, we propose a new approach of using computer-simulated nodules based on the point spread function measured in a CT system. The validity of the proposed method was confirmed by the excellent agreement obtained between computer-simulated nodules and phantom nodules regarding the volume measurements. A practical clinical evaluation of the accuracy of volumetry software was achieved by adding simulated nodules onto clinical lung images, including noise and artifacts. The tested volumetry software was revealed to be accurate within an error of 20% for nodules >5 mm and with a nodule-to-background (lung) CT-value difference of 400-600 HU. Such a detailed analysis can provide clinically useful information on the use of volumetry software in CT screening for lung cancer. We concluded that the proposed method is effective for evaluating the performance of computer-aided volumetry software.
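As a rough illustration of the simulation idea described above (not the authors' implementation), the sketch below blurs an ideal spherical nodule with an assumed Gaussian point spread function and adds it to a stand-in lung background. In the actual study the PSF was measured on the CT system and the background came from clinical images; the sizes, contrasts, and sigma below are invented.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def simulated_nodule(shape, center, radius_vox, contrast_hu, psf_sigma_vox):
    """Ideal spherical nodule (in HU above background) blurred by an assumed Gaussian PSF."""
    z, y, x = np.indices(shape)
    cz, cy, cx = center
    sphere = ((z - cz)**2 + (y - cy)**2 + (x - cx)**2) <= radius_vox**2
    ideal = contrast_hu * sphere.astype(float)
    return gaussian_filter(ideal, sigma=psf_sigma_vox)   # system blurring

# Add a small, +500 HU nodule onto a noisy stand-in for a clinical lung volume.
background = -850.0 + 30.0 * np.random.randn(64, 64, 64)
nodule = simulated_nodule(background.shape, center=(32, 32, 32),
                          radius_vox=5, contrast_hu=500.0, psf_sigma_vox=1.2)
test_volume = background + nodule
```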
Multi-scale modeling in cell biology
Meier-Schellersheim, Martin; Fraser, Iain D. C.; Klauschen, Frederick
2009-01-01
Biomedical research frequently involves performing experiments and developing hypotheses that link different scales of biological systems such as, for instance, the scales of intracellular molecular interactions to the scale of cellular behavior and beyond to the behavior of cell populations. Computational modeling efforts that aim at exploring such multi-scale systems quantitatively with the help of simulations have to incorporate several different simulation techniques due to the different time and space scales involved. Here, we provide a non-technical overview of how different scales of experimental research can be combined with the appropriate computational modeling techniques. We also show that current modeling software permits building and simulating multi-scale models without having to become involved with the underlying technical details of computational modeling. PMID:20448808
System Analysis for the Huntsville Operation Support Center, Distributed Computer System
NASA Technical Reports Server (NTRS)
Ingels, F. M.; Massey, D.
1985-01-01
HOSC, as a distributed computing system, is responsible for data acquisition and analysis during Space Shuttle operations. HOSC also provides computing services for Marshall Space Flight Center's nonmission activities. As mission and nonmission activities change, so do the support functions of HOSC, demonstrating the need for some method of simulating activity at HOSC in various configurations. The simulation developed in this work primarily models the HYPERchannel network. The model simulates the activity of a steady-state network, reporting statistics such as transmitted bits, collision statistics, frame sequences transmitted, and average message delay. These statistics are used to evaluate such performance indicators as throughput, utilization, and delay. Thus the overall performance of the network is evaluated, and possible overload conditions can be predicted.
A study of application of remote sensing to river forecasting. Volume 1: Executive summary
NASA Technical Reports Server (NTRS)
1975-01-01
A project is described whose goal was to define, implement and evaluate a pilot demonstration test to show the practicability of applying remotely sensed data to operational river forecasting in gaged or previously ungaged watersheds. A secondary objective was to provide NASA with documentation describing the computer programs that comprise the streamflow forecasting simulation model used. A computer-based simulation model was adapted to a streamflow forecasting application and implemented in an IBM System/360 Model 44 computer, operating in a dedicated mode, with operator interactive control through a Model 2250 keyboard/graphic CRT terminal. The test site whose hydrologic behavior was simulated is a small basin (365 square kilometers) designated Town Creek near Geraldine, Alabama.
NASA Technical Reports Server (NTRS)
Otto, John C.; Paraschivoiu, Marius; Yesilyurt, Serhat; Patera, Anthony T.
1995-01-01
Engineering design and optimization efforts using computational systems rapidly become resource intensive. The goal of the surrogate-based approach is to perform a complete optimization with limited resources. In this paper we present a Bayesian-validated approach that informs the designer as to how well the surrogate performs; in particular, our surrogate framework provides precise (albeit probabilistic) bounds on the errors incurred in the surrogate-for-simulation substitution. The theory and algorithms of our computer-simulation surrogate framework are first described. The utility of the framework is then demonstrated through two illustrative examples: maximization of the flowrate of fully developed flow in trapezoidal ducts; and design of an axisymmetric body that achieves a target Stokes drag.
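The nonparametric flavor of such probabilistic error bounds can be illustrated with a toy sketch: build a cheap surrogate of an "expensive" function, then validate it on random design points. This is only a schematic stand-in for the authors' Bayesian-validated framework; the test function, tolerance, and sample sizes below are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_simulation(x):
    """Stand-in for a costly flow solver evaluated at a design point x in [0, 1]."""
    return np.sin(6.0 * x) + 0.3 * x**2

# Build a cheap surrogate from a small number of expensive evaluations.
x_train = np.linspace(0.0, 1.0, 8)
surrogate = np.poly1d(np.polyfit(x_train, expensive_simulation(x_train), deg=5))

# Validate on N random design points. If no validation error exceeds the tolerance,
# then with confidence 1 - (1 - eps)**N the region where the surrogate errs by more
# than the tolerance occupies at most a fraction eps of the design space.
tol, N, eps = 0.05, 60, 0.05
x_val = rng.uniform(0.0, 1.0, N)
errors = np.abs(surrogate(x_val) - expensive_simulation(x_val))
if np.all(errors <= tol):
    print(f"accepted: exceedance fraction <= {eps} with confidence {1 - (1 - eps)**N:.3f}")
else:
    print(f"rejected: worst validation error {errors.max():.3e}")
```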
A review of computer-aided oral and maxillofacial surgery: planning, simulation and navigation.
Chen, Xiaojun; Xu, Lu; Sun, Yi; Politis, Constantinus
2016-11-01
Currently, oral and maxillofacial surgery (OMFS) still poses a significant challenge for surgeons due to the anatomic complexity and limited field of view of the oral cavity. With the great development of computer technologies, computer-aided surgery has been widely used for minimizing the risks and improving the precision of surgery. Areas covered: The major goal of this paper is to provide a comprehensive reference source on current and future development of computer-aided OMFS, including surgical planning, simulation and navigation, for relevant researchers. Expert commentary: Compared with traditional OMFS, computer-aided OMFS overcomes the disadvantage that treatment of the anatomically complex maxillofacial region depends almost exclusively on the experience of the surgeon.
Advanced simulation and analysis of a geopotential research mission
NASA Technical Reports Server (NTRS)
Schutz, B. E.
1988-01-01
Computer simulations have been performed for an orbital gradiometer mission to assist in the study of high degree and order gravity field recovery. The simulations were conducted for a satellite in near-circular, frozen orbit at a 160-km altitude using a gravitational field complete to degree and order 360. The mission duration is taken to be 32 days. The simulation provides a set of measurements to assist in the evaluation of techniques developed for the determination of the gravity field. Also, the simulation provides an ephemeris to study available tracking systems to satisfy the orbit determination requirements of the mission.
Resource Provisioning in SLA-Based Cluster Computing
NASA Astrophysics Data System (ADS)
Xiong, Kaiqi; Suh, Sang
Cluster computing is excellent for parallel computation and has become increasingly popular. In cluster computing, a service level agreement (SLA) is a set of quality of service (QoS) metrics and a fee agreed between a customer and an application service provider, and it plays an important role in e-business applications. An application service provider uses a set of cluster computing resources to support e-business applications subject to an SLA. In this paper, the QoS includes percentile response time and cluster utilization. We present an approach for resource provisioning in such an environment that minimizes the total cost of the cluster computing resources used by an application service provider for an e-business application, which often requires parallel computation for high service performance, availability, and reliability, while satisfying the QoS and fee negotiated between the customer and the application service provider. Simulation experiments demonstrate the applicability of the approach.
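As a toy illustration of provisioning against a percentile response-time QoS (not the paper's method or cost model), the sketch below treats the cluster as an M/M/c server pool and uses the Erlang-C formula to pick the smallest number of nodes for which 95% of requests start service within a target delay; the arrival rate, service rate, and target below are invented.

```python
import math

def erlang_c(c, a):
    """Probability that an arriving request must queue in an M/M/c system (offered load a = lam/mu)."""
    if a >= c:
        return 1.0                                   # unstable: every request queues
    top = a**c / math.factorial(c)
    bottom = sum(a**k / math.factorial(k) for k in range(c)) * (1 - a / c) + top
    return top / bottom

def min_servers(lam, mu, t_target, percentile=0.95):
    """Smallest c such that P(queueing delay <= t_target) >= percentile."""
    a = lam / mu
    c = max(1, math.ceil(a))
    while True:
        if a < c:
            # M/M/c FCFS waiting-time tail: P(W > t) = ErlangC * exp(-(c*mu - lam) * t)
            p_late = erlang_c(c, a) * math.exp(-(c * mu - lam) * t_target)
            if p_late <= 1.0 - percentile:
                return c
        c += 1

# Example: 120 requests/s, each node serves 10 requests/s, 95% must start service within 0.2 s.
print(min_servers(lam=120.0, mu=10.0, t_target=0.2))
```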
Architectural Improvements and New Processing Tools for the Open XAL Online Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allen, Christopher K; Pelaia II, Tom; Freed, Jonathan M
The online model is the component of Open XAL providing accelerator modeling, simulation, and dynamic synchronization to live hardware. Significant architectural changes and feature additions have been recently made in two separate areas: 1) the managing and processing of simulation data, and 2) the modeling of RF cavities. Simulation data and data processing have been completely decoupled. A single class manages all simulation data while standard tools were developed for processing the simulation results. RF accelerating cavities are now modeled as composite structures where parameter and dynamics computations are distributed. The beam and hardware models both maintain their relative phase information, which allows for dynamic phase slip and elapsed time computation.
Comprehensive silicon solar cell computer modeling
NASA Technical Reports Server (NTRS)
Lamorte, M. F.
1984-01-01
The development of an efficient, comprehensive Si solar cell modeling program capable of a simulation accuracy of 5 percent or better is examined. A general investigation of computerized simulation is provided. Computer simulation programs are subdivided into a number of major tasks: (1) the analytical method used to represent the physical system; (2) the phenomena submodels that comprise the simulation of the system; (3) the coding of the analysis and the phenomena submodels; (4) a coding scheme that results in efficient use of the CPU so that CPU costs are low; and (5) a modularized simulation program with respect to the structures that may be analyzed, the addition and/or modification of phenomena submodels as new experimental data become available, and the addition of other photovoltaic materials.
Simulation based planning of surgical interventions in pediatric cardiology
NASA Astrophysics Data System (ADS)
Marsden, Alison L.
2013-10-01
Hemodynamics plays an essential role in the progression and treatment of cardiovascular disease. However, while medical imaging provides increasingly detailed anatomical information, clinicians often have limited access to hemodynamic data that may be crucial to patient risk assessment and treatment planning. Computational simulations can now provide detailed hemodynamic data to augment clinical knowledge in both adult and pediatric applications. There is a particular need for simulation tools in pediatric cardiology, due to the wide variation in anatomy and physiology in congenital heart disease patients, necessitating individualized treatment plans. Despite great strides in medical imaging, enabling extraction of flow information from magnetic resonance and ultrasound imaging, simulations offer predictive capabilities that imaging alone cannot provide. Patient specific simulations can be used for in silico testing of new surgical designs, treatment planning, device testing, and patient risk stratification. Furthermore, simulations can be performed at no direct risk to the patient. In this paper, we outline the current state of the art in methods for cardiovascular blood flow simulation and virtual surgery. We then step through pressing challenges in the field, including multiscale modeling, boundary condition selection, optimization, and uncertainty quantification. Finally, we summarize simulation results of two representative examples from pediatric cardiology: single ventricle physiology, and coronary aneurysms caused by Kawasaki disease. These examples illustrate the potential impact of computational modeling tools in the clinical setting.
ERIC Educational Resources Information Center
Elmore, Donald E.; Guayasamin, Ryann C.; Kieffer, Madeleine E.
2010-01-01
As computational modeling plays an increasingly central role in biochemical research, it is important to provide students with exposure to common modeling methods in their undergraduate curriculum. This article describes a series of computer labs designed to introduce undergraduate students to energy minimization, molecular dynamics simulations,…
Minimum-complexity helicopter simulation math model
NASA Technical Reports Server (NTRS)
Heffley, Robert K.; Mnich, Marc A.
1988-01-01
An example of a minimal-complexity helicopter simulation math model is presented. Motivating factors are the computational delays, cost, and inflexibility of the very sophisticated math models now in common use. A helicopter model form is given which addresses each of these factors and provides better engineering understanding of the specific handling-qualities features which are apparent to the simulator pilot. The technical approach begins with specification of the features which are to be modeled, followed by a build-up of individual vehicle components and definition of equations. Model matching and estimation procedures are given which enable the modeling of specific helicopters from basic data sources such as flight manuals. Checkout procedures are given which provide for total model validation. A number of possible model extensions and refinements are discussed. Math model computer programs are defined and listed.
NASA HPCC Technology for Aerospace Analysis and Design
NASA Technical Reports Server (NTRS)
Schulbach, Catherine H.
1999-01-01
The Computational Aerosciences (CAS) Project is part of NASA's High Performance Computing and Communications Program. Its primary goal is to accelerate the availability of high-performance computing technology to the US aerospace community-thus providing the US aerospace community with key tools necessary to reduce design cycle times and increase fidelity in order to improve safety, efficiency and capability of future aerospace vehicles. A complementary goal is to hasten the emergence of a viable commercial market within the aerospace community for the advantage of the domestic computer hardware and software industry. The CAS Project selects representative aerospace problems (especially design) and uses them to focus efforts on advancing aerospace algorithms and applications, systems software, and computing machinery to demonstrate vast improvements in system performance and capability over the life of the program. Recent demonstrations have served to assess the benefits of possible performance improvements while reducing the risk of adopting high-performance computing technology. This talk will discuss past accomplishments in providing technology to the aerospace community, present efforts, and future goals. For example, the times to do full combustor and compressor simulations (of aircraft engines) have been reduced by factors of 320:1 and 400:1 respectively. While this has enabled new capabilities in engine simulation, the goal of an overnight, dynamic, multi-disciplinary, 3-dimensional simulation of an aircraft engine is still years away and will require new generations of high-end technology.
NASA Technical Reports Server (NTRS)
Hopkins, Dale A.
1992-01-01
The presentation gives a partial overview of research and development underway in the Structures Division of LeRC, which collectively is referred to as the Computational Structures Technology Program. The activities in the program are diverse and encompass four major categories: (1) composite materials and structures; (2) probabilistic analysis and reliability; (3) design optimization and expert systems; and (4) computational methods and simulation. The approach of the program is comprehensive and entails exploration of fundamental theories of structural mechanics to accurately represent the complex physics governing engine structural performance; formulation and implementation of computational techniques and integrated simulation strategies to provide accurate and efficient solutions of the governing theoretical models by exploiting the emerging advances in computer technology; and validation and verification through numerical and experimental tests to establish confidence and define the qualities and limitations of the resulting theoretical models and computational solutions. The program comprises both in-house and sponsored research activities. The remainder of the presentation provides a sample of activities to illustrate the breadth and depth of the program and to demonstrate the accomplishments and benefits that have resulted.
Man-in-the-control-loop simulation of manipulators
NASA Technical Reports Server (NTRS)
Chang, J. L.; Lin, Tsung-Chieh; Yae, K. Harold
1989-01-01
A method to achieve man-in-the-control-loop simulation is presented. Emerging real-time dynamics simulation suggests a potential for creating an interactive design workstation with a human operator in the control loop. The recursive formulation for multibody dynamics simulation is studied to determine requirements for man-in-the-control-loop simulation. High-speed computer graphics techniques provide realistic visual cues for the simulator. Backhoe and robot arm simulations are implemented to demonstrate the capability of man-in-the-control-loop simulation.
A LANGUAGE FOR MODULAR SPATIO-TEMPORAL SIMULATION (R824766)
Creating an effective environment for collaborative spatio-temporal model development will require computational systems that provide support for the user in three key areas: (1) Support for modular, hierarchical model construction and archiving/linking of simulation modules; (2)...
Community as client: environmental issues in the real world. A SimCity computer simulation.
Bareford, C G
2001-01-01
The ability to think critically has become a crucial part of professional practice and education. SimCity, a popular computer simulation game, provides an opportunity to practice community assessment and interventions using a systems approach. SimCity is an interactive computer simulation game in which the player takes an active part in community planning. SimCity is supported on either a Windows 95/98 or a Macintosh platform and is available on CD-ROM at retail stores or at www.simcity.com. Students complete a tutorial and then apply a selected scenario in SimCity. Scenarios consist of hypothetical communities that have varying types and degrees of environmental problems, e.g., traffic, crime, nuclear meltdown, flooding, fire, and earthquakes. In problem solving with the simulated scenarios, students (a) identify systems and subsystems within the community that are critical factors impacting the environmental health of the community, (b) create changes in the systems and subsystems in an effort to solve the environmental health problem, and (c) evaluate the effectiveness of interventions based on the game score, demographic and fiscal data, and amount of community support. Because the consequences of planned intervention are part of the simulation, nursing students are able to develop critical-thinking skills. The simulation provides essential content in community planning in an interesting and interactive format.
Xie, Tianwu; Zaidi, Habib
2016-01-01
The development of multimodality preclinical imaging techniques and the rapid growth of realistic computer simulation tools have promoted the construction and application of computational laboratory animal models in preclinical research. Since the early 1990s, over 120 realistic computational animal models have been reported in the literature and used as surrogates to characterize the anatomy of actual animals for the simulation of preclinical studies involving the use of bioluminescence tomography, fluorescence molecular tomography, positron emission tomography, single-photon emission computed tomography, microcomputed tomography, magnetic resonance imaging, and optical imaging. Other applications include electromagnetic field simulation, ionizing and nonionizing radiation dosimetry, and the development and evaluation of new methodologies for multimodality image coregistration, segmentation, and reconstruction of small animal images. This paper provides a comprehensive review of the history and fundamental technologies used for the development of computational small animal models, with a particular focus on their application in preclinical imaging as well as nonionizing and ionizing radiation dosimetry calculations. An overview is given of the overall process involved in the design of these models, including the fundamental elements used for the construction of different types of computational models, the identification of original anatomical data, the simulation tools used for solving various computational problems, and the applications of computational animal models in preclinical research. The authors also analyze the characteristics of the categories of computational models (stylized, voxel-based, and boundary representation) and discuss the technical challenges faced at the present time as well as research needs in the future.
Demonstrating Computer Simulation Development for Intermediate and Middle School Applications.
ERIC Educational Resources Information Center
Fyffe, Darrel W.; And Others
This discussion of the use of microcomputers to simulate complex situations for classroom use describes the advantages of using simulations, including their adaptability to many subject areas and content fields, their power to explain complex concepts, and their ability to provide variations for individual users. As an example, seven objectives…
Data mining through simulation.
Lytton, William W; Stewart, Mark
2007-01-01
Data integration is particularly difficult in neuroscience; we must organize vast amounts of data around only a few fragmentary functional hypotheses. It has often been noted that computer simulation, by providing explicit hypotheses for a particular system and bridging across different levels of organization, can provide an organizational focus, which can be leveraged to form substantive hypotheses. Simulations lend meaning to data and can be updated and adapted as further data come in. The use of simulation in this context suggests the need for simulator adjuncts to manage and evaluate data. We have developed a neural query system (NQS) within the NEURON simulator, providing a relational database system, a query function, and basic data-mining tools. NQS is used within the simulation context to manage, verify, and evaluate model parameterizations. More importantly, it is used for data mining of simulation data and comparison with neurophysiology.
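NQS itself is implemented inside the NEURON simulator and its API is not reproduced here; purely as an illustration of the underlying idea (relational queries over simulation parameterizations and derived measures), a hypothetical table of runs can be mined with pandas. All column names and values below are invented.

```python
import pandas as pd

# Hypothetical table of simulation runs: parameters plus summary measures from each run.
runs = pd.DataFrame([
    {"run": 1, "g_nmda": 0.5, "g_gaba": 1.0, "rate_hz": 12.4, "burst": False},
    {"run": 2, "g_nmda": 1.0, "g_gaba": 1.0, "rate_hz": 31.0, "burst": True},
    {"run": 3, "g_nmda": 1.0, "g_gaba": 2.0, "rate_hz": 9.8,  "burst": False},
    {"run": 4, "g_nmda": 1.5, "g_gaba": 1.0, "rate_hz": 44.2, "burst": True},
])

# "Query": which parameterizations produce bursting with a firing rate above 20 Hz?
print(runs.query("burst and rate_hz > 20")[["run", "g_nmda", "g_gaba", "rate_hz"]])

# Simple data mining: how does mean firing rate covary with the NMDA conductance scale?
print(runs.groupby("g_nmda")["rate_hz"].mean())
```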
Data Serving Climate Simulation Science at the NASA Center for Climate Simulation
NASA Technical Reports Server (NTRS)
Salmon, Ellen M.
2011-01-01
The NASA Center for Climate Simulation (NCCS) provides high performance computational resources, a multi-petabyte archive, and data services in support of climate simulation research and other NASA-sponsored science. This talk describes the NCCS's data-centric architecture and processing, which are evolving in anticipation of researchers' growing requirements for higher resolution simulations and increased data sharing among NCCS users and the external science community.
NASA Technical Reports Server (NTRS)
Bedrossian, Nazareth; Jang, Jiann-Woei; McCants, Edward; Omohundro, Zachary; Ring, Tom; Templeton, Jeremy; Zoss, Jeremy; Wallace, Jonathan; Ziegler, Philip
2011-01-01
Draper Station Analysis Tool (DSAT) is a computer program, built on commercially available software, for simulating and analyzing complex dynamic systems. Heretofore used in designing and verifying guidance, navigation, and control systems of the International Space Station, DSAT has a modular architecture that lends itself to modification for application to spacecraft or terrestrial systems. DSAT consists of user-interface, data-structures, simulation-generation, analysis, plotting, documentation, and help components. DSAT automates the construction of simulations and the process of analysis. DSAT provides a graphical user interface (GUI), plus a Web-enabled interface, similar to the GUI, that enables a remotely located user to gain access to the full capabilities of DSAT via the Internet and Web-browser software. Data structures are used to define the GUI, the Web-enabled interface, simulations, and analyses. Three data structures define the type of analysis to be performed: closed-loop simulation, frequency response, and/or stability margins. DSAT can be executed on almost any workstation, desktop, or laptop computer. DSAT provides better than an order of magnitude improvement in cost, schedule, and risk assessment for simulation-based design and verification of complex dynamic systems.
Physical Processes and Applications of the Monte Carlo Radiative Energy Deposition (MRED) Code
NASA Astrophysics Data System (ADS)
Reed, Robert A.; Weller, Robert A.; Mendenhall, Marcus H.; Fleetwood, Daniel M.; Warren, Kevin M.; Sierawski, Brian D.; King, Michael P.; Schrimpf, Ronald D.; Auden, Elizabeth C.
2015-08-01
MRED is a Python-language scriptable computer application that simulates radiation transport. It is the computational engine for the on-line tool CRÈME-MC. MRED is based on C++ code from Geant4 with additional Fortran components to simulate electron transport and nuclear reactions with high precision. We provide a detailed description of the structure of MRED and the implementation of the simulation of physical processes used to simulate radiation effects in electronic devices and circuits. Extensive discussion and references are provided that illustrate the validation of models used to implement specific simulations of relevant physical processes. Several applications of MRED are summarized that demonstrate its ability to predict and describe basic physical phenomena associated with irradiation of electronic circuits and devices. These include effects from single particle radiation (including both direct ionization and indirect ionization effects), dose enhancement effects, and displacement damage effects. MRED simulations have also helped to identify new single event upset mechanisms not previously observed by experiment, but since confirmed, including upsets due to muons and energetic electrons.
A One Dimensional, Time Dependent Inlet/Engine Numerical Simulation for Aircraft Propulsion Systems
NASA Technical Reports Server (NTRS)
Garrard, Doug; Davis, Milt, Jr.; Cole, Gary
1999-01-01
The NASA Lewis Research Center (LeRC) and the Arnold Engineering Development Center (AEDC) have developed a closely coupled computer simulation system that provides a one dimensional, high frequency inlet/engine numerical simulation for aircraft propulsion systems. The simulation system, operating under the LeRC-developed Application Portable Parallel Library (APPL), closely coupled a supersonic inlet with a gas turbine engine. The supersonic inlet was modeled using the Large Perturbation Inlet (LAPIN) computer code, and the gas turbine engine was modeled using the Aerodynamic Turbine Engine Code (ATEC). Both LAPIN and ATEC provide a one dimensional, compressible, time dependent flow solution by solving the one dimensional Euler equations for the conservation of mass, momentum, and energy. Source terms are used to model features such as bleed flows, turbomachinery component characteristics, and inlet subsonic spillage while unstarted. High frequency events, such as compressor surge and inlet unstart, can be simulated with a high degree of fidelity. The simulation system was exercised using a supersonic inlet with sixty percent of the supersonic area contraction occurring internally, and a GE J85-13 turbojet engine.
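The codes named above solve the one-dimensional, time-dependent Euler equations with source terms. The fragment below is a generic first-order finite-volume sketch of that formulation (Rusanov interface flux, explicit time stepping) and is not the LAPIN or ATEC numerics; the uniform initial state and zero source term are placeholders.

```python
import numpy as np

GAMMA = 1.4

def flux(U):
    """Physical flux of the 1D Euler equations; rows of U are [rho, rho*u, rho*E]."""
    rho, mom, ene = U
    u = mom / rho
    p = (GAMMA - 1.0) * (ene - 0.5 * rho * u**2)
    return np.array([mom, mom * u + p, (ene + p) * u])

def step(U, dx, dt, source):
    """One explicit finite-volume step with a Rusanov interface flux plus source terms."""
    rho, mom, ene = U
    u = mom / rho
    p = (GAMMA - 1.0) * (ene - 0.5 * rho * u**2)
    smax = np.max(np.abs(u) + np.sqrt(GAMMA * p / rho))          # fastest wave speed
    F = flux(U)
    F_half = 0.5 * (F[:, :-1] + F[:, 1:]) - 0.5 * smax * (U[:, 1:] - U[:, :-1])
    U_new = U.copy()
    U_new[:, 1:-1] -= dt / dx * (F_half[:, 1:] - F_half[:, :-1])
    return U_new + dt * source(U)      # e.g. bleed flows, turbomachinery work/heat terms

# Placeholder run: uniform flow, no source terms.
N = 200
U0 = np.ones((3, N)); U0[2] = 2.5
U1 = step(U0, dx=0.01, dt=1e-5, source=lambda U: np.zeros_like(U))
```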
2011-01-01
Background The increasing use of computational simulation experiments to inform modern biological research creates new challenges to annotate, archive, share and reproduce such experiments. The recently published Minimum Information About a Simulation Experiment (MIASE) proposes a minimal set of information that should be provided to allow the reproduction of simulation experiments among users and software tools. Results In this article, we present the Simulation Experiment Description Markup Language (SED-ML). SED-ML encodes in a computer-readable exchange format the information required by MIASE to enable reproduction of simulation experiments. It has been developed as a community project and it is defined in a detailed technical specification and additionally provides an XML schema. The version of SED-ML described in this publication is Level 1 Version 1. It covers the description of the most frequent type of simulation experiments in the area, namely time course simulations. SED-ML documents specify which models to use in an experiment, modifications to apply on the models before using them, which simulation procedures to run on each model, what analysis results to output, and how the results should be presented. These descriptions are independent of the underlying model implementation. SED-ML is a software-independent format for encoding the description of simulation experiments; it is not specific to particular simulation tools. Here, we demonstrate that with the growing software support for SED-ML we can effectively exchange executable simulation descriptions. Conclusions With SED-ML, software can exchange simulation experiment descriptions, enabling the validation and reuse of simulation experiments in different tools. Authors of papers reporting simulation experiments can make their simulation protocols available for other scientists to reproduce the results. Because SED-ML is agnostic about exact modeling language(s) used, experiments covering models from different fields of research can be accurately described and combined. PMID:22172142
Proposal for grid computing for nuclear applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Idris, Faridah Mohamad; Ismail, Saaidi; Haris, Mohd Fauzi B.
2014-02-12
The use of computer clusters for computational sciences, including computational physics, is vital as it provides the computing power to crunch big numbers at a faster rate. In compute-intensive applications that require high resolution, such as Monte Carlo simulation, the use of computer clusters in a grid form, which supplies computational power to any node within the grid that needs it, has now become a necessity. In this paper, we describe how clusters running a specific application could use resources within the grid to run the application and speed up the computing process.
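A minimal sketch, assuming nothing about the actual grid middleware used at the facility, of how an embarrassingly parallel Monte Carlo calculation can be split across whatever compute nodes are available; here Python's multiprocessing pool stands in for grid worker nodes, and the pi estimate is only a stand-in for a real transport problem.

```python
import multiprocessing as mp
import random

def mc_block(args):
    """One independent block of Monte Carlo samples: estimate pi by dartboard sampling."""
    seed, n = args
    rng = random.Random(seed)
    return sum(rng.random()**2 + rng.random()**2 <= 1.0 for _ in range(n))

if __name__ == "__main__":
    n_workers, n_per_block = 8, 200_000
    tasks = [(seed, n_per_block) for seed in range(n_workers)]
    with mp.Pool(n_workers) as pool:              # stand-in for distributing work over grid nodes
        hits = sum(pool.map(mc_block, tasks))
    print("pi ~", 4.0 * hits / (n_workers * n_per_block))
```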
Virtual reality neurosurgery: a simulator blueprint.
Spicer, Mark A; van Velsen, Martin; Caffrey, John P; Apuzzo, Michael L J
2004-04-01
This article details preliminary studies undertaken to integrate the most relevant advancements across multiple disciplines in an effort to construct a highly realistic neurosurgical simulator based on a distributed computer architecture. Techniques based on modified computational modeling paradigms incorporating finite element analysis are presented, as are current and projected efforts directed toward the implementation of a novel bidirectional haptic device. Patient-specific data derived from noninvasive magnetic resonance imaging sequences are used to construct a computational model of the surgical region of interest. Magnetic resonance images of the brain may be coregistered with those obtained from magnetic resonance angiography, magnetic resonance venography, and diffusion tensor imaging to formulate models of varying anatomic complexity. The majority of the computational burden is encountered in the presimulation reduction of the computational model and allows realization of the required threshold rates for the accurate and realistic representation of real-time visual animations. Intracranial neurosurgical procedures offer an ideal testing site for the development of a totally immersive virtual reality surgical simulator when compared with the simulations required in other surgical subspecialties. The material properties of the brain as well as the typically small volumes of tissue exposed in the surgical field, coupled with techniques and strategies to minimize computational demands, provide unique opportunities for the development of such a simulator. Incorporation of real-time haptic and visual feedback is approached here and likely will be accomplished soon.
End-to-end plasma bubble PIC simulations on GPUs
NASA Astrophysics Data System (ADS)
Germaschewski, Kai; Fox, William; Matteucci, Jackson; Bhattacharjee, Amitava
2017-10-01
Accelerator technologies play a crucial role in eventually achieving exascale computing capabilities. The current and upcoming leadership machines at ORNL (Titan and Summit) employ Nvidia GPUs, which provide vast computational power but also need specifically adapted computational kernels to fully exploit them. In this work, we will show end-to-end particle-in-cell simulations of the formation, evolution and coalescence of laser-generated plasma bubbles. This work showcases the GPU capabilities of the PSC particle-in-cell code, which has been adapted for this problem to support particle injection, a heating operator and a collision operator on GPUs.
Improving Quantum Gate Simulation using a GPU
NASA Astrophysics Data System (ADS)
Gutierrez, Eladio; Romero, Sergio; Trenas, Maria A.; Zapata, Emilio L.
2008-11-01
Due to the increasing computing power of graphics processing units (GPUs), they are becoming more and more popular for solving general-purpose algorithms. As the simulation of quantum computers results in a problem with exponential complexity, it is advisable to perform the computation in parallel, such as is provided by the SIMD multiprocessors present in recent GPUs. In this paper, we focus on an important quantum algorithm, the quantum Fourier transform (QFT), in order to evaluate different parallelization strategies on a novel GPU architecture. Our implementation makes use of the new CUDA software/hardware architecture developed recently by NVIDIA.
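For reference, the operation being parallelized is the quantum Fourier transform applied to an exponentially large state vector. The CPU NumPy sketch below shows the transform itself as a dense matrix-vector product; the paper's GPU versions realize the same mapping with CUDA kernels and gate-by-gate decompositions rather than a dense matrix.

```python
import numpy as np

def qft(state):
    """Apply the quantum Fourier transform to an n-qubit state vector of length 2**n."""
    N = state.shape[0]
    j = np.arange(N)
    # QFT matrix: F[j, k] = exp(2*pi*i*j*k / N) / sqrt(N)
    F = np.exp(2j * np.pi * np.outer(j, j) / N) / np.sqrt(N)
    return F @ state

n = 10                                    # 10 qubits -> 1024 amplitudes
state = np.zeros(2**n, dtype=complex)
state[3] = 1.0                            # basis state |3>
out = qft(state)
print(np.isclose(np.linalg.norm(out), 1.0))   # the transform is unitary
```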
Spinello, Angelo; Magistrato, Alessandra
2017-08-01
Metallo-drugs have attracted enormous interest for cancer treatment. The achievements of this drug type are summarized by the success story of cisplatin. That being said, there have been many drawbacks with its clinical use, which prompted decades' worth of research efforts to move towards safer and more effective agents, either containing platinum or different metals. Areas covered: In this review, the authors provide an atomistic picture of the molecular mechanisms involving selected metallo-drugs from structural and molecular simulation studies. They also provide an omics perspective, pointing out many unsettled aspects of the most relevant families of metallo-drugs at an epigenetic level. Expert opinion: Molecular simulations are able to provide detailed information at atomistic and temporal (ps) resolutions that are rarely accessible to experiments. The increasing accuracy of computational methods and the growing performance of computational platforms allow us to mirror wet-lab experiments in silico. Consequently, the molecular mechanisms of drug action/failure can be directly viewed on a computer screen, like a 'computational microscope', allowing us to harness this knowledge for the design of the next generation of metallo-drugs.
Propulsion Simulations with the Unstructured-Grid CFD Tool TetrUSS
NASA Technical Reports Server (NTRS)
Deere, Karen A.; Pandya, Mohagna J.
2002-01-01
A computational investigation has been completed to assess the capability of the NASA Tetrahedral Unstructured Software System (TetrUSS) for simulation of exhaust nozzle flows. Three configurations were chosen for this study: (1) a fluidic jet effects model, (2) an isolated nacelle with a supersonic cruise nozzle, and (3) a fluidic pitch-thrust-vectoring nozzle. These configurations were chosen because existing data provided a means for measuring the ability of the TetrUSS flow solver USM3D for simulating complex nozzle flows. Fluidic jet effects model simulations were compared with structured-grid CFD (computational fluid dynamics) data at Mach numbers from 0.3 to 1.2 at nozzle pressure ratios up to 7.2. Simulations of an isolated nacelle with a supersonic cruise nozzle were compared with wind tunnel experimental data and structured-grid CFD data at Mach numbers of 0.9 and 1.2, with a nozzle pressure ratio of 5. Fluidic pitch-thrust-vectoring nozzle simulations were compared with static experimental data and structured-grid CFD data at static freestream conditions and nozzle pressure ratios from 3 to 10. A fluidic injection case was computed with the third configuration at a nozzle pressure ratio of 4.6 and a secondary pressure ratio of 0.7. Results indicate that USM3D with the S-A turbulence model provides accurate exhaust nozzle simulations at on-design conditions, but does not predict internal shock location at overexpanded conditions or pressure recovery along a boattail at transonic conditions.
NASA Technical Reports Server (NTRS)
Haley, D. C.; Almand, B. J.; Thomas, M. M.; Krauze, L. D.; Gremban, K. D.; Sanborn, J. C.; Kelly, J. H.; Depkovich, T. M.; Wolfe, W. J.; Nguyen, T.
1986-01-01
The purpose of the Robotic Simulation (ROBSIM) program is to provide a broad range of computer capabilities to assist in the design, verification, simulation, and study of robotic systems. ROBSIM is programmed in FORTRAN 77 and implemented on a VAX 11/750 computer using the VMS operating system. The programmer's guide describes the ROBSIM implementation and program logic flow, and the functions and structures of the different subroutines. With the manual and the in-code documentation, an experienced programmer can incorporate additional routines and modify existing ones to add desired capabilities.
NASA Technical Reports Server (NTRS)
Haley, D. C.; Almand, B. J.; Thomas, M. M.; Krauze, L. D.; Gremban, K. D.; Sanborn, J. C.; Kelly, J. H.; Depkovich, T. M.
1984-01-01
The purpose of the Robotics Simulation (ROBSIM) program is to provide a broad range of computer capabilities to assist in the design, verification, simulation, and study of robotic systems. ROBSIM is programmed in FORTRAN 77 and implemented on a VAX 11/750 computer using the VMS operating system. This programmer's guide describes the ROBSIM implementation and program logic flow, and the functions and structures of the different subroutines. With this manual and the in-code documentation, an experienced programmer can incorporate additional routines and modify existing ones to add desired capabilities.
Simulating Quantile Models with Applications to Economics and Management
NASA Astrophysics Data System (ADS)
Machado, José A. F.
2010-05-01
The massive increase in the speed of computers over the past forty years has changed the way that social scientists, applied economists, and statisticians approach their trades, and also the very nature of the problems that they can feasibly tackle. The new methods that make intensive use of computer power go by the names of "computer-intensive" or "simulation" methods. My lecture will start with a bird's-eye view of the uses of simulation in Economics and Statistics. Then I will turn to my own research on uses of computer-intensive methods. From a methodological point of view, the question I address is how to infer marginal distributions having estimated a conditional quantile process ("Counterfactual Decomposition of Changes in Wage Distributions using Quantile Regression," Journal of Applied Econometrics 20, 2005). Illustrations will be provided of the use of the method to perform counterfactual analysis in several different areas of knowledge.
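A small sketch of the idea of simulating a marginal distribution from an estimated conditional quantile process, in the spirit of the cited counterfactual decomposition. The data are synthetic, the covariate shift (one extra year of schooling for everyone) is invented, and statsmodels' QuantReg is used only as a convenient quantile-regression routine.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Synthetic "wage" data: log-wage depends on schooling with heteroskedastic noise.
n = 5000
school = rng.integers(8, 18, n).astype(float)
logwage = 0.5 + 0.08 * school + (0.1 + 0.01 * school) * rng.standard_normal(n)
X = sm.add_constant(school)

# Estimate the conditional quantile process on a grid of quantiles.
taus = np.linspace(0.05, 0.95, 19)
betas = np.array([sm.QuantReg(logwage, X).fit(q=t).params for t in taus])

# Simulate the implied marginal: draw a quantile index uniformly over the grid and
# covariates from a counterfactual schooling distribution (everyone gains one year).
m = 20000
tau_draw = rng.choice(len(taus), m)
school_cf = rng.choice(school, m) + 1.0
sim = betas[tau_draw, 0] + betas[tau_draw, 1] * school_cf
print("counterfactual median log-wage:", np.median(sim))
```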
High performance real-time flight simulation at NASA Langley
NASA Technical Reports Server (NTRS)
Cleveland, Jeff I., II
1994-01-01
In order to meet the stringent time-critical requirements for real-time man-in-the-loop flight simulation, computer processing operations must be deterministic and be completed in as short a time as possible. This includes simulation mathematical model computations and data input/output to the simulators. In 1986, in response to increased demands for flight simulation performance, personnel at NASA's Langley Research Center (LaRC), working with the contractor, developed extensions to a standard input/output system to provide high-bandwidth, low-latency data acquisition and distribution. The Computer Automated Measurement and Control technology (IEEE standard 595) was extended to meet the performance requirements for real-time simulation. This technology extension increased the effective bandwidth by a factor of ten and increased the performance of modules necessary for simulator communications. This technology is being used by more than 80 leading technological developers in the United States, Canada, and Europe. Included among the commercial applications of this technology are nuclear process control, power grid analysis, process monitoring, real-time simulation, and radar data acquisition. Personnel at LaRC have completed the development of the use of supercomputers for simulation mathematical model computations to support real-time flight simulation. This includes the development of a real-time operating system and the development of specialized software and hardware for the CAMAC simulator network. This work, coupled with the use of an open systems software architecture, has advanced the state of the art in real-time flight simulation. The data acquisition technology innovation and experience with recent developments in this technology are described.
Navarro, Rafael; Palos, Fernando; Lanchares, Elena; Calvo, Begoña; Cristóbal, José A
2009-01-01
To develop a realistic model of the optomechanical behavior of the cornea after curved relaxing incisions, to simulate the induced astigmatic change, and to predict the optical aberrations produced by the incisions. ICMA Consejo Superior de Investigaciones Científicas and Universidad de Zaragoza, Zaragoza, Spain. A 3-dimensional finite element model of the anterior hemisphere of the ocular surface was used. The corneal tissue was modeled with a quasi-incompressible, anisotropic hyperelastic constitutive behavior strongly dependent on the physiological collagen fibril distribution. Similar behaviors were assigned to the limbus and sclera. With this model, corneal incisions were computer-simulated following the Lindstrom nomogram. The resulting geometry of the biomechanical simulation was analyzed in the optical zone, and finite ray tracing was performed to compute refractive power and higher-order aberrations (HOAs). The finite-element simulation provided the new geometry of the corneal surfaces, from which elevation topographies were obtained. The surgically induced astigmatism (SIA) of the simulated incisions according to the Lindstrom nomogram was computed by finite ray tracing; however, paraxial computations would yield slightly different results (undercorrection of astigmatism). In addition, arcuate incisions would induce significant amounts of HOAs. Finite-element models, together with finite ray-tracing computations, yielded realistic simulations of the biomechanical and optical changes induced by relaxing incisions. The model reproduced the SIA indicated by the Lindstrom nomogram for the simulated incisions and predicted a significant increase in optical aberrations induced by arcuate keratotomy.
Oxygen transport properties estimation by DSMC-CT simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bruno, Domenico; Frezzotti, Aldo; Ghiroldi, Gian Pietro
Coupling DSMC simulations with classical trajectory calculations is emerging as a powerful tool to improve the predictive capabilities of computational rarefied gas dynamics. The considerable increase of computational effort outlined in the early applications of the method (Koura, 1997) can be compensated by running simulations on massively parallel computers. In particular, GPU acceleration has been found quite effective in reducing the computing time (Ferrigni, 2012; Norman et al., 2013) of DSMC-CT simulations. The aim of the present work is to study rarefied oxygen flows by modeling binary collisions through an accurate potential energy surface, obtained by molecular beam scattering (Aquilanti et al., 1999). The accuracy of the method is assessed by calculating molecular oxygen shear viscosity and heat conductivity following three different DSMC-CT simulation methods. In the first one, transport properties are obtained from DSMC-CT simulations of spontaneous fluctuations of an equilibrium state (Bruno et al., Phys. Fluids, 23, 093104, 2011). In the second method, the collision trajectory calculation is incorporated in a Monte Carlo integration procedure to evaluate the Taxman expressions for the transport properties of polyatomic gases (Taxman, 1959). In the third, non-equilibrium zero- and one-dimensional rarefied gas dynamics simulations are adopted and the transport properties are computed from the non-equilibrium fluxes of momentum and energy. The three methods provide close values of the transport properties, their estimated statistical error not exceeding 3%. The experimental values are slightly underestimated, the percentage deviation being, again, a few percent.
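The first of the three methods estimates transport coefficients from equilibrium fluctuations. As a generic illustration of that route (a Green-Kubo-style estimate, not the paper's DSMC-CT implementation), shear viscosity can be obtained from the autocorrelation of the off-diagonal pressure tensor recorded during an equilibrium run; all numbers below are placeholders.

```python
import numpy as np

def green_kubo_viscosity(pxy, dt, volume, kT, n_corr):
    """Shear viscosity from eta = (V / kT) * integral of <P_xy(0) P_xy(t)> dt."""
    pxy = pxy - pxy.mean()
    acf = np.array([np.mean(pxy[:len(pxy) - k] * pxy[k:]) for k in range(n_corr)])
    return volume / kT * np.sum(acf) * dt     # rectangle-rule integral of the autocorrelation

# Placeholder signal; in practice pxy would be sampled from the equilibrium simulation.
rng = np.random.default_rng(2)
pxy = rng.standard_normal(200_000)
eta = green_kubo_viscosity(pxy, dt=1e-15, volume=1e-24, kT=1.38e-23 * 300.0, n_corr=2000)
print(eta)
```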
pWeb: A High-Performance, Parallel-Computing Framework for Web-Browser-Based Medical Simulation.
Halic, Tansel; Ahn, Woojin; De, Suvranu
2014-01-01
This work presents pWeb, a new language and compiler for parallelization of client-side compute-intensive web applications such as surgical simulations. The recently introduced HTML5 standard has enabled creating unprecedented applications on the web. The low performance of the web browser compared to native applications, however, remains the bottleneck for computationally intensive applications, including visualization of complex scenes, real-time physical simulations, and image processing. The new proposed language is built upon web workers for multithreaded programming in HTML5. The language provides fundamental functionalities of parallel programming languages as well as the fork/join parallel model, which is not supported by web workers. The language compiler automatically generates an equivalent parallel script that complies with the HTML5 standard. A case study on realistic rendering for surgical simulations demonstrates enhanced performance with a compact set of instructions.
Computational Methods for HSCT-Inlet Controls/CFD Interdisciplinary Research
NASA Technical Reports Server (NTRS)
Cole, Gary L.; Melcher, Kevin J.; Chicatelli, Amy K.; Hartley, Tom T.; Chung, Joongkee
1994-01-01
A program aimed at facilitating the use of computational fluid dynamics (CFD) simulations by the controls discipline is presented. The objective is to reduce the development time and cost for propulsion system controls by using CFD simulations to obtain high-fidelity system models for control design and as numerical test beds for control system testing and validation. An interdisciplinary team has been formed to develop analytical and computational tools in three discipline areas: controls, CFD, and computational technology. The controls effort has focused on specifying requirements for an interface between the controls specialist and CFD simulations and a new method for extracting linear, reduced-order control models from CFD simulations. Existing CFD codes are being modified to permit time accurate execution and provide realistic boundary conditions for controls studies. Parallel processing and distributed computing techniques, along with existing system integration software, are being used to reduce CFD execution times and to support the development of an integrated analysis/design system. This paper describes: the initial application for the technology being developed, the high speed civil transport (HSCT) inlet control problem; activities being pursued in each discipline area; and a prototype analysis/design system in place for interactive operation and visualization of a time-accurate HSCT-inlet simulation.
Simulating realistic predator signatures in quantitative fatty acid signature analysis
Bromaghin, Jeffrey F.
2015-01-01
Diet estimation is an important field within quantitative ecology, providing critical insights into many aspects of ecology and community dynamics. Quantitative fatty acid signature analysis (QFASA) is a prominent method of diet estimation, particularly for marine mammal and bird species. Investigators using QFASA commonly use computer simulation to evaluate statistical characteristics of diet estimators for the populations they study. Similar computer simulations have been used to explore and compare the performance of different variations of the original QFASA diet estimator. In both cases, computer simulations involve bootstrap sampling prey signature data to construct pseudo-predator signatures with known properties. However, bootstrap sample sizes have been selected arbitrarily and pseudo-predator signatures therefore may not have realistic properties. I develop an algorithm to objectively establish bootstrap sample sizes that generates pseudo-predator signatures with realistic properties, thereby enhancing the utility of computer simulation for assessing QFASA estimator performance. The algorithm also appears to be computationally efficient, resulting in bootstrap sample sizes that are smaller than those commonly used. I illustrate the algorithm with an example using data from Chukchi Sea polar bears (Ursus maritimus) and their marine mammal prey. The concepts underlying the approach may have value in other areas of quantitative ecology in which bootstrap samples are post-processed prior to their use.
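As an illustration of the pseudo-predator construction described above, the following Python sketch bootstrap-resamples prey signatures and mixes them according to a known diet. It is a minimal sketch, not Bromaghin's algorithm for choosing bootstrap sample sizes; the species names, synthetic signature data, and the function pseudo_predator_signature are all hypothetical.

```python
import numpy as np

def pseudo_predator_signature(prey_sigs, diet, n_boot, rng):
    """Build one pseudo-predator fatty acid signature with a known diet.

    prey_sigs : dict mapping prey species -> (n_samples, n_fatty_acids) array,
                each row a signature summing to 1.
    diet      : dict mapping prey species -> known diet proportion (sums to 1).
    n_boot    : bootstrap sample size drawn from each prey species.
    """
    n_fa = next(iter(prey_sigs.values())).shape[1]
    signature = np.zeros(n_fa)
    for species, proportion in diet.items():
        sigs = prey_sigs[species]
        # Bootstrap: resample prey signatures with replacement and average them.
        idx = rng.integers(0, sigs.shape[0], size=n_boot)
        signature += proportion * sigs[idx].mean(axis=0)
    return signature / signature.sum()   # renormalise to a proportion vector

# Tiny synthetic example: two prey species, three fatty acids.
rng = np.random.default_rng(0)
prey = {"cod":  rng.dirichlet([4, 2, 1], size=30),
        "seal": rng.dirichlet([1, 3, 5], size=30)}
pseudo = pseudo_predator_signature(prey, {"cod": 0.7, "seal": 0.3}, n_boot=10, rng=rng)
print(pseudo)
```

In the published method the bootstrap sample size itself is chosen so that the pseudo-predator signatures have realistic variability; the fixed n_boot used here is purely illustrative.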
Neuronify: An Educational Simulator for Neural Circuits.
Dragly, Svenn-Arne; Hobbi Mobarhan, Milad; Våvang Solbrå, Andreas; Tennøe, Simen; Hafreager, Anders; Malthe-Sørenssen, Anders; Fyhn, Marianne; Hafting, Torkel; Einevoll, Gaute T
2017-01-01
Educational software (apps) can improve science education by providing an interactive way of learning about complicated topics that are hard to explain with text and static illustrations. However, few educational apps are available for simulation of neural networks. Here, we describe an educational app, Neuronify, allowing the user to easily create and explore neural networks in a plug-and-play simulation environment. The user can pick network elements with adjustable parameters from a menu, i.e., synaptically connected neurons modelled as integrate-and-fire neurons and various stimulators (current sources, spike generators, visual, and touch) and recording devices (voltmeter, spike detector, and loudspeaker). We aim to provide a low entry point to simulation-based neuroscience by allowing students with no programming experience to create and simulate neural networks. To facilitate the use of Neuronify in teaching, a set of premade common network motifs is provided, performing functions such as input summation, gain control by inhibition, and detection of direction of stimulus movement. Neuronify is developed in C++ and QML using the cross-platform application framework Qt and runs on smart phones (Android, iOS) and tablet computers as well as personal computers (Windows, Mac, Linux).
NiftySim: A GPU-based nonlinear finite element package for simulation of soft tissue biomechanics.
Johnsen, Stian F; Taylor, Zeike A; Clarkson, Matthew J; Hipwell, John; Modat, Marc; Eiben, Bjoern; Han, Lianghao; Hu, Yipeng; Mertzanidou, Thomy; Hawkes, David J; Ourselin, Sebastien
2015-07-01
NiftySim, an open-source finite element toolkit, has been designed to allow incorporation of high-performance soft tissue simulation capabilities into biomedical applications. The toolkit provides the option of execution on fast graphics processing unit (GPU) hardware, numerous constitutive models and solid-element options, membrane and shell elements, and contact modelling facilities, in a simple-to-use library. The toolkit is founded on the total Lagrangian explicit dynamics (TLED) algorithm, which has been shown to be efficient and accurate for simulation of soft tissues. The base code is written in C++, and GPU execution is achieved using the nVidia CUDA framework. In most cases, interaction with the underlying solvers can be achieved through a single Simulator class, which may be embedded directly in third-party applications such as surgical guidance systems. Advanced capabilities such as contact modelling and nonlinear constitutive models are also provided, as are more experimental technologies like reduced order modelling. A consistent description of the underlying solution algorithm, its implementation with a focus on GPU execution, and examples of the toolkit's usage in biomedical applications are provided. Efficient mapping of the TLED algorithm to parallel hardware results in very high computational performance, far exceeding that available in commercial packages. The NiftySim toolkit provides high-performance soft tissue simulation capabilities using GPU technology for biomechanical simulation research applications in medical image computing, surgical simulation, and surgical guidance applications.
Parallel Tensor Compression for Large-Scale Scientific Data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kolda, Tamara G.; Ballard, Grey; Austin, Woody Nathan
As parallel computing trends towards the exascale, scientific data produced by high-fidelity simulations are growing increasingly massive. For instance, a simulation on a three-dimensional spatial grid with 512 points per dimension that tracks 64 variables per grid point for 128 time steps yields 8 TB of data. By viewing the data as a dense five-way tensor, we can compute a Tucker decomposition to find inherent low-dimensional multilinear structure, achieving compression ratios of up to 10000 on real-world data sets with negligible loss in accuracy. So that we can operate on such massive data, we present the first-ever distributed-memory parallel implementation for the Tucker decomposition, whose key computations correspond to parallel linear algebra operations, albeit with nonstandard data layouts. Our approach specifies a data distribution for tensors that avoids any tensor data redistribution, either locally or in parallel. We provide accompanying analysis of the computation and communication costs of the algorithms. To demonstrate the compression and accuracy of the method, we apply our approach to real-world data sets from combustion science simulations. We also provide detailed performance results, including parallel performance in both weak and strong scaling experiments.
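The mode-wise structure behind the Tucker compression described above can be illustrated with a small, serial sketch. The following Python code computes a truncated higher-order SVD (one simple way to obtain a Tucker decomposition) with NumPy; it is a toy for a single workstation, not the distributed-memory implementation the abstract describes, and the function names are hypothetical.

```python
import numpy as np

def unfold(tensor, mode):
    """Matricise a tensor along the given mode (mode-n unfolding)."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def hosvd(tensor, ranks):
    """Truncated higher-order SVD: factors from each unfolding, core by projection."""
    factors = []
    for mode, r in enumerate(ranks):
        u, _, _ = np.linalg.svd(unfold(tensor, mode), full_matrices=False)
        factors.append(u[:, :r])                      # leading left singular vectors
    core = tensor
    for mode, u in enumerate(factors):                # project onto each factor
        core = np.moveaxis(np.tensordot(u.T, core, axes=(1, mode)), 0, mode)
    return core, factors

# Compress a small random 3-way tensor and report error and compression ratio.
x = np.random.rand(40, 40, 40)
core, factors = hosvd(x, ranks=(8, 8, 8))
approx = core
for mode, u in enumerate(factors):                    # reconstruct from core and factors
    approx = np.moveaxis(np.tensordot(u, approx, axes=(1, mode)), 0, mode)
stored = core.size + sum(u.size for u in factors)
print("relative error:", np.linalg.norm(x - approx) / np.linalg.norm(x))
print("compression ratio:", x.size / stored)
```

On structured simulation data the truncated ranks capture most of the variance, which is where the large compression ratios quoted above come from; on the random tensor used here the error is of course large, as the example only demonstrates the mechanics.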
Parallel Signal Processing and System Simulation using aCe
NASA Technical Reports Server (NTRS)
Dorband, John E.; Aburdene, Maurice F.
2003-01-01
Recently, networked and cluster computation have become very popular for both signal processing and system simulation. A new language is ideally suited for parallel signal processing applications and system simulation since it allows the programmer to explicitly express the computations that can be performed concurrently. In addition, the new C-based parallel language (aCe C) for architecture-adaptive programming allows programmers to implement algorithms and system simulation applications on parallel architectures by providing them with the assurance that future parallel architectures will be able to run their applications with a minimum of modification. In this paper, we will focus on some fundamental features of aCe C and present a signal processing application (FFT).
Protein free energy landscapes from long equilibrium simulations
NASA Astrophysics Data System (ADS)
Piana-Agostinetti, Stefano
Many computational techniques based on molecular dynamics (MD) simulation can be used to generate data to aid in the construction of protein free energy landscapes with atomistic detail. Unbiased, long, equilibrium MD simulations, although computationally very expensive, are particularly appealing, as they can provide direct kinetic and thermodynamic information on the transitions between the states that populate a protein free energy surface. Even with this direct technique, however, it can be challenging to know how to analyze and interpret the results. I will discuss approaches we have employed, using equilibrium MD simulation data, to obtain descriptions of the free energy landscapes of proteins ranging in size from tens to thousands of amino acids.
Shuttle environmental and thermal control/life support system computer program, supplement 1
NASA Technical Reports Server (NTRS)
Ayotte, W. J.
1975-01-01
The computer programs developed to simulate the RSECS (Representative Shuttle Environmental Control System) were described. These programs were prepared to provide pretest predictions, post-test analysis and real time problem analysis for RSECS test planning and evaluation.
Designing and using computer simulations in medical education and training: an introduction.
Friedl, Karl E; O'Neil, Harold F
2013-10-01
Computer-based technologies informed by the science of learning are becoming increasingly prevalent in education and training. For the Department of Defense (DoD), this presents a great potential advantage to the effective preparation of a new generation of technologically enabled service members. Military medicine has broad education and training challenges ranging from first aid and personal protective skills for every service member to specialized combat medic training; many of these challenges can be met with gaming and simulation technologies that this new generation has embraced. However, comprehensive use of medical games and simulation to augment expert mentorship is still limited to elite medical provider training programs, but can be expected to become broadly used in the training of first responders and allied health care providers. The purpose of this supplement is to review the use of computer games and simulation to teach and assess medical knowledge and skills. This review and other DoD research policy sources will form the basis for development of a research and development road map and guidelines for use of this technology in military medicine. Reprint & Copyright © 2013 Association of Military Surgeons of the U.S.
Rim, Yonghoon; Laing, Susan T; McPherson, David D; Kim, Hyunggun
2014-01-01
Mitral valve (MV) repair using expanded polytetrafluoroethylene sutures is an established and preferred interventional method to resolve the complex pathophysiologic problems associated with chordal rupture. We developed a novel computational evaluation protocol to determine the effect of the artificial sutures on restoring MV function following valve repair. A virtual MV was created using three-dimensional echocardiographic data in a patient with ruptured mitral chordae tendineae (RMCT). Virtual repairs were designed by adding artificial sutures between the papillary muscles and the posterior leaflet where the native chordae were ruptured. Dynamic finite element simulations were performed to evaluate pre- and post-repair MV function. Abnormal posterior leaflet prolapse and mitral regurgitation were clearly demonstrated in the MV with ruptured chordae. Following virtual repair to reconstruct the ruptured chordae, the severity of the posterior leaflet prolapse decreased and stress concentration was markedly reduced both in the leaflet tissue and the intact native chordae. Complete leaflet coaptation was restored when four or six sutures were utilized. Computational simulations provided quantitative information on functional improvement following MV repair. This novel simulation strategy may provide a powerful tool for evaluation and prediction of interventional treatment for RMCT.
Duality quantum algorithm efficiently simulates open quantum systems
Wei, Shi-Jie; Ruan, Dong; Long, Gui-Lu
2016-01-01
Because of inevitable coupling with the environment, nearly all practical quantum systems are open systems, where the evolution is not necessarily unitary. In this paper, we propose a duality quantum algorithm for simulating Hamiltonian evolution of an open quantum system. In contrast to unitary evolution in a usual quantum computer, the evolution operator in a duality quantum computer is a linear combination of unitary operators. In this duality quantum algorithm, the time evolution of the open quantum system is realized by using Kraus operators, which are naturally implemented in a duality quantum computer. This duality quantum algorithm has two distinct advantages compared to existing quantum simulation algorithms with unitary evolution operations. Firstly, the query complexity of the algorithm is O(d³), in contrast to O(d⁴) in the existing unitary simulation algorithm, where d is the dimension of the open quantum system. Secondly, by using a truncated Taylor series of the evolution operators, this duality quantum algorithm provides an exponential improvement in precision compared with the previous unitary simulation algorithm. PMID:27464855
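The Kraus-operator picture mentioned above can be sketched classically: a channel maps a density matrix rho to the sum of K_k rho K_k†. The following NumPy sketch applies a standard amplitude-damping channel; it illustrates the mathematical construction only and says nothing about the duality quantum algorithm or its query complexity.

```python
import numpy as np

def apply_channel(rho, kraus_ops):
    """Evolve a density matrix through a channel given by Kraus operators:
    rho -> sum_k K_k rho K_k^dagger."""
    return sum(k @ rho @ k.conj().T for k in kraus_ops)

# Amplitude damping channel (a standard single-qubit open-system example).
gamma = 0.3
k0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1.0 - gamma)]])
k1 = np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])

rho = np.array([[0.0, 0.0], [0.0, 1.0]])      # qubit starts in the excited state
for _ in range(5):                             # repeated application decays it toward |0><0|
    rho = apply_channel(rho, [k0, k1])
print(np.real_if_close(np.diag(rho)))          # populations; the trace stays 1
```

The completeness relation K0†K0 + K1†K1 = I guarantees trace preservation, which is why the printed populations always sum to one.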
Lu, Zhonghua; Arikatla, Venkata S; Han, Zhongqing; Allen, Brian F; De, Suvranu
2014-12-01
High-frequency electricity is used in the majority of surgical interventions. However, modern computer-based training and simulation systems rely on physically unrealistic models that fail to capture the interplay of the electrical, mechanical and thermal properties of biological tissue. We present a real-time and physically realistic simulation of electrosurgery by modelling the electrical, thermal and mechanical properties as three iteratively solved finite element models. To provide subfinite-element graphical rendering of vaporized tissue, a dual-mesh dynamic triangulation algorithm based on isotherms is proposed. The block compressed row storage (BCRS) structure is shown to be critical in allowing computationally efficient changes in the tissue topology due to vaporization. We have demonstrated our physics-based electrosurgery cutting algorithm through various examples. Our matrix manipulation algorithms designed for topology changes have shown low computational cost. Our simulator offers substantially greater physical fidelity compared to previous simulators that use simple geometry-based heat characterization. Copyright © 2013 John Wiley & Sons, Ltd.
A versatile model for soft patchy particles with various patch arrangements.
Li, Zhan-Wei; Zhu, You-Liang; Lu, Zhong-Yuan; Sun, Zhao-Yan
2016-01-21
We propose a simple and general mesoscale soft patchy particle model, which can felicitously describe the deformable and surface-anisotropic characteristics of soft patchy particles. This model can be used in dynamics simulations to investigate the aggregation behavior and mechanism of various types of soft patchy particles with tunable number, size, direction, and geometrical arrangement of the patches. To improve the computational efficiency of this mesoscale model in dynamics simulations, we give the simulation algorithm that fits the compute unified device architecture (CUDA) framework of NVIDIA graphics processing units (GPUs). The validation of the model and the performance of the simulations using GPUs are demonstrated by simulating several benchmark systems of soft patchy particles with 1 to 4 patches in a regular geometrical arrangement. Because of its simplicity and computational efficiency, the soft patchy particle model will provide a powerful tool to investigate the aggregation behavior of soft patchy particles, such as patchy micelles, patchy microgels, and patchy dendrimers, over larger spatial and temporal scales.
A users' guide to the trace contaminant control simulation computer program
NASA Technical Reports Server (NTRS)
Perry, J. L.
1994-01-01
The Trace Contaminant Control Simulation computer program is a tool for assessing the performance of various trace contaminant control technologies for removing trace chemical contamination from a spacecraft cabin atmosphere. The results obtained from the program can be useful in assessing different technology combinations, system sizing, system location with respect to other life support systems, and the overall life cycle economics of a trace contaminant control system. The user's manual is extracted in its entirety from NASA TM-108409 to provide a stand-alone reference for using any version of the program. The first publication of the manual as part of TM-108409 also included a detailed listing of version 8.0 of the program. As changes to the code were necessary, it became apparent that the user's manual should be separate from the computer code documentation and be general enough to provide guidance in using any version of the program. Provided in the guide are tips for input file preparation, general program execution, and output file manipulation. Information concerning source code listings of the latest version of the computer program may be obtained by contacting the author.
NASA Astrophysics Data System (ADS)
Chooramun, N.; Lawrence, P. J.; Galea, E. R.
2017-08-01
In all evacuation simulation tools, the space through which agents navigate and interact is represented by one of the following methods, namely Coarse regions, Fine nodes and Continuous regions. Each of the spatial representation methods has its benefits and limitations. For instance, the Coarse approach allows simulations to be processed very rapidly, but is unable to represent the interactions of the agents from an individual perspective; the Continuous approach provides a detailed representation of agent movement and interaction but suffers from relatively poor computational performance. The Fine nodal approach presents a compromise between the Continuous and Coarse approaches such that it allows agent interaction to be modelled while providing good computational performance. Our approach for representing space in an evacuation simulation tool differs in that it allows evacuation simulations to be run using a combination of Coarse regions, Fine nodes and Continuous regions. This approach, which we call Hybrid Spatial Discretisation (HSD), is implemented within the buildingEXODUS evacuation simulation software. The HSD incorporates the benefits of each of the spatial representation methods whilst providing an optimal environment for representing agent movement and interaction. In this work, we demonstrate the effectiveness of the HSD through its application to a moderately large case comprising an underground rail tunnel station with a population of 2,000 agents.
AESS: Accelerated Exact Stochastic Simulation
NASA Astrophysics Data System (ADS)
Jenkins, David D.; Peterson, Gregory D.
2011-12-01
The Stochastic Simulation Algorithm (SSA) developed by Gillespie provides a powerful mechanism for exploring the behavior of chemical systems with small species populations or with important noise contributions. Gene circuit simulations for systems biology commonly employ the SSA method, as do ecological applications. This algorithm tends to be computationally expensive, so researchers seek an efficient implementation of SSA. In this program package, the Accelerated Exact Stochastic Simulation Algorithm (AESS) contains optimized implementations of Gillespie's SSA that improve the performance of individual simulation runs or ensembles of simulations used for sweeping parameters or to provide statistically significant results.
Program summary
Program title: AESS
Catalogue identifier: AEJW_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJW_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: University of Tennessee copyright agreement
No. of lines in distributed program, including test data, etc.: 10 861
No. of bytes in distributed program, including test data, etc.: 394 631
Distribution format: tar.gz
Programming language: C for processors, CUDA for NVIDIA GPUs
Computer: Developed and tested on various x86 computers and NVIDIA C1060 Tesla and GTX 480 Fermi GPUs. The system targets x86 workstations, optionally with multicore processors or NVIDIA GPUs as accelerators.
Operating system: Tested under Ubuntu Linux OS and CentOS 5.5 Linux OS
Classification: 3, 16.12
Nature of problem: Simulation of chemical systems, particularly with low species populations, can be accurately performed using Gillespie's method of stochastic simulation. Numerous variations on the original stochastic simulation algorithm have been developed, including approaches that produce results with statistics that exactly match the chemical master equation (CME) as well as other approaches that approximate the CME.
Solution method: The Accelerated Exact Stochastic Simulation (AESS) tool provides implementations of a wide variety of popular variations on the Gillespie method. Users can select the specific algorithm considered most appropriate. Comparisons between the methods and with other available implementations indicate that AESS provides the fastest known implementation of Gillespie's method for a variety of test models. Users may wish to execute ensembles of simulations to sweep parameters or to obtain better statistical results, so AESS supports acceleration of ensembles of simulation using parallel processing with MPI, SSE vector units on x86 processors, and/or using NVIDIA GPUs with CUDA.
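For readers unfamiliar with the direct method that AESS accelerates, the following Python sketch shows the core loop of Gillespie's algorithm: draw the waiting time to the next reaction from an exponential distribution whose rate is the total propensity, then choose which reaction fires in proportion to its propensity. It is a plain reference implementation with assumed interfaces, not the optimized C/CUDA code distributed with AESS.

```python
import numpy as np

def gillespie_direct(x0, stoich, rates, propensity, t_max, rng):
    """Gillespie's direct method for a well-mixed chemical system.

    x0         : initial copy numbers (array)
    stoich     : (n_reactions, n_species) state-change matrix
    rates      : reaction rate constants
    propensity : function (x, rates) -> array of reaction propensities
    """
    t, x = 0.0, np.array(x0, dtype=float)
    times, states = [t], [x.copy()]
    while t < t_max:
        a = propensity(x, rates)
        a0 = a.sum()
        if a0 <= 0.0:
            break                                  # no reaction can fire
        t += rng.exponential(1.0 / a0)             # time to the next reaction
        j = rng.choice(len(a), p=a / a0)           # which reaction fires
        x += stoich[j]
        times.append(t)
        states.append(x.copy())
    return np.array(times), np.array(states)

# Simple birth-death process: 0 -> X at rate k1, X -> 0 at rate k2 * X.
stoich = np.array([[+1], [-1]])
prop = lambda x, k: np.array([k[0], k[1] * x[0]])
rng = np.random.default_rng(1)
t, x = gillespie_direct([0], stoich, [10.0, 0.1], prop, t_max=100.0, rng=rng)
print("final copy number:", x[-1, 0], "(steady-state mean is k1/k2 = 100)")
```

The many SSA variants mentioned in the program summary mainly differ in how the reaction index is selected and how propensities are updated; the exponential waiting-time step above is common to all exact formulations.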
The TeraShake Computational Platform for Large-Scale Earthquake Simulations
NASA Astrophysics Data System (ADS)
Cui, Yifeng; Olsen, Kim; Chourasia, Amit; Moore, Reagan; Maechling, Philip; Jordan, Thomas
Geoscientific and computer science researchers with the Southern California Earthquake Center (SCEC) are conducting a large-scale, physics-based, computationally demanding earthquake system science research program with the goal of developing predictive models of earthquake processes. The computational demands of this program continue to increase rapidly as these researchers seek to perform physics-based numerical simulations of earthquake processes for ever larger and more detailed problems. To meet the needs of this research program, a multiple-institution team coordinated by SCEC has integrated several scientific codes into a numerical modeling-based research tool we call the TeraShake computational platform (TSCP). A central component in the TSCP is a highly scalable earthquake wave propagation simulation program called the TeraShake anelastic wave propagation (TS-AWP) code. In this chapter, we describe how we extended an existing, stand-alone, well-validated, finite-difference, anelastic wave propagation modeling code into the highly scalable and widely used TS-AWP and then integrated this code into the TeraShake computational platform that provides end-to-end (initialization to analysis) research capabilities. We also describe the techniques used to enhance the TS-AWP parallel performance on TeraGrid supercomputers, as well as the TeraShake simulation phases, including input preparation, run time, data archive management, and visualization. As a result of our efforts to improve its parallel efficiency, the TS-AWP has now shown highly efficient strong scaling on over 40K processors on IBM's BlueGene/L Watson computer. In addition, the TSCP has developed into a computational system that is useful to many members of the SCEC community for performing large-scale earthquake simulations.
Power combining in an array of microwave power rectifiers
NASA Technical Reports Server (NTRS)
Gutmann, R. J.; Borrego, J. M.
1979-01-01
This work analyzes the resultant efficiency degradation when identical rectifiers operate at different RF power levels as caused by the power beam taper. Both a closed-form analytical circuit model and a detailed computer-simulation model are used to obtain the output dc load line of the rectifier. The efficiency degradation is nearly identical with series and parallel combining, and the closed-form analytical model provides results which are similar to the detailed computer-simulation model.
NASA Astrophysics Data System (ADS)
Neyts, Erik C.; Yusupov, Maksudbek; Verlackt, Christof C.; Bogaerts, Annemie
2014-07-01
Plasma medicine is a rapidly evolving multidisciplinary field at the intersection of chemistry, biochemistry, physics, biology, medicine and bioengineering. It holds great potential in medical, health care, dentistry, surgical, food treatment and other applications. This multidisciplinary nature and variety of possible applications come along with an inherent and intrinsic complexity. Advancing plasma medicine to the stage that it becomes an everyday tool in its respective fields requires a fundamental understanding of the basic processes, which is lacking so far. However, some major advances have already been made through detailed experiments over the last 15 years. As a complement, computer simulations may provide insight that is difficult, if not impossible, to obtain through experiments. In this review, we aim to provide an overview of the various simulations that have been carried out in the context of plasma medicine so far, or that are relevant for plasma medicine. We focus our attention mostly on atomistic simulations dealing with plasma-biomolecule interactions. We also provide a perspective and tentative list of opportunities for future modelling studies that are likely to further advance the field.
Simulation System for Training in Laparoscopic Surgery
NASA Technical Reports Server (NTRS)
Basdogan, Cagatay; Ho, Chih-Hao
2003-01-01
A computer-based simulation system creates a visual and haptic virtual environment for training a medical practitioner in laparoscopic surgery. Heretofore, it has been common practice to perform training in partial laparoscopic surgical procedures by use of a laparoscopic training box that encloses a pair of laparoscopic tools, objects to be manipulated by the tools, and an endoscopic video camera. However, the surgical procedures simulated by use of a training box are usually poor imitations of the actual ones. The present computer-based system improves training by presenting a more realistic simulated environment to the trainee. The system includes a computer monitor that displays a real-time image of the affected interior region of the patient, showing laparoscopic instruments interacting with organs and tissues, as would be viewed by use of an endoscopic video camera and displayed to a surgeon during a laparoscopic operation. The system also includes laparoscopic tools that the trainee manipulates while observing the image on the computer monitor. The instrumentation on the tools consists of (1) position and orientation sensors that provide input data for the simulation and (2) actuators that provide force feedback to simulate the contact forces between the tools and tissues. The simulation software includes components that model the geometries of surgical tools, components that model the geometries and physical behaviors of soft tissues, and components that detect collisions between them. Using the measured positions and orientations of the tools, the software detects whether they are in contact with tissues. In the event of contact, the deformations of the tissues and contact forces are computed by use of the geometric and physical models. The image on the computer screen shows tissues deformed accordingly, while the actuators apply the corresponding forces to the distal ends of the tools. For the purpose of demonstration, the system has been set up to simulate the insertion of a flexible catheter in a bile duct. [As thus configured, the system can also be used to simulate other endoscopic procedures (e.g., bronchoscopy and colonoscopy) that include the insertion of flexible tubes into flexible ducts.] A hybrid approach has been followed in developing the software for real-time simulation of the visual and haptic interactions (1) between forceps and the catheter, (2) between the forceps and the duct, and (3) between the catheter and the duct. The deformations of the duct are simulated by finite-element and modal-analysis procedures, using only the most significant vibration modes of the duct for computing deformations and interaction forces. The catheter is modeled as a set of virtual particles uniformly distributed along the center line of the catheter and connected to each other via linear and torsional springs and damping elements. The interactions between the forceps and the duct as well as the catheter are simulated by use of a ray-based haptic-interaction-simulation technique in which the forceps are modeled as connected line segments.
Unstructured mesh adaptivity for urban flooding modelling
NASA Astrophysics Data System (ADS)
Hu, R.; Fang, F.; Salinas, P.; Pain, C. C.
2018-05-01
Over the past few decades, urban floods have been gaining more attention due to their increase in frequency. To provide reliable flooding predictions in urban areas, various numerical models have been developed to perform high-resolution flood simulations. However, the use of high-resolution meshes across the whole computational domain causes a high computational burden. In this paper, a 2D control-volume and finite-element flood model using adaptive unstructured mesh technology has been developed. This adaptive unstructured mesh technique enables meshes to be adapted optimally in time and space in response to the evolving flow features, thus providing sufficient mesh resolution where and when it is required. It has the advantage of capturing the details of local flows and of the wetting and drying front while reducing the computational cost. Complex topographic features are represented accurately during the flooding process. For example, high-resolution meshes around the buildings and steep regions are placed when the flooding water reaches these regions. In this work a flooding event that happened in 2002 in Glasgow, Scotland, United Kingdom, has been simulated to demonstrate the capability of the adaptive unstructured mesh flooding model. The simulations have been performed using both fixed and adaptive unstructured meshes, and the results have been compared with previously published 2D and 3D results. The presented method shows that the 2D adaptive mesh model provides accurate results while having a low computational cost.
Computational Investigation of Fluidic Counterflow Thrust Vectoring
NASA Technical Reports Server (NTRS)
Hunter, Craig A.; Deere, Karen A.
1999-01-01
A computational study of fluidic counterflow thrust vectoring has been conducted. Two-dimensional numerical simulations were run using the computational fluid dynamics code PAB3D with two-equation turbulence closure and linear Reynolds stress modeling. For validation, computational results were compared to experimental data obtained at the NASA Langley Jet Exit Test Facility. In general, computational results were in good agreement with experimental performance data, indicating that efficient thrust vectoring can be obtained with low secondary flow requirements (less than 1% of the primary flow). An examination of the computational flowfield has revealed new details about the generation of a countercurrent shear layer, its relation to secondary suction, and its role in thrust vectoring. In addition to providing new information about the physics of counterflow thrust vectoring, this work appears to be the first documented attempt to simulate the counterflow thrust vectoring problem using computational fluid dynamics.
Relativistic interpretation of Newtonian simulations for cosmic structure formation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fidler, Christian; Tram, Thomas; Crittenden, Robert
2016-09-01
The standard numerical tools for studying non-linear collapse of matter are Newtonian N-body simulations. Previous work has shown that these simulations are in accordance with General Relativity (GR) up to first order in perturbation theory, provided that the effects from radiation can be neglected. In this paper we show that the present day matter density receives more than 1% corrections from radiation on large scales if Newtonian simulations are initialised before z = 50. We provide a relativistic framework in which unmodified Newtonian simulations are compatible with linear GR even in the presence of radiation. Our idea is to use GR perturbation theory to keep track of the evolution of relativistic species and the relativistic space-time consistent with the Newtonian trajectories computed in N-body simulations. If metric potentials are sufficiently small, they can be computed using a first-order Einstein–Boltzmann code such as CLASS. We make this idea rigorous by defining a class of GR gauges, the Newtonian motion gauges, which are defined such that matter particles follow Newtonian trajectories. We construct a simple example of a relativistic space-time within which unmodified Newtonian simulations can be interpreted.
Configuration Management File Manager Developed for Numerical Propulsion System Simulation
NASA Technical Reports Server (NTRS)
Follen, Gregory J.
1997-01-01
One of the objectives of the High Performance Computing and Communication Project's (HPCCP) Numerical Propulsion System Simulation (NPSS) is to provide a common and consistent way to manage applications, data, and engine simulations. The NPSS Configuration Management (CM) File Manager integrated with the Common Desktop Environment (CDE) window management system provides a common look and feel for the configuration management of data, applications, and engine simulations for U.S. engine companies. In addition, CM File Manager provides tools to manage a simulation. Features include managing input files, output files, textual notes, and any other material normally associated with simulation. The CM File Manager includes a generic configuration management Application Program Interface (API) that can be adapted for the configuration management repositories of any U.S. engine company.
Ko, Sungahn; Zhao, Jieqiong; Xia, Jing; Afzal, Shehzad; Wang, Xiaoyu; Abram, Greg; Elmqvist, Niklas; Kne, Len; Van Riper, David; Gaither, Kelly; Kennedy, Shaun; Tolone, William; Ribarsky, William; Ebert, David S
2014-12-01
We present VASA, a visual analytics platform consisting of a desktop application, a component model, and a suite of distributed simulation components for modeling the impact of societal threats such as weather, food contamination, and traffic on critical infrastructure such as supply chains, road networks, and power grids. Each component encapsulates a high-fidelity simulation model that together form an asynchronous simulation pipeline: a system of systems of individual simulations with a common data and parameter exchange format. At the heart of VASA is the Workbench, a visual analytics application providing three distinct features: (1) low-fidelity approximations of the distributed simulation components using local simulation proxies to enable analysts to interactively configure a simulation run; (2) computational steering mechanisms to manage the execution of individual simulation components; and (3) spatiotemporal and interactive methods to explore the combined results of a simulation run. We showcase the utility of the platform using examples involving supply chains during a hurricane as well as food contamination in a fast food restaurant chain.
Simulation and analysis of a geopotential research mission
NASA Technical Reports Server (NTRS)
Schutz, B. E.
1986-01-01
A computer simulation was performed for a Geopotential Research Mission (GRM) to enable study of the gravitational sensitivity of the range/rate measurement between two satellites and to provide a set of simulated measurements to assist in the evaluation of techniques developed for the determination of the gravity field. The simulation, identified as SGRM 8511, was conducted with two satellites in near circular, frozen orbits at 160 km altitude and separated by 300 km. High precision numerical integration of the polar orbits was used with a gravitational field complete to degree and order 180 coefficients and to degree 300 in orders 0 to 10. The set of simulated data for a mission duration of about 32 days was generated on a Cray X-MP computer. The characteristics of the simulation and the nature of the results are described.
Interval sampling methods and measurement error: a computer simulation.
Wirth, Oliver; Slaven, James; Taylor, Matthew A
2014-01-01
A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments. © Society for the Experimental Analysis of Behavior.
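A minimal version of the simulation logic described above can be sketched as follows: generate a stream of events across an observation period, then score it with momentary time sampling, partial-interval recording, and whole-interval recording, and compare each estimate with the true percentage of time engaged. The event model, interval lengths, and function names are assumptions for illustration, not those of the published program.

```python
import numpy as np

def simulate_sampling(duration, n_events, event_dur, interval, rng):
    """Score one observation period with three interval sampling methods and
    return each method's estimate of percent time engaged, plus the true value."""
    # Place events of fixed duration at random (possibly overlapping) start times.
    starts = rng.uniform(0.0, duration - event_dur, size=n_events)
    occupied = np.zeros(int(duration * 10))            # 0.1-s resolution occupancy grid
    for s in starts:
        occupied[int(s * 10):int((s + event_dur) * 10)] = 1
    true_pct = occupied.mean() * 100

    edges = np.arange(0.0, duration, interval)
    mts = pir = wir = 0
    for a in edges:
        chunk = occupied[int(a * 10):int((a + interval) * 10)]
        mts += chunk[-1]                 # momentary: score the instant ending the interval
        pir += chunk.any()               # partial: score if behavior occurred at all
        wir += chunk.all()               # whole: score only if behavior filled the interval
    n = len(edges)
    return true_pct, 100 * mts / n, 100 * pir / n, 100 * wir / n

rng = np.random.default_rng(2)
true_pct, mts, pir, wir = simulate_sampling(600, 20, 8.0, 10.0, rng)
print(f"true {true_pct:.1f}%  MTS {mts:.1f}%  PIR {pir:.1f}%  WIR {wir:.1f}%")
```

Repeating such runs over combinations of observation period, interval duration, and event duration, as the study does, yields the error distributions that distinguish the inherent biases of the three methods (partial-interval recording tends to overestimate and whole-interval recording to underestimate time engaged).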
Software for Simulating a Complex Robot
NASA Technical Reports Server (NTRS)
Goza, S. Michael
2003-01-01
RoboSim (Robot Simulation) is a computer program that simulates the poses and motions of the Robonaut, a developmental anthropomorphic robot that has a complex system of joints with 43 degrees of freedom and multiple modes of operation and control. RoboSim performs a full kinematic simulation of all degrees of freedom. It also includes interface components that duplicate the functionality of the real Robonaut interface with control software and human operators. Basically, users see no difference between the real Robonaut and the simulation. Consequently, new control algorithms can be tested by computational simulation, without risk to the Robonaut hardware, and without using excessive Robonaut-hardware experimental time, which is always at a premium. Previously developed software incorporated into RoboSim includes Enigma (for graphical displays), OSCAR (for kinematical computations), and NDDS (for communication between the Robonaut and external software). In addition, RoboSim incorporates unique inverse-kinematical algorithms for chains of joints that have fewer than six degrees of freedom (e.g., finger joints). In comparison with the algorithms of OSCAR, these algorithms are more readily adaptable and provide better results when using equivalent sets of data.
Web-Based Computational Chemistry Education with CHARMMing I: Lessons and Tutorial
Miller, Benjamin T.; Singh, Rishi P.; Schalk, Vinushka; Pevzner, Yuri; Sun, Jingjun; Miller, Carrie S.; Boresch, Stefan; Ichiye, Toshiko; Brooks, Bernard R.; Woodcock, H. Lee
2014-01-01
This article describes the development, implementation, and use of web-based “lessons” to introduce students and other newcomers to computer simulations of biological macromolecules. These lessons, i.e., interactive step-by-step instructions for performing common molecular simulation tasks, are integrated into the collaboratively developed CHARMM INterface and Graphics (CHARMMing) web user interface (http://www.charmming.org). Several lessons have already been developed with new ones easily added via a provided Python script. In addition to CHARMMing's new lessons functionality, web-based graphical capabilities have been overhauled and are fully compatible with modern mobile web browsers (e.g., phones and tablets), allowing easy integration of these advanced simulation techniques into coursework. Finally, one of the primary objections to web-based systems like CHARMMing has been that “point and click” simulation set-up does little to teach the user about the underlying physics, biology, and computational methods being applied. In response to this criticism, we have developed a freely available tutorial to bridge the gap between graphical simulation setup and the technical knowledge necessary to perform simulations without user interface assistance. PMID:25057988
Microstructure-Property-Design Relationships in the Simulation Era: An Introduction (PREPRINT)
2010-01-01
[Abstract not extracted; only report-documentation fragments survive. Recoverable content: microstructure-sensitive design tools for single-crystal turbine blades provide an accessible glimpse into future computational tools and their data requirements. Subject terms: single-crystal turbine blades, computational methods, integrated computational materials.]
ERIC Educational Resources Information Center
Stotter, Philip L.; Culp, George H.
An experimental course in organic chemistry utilized computer-assisted instructional (CAI) techniques. The CAI lessons provided tutorial drill and practice and simulated experiments and reactions. The Conversational Language for Instruction and Computing was used, along with a CDC 6400-6600 system; students scheduled and completed the lessons at…
NASA Technical Reports Server (NTRS)
Macdonald, G.
1983-01-01
A prototype Air Traffic Control facility and multiman flight simulator facility were designed and one of the component simulators was fabricated as a proof of concept. The facility was designed to provide a number of independent simple simulator cabs that would have the capability of some local, stand-alone processing that would in turn interface with a larger host computer. The system can accommodate up to eight flight simulators (commercially available instrument trainers) which could be operated stand-alone if no graphics were required or could operate in a common simulated airspace if connected to the host computer. A proposed addition to the original design is the capability of inputting pilot inputs and quantities displayed on the flight and navigation instruments to the microcomputer when the simulator operates in the stand-alone mode, to allow independent use of these commercially available instrument trainers for research. The conceptual design of the system and progress made to date on its implementation are described.
Monte Carlo simulation of biomolecular systems with BIOMCSIM
NASA Astrophysics Data System (ADS)
Kamberaj, H.; Helms, V.
2001-12-01
A new Monte Carlo simulation program, BIOMCSIM, is presented that has been developed in particular to simulate the behaviour of biomolecular systems, leading to insights and understanding of their functions. The computational complexity in Monte Carlo simulations of high density systems, with large molecules like proteins immersed in a solvent medium, or when simulating the dynamics of water molecules in a protein cavity, is enormous. The program presented in this paper seeks to provide these desirable features putting special emphasis on simulations in grand canonical ensembles. It uses different biasing techniques to increase the convergence of simulations, and periodic load balancing in its parallel version, to maximally utilize the available computer power. In periodic systems, the long-ranged electrostatic interactions can be treated by Ewald summation. The program is modularly organized, and implemented using an ANSI C dialect, so as to enhance its modifiability. Its performance is demonstrated in benchmark applications for the proteins BPTI and Cytochrome c Oxidase.
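To make the grand canonical emphasis above concrete, the following Python sketch shows the standard Metropolis insertion/deletion acceptance rules for a Lennard-Jones fluid, with the activity z = exp(βμ)/Λ³. It is a generic textbook construction assuming a simple pair potential and no biasing, Ewald summation, or parallelism, and is not the BIOMCSIM code itself; all function names and parameters are illustrative.

```python
import numpy as np

def pair_energy(pos, i, box, eps=1.0, sig=1.0):
    """Lennard-Jones energy of particle i with all others (minimum image)."""
    if len(pos) < 2:
        return 0.0
    d = pos - pos[i]
    d -= box * np.round(d / box)                      # periodic minimum image
    r2 = np.einsum('ij,ij->i', d, d)
    r2 = np.delete(r2, i)                             # drop the self-distance
    inv6 = (sig * sig / r2) ** 3
    return np.sum(4.0 * eps * (inv6 * inv6 - inv6))

def gcmc_step(pos, box, beta, z, rng):
    """One grand canonical insertion/deletion attempt (z = exp(beta*mu)/Lambda^3)."""
    vol, n = box ** 3, len(pos)
    if rng.random() < 0.5:                            # attempt an insertion
        trial = np.vstack([pos, rng.uniform(0.0, box, 3)])
        du = pair_energy(trial, n, box)
        if rng.random() < min(1.0, z * vol / (n + 1) * np.exp(-beta * du)):
            return trial
    elif n > 0:                                       # attempt a deletion
        i = rng.integers(n)
        du = -pair_energy(pos, i, box)
        if rng.random() < min(1.0, n / (z * vol) * np.exp(-beta * du)):
            return np.delete(pos, i, axis=0)
    return pos

rng = np.random.default_rng(3)
pos = np.empty((0, 3))                                # start from an empty box
for _ in range(20000):
    pos = gcmc_step(pos, box=8.0, beta=1.0, z=0.05, rng=rng)
print("particles in the box:", len(pos))
```

In a production grand canonical code such as the one described above, these unbiased moves converge slowly for dense systems, which is exactly why biasing techniques and careful treatment of long-ranged electrostatics matter.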
High-performance computational fluid dynamics: a custom-code approach
NASA Astrophysics Data System (ADS)
Fannon, James; Loiseau, Jean-Christophe; Valluri, Prashant; Bethune, Iain; Náraigh, Lennon Ó.
2016-07-01
We introduce a modified and simplified version of the pre-existing fully parallelized three-dimensional Navier-Stokes flow solver known as TPLS. We demonstrate how the simplified version can be used as a pedagogical tool for the study of computational fluid dynamics (CFD) and parallel computing. TPLS is at its heart a two-phase flow solver, and uses calls to a range of external libraries to accelerate its performance. However, in the present context we narrow the focus of the study to basic hydrodynamics and parallel computing techniques, and the code is therefore simplified and modified to simulate pressure-driven single-phase flow in a channel, using only relatively simple Fortran 90 code with MPI parallelization, but no calls to any other external libraries. The modified code is analysed in order to both validate its accuracy and investigate its scalability up to 1000 CPU cores. Simulations are performed for several benchmark cases in pressure-driven channel flow, including a turbulent simulation, wherein the turbulence is incorporated via the large-eddy simulation technique. The work may be of use to advanced undergraduate and graduate students as an introductory study in CFD, while also providing insight for those interested in more general aspects of high-performance computing.
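In the same pedagogical spirit, pressure-driven channel flow reduces in the steady laminar limit to a one-dimensional boundary-value problem that can be solved in a few lines. The following Python sketch (not part of TPLS, which is Fortran 90 with MPI) discretizes mu * u'' = dp/dx with no-slip walls and checks the result against the analytic parabolic profile; the function name and parameters are illustrative.

```python
import numpy as np

def poiseuille_profile(n, h=1.0, mu=1.0, dpdx=-1.0):
    """Steady laminar pressure-driven channel flow: solve mu * u'' = dp/dx
    on 0 <= y <= h with no-slip walls, using second-order finite differences."""
    y = np.linspace(0.0, h, n)
    dy = y[1] - y[0]
    a = np.zeros((n, n))
    b = np.full(n, dpdx / mu * dy * dy)
    for i in range(1, n - 1):                 # interior stencil: u[i-1] - 2u[i] + u[i+1]
        a[i, i - 1], a[i, i], a[i, i + 1] = 1.0, -2.0, 1.0
    a[0, 0] = a[-1, -1] = 1.0                 # no-slip boundary conditions
    b[0] = b[-1] = 0.0
    u = np.linalg.solve(a, b)
    exact = -dpdx / (2.0 * mu) * y * (h - y)  # analytic parabolic profile
    return y, u, exact

y, u, exact = poiseuille_profile(41)
print("max |error| vs analytic solution:", np.abs(u - exact).max())
```

Because the exact solution is quadratic, the second-order stencil reproduces it to round-off error, which makes this a convenient first validation case before moving to time-dependent or turbulent simulations.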
Yamamoto, Takehiro; Ueda, Shuya
2013-01-01
Biofilm is a slime-like complex aggregate of microorganisms and their products, extracellular polymer substances, that grows on a solid surface. The growth phenomenon of biofilm is relevant to the corrosion and clogging of water pipes, the chemical processes in a bioreactor, and bioremediation. In these phenomena, the behavior of the biofilm under flow has an important role. Therefore, controlling the biofilm behavior in each process is important. To provide a computational tool for analyzing biofilm growth, the present study proposes a computational model for the simulation of biofilm growth in flows. This model accounts for the growth, decay, detachment and adhesion of biofilms. The proposed model couples the computation of the surrounding fluid flow, using the finite volume method, with the simulation of biofilm growth, using the cellular automaton approach, a relatively low-computational-cost method. Furthermore, a stochastic approach for considering the adhesion process is proposed. Numerical simulations for the biofilm growth on a planar wall and that in an L-shaped rectangular channel were carried out. A variety of biofilm structures were observed depending on the strength of the flow. Moreover, the importance of the detachment and adhesion processes was confirmed.
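The cellular-automaton ingredient mentioned above can be pictured with a deliberately simplified sketch: cells on a lattice divide into empty neighbours and detach with a probability that grows with height, a crude stand-in for flow-induced shear. This toy omits the finite-volume flow coupling and the stochastic adhesion model of the study; all parameters and function names are hypothetical.

```python
import numpy as np

def grow_biofilm(width, height, steps, p_grow, p_detach, rng):
    """Toy cellular automaton for biofilm growth on a flat wall (row 0 = substratum).
    Occupied cells divide into an empty von Neumann neighbour; detachment probability
    increases with height as a crude stand-in for flow-induced shear."""
    grid = np.zeros((height, width), dtype=bool)
    grid[0, rng.integers(0, width, 5)] = True            # a few initial colonisers
    shifts = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    for _ in range(steps):
        for (r, c) in np.argwhere(grid):
            if rng.random() < p_grow:                     # division into an empty neighbour
                dr, dc = shifts[rng.integers(4)]
                nr, nc = r + dr, (c + dc) % width         # periodic sideways, walls top/bottom
                if 0 <= nr < height and not grid[nr, nc]:
                    grid[nr, nc] = True
            if rng.random() < p_detach * r / height:      # shear-like detachment, stronger higher up
                grid[r, c] = False
    return grid

rng = np.random.default_rng(4)
film = grow_biofilm(width=60, height=30, steps=200, p_grow=0.3, p_detach=0.2, rng=rng)
print("biofilm coverage:", film.mean())
```

In the published model the growth and detachment probabilities are not prescribed functions of height but are driven by the locally computed nutrient and flow fields, which is what produces the variety of structures reported under different flow strengths.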
NASA Technical Reports Server (NTRS)
Geyser, L. C.
1978-01-01
A digital computer program, DYGABCD, was developed that generates linearized, dynamic models of simulated turbofan and turbojet engines. DYGABCD is based on an earlier computer program, DYNGEN, that is capable of calculating simulated nonlinear steady-state and transient performance of one- and two-spool turbojet engines or two- and three-spool turbofan engines. Most control design techniques require linear system descriptions. For multiple-input/multiple-output systems such as turbine engines, state space matrix descriptions of the system are often desirable. DYGABCD computes the state space matrices commonly referred to as the A, B, C, and D matrices required for a linear system description. The report discusses the analytical approach and provides a user's manual, FORTRAN listings, and a sample case.
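The state-space matrices referred to above can in general be obtained from a nonlinear simulation by numerical perturbation about an operating point. The following Python sketch shows that generic central-difference linearization applied to a made-up two-state plant; it illustrates the idea only and is not the DYGABCD/DYNGEN engine model or its FORTRAN implementation.

```python
import numpy as np

def linearize(f, g, x0, u0, eps=1e-6):
    """Numerically linearise x' = f(x, u), y = g(x, u) about (x0, u0),
    returning the state-space matrices A, B, C, D by central differences."""
    def jac(func, z0, other, wrt_first):
        n_out = len(func(x0, u0))
        j = np.zeros((n_out, len(z0)))
        for k in range(len(z0)):
            dz = np.zeros_like(z0); dz[k] = eps
            if wrt_first:
                j[:, k] = (func(z0 + dz, other) - func(z0 - dz, other)) / (2 * eps)
            else:
                j[:, k] = (func(other, z0 + dz) - func(other, z0 - dz)) / (2 * eps)
        return j
    A = jac(f, x0, u0, True);  B = jac(f, u0, x0, False)
    C = jac(g, x0, u0, True);  D = jac(g, u0, x0, False)
    return A, B, C, D

# Hypothetical two-state nonlinear plant standing in for an engine model.
f = lambda x, u: np.array([-0.5 * x[0] ** 2 + u[0], x[0] - 0.8 * x[1]])
g = lambda x, u: np.array([2.0 * x[1] + 0.1 * u[0]])
A, B, C, D = linearize(f, g, x0=np.array([1.0, 1.25]), u0=np.array([0.5]))
print("A =\n", A, "\nB =\n", B, "\nC =\n", C, "\nD =\n", D)
```

The operating point here is chosen to be an equilibrium of the toy plant; in engine applications the perturbations would instead be applied about a steady-state operating condition computed by the nonlinear simulation.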
DOE Office of Scientific and Technical Information (OSTI.GOV)
MCCLEAN, JARROD; HANER, THOMAS; STEIGER, DAMIAN
FermiLib is an open source software package designed to facilitate the development and testing of algorithms for simulations of fermionic systems on quantum computers. Fermionic simulations represent an important application of early quantum devices with a lot of potential high value targets, such as quantum chemistry for the development of new catalysts. This software strives to provide a link between the required domain expertise in specific fermionic applications and quantum computing to enable more users to directly interface with, and develop for, these applications. It is an extensible Python library designed to interface with the high performance quantum simulator, ProjectQ, as well as application specific software such as PSI4 from the domain of quantum chemistry. Such software is key to enabling effective user facilities in quantum computation research.
USE OF COMPUTER-AIDED PROCESS ENGINEERING TOOL IN POLLUTION PREVENTION
Computer-Aided Process Engineering has become established in industry as a design tool. With the establishment of the CAPE-OPEN software specifications for process simulation environments, CAPE-OPEN provides a set of "middleware" standards that enable software developers to access...
PCs: Key to the Future. Business Center Provides Sound Skills and Good Attitudes.
ERIC Educational Resources Information Center
Pay, Renee W.
1991-01-01
The Advanced Computing/Management Training Program at Jordan Technical Center (Sandy, Utah) simulates an automated office to teach five sets of skills: computer architecture and operating systems, word processing, data processing, communications skills, and management principles. (SK)
Improved Field-Effect Transistor Equations for Computer Simulation.
ERIC Educational Resources Information Center
Kidd, Richard; Ardini, James
1979-01-01
Presents a laboratory experiment that was developed to acquaint physics students with field-effect transistor characteristics and circuits. Computer-drawn curves supplementing student laboratory exercises can be generated to provide more permanent, usable data than those taken from a curve tracer. (HM)
Seismic waveform modeling over cloud
NASA Astrophysics Data System (ADS)
Luo, Cong; Friederich, Wolfgang
2016-04-01
With fast-growing computational technologies, numerical simulation of seismic wave propagation has achieved great success. Obtaining synthetic waveforms through numerical simulation receives an increasing amount of attention from seismologists. However, computational seismology is a data-intensive research field, and the numerical packages usually come with a steep learning curve. Users are expected to master a considerable amount of computer knowledge and data processing skills. Training users to use the numerical packages and to correctly access and utilize the computational resources is a difficult task. In addition, access to HPC resources is a common difficulty for many users. To solve these problems, a cloud-based solution dedicated to shallow seismic waveform modeling has been developed with state-of-the-art web technologies. It is a web platform integrating both software and hardware in a multilayer architecture: a well-designed SQL database serves as the data layer, while HPC resources and a dedicated pipeline form the business layer. Through this platform, users no longer need to compile and manipulate various packages on a local machine within a local network to perform a simulation. By providing professional access to the computational code through its interfaces and delivering computational resources to users over the cloud, the platform allows users to customize simulations at expert level and to submit and run jobs through it.
Lee, Kyung Eun; Lee, Seo Ho; Shin, Eun-Seok; Shim, Eun Bo
2017-06-26
Hemodynamic simulation for quantifying fractional flow reserve (FFR) is often performed in a patient-specific geometry of the coronary arteries reconstructed from images acquired with various imaging modalities. Because optical coherence tomography (OCT) images can provide more precise vascular lumen geometry, regardless of stenotic severity, hemodynamic simulation based on OCT images may be effective. The aim of this study is to perform OCT-FFR simulations by coupling a 3D CFD model built from geometrically correct OCT images with an LPM based on vessel lengths extracted from CAG data, together with clinical validation of the present method. To simulate coronary hemodynamics, we developed a fast and accurate method that combined a computational fluid dynamics (CFD) model of an OCT-based region of interest (ROI) with a lumped parameter model (LPM) of the coronary microvasculature and veins. Here, the LPM was based on vessel lengths extracted from coronary X-ray angiography (CAG) images. Based on a vessel length-based approach, we describe a theoretical formulation for the total resistance of the LPM from a three-dimensional (3D) CFD model of the ROI. To show the utility of this method, we present calculated examples of FFR from OCT images. To validate the OCT-based FFR calculation (OCT-FFR) clinically, we compared the computed OCT-FFR values for 17 vessels of 13 patients with clinically measured FFR (M-FFR) values. A novel formulation for the total resistance of the LPM is introduced to accurately simulate a 3D CFD model of the ROI. The simulated FFR values compared well with clinically measured ones, showing the accuracy of the method. Moreover, the present method is fast in terms of computational time, enabling solutions to be obtained within the hospital setting.
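As a back-of-the-envelope companion to the OCT-FFR simulations above, the following Python sketch computes FFR from a purely schematic series-resistance model in which coronary flow passes through an epicardial stenosis resistance and a hyperaemic microvascular resistance; FFR is then approximated as the ratio of distal to aortic pressure. All resistance and pressure values are hypothetical, and this is not the paper's 3D CFD/LPM coupling or its total-resistance formulation.

```python
def ffr_series_model(p_aorta, p_venous, r_stenosis, r_micro):
    """Schematic series-resistance estimate of FFR: coronary flow passes through
    an epicardial stenosis resistance and the microvascular resistance in series."""
    flow = (p_aorta - p_venous) / (r_stenosis + r_micro)    # perfusion flow
    p_distal = p_aorta - flow * r_stenosis                  # pressure just distal to the stenosis
    return p_distal / p_aorta                                # FFR (venous pressure neglected in the ratio)

# Hypothetical numbers: hyperaemic microvascular resistance and increasing stenosis severity.
for r_sten in (0.0, 5.0, 15.0, 30.0):                        # mmHg*s/mL, illustrative only
    ffr = ffr_series_model(p_aorta=90.0, p_venous=5.0, r_stenosis=r_sten, r_micro=60.0)
    print(f"stenosis resistance {r_sten:5.1f} -> FFR = {ffr:.2f}")
```

The point of the sketch is only the coupling idea: the pressure drop across the region of interest depends on the downstream (microvascular) resistance, which is why the paper derives the LPM resistance from vessel lengths rather than treating the stenosis in isolation.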
SIM_ADJUST -- A computer code that adjusts simulated equivalents for observations or predictions
Poeter, Eileen P.; Hill, Mary C.
2008-01-01
This report documents the SIM_ADJUST computer code. SIM_ADJUST surmounts an obstacle that is sometimes encountered when using universal model analysis computer codes such as UCODE_2005 (Poeter and others, 2005), PEST (Doherty, 2004), and OSTRICH (Matott, 2005; Fredrick and others, 2007). These codes often read simulated equivalents from a list in a file produced by a process model such as MODFLOW that represents a system of interest. At times values needed by the universal code are missing or assigned default values because the process model could not produce a useful solution. SIM_ADJUST can be used to (1) read a file that lists expected observation or prediction names and possible alternatives for the simulated values; (2) read a file produced by a process model that contains space- or tab-delimited columns, including a column of simulated values and a column of related observation or prediction names; (3) identify observations or predictions that have been omitted or assigned a default value by the process model; and (4) produce an adjusted file that contains a column of simulated values and a column of associated observation or prediction names. The user may provide alternatives that are constant values or that are alternative simulated values. The user may also provide a sequence of alternatives. For example, the heads from a series of cells may be specified to ensure that a meaningful value is available to compare with an observation located in a cell that may become dry. SIM_ADJUST is constructed using modules from the JUPITER API and is intended for use on any computer operating system. SIM_ADJUST consists of algorithms programmed in Fortran90, which efficiently perform numerical calculations.
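SIM_ADJUST itself is Fortran90 built on the JUPITER API; the following is merely a Python sketch of the four-step adjustment logic described above, with the file layouts and the default-value flag assumed for illustration.

```python
# Sketch of the adjustment logic described above, not SIM_ADJUST itself.
# Assumed layouts: the alternatives file maps each observation name to a
# fallback value; the process-model output has whitespace-delimited columns
# "simulated_value  observation_name".
DEFAULT_FLAG = -999.0   # value the process model writes when it fails (assumed)

def read_alternatives(path):
    alts = {}
    with open(path) as f:
        for line in f:
            name, fallback = line.split()[:2]
            alts[name] = float(fallback)
    return alts

def adjust(model_output_path, alternatives_path, adjusted_path):
    alts = read_alternatives(alternatives_path)
    seen = set()
    with open(model_output_path) as fin, open(adjusted_path, "w") as fout:
        for line in fin:
            value_str, name = line.split()[:2]
            value = float(value_str)
            seen.add(name)
            # Replace defaults (e.g. a dry-cell head) with the fallback value.
            if value == DEFAULT_FLAG and name in alts:
                value = alts[name]
            fout.write(f"{value:.6g} {name}\n")
        # Append observations the process model omitted entirely.
        for name, fallback in alts.items():
            if name not in seen:
                fout.write(f"{fallback:.6g} {name}\n")
```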
A simulation study of hardwood rootstock populations in young loblolly pine plantations
David R. Weise; Glenn R. Glover
1988-01-01
A computer program to simulate spatial distribution of hardwood rootstock populations is presented. Nineteen 3- to 6-year-old loblolly pine (Pinus taeda L.) plantations in Alabama and Georgia were measured to provide information for the simulator. Spatial pattern, expressed as Pielou's nonrandomness index (PNI), ranged from 0.47 to 2.45. Scatterplots illustrated no...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, J.; Mowrey, J.
1995-12-01
This report describes the design, development and testing of process controls for selected system operations in the Browns Ferry Nuclear Plant (BFNP) Reactor Water Cleanup System (RWCU) using a Computer Simulation Platform which simulates the RWCU System and the BFNP Integrated Computer System (ICS). This system was designed to demonstrate the feasibility of the soft control (video touch screen) of nuclear plant systems through an operator console. The BFNP Integrated Computer System, which has recently been installed at BFNP Unit 2, was simulated to allow for operator control functions of the modeled RWCU system. The BFNP Unit 2 RWCU system was simulated using the RELAP5 Thermal/Hydraulic Simulation Model, which provided the steady-state and transient RWCU process variables and simulated the response of the system to control system inputs. Descriptions of the hardware and software developed are also included in this report. The testing and acceptance program and results are also detailed in this report. A discussion of potential installation of an actual RWCU process control system in BFNP Unit 2 is included. Finally, this report contains a section on industry issues associated with installation of process control systems in nuclear power plants.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Souris, Kevin, E-mail: kevin.souris@uclouvain.be; Lee, John Aldo; Sterpin, Edmond
2016-04-15
Purpose: Accuracy in proton therapy treatment planning can be improved using Monte Carlo (MC) simulations. However, the long computation time of such methods hinders their use in clinical routine. This work aims to develop a fast multipurpose Monte Carlo simulation tool for proton therapy using massively parallel central processing unit (CPU) architectures. Methods: A new Monte Carlo code, called MCsquare (many-core Monte Carlo), has been designed and optimized for the last generation of Intel Xeon processors and Intel Xeon Phi coprocessors. These massively parallel architectures offer the flexibility and the computational power suitable to MC methods. The class-II condensed history algorithm of MCsquare provides a fast and yet accurate method of simulating heavy charged particles such as protons, deuterons, and alphas inside voxelized geometries. Hard ionizations, with energy losses above a user-specified threshold, are simulated individually, while soft events are regrouped in a multiple scattering theory. Elastic and inelastic nuclear interactions are sampled from ICRU 63 differential cross sections, thereby allowing for the computation of prompt gamma emission profiles. MCsquare has been benchmarked against the GATE/GEANT4 Monte Carlo application for homogeneous and heterogeneous geometries. Results: Comparisons with GATE/GEANT4 for various geometries show deviations within 2%–1 mm. In spite of the limited memory bandwidth of the coprocessor, simulation time is below 25 s for 10^7 primary 200 MeV protons in average soft tissues using all Xeon Phi and CPU resources embedded in a single desktop unit. Conclusions: MCsquare exploits the flexibility of CPU architectures to provide a multipurpose MC simulation tool. Optimized code enables the use of accurate MC calculation within a reasonable computation time, adequate for clinical practice. MCsquare also simulates prompt gamma emission and can thus also be used for in vivo range verification.
Systems Biology in Immunology – A Computational Modeling Perspective
Germain, Ronald N.; Meier-Schellersheim, Martin; Nita-Lazar, Aleksandra; Fraser, Iain D. C.
2011-01-01
Systems biology is an emerging discipline that combines high-content, multiplexed measurements with informatic and computational modeling methods to better understand biological function at various scales. Here we present a detailed review of the methods used to create computational models and conduct simulations of immune function. We provide descriptions of the key data-gathering techniques employed to generate the quantitative and qualitative data required for such modeling and simulation and summarize the progress to date in applying these tools and techniques to questions of immunological interest, including infectious disease. We include comments on what insights modeling can provide that complement information obtained from the more familiar experimental discovery methods used by most investigators and why quantitative methods are needed to eventually produce a better understanding of immune system operation in health and disease. PMID:21219182
Enabling Co-Design of Multi-Layer Exascale Storage Architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carothers, Christopher
Growing demands for computing power in applications such as energy production, climate analysis, computational chemistry, and bioinformatics have propelled computing systems toward the exascale: systems with 10 18 floating-point operations per second. These systems, to be designed and constructed over the next decade, will create unprecedented challenges in component counts, power consumption, resource limitations, and system complexity. Data storage and access are an increasingly important and complex component in extreme-scale computing systems, and significant design work is needed to develop successful storage hardware and software architectures at exascale. Co-design of these systems will be necessary to find the best possible design points for exascale systems. The goal of this work has been to enable the exploration and co-design of exascale storage systems by providing a detailed, accurate, and highly parallel simulation of exascale storage and the surrounding environment. Specifically, this simulation has (1) portrayed realistic application checkpointing and analysis workloads, (2) captured the complexity, scale, and multilayer nature of exascale storage hardware and software, and (3) executed in a timeframe that enables “what if” exploration of design concepts. We developed models of the major hardware and software components in an exascale storage system, as well as the application I/O workloads that drive them. We used our simulation system to investigate critical questions in reliability and concurrency at exascale, helping guide the design of future exascale hardware and software architectures. Additionally, we provided this system to interested vendors and researchers so that others can explore the design space. We validated the capabilities of our simulation environment by configuring the simulation to represent the Argonne Leadership Computing Facility Blue Gene/Q system and comparing simulation results for application I/O patterns to the results of executions of these I/O kernels on the actual system.
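The project's simulator is a detailed, highly parallel code; as a much smaller illustration of the kind of "what if" question such a tool answers, the sketch below uses the SimPy discrete-event package (an outside tool, not part of this work) to model compute nodes writing periodic checkpoints through a bandwidth-limited storage service. All numbers are invented.

```python
# Toy discrete-event sketch: how long do periodic application checkpoints
# take when many nodes share limited storage bandwidth? Uses SimPy for
# illustration; it is NOT the parallel simulator developed in the project
# above, and all parameter values are made up.
import simpy

NODES = 64                 # compute nodes writing checkpoints
CKPT_GB = 4.0              # checkpoint size per node
BW_GB_S = 16.0             # aggregate storage bandwidth
PERIOD_S = 600.0           # compute time between checkpoints
io_times = []

def node(env, storage):
    while True:
        yield env.timeout(PERIOD_S)            # compute phase
        start = env.now
        with storage.request() as req:         # wait for a storage "lane"
            yield req
            yield env.timeout(CKPT_GB / (BW_GB_S / storage.capacity))
        io_times.append(env.now - start)       # time spent in checkpoint I/O

env = simpy.Environment()
storage = simpy.Resource(env, capacity=8)      # 8 concurrent writers allowed
for _ in range(NODES):
    env.process(node(env, storage))
env.run(until=4 * PERIOD_S)
print(f"mean checkpoint time: {sum(io_times) / len(io_times):.1f} s")
```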
Hydrodynamic Simulations of Protoplanetary Disks with GIZMO
NASA Astrophysics Data System (ADS)
Rice, Malena; Laughlin, Greg
2018-01-01
Over the past several decades, the field of computational fluid dynamics has rapidly advanced as the range of available numerical algorithms and computationally feasible physical problems has expanded. The development of modern numerical solvers has provided a compelling opportunity to reconsider previously obtained results in search of yet-undiscovered effects that may be revealed through longer integration times and more precise numerical approaches. In this study, we compare the results of past hydrodynamic disk simulations with those obtained from modern analytical resources. We focus our study on the GIZMO code (Hopkins 2015), which uses meshless methods to solve the homogeneous Euler equations of hydrodynamics while eliminating problems arising as a result of advection between grid cells. By comparing modern simulations with prior results, we hope to provide an improved understanding of the impact of fluid mechanics upon the evolution of protoplanetary disks.
NASA Technical Reports Server (NTRS)
Palkovic, R. A.
1974-01-01
A FORTRAN 4 computer program provides convenient simulation of an all-digital phase-lock loop (DPLL). The DPLL forms the heart of the Omega navigation receiver prototype. Through the DPLL, the phase of the 10.2 kHz Omega signal is estimated when the true signal phase is contaminated with noise. This investigation has provided a convenient means of evaluating loop performance in a variety of noise environments, and has proved to be a useful tool for evaluating design changes. The goals of the simulation are to: (1) analyze the circuit on a bit-by-bit level in order to evaluate the overall design; (2) see easily the effects of proposed design changes prior to actual breadboarding; and (3) determine the optimum integration time for the DPLL in an environment typical of general aviation conditions.
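The original FORTRAN IV program is not reproduced here; as a rough illustration of the underlying idea, the sketch below implements a first-order digital phase-lock loop that estimates the phase of a noisy 10.2 kHz tone, with the sampling rate, noise level, and loop gain chosen arbitrarily.

```python
# Minimal sketch of a first-order digital phase-lock loop estimating the
# phase of a noisy 10.2 kHz tone. Not the FORTRAN IV program itself.
import numpy as np

fs = 102_000.0                # sampling rate, Hz (assumed)
f0 = 10_200.0                 # Omega signal frequency, Hz
true_phase = 0.7              # rad, unknown to the loop
noise_sigma = 0.5             # additive noise level (toy value)
gain = 0.01                   # first-order loop gain (toy value)

rng = np.random.default_rng(0)
n = np.arange(200_000)
x = np.cos(2*np.pi*f0*n/fs + true_phase) + noise_sigma*rng.standard_normal(n.size)

phase_est = 0.0
for k, sample in enumerate(x):
    # Quadrature reference generated from the current phase estimate.
    ref_q = -np.sin(2*np.pi*f0*k/fs + phase_est)
    # Phase detector: averages to about 0.5*sin(true_phase - phase_est).
    error = sample * ref_q
    phase_est += gain * error       # nudge the estimate toward the true phase

print(f"estimated phase: {phase_est:.2f} rad (true value {true_phase})")
```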
NASA Technical Reports Server (NTRS)
Brauer, G. L.; Cornick, D. E.; Stevenson, R.
1977-01-01
The capabilities and applications of the three-degree-of-freedom (3DOF) version and the six-degree-of-freedom (6DOF) version of the Program to Optimize Simulated Trajectories (POST) are summarized. The document supplements the detailed program manuals by providing additional information that motivates and clarifies basic capabilities, input procedures, applications and computer requirements of these programs. The information will enable prospective users to evaluate the programs and to determine if they are applicable to their problems. Enough information is given to enable managerial personnel to evaluate the capabilities of the programs. The report describes the POST structure, formulation, input and output procedures, sample cases, and computer requirements, and provides answers to basic questions concerning planet and vehicle modeling, simulation accuracy, optimization capabilities, and general input rules. Several sample cases are presented.
NASA Technical Reports Server (NTRS)
Haley, D. C.; Almand, B. J.; Thomas, M. M.; Krauze, L. D.; Gremban, K. D.; Sanborn, J. C.; Kelley, J. H.; Depkovich, T. M.; Wolfe, W. J.; Nguyen, T.
1986-01-01
The purpose of the Robotics Simulation Program is to provide a broad range of computer capabilities to assist in the design, verification, simulation, and study of robotics systems. ROBSIM is a FORTRAN 77 program for use on a VAX 11/750 computer under the VMS operating system. This user's guide describes the capabilities of the ROBSIM programs, including the system definition function, the analysis tools function, and the postprocessor function. The options a user may encounter with each of these executables are explained in detail, and the different program prompts presented to the user are included. Some useful suggestions concerning the appropriate answers to be given by the user are provided. An example interactive run is enclosed for each of the main program services, and some of the capabilities are illustrated.
NASA Technical Reports Server (NTRS)
Lee, Hyung B.; Ghia, Urmila; Bayyuk, Sami; Oberkampf, William L.; Roy, Christopher J.; Benek, John A.; Rumsey, Christopher L.; Powers, Joseph M.; Bush, Robert H.; Mani, Mortaza
2016-01-01
Computational fluid dynamics (CFD) and other advanced modeling and simulation (M&S) methods are increasingly relied on for predictive performance, reliability and safety of engineering systems. Analysts, designers, decision makers, and project managers, who must depend on simulation, need practical techniques and methods for assessing simulation credibility. The AIAA Guide for Verification and Validation of Computational Fluid Dynamics Simulations (AIAA G-077-1998 (2002)), originally published in 1998, was the first engineering standards document available to the engineering community for verification and validation (V&V) of simulations. Much progress has been made in these areas since 1998. The AIAA Committee on Standards for CFD is currently updating this Guide to incorporate in it the important developments that have taken place in V&V concepts, methods, and practices, particularly with regard to the broader context of predictive capability and uncertainty quantification (UQ) methods and approaches. This paper will provide an overview of the changes and extensions currently underway to update the AIAA Guide. Specifically, a framework for predictive capability will be described for incorporating a wide range of error and uncertainty sources identified during the modeling, verification, and validation processes, with the goal of estimating the total prediction uncertainty of the simulation. The Guide's goal is to provide a foundation for understanding and addressing major issues and concepts in predictive CFD. However, this Guide will not recommend specific approaches in these areas as the field is rapidly evolving. It is hoped that the guidelines provided in this paper, and explained in more detail in the Guide, will aid in the research, development, and use of CFD in engineering decision-making.
Augmenting Sand Simulation Environments through Subdivision and Particle Refinement
NASA Astrophysics Data System (ADS)
Clothier, M.; Bailey, M.
2012-12-01
Recent advances in computer graphics and parallel processing hardware have provided disciplines with new methods to evaluate and visualize data. These advances have proven useful for earth and planetary scientists as many researchers are using this hardware to process large amounts of data for analysis. As such, this has provided opportunities for collaboration between computer graphics and the earth sciences. Through collaboration with the Oregon Space Grant and IGERT Ecosystem Informatics programs, we are investigating techniques for simulating the behavior of sand. We are also collaborating with the Jet Propulsion Laboratory's (JPL) DARTS Lab to exchange ideas and gain feedback on our research. The DARTS Lab specializes in simulation of planetary vehicles, such as the Mars rovers. Their simulations utilize a virtual "sand box" to test how a planetary vehicle responds to different environments. Our research builds upon this idea to create a sand simulation framework so that planetary environments, such as the harsh, sandy regions on Mars, are more fully realized. More specifically, we are focusing our research on the interaction between a planetary vehicle, such as a rover, and the sand beneath it, providing further insight into its performance. Unfortunately, this can be a computationally complex problem, especially if trying to represent the enormous quantities of sand particles interacting with each other. However, through the use of high-performance computing, we have developed a technique to subdivide areas of actively participating sand regions across a large landscape. Similar to a Level of Detail (LOD) technique, we only subdivide regions of a landscape where sand particles are actively participating with another object. While the sand is within this subdivision window and moves closer to the surface of the interacting object, the sand region subdivides into smaller regions until individual sand particles are left at the surface. As an example, let's say there is a planetary rover interacting with our sand simulation environment. Sand that is actively interacting with a rover wheel will be represented as individual particles whereas sand that is further under the surface will be represented by larger regions of sand. This technique allows many particles to be represented without the full computational cost of treating every particle individually. In developing this method, we have further generalized these subdivision regions into any volumetric area suitable for use in the simulation. This further improves our method, as it allows for more compact subdivision sand regions and helps to fine-tune the simulation so that more emphasis can be placed on regions of actively participating sand. We feel that through the generalization of our technique, our research can provide other opportunities within the earth and planetary sciences. Through collaboration with our academic colleagues, we continue to refine our technique and look for other opportunities to utilize our research.
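The abstract describes the subdivision scheme only qualitatively; the sketch below is a simplified two-dimensional quadtree version of the idea, refining sand regions only near an assumed contact point, with all thresholds chosen purely for illustration.

```python
# Illustrative sketch of the LOD-style idea above: sand regions are
# recursively subdivided only where they are close to an interacting
# object (e.g. a rover wheel), so fine "particles" appear only near the
# contact while distant sand stays coarsely aggregated. The 2D quadtree
# form and all parameter values are simplifying assumptions.
def subdivide(region, contact, min_size):
    """region = (x, y, size); returns leaf regions after refinement."""
    x, y, size = region
    cx, cy = x + size / 2, y + size / 2
    # Distance from region centre to the contact point.
    dist = ((cx - contact[0])**2 + (cy - contact[1])**2) ** 0.5
    # Refine only if the region is both large and close to the contact.
    if size <= min_size or dist > 2.0 * size:
        return [region]
    half = size / 2
    children = [(x, y, half), (x + half, y, half),
                (x, y + half, half), (x + half, y + half, half)]
    leaves = []
    for child in children:
        leaves.extend(subdivide(child, contact, min_size))
    return leaves

# A 16 m sand patch with a wheel contact near one corner; 0.25 m cells
# play the role of "individual particles" in this toy example.
leaves = subdivide((0.0, 0.0, 16.0), contact=(3.0, 2.0), min_size=0.25)
print(len(leaves), "regions; finest size:", min(r[2] for r in leaves))
```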
Multi-objective optimization of GENIE Earth system models.
Price, Andrew R; Myerscough, Richard J; Voutchkov, Ivan I; Marsh, Robert; Cox, Simon J
2009-07-13
The tuning of parameters in climate models is essential to provide reliable long-term forecasts of Earth system behaviour. We apply a multi-objective optimization algorithm to the problem of parameter estimation in climate models. This optimization process involves the iterative evaluation of response surface models (RSMs), followed by the execution of multiple Earth system simulations. These computations require an infrastructure that provides high-performance computing for building and searching the RSMs and high-throughput computing for the concurrent evaluation of a large number of models. Grid computing technology is therefore essential to make this algorithm practical for members of the GENIE project.
NASA Technical Reports Server (NTRS)
Rochelle, W. C.; Liu, D. K.; Nunnery, W. J., Jr.; Brandli, A. E.
1975-01-01
This paper describes the application of the SINDA (systems improved numerical differencing analyzer) computer program to simulate the operation of the NASA/JSC MIUS integration and subsystems test (MIST) laboratory. The MIST laboratory is designed to test the integration capability of the following subsystems of a modular integrated utility system (MIUS): (1) electric power generation, (2) space heating and cooling, (3) solid waste disposal, (4) potable water supply, and (5) waste water treatment. The SINDA/MIST computer model is designed to simulate the response of these subsystems to externally impressed loads. The computer model determines the amount of recovered waste heat from the prime mover exhaust, water jacket and oil/aftercooler and from the incinerator. This recovered waste heat is used in the model to heat potable water, for space heating, absorption air conditioning, waste water sterilization, and to provide for thermal storage. The details of the thermal and fluid simulation of MIST including the system configuration, modes of operation modeled, SINDA model characteristics and the results of several analyses are described.
Wang, Guang-Ye; Huang, Wen-Jun; Song, Qi; Qin, Yun-Tian; Liang, Jin-Feng
2016-12-01
Acetabular fractures have always been very challenging for orthopedic surgeons; therefore, appropriate preoperative evaluation and planning are particularly important. This study aimed to explore the application methods and clinical value of preoperative computer simulation (PCS) in treating pelvic and acetabular fractures. Spiral computed tomography (CT) was performed on 13 patients with pelvic and acetabular fractures, and Digital Imaging and Communications in Medicine (DICOM) data were then input into Mimics software to reconstruct three-dimensional (3D) models of actual pelvic and acetabular fractures for preoperative simulative reduction and fixation, and to simulate each surgical procedure. The times needed for virtual surgical modeling and reduction and fixation were also recorded. The average fracture-modeling time was 45 min (30-70 min), and the average time for bone reduction and fixation was 28 min (16-45 min). Among the surgical approaches planned for these 13 patients, 12 were finally adopted; 12 cases used the simulated surgical fixation, and only 1 case used a partial planned fixation method. PCS can provide accurate surgical plans and data support for actual surgeries.
Generating optimal control simulations of musculoskeletal movement using OpenSim and MATLAB.
Lee, Leng-Feng; Umberger, Brian R
2016-01-01
Computer modeling, simulation and optimization are powerful tools that have seen increased use in biomechanics research. Dynamic optimizations can be categorized as either data-tracking or predictive problems. The data-tracking approach has been used extensively to address human movement problems of clinical relevance. The predictive approach also holds great promise, but has seen limited use in clinical applications. Enhanced software tools would facilitate the application of predictive musculoskeletal simulations to clinically-relevant research. The open-source software OpenSim provides tools for generating tracking simulations but not predictive simulations. However, OpenSim includes an extensive application programming interface that permits extending its capabilities with scripting languages such as MATLAB. In the work presented here, we combine the computational tools provided by MATLAB with the musculoskeletal modeling capabilities of OpenSim to create a framework for generating predictive simulations of musculoskeletal movement based on direct collocation optimal control techniques. In many cases, the direct collocation approach can be used to solve optimal control problems considerably faster than traditional shooting methods. Cyclical and discrete movement problems were solved using a simple 1 degree of freedom musculoskeletal model and a model of the human lower limb, respectively. The problems could be solved in reasonable amounts of time (several seconds to 1-2 hours) using the open-source IPOPT solver. The problems could also be solved using the fmincon solver that is included with MATLAB, but the computation times were excessively long for all but the smallest of problems. The performance advantage for IPOPT was derived primarily by exploiting sparsity in the constraints Jacobian. The framework presented here provides a powerful and flexible approach for generating optimal control simulations of musculoskeletal movement using OpenSim and MATLAB. This should allow researchers to more readily use predictive simulation as a tool to address clinical conditions that limit human mobility.
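The framework itself combines MATLAB with the OpenSim API; purely as a language-agnostic illustration of the direct collocation transcription it relies on, the sketch below solves a toy minimum-effort problem for a double integrator in Python/SciPy, with states and controls at every time node treated as optimization variables and the dynamics imposed as defect constraints rather than by forward shooting.

```python
# Toy direct collocation transcription (SciPy, not the OpenSim/MATLAB
# framework above). The "musculoskeletal" model is just a double
# integrator x'' = u driven from rest at x=0 to rest at x=1.
import numpy as np
from scipy.optimize import minimize

N, T = 21, 1.0
dt = T / (N - 1)

def unpack(z):
    return z[:N], z[N:2*N], z[2*N:]          # positions, velocities, controls

def objective(z):                            # minimize control effort
    _, _, u = unpack(z)
    return dt * np.sum(u**2)

def defects(z):                              # trapezoidal dynamics constraints
    x, v, u = unpack(z)
    dx = x[1:] - x[:-1] - 0.5 * dt * (v[1:] + v[:-1])
    dv = v[1:] - v[:-1] - 0.5 * dt * (u[1:] + u[:-1])
    return np.concatenate([dx, dv])

def boundary(z):                             # start at rest at 0, end at rest at 1
    x, v, _ = unpack(z)
    return np.array([x[0], v[0], x[-1] - 1.0, v[-1]])

z0 = np.concatenate([np.linspace(0, 1, N), np.zeros(N), np.zeros(N)])
sol = minimize(objective, z0, method="SLSQP",
               constraints=[{"type": "eq", "fun": defects},
                            {"type": "eq", "fun": boundary}],
               options={"maxiter": 500})
x_opt, _, u_opt = unpack(sol.x)
print("converged:", sol.success, " final position:", round(x_opt[-1], 3))
```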
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Tianwu; Zaidi, Habib, E-mail: habib.zaidi@hcuge.ch; Geneva Neuroscience Center, Geneva University, Geneva CH-1205
The development of multimodality preclinical imaging techniques and the rapid growth of realistic computer simulation tools have promoted the construction and application of computational laboratory animal models in preclinical research. Since the early 1990s, over 120 realistic computational animal models have been reported in the literature and used as surrogates to characterize the anatomy of actual animals for the simulation of preclinical studies involving the use of bioluminescence tomography, fluorescence molecular tomography, positron emission tomography, single-photon emission computed tomography, microcomputed tomography, magnetic resonance imaging, and optical imaging. Other applications include electromagnetic field simulation, ionizing and nonionizing radiation dosimetry, and the development and evaluation of new methodologies for multimodality image coregistration, segmentation, and reconstruction of small animal images. This paper provides a comprehensive review of the history and fundamental technologies used for the development of computational small animal models, with a particular focus on their application in preclinical imaging as well as nonionizing and ionizing radiation dosimetry calculations. An overview is given of the overall process involved in the design of these models, including the fundamental elements used for the construction of different types of computational models, the identification of original anatomical data, the simulation tools used for solving various computational problems, and the applications of computational animal models in preclinical research. The authors also analyze the characteristics of the different categories of computational models (stylized, voxel-based, and boundary representation) and discuss the technical challenges faced at the present time as well as research needs in the future.
Organization and use of a Software/Hardware Avionics Research Program (SHARP)
NASA Technical Reports Server (NTRS)
Karmarkar, J. S.; Kareemi, M. N.
1975-01-01
The organization and use of the Software/Hardware Avionics Research Program (SHARP), developed to duplicate the automatic portion of the STOLAND simulator system on a general-purpose computer system (i.e., an IBM 360), are described. The program's uses are: (1) to conduct comparative evaluation studies of current and proposed airborne and ground system concepts via single-run or Monte Carlo simulation techniques, and (2) to provide a software tool for efficient algorithm evaluation and development for the STOLAND avionics computer.
Computational Materials: Modeling and Simulation of Nanostructured Materials and Systems
NASA Technical Reports Server (NTRS)
Gates, Thomas S.; Hinkley, Jeffrey A.
2003-01-01
The paper provides details on the structure and implementation of the Computational Materials program at the NASA Langley Research Center. Examples are given that illustrate the suggested approaches to predicting the behavior and influencing the design of nanostructured materials such as high-performance polymers, composites, and nanotube-reinforced polymers. Primary simulation and measurement methods applicable to multi-scale modeling are outlined. Key challenges including verification and validation of models are highlighted and discussed within the context of NASA's broad mission objectives.
A novel patient-specific model to compute coronary fractional flow reserve.
Kwon, Soon-Sung; Chung, Eui-Chul; Park, Jin-Seo; Kim, Gook-Tae; Kim, Jun-Woo; Kim, Keun-Hong; Shin, Eun-Seok; Shim, Eun Bo
2014-09-01
The fractional flow reserve (FFR) is a widely used clinical index to evaluate the functional severity of coronary stenosis. A computer simulation method based on patients' computed tomography (CT) data is a plausible non-invasive approach for computing the FFR. This method can provide a detailed solution for the stenosed coronary hemodynamics by coupling computational fluid dynamics (CFD) with the lumped parameter model (LPM) of the cardiovascular system. In this work, we have implemented a simple computational method to compute the FFR. As this method uses only coronary arteries for the CFD model and includes only the LPM of the coronary vascular system, it provides simpler boundary conditions for the coronary geometry and is computationally more efficient than existing approaches. To test the efficacy of this method, we simulated a three-dimensional straight vessel using CFD coupled with the LPM. The computed results were compared with those of the LPM. To validate this method in terms of clinically realistic geometry, a patient-specific model of stenosed coronary arteries was constructed from CT images, and the computed FFR was compared with clinically measured results. We evaluated the effect of a model aorta on the computed FFR and compared this with a model without the aorta. Computationally, the model without the aorta was more efficient than that with the aorta, reducing the CPU time required for computing a cardiac cycle to 43.4%. Copyright © 2014. Published by Elsevier Ltd.
Computational Aerothermodynamic Assessment of Space Shuttle Orbiter Tile Damage: Open Cavities
NASA Technical Reports Server (NTRS)
Pulsonetti, Maria; Wood, William
2005-01-01
Computational aerothermodynamic simulations of Orbiter windside tile damage in flight were performed in support of the Space Shuttle Return-to-Flight effort. The simulations were performed for both hypervelocity flight and low-enthalpy wind tunnel conditions and contributed to the Return-to-Flight program by providing information to support a variety of damage scenario analyses. Computations at flight conditions were performed at or very near the peak heating trajectory point for multiple damage scenarios involving damaged windside acreage reaction-cured glass (RCG) coated silica tile(s). The cavities formed by the missing tile examined in this study were relatively short, leading to flow features which indicated open cavity behavior. Results of the computations indicated elevated heating bump factor levels predicted for flight over the predictions for wind tunnel conditions. The peak heating bump factors, defined as the local heating relative to a reference value upstream of the cavity, on the cavity floor for the flight simulation were 67% larger than the peak wind tunnel simulation value. On the downstream face of the cavity the flight simulation values were 60% larger than the wind tunnel simulation values. On the outer mold line (OML) downstream of the cavity, the flight values were about 20% larger than the wind tunnel simulation values. The higher heating bump factors observed in the flight simulations were due to the larger driving potential in terms of energy entering the cavity for the flight simulations. This is evidenced by the larger rate of increase in total enthalpy through the boundary layer upstream of the cavity for the flight simulation.
Computational Fluid Dynamics (CFD) simulations provide a number of unique opportunities for expanding and improving capabilities for modeling exposures to environmental pollutants. The US Environmental Protection Agency's National Exposure Research Laboratory (NERL) has been c...
Quantitative evaluation of simulated functional brain networks in graph theoretical analysis.
Lee, Won Hee; Bullmore, Ed; Frangou, Sophia
2017-02-01
There is increasing interest in the potential of whole-brain computational models to provide mechanistic insights into resting-state brain networks. It is therefore important to determine the degree to which computational models reproduce the topological features of empirical functional brain networks. We used empirical connectivity data derived from diffusion spectrum and resting-state functional magnetic resonance imaging data from healthy individuals. Empirical and simulated functional networks, constrained by structural connectivity, were defined based on 66 brain anatomical regions (nodes). Simulated functional data were generated using the Kuramoto model in which each anatomical region acts as a phase oscillator. Network topology was studied using graph theory in the empirical and simulated data. The difference (relative error) between graph theory measures derived from empirical and simulated data was then estimated. We found that simulated data can be used with confidence to model graph measures of global network organization at different dynamic states and highlight the sensitive dependence of the solutions obtained in simulated data on the specified connection densities. This study provides a method for the quantitative evaluation and external validation of graph theory metrics derived from simulated data that can be used to inform future study designs. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
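As a rough illustration of the simulation approach described (not the authors' implementation), the sketch below integrates a small Kuramoto network coupled through a random "structural" matrix and derives a simulated functional-connectivity matrix from correlations of the resulting signals; the connectivity, natural frequencies, and coupling strength are placeholders.

```python
# Minimal Kuramoto-network sketch: each brain region is a phase oscillator
# coupled through a structural connectivity matrix, and the simulated
# "functional connectivity" is the correlation of regional signals.
import numpy as np

rng = np.random.default_rng(1)
n_nodes, dt, steps = 66, 1e-3, 20_000
C = rng.random((n_nodes, n_nodes)) * (rng.random((n_nodes, n_nodes)) < 0.2)
C = (C + C.T) / 2                        # symmetric structural connectivity
omega = 2 * np.pi * rng.normal(60.0, 1.0, n_nodes)   # natural freqs (rad/s)
K = 5.0                                  # global coupling strength

theta = rng.uniform(0, 2 * np.pi, n_nodes)
signals = np.empty((steps, n_nodes))
for t in range(steps):
    # Kuramoto update: dtheta_i/dt = omega_i + K * sum_j C_ij sin(theta_j - theta_i)
    coupling = (C * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    theta = theta + dt * (omega + K * coupling)
    signals[t] = np.sin(theta)           # stand-in for a regional signal

sim_fc = np.corrcoef(signals[steps // 2:].T)   # simulated functional connectivity
print(sim_fc.shape)                             # (66, 66)
```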
Algorithms for Haptic Rendering of 3D Objects
NASA Technical Reports Server (NTRS)
Basdogan, Cagatay; Ho, Chih-Hao; Srinavasan, Mandayam
2003-01-01
Algorithms have been developed to provide haptic rendering of three-dimensional (3D) objects in virtual (that is, computationally simulated) environments. The goal of haptic rendering is to generate tactual displays of the shapes, hardnesses, surface textures, and frictional properties of 3D objects in real time. Haptic rendering is a major element of the emerging field of computer haptics, which invites comparison with computer graphics. We have already seen various applications of computer haptics in the areas of medicine (surgical simulation, telemedicine, haptic user interfaces for blind people, and rehabilitation of patients with neurological disorders), entertainment (3D painting, character animation, morphing, and sculpting), mechanical design (path planning and assembly sequencing), and scientific visualization (geophysical data analysis and molecular manipulation).
Antonioletti, Mario; Biktashev, Vadim N; Jackson, Adrian; Kharche, Sanjay R; Stary, Tomas; Biktasheva, Irina V
2017-01-01
The BeatBox simulation environment combines a flexible scripting-language user interface with robust computational tools in order to set up cardiac electrophysiology in-silico experiments without low-level re-coding: cell excitation, tissue/anatomy models, and stimulation protocols may be included in a BeatBox script, and the simulation run either sequentially or in parallel (MPI) without re-compilation. BeatBox is free software written in the C language to be run on a Unix-based platform. It provides the whole spectrum of multi-scale tissue modelling, from 0-dimensional individual-cell simulation through 1-dimensional fibre, 2-dimensional sheet and 3-dimensional slab of tissue, up to anatomically realistic whole-heart simulations, with run-time measurements including cardiac re-entry tip/filament tracing, ECG, local/global samples of any variables, etc. BeatBox solvers and the cell and tissue/anatomy model repositories are extended via robust and flexible interfaces, thus providing an open framework for new developments in the field. In this paper we give an overview of the current state of BeatBox, together with a description of the main computational methods and MPI parallelisation approaches.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sovinec, Carl
The objective of the Plasma Science and Innovation Center (PSI-Center) is to develop and deploy computational models that simulate conditions in smaller, concept-exploration plasma experiments. The PSIC group at the University of Wisconsin-Madison, led by Prof. Carl Sovinec, uses and enhances the Non-Ideal Magnetohydrodynamics with Rotation, Open Discussion (NIMROD) code to simulate macroscopic plasma dynamics in a number of magnetic confinement configurations. These numerical simulations provide information on how magnetic fields and plasma flows evolve over all three spatial dimensions, which supplements the limited access of diagnostics in plasma experiments. The information gained from simulation helps explain how plasma evolves. It is also used to engineer more effective plasma confinement systems, reducing the need for building many experiments to cover the physical parameter space. The ultimate benefit is a more cost-effective approach to the development of fusion energy for peaceful power production. The supplemental funds provided by the American Recovery and Reinvestment Act of 2009 were used to purchase computer components that were assembled into a 48-core system with 256 GB of shared memory. The system was engineered and constructed by the group's system administrator at the time, Anthony Hammond. It was successfully used by then-graduate student Dr. John O'Bryan for computing magnetic relaxation dynamics that occur during experimental tests of non-inductive startup in the Pegasus Toroidal Experiment (pegasus.ep.wisc.edu). Dr. O'Bryan's simulations provided the first detailed explanation of how the driven helical filament of electrical current evolves into a toroidal tokamak-like plasma configuration.
Dynamic VMs placement for energy efficiency by PSO in cloud computing
NASA Astrophysics Data System (ADS)
Dashti, Seyed Ebrahim; Rahmani, Amir Masoud
2016-03-01
Recently, cloud computing has been growing fast and helping to realise other high technologies. In this paper, we propose a hierarchical architecture to satisfy both providers' and consumers' requirements in these technologies. We design a new service in the PaaS layer for scheduling consumer tasks. From the providers' perspective, incompatibility between the specifications of physical machines and user requests in the cloud leads to problems such as the energy-performance trade-off and large power consumption, so that profits are decreased. To guarantee the quality of service of users' tasks and improve energy efficiency, we propose a modified Particle Swarm Optimisation to reallocate migrated virtual machines from overloaded hosts. We also dynamically consolidate under-loaded hosts, which provides power savings. Simulation results in CloudSim demonstrate that, under conditions close to a real environment, our method saves as much as 14% more energy, and the number of migrations and the simulation time are significantly reduced compared with previous works.
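The paper's modified PSO and its VM-placement encoding are not detailed in the abstract; the sketch below shows only a generic particle swarm optimisation loop applied to a toy host-energy cost, so the power model, demand constraint, and PSO coefficients are all illustrative assumptions.

```python
# Generic particle swarm optimisation sketch (not the authors' modified
# variant) minimising a toy "energy cost" of assigning load fractions to hosts.
import numpy as np

rng = np.random.default_rng(0)
n_hosts, n_particles, iters = 8, 30, 200
idle, peak = 100.0, 250.0                 # toy host power model (watts)

def energy_cost(x):
    """x: load fraction per host in [0, 1]; linear idle-to-peak power."""
    load = np.clip(x, 0, 1)
    penalty = 1e3 * abs(load.sum() - 4.0)           # total demand = 4 hosts' worth
    return np.sum(idle * (load > 0) + (peak - idle) * load) + penalty

pos = rng.random((n_particles, n_hosts))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([energy_cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

w, c1, c2 = 0.7, 1.5, 1.5                 # inertia and acceleration coefficients
for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([energy_cost(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("best energy cost:", round(pbest_val.min(), 1))
```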
Multidimensional computer simulation of Stirling cycle engines
NASA Technical Reports Server (NTRS)
Hall, C. A.; Porsching, T. A.; Medley, J.; Tew, R. C.
1990-01-01
The computer code ALGAE (algorithms for the gas equations) treats incompressible, thermally expandable, or locally compressible flows in complicated two-dimensional flow regions. The solution method, finite differencing schemes, and basic modeling of the field equations in ALGAE are applicable to engineering design settings of the type found in Stirling cycle engines. The use of ALGAE to model multiple components of the space power research engine (SPRE) is reported. Videotape computer simulations of the transient behavior of the working gas (helium) in the heater-regenerator-cooler complex of the SPRE demonstrate the usefulness of such a program in providing information on thermal and hydraulic phenomena in multiple component sections of the SPRE.
Particle Hydrodynamics with Material Strength for Multi-Layer Orbital Debris Shield Design
NASA Technical Reports Server (NTRS)
Fahrenthold, Eric P.
1999-01-01
Three dimensional simulation of oblique hypervelocity impact on orbital debris shielding places extreme demands on computer resources. Research to date has shown that particle models provide the most accurate and efficient means for computer simulation of shield design problems. In order to employ a particle based modeling approach to the wall plate impact portion of the shield design problem, it is essential that particle codes be augmented to represent strength effects. This report describes augmentation of a Lagrangian particle hydrodynamics code developed by the principal investigator, to include strength effects, allowing for the entire shield impact problem to be represented using a single computer code.
Simulating motivated cognition
NASA Technical Reports Server (NTRS)
Gevarter, William B.
1991-01-01
A research effort to develop a sophisticated computer model of human behavior is described. A computer framework of motivated cognition was developed. Motivated cognition focuses on the motivations or affects that provide the context and drive in human cognition and decision making. A conceptual architecture of the human decision-making approach, from the perspective of information processing in the human brain, is developed in diagrammatic form. A preliminary version of such a diagram is presented. This architecture is then used as a vehicle for successfully constructing a computer program simulating Dweck and Leggett's findings that relate how an individual's implicit theories orient them toward particular goals, with resultant cognitions, affects, and behavior.
PHREEQCI; a graphical user interface for the geochemical computer program PHREEQC
Charlton, Scott R.; Macklin, Clifford L.; Parkhurst, David L.
1997-01-01
PhreeqcI is a Windows-based graphical user interface for the geochemical computer program PHREEQC. PhreeqcI provides the capability to generate and edit input data files, run simulations, and view text files containing simulation results, all within the framework of a single interface. PHREEQC is a multipurpose geochemical program that can perform speciation, inverse, reaction-path, and 1D advective reaction-transport modeling. Interactive access to all of the capabilities of PHREEQC is available with PhreeqcI. The interface is written in Visual Basic and will run on personal computers under the Windows 3.1, Windows 95, and Windows NT operating systems.
Two-dimensional nonsteady viscous flow simulation on the Navier-Stokes computer miniNode
NASA Technical Reports Server (NTRS)
Nosenchuck, Daniel M.; Littman, Michael G.; Flannery, William
1986-01-01
The needs of large-scale scientific computation are outpacing the growth in performance of mainframe supercomputers. In particular, problems in fluid mechanics involving complex flow simulations require far more speed and capacity than that provided by current and proposed Class VI supercomputers. To address this concern, the Navier-Stokes Computer (NSC) was developed. The NSC is a parallel-processing machine, comprised of individual Nodes, each comparable in performance to current supercomputers. The global architecture is that of a hypercube, and a 128-Node NSC has been designed. New architectural features, such as a reconfigurable many-function ALU pipeline and a multifunction memory-ALU switch, have provided the capability to efficiently implement a wide range of algorithms. Efficient algorithms typically involve numerically intensive tasks, which often include conditional operations. These operations may be efficiently implemented on the NSC without, in general, sacrificing vector-processing speed. To illustrate the architecture, programming, and several of the capabilities of the NSC, the simulation of two-dimensional, nonsteady viscous flows on a prototype Node, called the miniNode, is presented.
Shao, Yu; Wang, Shumin
2016-12-01
The numerical simulation of acoustic scattering from elastic objects near a water-sand interface is critical to underwater target identification. Frequency-domain methods are computationally expensive, especially for large-scale broadband problems. A numerical technique is proposed to enable the efficient use of finite-difference time-domain method for broadband simulations. By incorporating a total-field/scattered-field boundary, the simulation domain is restricted inside a tightly bounded region. The incident field is further synthesized by the Fourier transform for both subcritical and supercritical incidences. Finally, the scattered far field is computed using a half-space Green's function. Numerical examples are further provided to demonstrate the accuracy and efficiency of the proposed technique.
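One element mentioned above, synthesizing the incident field via the Fourier transform, can be illustrated in a few lines: a broadband incident pulse is built from a prescribed spectrum with an inverse FFT so that a single time-domain run covers the whole band. The band limits and sampling below are arbitrary examples, not values from the paper.

```python
# Building a broadband incident waveform from a prescribed spectrum; the
# waveform would drive the total-field/scattered-field boundary of a
# time-domain run, and per-frequency results are later recovered by
# Fourier-transforming recorded time series. All numbers are illustrative.
import numpy as np

fs = 200_000.0                         # sample rate, Hz
n = 4096
freqs = np.fft.rfftfreq(n, d=1/fs)

# Prescribe a flat band between 5 and 50 kHz with smooth (Gaussian) edges.
f_lo, f_hi, roll = 5e3, 50e3, 2e3
spectrum = np.where(freqs < f_lo, np.exp(-((freqs - f_lo)/roll)**2),
           np.where(freqs > f_hi, np.exp(-((freqs - f_hi)/roll)**2), 1.0))

# Time-domain incident pulse; a linear phase term delays it into the window.
delay = 0.002                          # seconds
incident = np.fft.irfft(spectrum * np.exp(-2j*np.pi*freqs*delay), n)

# After the run, a recorded pressure trace p(t) would give the response at
# any frequency in the band via np.fft.rfft(p) / np.fft.rfft(incident).
print(incident.shape, freqs[np.argmax(spectrum)])
```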
Implementation of the force decomposition machine for molecular dynamics simulations.
Borštnik, Urban; Miller, Benjamin T; Brooks, Bernard R; Janežič, Dušanka
2012-09-01
We present the design and implementation of the force decomposition machine (FDM), a cluster of personal computers (PCs) that is tailored to running molecular dynamics (MD) simulations using the distributed diagonal force decomposition (DDFD) parallelization method. The cluster interconnect architecture is optimized for the communication pattern of the DDFD method. Our implementation of the FDM relies on standard commodity components even for networking. Although the cluster is meant for DDFD MD simulations, it remains general enough for other parallel computations. An analysis of several MD simulation runs on both the FDM and a standard PC cluster demonstrates that the FDM's interconnect architecture provides a greater performance compared to a more general cluster interconnect. Copyright © 2012 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Slaughter, A. E.; Permann, C.; Peterson, J. W.; Gaston, D.; Andrs, D.; Miller, J.
2014-12-01
The Idaho National Laboratory (INL)-developed Multiphysics Object Oriented Simulation Environment (MOOSE; www.mooseframework.org), is an open-source, parallel computational framework for enabling the solution of complex, fully implicit multiphysics systems. MOOSE provides a set of computational tools that scientists and engineers can use to create sophisticated multiphysics simulations. Applications built using MOOSE have computed solutions for chemical reaction and transport equations, computational fluid dynamics, solid mechanics, heat conduction, mesoscale materials modeling, geomechanics, and others. To facilitate the coupling of diverse and highly-coupled physical systems, MOOSE employs the Jacobian-free Newton-Krylov (JFNK) method when solving the coupled nonlinear systems of equations arising in multiphysics applications. The MOOSE framework is written in C++, and leverages other high-quality, open-source scientific software packages such as LibMesh, Hypre, and PETSc. MOOSE uses a "hybrid parallel" model which combines both shared memory (thread-based) and distributed memory (MPI-based) parallelism to ensure efficient resource utilization on a wide range of computational hardware. MOOSE-based applications are inherently modular, which allows for simulation expansion (via coupling of additional physics modules) and the creation of multi-scale simulations. Any application developed with MOOSE supports running (in parallel) any other MOOSE-based application. Each application can be developed independently, yet easily communicate with other applications (e.g., conductivity in a slope-scale model could be a constant input, or a complete phase-field micro-structure simulation) without additional code being written. This method of development has proven effective at INL and expedites the development of sophisticated, sustainable, and collaborative simulation tools.
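As a tiny illustration of the Jacobian-free Newton-Krylov strategy mentioned above (not MOOSE itself), the sketch below uses SciPy's newton_krylov to solve a small nonlinear reaction-diffusion residual without ever forming a Jacobian; MOOSE applies the same idea to much larger multiphysics residuals.

```python
# Jacobian-free Newton-Krylov on a toy 1D nonlinear residual.
import numpy as np
from scipy.optimize import newton_krylov

n = 100
h = 1.0 / (n + 1)

def residual(u):
    """F(u) = -u'' + u**3 - 1 = 0 with u(0) = u(1) = 0 (finite differences)."""
    f = np.empty_like(u)
    f[0]    = (2*u[0] - u[1]) / h**2 + u[0]**3 - 1.0
    f[1:-1] = (2*u[1:-1] - u[:-2] - u[2:]) / h**2 + u[1:-1]**3 - 1.0
    f[-1]   = (2*u[-1] - u[-2]) / h**2 + u[-1]**3 - 1.0
    return f

# Newton's method with a Krylov linear solver: the Jacobian is never formed,
# only its action on vectors is approximated from residual evaluations.
u = newton_krylov(residual, np.zeros(n), f_tol=1e-8)
print("max |F(u)| =", np.abs(residual(u)).max())
```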
NASA Astrophysics Data System (ADS)
Fang, Ye; Feng, Sheng; Tam, Ka-Ming; Yun, Zhifeng; Moreno, Juana; Ramanujam, J.; Jarrell, Mark
2014-10-01
Monte Carlo simulations of the Ising model play an important role in the field of computational statistical physics, and they have revealed many properties of the model over the past few decades. However, the effect of frustration due to random disorder, in particular the possible spin glass phase, remains a crucial but poorly understood problem. One of the obstacles in the Monte Carlo simulation of random frustrated systems is their long relaxation time making an efficient parallel implementation on state-of-the-art computation platforms highly desirable. The Graphics Processing Unit (GPU) is such a platform that provides an opportunity to significantly enhance the computational performance and thus gain new insight into this problem. In this paper, we present optimization and tuning approaches for the CUDA implementation of the spin glass simulation on GPUs. We discuss the integration of various design alternatives, such as GPU kernel construction with minimal communication, memory tiling, and look-up tables. We present a binary data format, Compact Asynchronous Multispin Coding (CAMSC), which provides an additional 28.4% speedup compared with the traditionally used Asynchronous Multispin Coding (AMSC). Our overall design sustains a performance of 33.5 ps per spin flip attempt for simulating the three-dimensional Edwards-Anderson model with parallel tempering, which significantly improves the performance over existing GPU implementations.
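For reference, the sketch below shows what a single Metropolis sweep of the three-dimensional Edwards-Anderson model computes; the GPU code described above performs the same physics but with multispin coding, parallel tempering, and far heavier optimisation. The lattice size and temperature are toy values.

```python
# One Metropolis sweep of the 3D Edwards-Anderson spin glass (toy scale).
import numpy as np

rng = np.random.default_rng(0)
L, T = 8, 1.0
spins = rng.choice([-1, 1], size=(L, L, L))
# Quenched random couplings on the +x, +y, +z bonds of each site (J = +/-1).
J = rng.choice([-1, 1], size=(3, L, L, L))

def local_field(s, J, x, y, z):
    """Sum of J_ij * s_j over the six neighbours (periodic boundaries)."""
    h = 0.0
    for axis, (dx, dy, dz) in enumerate([(1, 0, 0), (0, 1, 0), (0, 0, 1)]):
        h += J[axis, x, y, z] * s[(x+dx) % L, (y+dy) % L, (z+dz) % L]
        h += J[axis, (x-dx) % L, (y-dy) % L, (z-dz) % L] * \
             s[(x-dx) % L, (y-dy) % L, (z-dz) % L]
    return h

def metropolis_sweep(s):
    for x in range(L):
        for y in range(L):
            for z in range(L):
                dE = 2 * s[x, y, z] * local_field(s, J, x, y, z)
                if dE <= 0 or rng.random() < np.exp(-dE / T):
                    s[x, y, z] *= -1

metropolis_sweep(spins)
print("energy per spin:", -sum(
    (J[a] * spins * np.roll(spins, -1, axis=a)).sum() for a in range(3)) / L**3)
```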
The Structure and Properties of Silica Glass Nanostructures using Novel Computational Systems
NASA Astrophysics Data System (ADS)
Doblack, Benjamin N.
The structure and properties of silica glass nanostructures are examined using computational methods in this work. Standard synthesis methods of silica and its associated material properties are first discussed in brief. A review of prior experiments on this amorphous material is also presented. Background and methodology for the simulation of mechanical tests on amorphous bulk silica and nanostructures are later presented. A new computational system for the accurate and fast simulation of silica glass is also presented, using an appropriate interatomic potential for this material within the open-source molecular dynamics computer program LAMMPS. This alternative computational method uses modern graphics processors, Nvidia CUDA technology and specialized scientific codes to overcome processing speed barriers common to traditional computing methods. In conjunction with a virtual reality system used to model select materials, this enhancement allows the addition of accelerated molecular dynamics simulation capability. The motivation is to provide a novel research environment which simultaneously allows visualization, simulation, modeling and analysis. The research goal of this project is to investigate the structure and size dependent mechanical properties of silica glass nanohelical structures under tensile MD conditions using the innovative computational system. Specifically, silica nanoribbons and nanosprings are evaluated which revealed unique size dependent elastic moduli when compared to the bulk material. For the nanoribbons, the tensile behavior differed widely between the models simulated, with distinct characteristic extended elastic regions. In the case of the nanosprings simulated, more clear trends are observed. In particular, larger nanospring wire cross-sectional radii (r) lead to larger Young's moduli, while larger helical diameters (2R) resulted in smaller Young's moduli. Structural transformations and theoretical models are also analyzed to identify possible factors which might affect the mechanical response of silica nanostructures under tension. The work presented outlines an innovative simulation methodology, and discusses how results can be validated against prior experimental and simulation findings. The ultimate goal is to develop new computational methods for the study of nanostructures which will make the field of materials science more accessible, cost effective and efficient.
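Independent of the LAMMPS/GPU pipeline described above, the post-processing step of extracting a Young's modulus from a simulated tensile test can be sketched as a linear fit to the small-strain portion of a stress-strain record; the synthetic data below merely stand in for actual simulation output.

```python
# Estimating a Young's modulus from the initial linear part of a tensile
# stress-strain record. The synthetic curve stands in for output of a
# nanoribbon or nanospring tension run; all values are illustrative.
import numpy as np

strain = np.linspace(0.0, 0.10, 101)
stress_gpa = (70.0 * strain - 150.0 * strain**2
              + np.random.default_rng(2).normal(0, 0.05, strain.size))

elastic = strain <= 0.02                     # fit only the small-strain region
youngs_modulus = np.polyfit(strain[elastic], stress_gpa[elastic], 1)[0]
print(f"Young's modulus ~ {youngs_modulus:.1f} GPa")
```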
Utilization of a CRT display light pen in the design of feedback control systems
NASA Technical Reports Server (NTRS)
Thompson, J. G.; Young, K. R.
1972-01-01
A hierarchical structure of the interlinked programs was developed to provide a flexible computer-aided design tool. A graphical input technique and a data structure are considered which provide the capability of entering the control system model description into the computer in block diagram form. An information storage and retrieval system was developed to keep track of the system description, and analysis and simulation results, and to provide them to the correct routines for further manipulation or display. Error analysis and diagnostic capabilities are discussed, and a technique was developed to reduce a transfer function to a set of nested integrals suitable for digital simulation. A general, automated block diagram reduction procedure was set up to prepare the system description for the analysis routines.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michael Pernice
2010-09-01
INL has agreed to provide participants in the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program with access to its high performance computing (HPC) resources under sponsorship of the Enabling Computational Technologies (ECT) program element. This report documents the process used to select applications and the software stack in place at INL.
A Communications Modeling System for Swarm-Based Sensors
2003-09-01
...detection and monitoring, and advances in computational capabilities have provided for embedded data analysis and the generation of information from raw... computing and manufacturing technology have made such systems possible. In order to harness this potential for Air Force applications, a method of
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lingerfelt, Eric J; Endeve, Eirik; Hui, Yawei
Improvements in scientific instrumentation allow imaging at mesoscopic to atomic length scales, many spectroscopic modes, and now--with the rise of multimodal acquisition systems and the associated processing capability--the era of multidimensional, informationally dense data sets has arrived. Technical issues in these combinatorial scientific fields are exacerbated by computational challenges best summarized as a necessity for drastic improvement in the capability to transfer, store, and analyze large volumes of data. The Bellerophon Environment for Analysis of Materials (BEAM) platform provides material scientists the capability to directly leverage the integrated computational and analytical power of High Performance Computing (HPC) to perform scalable data analysis and simulation and manage uploaded data files via an intuitive, cross-platform client user interface. This framework delivers authenticated, "push-button" execution of complex user workflows that deploy data analysis algorithms and computational simulations utilizing compute-and-data cloud infrastructures and HPC environments like Titan at the Oak Ridge Leadership Computing Facility (OLCF).
Visualization and processing of computed solid-state NMR parameters: MagresView and MagresPython.
Sturniolo, Simone; Green, Timothy F G; Hanson, Robert M; Zilka, Miri; Refson, Keith; Hodgkinson, Paul; Brown, Steven P; Yates, Jonathan R
2016-09-01
We introduce two open-source tools to aid the processing and visualisation of ab initio computed solid-state NMR parameters. Both tools support the Magres file format for computed NMR parameters (as implemented in CASTEP v8.0 and QuantumEspresso v5.0.0). MagresView is built upon the widely used Jmol crystal viewer, and provides an intuitive environment to display computed NMR parameters. It can provide simple pictorial representation of one- and two-dimensional NMR spectra as well as output a selected spin-system for exact simulations with dedicated spin-dynamics software. MagresPython provides a simple scripting environment to manipulate large numbers of computed NMR parameters to search for structural correlations.
A Virtual Look at Epstein–Barr Virus Infection: Biological Interpretations
Delgado-Eckert, Edgar; Hadinoto, Vey; Jarrah, Abdul S; Laubenbacher, Reinhard; Lee, Kichol; Luzuriaga, Katherine; Polys, Nicholas F; Thorley-Lawson, David A
2007-01-01
The possibility of using computer simulation and mathematical modeling to gain insight into biological and other complex systems is receiving increased attention. However, it is as yet unclear to what extent these techniques will provide useful biological insights or even what the best approach is. Epstein–Barr virus (EBV) provides a good candidate to address these issues. It persistently infects most humans and is associated with several important diseases. In addition, a detailed biological model has been developed that provides an intricate understanding of EBV infection in the naturally infected human host and accounts for most of the virus' diverse and peculiar properties. We have developed an agent-based computer model/simulation (PathSim, Pathogen Simulation) of this biological model. The simulation is performed on a virtual grid that represents the anatomy of the tonsils of the nasopharyngeal cavity (Waldeyer ring) and the peripheral circulation—the sites of EBV infection and persistence. The simulation is presented via a user friendly visual interface and reproduces quantitative and qualitative aspects of acute and persistent EBV infection. The simulation also had predictive power in validation experiments involving certain aspects of viral infection dynamics. Moreover, it allows us to identify switch points in the infection process that direct the disease course towards the end points of persistence, clearance, or death. Lastly, we were able to identify parameter sets that reproduced aspects of EBV-associated diseases. These investigations indicate that such simulations, combined with laboratory and clinical studies and animal models, will provide a powerful approach to investigating and controlling EBV infection, including the design of targeted anti-viral therapies. PMID:17953479
A Machine Learning Method for the Prediction of Receptor Activation in the Simulation of Synapses
Montes, Jesus; Gomez, Elena; Merchán-Pérez, Angel; DeFelipe, Javier; Peña, Jose-Maria
2013-01-01
Chemical synaptic transmission involves the release of a neurotransmitter that diffuses in the extracellular space and interacts with specific receptors located on the postsynaptic membrane. Computer simulation approaches provide fundamental tools for exploring various aspects of the synaptic transmission under different conditions. In particular, Monte Carlo methods can track the stochastic movements of neurotransmitter molecules and their interactions with other discrete molecules, the receptors. However, these methods are computationally expensive, even when used with simplified models, preventing their use in large-scale and multi-scale simulations of complex neuronal systems that may involve large numbers of synaptic connections. We have developed a machine-learning based method that can accurately predict relevant aspects of the behavior of synapses, such as the percentage of open synaptic receptors as a function of time since the release of the neurotransmitter, with considerably lower computational cost compared with the conventional Monte Carlo alternative. The method is designed to learn patterns and general principles from a corpus of previously generated Monte Carlo simulations of synapses covering a wide range of structural and functional characteristics. These patterns are later used as a predictive model of the behavior of synapses under different conditions without the need for additional computationally expensive Monte Carlo simulations. This is performed in five stages: data sampling, fold creation, machine learning, validation and curve fitting. The resulting procedure is accurate, automatic, and it is general enough to predict synapse behavior under experimental conditions that are different to the ones it has been trained on. Since our method efficiently reproduces the results that can be obtained with Monte Carlo simulations at a considerably lower computational cost, it is suitable for the simulation of high numbers of synapses and it is therefore an excellent tool for multi-scale simulations. PMID:23894367
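As a rough illustration of the surrogate-modeling idea (train on a corpus of Monte Carlo results, then predict receptor activation cheaply for new conditions), the sketch below fits a scikit-learn regressor to synthetic data standing in for Monte Carlo output. The toy generative model, the "cleft width" parameter, and the choice of a random forest are assumptions for illustration only; they are not the authors' pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a corpus of Monte Carlo synapse simulations:
# each sample is (time since release, cleft width) -> fraction of open receptors.
rng = np.random.default_rng(1)
n = 5000
t = rng.uniform(0.0, 5.0, n)          # ms since neurotransmitter release
w = rng.uniform(15.0, 40.0, n)        # nm, synaptic cleft width (toy parameter)
tau = 0.5 + 0.02 * w                  # slower activation for wider clefts (toy model)
open_frac = (t / tau) * np.exp(1.0 - t / tau)   # alpha-function-like response
open_frac += rng.normal(0.0, 0.01, n)           # stand-in for Monte Carlo noise

X = np.column_stack([t, w])
model = RandomForestRegressor(n_estimators=100, random_state=0)

# Cross-validation plays the role of the "fold creation / validation" stages.
scores = cross_val_score(model, X, open_frac, cv=5, scoring="r2")
print("cross-validated R^2:", scores.mean().round(3))

# Fit on the full corpus, then query an unseen condition without new Monte Carlo runs.
model.fit(X, open_frac)
print("predicted open fraction at t=1.2 ms, w=25 nm:",
      model.predict([[1.2, 25.0]])[0].round(3))
```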
Stereoscopic, Force-Feedback Trainer For Telerobot Operators
NASA Technical Reports Server (NTRS)
Kim, Won S.; Schenker, Paul S.; Bejczy, Antal K.
1994-01-01
Computer-controlled simulator for training technicians to operate remote robots provides both visual and kinesthetic virtual reality. Used during initial stage of training; saves time and expense, increases operational safety, and prevents damage to robots by inexperienced operators. Computes virtual contact forces and torques of compliant robot in real time, providing operator with feel of forces experienced by manipulator as well as view in any of three modes: single view, two split views, or stereoscopic view. From keyboard, user specifies force-reflection gain and stiffness of manipulator hand for three translational and three rotational axes. System offers two simulated telerobotic tasks: insertion of peg in hole in three dimensions, and removal and insertion of drawer.
NAS technical summaries. Numerical aerodynamic simulation program, March 1992 - February 1993
NASA Technical Reports Server (NTRS)
1994-01-01
NASA created the Numerical Aerodynamic Simulation (NAS) Program in 1987 to focus resources on solving critical problems in aeroscience and related disciplines by utilizing the power of the most advanced supercomputers available. The NAS Program provides scientists with the necessary computing power to solve today's most demanding computational fluid dynamics problems and serves as a pathfinder in integrating leading-edge supercomputing technologies, thus benefitting other supercomputer centers in government and industry. The 1992-93 operational year concluded with 399 high-speed processor projects and 91 parallel projects representing NASA, the Department of Defense, other government agencies, private industry, and universities. This document provides a glimpse at some of the significant scientific results for the year.
PyFly: A fast, portable aerodynamics simulator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garcia, Daniel; Ghommem, M.; Collier, Nathaniel O.
Here, we present a fast, user-friendly implementation of a potential flow solver based on the unsteady vortex lattice method (UVLM), namely, PyFly. UVLM computes the aerodynamic loads applied on lifting surfaces while capturing the unsteady effects such as the added mass forces, the growth of bound circulation, and the wake while assuming that the flow separation location is known a priori. This method is based on discretizing the body surface into a lattice of vortex rings and relies on the Biot–Savart law to construct the velocity field at every point in the simulated domain. We introduce the pointwise approximation approach to simulate the interactions of the far-field vortices to overcome the computational burden associated with the classical implementation of UVLM. The computational framework uses the Python programming language to provide an easy-to-handle user interface while the computational kernels are written in Fortran. The mixed language approach enables high performance regarding solution time and great flexibility concerning ease of code adaptation to different system configurations and applications. The computational tool predicts the unsteady aerodynamic behavior of multiple moving bodies (e.g., flapping wings, rotating blades, suspension bridges) subject to incoming air. The aerodynamic simulator can also deal with enclosure effects, multi-body interactions, and B-spline representation of body shapes. Finally, we simulate different aerodynamic problems to illustrate the usefulness and effectiveness of PyFly.
PyFly: A fast, portable aerodynamics simulator
Garcia, Daniel; Ghommem, M.; Collier, Nathaniel O.; ...
2018-03-14
Here, we present a fast, user-friendly implementation of a potential flow solver based on the unsteady vortex lattice method (UVLM), namely, PyFly. UVLM computes the aerodynamic loads applied on lifting surfaces while capturing the unsteady effects such as the added mass forces, the growth of bound circulation, and the wake while assuming that the flow separation location is known a priori. This method is based on discretizing the body surface into a lattice of vortex rings and relies on the Biot–Savart law to construct the velocity field at every point in the simulated domain. We introduce the pointwise approximation approach to simulate the interactions of the far-field vortices to overcome the computational burden associated with the classical implementation of UVLM. The computational framework uses the Python programming language to provide an easy-to-handle user interface while the computational kernels are written in Fortran. The mixed language approach enables high performance regarding solution time and great flexibility concerning ease of code adaptation to different system configurations and applications. The computational tool predicts the unsteady aerodynamic behavior of multiple moving bodies (e.g., flapping wings, rotating blades, suspension bridges) subject to incoming air. The aerodynamic simulator can also deal with enclosure effects, multi-body interactions, and B-spline representation of body shapes. Finally, we simulate different aerodynamic problems to illustrate the usefulness and effectiveness of PyFly.
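PyFly's source is not reproduced here, but the kernel of any UVLM code, the Biot–Savart induced velocity of a straight vortex segment, is compact enough to sketch in Python. The formula is the standard finite-segment form of the Biot–Savart law; the square vortex ring, evaluation point, and unit circulation below are illustrative choices unrelated to PyFly's actual implementation.

```python
import numpy as np

def segment_induced_velocity(p, a, b, gamma=1.0, eps=1e-9):
    """Velocity induced at point p by a straight vortex filament from a to b
    carrying circulation gamma (finite-segment Biot-Savart law)."""
    r1, r2 = p - a, p - b
    cross = np.cross(r1, r2)
    cross_sq = np.dot(cross, cross)
    if cross_sq < eps:                      # p lies (nearly) on the filament axis
        return np.zeros(3)
    r0 = b - a
    k = np.dot(r0, r1 / np.linalg.norm(r1) - r2 / np.linalg.norm(r2))
    return gamma / (4.0 * np.pi) * k * cross / cross_sq

# A single square vortex ring in the z = 0 plane, discretized as four segments.
corners = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=float)
segments = list(zip(corners, np.roll(corners, -1, axis=0)))

# Induced velocity at a point above the ring center; its sign along z depends on
# the circulation sense implied by the corner ordering.
p = np.array([0.5, 0.5, 0.5])
v = sum(segment_induced_velocity(p, a, b, gamma=1.0) for a, b in segments)
print("induced velocity at", p, "->", v.round(4))
```

In a full UVLM solver this kernel is evaluated for every ring-point pair, which is exactly the cost the pointwise far-field approximation described above is meant to reduce.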
Analyzing Conflict Dynamics with the Aid of an Interactive Microworld Simulator of a Fishing Dispute
ERIC Educational Resources Information Center
Kuperman, Ranan D.
2010-01-01
This article presents findings from a research project that uses an interactive simulator of an imaginary fishing dispute. Subjects operating the simulator play the role of a state leader, while the computer program controls the behavior of a contending state as well as provides all the environmental data associated with the conflict. The…
A Summary of Proceedings for the Advanced Deployable Day/Night Simulation Symposium
2009-07-01
The Advanced Deployable Day/Night Simulation (ADDNS) Technology Demonstration Project was initiated to design, develop, and deliver transportable visual simulations that jointly provide night-vision and high-resolution daylight capability. ...was Dr. Richard Wildes (York University); Mr. Vitaly Zholudev (Department of Computer Science, York University), Mr. X. Zhu (Neptec Design Group), and...
Initialization of high resolution surface wind simulations using NWS gridded data
J. Forthofer; K. Shannon; Bret Butler
2010-01-01
WindNinja is a standalone computer model designed to provide the user with simulations of surface wind flow. It is deterministic and steady state. It is currently being modified to allow the user to initialize the flow calculation using the National Digital Forecast Database. It essentially allows the user to downscale the coarse-scale simulations from meso-scale models to...
A users evaluation of SAMIS. [Solar Array Manufacturing Industry Simulation
NASA Technical Reports Server (NTRS)
Grenon, L. A.; Coleman, M. G.
1981-01-01
SAMIS, the Solar Array Manufacturing Industry Simulation computer program, was developed by the Jet Propulsion Laboratory (JPL) to provide a method whereby manufacturers or potential manufacturers of photovoltaics could simulate a solar industry using their own particular approach. This paper analyzes the usefulness of SAMIS to a growing photovoltaic industry and clearly illustrates its limitations as viewed by an industrial user.
Gorshkov, Anton V; Kirillin, Mikhail Yu
2015-08-01
Over the past two decades, the Monte Carlo technique has become a gold standard in simulation of light propagation in turbid media, including biotissues. Technological solutions provide further advances of this technique. The Intel Xeon Phi coprocessor is a new type of accelerator for highly parallel general purpose computing, which allows execution of a wide range of applications without substantial code modification. We present a technical approach to porting our previously developed Monte Carlo (MC) code for simulation of light transport in tissues to the Intel Xeon Phi coprocessor. We show that employing the accelerator allows reducing the computational time of MC simulations and obtaining a speed-up comparable to that of a GPU. We demonstrate the performance of the developed code for simulation of light transport in the human head and determination of the measurement volume in near-infrared spectroscopy brain sensing.
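For orientation, the core of such a Monte Carlo photon-transport code is a random walk in which free-path lengths are sampled from an exponential distribution and packets are scattered or absorbed according to the single-scattering albedo. The minimal Python sketch below illustrates only that kernel, assuming an infinite homogeneous medium and isotropic scattering; it is not the authors' code and says nothing about the Xeon Phi port itself.

```python
import numpy as np

def photon_random_walk(n_photons=5_000, mu_s=10.0, mu_a=0.1, seed=0):
    """Toy Monte Carlo of photon packets in an infinite turbid medium.

    mu_s, mu_a : scattering and absorption coefficients (1/mm)
    Returns the mean depth at which packets are absorbed (mm).
    Isotropic scattering, no boundaries, and no anisotropy factor are assumed,
    so this only illustrates the sampling scheme, not a tissue model.
    """
    rng = np.random.default_rng(seed)
    mu_t = mu_s + mu_a
    albedo = mu_s / mu_t
    depths = np.empty(n_photons)
    for i in range(n_photons):
        z = 0.0
        direction = np.array([0.0, 0.0, 1.0])   # launched downward along +z
        while True:
            step = -np.log(rng.random()) / mu_t  # free path ~ Exp(mu_t)
            z += step * direction[2]
            if rng.random() > albedo:            # packet absorbed at this site
                depths[i] = z
                break
            # isotropic scattering: draw a new direction uniformly on the sphere
            cos_theta = 2.0 * rng.random() - 1.0
            phi = 2.0 * np.pi * rng.random()
            sin_theta = np.sqrt(1.0 - cos_theta ** 2)
            direction = np.array([sin_theta * np.cos(phi),
                                  sin_theta * np.sin(phi),
                                  cos_theta])
    return depths.mean()

print(f"mean absorption depth: {photon_random_walk():.2f} mm")
```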
Saho, Tatsunori; Onishi, Hideo
2015-07-01
In this study, we evaluated hemodynamics using simulated models and determined how cerebral aneurysms develop in simulated and patient-specific models based on medical images. Computational fluid dynamics (CFD) analyses were performed with the OpenFOAM software. Flow velocity, streamlines, and wall shear stress (WSS) were evaluated in a simulated model aneurysm with known geometry and in a three-dimensional angiographic model. The ratio of WSS at the aneurysm compared with that at the basilar artery was 1:10 in simulated model aneurysms with a diameter of 10 mm and 1:18 in the angiographic model, indicating similar tendencies. Vortex flow occurred in both model aneurysms, and the WSS decreased in larger model aneurysms. The angiographic model provided accurate CFD information, and the tendencies of simulated and angiographic models were similar. These findings indicate that hemodynamic effects are involved in the development of aneurysms.
Design of teleoperation system with a force-reflecting real-time simulator
NASA Technical Reports Server (NTRS)
Hirata, Mitsunori; Sato, Yuichi; Nagashima, Fumio; Maruyama, Tsugito
1994-01-01
We developed a force-reflecting teleoperation system that uses a real-time graphic simulator. This system eliminates the effects of communication time delays in remote robot manipulation. The simulator provides the operator with predictive display and feedback of computed contact forces through a six-degree-of-freedom (6-DOF) master arm on a real-time basis. With this system, peg-in-hole tasks involving round-trip communication time delays of up to a few seconds were performed at three support levels: a real image alone, a predictive display with a real image, and a real-time graphic simulator with computed-contact-force reflection and a predictive display. The experimental results indicate that the best teleoperation efficiency was achieved by using the force-reflecting simulator with two images. The shortest work time, lowest sensor maximum, and a 100 percent success rate were obtained. These results demonstrate the effectiveness of simulated force reflection in improving teleoperation efficiency.
Simulating the x-ray image contrast to setup techniques with desired flaw detectability
NASA Astrophysics Data System (ADS)
Koshti, Ajay M.
2015-04-01
The paper provides simulation data from previous work by the author on developing a model for estimating detectability of crack-like flaws in radiography. The methodology is developed to help in implementation of the NASA Special x-ray radiography qualification, but is generically applicable to radiography. The paper describes a method for characterizing the detector resolution. Applicability of ASTM E 2737 resolution requirements to the model is also discussed. The paper describes a model for simulating the detector resolution. A computer calculator application, discussed here, also performs predicted contrast and signal-to-noise ratio calculations. Results of various simulation runs in calculating the x-ray flaw size parameter and image contrast for varying input parameters such as crack depth, crack width, part thickness, x-ray angle, part-to-detector distance, part-to-source distance, source sizes, and detector sensitivity and resolution are given as 3D surfaces. These results demonstrate the effect of the input parameters on the flaw size parameter and the simulated image contrast of the crack. These simulations demonstrate the utility of the flaw size parameter model in setting up x-ray techniques that provide desired flaw detectability in radiography. The method is applicable to film radiography, computed radiography, and digital radiography.
Causal Reasoning in Medicine: Analysis of a Protocol.
ERIC Educational Resources Information Center
Kuipers, Benjamin; Kassirer, Jerome P.
1984-01-01
Describes the construction of a knowledge representation from the identification of the problem (nephrotic syndrome) to a running computer simulation of causal reasoning to provide a vertical slice of the construction of a cognitive model. Interactions between textbook knowledge, observations of human experts, and computational requirements are…
Computational fluid dynamics applications to improve crop production systems
USDA-ARS?s Scientific Manuscript database
Computational fluid dynamics (CFD), a set of numerical analysis and simulation tools for fluid flow processes, has emerged from the development stage and is nowadays a robust design tool. It is widely used to study various transport phenomena involving fluid flow, heat, and mass transfer, providing det...
ERIC Educational Resources Information Center
School Science Review, 1983
1983-01-01
Discusses the Rugby clock as a source of project material, use of ZX81 for experimental science, computer dice analog, oil recovery from reservoirs, and computer simulation of Thompson's experiment for determining e/m for an electron. Activities/procedures are provided when applicable. Also presents questions (and answers) related to time-coded…
Computer program uses Monte Carlo techniques for statistical system performance analysis
NASA Technical Reports Server (NTRS)
Wohl, D. P.
1967-01-01
Computer program with Monte Carlo sampling techniques determines the effect of a component part of a unit upon the overall system performance. It utilizes the full statistics of the disturbances and misalignments of each component to provide unbiased results through simulated random sampling.
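The underlying idea, sampling each component's disturbances from its full statistical description and propagating them through a system model, can be sketched in a few lines of Python. The component tolerances and the pointing-error model below are invented solely to illustrate the sampling scheme; they are not from the program described above.

```python
import numpy as np

# Toy system model: pointing error of an instrument assembled from three parts,
# each contributing a misalignment.  The model and the tolerance values are
# invented purely to illustrate Monte Carlo propagation of component statistics.
def system_pointing_error(mount_tilt, lens_decenter, sensor_rotation):
    return np.sqrt(mount_tilt**2 + (0.5 * lens_decenter)**2 + (0.2 * sensor_rotation)**2)

rng = np.random.default_rng(42)
n_trials = 100_000

# Sample each component's disturbance from its assumed statistical description.
mount_tilt      = rng.normal(0.0, 0.10, n_trials)     # mrad
lens_decenter   = rng.uniform(-0.30, 0.30, n_trials)  # mm
sensor_rotation = rng.normal(0.0, 0.50, n_trials)     # mrad

errors = system_pointing_error(mount_tilt, lens_decenter, sensor_rotation)

print(f"mean system error            : {errors.mean():.3f} mrad")
print(f"99th-percentile error        : {np.percentile(errors, 99):.3f} mrad")
print(f"fraction within 0.3 mrad spec: {(errors < 0.3).mean():.3f}")
```

Because every trial draws from the full component distributions rather than worst-case values, the resulting system statistics are unbiased in the sense described in the abstract.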
Projected role of advanced computational aerodynamic methods at the Lockheed-Georgia company
NASA Technical Reports Server (NTRS)
Lores, M. E.
1978-01-01
Experience with advanced computational methods being used at the Lockheed-Georgia Company to aid in the evaluation and design of new and modified aircraft indicates that large and specialized computers will be needed to make advanced three-dimensional viscous aerodynamic computations practical. The Numerical Aerodynamic Simulation Facility should be used to provide a tool for designing better aerospace vehicles while at the same time reducing development costs, by performing computations using Navier-Stokes solution algorithms and by permitting less sophisticated but nevertheless complex calculations to be made efficiently. Configuration definition procedures and data output formats can probably best be defined in cooperation with industry; therefore, the computer should handle many remote terminals efficiently. The capability of transferring data to and from other computers needs to be provided. Because of the significant amount of input and output associated with 3-D viscous flow calculations and because of the exceedingly fast computation speed envisioned for the computer, special attention should be paid to providing rapid, diversified, and efficient input and output.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Langer, S; Rotman, D; Schwegler, E
The Institutional Computing Executive Group (ICEG) review of FY05-06 Multiprogrammatic and Institutional Computing (M and IC) activities is presented in the attached report. In summary, we find that the M and IC staff does an outstanding job of acquiring and supporting a wide range of institutional computing resources to meet the programmatic and scientific goals of LLNL. The responsiveness and high quality of support given to users and the programs investing in M and IC reflect the dedication and skill of the M and IC staff. M and IC has successfully managed serial capacity, parallel capacity, and capability computing resources. Serial capacity computing supports a wide range of scientific projects which require access to a few high performance processors within a shared memory computer. Parallel capacity computing supports scientific projects that require a moderate number of processors (up to roughly 1000) on a parallel computer. Capability computing supports parallel jobs that push the limits of simulation science. M and IC has worked closely with Stockpile Stewardship, and together they have made LLNL a premier institution for computational and simulation science. Such a standing is vital to the continued success of laboratory science programs and to the recruitment and retention of top scientists. This report provides recommendations to build on M and IC's accomplishments and improve simulation capabilities at LLNL. We recommend that the institution fully fund (1) operation of the atlas cluster purchased in FY06 to support a few large projects; (2) operation of the thunder and zeus clusters to enable 'mid-range' parallel capacity simulations during normal operation and a limited number of large simulations during dedicated application time; (3) operation of the new yana cluster to support a wide range of serial capacity simulations; (4) improvements to the reliability and performance of the Lustre parallel file system; (5) support for the new GDO petabyte-class storage facility on the green network for use in data intensive external collaborations; and (6) continued support for visualization and other methods for analyzing large simulations. We also recommend that M and IC begin planning in FY07 for the next upgrade of its parallel clusters. LLNL investments in M and IC have resulted in a world-class simulation capability leading to innovative science. We thank the LLNL management for its continued support and thank the M and IC staff for its vision and dedicated efforts to make it all happen.
Spatial adaptive sampling in multiscale simulation
NASA Astrophysics Data System (ADS)
Rouet-Leduc, Bertrand; Barros, Kipton; Cieren, Emmanuel; Elango, Venmugil; Junghans, Christoph; Lookman, Turab; Mohd-Yusof, Jamaludin; Pavel, Robert S.; Rivera, Axel Y.; Roehm, Dominic; McPherson, Allen L.; Germann, Timothy C.
2014-07-01
In a common approach to multiscale simulation, an incomplete set of macroscale equations must be supplemented with constitutive data provided by fine-scale simulation. Collecting statistics from these fine-scale simulations is typically the overwhelming computational cost. We reduce this cost by interpolating the results of fine-scale simulation over the spatial domain of the macro-solver. Unlike previous adaptive sampling strategies, we do not interpolate on the potentially very high dimensional space of inputs to the fine-scale simulation. Our approach is local in space and time, avoids the need for a central database, and is designed to parallelize well on large computer clusters. To demonstrate our method, we simulate one-dimensional elastodynamic shock propagation using the Heterogeneous Multiscale Method (HMM); we find that spatial adaptive sampling requires only ≈ 50 × N^0.14 fine-scale simulations to reconstruct the stress field at all N grid points. Related multiscale approaches, such as Equation Free methods, may also benefit from spatial adaptive sampling.
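A toy version of the sampling idea can be sketched in one dimension: fine-scale constitutive evaluations are made only where linear interpolation between existing samples fails a tolerance test, and the macro-solver reads interpolated values everywhere else. The stand-in "fine-scale" stress function, the bisection heuristic, and the tolerance below are illustrative assumptions, not the HMM implementation described above.

```python
import numpy as np

def fine_scale_stress(strain):
    """Stand-in for an expensive fine-scale (e.g. molecular dynamics) evaluation."""
    return np.tanh(3.0 * strain) + 0.1 * strain

def adaptive_samples(lo, hi, f, tol=1e-3, cache=None):
    """Recursively sample f on [lo, hi] until linear interpolation is within tol.

    Each interval is bisected; the midpoint value is kept in `cache` (the growing
    database of fine-scale results) and refinement stops once interpolating from
    the endpoints reproduces the midpoint value to within `tol`.
    """
    if cache is None:
        cache = {lo: f(lo), hi: f(hi)}
    mid = 0.5 * (lo + hi)
    f_mid = f(mid)
    interp = 0.5 * (cache[lo] + cache[hi])
    cache[mid] = f_mid
    if abs(interp - f_mid) > tol:
        adaptive_samples(lo, mid, f, tol, cache)
        adaptive_samples(mid, hi, f, tol, cache)
    return cache

# Macro-solver grid and the sparse fine-scale database built adaptively over it.
strain_grid = np.linspace(0.0, 1.0, 1000)
db = adaptive_samples(0.0, 1.0, fine_scale_stress, tol=1e-4)
xs = np.array(sorted(db))
stress = np.interp(strain_grid, xs, [db[x] for x in xs])

print(f"fine-scale evaluations: {len(db)} instead of {strain_grid.size}")
print(f"max interpolation error: "
      f"{np.max(np.abs(stress - fine_scale_stress(strain_grid))):.2e}")
```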
A Java-Enabled Interactive Graphical Gas Turbine Propulsion System Simulator
NASA Technical Reports Server (NTRS)
Reed, John A.; Afjeh, Abdollah A.
1997-01-01
This paper describes a gas turbine simulation system which utilizes the newly developed Java language environment software system. The system provides an interactive graphical environment which allows the quick and efficient construction and analysis of arbitrary gas turbine propulsion systems. The simulation system couples a graphical user interface, developed using the Java Abstract Window Toolkit, and a transient, space-averaged, aero-thermodynamic gas turbine analysis method, both entirely coded in the Java language. The combined package provides analytical, graphical and data management tools which allow the user to construct and control engine simulations by manipulating graphical objects on the computer display screen. Distributed simulations, including parallel processing and distributed database access across the Internet and World-Wide Web (WWW), are made possible through services provided by the Java environment.
Biomechanical testing simulation of a cadaver spine specimen: development and evaluation study.
Ahn, Hyung Soo; DiAngelo, Denis J
2007-05-15
This article describes a computer model of the cadaver cervical spine specimen and virtual biomechanical testing. To develop a graphics-oriented, multibody model of a cadaver cervical spine and to build a virtual laboratory simulator for the biomechanical testing using physics-based dynamic simulation techniques. Physics-based computer simulations apply the laws of physics to solid bodies with defined material properties. This technique can be used to create a virtual simulator for the biomechanical testing of a human cadaver spine. An accurate virtual model and simulation would complement tissue-based in vitro studies by providing a consistent test bed with minimal variability and by reducing cost. The geometry of cervical vertebrae was created from computed tomography images. Joints linking adjacent vertebrae were modeled as a triple-joint complex, comprised of intervertebral disc joints in the anterior region, 2 facet joints in the posterior region, and the surrounding ligament structure. A virtual laboratory simulation of an in vitro testing protocol was performed to evaluate the model responses during flexion, extension, and lateral bending. For kinematic evaluation, the rotation of the motion segment unit, coupling behaviors, and 3-dimensional helical axes of motion were analyzed. The simulation results correlated with the findings of in vitro tests and published data. For kinetic evaluation, the forces of the intervertebral discs and facet joints of each segment were determined and visually animated. This methodology produced a realistic visualization of the in vitro experiment, and allowed for the analyses of the kinematics and kinetics of the cadaver cervical spine. With graphical illustrations and animation features, this modeling technique has provided vivid and intuitive information.
Tools for 3D scientific visualization in computational aerodynamics at NASA Ames Research Center
NASA Technical Reports Server (NTRS)
Bancroft, Gordon; Plessel, Todd; Merritt, Fergus; Watson, Val
1989-01-01
Hardware, software, and techniques used by the Fluid Dynamics Division (NASA) for performing visualization of computational aerodynamics, which can be applied to the visualization of flow fields from computer simulations of fluid dynamics about the Space Shuttle, are discussed. Three visualization techniques that were applied (post-processing, tracking, and steering) are described, as well as the post-processing software packages used: PLOT3D, SURF (Surface Modeller), GAS (Graphical Animation System), and FAST (Flow Analysis Software Toolkit). Using post-processing methods, a flow simulation was executed on a supercomputer and, after the simulation was complete, the results were processed for viewing. It is shown that the high-resolution, high-performance three-dimensional workstation combined with specially developed display and animation software provides a good tool for analyzing flow field solutions obtained from supercomputers.
NASA Astrophysics Data System (ADS)
Klopfer, Eric; Yoon, Susan; Perry, Judy
2005-09-01
This paper reports on teachers' perceptions of the educational affordances of a handheld application called Participatory Simulations. It presents evidence from five cases representing each of the populations who work with these computational tools. Evidence across multiple data sources yields results similar to previous research evaluations of handheld activities with respect to enhancing motivation, engagement and self-directed learning. Three additional themes are discussed that provide insight into the curricular applicability of Participatory Simulations and suggest a new take on ubiquitous and accessible mobile computing. These themes generally point to the multiple layers of social and cognitive flexibility intrinsic to their design: ease of adaptation to subject-matter content knowledge and curricular integration; facility in attending to teacher-individualized goals; and encouraging the adoption of learner-centered strategies.
NASA Technical Reports Server (NTRS)
Benyo, Theresa L.
2002-01-01
Integration of a supersonic inlet simulation with a computer-aided design (CAD) system is demonstrated. The integration is performed using the Project Integration Architecture (PIA). PIA provides a common environment for wrapping many types of applications. Accessing geometry data from CAD files is accomplished by incorporating appropriate function calls from the Computational Analysis Programming Interface (CAPRI). CAPRI is a CAD vendor neutral programming interface that aids in acquiring geometry data directly from CAD files. The benefits of wrapping a supersonic inlet simulation into PIA using CAPRI are: direct access to geometry data, accurate capture of geometry data, automatic conversion of data units, CAD vendor neutral operation, and on-line interactive history capture. This paper describes the PIA and the CAPRI wrapper and details the supersonic inlet simulation demonstration.
MOOSE: A parallel computational framework for coupled systems of nonlinear equations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Derek Gaston; Chris Newman; Glen Hansen
Systems of coupled, nonlinear partial differential equations (PDEs) often arise in simulation of nuclear processes. MOOSE: Multiphysics Object Oriented Simulation Environment, a parallel computational framework targeted at the solution of such systems, is presented. As opposed to traditional data-flow oriented computational frameworks, MOOSE is instead founded on the mathematical principle of Jacobian-free Newton-Krylov (JFNK) solution methods. Utilizing the mathematical structure present in JFNK, physics expressions are modularized into 'Kernels,' allowing for rapid production of new simulation tools. In addition, systems are solved implicitly and fully coupled, employing physics-based preconditioning, which provides great flexibility even with large variance in time scales. A summary of the mathematics, an overview of the structure of MOOSE, and several representative solutions from applications built on the framework are presented.
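MOOSE itself is a C++ framework, but the JFNK idea it is built on can be illustrated with SciPy's matrix-free Newton-Krylov solver: only a residual function is supplied, and no Jacobian is ever assembled. The small coupled reaction-diffusion system below is an invented example for illustration, not a MOOSE application.

```python
import numpy as np
from scipy.optimize import newton_krylov

# A small coupled, nonlinear reaction-diffusion system on a 1-D grid:
#   -u'' + u*v = 1,   -v'' + u + v**3 = 2,   with u = v = 0 at the boundaries.
# Only the residual is supplied; the Krylov solver probes the Jacobian action
# through finite-difference directional derivatives (the essence of JFNK).
N = 50
h = 1.0 / (N + 1)

def residual(x):
    u, v = x[:N], x[N:]
    def lap(w):                       # second difference with zero Dirichlet BCs
        wp = np.concatenate(([0.0], w, [0.0]))
        return (wp[2:] - 2.0 * wp[1:-1] + wp[:-2]) / h**2
    r_u = -lap(u) + u * v - 1.0
    r_v = -lap(v) + u + v**3 - 2.0
    return np.concatenate([r_u, r_v])

x0 = np.zeros(2 * N)                  # initial guess
solution = newton_krylov(residual, x0, method="lgmres", f_tol=1e-8)
u, v = solution[:N], solution[N:]
print("max residual:", np.abs(residual(solution)).max())
print("u at domain center:", u[N // 2].round(4))
```

Both unknown fields are advanced together in each Newton step, which mirrors the fully coupled, implicit treatment the abstract describes.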
Numerical aerodynamic simulation facility preliminary study: Executive study
NASA Technical Reports Server (NTRS)
1977-01-01
A computing system was designed with the capability of providing an effective throughput of one billion floating point operations per second for three dimensional Navier-Stokes codes. The methodology used in defining the baseline design, and the major elements of the numerical aerodynamic simulation facility are described.
Crocodile Technology. [CD-ROM].
ERIC Educational Resources Information Center
2000
This high school physics computer software resource is a systems and control simulator that covers the topics of electricity, electronics, mechanics, and programming. Circuits can easily be simulated on the screen and electronic and mechanical components can be combined. In addition to those provided in Crocodile Technology, a student can create…
Fast and unbiased estimator of the time-dependent Hurst exponent.
Pianese, Augusto; Bianchi, Sergio; Palazzo, Anna Maria
2018-03-01
We combine two existing estimators of the local Hurst exponent to improve both the goodness of fit and the computational speed of the algorithm. An application with simulated time series is implemented, and a Monte Carlo simulation is performed to provide evidence of the improvement.
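The combined estimator itself is not reproduced here, but a baseline local-scaling estimate of the Hurst exponent can be sketched from the variance of increments, Var[x(t+τ) − x(t)] ∝ τ^(2H), and checked by Monte Carlo on ordinary Brownian motion, whose true exponent is H = 0.5. The lag range and series length below are arbitrary illustration choices, not the authors' settings.

```python
import numpy as np

def hurst_from_increments(x, lags=range(2, 64)):
    """Estimate the Hurst exponent from the scaling of increment variances.

    For a self-similar process, Var[x(t+tau) - x(t)] ~ tau**(2H), so half the
    slope of log-variance against log-lag estimates H.
    """
    lags = np.asarray(list(lags))
    variances = np.array([np.var(x[lag:] - x[:-lag]) for lag in lags])
    slope, _ = np.polyfit(np.log(lags), np.log(variances), 1)
    return slope / 2.0

# Monte Carlo check on ordinary Brownian motion (true H = 0.5).
rng = np.random.default_rng(7)
estimates = [hurst_from_increments(np.cumsum(rng.standard_normal(10_000)))
             for _ in range(200)]
print(f"mean estimate: {np.mean(estimates):.3f}  (true H = 0.5)")
print(f"std of estimate: {np.std(estimates):.3f}")
```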
NASA Technical Reports Server (NTRS)
Kaupp, V. H.; Macdonald, H. C.; Waite, W. P.
1981-01-01
The initial phase of a program to determine the best interpretation strategy and sensor configuration for a radar remote sensing system for geologic applications is discussed. In this phase, terrain modeling and radar image simulation were used to perform parametric sensitivity studies. A relatively simple computer-generated terrain model is presented, and the data base, backscatter file, and transfer function for digital image simulation are described. Sets of images are presented that simulate the results obtained with an X-band radar from an altitude of 800 km and at three different terrain-illumination angles. The simulations include power maps, slant-range images, ground-range images, and ground-range images with statistical noise incorporated. It is concluded that digital image simulation and computer modeling provide cost-effective methods for evaluating terrain variations and sensor parameter changes, for predicting results, and for defining optimum sensor parameters.
NASA Technical Reports Server (NTRS)
Rogers, Ralph V.
1992-01-01
This research project addresses the need to provide an efficient and safe mechanism to investigate the effects and requirements of the tiltrotor aircraft's commercial operations on air transportation infrastructures, particularly air traffic control. The mechanism of choice is computer simulation. Unfortunately, the fundamental paradigms of the current air traffic control simulation models do not directly support the broad range of operational options and environments necessary to study tiltrotor operations. Modification of current air traffic simulation models to meet these requirements does not appear viable given the range and complexity of issues needing resolution. As a result, the investigation of systemic, infrastructure issues surrounding the effects of tiltrotor commercial operations requires new approaches to simulation modeling. These models should be based on perspectives and ideas closer to those associated with tiltrotor air traffic operations.
The Application of High Energy Resolution Green's Functions to Threat Scenario Simulation
NASA Astrophysics Data System (ADS)
Thoreson, Gregory G.; Schneider, Erich A.
2012-04-01
Radiation detectors installed at key interdiction points provide defense against nuclear smuggling attempts by scanning vehicles and traffic for illicit nuclear material. These hypothetical threat scenarios may be modeled using radiation transport simulations. However, high-fidelity models are computationally intensive. Furthermore, the range of smuggler attributes and detector technologies creates a large problem space not easily overcome by brute-force methods. Previous research has demonstrated that decomposing the scenario into independently simulated components using Green's functions can simulate photon detector signals with coarse energy resolution. This paper extends this methodology by presenting physics enhancements and numerical treatments which allow for an arbitrary level of energy resolution for photon transport. As a result, spectroscopic detector signals produced from full forward transport simulations can be replicated while requiring multiple orders of magnitude less computation time.
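The decomposition idea can be caricatured with precomputed, energy-resolved response matrices: each scenario component (shielding, detector) is characterized once, and a full scenario is then assembled by chaining matrices rather than re-running transport. Every number in the sketch below (energy grid, attenuation, down-scatter, detector resolution) is invented for illustration and has no connection to the actual Green's functions of the study.

```python
import numpy as np

# Toy illustration: a scenario is assembled by chaining precomputed responses,
# source -> cargo/shielding -> detector, instead of re-running a transport code.
n_bins = 64
energies = np.linspace(0.05, 3.0, n_bins)            # MeV

source = np.exp(-((energies - 0.662) / 0.02) ** 2)   # idealized Cs-137-like line

# Component 1: cargo/shielding, diagonal attenuation plus a mild down-scatter tail.
attenuation = np.diag(np.exp(-1.5 / np.sqrt(energies)))
downscatter = np.tril(np.full((n_bins, n_bins), 0.002), k=-1)
cargo_response = attenuation + downscatter

# Component 2: detector, Gaussian energy broadening (finite resolution).
sigma = 0.03 * np.sqrt(energies)                      # resolution grows with energy
detector_response = np.exp(
    -0.5 * ((energies[:, None] - energies[None, :]) / sigma[None, :]) ** 2)
detector_response /= detector_response.sum(axis=0, keepdims=True)

# Chain the precomputed responses to obtain the detected spectrum.
detected = detector_response @ (cargo_response @ source)
print("peak channel energy:", energies[detected.argmax()].round(3), "MeV")
print("total detected counts (arbitrary units):", detected.sum().round(3))
```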
Sardinha, Ana Gabriella de Oliveira; Oyama, Ceres Nunes de Resende; de Mendonça Maroja, Armando; Costa, Ivan F
2016-01-01
The aim of this paper is to provide a general discussion, an algorithm, and actual working programs for the deformation method for fast simulation of biological tissue formed by fibers and fluid. To demonstrate the benefit for clinical applications, we successfully used our computational program to deform 3D breast images acquired from patients with a 3D scanner in a real hospital environment. The method implements a quasi-static solution for elastic global deformations of objects. Each pair of surface vertices is connected and defines an elastic fiber; the set of all elastic fibers defines a mesh of smaller size than volumetric meshes, allowing simulation of complex objects with less computational effort. Behavior similar to that produced by the stress tensor is obtained through a volume-conservation equation that couples the 3D coordinates. We show the computational implementation of this approach step by step. As an example, a 2D rectangle formed by only 4 vertices is solved and, for this simple geometry, all intermediate results are shown. Actual implementations of these ideas in the form of working computer routines are also provided for general 3D objects, including a clinical application.
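To make the all-pairs elastic-fiber idea concrete, the sketch below relaxes a 4-vertex rectangle, echoing the paper's simple example, as a network of six Hookean fibers under a point load, using damped explicit iteration to reach a quasi-static state. The volume (area) conservation equation of the actual method is omitted, and the stiffness, load, and step size are invented for illustration only.

```python
import numpy as np

# Quasi-static relaxation of a 2-D rectangle whose 4 vertices are all pairwise
# connected by linear elastic fibers (6 fibers in total).  This loosely follows
# the all-pairs-fiber idea described above; the volume-conservation constraint
# of the original method is not implemented here.
vertices = np.array([[0.0, 0.0], [2.0, 0.0], [2.0, 1.0], [0.0, 1.0]])
pairs = [(i, j) for i in range(4) for j in range(i + 1, 4)]   # every vertex pair
rest_len = {p: np.linalg.norm(vertices[p[0]] - vertices[p[1]]) for p in pairs}
k = 50.0                                  # fiber stiffness (illustrative)
external = np.zeros_like(vertices)
external[2] = [5.0, 2.0]                  # pull the upper-right vertex
fixed = [0, 1]                            # clamp the bottom edge

x = vertices.copy()
for _ in range(20_000):                   # damped relaxation toward equilibrium
    force = external.copy()
    for (i, j) in pairs:
        d = x[j] - x[i]
        length = np.linalg.norm(d)
        f = k * (length - rest_len[(i, j)]) * d / length   # Hooke's law along the fiber
        force[i] += f
        force[j] -= f
    force[fixed] = 0.0                    # clamped vertices do not move
    x += 1e-4 * force                     # explicit relaxation step
print("relaxed vertex positions:\n", x.round(3))
```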
Effective Energy Simulation and Optimal Design of Side-lit Buildings with Venetian Blinds
NASA Astrophysics Data System (ADS)
Cheng, Tian
Venetian blinds are widely used in buildings to control the amount of incoming daylight, improving visual comfort and reducing heat gains in air-conditioning systems. Studies have shown that the proper design and operation of window systems could result in significant energy savings in both lighting and cooling. However, there is currently no convenient computer tool that allows effective and efficient optimization of the envelope of side-lit buildings with blinds. Three computer tools widely used for this purpose, Adeline, DOE2, and EnergyPlus, have been experimentally examined in this study. Results indicate that the first two tools give unacceptable accuracy due to the unrealistic assumptions they adopt, while the last may generate large errors under certain conditions. Moreover, current computer tools must conduct hourly energy simulations, which are not necessary for life-cycle energy analysis and optimal design, to provide annual cooling loads. This is computationally inefficient and is particularly unsuited to optimizing a building design at the initial stage, when the impacts of many design variations and optional features have to be evaluated. A methodology is therefore developed for efficient and effective thermal and daylighting simulation and optimal design of buildings with blinds. Based on geometric optics and the radiosity method, a mathematical model is developed to simulate the daylighting behavior of venetian blinds, so that indoor illuminance at any reference point can be computed directly and efficiently. The models have been validated with both experiments and simulations with Radiance. Validation results show that indoor illuminances computed by the new models agree well with the measured data, and the accuracy provided by them is equivalent to that of Radiance, while their computational efficiency is much higher than that of Radiance or EnergyPlus. Two new methods are developed for the thermal simulation of buildings. A fast Fourier transform (FFT) method is presented to avoid the root-searching process in the inverse Laplace transform of multilayered walls. Generalized explicit FFT formulae for calculating the discrete Fourier transform (DFT) are developed for the first time; they greatly facilitate the implementation of the FFT and provide a basis for generating symbolic response factors. Validation simulations show that the method generates response factors as accurate as the analytical solutions. The second method directly estimates annual or seasonal cooling loads without the need for tedious hourly energy simulations, and it is validated against hourly simulation results from DOE2. A symbolic long-term cooling load can then be created by combining the two methods with thermal network analysis; it keeps the design parameters of interest as symbols, which is particularly useful for optimal design and sensitivity analysis. The methodology is applied to an office building in Hong Kong for the optimal design of the building envelope. Design variables such as window-to-wall ratio, building orientation, and glazing optical and thermal properties are included in the study. Results show that the selected design values can significantly impact the energy performance of windows, and that the optimal design of side-lit buildings can greatly enhance energy savings. The application example also demonstrates that the developed methodology significantly facilitates optimal building design and sensitivity analysis, and leads to high computational efficiency.
Sukop, Michael C.; Huang, Haibo; Alvarez, Pedro F.; Variano, Evan A.; Cunningham, Kevin J.
2013-01-01
Lattice Boltzmann flow simulations provide a physics-based means of estimating intrinsic permeability from pore structure and accounting for inertial flow that leads to departures from Darcy's law. Simulations were used to compute intrinsic permeability where standard measurement methods may fail and to provide better understanding of departures from Darcy's law under field conditions. Simulations also investigated resolution issues. Computed tomography (CT) images were acquired at 0.8 mm interscan spacing for seven samples characterized by centimeter-scale biogenic vuggy macroporosity from the extremely transmissive sole-source carbonate karst Biscayne aquifer in southeastern Florida. Samples were as large as 0.3 m in length; 7–9 cm-scale-length subsamples were used for lattice Boltzmann computations. Macroporosity of the subsamples was as high as 81%. Matrix porosity was ignored in the simulations. Non-Darcy behavior led to a twofold reduction in apparent hydraulic conductivity as an applied hydraulic gradient increased to levels observed at regional scale within the Biscayne aquifer; larger reductions are expected under higher gradients near wells and canals. Thus, inertial flows and departures from Darcy's law may occur under field conditions. Changes in apparent hydraulic conductivity with changes in head gradient computed with the lattice Boltzmann model closely fit the Darcy-Forchheimer equation allowing estimation of the Forchheimer parameter. CT-scan resolution appeared adequate to capture intrinsic permeability; however, departures from Darcy behavior were less detectable as resolution coarsened.
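Estimating a Forchheimer parameter from simulated gradient-discharge pairs reduces to a line fit, since the Darcy-Forchheimer relation i = q/K + βq² makes i/q linear in q. The Python sketch below demonstrates the fit on synthetic data; the conductivity, β value, and noise level are illustrative assumptions, not results from the study described above.

```python
import numpy as np

# Toy illustration of recovering the Darcy conductivity K and the Forchheimer
# parameter beta from gradient/discharge pairs via  i = q/K + beta*q**2.
# The "data" below are synthetic stand-ins, not values from the study.
rng = np.random.default_rng(3)
K_true, beta_true = 2.0e-2, 1.0e4          # m/s and s^2/m^2 (illustrative)
q = np.linspace(1e-4, 5e-3, 25)            # specific discharge, m/s
i = q / K_true + beta_true * q**2          # hydraulic gradient (dimensionless)
i *= 1.0 + rng.normal(0.0, 0.01, q.size)   # 1% simulation noise

# i/q is linear in q:  i/q = 1/K + beta*q, so a least-squares line fit
# recovers both parameters.
slope, intercept = np.polyfit(q, i / q, 1)
K_fit, beta_fit = 1.0 / intercept, slope
print(f"fitted K    = {K_fit:.3e} m/s      (true {K_true:.3e})")
print(f"fitted beta = {beta_fit:.3e} s^2/m^2 (true {beta_true:.3e})")

# Apparent conductivity q/i drops as the gradient grows -- the non-Darcy effect.
print("apparent K at smallest and largest gradient:", (q / i)[[0, -1]].round(4))
```

With these illustrative numbers the apparent conductivity roughly halves across the gradient range, the same kind of non-Darcy reduction the abstract reports for field-scale gradients.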