Simulating History to Understand International Politics
ERIC Educational Resources Information Center
Weir, Kimberly; Baranowski, Michael
2011-01-01
To understand world politics, one must appreciate the context in which international systems develop and operate. Pedagogy studies demonstrate that the more active students are in their learning, the more they learn. As such, using computer simulations can complement and enhance classroom instruction. CIVILIZATION is a computer simulation game…
Preface to advances in numerical simulation of plasmas
NASA Astrophysics Data System (ADS)
Parker, Scott E.; Chacon, Luis
2016-10-01
This Journal of Computational Physics Special Issue, titled "Advances in Numerical Simulation of Plasmas," presents a snapshot of the international state of the art in the field of computational plasma physics. The articles herein are a subset of the topics presented as invited talks at the 24th International Conference on the Numerical Simulation of Plasmas (ICNSP), August 12-14, 2015 in Golden, Colorado. The choice of papers was highly selective. The ICNSP is held every other year and is the premier scientific meeting in the field of computational plasma physics.
International Futures (IFs): A Global Issues Simulation for Teaching and Research.
ERIC Educational Resources Information Center
Hughes, Barry B.
This paper describes the International Futures (IFs) computer assisted simulation game for use with undergraduates. Written in Standard Fortran IV, the model currently runs on mainframe or mini computers, but has not been adapted for micros. It has been successfully installed on Harris, Burroughs, Telefunken, CDC, Univac, IBM, and Prime machines.…
Using PC Software To Enhance the Student's Ability To Learn the Exporting Process.
ERIC Educational Resources Information Center
Buckles, Tom A.; Lange, Irene
This paper describes the advantages of using computer simulations in the classroom or managerial environment and the major premise and principal components of Export to Win!, a computer simulation used in international marketing seminars. A rationale for using computer simulations argues that they improve the quality of teaching by building…
Advances in Numerical Boundary Conditions for Computational Aeroacoustics
NASA Technical Reports Server (NTRS)
Tam, Christopher K. W.
1997-01-01
Advances in Computational Aeroacoustics (CAA) depend critically on the availability of accurate, nondispersive, minimally dissipative computational algorithms as well as high quality numerical boundary treatments. This paper focuses on recent developments in numerical boundary conditions. In a typical CAA problem, one often encounters two types of boundaries. Because a finite computation domain is used, there are external boundaries. On the external boundaries, boundary conditions simulating the solution outside the computation domain are to be imposed. Inside the computation domain, there may be internal boundaries. On these internal boundaries, boundary conditions simulating the presence of an object or surface with specific acoustic characteristics are to be applied. Numerical boundary conditions, both external and internal, developed for simple model problems are reviewed and examined. Numerical boundary conditions for real aeroacoustic problems are also discussed through specific examples. The paper concludes with a description of some much needed research in numerical boundary conditions for CAA.
2008-02-09
Campbell, S. Ogata, and F. Shimojo, "Multimillion atom simulations of nanosystems on parallel computers," in Proceedings of the International...nanomesas: multimillion-atom molecular dynamics simulations on parallel computers," J. Appl. Phys. 94, 6762 (2003). 21. P. Vashishta, R. K. Kalia...and A. Nakano, "Multimillion atom molecular dynamics simulations of nanoparticles on parallel computers," Journal of Nanoparticle Research 5, 119-135
International benchmarking of longitudinal train dynamics simulators: results
NASA Astrophysics Data System (ADS)
Wu, Qing; Spiryagin, Maksym; Cole, Colin; Chang, Chongyi; Guo, Gang; Sakalo, Alexey; Wei, Wei; Zhao, Xubao; Burgelman, Nico; Wiersma, Pier; Chollet, Hugues; Sebes, Michel; Shamdani, Amir; Melzi, Stefano; Cheli, Federico; di Gialleonardo, Egidio; Bosso, Nicola; Zampieri, Nicolò; Luo, Shihui; Wu, Honghua; Kaza, Guy-Léon
2018-03-01
This paper presents the results of the International Benchmarking of Longitudinal Train Dynamics Simulators which involved participation of nine simulators (TABLDSS, UM, CRE-LTS, TDEAS, PoliTo, TsDyn, CARS, BODYSIM and VOCO) from six countries. Longitudinal train dynamics results and computing times for four simulation cases are presented and compared. The results show that all simulators had basic agreement in simulations of locomotive forces, resistance forces and track gradients. The major differences among the simulators lie in the draft gear models. TABLDSS, UM, CRE-LTS, TDEAS, TsDyn and CARS had general agreement in terms of the in-train forces; minor differences exist as reflections of draft gear model variations. In-train force oscillations were observed in VOCO due to the introduction of wheel-rail contact. In-train force instabilities were sometimes observed in PoliTo and BODYSIM due to the velocity-controlled transitional characteristics, which could have generated unreasonable transitional stiffness. Regarding computing time per train operational second, the following list is in order of increasing computing speed: VOCO, TsDyn, PoliTO, CARS, BODYSIM, UM, TDEAS, CRE-LTS and TABLDSS (fastest); all simulators except VOCO, TsDyn and PoliTo achieved faster than real-time simulation speeds. Similarly, regarding computing time per integration step, the computing speeds in order are: CRE-LTS, VOCO, CARS, TsDyn, UM, TABLDSS and TDEAS (fastest).
Computer simulation of space charge
NASA Astrophysics Data System (ADS)
Yu, K. W.; Chung, W. K.; Mak, S. S.
1991-05-01
Using the particle-mesh (PM) method, a one-dimensional simulation of the well-known Child-Langmuir law is performed on an INTEL 80386-based personal computer system. The program is coded in Turbo Basic (a trademark of Borland International, Inc.). The numerical results obtained were in excellent agreement with theoretical predictions, and the computational time required was quite modest. This simulation exercise demonstrates that simple computer simulations using particles may be implemented successfully on the PCs available today, which will hopefully provide an incentive for newcomers to the field who wish to acquire a flavor of the elementary aspects of the practice.
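As an aside, the scaling that such a particle-mesh run reproduces can be checked directly against the closed-form Child-Langmuir law. The sketch below (Python rather than the original Turbo Basic; the parameter values are illustrative, not from the paper) evaluates the space-charge-limited current density of a planar vacuum diode and verifies its 3/2-power dependence on voltage:

```python
import math

# SI physical constants
EPS0 = 8.8541878128e-12        # vacuum permittivity, F/m
E_CHARGE = 1.602176634e-19     # elementary charge, C
M_ELECTRON = 9.1093837015e-31  # electron mass, kg

def child_langmuir_current_density(voltage, gap):
    """Space-charge-limited current density J (A/m^2) for a planar
    vacuum diode with electrode gap `gap` (m) at voltage `voltage` (V):
    J = (4/9) * eps0 * sqrt(2e/m) * V**(3/2) / d**2."""
    return (4.0 / 9.0) * EPS0 * math.sqrt(2.0 * E_CHARGE / M_ELECTRON) \
        * voltage ** 1.5 / gap ** 2

# Doubling the voltage multiplies J by 2**1.5 ~ 2.83,
# while doubling the gap divides J by 4.
j_base = child_langmuir_current_density(100.0, 1e-3)
print(child_langmuir_current_density(200.0, 1e-3) / j_base)  # ~2.828
print(child_langmuir_current_density(100.0, 2e-3) / j_base)  # ~0.25
```

A PM simulation like the one in the record would recover this law by letting the injected particles relax to the steady space-charge-limited state.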
Study on temperature distribution effect on internal charging by computer simulation
NASA Astrophysics Data System (ADS)
Yi, Zhong
2016-07-01
Internal charging (or deep dielectric charging) is a serious threat to spacecraft. Dielectric conductivity is an important parameter for internal charging, and it is sensitive to temperature. Considering that dielectrics exposed on the outside of a spacecraft may experience a relatively large temperature range, the temperature effect cannot be ignored in internal charging assessment. Some reports describe techniques for computer simulation of internal charging, but the temperature effect has not been taken into account. In this paper, we implement an internal charging simulation that accounts for the temperature distribution inside the dielectric. Geant4 is used for charge transport, and a numerical method is proposed for solving the current conservation equation. The dependences of conductivity on temperature, radiation dose rate and intense electric field are considered. Compared to the case of uniform temperature, internal charging with a temperature distribution is more complicated. Results show that the temperature distribution can cause electric field distortion within the dielectric, i.e., a locally considerable enlargement of the electric field. This usually corresponds to the peak electric field, which is critical for dielectric breakdown judgment. The peak electric field can emerge inside the dielectric or appear on the boundary. This improvement of internal charging simulation is beneficial for the assessment of internal charging under multiple factors.
A study of workstation computational performance for real-time flight simulation
NASA Technical Reports Server (NTRS)
Maddalon, Jeffrey M.; Cleveland, Jeff I., II
1995-01-01
With recent advances in microprocessor technology, some have suggested that modern workstations provide enough computational power to properly operate a real-time simulation. This paper presents the results of a computational benchmark, based on actual real-time flight simulation code used at Langley Research Center, which was executed on various workstation-class machines. The benchmark was executed on different machines from several companies including: CONVEX Computer Corporation, Cray Research, Digital Equipment Corporation, Hewlett-Packard, Intel, International Business Machines, Silicon Graphics, and Sun Microsystems. The machines are compared by their execution speed, computational accuracy, and porting effort. The results of this study show that the raw computational power needed for real-time simulation is now offered by workstations.
NASA Astrophysics Data System (ADS)
Akai, Hisazumi; Tsuneyuki, Shinji
2009-02-01
This special issue of Journal of Physics: Condensed Matter comprises selected papers from the proceedings of the 2nd International Conference on Quantum Simulators and Design (QSD2008) held in Tokyo, Japan, between 31 May and 3 June 2008. This conference was organized under the auspices of the 'Development of New Quantum Simulators and Quantum Design' Grant-in-Aid for Scientific Research on Priority Areas of the Ministry of Education, Culture, Sports, Science and Technology of Japan (MEXT). The conference focused on the development of first principles electronic structure calculations and their applications. The aim was to provide an opportunity for discussion on the progress in computational materials design and, in particular, the development of quantum simulators and quantum design. Computational materials design is a computational approach to the development of new materials. The essential ingredient is the use of quantum simulators to design a material that meets a given specification of properties and functionalities. For this to be successful, the quantum simulator should be very reliable and be applicable to systems of realistic size. During the conference, new methods of quantum simulation and quantum design were discussed, including methods beyond the local density approximation of density functional theory, order-N methods, methods dealing with excitations and reactions, and the application of these methods to the design of novel materials, devices and systems. The conference provided an international forum for experimental and theoretical researchers to exchange ideas. A total of 220 delegates from eight countries participated in the conference. There were 13 invited talks, ten oral presentations and 120 posters. The 3rd International Conference on Quantum Simulators and Design will be held in Germany in the autumn of 2011.
Computational simulation of the creep-rupture process in filamentary composite materials
NASA Technical Reports Server (NTRS)
Slattery, Kerry T.; Hackett, Robert M.
1991-01-01
A computational simulation of the internal damage accumulation which causes the creep-rupture phenomenon in filamentary composite materials is developed. The creep-rupture process involves complex interactions between several damage mechanisms. A statistically-based computational simulation using a time-differencing approach is employed to model these progressive interactions. The finite element method is used to calculate the internal stresses. The fibers are modeled as a series of bar elements which are connected transversely by matrix elements. Flaws are distributed randomly throughout the elements in the model. Load is applied, and the properties of the individual elements are updated at the end of each time step as a function of the stress history. The simulation is continued until failure occurs. Several cases, with different initial flaw dispersions, are run to establish a statistical distribution of the time-to-failure. The calculations are performed on a supercomputer. The simulation results compare favorably with the results of creep-rupture experiments conducted at the Lawrence Livermore National Laboratory.
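The statistical ingredient of such a creep-rupture simulation can be illustrated with a drastically simplified equal-load-sharing fiber-bundle model: fibers receive random strengths, and failed fibers shed their load onto the survivors. This toy sketch (not the paper's finite element approach; all names and parameters are invented for illustration) reproduces the classic result that a large bundle with uniform(0, 1) strengths fails at about 0.25 load per fiber:

```python
import random

def fiber_bundle_failure_load(n_fibers, seed=0):
    """Equal-load-sharing fiber bundle: draw random fiber strengths,
    sort them, and find the largest total load the bundle can carry.
    After the k weakest fibers fail, the remaining n - k fibers share
    the load equally, so the bundle survives load L as long as the
    weakest surviving fiber satisfies strengths[k] * (n - k) >= L."""
    rng = random.Random(seed)
    strengths = sorted(rng.random() for _ in range(n_fibers))
    return max(s * (n_fibers - k) for k, s in enumerate(strengths))

# Average failure load per fiber over independent random flaw dispersions,
# mirroring the paper's repeated runs with different initial flaws
loads = [fiber_bundle_failure_load(1000, seed=s) for s in range(20)]
print(sum(loads) / len(loads) / 1000)  # ~0.25 for uniform(0, 1) strengths
```

The paper's model adds time stepping, stress history and matrix elements on top of this basic idea of randomly distributed flaws driving progressive failure.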
High performance computing for advanced modeling and simulation of materials
NASA Astrophysics Data System (ADS)
Wang, Jue; Gao, Fei; Vazquez-Poletti, Jose Luis; Li, Jianjiang
2017-02-01
The First International Workshop on High Performance Computing for Advanced Modeling and Simulation of Materials (HPCMS2015) was held in Austin, Texas, USA, Nov. 18, 2015. HPCMS 2015 was organized by Computer Network Information Center (Chinese Academy of Sciences), University of Michigan, Universidad Complutense de Madrid, University of Science and Technology Beijing, Pittsburgh Supercomputing Center, China Institute of Atomic Energy, and Ames Laboratory.
Computer Simulation of Reading.
ERIC Educational Resources Information Center
Leton, Donald A.
In recent years, coding and decoding have been claimed to be the processes for converting one language form to another. But there has been little effort to locate these processes in the human learner or to identify the nature of the internal codes. Computer simulation of reading is useful because the similarities in the human reception and…
Modeling a Wireless Network for International Space Station
NASA Technical Reports Server (NTRS)
Alena, Richard; Yaprak, Ece; Lamouri, Saad
2000-01-01
This paper describes the application of wireless local area network (LAN) simulation modeling methods to the hybrid LAN architecture designed for supporting crew-computing tools aboard the International Space Station (ISS). These crew-computing tools, such as wearable computers and portable advisory systems, will provide crew members with real-time vehicle and payload status information and access to digital technical and scientific libraries, significantly enhancing human capabilities in space. A wireless network, therefore, will provide wearable computers and remote instruments with the high-performance computational power needed by next-generation 'intelligent' software applications. Wireless network performance in such simulated environments is characterized by the sustainable throughput of data under different traffic conditions. This data will be used to help plan the addition of more access points supporting new modules and more nodes for increased network capacity as the ISS grows.
Fluid Structural Analysis of Human Cerebral Aneurysm Using Their Own Wall Mechanical Properties
Valencia, Alvaro; Burdiles, Patricio; Ignat, Miguel; Mura, Jorge; Rivera, Rodrigo; Sordo, Juan
2013-01-01
Computational Structural Dynamics (CSD) simulations, Computational Fluid Dynamics (CFD) simulations, and Fluid Structure Interaction (FSI) simulations were carried out in an anatomically realistic model of a saccular cerebral aneurysm with the objective of quantifying the effects of the type of simulation on the principal fluid and solid mechanics results. Eight CSD simulations, one CFD simulation, and four FSI simulations were made. The results allowed the study of the influence of the type of material elements in the solid, the aneurysm's wall thickness, and the type of simulation on the modeling of a human cerebral aneurysm. The simulations used the aneurysm's own wall mechanical properties. The most complex simulation was the fully coupled FSI simulation with hyperelastic Mooney-Rivlin material, normal internal pressure, and normal variable thickness. The FSI simulation coupled in one direction, using hyperelastic Mooney-Rivlin material, normal internal pressure, and normal variable thickness, produced the results most similar to those of the fully coupled FSI simulation while requiring one-fourth of the calculation time. PMID:24151523
NASA Astrophysics Data System (ADS)
Lund, Matthew Lawrence
The space radiation environment is a significant challenge to future manned and unmanned space travel. Future missions will rely more on accurate simulations of radiation transport in space and through spacecraft to predict astronaut dose and energy deposition within spacecraft electronics. The International Space Station provides long-term measurements of the radiation environment in Low Earth Orbit (LEO); however, only the Apollo missions provided dosimetry data beyond LEO. Dosimetry analysis for deep space missions is thus poorly supported by currently available data, and there is a need to develop dose-predicting models for extended deep space missions. GEANT4, a Monte Carlo toolkit written in C++, provides a powerful framework for simulating radiation transport in arbitrary media, including spacecraft. The newest version of GEANT4 supports multithreading and MPI, enabling faster distributed processing of simulations on high-performance computing clusters. This thesis introduces a new application based on GEANT4 that greatly reduces computational time, using the Kingspeak and Ember clusters at the Center for High Performance Computing (CHPC) to simulate radiation transport through full spacecraft geometry and cutting simulation time to hours instead of weeks, without post-simulation processing. Additionally, this thesis introduces a new set of detectors beyond the historically used International Commission on Radiation Units and Measurements (ICRU) spheres for calculating dose distribution, including a thermoluminescent detector (TLD), a tissue-equivalent proportional counter (TEPC), and a human phantom, combined with a series of new primitive scorers in GEANT4 to calculate dose equivalent based on International Commission on Radiological Protection (ICRP) standards. The models developed in this thesis predict dose depositions on the International Space Station and during the Apollo missions, showing good agreement with experimental measurements.
According to these models, the greatest contributor to radiation dose on the Apollo missions was Galactic Cosmic Rays, owing to the short time spent within the radiation belts. The Apollo 14 dose measurements were an order of magnitude higher than those of the other Apollo missions. The GEANT4 model of the Apollo Command Module shows consistent doses from Galactic Cosmic Rays and the radiation belts for all missions, with a small variation in dose distribution across the capsule. The model also predicts well the dose depositions and equivalent doses in various human organs for the International Space Station and the Apollo Command Module.
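The ICRP dose-equivalent bookkeeping that such scorers implement reduces, in its simplest form, to weighting absorbed dose by radiation type. A minimal sketch (not the thesis's GEANT4 scorers; the weighting factors follow ICRP Publication 103, with the energy-dependent neutron factor omitted):

```python
# ICRP 103 radiation weighting factors (neutrons, which carry an
# energy-dependent factor, are omitted from this sketch)
W_R = {"photon": 1.0, "electron": 1.0, "proton": 2.0, "alpha": 20.0}

def equivalent_dose_sv(absorbed_dose_gy):
    """Equivalent dose H_T = sum over radiation types R of w_R * D_T,R,
    with absorbed doses in gray and the result in sievert."""
    return sum(W_R[r] * d for r, d in absorbed_dose_gy.items())

# 10 mGy from protons and 1 mGy from alphas each contribute 20 mSv,
# so the total is 2*0.01 + 20*0.001 = 0.04 Sv
print(equivalent_dose_sv({"proton": 0.01, "alpha": 0.001}))
```

In the thesis, the analogous weighting happens inside GEANT4 primitive scorers as particles deposit energy in the detector volumes.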
NASA Astrophysics Data System (ADS)
Li, Shouju; Shangguan, Zichang; Cao, Lijuan
A procedure based on FEM is proposed to simulate the interaction between the concrete segments of tunnel linings and the surrounding soils. The beam element Beam3 in the ANSYS software was used to model the segments. The ground loss induced by the shield tunneling and segment installation processes is simulated in the finite element analysis. The distributions of bending moment, axial force and shear force on the segments were computed by FEM. The computed internal forces on the segments will be used to design the reinforcing bars of the shield linings. The numerically simulated ground settlements agree with observed values.
Advanced Computer Simulations of Military Incinerators
2004-12-01
Reaction Engineering International (REI) has developed advanced computer simulation tools for analyzing chemical demilitarization incinerators. The...Manager, 2003a: Summary of Engineering Design Study Projectile Washout System (PWS) Testing. Assembled Chemical Weapons Alternatives (ACWA), Final... Engineering Design Studies for Demilitarization of Assembled Chemical Weapons at Pueblo Chemical Depot. O’Shea, L. et al, 2003: RIM 57 – Monitoring in
Solid rocket booster internal flow analysis by highly accurate adaptive computational methods
NASA Technical Reports Server (NTRS)
Huang, C. Y.; Tworzydlo, W.; Oden, J. T.; Bass, J. M.; Cullen, C.; Vadaketh, S.
1991-01-01
The primary objective of this project was to develop an adaptive finite element flow solver for simulating internal flows in the solid rocket booster. Described here is a unique flow simulator code for analyzing highly complex flow phenomena in the solid rocket booster. New methodologies and features incorporated into this analysis tool are described.
NASA Astrophysics Data System (ADS)
Deng, Hua; Dutta, Prashanta; Liu, Jin
2016-11-01
Clathrin-mediated endocytosis (CME) is one of the most important endocytic pathways for the internalization of bioparticles at the lipid membrane of cells, and it plays crucial roles in the fundamental understanding of viral infections and in intracellular/transcellular targeted drug delivery. During CME, the highly dynamic clathrin-coated pit (CCP), formed by the growth of ordered clathrin lattices, is the key scaffolding component that drives the deformation of the plasma membrane. Experimental studies have shown that the CCP alone can provide sufficient membrane curvature for facilitating membrane invagination. However, there is currently no computational model that couples cargo-receptor binding with the membrane invagination process, nor are there simulations of the dynamic growth of the CCP. We develop a stochastic computational model for clathrin-mediated endocytosis based on Metropolis Monte Carlo simulations. In our model, the energetic costs of bending the membrane and the CCP are linked with antigen-antibody interactions. The assembly of clathrin lattices is a dynamic process that correlates with antigen-antibody bond formation. This model helps study membrane deformation and the effects of the CCP during the internalization of functionalized bioparticles through CME. This work is supported by NSF Grants CBET-1250107 and CBET-1604211.
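The Metropolis acceptance rule at the heart of such stochastic models is compact. A minimal sketch on a toy harmonic energy (standing in for the membrane/CCP bending and bond energetics, which the abstract does not specify) recovers the equipartition mean energy of kT/2:

```python
import math
import random

def metropolis_step(x, energy, beta, rng, step=0.5):
    """One Metropolis move: propose x' = x + delta and accept with
    probability min(1, exp(-beta * (E(x') - E(x))))."""
    x_new = x + rng.uniform(-step, step)
    dE = energy(x_new) - energy(x)
    if dE <= 0.0 or rng.random() < math.exp(-beta * dE):
        return x_new
    return x

# Toy energy: harmonic well E(x) = x^2 / 2, in units where kT = 1 (beta = 1)
rng = random.Random(42)
harmonic = lambda x: 0.5 * x * x
x, total, count = 0.0, 0.0, 0
for i in range(200_000):
    x = metropolis_step(x, harmonic, beta=1.0, rng=rng)
    if i >= 10_000:  # discard burn-in samples
        total += harmonic(x)
        count += 1
print(total / count)  # ~0.5, matching equipartition for one quadratic mode
```

A CME model replaces the scalar coordinate with the membrane/lattice configuration and the harmonic well with the combined bending and antigen-antibody bond energies, but the accept/reject step is the same.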
Provably unbounded memory advantage in stochastic simulation using quantum mechanics
NASA Astrophysics Data System (ADS)
Garner, Andrew J. P.; Liu, Qing; Thompson, Jayne; Vedral, Vlatko; Gu, Mile
2017-10-01
Simulating the stochastic evolution of real quantities on a digital computer requires a trade-off between the precision to which these quantities are approximated, and the memory required to store them. The statistical accuracy of the simulation is thus generally limited by the internal memory available to the simulator. Here, using tools from computational mechanics, we show that quantum processors with a fixed finite memory can simulate stochastic processes of real variables to arbitrarily high precision. This demonstrates a provable, unbounded memory advantage that a quantum simulator can exhibit over its best possible classical counterpart.
The International Negotiation Seminars Project. Project ICONS.
ERIC Educational Resources Information Center
Wilkenfeld, Jonathan; Kaufman, Joyce; Starkey, Brigid
This report of a study at the University of Maryland describes an international, interactive, and interdisciplinary project for first- and second-year students, which combines a large lecture format with small-group, seminar-type sessions organized around a computer-assisted simulation model, the International Communication and Negotiation…
Fuel-Air Explosive Simulation of Far-Field Nuclear Airblasts.
1979-12-31
Blastwave Simulator," Sixième Symposium International sur les Applications Militaires de la Simulation de Souffle, Centre d'Études de Gramat, Gramat, France, p. 4.2.1, June 1979. 64. Cooperwaithe, M. and Zwisler, W. H., "TIGER Computer Program
ERIC Educational Resources Information Center
Lee, James R.
1989-01-01
Discussion of the use of simulations to teach international relations (IR) highlights the Chinese House Game, a computer-based decision-making game based on Inter Nation Simulation (INS). Topics discussed include the increasing role of artificial intelligence in IR simulations, multi-disciplinary approaches, and the direction of IR as a…
Numerical Simulation Of Flow Through An Artificial Heart
NASA Technical Reports Server (NTRS)
Rogers, Stuart; Kutler, Paul; Kwak, Dochan; Kiris, Centin
1991-01-01
Research in both artificial hearts and fluid dynamics benefits from computational studies. Algorithm that implements Navier-Stokes equations of flow extended to simulate flow of viscous, incompressible blood through artificial heart. Ability to compute details of such flow important for two reasons: internal flows with moving boundaries of academic interest in their own right, and many of the deficiencies of artificial hearts attributable to dynamics of flow.
The Communicative Computer Compares: A CALL Design Project for Elementary French.
ERIC Educational Resources Information Center
Kyle, Patricia J.
A computer lesson entitled "Aux Jeux Olympiques" (To the Olympic Games) simulates an ongoing situational dialog between the French student and the PLATO computer system. It offers an international setting for functional learning exercises focusing on students' understanding and use of comparative constructions, selected verbs, and other linguistic…
ERIC Educational Resources Information Center
Association for the Development of Computer-based Instructional Systems.
These proceedings include papers on such topics as authoring systems, computer-managed instruction, testing, instructional design, management education, simulations, intelligent computer-assisted instruction, and other areas related to computer-based education. Fifty-six papers and 104 abstracts are organized by Association for the Development of…
Practical Unitary Simulator for Non-Markovian Complex Processes
NASA Astrophysics Data System (ADS)
Binder, Felix C.; Thompson, Jayne; Gu, Mile
2018-06-01
Stochastic processes are as ubiquitous throughout the quantitative sciences as they are notorious for being difficult to simulate and predict. In this Letter, we propose a unitary quantum simulator for discrete-time stochastic processes which requires less internal memory than any classical analogue throughout the simulation. The simulator's internal memory requirements equal those of the best previous quantum models. However, in contrast to previous models, it only requires a (small) finite-dimensional Hilbert space. Moreover, since the simulator operates unitarily throughout, it avoids any unnecessary information loss. We provide a stepwise construction of simulators for a large class of stochastic processes, hence directly opening the possibility of experimental implementations with current platforms for quantum computation. The results are illustrated for an example process.
Low-Order Modeling of Internal Heat Transfer in Biomass Particle Pyrolysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wiggins, Gavin M.; Ciesielski, Peter N.; Daw, C. Stuart
2016-06-16
We present a computationally efficient, one-dimensional simulation methodology for biomass particle heating under conditions typical of fast pyrolysis. Our methodology is based on identifying the rate limiting geometric and structural factors for conductive heat transport in biomass particle models with realistic morphology to develop low-order approximations that behave appropriately. Comparisons of transient temperature trends predicted by our one-dimensional method with three-dimensional simulations of woody biomass particles reveal good agreement, if the appropriate equivalent spherical diameter and bulk thermal properties are used. We conclude that, for particle sizes and heating regimes typical of fast pyrolysis, it is possible to simulate biomass particle heating with reasonable accuracy and minimal computational overhead, even when variable size, aspherical shape, anisotropic conductivity, and complex, species-specific internal pore geometry are incorporated.
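The kind of one-dimensional reduction the authors describe can be sketched as an explicit finite-difference solve of the spherically symmetric heat equation for an equivalent sphere. This is an illustrative sketch, not the paper's model: the diffusivity, surface temperature, and grid parameters are assumed values of roughly the right order for woody biomass under fast pyrolysis.

```python
def sphere_center_temperature(radius, alpha, t_end,
                              t_surf=773.0, t_init=300.0, nr=50):
    """Explicit finite-difference solution of the 1D spherically symmetric
    heat equation dT/dt = alpha * (T'' + (2/r) T') with a fixed surface
    temperature (K); returns the center temperature (K) at time t_end (s)."""
    dr = radius / (nr - 1)
    dt = 0.2 * dr * dr / alpha  # conservative step for explicit stability
    T = [t_init] * nr
    T[-1] = t_surf  # prescribed surface temperature
    for _ in range(int(t_end / dt)):
        Tn = T[:]
        for i in range(1, nr - 1):
            r = i * dr
            d2 = (T[i + 1] - 2 * T[i] + T[i - 1]) / dr ** 2
            d1 = (T[i + 1] - T[i - 1]) / (2 * dr)
            Tn[i] = T[i] + alpha * dt * (d2 + 2.0 / r * d1)
        Tn[0] = Tn[1]  # symmetry condition at the center
        T = Tn
    return T[0]

# A 1 mm-diameter particle with wood-like diffusivity (~1.5e-7 m^2/s)
# nearly reaches the 500 C surface temperature within one second
print(sphere_center_temperature(radius=5e-4, alpha=1.5e-7, t_end=1.0))
```

The equivalent-spherical-diameter idea in the record amounts to choosing `radius` and `alpha` so that this one-dimensional transient matches the three-dimensional particle's heating curve.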
NASA Astrophysics Data System (ADS)
Wang, Jia Jie; Wriedt, Thomas; Han, Yi Ping; Mädler, Lutz; Jiao, Yong Chang
2018-05-01
Light scattering by a radially inhomogeneous droplet, modeled as a multilayered sphere, is investigated within the framework of Generalized Lorenz-Mie Theory (GLMT), with particular effort devoted to the analysis of the internal field distribution under shaped beam illumination. To circumvent numerical difficulties in the computation of the internal field for an absorbing or non-absorbing droplet with a very large size parameter, a recursive algorithm is proposed by reformulating the equations for the expansion coefficients. Two approaches are proposed for the prediction of the internal field distribution, namely a rigorous method and an approximation method. The developed computer code is found to be stable over a wide range of size parameters. Numerical computations are implemented to simulate the internal field distributions of a radially inhomogeneous droplet illuminated by a focused Gaussian beam.
The use of virtual reality simulation of head trauma in a surgical boot camp.
Vergara, Victor M; Panaiotis; Kingsley, Darra; Alverson, Dale C; Godsmith, Timothy; Xia, Shan; Caudell, Thomas P
2009-01-01
Surgical "boot camps" provide excellent opportunities to enhance orientation, learning, and preparation of new surgery interns as they enter the clinical arena. This paper describes the utilization of an interactive virtual reality (VR) simulation and associated virtual patient (VP) as an additional tool for surgical boot camps. Complementing other forms of simulation, virtual patients (VPs) require less specialized equipment and can also provide a wide variety of medical scenarios. In this paper we discuss a study that measured the learning effectiveness of a real-world VP simulation used by a class of new surgery interns who operated it with a standard computer interface. The usability of the simulator as a learning tool has been demonstrated and measured. This study brings the use of VR simulation with VPs closer to wider application and integration into a training curriculum, such as a surgery intern boot camp.
Proceedings of the 14th International Conference on the Numerical Simulation of Plasmas
NASA Astrophysics Data System (ADS)
Partial Contents are as follows: Numerical Simulations of the Vlasov-Maxwell Equations by Coupled Particle-Finite Element Methods on Unstructured Meshes; Electromagnetic PIC Simulations Using Finite Elements on Unstructured Grids; Modelling Travelling Wave Output Structures with the Particle-in-Cell Code CONDOR; SST--A Single-Slice Particle Simulation Code; Graphical Display and Animation of Data Produced by Electromagnetic, Particle-in-Cell Codes; A Post-Processor for the PEST Code; Gray Scale Rendering of Beam Profile Data; A 2D Electromagnetic PIC Code for Distributed Memory Parallel Computers; 3-D Electromagnetic PIC Simulation on the NRL Connection Machine; Plasma PIC Simulations on MIMD Computers; Vlasov-Maxwell Algorithm for Electromagnetic Plasma Simulation on Distributed Architectures; MHD Boundary Layer Calculation Using the Vortex Method; and Eulerian Codes for Plasma Simulations.
Energy consumption program: A computer model simulating energy loads in buildings
NASA Technical Reports Server (NTRS)
Stoller, F. W.; Lansing, F. L.; Chai, V. W.; Higgins, S.
1978-01-01
The JPL energy consumption computer program, developed as a useful tool in the ongoing building modification studies of the DSN energy conservation project, is described. The program simulates building heating and cooling loads and computes thermal and electric energy consumption and cost. Accuracy is not sacrificed, however, since the results lie within a ±10 percent margin of those read from energy meters. The program is carefully structured to reduce both the user's time and the running cost by requesting minimal information from the user and reducing many internal time-consuming computational loops. Many unique features were added to handle two-level electronics control rooms not found in any other program.
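The core bookkeeping of such a load simulator is a per-hour energy balance. The toy model below (a steady-state UA conduction sketch with invented parameter values, far simpler than the JPL program) separates heating from cooling energy over an hourly outdoor temperature series:

```python
def building_loads(outdoor_temps_c, setpoint_c=21.0, ua_w_per_k=300.0):
    """Toy steady-state load model: each hour the conductive load is
    UA * (T_set - T_out) in watts; with 1-hour steps this is also Wh.
    Positive loads need heating, negative loads need cooling.
    Returns (heating_kwh, cooling_kwh) over the temperature series."""
    heating_wh = cooling_wh = 0.0
    for t_out in outdoor_temps_c:
        load_w = ua_w_per_k * (setpoint_c - t_out)
        if load_w > 0:
            heating_wh += load_w
        else:
            cooling_wh -= load_w
    return heating_wh / 1000.0, cooling_wh / 1000.0

# One cold hour (0 C) and one hot hour (31 C) around a 21 C setpoint:
# heating 300*21 = 6300 Wh, cooling 300*10 = 3000 Wh
print(building_loads([0.0, 31.0]))  # (6.3, 3.0) kWh
```

The real program layers solar gains, internal loads, equipment schedules and tariff structures onto this kind of hourly balance.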
NASA Astrophysics Data System (ADS)
Wang, Tianmin; Gao, Fei; Hu, Wangyu; Lai, Wensheng; Lu, Guang-Hong; Zu, Xiaotao
2009-09-01
The Ninth International Conference on Computer Simulation of Radiation Effects in Solids (COSIRES 2008) was hosted by Beihang University in Beijing, China from 12 to 17 October 2008. Started in 1992 in Berlin, Germany, this conference series has been held biennially in Santa Barbara, CA, USA (1994); Guildford, UK (1996); Okayama, Japan (1998); State College, PA, USA (2000); Dresden, Germany (2002); Helsinki, Finland (2004); and Richland, WA, USA (2006). The COSIRES conferences are the foremost international forum on the theory, development, and application of advanced computer simulation methods and algorithms for achieving fundamental understanding and predictive modeling of the interaction of energetic particles and clusters with solids. As the proceedings of the COSIRES conferences show, these computer simulation methods and algorithms have proven very useful for studying fundamental radiation-effect processes that are not easily accessible to experimental methods owing to their small time and length scales. Moreover, with advances in computing power, they have developed remarkably across scales ranging from the mesoscopic to the atomistic, and even down to the electronic level, as well as in coupling the different scales. They are becoming increasingly applicable to materials processing and performance prediction in advanced engineering and energy-production technologies.
Capturing remote mixing due to internal tides using multi-scale modeling tool: SOMAR-LES
NASA Astrophysics Data System (ADS)
Santilli, Edward; Chalamalla, Vamsi; Scotti, Alberto; Sarkar, Sutanu
2016-11-01
Internal tides generated during the interaction of an oscillating barotropic tide with the bottom bathymetry dissipate only a fraction of their energy near the generation region. The rest is radiated away in the form of low- and high-mode internal tides. These internal tides dissipate energy at remote locations when they interact with the upper-ocean pycnocline, continental slopes, and large-scale eddies. Capturing the wide range of length and time scales involved in the life cycle of internal tides is computationally very expensive. A recently developed multi-scale modeling tool called SOMAR-LES combines the adaptive grid refinement features of SOMAR with the turbulence modeling features of a Large Eddy Simulation (LES) to capture multi-scale processes at reduced computational cost. Numerical simulations of internal tide generation over idealized bottom bathymetries are performed to demonstrate this multi-scale modeling technique. Although each of these remote mixing phenomena has been considered independently in previous studies, this work aims to capture remote mixing throughout the life cycle of an internal tide in more realistic settings, by allowing multi-level (coarse and fine) grids to co-exist and exchange information during time stepping.
NASA Technical Reports Server (NTRS)
Badler, N. I.; Fishwick, P.; Taft, N.; Agrawala, M.
1985-01-01
The use of computer graphics to simulate the movement of articulated animals and mechanisms has a number of uses ranging over many fields. Human motion simulation systems can be useful in education, medicine, anatomy, physiology, and dance. In biomechanics, computer displays help to understand and analyze performance. Simulations can be used to help understand the effect of external or internal forces. Similarly, zero-gravity simulation systems should provide a means of designing and exploring the capabilities of hypothetical zero-gravity situations before actually carrying out such actions. The advantage of using a simulation of the motion is that one can experiment with variations of a maneuver before attempting to teach it to an individual. The zero-gravity motion simulation problem can be divided into two broad areas: human movement and behavior in zero-gravity, and simulation of articulated mechanisms.
Miller, Ross H; Hamill, Joseph
2009-08-01
Biomechanical aspects of running injuries are often inferred from external loading measurements. However, previous research has suggested that relationships between external loading and potential injury-inducing internal loads can be complex and nonintuitive. Further, the loading response to training interventions can vary widely between subjects. In this study, we use a subject-specific computer simulation approach to estimate internal and external loading of the distal tibia during the impact phase for two runners when running in shoes with different midsole cushioning parameters. The results suggest that: (1) changes in tibial loading induced by footwear are not reflected by changes in ground reaction force (GRF) magnitudes; (2) the GRF loading rate is a better surrogate measure of tibial loading and stress fracture risk than the GRF magnitude; and (3) averaging results across groups may potentially mask differential responses to training interventions between individuals.
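The distinction drawn above between GRF magnitude and GRF loading rate can be made concrete with a short sketch (the sampling interval and force samples below are illustrative, not the study's data):

```python
def peak_and_loading_rate(vgrf_newtons, sample_dt):
    """Return the peak vertical ground reaction force and the maximum
    instantaneous loading rate (finite-difference slope between
    consecutive samples) over the record."""
    peak = max(vgrf_newtons)
    rates = [(vgrf_newtons[i + 1] - vgrf_newtons[i]) / sample_dt
             for i in range(len(vgrf_newtons) - 1)]
    return peak, max(rates)
```

Two footwear conditions can share nearly identical peak GRF yet differ sharply in loading rate, which is why the loading rate is the better surrogate for tibial loading in the finding above.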
NASA Astrophysics Data System (ADS)
Bessonov, O.; Silvestrov, P.
2017-02-01
This paper describes the general idea and first implementation of an interactive information and simulation system: an integrated environment that combines computational modules for modeling the aerodynamics and aerothermodynamics of re-entry space vehicles with a large collection of information materials on this topic. The internal organization and composition of the system are described and illustrated, and examples of the computational and information output are presented. The system has a unified implementation for Windows and Linux operating systems and can be deployed on any modern high-performance personal computer.
Robot, computer problem solving system
NASA Technical Reports Server (NTRS)
Becker, J. D.
1972-01-01
The development of a computer problem-solving system is reported; it addresses physical problems faced by an artificial robot moving through a complex environment. Fundamental interaction constraints with a real environment are simulated for the robot by visual scan and the creation of an internal environmental model. The programming system used in constructing the problem-solving system for the simulated robot and its simulated world environment is outlined, together with the tasks the system is capable of performing. A very general framework for understanding the relationship between an observed behavior and an adequate description of that behavior is included.
Particle-in-cell code library for numerical simulation of the ECR source plasma
NASA Astrophysics Data System (ADS)
Shirkov, G.; Alexandrov, V.; Preisendorf, V.; Shevtsov, V.; Filippov, A.; Komissarov, R.; Mironov, V.; Shirkova, E.; Strekalovsky, O.; Tokareva, N.; Tuzikov, A.; Vatulin, V.; Vasina, E.; Fomin, V.; Anisimov, A.; Veselov, R.; Golubev, A.; Grushin, S.; Povyshev, V.; Sadovoi, A.; Donskoi, E.; Nakagawa, T.; Yano, Y.
2003-05-01
The project "Numerical simulation and optimization of ion accumulation and production in multicharged ion sources" is funded by the International Science and Technology Center (ISTC). A summary of recent project development and the first version of a computer code library for simulation of electron-cyclotron resonance (ECR) source plasmas based on the particle-in-cell method are presented.
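The particle-in-cell cycle underlying such a code library — deposit charge to a grid, solve for the field, gather it back to the particles, and push them — can be sketched in a minimal 1D electrostatic form. This is a generic textbook PIC step in normalized units, not code from the ECR library itself:

```python
def pic_step(x, v, q_over_m, dt, ngrid, length):
    """One leapfrog step of a minimal 1D electrostatic PIC cycle:
    nearest-grid-point charge deposition, electric field from Gauss's
    law by summation (with a neutralizing background), field gather,
    velocity push, and periodic particle move. Normalized units
    (eps0 = 1, unit macro-particle charge); purely illustrative."""
    dx = length / ngrid
    # 1) deposit charge density on the grid (NGP weighting)
    rho = [0.0] * ngrid
    for xi in x:
        rho[int(xi / dx) % ngrid] += 1.0 / dx
    mean_rho = sum(rho) / ngrid
    # 2) integrate dE/dx = rho - mean_rho across the periodic grid
    efield = [0.0] * ngrid
    for i in range(1, ngrid):
        efield[i] = efield[i - 1] + (rho[i - 1] - mean_rho) * dx
    mean_e = sum(efield) / ngrid
    efield = [e - mean_e for e in efield]   # enforce zero-mean field
    # 3) gather field at particles, push velocities, move positions
    for p in range(len(x)):
        e_p = efield[int(x[p] / dx) % ngrid]
        v[p] += q_over_m * e_p * dt
        x[p] = (x[p] + v[p] * dt) % length
    return x, v
```

A production ECR-source code adds magnetic fields, collisions, and multiple charge states on top of this basic cycle; a uniformly loaded plasma, as in the test below, produces zero field and no acceleration.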
Kang, K-T.; Koh, Y-G.; Son, J.; Kwon, O-R.; Baek, C.; Jung, S. H.
2016-01-01
Objectives Malrotation of the femoral component can result in post-operative complications in total knee arthroplasty (TKA), including patellar maltracking. We therefore used computational simulation to investigate the influence of femoral malrotation on contact stresses on the polyethylene (PE) insert and the patellar button, as well as on the forces on the collateral ligaments. Materials and Methods Validated finite element (FE) models for internal and external malrotations from 0° to 10°, with respect to the neutral position, were developed to evaluate the effect of femoral component malrotation in TKA. Femoral malrotation in TKA was simulated under walking stance-phase gait and squat loading conditions. Results Contact stress on the medial side of the PE insert increased with internal femoral malrotation and decreased with external femoral malrotation in both stance-phase gait and squat loading conditions; the lateral side of the PE insert showed the opposite trend. Contact stress on the patellar button increased with internal femoral malrotation and decreased with external femoral malrotation in both loading conditions; in particular, it increased by 98% with internal malrotation of 10° in the squat loading condition. The forces on the medial collateral ligament (MCL) and the lateral collateral ligament (LCL) increased with internal and external femoral malrotation, respectively. Conclusions These findings support orthopaedic surgeons in determining a more accurate femoral component alignment in order to reduce post-operative PE problems. Cite this article: K-T. Kang, Y-G. Koh, J. Son, O-R. Kwon, C. Baek, S. H. Jung, K. K. Park. Measuring the effect of femoral malrotation on knee joint biomechanics for total knee arthroplasty using computational simulation. Bone Joint Res 2016;5:552–559. DOI: 10.1302/2046-3758.511.BJR-2016-0107.R1. PMID:28094763
Real-Time Computer-Mediated Communication: Email and Instant Messaging Simulation
ERIC Educational Resources Information Center
Newman, Amy
2007-01-01
As computer-mediated communication becomes increasingly prevalent in the workplace, students need to apply effective writing principles to today's technologies. Email, in particular, requires interns and new hires to manage incoming messages, use an appropriate tone, and craft clear, concise messages. In addition, with instant messaging (IM)…
People Power--Computer Games in the Classroom
ERIC Educational Resources Information Center
Hilliard, Ivan
2014-01-01
This article presents a case study in the use of the computer simulation game "People Power," developed by the International Center on Nonviolent Conflict. The principal objective of the activity was to offer students an opportunity to understand the dynamics of social conflicts, in a format not possible in a traditional classroom…
Expert Systems--The New International Language of Business.
ERIC Educational Resources Information Center
Sondak, Norman E.; And Others
A discussion of expert systems, computer programs designed to simulate human reasoning and expertise, begins with the assumption that few business educators understand the impact that expert systems will have on international business. The fundamental principles of the design and development of expert systems in business are outlined, with special…
NASA Technical Reports Server (NTRS)
2004-01-01
The proceedings of this symposium consist of abstracts of talks presented by interns at NASA Glenn Research Center (GRC). The interns assisted researchers at GRC in projects which primarily address the following topics: aircraft engines and propulsion, spacecraft propulsion, fuel cells, thin film photovoltaic cells, aerospace materials, computational fluid dynamics, aircraft icing, management, and computerized simulation.
A Note on Verification of Computer Simulation Models
ERIC Educational Resources Information Center
Aigner, Dennis J.
1972-01-01
Establishes an argument questioning the validity of one "test" of goodness-of-fit (the extent to which a series of obtained measures agrees with a series of theoretical measures) for the simulated time path of a simple endogenous (internally determined) variable in a simultaneous, perhaps dynamic, econometric model. (Author)
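A typical goodness-of-fit statistic of the kind scrutinized in such verification work is Theil's inequality coefficient, which compares a simulated time path against the observed series. A minimal sketch:

```python
import math

def theil_u(simulated, observed):
    """Theil's inequality coefficient for a simulated vs. observed
    series: 0 indicates a perfect fit; values near 1 indicate a very
    poor fit. Both series must have the same length."""
    n = len(observed)
    mse = sum((s - o) ** 2 for s, o in zip(simulated, observed)) / n
    norm_s = math.sqrt(sum(s * s for s in simulated) / n)
    norm_o = math.sqrt(sum(o * o for o in observed) / n)
    return math.sqrt(mse) / (norm_s + norm_o)
```

The note's point stands regardless of the statistic chosen: in a simultaneous model, a good score on the simulated path of one endogenous variable need not validate the model's structure.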
MoCog1: A computer simulation of recognition-primed human decision making
NASA Technical Reports Server (NTRS)
Gevarter, William B.
1991-01-01
The results of the first stage of a research effort to develop a 'sophisticated' computer model of human cognitive behavior are described. Most human decision making is an experience-based, relatively straightforward, largely automatic response to internal goals and drives, utilizing cues and opportunities perceived from the current environment. The development of the architecture and computer program (MoCog1) associated with such 'recognition-primed' decision making is discussed. The resulting program was successfully used as a vehicle to simulate earlier findings on how an individual's implicit theories orient the individual toward particular goals, with resultant cognitions, affects, and behavior in response to the environment.
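The 'recognition-primed' scheme described here — perceived cues matched against stored experience to trigger a learned response — can be caricatured in a few lines. The cue tables below are invented for illustration and are not taken from MoCog1:

```python
def recognize_and_act(cues, experience):
    """Return the response of the stored situation whose cue pattern
    best overlaps the currently perceived cues (a crude stand-in for
    recognition-primed decision making)."""
    best_response, best_score = None, -1
    for situation, (pattern, response) in experience.items():
        score = len(cues & pattern)      # number of matching cues
        if score > best_score:
            best_response, best_score = response, score
    return best_response

# Hypothetical experience base: situation -> (cue set, learned response)
experience = {
    "engine_fire": ({"smoke", "heat", "alarm"}, "shut_down_engine"),
    "low_fuel":    ({"gauge_low", "warning_light"}, "divert_to_airport"),
}
```

The actual model adds goals, drives, and affective state on top of cue matching; this sketch only shows the automatic recognize-then-respond core.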
NASA Astrophysics Data System (ADS)
Chien, Cheng-Chih
In the past thirty years, individual studies have found the effectiveness of computer-assisted learning to vary. Today, with dramatic technical improvements, computers are widespread in schools and used in a variety of ways. In this study, a design model involving educational technology, pedagogy, and content domain is proposed for the effective use of computers in learning. Computer simulation, constructivist and Vygotskian perspectives, and circular motion are the three elements of the specific Chain Model for instructional design. The goal of the physics course is to help students discard ideas inconsistent with those of the physics community and rebuild new knowledge. To achieve this learning goal, the strategies included using conceptual conflicts and using language to internalize specific tasks into mental functions. Computer simulations and accompanying worksheets were used to help students explore their own ideas and generate questions for discussion. Using animated images to describe the dynamic processes involved in circular motion may reduce the complexity and possible miscommunication of verbal explanations. The effectiveness of the instructional material on student learning was evaluated. The results of problem-solving activities show that students using computer simulations scored significantly higher than students not using them. For conceptual understanding, students in the non-simulation group scored significantly higher on the pretest; no significant difference between the two groups was observed on the posttest. The relations of gender, prior physics experience, and frequency of computer use outside the course to student achievement were also studied. There were fewer female than male students, and fewer students using computer simulations than not using them; these characteristics reduce the statistical power for detecting differences. Future research could introduce further simulation interventions to explore the potential of computer simulation in helping students learn, and a test of conceptual understanding with more problems at an appropriate difficulty level may be needed.
[The research on bidirectional reflectance computer simulation of forest canopy at pixel scale].
Song, Jin-Ling; Wang, Jin-Di; Shuai, Yan-Min; Xiao, Zhi-Qiang
2009-08-01
Computer simulation here uses computer graphics to generate a realistic 3D structural scene of vegetation and simulates the canopy radiation regime using the radiosity method. In the present paper, the authors extend the computer simulation model to simulate forest canopy bidirectional reflectance at the pixel scale. Trees, however, are complex structures, tall and many-branched, so building a realistic structural scene for a forest requires hundreds of thousands or even millions of facets, which is difficult for the radiosity method to compute. To enable the radiosity method to simulate the forest scene at the pixel scale, the authors propose simplifying the structure of forest crowns by abstracting them as ellipsoids. Based on the optical characteristics of the tree components and on the internal energy transport of photons in a real crown, the authors assigned optical characteristics to the ellipsoid surface facets. In the computer simulation of the forest, following the idea of geometrical-optics models, a gap model is used to obtain the forest canopy bidirectional reflectance at the pixel scale. Comparing the computer simulation results with the GOMS model and with Multi-angle Imaging SpectroRadiometer (MISR) multi-angle remote sensing data shows that the simulation results agree with the GOMS simulation results and the MISR BRF, although some problems remain to be solved. The authors conclude that the study has important value for the application of multi-angle remote sensing and the inversion of vegetation canopy structure parameters.
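The gap model invoked above rests on a Beer's-law gap probability: the chance that a view ray traverses a crown without interception. A minimal sketch, assuming a spherical leaf-angle distribution (projection factor G = 0.5), which is a common textbook assumption rather than the paper's specific parameterization:

```python
import math

def gap_probability(lai, view_zenith_deg, g_factor=0.5):
    """Probability that a view ray at the given zenith angle passes
    through a crown of leaf area index `lai` without interception
    (Beer's-law gap model; g_factor is the leaf projection factor,
    0.5 for a spherical leaf-angle distribution)."""
    mu = math.cos(math.radians(view_zenith_deg))
    return math.exp(-g_factor * lai / mu)
```

Larger view zenith angles mean longer in-crown path lengths and hence smaller gap fractions, which is what makes the reflectance directional.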
High resolution simulations of a variable HH jet
NASA Astrophysics Data System (ADS)
Raga, A. C.; de Colle, F.; Kajdič, P.; Esquivel, A.; Cantó, J.
2007-04-01
Context: In many papers, the flows in Herbig-Haro (HH) jets have been modeled as collimated outflows with a time-dependent ejection. In particular, a supersonic variability of the ejection velocity leads to the production of "internal working surfaces" which (for appropriate forms of the time-variability) can produce emitting knots that resemble the chains of knots observed along HH jets. Aims: In this paper, we present axisymmetric simulations of an "internal working surface" in a radiative jet (produced by an ejection velocity variability). We concentrate on a given parameter set (i.e., a jet with a constant ejection density and a sinusoidal velocity variability with a 20 yr period and a 40 km s-1 half-amplitude), and study the behaviour of the solution at increasing numerical resolutions. Methods: In our simulations, we solve the gasdynamic equations together with a 17-species atomic/ionic network, and are therefore able to compute emission coefficients for different emission lines. Results: We compute 3 adaptive grid simulations, with 20, 163 and 1310 grid points (at the highest grid resolution) across the initial jet radius. These simulations show that successively more complex structures are obtained at increasing numerical resolutions, both in the stratifications of the flow variables and in the predicted emission line intensity maps. Conclusions: We find that while the detailed structure of an internal working surface depends on resolution, the predicted emission line luminosities (integrated over the volume of the working surface) are surprisingly stable. This is definitely good news for the future computation of predictions from radiative jet models for carrying out comparisons with observations of HH objects.
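For a sinusoidal ejection velocity variability like the one simulated here, the moment at which fast ejecta first overtake slower, earlier ejecta (forming a working surface) follows from ballistic kinematics. The sketch below uses the abstract's nominal parameters (300 km/s mean speed is an assumed, typical HH jet value, not stated in the abstract) and finds the first overtake numerically; distances are in mixed km/s·yr units, which is harmless since only position comparisons are made:

```python
import math

def first_crossing_time(u0, du, period_yr, n=200, t_max=30.0, steps=3000):
    """Eject ballistic parcels at equally spaced times over one period
    with u(t) = u0 + du*sin(2*pi*t/period), and return the earliest
    time at which a later parcel overtakes its predecessor (the onset
    of an internal working surface), or None if no overtake occurs
    before t_max. Speeds in km/s, times in years; illustrative only."""
    eject = [i * period_yr / n for i in range(n)]
    speed = [u0 + du * math.sin(2.0 * math.pi * t / period_yr)
             for t in eject]
    for k in range(1, steps + 1):
        t = t_max * k / steps
        # positions of all parcels already ejected, in ejection order
        pos = [s * (t - te) for s, te in zip(speed, eject) if t >= te]
        # an overtake shows up as a later parcel lying farther out
        if any(pos[i + 1] > pos[i] for i in range(len(pos) - 1)):
            return t
    return None
```

With constant ejection speed (du = 0) no working surface ever forms; with the paper's 40 km/s half-amplitude and 20 yr period, the first crossing appears after roughly u0 divided by the peak acceleration of the ejection time-series, consistent with the analytic shock-formation estimate.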
NASA Astrophysics Data System (ADS)
Peter, Daniel; Videau, Brice; Pouget, Kevin; Komatitsch, Dimitri
2015-04-01
Improving the resolution of tomographic images is crucial to answering important questions on the nature of Earth's subsurface structure and internal processes. Seismic tomography is the most prominent approach, in which seismic signals from ground-motion records are used to infer physical properties of internal structures such as compressional- and shear-wave speeds, anisotropy, and attenuation. Recent advances in regional- and global-scale seismic inversions move towards full-waveform inversions, which require accurate simulations of seismic wave propagation in complex 3D media and provide access to the full 3D seismic wavefields. However, these numerical simulations are computationally very expensive and need high-performance computing (HPC) facilities to further improve the current state of knowledge. In recent years, many-core architectures such as graphics processing units (GPUs) have been added to available large HPC systems. Such GPU-accelerated computing, together with advances in multi-core central processing units (CPUs), can greatly accelerate scientific applications. There are two main choices of language support for GPU cards: the CUDA programming environment and the OpenCL language standard. CUDA software development targets NVIDIA graphics cards, while OpenCL has been adopted mainly for AMD graphics cards. In order to employ such hardware accelerators for seismic wave propagation simulations, we incorporated the code generation tool BOAST into the existing spectral-element code package SPECFEM3D_GLOBE. This allows us to use meta-programming of computational kernels and generate optimized source code for both the CUDA and OpenCL languages, running simulations on either CUDA or OpenCL hardware accelerators. We show applications of forward and adjoint seismic wave propagation on CUDA/OpenCL GPUs, validating results and comparing performance for different simulations and hardware usages.
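The BOAST approach — generating CUDA or OpenCL source for the same logical kernel from one description — can be caricatured with a simple string-template generator. This is a toy stand-in for the idea (BOAST itself is a Ruby DSL with a much richer API); the kernel and backend tables are invented for illustration:

```python
# One logical kernel, with backend-specific slots left as template fields.
KERNEL_TEMPLATE = """{qualifier} void saxpy({global_q} float *x,
                    {global_q} const float *y, float a, int n) {{
    int i = {index_expr};
    if (i < n) x[i] = a * x[i] + y[i];
}}"""

# Backend-specific syntax: function qualifier, pointer address-space
# qualifier, and global-thread-index expression.
BACKENDS = {
    "cuda":   {"qualifier": "__global__", "global_q": "",
               "index_expr": "blockIdx.x * blockDim.x + threadIdx.x"},
    "opencl": {"qualifier": "__kernel", "global_q": "__global",
               "index_expr": "get_global_id(0)"},
}

def generate_kernel(backend):
    """Emit backend-specific source text for the same logical kernel."""
    return KERNEL_TEMPLATE.format(**BACKENDS[backend])
```

The payoff of this style is that performance tuning (loop unrolling, vector widths, memory layouts) can be explored once, in the generator, and emitted consistently for both targets.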
Ozaki, Y.; Kaida, A.; Miura, M.; Nakagawa, K.; Toda, K.; Yoshimura, R.; Sumi, Y.; Kurabayashi, T.
2017-01-01
Abstract Early stage oral cancer can be cured with oral brachytherapy, but whole-body radiation exposure status has not been previously studied. Recently, the International Commission on Radiological Protection (ICRP) recommended the use of ICRP phantoms to estimate radiation exposure from external and internal radiation sources. In this study, we used a Monte Carlo simulation with ICRP phantoms to estimate whole-body exposure from oral brachytherapy. We used the Particle and Heavy Ion Transport code System (PHITS) to model oral brachytherapy with 192Ir hairpins and 198Au grains and to perform a Monte Carlo simulation on the ICRP adult reference computational phantoms. To confirm the simulations, we also computed local dose distributions from these small sources and compared them with results from Oncentra manual Low Dose Rate Treatment Planning (mLDR) software, which is used in day-to-day clinical practice. We successfully obtained data on the absorbed dose for each organ in males and females. Sex-averaged equivalent doses were 0.547 and 0.710 Sv with 192Ir hairpins and 198Au grains, respectively. Simulation with PHITS was reliable when compared with the alternative computational technique using mLDR software. We conclude that the absorbed dose for each organ and whole-body exposure from oral brachytherapy can be estimated with Monte Carlo simulation using PHITS on ICRP reference phantoms. Effective doses for patients with oral cancer were obtained. PMID:28339846
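The sex-averaging step used in such effective-dose estimates follows the ICRP Publication 103 definition: equivalent doses to the male and female reference phantoms are averaged per tissue before the tissue weighting factors are applied and summed. A sketch with an illustrative, deliberately incomplete subset of the ICRP 103 weights:

```python
# Illustrative subset of ICRP 103 tissue weighting factors. A real
# effective-dose calculation must include all tissues (weights sum to 1);
# this partial table is for demonstration only.
TISSUE_WEIGHTS = {"lung": 0.12, "stomach": 0.12, "gonads": 0.08,
                  "thyroid": 0.04, "skin": 0.01}

def effective_dose(h_male, h_female):
    """Sex-averaged effective dose (Sv): average the male and female
    equivalent doses per tissue, weight by the tissue factor, and sum."""
    return sum(w * 0.5 * (h_male[t] + h_female[t])
               for t, w in TISSUE_WEIGHTS.items())
```

With a full tissue table, this is exactly the reduction that turns the per-organ absorbed-dose output of a phantom simulation into a single effective-dose figure.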
The application of virtual reality systems as a support of digital manufacturing and logistics
NASA Astrophysics Data System (ADS)
Golda, G.; Kampa, A.; Paprocka, I.
2016-08-01
Modern trends in the development of computer-aided techniques are heading toward integrating the design of competitive products with so-called "digital manufacturing and logistics," supported by computer simulation software. All phases of the product lifecycle, from the design of a new product, through planning and control of manufacturing, assembly, internal logistics and repairs, quality control, distribution to customers and after-sale service, up to recycling or disposal, should be aided and managed by advanced product lifecycle management software packages. This paper addresses important problems in providing an efficient flow of materials in supply chain management across the whole product lifecycle using computer simulation. The authors pay particular attention to the processes of acquiring relevant information and correct data, necessary for virtual modeling and computer simulation of integrated manufacturing and logistics systems. The article describes the possibilities of using virtual reality software applications to model and simulate production and logistics processes in an enterprise across different aspects of product lifecycle management. The authors demonstrate an effective method of creating computer simulations for digital manufacturing and logistics, show modeled and programmed examples and solutions, discuss development trends, and present application options that go beyond the single enterprise.
Technologies for Propelled Hypersonic Flight Volume 2 - Subgroup 2: Scram Propulsion
2006-01-01
effort is focused on the MSD code, initially developed by ONERA to simulate internal aerodynamic flows, which has been upgraded in cooperation...inlets were studied: a mixed external/internal compression inlet studied at DLR with testing in the H2K and TMK wind-tunnels, and an internal ...movable panels during operation along the trajectory, modification of the internal geometry by a control-command computer connected with sensors on the
Writing Essays on a Laptop or a Desktop Computer: Does It Matter?
ERIC Educational Resources Information Center
Ling, Guangming; Bridgeman, Brent
2013-01-01
To explore the potential effect of computer type on the Test of English as a Foreign Language-Internet-Based Test (TOEFL iBT) Writing Test, a sample of 444 international students was used. The students were randomly assigned to either a laptop or a desktop computer to write two TOEFL iBT practice essays in a simulated testing environment, followed…
Numerical simulation of unsteady viscous flows
NASA Technical Reports Server (NTRS)
Hankey, Wilbur L.
1987-01-01
Most unsteady viscous flows may be grouped into two categories, i.e., forced and self-sustained oscillations. Examples of forced oscillations occur in turbomachinery and in internal combustion engines while self-sustained oscillations prevail in vortex shedding, inlet buzz, and wing flutter. Numerical simulation of these phenomena was achieved due to the advancement of vector processor computers. Recent progress in the simulation of unsteady viscous flows is addressed.
NASA Technical Reports Server (NTRS)
Trinh, H. P.; Gross, K. W.
1989-01-01
Computational studies have been conducted to examine the capability of a CFD code by simulating the steady-state thrust chamber internal flow. The SSME served as the sample case, and significant parameter profiles are presented and discussed. Performance predictions from TDK, the recommended JANNAF reference computer program, are compared with those from PHOENICS to establish the credibility of its results. The investigation of an overexpanded nozzle flow is particularly addressed, since it plays an important role in the area-ratio selection of future rocket engines. Experience gained during this as-yet-incomplete flow separation study is summarized, and future steps are outlined.
Using Computer Simulation for Neurolab 2 Mission Planning
NASA Technical Reports Server (NTRS)
Sanders, Betty M.
1997-01-01
This paper presents an overview of the procedure used in the creation of a computer simulation video generated by the Graphics Research and Analysis Facility at NASA/Johnson Space Center. The simulation was preceded by an analysis of anthropometric characteristics of crew members and workspace requirements for 13 experiments to be conducted on Neurolab 2 which is dedicated to neuroscience and behavioral research. Neurolab 2 is being carried out as a partnership among national domestic research institutes and international space agencies. The video is a tour of the Spacelab module as it will be configured for STS-90, scheduled for launch in the spring of 1998, and identifies experiments that can be conducted in parallel during that mission. Therefore, this paper will also address methods for using computer modeling to facilitate the mission planning activity.
A Kernel-Free Particle-Finite Element Method for Hypervelocity Impact Simulation. Chapter 4
NASA Technical Reports Server (NTRS)
Park, Young-Keun; Fahrenthold, Eric P.
2004-01-01
An improved hybrid particle-finite element method has been developed for the simulation of hypervelocity impact problems. Unlike alternative methods, the revised formulation computes the density without reference to any kernel or interpolation functions, for either the density or the rate of dilatation. This simplifies the state space model and leads to a significant reduction in computational cost. The improved method introduces internal energy variables as generalized coordinates in a new formulation of the thermomechanical Lagrange equations. Example problems show good agreement with exact solutions in one dimension and good agreement with experimental data in a three dimensional simulation.
STS-133 crew members Mike Barratt and Nicole Stott in cupola
2010-06-08
JSC2010-E-090701 (8 June 2010) --- Several computer monitors are featured in this image photographed during an STS-133 exercise in the systems engineering simulator in the Avionics Systems Laboratory at NASA's Johnson Space Center. The facility includes moving scenes of full-sized International Space Station components over a simulated Earth.
Combining high performance simulation, data acquisition, and graphics display computers
NASA Technical Reports Server (NTRS)
Hickman, Robert J.
1989-01-01
Issues involved in the continuing development of an advanced simulation complex are discussed. This approach provides the capability to perform the majority of tests on advanced systems non-destructively. The controlled test environments can be replicated to examine the response of the systems under test to alternative treatments of the system control design, or to test the function and qualification of specific hardware. Field tests verify that the elements simulated in the laboratories are sufficient. The digital computer is hosted by a Digital Equipment Corp. MicroVAX computer, with an Aptec Computer Systems Model 24 I/O computer performing the communication function. An Applied Dynamics International AD100 performs the high-speed simulation computing, and an Evans and Sutherland PS350 performs on-line graphics display. A Scientific Computer Systems SCS40 acts as a high-performance FORTRAN program processor supporting the complex by generating numerous large files, from programs coded in FORTRAN, that are required for real-time processing. Four programming languages are involved in the process: FORTRAN, ADSIM, ADRIO, and STAPLE. FORTRAN is employed on the MicroVAX host to initialize and terminate the simulation runs on the system. The generation of the data files on the SCS40 is also performed with FORTRAN programs. ADSIM and ADRIO are used to program the processing elements of the AD100 and its IOCP processor. STAPLE is used to program the Aptec DIP and DIA processors.
Computer-intensive simulation of solid-state NMR experiments using SIMPSON.
Tošner, Zdeněk; Andersen, Rasmus; Stevensson, Baltzar; Edén, Mattias; Nielsen, Niels Chr; Vosegaard, Thomas
2014-09-01
Conducting large-scale solid-state NMR simulations requires fast computer software, potentially in combination with efficient computational resources, to complete within a reasonable time frame. Such simulations may involve large spin systems, multiple-parameter fitting of experimental spectra, or multiple-pulse experiment design using parameter scans, non-linear optimization, or optimal control procedures. To efficiently accommodate such simulations, we present an improved version of the widely distributed open-source SIMPSON NMR simulation software package, adapted to contemporary high-performance hardware setups. The software is optimized for fast performance on standard stand-alone computers, multi-core processors, and large clusters of identical nodes. We describe the novel features for fast computation, including internal matrix manipulations, propagator setups, and acquisition strategies. For efficient calculation of powder averages, we implemented the interpolation method of Alderman, Solum, and Grant, as well as the recently introduced fast Wigner transform interpolation technique. The potential of the optimal control toolbox is greatly enhanced by higher-precision gradients in combination with the efficient optimization algorithm known as limited-memory Broyden-Fletcher-Goldfarb-Shanno. In addition, advanced parallelization can be used in all types of calculations, providing significant time reductions. SIMPSON thus reflects current knowledge in the field of numerical simulations of solid-state NMR experiments. The efficiency and novel features are demonstrated on representative simulations. Copyright © 2014 Elsevier Inc. All rights reserved.
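The powder averaging that SIMPSON accelerates with interpolation amounts to integrating an orientation-dependent resonance frequency over the sphere of crystallite orientations. A brute-force sketch for an axially symmetric interaction (a generic illustration, not SIMPSON's actual algorithm):

```python
import math

def powder_average(delta, n_beta=2000):
    """Brute-force powder average of an axially symmetric anisotropic
    frequency delta*(3*cos^2(beta)-1)/2 over the polar angle beta,
    with the sin(beta) solid-angle weight (midpoint rule). The
    isotropic (powder) average of this term is zero."""
    total = weight = 0.0
    for k in range(n_beta):
        beta = math.pi * (k + 0.5) / n_beta
        w = math.sin(beta)                  # solid-angle weight
        total += w * delta * (3.0 * math.cos(beta) ** 2 - 1.0) / 2.0
        weight += w
    return total / weight
```

Schemes like the Alderman-Solum-Grant interpolation exist precisely because this naive orientation loop, repeated for every spectral calculation inside a fit, dominates the cost.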
NASA Astrophysics Data System (ADS)
King, David, Jr.; Manson, Russell; Trout, Joseph; Decicco, Nicholas; Rios, Manny
2015-04-01
Wake vortices are generated by airplanes in flight. These vortices decay slowly and may persist for several minutes after their creation; together with associated smaller-scale turbulent structures, they present a hazard to incoming flights. It is for this reason that incoming flights are timed to arrive after these vortices have dissipated. Local weather conditions, mainly prevailing winds, can affect the transport and evolution of these vortices; therefore, there is a need to fully understand localized wind patterns at the airport-sized microscale. Here we have undertaken a computational investigation into the impacts of localized wind flows and physical structures on the velocity field at Atlantic City International Airport. The simulations are undertaken in OpenFOAM, an open-source computational fluid dynamics software package, using an optimized geometric mesh of the airport. Initial conditions for the simulations are based on historical data, with the option to run simulations based on projected weather conditions imported from the Weather Research & Forecasting (WRF) Model. Sub-grid-scale turbulence is modeled using a Large Eddy Simulation (LES) approach. The initial results gathered from the WRF Model simulations and historical weather data analysis are presented elsewhere.
NASA Astrophysics Data System (ADS)
Sharpanskykh, Alexei; Treur, Jan
Employing rich internal agent models of actors in large-scale socio-technical systems often results in scalability issues. The problem addressed in this paper is how to improve the computational properties of a complex internal agent model while preserving its behavioral properties. The problem is addressed for the case of an existing affective-cognitive decision-making model instantiated for an emergency scenario. For this internal decision model, an abstracted behavioral agent model is obtained, which ensures a substantial increase in computational efficiency at the cost of approximately 1% behavioral error. The abstraction technique used can be applied to a wide range of internal agent models with loops, for example those involving mutual affective-cognitive interactions.
A hybrid method for the computation of quasi-3D seismograms.
NASA Astrophysics Data System (ADS)
Masson, Yder; Romanowicz, Barbara
2013-04-01
The development of powerful computer clusters and efficient numerical computation methods, such as the Spectral Element Method (SEM), has made possible the computation of seismic wave propagation in a heterogeneous 3D earth. However, the cost of these computations is still problematic for global-scale tomography, which requires hundreds of such simulations. Part of the ongoing research effort is dedicated to the development of faster modeling methods based on the spectral element method. Capdeville et al. (2002) proposed to couple SEM simulations with a normal-mode calculation (C-SEM). Nissen-Meyer et al. (2007) used 2D SEM simulations to compute 3D seismograms in a 1D earth model. Thanks to these developments, and for the first time, Lekic et al. (2011) developed a 3D global model of the upper mantle using SEM simulations. At the local and continental scale, adjoint tomography, which uses many SEM simulations, can be implemented on current computers (Tape, Liu et al. 2009). Due to their smaller size, these models offer higher resolution. They provide us with images of the crust and the upper part of the mantle. In an attempt to extend such local adjoint tomographic inversions into the deep earth, we are developing a hybrid method where SEM computations are limited to a region of interest within the earth. That region can have an arbitrary shape and size. Outside this region, the seismic wavefield is extrapolated to obtain synthetic data at the Earth's surface. A key feature of the method is the use of a time-reversal mirror to inject the wavefield induced by a distant seismic source into the region of interest (Robertsson and Chapman 2000). We compute synthetic seismograms as follows: inside the region of interest, we use the regional spectral element software RegSEM to compute wave propagation in 3D; outside this region, the wavefield is extrapolated to the surface by convolution with the Green's functions from the mirror to the seismic stations.
For now, these Green's functions are computed using 2D SEM simulation in a 1D Earth model. Such seismograms account for the 3D structure inside the region of interest in a quasi-exact manner. Later we plan to extrapolate the misfit function computed from such seismograms at the stations back into the SEM region in order to compute local adjoint kernels. This opens a new path toward regional adjoint tomography into the deep Earth. Capdeville, Y., et al. (2002). "Coupling the spectral element method with a modal solution for elastic wave propagation in global Earth models." Geophysical Journal International 152(1): 34-67. Lekic, V. and B. Romanowicz (2011). "Inferring upper-mantle structure by full waveform tomography with the spectral element method." Geophysical Journal International 185(2): 799-831. Nissen-Meyer, T., et al. (2007). "A two-dimensional spectral-element method for computing spherical-earth seismograms-I. Moment-tensor source." Geophysical Journal International 168(3): 1067-1092. Robertsson, J. O. A. and C. H. Chapman (2000). "An efficient method for calculating finite-difference seismograms after model alterations." Geophysics 65(3): 907-918. Tape, C., et al. (2009). "Adjoint tomography of the southern California crust." Science 325(5943): 988-992.
Varol, Altan; Basa, Selçuk
2009-06-01
Maxillary distraction osteogenesis is a challenging procedure when performed with internal submerged distractors, due to the need to set accurate distraction vectors. Five patients with severe maxillary retrognathia were planned with Mimics 10.01 CMF and Simplant 10.01 software. Distraction vectors and the rods of the distractors were arranged in a 3D environment and on STL models. All patients were operated on under general anaesthesia, and complete Le Fort I downfracture was performed. All distractions were performed according to the oriented vectors. All patients achieved stable occlusion and a satisfactory aesthetic outcome at the end of the treatment period. Preoperative bending of internal maxillary distractors avoids significant loss of operating time. 3D computer-aided surgical simulation and model surgery provide accurate orientation of distraction vectors for premaxillary and internal trans-sinusoidal maxillary distraction. The combination of virtual surgical simulation and stereolithographic model surgery can be validated as an effective method of preoperative planning for complicated maxillofacial surgery cases.
Lifton, Joseph J; Malcolm, Andrew A; McBride, John W
2015-01-01
X-ray computed tomography (CT) is a radiographic scanning technique for visualising cross-sectional images of an object non-destructively. From these cross-sectional images it is possible to evaluate internal dimensional features of a workpiece which may otherwise be inaccessible to tactile and optical instruments. Beam hardening is a physical process that degrades the quality of CT images and has previously been suggested to influence dimensional measurements. Using a validated simulation tool, the influence of spectrum pre-filtration and beam hardening correction is evaluated for internal and external dimensional measurements. Beam hardening is shown to influence internal and external dimensions in opposite senses, and to have a greater influence on outer dimensions than on inner ones. The results suggest that the combination of spectrum pre-filtration and a local gradient-based surface determination method is able to greatly reduce the influence of beam hardening in X-ray CT for dimensional metrology.
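A minimal numerical sketch of the beam hardening effect, under an assumed two-line spectrum rather than any validated CT model, shows the effective attenuation coefficient falling with thickness, and spectrum pre-filtration reducing that spread:

```python
import numpy as np

# Two-energy toy spectrum: low-energy photons attenuate more strongly.
# Illustrative numbers only, not a validated CT simulation.
weights = np.array([0.6, 0.4])   # relative fluence of the two spectral lines
mu = np.array([1.0, 0.3])        # attenuation coefficients (1/cm)

def effective_mu(thickness, w):
    """Apparent attenuation per unit thickness of the polychromatic beam."""
    transmitted = np.sum(w * np.exp(-mu * thickness))
    return -np.log(transmitted / np.sum(w)) / thickness

mu_thin = effective_mu(0.1, weights)
mu_thick = effective_mu(5.0, weights)

# Pre-filtration preferentially removes the soft (high-mu) component,
# flattening the effective attenuation curve.
filtered = weights * np.exp(-mu * 1.0)   # 1 cm pre-filter of the same material
mu_thin_f = effective_mu(0.1, filtered)
mu_thick_f = effective_mu(5.0, filtered)

print(mu_thin > mu_thick)                           # True: the beam "hardens"
print(mu_thin_f - mu_thick_f < mu_thin - mu_thick)  # True: filtration helps
```

Because the apparent attenuation depends on path length, edges of thick parts are reconstructed slightly differently from thin ones, which is how the artifact couples into dimensional measurements.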
Gilet, Estelle; Diard, Julien; Bessière, Pierre
2011-01-01
In this paper, we study the collaboration of perception and action representations involved in cursive letter recognition and production. We propose a mathematical formulation for the whole perception–action loop, based on probabilistic modeling and Bayesian inference, which we call the Bayesian Action–Perception (BAP) model. Because it models both the perception and action processes, the purpose of this model is to study their interaction. More precisely, the model includes a feedback loop from motor production, which implements an internal simulation of movement. Motor knowledge can therefore be involved during perception tasks. In this paper, we formally define the BAP model and show how it solves the following six varied cognitive tasks using Bayesian inference: i) letter recognition (purely sensory), ii) writer recognition, iii) letter production (with different effectors), iv) copying of trajectories, v) copying of letters, and vi) letter recognition (with internal simulation of movements). We present computer simulations of each of these cognitive tasks, and discuss experimental predictions and theoretical developments. PMID:21674043
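As a hedged illustration of the kind of Bayesian inference involved (a toy discrete example with hypothetical priors and likelihoods, not the BAP model's actual variables):

```python
# Minimal discrete Bayesian inference, in the spirit of the letter-recognition
# task: P(letter | feature) ∝ P(feature | letter) * P(letter).
# All numbers here are hypothetical.
priors = {"a": 0.5, "o": 0.3, "e": 0.2}        # P(letter)
likelihood = {"a": 0.7, "o": 0.6, "e": 0.1}    # P(closed_loop | letter)

evidence = sum(priors[l] * likelihood[l] for l in priors)
posterior = {l: priors[l] * likelihood[l] / evidence for l in priors}

best = max(posterior, key=posterior.get)
print(best)  # "a": the loop feature favors 'a' under these toy numbers
```

In the full model, the same inversion runs over trajectory and motor variables, which is what lets motor knowledge (the internal simulation of movement) sharpen a purely sensory posterior.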
ISS Radiation Shielding and Acoustic Simulation Using an Immersive Environment
NASA Technical Reports Server (NTRS)
Verhage, Joshua E.; Sandridge, Chris A.; Qualls, Garry D.; Rizzi, Stephen A.
2002-01-01
The International Space Station Environment Simulator (ISSES) is a virtual reality application that uses high-performance computing, graphics, and audio rendering to simulate the radiation and acoustic environments of the International Space Station (ISS). This CAVE application allows the user to maneuver to different locations inside or outside of the ISS and interactively compute and display the radiation dose at a point. The directional dose data is displayed as a color-mapped sphere that indicates the relative levels of radiation from all directions about the center of the sphere. The noise environment is rendered in real time over headphones or speakers and includes non-spatial background noise, such as air-handling equipment, and spatial sounds associated with specific equipment racks, such as compressors or fans. Changes can be made to equipment rack locations that produce changes in both the radiation shielding and system noise. The ISSES application allows for interactive investigation and collaborative trade studies between radiation shielding and noise for crew safety and comfort.
A symbiotic approach to fluid equations and non-linear flux-driven simulations of plasma dynamics
NASA Astrophysics Data System (ADS)
Halpern, Federico
2017-10-01
The fluid framework is ubiquitous in studies of plasma transport and stability. Typical forms of the fluid equations are motivated by analytical work dating back several decades, before computer simulations were indispensable, and are therefore not necessarily optimal for numerical computation. We demonstrate a new first-principles approach to obtaining manifestly consistent, skew-symmetric fluid models, ensuring internal consistency and conservation properties even in discrete form. Mass, kinetic energy, and internal energy become quadratic (and always positive) invariants of the system. The model lends itself to a robust, straightforward discretization scheme with inherent non-linear stability. A simpler, drift-ordered form of the equations is obtained, and first results of their numerical implementation as a binary framework for bulk-fluid global plasma simulations are demonstrated. This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Fusion Energy Sciences, Theory Program, under Award No. DE-FG02-95ER54309.
Multigrid Computations of 3-D Incompressible Internal and External Viscous Rotating Flows
NASA Technical Reports Server (NTRS)
Sheng, Chunhua; Taylor, Lafayette K.; Chen, Jen-Ping; Jiang, Min-Yee; Whitfield, David L.
1996-01-01
This report presents multigrid methods for solving 3-D incompressible viscous rotating flows in a NASA low-speed centrifugal compressor and marine propeller 4119. Numerical formulations are given in both the rotating reference frame and the absolute frame. Comparisons are made of the accuracy, efficiency, and robustness of the steady-state scheme and the time-accurate scheme for simulating viscous rotating flows in complex internal and external flow applications. Prospects for further increases in the efficiency and accuracy of unsteady time-accurate computations are discussed.
NASA Technical Reports Server (NTRS)
Majumdar, Alok; Leclair, Andre; Moore, Ric; Schallhorn, Paul
2011-01-01
GFSSP stands for Generalized Fluid System Simulation Program. It is a general-purpose computer program that computes pressure, temperature, and flow distribution in a flow network: it calculates pressure, temperature, and concentrations at nodes, and flow rates through branches. It was primarily developed for applications such as internal flow analysis of a turbopump and transient flow analysis of a propulsion system. GFSSP development started in 1994 with the objective of providing a generalized and easy-to-use flow analysis tool for thermo-fluid systems.
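A minimal sketch of the node-and-branch idea, assuming a linearized network with hypothetical conductances (GFSSP itself solves the nonlinear system with real fluid properties):

```python
import numpy as np

# Linearized flow-network sketch: flow in branch ij is Q = g * (Pi - Pj).
# Nodes: 0 = inlet boundary (P fixed), 1 = internal, 2 = outlet boundary.
# All values are hypothetical, for illustration only.
P0, P2 = 100.0, 0.0        # boundary pressures (kPa)
g01, g12 = 2.0, 3.0        # branch conductances

# Mass balance at internal node 1: g01*(P0 - P1) = g12*(P1 - P2),
# written as a (here 1x1) linear system for the unknown node pressures.
A = np.array([[g01 + g12]])
b = np.array([g01 * P0 + g12 * P2])
P1 = np.linalg.solve(A, b)[0]

Q_in = g01 * (P0 - P1)     # flow into node 1
Q_out = g12 * (P1 - P2)    # flow out of node 1
print(round(P1, 2), round(Q_in, 2), round(Q_out, 2))  # 40.0 120.0 120.0
```

With more internal nodes the same pattern produces a sparse linear (or, with real fluid resistances, nonlinear) system, one mass-balance row per node.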
High-Performance Algorithms and Complex Fluids | Computational Science |
only possible by combining experimental data with simulation. Capabilities include: block-laden, non-Newtonian, as well as traditional internal and external flows. Contact: Ray Grout Group
Computational Aerodynamic Simulations of a Spacecraft Cabin Ventilation Fan Design
NASA Technical Reports Server (NTRS)
Tweedt, Daniel L.
2010-01-01
Quieter working environments for astronauts are needed if future long-duration space exploration missions are to be safe and productive. Ventilation and payload cooling fans are known to be dominant sources of noise, with the International Space Station being a good case in point. To address this issue cost effectively, early attention to fan design, selection, and installation has been recommended, leading to an effort by NASA to examine the potential for small-fan noise reduction by improving fan aerodynamic design. As a preliminary part of that effort, the aerodynamics of a cabin ventilation fan designed by Hamilton Sundstrand has been simulated using computational fluid dynamics codes, and the computed solutions analyzed to quantify various aspects of the fan aerodynamics and performance. Four simulations were performed at the design rotational speed: two at the design flow rate and two at off-design flow rates. Following a brief discussion of the computational codes, various aerodynamic- and performance-related quantities derived from the computed flow fields are presented along with relevant flow field details. The results show that the computed fan performance is in generally good agreement with stated design goals.
Ozaki, Y; Watanabe, H; Kaida, A; Miura, M; Nakagawa, K; Toda, K; Yoshimura, R; Sumi, Y; Kurabayashi, T
2017-07-01
Early stage oral cancer can be cured with oral brachytherapy, but whole-body radiation exposure status has not been previously studied. Recently, the International Commission on Radiological Protection (ICRP) recommended the use of ICRP phantoms to estimate radiation exposure from external and internal radiation sources. In this study, we used a Monte Carlo simulation with ICRP phantoms to estimate whole-body exposure from oral brachytherapy. We used the Particle and Heavy Ion Transport code System (PHITS) to model oral brachytherapy with 192Ir hairpins and 198Au grains and to perform a Monte Carlo simulation on the ICRP adult reference computational phantoms. To confirm the simulations, we also computed local dose distributions from these small sources and compared them with the results from Oncentra manual Low Dose Rate Treatment Planning (mLDR) software, which is used in day-to-day clinical practice. We successfully obtained data on the absorbed dose for each organ in males and females. Sex-averaged equivalent doses were 0.547 and 0.710 Sv with 192Ir hairpins and 198Au grains, respectively. Simulation with PHITS was reliable when compared with an alternative computational technique using mLDR software. We conclude that the absorbed dose for each organ and whole-body exposure from oral brachytherapy can be estimated with Monte Carlo simulation using PHITS on ICRP reference phantoms. Effective doses for patients with oral cancer were obtained. © The Author 2017. Published by Oxford University Press on behalf of The Japan Radiation Research Society and Japanese Society for Radiation Oncology.
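A hedged sketch of the Monte Carlo principle involved, with a single hypothetical attenuation coefficient rather than PHITS's full particle-transport physics:

```python
import numpy as np

# Toy Monte Carlo transport sketch (not PHITS): photons leaving a point
# source travel an exponentially distributed distance before interacting.
# The fraction interacting within radius R should match 1 - exp(-mu * R).
rng = np.random.default_rng(0)
mu = 0.5        # attenuation coefficient (1/cm), hypothetical tissue value
R = 2.0         # radius of interest (cm)
n = 100_000     # number of simulated photon histories

path_lengths = rng.exponential(scale=1.0 / mu, size=n)
estimate = np.mean(path_lengths < R)
exact = 1.0 - np.exp(-mu * R)
print(abs(estimate - exact) < 0.01)  # True: MC estimate matches analytic value
```

A full code like PHITS extends this sampling to energy-dependent cross sections, scattering angles, and voxelized phantom geometry, tallying energy deposited per organ to obtain absorbed dose.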
ERIC Educational Resources Information Center
Cangelosi, Angelo
2007-01-01
In this paper we present the "grounded adaptive agent" computational framework for studying the emergence of communication and language. This modeling framework is based on simulations of population of cognitive agents that evolve linguistic capabilities by interacting with their social and physical environment (internal and external symbol…
Foundations for computer simulation of a low pressure oil flooded single screw air compressor
NASA Astrophysics Data System (ADS)
Bein, T. W.
1981-12-01
The necessary logic to construct a computer model that predicts the performance of an oil-flooded, single screw air compressor is developed. The geometric variables and relationships used to describe the general single screw mechanism are developed, and the governing equations describing the processes are derived from their primary relationships. The assumptions used in the development are also defined and justified. The computer model predicts the internal pressure, temperature, and flow rates through the leakage paths throughout the compression cycle of the single screw compressor. The model uses empirical external values as the basis for the internal predictions. The computed values are compared to the empirical values, and conclusions are drawn based on the results. Recommendations are made for future efforts to improve the computer model and to verify some of the conclusions that are drawn.
Low-order modeling of internal heat transfer in biomass particle pyrolysis
Wiggins, Gavin M.; Daw, C. Stuart; Ciesielski, Peter N.
2016-05-11
We present a computationally efficient, one-dimensional simulation methodology for biomass particle heating under conditions typical of fast pyrolysis. Our methodology is based on identifying the rate-limiting geometric and structural factors for conductive heat transport in biomass particle models with realistic morphology, to develop low-order approximations that behave appropriately. Comparisons of transient temperature trends predicted by our one-dimensional method with three-dimensional simulations of woody biomass particles reveal good agreement, provided the appropriate equivalent spherical diameter and bulk thermal properties are used. Here, we conclude that, for particle sizes and heating regimes typical of fast pyrolysis, it is possible to simulate biomass particle heating with reasonable accuracy and minimal computational overhead, even when variable size, aspherical shape, anisotropic conductivity, and complex, species-specific internal pore geometry are incorporated.
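As a hedged illustration of the simplest low-order limit (a lumped-capacitance equivalent sphere with hypothetical wood-like properties, valid only at small Biot number and much cruder than the paper's 1-D model):

```python
import numpy as np

# Lumped-capacitance heating of an equivalent sphere. Property values are
# hypothetical wood-like numbers, and the model ignores internal gradients,
# so it applies only when the Biot number is small.
rho, cp, k = 500.0, 1800.0, 0.2   # density, heat capacity, conductivity
d = 1e-3                          # equivalent spherical diameter (m)
h = 50.0                          # convective coefficient (W/m^2/K)
T0, T_inf = 300.0, 773.0          # initial particle and reactor temperature (K)

Bi = h * (d / 2) / k              # Biot number; lumped model needs Bi << 1
V_over_A = d / 6.0                # volume-to-surface ratio of a sphere
tau = rho * cp * V_over_A / h     # thermal time constant (s)

t = np.linspace(0.0, 10.0, 101)
T = T_inf + (T0 - T_inf) * np.exp(-t / tau)   # exponential approach to T_inf
print(round(Bi, 3), round(tau, 2), round(T[-1], 1))
```

The equivalent spherical diameter is exactly the kind of geometric reduction the paper calibrates so that such low-order models track the full 3-D morphology.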
Spectral decomposition of internal gravity wave sea surface height in global models
NASA Astrophysics Data System (ADS)
Savage, Anna C.; Arbic, Brian K.; Alford, Matthew H.; Ansong, Joseph K.; Farrar, J. Thomas; Menemenlis, Dimitris; O'Rourke, Amanda K.; Richman, James G.; Shriver, Jay F.; Voet, Gunnar; Wallcraft, Alan J.; Zamudio, Luis
2017-10-01
Two global ocean models ranging in horizontal resolution from 1/12° to 1/48° are used to study the space and time scales of sea surface height (SSH) signals associated with internal gravity waves (IGWs). Frequency-horizontal wavenumber SSH spectral densities are computed over seven regions of the world ocean from two simulations of the HYbrid Coordinate Ocean Model (HYCOM) and three simulations of the Massachusetts Institute of Technology general circulation model (MITgcm). High wavenumber, high-frequency SSH variance follows the predicted IGW linear dispersion curves. The realism of high-frequency motions (>0.87 cpd) in the models is tested through comparison of the frequency spectral density of dynamic height variance computed from the highest-resolution runs of each model (1/25° HYCOM and 1/48° MITgcm) with dynamic height variance frequency spectral density computed from nine in situ profiling instruments. These high-frequency motions are of particular interest because of their contributions to the small-scale SSH variability that will be observed on a global scale in the upcoming Surface Water and Ocean Topography (SWOT) satellite altimetry mission. The variance at supertidal frequencies can be comparable to the tidal and low-frequency variance for high wavenumbers (length scales smaller than ~50 km), especially in the higher-resolution simulations. In the highest-resolution simulations, the high-frequency variance can be greater than the low-frequency variance at these scales.
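A minimal sketch of a frequency-horizontal wavenumber spectral density computation, using a synthetic traveling wave rather than model SSH, and omitting the windowing and normalization a production analysis would apply:

```python
import numpy as np

# Frequency-wavenumber spectral density sketch: a synthetic traveling wave
# eta(x, t) = cos(k0*x - w0*t) should concentrate variance at (k0, w0),
# just as IGW variance concentrates along dispersion curves.
nx, nt = 64, 64
x = np.arange(nx)                 # grid spacing dx = 1
t = np.arange(nt)                 # sampling interval dt = 1
k0_cyc, f0_cyc = 4 / nx, 8 / nt   # wavenumber and frequency (cycles/sample)

eta = np.cos(2 * np.pi * (k0_cyc * x[None, :] - f0_cyc * t[:, None]))
power = np.abs(np.fft.fft2(eta)) ** 2     # 2-D power spectral density

f_axis = np.fft.fftfreq(nt)       # frequency axis (cycles per time step)
k_axis = np.fft.fftfreq(nx)       # wavenumber axis (cycles per grid step)
i, j = np.unravel_index(np.argmax(power), power.shape)
print(abs(f_axis[i]), abs(k_axis[j]))  # 0.125 0.0625: the injected (w0, k0)
```

On real SSH fields the same transform, applied per region with detrending and windowing, yields the spectral densities whose high-frequency, high-wavenumber ridges trace the IGW dispersion relations.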
Adaptive time steps in trajectory surface hopping simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spörkel, Lasse, E-mail: spoerkel@kofo.mpg.de; Thiel, Walter, E-mail: thiel@kofo.mpg.de
2016-05-21
Trajectory surface hopping (TSH) simulations are often performed in combination with active-space multi-reference configuration interaction (MRCI) treatments. Technical problems may arise in such simulations if active and inactive orbitals strongly mix and switch in some particular regions. We propose to use adaptive time steps when such regions are encountered in TSH simulations. For this purpose, we present a computational protocol that is easy to implement and increases the computational effort only in the critical regions. We test this procedure through TSH simulations of a GFP chromophore model (OHBI) and a light-driven rotary molecular motor (F-NAIBP) on semiempirical MRCI potential energy surfaces, by comparing the results from simulations with adaptive time steps to analogous ones with constant time steps. For both test molecules, the number of successful trajectories without technical failures rises significantly, from 53% to 95% for OHBI and from 25% to 96% for F-NAIBP. The computed excited-state lifetime remains essentially the same for OHBI and increases somewhat for F-NAIBP, and there is almost no change in the computed quantum efficiency for internal rotation in F-NAIBP. We recommend the general use of adaptive time steps in TSH simulations with active-space CI methods because this will help to avoid technical problems, increase the overall efficiency and robustness of the simulations, and allow for a more complete sampling.
Adaptive time steps in trajectory surface hopping simulations
NASA Astrophysics Data System (ADS)
Spörkel, Lasse; Thiel, Walter
2016-05-01
Trajectory surface hopping (TSH) simulations are often performed in combination with active-space multi-reference configuration interaction (MRCI) treatments. Technical problems may arise in such simulations if active and inactive orbitals strongly mix and switch in some particular regions. We propose to use adaptive time steps when such regions are encountered in TSH simulations. For this purpose, we present a computational protocol that is easy to implement and increases the computational effort only in the critical regions. We test this procedure through TSH simulations of a GFP chromophore model (OHBI) and a light-driven rotary molecular motor (F-NAIBP) on semiempirical MRCI potential energy surfaces, by comparing the results from simulations with adaptive time steps to analogous ones with constant time steps. For both test molecules, the number of successful trajectories without technical failures rises significantly, from 53% to 95% for OHBI and from 25% to 96% for F-NAIBP. The computed excited-state lifetime remains essentially the same for OHBI and increases somewhat for F-NAIBP, and there is almost no change in the computed quantum efficiency for internal rotation in F-NAIBP. We recommend the general use of adaptive time steps in TSH simulations with active-space CI methods because this will help to avoid technical problems, increase the overall efficiency and robustness of the simulations, and allow for a more complete sampling.
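The protocol above is specific to TSH with active-space CI. As a hedged analogy only, the core idea of shrinking the step in critical regions and relaxing it elsewhere can be sketched with step-doubling error control on a simple ODE:

```python
import numpy as np

# Generic adaptive-time-step sketch via step doubling (an analogy, not the
# paper's TSH protocol, which detects critical regions via orbital mixing).
def rhs(y):
    return -2.0 * y                 # simple test ODE: y' = -2y

def euler(y, dt):
    return y + dt * rhs(y)

def integrate(y0, t_end, dt0, tol):
    t, y, dt = 0.0, y0, dt0
    while t < t_end:
        dt = min(dt, t_end - t)
        y_full = euler(y, dt)                       # one full step
        y_half = euler(euler(y, dt / 2), dt / 2)    # two half steps
        err = abs(y_full - y_half)                  # local error estimate
        if err > tol:
            dt *= 0.5               # shrink the step in critical regions
            continue                # retry without advancing
        t, y = t + dt, y_half
        if err < tol / 4:
            dt *= 2.0               # relax the step where dynamics are smooth
    return y

y_end = integrate(1.0, 1.0, dt0=0.5, tol=1e-4)
print(abs(y_end - np.exp(-2.0)) < 1e-2)  # True: close to the exact solution
```

The payoff mirrors the paper's: extra work is spent only where the local error estimate (there, orbital switching; here, step doubling) flags trouble.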
The internal gravity wave spectrum in two high-resolution global ocean models
NASA Astrophysics Data System (ADS)
Arbic, B. K.; Ansong, J. K.; Buijsman, M. C.; Kunze, E. L.; Menemenlis, D.; Müller, M.; Richman, J. G.; Savage, A.; Shriver, J. F.; Wallcraft, A. J.; Zamudio, L.
2016-02-01
We examine the internal gravity wave (IGW) spectrum in two sets of high-resolution global ocean simulations that are forced concurrently by atmospheric fields and the astronomical tidal potential. We analyze global 1/12th and 1/25th degree HYCOM simulations, and global 1/12th, 1/24th, and 1/48th degree simulations of the MITgcm. We are motivated by the central role that IGWs play in ocean mixing, by operational considerations of the US Navy, which runs HYCOM as an ocean forecast model, and by the impact of the IGW continuum on the sea surface height (SSH) measurements that will be taken by the planned NASA/CNES SWOT wide-swath altimeter mission. We (1) compute the IGW horizontal wavenumber-frequency spectrum of kinetic energy, and interpret the results with linear dispersion relations computed from the IGW Sturm-Liouville problem, (2) compute and similarly interpret nonlinear spectral kinetic energy transfers in the IGW band, (3) compute and similarly interpret IGW contributions to SSH variance, (4) perform comparisons of modeled IGW kinetic energy frequency spectra with moored current meter observations, and (5) perform comparisons of modeled IGW kinetic energy vertical wavenumber-frequency spectra with moored observations. This presentation builds upon our work in Muller et al. (2015, GRL), who performed tasks (1), (2), and (4) in 1/12th and 1/25th degree HYCOM simulations, for one region of the North Pacific. New for this presentation are tasks (3) and (5), the inclusion of MITgcm solutions, and the analysis of additional ocean regions.
A fast recursive algorithm for molecular dynamics simulation
NASA Technical Reports Server (NTRS)
Jain, A.; Vaidehi, N.; Rodriguez, G.
1993-01-01
The present recursive algorithm for solving the dynamical equations of motion of molecular systems employs internal variable models that reduce the computation time of such simulations by an order of magnitude relative to Cartesian models. Extensive use is made of spatial operator methods recently developed for the analysis and simulation of the dynamics of multibody systems. A factor-of-450 speedup over the conventional O(N^3) algorithm is demonstrated for the case of a polypeptide molecule with 400 residues.
MoCog1: A computer simulation of recognition-primed human decision making
NASA Technical Reports Server (NTRS)
Gevarter, William B.
1991-01-01
This report describes the successful results of the first stage of a research effort to develop a 'sophisticated' computer model of human cognitive behavior. Most human decision-making is of the experience-based, relatively straightforward, largely automatic type: a response to internal goals and drives, utilizing cues and opportunities perceived in the current environment. This report describes the development of the architecture and computer program associated with such 'recognition-primed' decision-making. The resultant computer program was successfully utilized as a vehicle to simulate findings on how an individual's implicit theories orient them toward particular goals, with resultant cognitions, affects, and behavior in response to their environment. The present work is an expanded version based on research reported while the author was an employee of NASA ARC.
NASA Astrophysics Data System (ADS)
Zhao, Fei; Zhang, Chi; Yang, Guilin; Chen, Chinyin
2016-12-01
This paper presents an online method for estimating cutting error by analyzing internal sensor readings. The internal sensors of a numerical control (NC) machine tool are selected to avoid installation problems. A mathematical model for estimating the cutting error is proposed, which computes the relative position of the cutting point and the tool center point (TCP) from internal sensor readings based on the cutting theory of gears. To verify the effectiveness of the proposed model, it was simulated and tested experimentally in a gear generating grinding process. The cutting error of the gear was estimated, and the factors that induce cutting error were analyzed. The simulations and experiments verify that the proposed approach is an efficient way to estimate the cutting error of a workpiece during the machining process.
Computational Modelling of Patella Femoral Kinematics During Gait Cycle and Experimental Validation
NASA Astrophysics Data System (ADS)
Maiti, Raman
2016-06-01
The effect of loading and boundary conditions on patellar mechanics is significant due to the complications arising in patella femoral joints during total knee replacements. To understand the patellar mechanics with respect to loading and motion, a computational model representing the patella femoral joint was developed and validated against experimental results. The computational model was created in IDEAS NX and simulated in MSC ADAMS/VIEW software. The results obtained in the form of internal external rotations and anterior posterior displacements for a new and experimentally simulated specimen for patella femoral joint under standard gait condition were compared with experimental measurements performed on the Leeds ProSim knee simulator. A good overall agreement between the computational prediction and the experimental data was obtained for patella femoral kinematics. Good agreement between the model and the past studies was observed when the ligament load was removed and the medial lateral displacement was constrained. The model is sensitive to ±5 % change in kinematics, frictional, force and stiffness coefficients and insensitive to time step.
Computational Modelling of Patella Femoral Kinematics During Gait Cycle and Experimental Validation
NASA Astrophysics Data System (ADS)
Maiti, Raman
2018-06-01
The effect of loading and boundary conditions on patellar mechanics is significant due to the complications arising in patella femoral joints during total knee replacements. To understand the patellar mechanics with respect to loading and motion, a computational model representing the patella femoral joint was developed and validated against experimental results. The computational model was created in IDEAS NX and simulated in MSC ADAMS/VIEW software. The results obtained in the form of internal external rotations and anterior posterior displacements for a new and experimentally simulated specimen for patella femoral joint under standard gait condition were compared with experimental measurements performed on the Leeds ProSim knee simulator. A good overall agreement between the computational prediction and the experimental data was obtained for patella femoral kinematics. Good agreement between the model and the past studies was observed when the ligament load was removed and the medial lateral displacement was constrained. The model is sensitive to ±5 % change in kinematics, frictional, force and stiffness coefficients and insensitive to time step.
The development of the Canadian Mobile Servicing System Kinematic Simulation Facility
NASA Technical Reports Server (NTRS)
Beyer, G.; Diebold, B.; Brimley, W.; Kleinberg, H.
1989-01-01
Canada will develop a Mobile Servicing System (MSS) as its contribution to the U.S./International Space Station Freedom. Components of the MSS will include a remote manipulator (SSRMS), a Special Purpose Dexterous Manipulator (SPDM), and a mobile base (MRS). In order to support requirements analysis and the evaluation of operational concepts related to the use of the MSS, a graphics based kinematic simulation/human-computer interface facility has been created. The facility consists of the following elements: (1) A two-dimensional graphics editor allowing the rapid development of virtual control stations; (2) Kinematic simulations of the space station remote manipulators (SSRMS and SPDM), and mobile base; and (3) A three-dimensional graphics model of the space station, MSS, orbiter, and payloads. These software elements combined with state of the art computer graphics hardware provide the capability to prototype MSS workstations, evaluate MSS operational capabilities, and investigate the human-computer interface in an interactive simulation environment. The graphics technology involved in the development and use of this facility is described.
Large-Eddy Simulation of Internal Flow through Human Vocal Folds
NASA Astrophysics Data System (ADS)
Lasota, Martin; Šidlof, Petr
2018-06-01
The phonatory process occurs when air is expelled from the lungs through the glottis and the pressure drop causes flow-induced oscillations of the vocal folds. The flow fields created in phonation are highly unsteady, and coherent vortex structures are generated. For accuracy, it is essential to compute on a humanlike computational domain with an appropriate mathematical model. This work deals with the numerical simulation of air flow within the space between the plicae vocales and plicae vestibulares. In addition to the dynamic width of the rima glottidis, where the sound is generated, the lateral ventriculus laryngis and sacculus laryngis are included in the computational domain. The paper presents results from OpenFOAM obtained with large-eddy simulation using second-order finite volume discretization of the incompressible Navier-Stokes equations. Large-eddy simulations with different subgrid-scale models are executed on a structured mesh. Only subgrid-scale models that represent turbulence through a turbulent viscosity and the Boussinesq approximation are used in the subglottal and supraglottal areas of the larynx.
ERIC Educational Resources Information Center
Sachse, Karoline A.; Roppelt, Alexander; Haag, Nicole
2016-01-01
Trend estimation in international comparative large-scale assessments relies on measurement invariance between countries. However, cross-national differential item functioning (DIF) has been repeatedly documented. We ran a simulation study using national item parameters, which required trends to be computed separately for each country, to compare…
Relationship between Norm-internalization and Cooperation in N-person Prisoners' Dilemma Games
NASA Astrophysics Data System (ADS)
Matsumoto, Mitsutaka
In this paper, I discuss the problem of ``order in social situations'' using a computer simulation of an iterated N-person prisoners' dilemma game. It has been claimed that, in the case of the 2-person prisoners' dilemma, repetition of games and the reciprocal use of the ``tit-for-tat'' strategy promote the possibility of cooperation. However, in N-person prisoners' dilemmas where N is greater than 2, this logic does not work effectively. The most essential problem is the so-called ``sanctioning problem''. I first discuss the sanctioning problems introduced by Axelrod and Keohane in 1986. Based on the model formalized by Axelrod, I propose a new model that adds a mechanism of payoff change for the players to Axelrod's model. I call this mechanism norm-internalization and the resulting model the ``norm-internalization game''. Second, using this model, I investigate the relationship between agents' norm-internalization (payoff alteration) and the possibility of cooperation. The results of computer simulation indicate that an unequal distribution of the cooperating norm and a uniform distribution of the sanctioning norm are more effective in establishing cooperation. I discuss the mathematical features of the results and their implications for social science.
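The payoff-alteration mechanism described above can be illustrated with a toy simulation. Everything below (the public-goods payoff function, the norm-strength distribution, the myopic best-response update) is an illustrative assumption for the sketch, not the paper's actual model:

```python
import random

def npd_material_payoff(action, n_coop, N, b=3.0, c=1.0):
    """Material payoff in an N-person PD: each cooperator pays cost c,
    and the pooled benefit b per cooperator is shared by all N players."""
    share = b * n_coop / N
    return share - (c if action == 1 else 0.0)

def simulate(N=20, rounds=200, norms=None, seed=0):
    """Agents cooperate when internalized norm strength offsets the
    material temptation to defect (a simple myopic best response).
    Returns the final fraction of cooperators."""
    rng = random.Random(seed)
    if norms is None:
        norms = [rng.uniform(0.0, 2.0) for _ in range(N)]  # hypothetical spread
    actions = [rng.randint(0, 1) for _ in range(N)]
    coop_rate = 0.0
    for _ in range(rounds):
        n_coop = sum(actions)
        new_actions = []
        for i, a in enumerate(actions):
            others = n_coop - a
            # Effective (internalized) payoff adds norm utility for cooperating.
            pay_c = npd_material_payoff(1, others + 1, N) + norms[i]
            pay_d = npd_material_payoff(0, others, N)
            new_actions.append(1 if pay_c >= pay_d else 0)
        actions = new_actions
        coop_rate = sum(actions) / N
    return coop_rate
```

With these toy parameters, cooperation survives only among agents whose internalized norm outweighs the marginal material loss from cooperating, which is the qualitative point of the payoff-alteration idea.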
Computed Flow Through An Artificial Heart And Valve
NASA Technical Reports Server (NTRS)
Rogers, Stuart E.; Kwak, Dochan; Kiris, Cetin; Chang, I-Dee
1994-01-01
This NASA technical memorandum discusses computations of the flow of blood through an artificial heart and through a tilting-disk artificial heart valve. It represents further progress in the research described in "Numerical Simulation of Flow Through an Artificial Heart" (ARC-12478). One purpose of the research is to exploit advanced techniques of computational fluid dynamics and the capabilities of supercomputers to gain an understanding of the complicated internal flows of viscous, essentially incompressible fluids like blood. Another is to use that understanding to design better artificial hearts and valves.
NASA Astrophysics Data System (ADS)
Townsend, Molly T.; Sarigul-Klijn, Nesrin
2018-04-01
Living in a reduced gravitational environment for a prolonged duration, such as a flyby mission to Mars or an extended stay at the International Space Station, affects the human body - in particular, the spine. As the spine adapts to spaceflight, morphological and physiological changes compromise the mechanical integrity of the spinal column, potentially endangering internal organs, nervous health, and human body mechanical function. Therefore, a high-fidelity computational model and simulation of the whole human spine was created and validated for the purpose of investigating the mechanical integrity of the spine in crew members during exploratory space missions. A spaceflight-exposed spine was developed by adapting a three-dimensional nonlinear finite element model, with an updated Lagrangian formulation, of a healthy ground-based human spine in vivo. Simulation of the porohyperelastic response of the intervertebral disc to mechanical unloading resulted in a model capable of accurately predicting spinal swelling/lengthening, spinal motion, and internal stress distribution. The curvature of this space-adapted spine model was compared to a control terrestrial-based finite element model, indicating how the shape changed. Finally, potential injury sites for crew members are predicted for a typical 9-day mission.
NASA Technical Reports Server (NTRS)
Mysko, Stephen J.; Chyu, Wei J.; Stortz, Michael W.; Chow, Chuen-Yen
1993-01-01
In this work, the combined external/internal transonic flow over the complex forebody/inlet configuration of the AV-8B Harrier II is computed. The actual aircraft was measured, and its surface and surrounding domain, in which the fuselage and inlet share a common wall, were described using structured grids. The 'thin-layer' Navier-Stokes equations were used to model the flow, along with the Chimera embedded multi-block technique. A fully conservative, alternating direction implicit (ADI), approximately factored, partially flux-split algorithm was employed to perform the computation. Comparisons with experimental wind tunnel data yielded good agreement for flow at zero incidence and at angle of attack. The aim of this paper is to provide a methodology and computational tool for the numerical solution of complex external/internal flows.
Real time simulation of computer-assisted sequencing of terminal area operations
NASA Technical Reports Server (NTRS)
Dear, R. G.
1981-01-01
A simulation was developed to investigate the use of computer-assisted decision making for sequencing and scheduling aircraft in a high-density terminal area. The simulation incorporates a decision methodology termed Constrained Position Shifting, which accounts for aircraft velocity profiles, routes, and weight classes in dynamically sequencing and scheduling arriving aircraft. A sample demonstration is presented in which six aircraft types (including both light and heavy aircraft) are sequenced to land at Denver's Stapleton International Airport. Using a graphical display, Constrained Position Shifting with a maximum shift of four positions (rearward or forward) is compared to first-come, first-served order with respect to arrival at the runway. The implementation of computer-assisted sequencing and scheduling methodologies is investigated; a time-based control concept will be required, and design considerations for such a system are discussed.
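The core of Constrained Position Shifting can be sketched as a brute-force search: among all landing orders in which no aircraft moves more than a fixed number of positions from its first-come, first-served slot, pick the one that lands the last aircraft earliest. The two weight classes and separation times below are hypothetical placeholders, not the paper's data:

```python
from itertools import permutations

# Hypothetical minimum separation times (s) between leader/follower weight
# classes; real wake-vortex separation standards differ.
SEP = {('H', 'H'): 90, ('H', 'L'): 120, ('L', 'H'): 60, ('L', 'L'): 60}

def schedule_makespan(order, classes):
    """Time at which the last aircraft lands, landing each as early as possible."""
    t = 0
    for prev, cur in zip(order, order[1:]):
        t += SEP[(classes[prev], classes[cur])]
    return t

def cps_best_order(classes, max_shift=4):
    """Constrained Position Shifting: search all landing orders in which no
    aircraft shifts more than max_shift positions from its FCFS slot, and
    return (makespan, order) minimizing the last landing time."""
    n = len(classes)
    fcfs = tuple(range(n))
    best = (schedule_makespan(fcfs, classes), fcfs)
    for perm in permutations(range(n)):
        if all(abs(pos - ac) <= max_shift for pos, ac in enumerate(perm)):
            m = schedule_makespan(perm, classes)
            if m < best[0]:
                best = (m, perm)
    return best
```

For an alternating heavy/light stream, the constrained search groups compatible classes together and shortens the schedule relative to first-come, first-served, while guaranteeing no aircraft is displaced unfairly far.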
Park, Sung Hwan; Lee, Ji Min; Kim, Jong Shik
2013-01-01
The irregular performance of a mechanical-type constant power regulator is considered. In order to find the cause of an irregular discharge flow in the cut-off pressure region, modeling and numerical simulations are performed to observe the dynamic behavior of the internal parts of the constant power regulator system for a swashplate-type axial piston pump. The commercial simulation software AMESim is applied to model the mechanical-type regulator with the hydraulic pump and to simulate its performance. The validity of the simulation model is verified by comparing simulation results with experiments. To find the cause of the irregular performance, the behavior of the main components, such as the spool, sleeve, and counterbalance piston, is investigated using computer simulation. A shape modification of the counterbalance piston is proposed to remedy the undesirable performance, and the improvement is verified by computer simulation using AMESim.
Protein Dynamics from NMR and Computer Simulation
NASA Astrophysics Data System (ADS)
Wu, Qiong; Kravchenko, Olga; Kemple, Marvin; Likic, Vladimir; Klimtchuk, Elena; Prendergast, Franklyn
2002-03-01
Proteins exhibit internal motions from the millisecond to sub-nanosecond time scale. The challenge is to relate these internal motions to biological function. A strategy to address this aim is to apply a combination of several techniques including high-resolution NMR, computer simulation of molecular dynamics (MD), molecular graphics, and finally molecular biology, the latter to generate appropriate samples. Two difficulties that arise are: (1) the time scale which is most directly biologically relevant (ms to μs) is not readily accessible by these techniques and (2) the techniques focus on local and not collective motions. We will outline methods using ^13C-NMR to help alleviate the second problem, as applied to intestinal fatty acid binding protein, a relatively small intracellular protein believed to be involved in fatty acid transport and metabolism. This work is supported in part by PHS Grant GM34847 (FGP) and by a fellowship from the American Heart Association (QW).
Effect of varying internal geometry on the static performance of rectangular thrust-reverser ports
NASA Technical Reports Server (NTRS)
Re, Richard J.; Mason, Mary L.
1987-01-01
An investigation has been conducted to evaluate the effects of several geometric parameters on the internal performance of rectangular thrust-reverser ports for nonaxisymmetric nozzles. Internal geometry was varied with a test apparatus which simulated a forward-flight nozzle with a single, fully deployed reverser port. The test apparatus was designed to simulate thrust reversal (conceptually) either in the convergent section of the nozzle or in the constant-area duct just upstream of the nozzle. The main geometric parameters investigated were port angle, port corner radius, port location, and internal flow blocker angle. For all reverser port geometries, the port opening had an aspect ratio (throat width to throat height) of 6.1 and had a constant passage area from the geometric port throat to the exit. Reverser-port internal performance and thrust-vector angles computed from force-balance measurements are presented.
Wu, Jingheng; Shen, Lin; Yang, Weitao
2017-10-28
Ab initio quantum mechanics/molecular mechanics (QM/MM) molecular dynamics simulation is a useful tool for calculating thermodynamic properties, such as the potential of mean force for chemical reactions, but is intensely time-consuming. In this paper, we developed a new method using an internal force correction for low-level semiempirical QM/MM molecular dynamics sampling with a predefined reaction coordinate. As a correction term, the internal force was predicted with a machine learning scheme, which provides a sophisticated force field, and added to the atomic forces on the reaction-coordinate-related atoms at each integration step. We applied this method to two reactions in aqueous solution and reproduced potentials of mean force at the ab initio QM/MM level. The saving in computational cost is about two orders of magnitude. The present work reveals great potential for machine learning in QM/MM simulations of complex chemical processes.
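The delta-learning idea, predicting the force difference between a high- and a low-level method and adding it during dynamics, can be sketched in one dimension. The toy potentials, the polynomial ridge regression standing in for the machine-learned force field, and all parameters are illustrative assumptions, not the authors' scheme:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in forces (illustrative, not real QM): the "low-level" method
# misses an anharmonic part of the force along a 1-D reaction coordinate x.
force_low  = lambda x: -2.0 * x                 # semiempirical-like force
force_high = lambda x: -2.0 * x - 1.5 * x**3    # ab-initio-like reference

# 1) Sample configurations and learn the force *difference* (delta learning),
#    here with polynomial ridge regression as a stand-in ML model.
x_train = rng.uniform(-2, 2, 200)
delta = force_high(x_train) - force_low(x_train)
X = np.vander(x_train, 6)                        # polynomial features x^5..x^0
w = np.linalg.solve(X.T @ X + 1e-6 * np.eye(6), X.T @ delta)
predict_delta = lambda x: np.vander(np.atleast_1d(x), 6) @ w

# 2) Run velocity-Verlet MD on low-level forces plus the learned correction.
def run_md(x0=1.0, v0=0.0, dt=0.01, steps=1000, corrected=True):
    x, v, m = x0, v0, 1.0
    f = lambda q: force_low(q) + (predict_delta(q)[0] if corrected else 0.0)
    a = f(x) / m
    for _ in range(steps):
        x += v * dt + 0.5 * a * dt**2
        a_new = f(x) / m
        v += 0.5 * (a + a_new) * dt
        a = a_new
    return x
```

The cheap dynamics with the learned correction then samples (approximately) the high-level force surface, which is the mechanism behind the reported two-orders-of-magnitude saving.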
Computer-assisted learning and simulation systems in dentistry--a challenge to society.
Welk, A; Splieth, Ch; Wierinck, E; Gilpatrick, R O; Meyer, G
2006-07-01
Computer technology is increasingly used in practical training at universities. However, in spite of their potential, computer-assisted learning (CAL) and computer-assisted simulation (CAS) systems still appear to be underutilized in dental education. Advantages, challenges, problems, and solutions of computer-assisted learning and simulation in dentistry are discussed by means of MEDLINE searches, open Internet platform searches, and key results of a study among German dental schools. The advantages of computer-assisted learning are seen, for example, in self-paced and self-directed learning and increased motivation. It is useful for both objective theoretical and practical tests and for training students to handle complex cases. CAL can lead to more structured learning and can support training in evidence-based decision-making. The reasons for the still relatively rare implementation of CAL/CAS systems in dental education include a lack of funding, a lack of studies of CAL/CAS, and the effort required to integrate CAL/CAS systems into the curriculum. To overcome the reasons for the relatively low degree of computer technology use, we should strive for multicenter research and development projects monitored by the appropriate national and international scientific societies, so that the potential of computer technology can be fully realized in graduate, postgraduate, and continuing dental education.
Adaptive quantum computation in changing environments using projective simulation
NASA Astrophysics Data System (ADS)
Tiersch, M.; Ganahl, E. J.; Briegel, H. J.
2015-08-01
Quantum information processing devices need to be robust and stable against external noise and internal imperfections to ensure correct operation. In a setting of measurement-based quantum computation, we explore how an intelligent agent endowed with a projective simulator can act as controller to adapt measurement directions to an external stray field of unknown magnitude in a fixed direction. We assess the agent’s learning behavior in static and time-varying fields and explore composition strategies in the projective simulator to improve the agent’s performance. We demonstrate the applicability by correcting for stray fields in a measurement-based algorithm for Grover’s search. Thereby, we lay out a path for adaptive controllers based on intelligent agents for quantum information tasks.
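The adaptive-controller idea can be illustrated with a minimal classical sketch of a two-layered projective simulator. The h-value update with reward and forgetting follows the generic projective-simulation scheme, but the percept/action sets, parameter values, and the reduction of the measurement setting to a reward signal are all illustrative assumptions:

```python
import random

class ProjectiveSimulationAgent:
    """Minimal two-layered projective simulator: percept clips connect to
    action clips via hopping weights (h-values) that are reinforced by
    reward and subject to forgetting, so the agent can re-adapt when the
    environment (e.g., a stray field) drifts."""
    def __init__(self, percepts, actions, gamma=0.02):
        self.h = {(p, a): 1.0 for p in percepts for a in actions}
        self.actions = actions
        self.gamma = gamma  # forgetting rate: pulls h-values back toward 1

    def act(self, percept, rng):
        # Sample an action with probability proportional to its h-value.
        weights = [self.h[(percept, a)] for a in self.actions]
        return rng.choices(self.actions, weights=weights)[0]

    def learn(self, percept, action, reward):
        for key in self.h:
            self.h[key] += self.gamma * (1.0 - self.h[key])  # forgetting
        self.h[(percept, action)] += reward                  # reinforcement
```

In a toy use, the actions are candidate corrections to a measurement angle and the reward is 1 when the correction compensates the (hidden) field offset; the agent then concentrates probability on the compensating action, and the forgetting term lets it track a field that later changes.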
NASA Technical Reports Server (NTRS)
Siclari, Michael J.
1988-01-01
A computer code called NCOREL (for Nonconical Relaxation) has been developed to solve for supersonic full-potential flows over complex geometries. The method first solves for the conical flow at the apex and then marches downstream in a spherical coordinate system. Implicit relaxation techniques are used to numerically solve the full-potential equation at each subsequent crossflow plane. Many improvements have been made to the original code, including more reliable numerics for computing wing-body flows with multiple embedded shocks, inlet flow-through simulation, a wake model, and entropy corrections. Line relaxation or approximate factorization schemes are optionally available. Other new features include improved internal grid generation using analytic conformal mappings and an internal geometry package supported by a simple geometric Harris wave-drag input originally developed for panel methods.
NASA Astrophysics Data System (ADS)
Caturla, M. J.; Abril, I.; Denton, C.; Martín-Bragado, I.
2015-06-01
The 12th edition of the International Conference on Computer Simulation of Radiation Effects in Solids (COSIRES2014) was held in Alicante (Alacant), Spain on June 8-13, organized by the University of Alacant. This conference series, which started in 1992 in Berlin, Germany, and that is held every two years, is now a well-established meeting where the latest developments in computer modeling of all forms of irradiation of materials are discussed.
Natural Tasking of Robots Based on Human Interaction Cues
2005-06-01
MIT. • Matthew Marjanovic, researcher, ITA Software. • Brian Scassellati, Assistant Professor of Computer Science, Yale. • Matthew Williamson...2004. [74] Charlie C. Kemp. Shoes as a platform for vision. 7th IEEE International Symposium on Wearable Computers, 2004. [75] Matthew Marjanovic...meso: Simulated muscles for a humanoid robot. Presentation for Humanoid Robotics Group, MIT AI Lab, August 2001. [76] Matthew J. Marjanovic. Teaching
Reusable Rapid Prototyped Blunt Impact Simulator
2016-08-01
for a nonclassical gun experimental application. SUBJECT TERMS: rapid prototype, additive manufacturing, reusable projectile, 3-axis accelerometer... gun-launched applications. SLS technology uses a bed of powdered material that is introduced to a laser. The laser is controlled by a computer to...in creating internal gun-hardened electronics for a variety of high-g applications, GTB developed an internal electronics package containing a COTS
Numerical Modeling of Internal Flow Aerodynamics. Part 2: Unsteady Flows
2004-01-01
fluid-structure coupling, ...). Prediction: in this simulation, we want to assess the effect of a change in SRM geometry, propellant...surface reaches the structure). The third characteristic time describes the slow evolution of the internal geometry. The last characteristic time...incorporates a fluid-structure coupling facility, and is parallel. MOPTI® manages exchanges between two principal computational modules: a varying
Liu, Xin; Zeng, Can-Jun; Lu, Jian-Sen; Lin, Xu-Chen; Huang, Hua-Jun; Tan, Xin-Yu; Cai, Dao-Zhang
2017-03-20
To evaluate the feasibility and effectiveness of using 3D printing and computer-assisted surgical simulation in preoperative planning for acetabular fractures, a retrospective analysis was performed in 53 patients with pelvic fracture who underwent surgical treatment between September 2013 and December 2015 and had complete follow-up data. Among them, 19 patients were treated with CT three-dimensional reconstruction, computer-assisted virtual reduction and internal fixation, 3D model printing, and personalized surgical simulation before surgery (3D group), and 34 patients underwent routine preoperative examination (conventional group). The intraoperative blood loss, transfusion volume, number of intraoperative X-ray exposures, operation time, Matta score, and Merle d'Aubigné and Postel score were recorded, and preoperative planning and postoperative outcomes in the two groups were compared. All the operations were completed successfully. In the 3D group, significantly less intraoperative blood loss, lower transfusion volume, fewer X-ray exposures, and shorter operation time were recorded compared with the conventional group (P<0.05). According to the Matta scores, excellent or good fracture reduction was achieved in 94.7% (18/19) of the patients in the 3D group and in 82.4% (28/34) of the patients in the conventional group; the rates of excellent or good hip function at the final follow-up were 89.5% (17/19) in the 3D group and 85.3% (29/34) in the conventional group (P>0.05). In the 3D group, the actual internal fixation matched the preoperative design well. 3D printing and computer-assisted surgical simulation for preoperative planning is feasible and accurate for the management of acetabular fracture and can effectively improve operative efficiency.
1990-01-01
S. Orszag, Chairman. 1. P. Moin, Some Issues in Computation of Turbulent Flows. 2. M. Lesieur, P. Comte, X. Normand, O. Metais and A. Silveira, Spectral...Richtmyer's computational experience with one-dimensional shock waves (1950) indicated the value of a non-linear artificial viscosity. Charney and...computer architecture and the advantages of semi-Lagrangian advective schemes may lure large-scale atmospheric modelers back to finite-difference
King, Mark A; Glynn, Jonathan A; Mitchell, Sean R
2011-11-01
A subject-specific angle-driven computer model of a tennis player, combined with a forward-dynamics, equipment-specific computer model of tennis ball-racket impacts, was developed to determine the effect of ball-racket impacts on loading at the elbow for one-handed backhand groundstrokes. Matching subject-specific computer simulations of a typical topspin/slice one-handed backhand groundstroke performed by an elite tennis player were carried out, with root mean square differences between performance and matching simulations of less than 0.5 degrees over a 50 ms period starting from ball impact. Simulation results suggest that for similar ball-racket impact conditions, the difference in elbow loading between a topspin and a slice one-handed backhand groundstroke is relatively small. These relatively small differences in elbow loading may be due to comparable angle-time histories at the wrist and elbow joints, with the major kinematic differences occurring at the shoulder. Using a subject-specific angle-driven computer model combined with a forward-dynamics, equipment-specific computer model of ball-racket impacts allows peak internal loading, net impulse, and shock due to ball-racket impact to be calculated, which would not otherwise be possible without impractical invasive techniques. This study provides a basis for further investigation of the factors that may increase elbow loading during tennis strokes.
Interactive visualization of Earth and Space Science computations
NASA Technical Reports Server (NTRS)
Hibbard, William L.; Paul, Brian E.; Santek, David A.; Dyer, Charles R.; Battaiola, Andre L.; Voidrot-Martinez, Marie-Francoise
1994-01-01
Computers have become essential tools for scientists simulating and observing nature. Simulations are formulated as mathematical models but are implemented as computer algorithms to simulate complex events. Observations are also analyzed and understood in terms of mathematical models, but the number of these observations usually dictates that we automate analyses with computer algorithms. In spite of their essential role, computers are also barriers to scientific understanding. Unlike hand calculations, automated computations are invisible and, because of the enormous numbers of individual operations in automated computations, the relation between an algorithm's input and output is often not intuitive. This problem is illustrated by the behavior of meteorologists responsible for forecasting weather. Even in this age of computers, many meteorologists manually plot weather observations on maps, then draw isolines of temperature, pressure, and other fields by hand (special pads of maps are printed for just this purpose). Similarly, radiologists use computers to collect medical data but are notoriously reluctant to apply image-processing algorithms to that data. To these scientists with life-and-death responsibilities, computer algorithms are black boxes that increase rather than reduce risk. The barrier between scientists and their computations can be bridged by techniques that make the internal workings of algorithms visible and that allow scientists to experiment with their computations. Here we describe two interactive systems developed at the University of Wisconsin-Madison Space Science and Engineering Center (SSEC) that provide these capabilities to Earth and space scientists.
The free jet as a simulator of forward velocity effects on jet noise
NASA Technical Reports Server (NTRS)
Ahuja, K. K.; Tester, B. J.; Tanna, H. K.
1978-01-01
A thorough theoretical and experimental study of the effects of the free-jet shear layer on the transmission of sound from a model jet placed within the free jet to the far-field receiver located outside the free-jet flow was conducted. The validity and accuracy of the free-jet flight simulation technique for forward velocity effects on jet noise was evaluated. Transformation charts and a systematic computational procedure for converting measurements from a free-jet simulation to the corresponding results from a wind-tunnel simulation, and, finally, to the flight case were provided. The effects of simulated forward flight on jet mixing noise, internal noise and shock-associated noise from model-scale unheated and heated jets were established experimentally in a free-jet facility. It was illustrated that the existing anomalies between full-scale flight data and model-scale flight simulation data projected to the flight case, could well be due to the contamination of flight data by engine internal noise.
Numerical simulation of hemorrhage in human injury
NASA Astrophysics Data System (ADS)
Chong, Kwitae; Jiang, Chenfanfu; Santhanam, Anand; Benharash, Peyman; Teran, Joseph; Eldredge, Jeff
2015-11-01
Smoothed Particle Hydrodynamics (SPH) is adapted to simulate hemorrhage in the injured human body. As a Lagrangian fluid simulation, SPH uses fluid particles as computational elements, and thus mass conservation is trivially satisfied. In order to ensure anatomical fidelity, a three-dimensional reconstruction of a portion of the human body (here demonstrated on the lower leg) is sampled as skin, bone, and internal tissue particles from the CT scan of an actual patient. The injured geometry is then generated by simulating ballistic projectiles passing through the anatomical model with the Material Point Method (MPM), and injured vessel segments are identified. From each such injured segment, SPH is used to simulate bleeding, with the inflow boundary condition obtained from a coupled 1-d vascular tree model. Blood particles interact with impermeable bone and skin particles through the Navier-Stokes equations and with permeable internal tissue particles through the Brinkman equations. The SPH results are rendered in post-processing for improved visual fidelity. The overall simulation strategy is demonstrated on several injury scenarios in the lower leg.
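The particle-based density evaluation that makes mass conservation trivial in SPH can be sketched as follows. The poly6 kernel (with its 3-D normalization) and the parameter values are common illustrative choices, not necessarily those used in this work:

```python
import numpy as np

def w_poly6(r, h):
    """Poly6 smoothing kernel, a common choice in SPH liquid simulation.
    The prefactor normalizes the kernel in 3-D; values vanish for r >= h."""
    w = np.zeros_like(r)
    mask = r < h
    w[mask] = 315.0 / (64.0 * np.pi * h**9) * (h**2 - r[mask]**2) ** 3
    return w

def sph_density(positions, masses, h=0.1):
    """Density at each particle: rho_i = sum_j m_j * W(|x_i - x_j|, h).
    Because mass rides on the particles themselves, total mass is
    conserved exactly regardless of how the flow deforms."""
    diff = positions[:, None, :] - positions[None, :, :]
    r = np.linalg.norm(diff, axis=-1)
    return (masses[None, :] * w_poly6(r, h)).sum(axis=1)
```

Each particle's density is a kernel-weighted sum over its neighbors, so interior particles (more neighbors within the smoothing radius) register higher density than particles at a free surface or boundary.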
Heat and mass transfer boundary conditions at the surface of a heated sessile droplet
NASA Astrophysics Data System (ADS)
Ljung, Anna-Lena; Lundström, T. Staffan
2017-12-01
This work numerically investigates how the boundary conditions of a heated sessile water droplet should be defined in order to include the effects of both the ambient and the internal flow. The significance of water vapor, Marangoni convection, separate simulation of the external and internal flows, and the influence of contact angle throughout drying are studied. The quasi-steady simulations are carried out with computational fluid dynamics; conduction, natural convection, and Marangoni convection are accounted for inside the droplet. For the studied conditions, a noticeable effect of buoyancy due to evaporation is observed: the inclusion of moisture increases the maximum velocities in the external flow. Marangoni convection, in turn, increases the velocity within the droplet by up to three orders of magnitude. Results furthermore show that the internal and ambient flows can be simulated separately for the conditions studied, and the accuracy improves if the internal temperature gradient is low, e.g., if Marangoni convection is present. Simultaneous simulation of the domains is, however, preferred at high plate temperatures if both internal and external flows are dominated by buoyancy and natural convection. The importance of a spatially resolved heat and mass transfer boundary condition increases if the internal velocity is small or if there is a large variation of the transfer coefficients over the surface. Finally, the results indicate that when the internal convective heat transport is small, a rather constant evaporation rate may be obtained throughout drying at certain conditions.
Trusted computing strengthens cloud authentication.
Ghazizadeh, Eghbal; Zamani, Mazdak; Ab Manan, Jamalul-lail; Alizadeh, Mojtaba
2014-01-01
Cloud computing is a new generation of technology designed to provide for commercial necessities, solve IT management issues, and run the appropriate applications. Another entry on the list of cloud functions that has traditionally been handled internally is Identity and Access Management (IAM). Companies encounter IAM security challenges as they adopt more technologies. Trusted multi-tenancy and trusted computing based on a Trusted Platform Module (TPM) are promising technologies for solving trust and security concerns in the cloud identity environment. Single sign-on (SSO) and OpenID have been released to solve security and privacy problems for cloud identity. This paper proposes the use of trusted computing, Federated Identity Management, and OpenID Web SSO to address identity theft in the cloud. The proposed model has been simulated in a .NET environment. Security analysis, simulation, and the Bell-LaPadula (BLP) confidentiality model are the three ways used to evaluate and analyze the proposed model.
Viscous computations of cold air/air flow around scramjet nozzle afterbody
NASA Technical Reports Server (NTRS)
Baysal, Oktay; Engelund, Walter C.
1991-01-01
The flow field in and around the nozzle afterbody section of a hypersonic vehicle was computationally simulated. The compressible, Reynolds-averaged Navier-Stokes equations were solved by an implicit, finite-volume, characteristic-based method. The computational grids were adapted to the flow as the solutions developed in order to improve accuracy. The exhaust gases were assumed to be cold. Computational results were obtained for the two-dimensional longitudinal plane located at the half span of the internal portion of the nozzle for overexpanded and underexpanded conditions. A further set of results was obtained from three-dimensional simulations of a half-span nozzle. The surface pressures compared well with data obtained from wind tunnel tests. The results help in understanding this complex flow field and, in turn, should aid the design of the nozzle afterbody section.
Structure of overheated metal clusters: MD simulation study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vorontsov, Alexander
2015-08-17
The structure of overheated metal clusters that appear in the condensation process was studied by computer simulation techniques. It was found that clusters larger than several tens of atoms have three layers: a core, an intermediate densely packed layer, and a gas-like shell of low density. The change in the size and structure of these layers with variation of the internal energy and the cluster size is discussed.
Knowledge management: Role of the Radiation Safety Information Computational Center (RSICC)
NASA Astrophysics Data System (ADS)
Valentine, Timothy
2017-09-01
The Radiation Safety Information Computational Center (RSICC) at Oak Ridge National Laboratory (ORNL) is an information analysis center that collects, archives, evaluates, synthesizes and distributes information, data and codes that are used in various nuclear technology applications. RSICC retains more than 2,000 software packages that have been provided by code developers from various federal and international agencies. RSICC's customers (scientists, engineers, and students from around the world) obtain access to such computing codes (source and/or executable versions) and processed nuclear data files to promote on-going research, to ensure nuclear and radiological safety, and to advance nuclear technology. The role of such information analysis centers is critical for supporting and sustaining nuclear education and training programs both domestically and internationally, as the majority of RSICC's customers are students attending U.S. universities. Additionally, RSICC operates a secure CLOUD computing system to provide access to sensitive export-controlled modeling and simulation (M&S) tools that support both domestic and international activities. This presentation will provide a general review of RSICC's activities, services, and systems that support knowledge management and education and training in the nuclear field.
An FPGA computing demo core for space charge simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Jinyuan; Huang, Yifei; /Fermilab
2009-01-01
In accelerator physics, space charge simulation requires a large amount of computing power. In a particle system, each calculation requires time- and resource-consuming operations such as multiplications, divisions, and square roots. Because of the flexibility of field programmable gate arrays (FPGAs), we implemented this task with efficient use of the available computing resources and completely eliminated the non-calculating operations that are indispensable in regular microprocessors (e.g., instruction fetch, instruction decoding, etc.). We designed and tested a 16-bit demo core for computing Coulomb's force in an Altera Cyclone II FPGA device. To save resources, the inverse square-root cube operation in our design is computed using a memory look-up table addressed with the nine to ten most significant non-zero bits. At a 200 MHz internal clock, our demo core reaches a throughput of 200 M pairs/s/core, faster than a typical 2 GHz microprocessor by about a factor of 10. Temperature and power consumption of FPGAs were also lower than those of microprocessors. Fast and convenient, FPGAs can serve as alternatives to time-consuming microprocessors for space charge simulation.
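The look-up-table technique described in this abstract can be sketched in software. The following is an illustrative sketch, not the authors' FPGA implementation: it normalizes r² to a mantissa and exponent, indexes a 9-bit table with the leading mantissa bits, and rescales, approximating (r²)^(-3/2), the factor needed in the Coulomb force kernel. The function names and the error tolerance are assumptions for illustration.

```python
TABLE_BITS = 9
# Pre-computed table: entry i approximates m**-1.5 for mantissa m in
# [1 + i/512, 1 + (i+1)/512), evaluated at the bin midpoint.
LUT = [(1.0 + (i + 0.5) / (1 << TABLE_BITS)) ** -1.5 for i in range(1 << TABLE_BITS)]

def inv_sqrt_cube(r2: int) -> float:
    """Approximate r2**-1.5 using only the leading bits of the integer r2."""
    assert r2 > 0
    e = r2.bit_length() - 1            # r2 = m * 2**e with mantissa m in [1, 2)
    if e >= TABLE_BITS:                # keep the top TABLE_BITS fraction bits
        idx = (r2 >> (e - TABLE_BITS)) & ((1 << TABLE_BITS) - 1)
    else:
        idx = (r2 << (TABLE_BITS - e)) & ((1 << TABLE_BITS) - 1)
    return LUT[idx] * 2.0 ** (-1.5 * e)
```

Given squared distance `r2` and displacement component `dx`, a Coulomb force component would then be `q1 * q2 * dx * inv_sqrt_cube(r2)`; the table replaces the division, square root, and multiplication chain with one memory access and a rescale, mirroring the FPGA design's resource savings.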
NASA Technical Reports Server (NTRS)
Srivastava, Priyaka; Kraus, Jeff; Murawski, Robert; Golden, Bertsel, Jr.
2015-01-01
NASA's Space Communications and Navigation (SCaN) program manages three active networks: the Near Earth Network, the Space Network, and the Deep Space Network. These networks simultaneously support NASA missions and provide communications services to customers worldwide. To efficiently manage these resources and their capabilities, a team of student interns at the NASA Glenn Research Center is developing a distributed system to model the SCaN networks. Once complete, the system will provide a platform that enables users to perform capacity modeling of current and prospective missions, with finer-grained control of information between several simulation and modeling tools. This will enable the SCaN program to access a holistic view of its networks and simulate the effects of modifications in order to provide NASA with decisional information. The development of this capacity modeling system is managed by NASA's Strategic Center for Education, Networking, Integration, and Communication (SCENIC). Three third-party software tools offer their unique abilities at different stages of the simulation process: MagicDraw provides UML/SysML modeling, AGI's Systems Tool Kit simulates the physical transmission parameters and de-conflicts scheduled communication, and Riverbed Modeler (formerly OPNET) simulates communication protocols and packet-based networking. SCENIC developers are building custom software extensions to integrate these components in an end-to-end space communications modeling platform. A central control module acts as the hub for report-based messaging between client wrappers. Backend databases provide information related to mission parameters and ground station configurations, while the end user defines scenario-specific attributes for the model. The eight SCENIC interns are working under the direction of their mentors to complete an initial version of this capacity modeling system during the summer of 2015. The intern team is composed of four students in Computer Science, two in Computer Engineering, one in Electrical Engineering, and one studying Space Systems Engineering.
NASA Technical Reports Server (NTRS)
Cole, Gary L.; Richard, Jacques C.
1991-01-01
An approach to simulating the internal flows of supersonic propulsion systems is presented. The approach is based on a fairly simple modification of the Large Perturbation Inlet (LAPIN) computer code. LAPIN uses a quasi-one-dimensional, inviscid, unsteady formulation of the continuity, momentum, and energy equations. The equations are solved using a shock-capturing, finite difference algorithm. The original code, developed for simulating supersonic inlets, includes engineering models of unstart/restart, bleed, bypass, and variable duct geometry, by means of source terms in the equations. The source terms also provide a mechanism for incorporating, along with the inlet, propulsion system components such as compressor stages, combustors, and turbine stages. This requires each component to be distributed axially over a number of grid points. Because of the distributed nature of such components, this representation should be more accurate than a lumped-parameter model. Components can be modeled by performance maps, which in turn are used to compute the source terms. The general approach is described, and the simulation of a compressor/fan stage is then discussed to show the approach in detail.
Duda, Timothy F; Lin, Ying-Tsong; Reeder, D Benjamin
2011-09-01
A study of 400 Hz sound focusing and ducting effects in a packet of curved nonlinear internal waves in shallow water is presented. Sound propagation roughly along the crests of the waves is simulated with a three-dimensional parabolic equation computational code, and the results are compared to measured propagation along fixed 3 and 6 km source/receiver paths. The measurements were made on the shelf of the South China Sea northeast of Tung-Sha Island. Construction of the time-varying three-dimensional sound-speed fields used in the modeling simulations was guided by environmental data collected concurrently with the acoustic data. Computed three-dimensional propagation results compare well with field observations. The simulations allow identification of time-dependent sound forward scattering and ducting processes within the curved internal gravity waves. Strong acoustic intensity enhancement was observed during passage of high-amplitude nonlinear waves over the source/receiver paths, and is replicated in the model. The waves were typical of the region (35 m vertical displacement). Two types of ducting are found in the model, which occur asynchronously. One type is three-dimensional modal trapping in deep ducts within the wave crests (shallow thermocline zones). The second type is surface ducting within the wave troughs (deep thermocline zones). © 2011 Acoustical Society of America
Unsteady 3D flow simulations in cranial arterial tree
NASA Astrophysics Data System (ADS)
Grinberg, Leopold; Anor, Tomer; Madsen, Joseph; Karniadakis, George
2008-11-01
High resolution unsteady 3D flow simulations in major cranial arteries have been performed. Two cases were considered: 1) a healthy volunteer with a complete Circle of Willis (CoW); and 2) a patient with hydrocephalus and an incomplete CoW. Computation was performed on 3344 processors of the new half-petaflop supercomputer at TACC. Two new numerical approaches were developed and implemented: 1) a new two-level domain decomposition method, which couples continuous and discontinuous Galerkin discretizations of the computational domain; and 2) a new type of outflow boundary condition, which imposes, in an accurate and computationally efficient manner, clinically measured flow rates. In the first simulation, a geometric model of 65 cranial arteries was reconstructed. Our simulation reveals a high degree of asymmetry in the flow at the left and right parts of the CoW and the presence of swirling flow in most of the CoW arteries. In the second simulation, one of the main findings was a high pressure drop at the right posterior communicating artery (PCA). Due to the incompleteness of the CoW and the pressure drop at the PCA, the right internal carotid artery supplies blood to most regions of the brain.
Cardiovascular simulator improvement: pressure versus volume loop assessment.
Fonseca, Jeison; Andrade, Aron; Nicolosi, Denys E C; Biscegli, José F; Leme, Juliana; Legendre, Daniel; Bock, Eduardo; Lucchi, Julio Cesar
2011-05-01
This article presents improvements to a physical cardiovascular simulator (PCS) system. The intraventricular pressure versus intraventricular volume (PxV) loop was obtained to evaluate the performance of a pulsatile chamber mimicking the human left ventricle. The PxV loop shows heart contractility and is normally used to evaluate heart performance. In many heart diseases, the stroke volume decreases because of low heart contractility. This pathological situation must be simulated by the PCS in order to evaluate the assistance provided by a ventricular assist device (VAD). The PCS system is automatically controlled by a computer and is an auxiliary tool for developing VAD control strategies. The PCS system is based on a Windkessel model, in which lumped parameters are used for cardiovascular system analysis: peripheral resistance, arterial compliance, and fluid inertance are simulated. The simulator has an actuator with a roller screw and a brushless direct current motor, and the stroke volume is regulated by the actuator displacement. Internal pressure and volume measurements are monitored to obtain the PxV loop. The left chamber internal pressure is obtained directly by a pressure transducer; the internal volume, however, is obtained indirectly using a linear variable differential transformer, which senses the diaphragm displacement. Correlations between the internal volume and diaphragm position are established. LabVIEW integrates these signals and displays the pressure versus internal volume loop. The results obtained from the PCS system show PxV loops at different ventricular elastances, making possible the simulation of pathological situations. A preliminary test with a pulsatile VAD attached to the PCS system was made. © 2011, Copyright the Authors. Artificial Organs © 2011, International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
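The lumped-parameter (Windkessel) description used by the simulator can be illustrated with a minimal sketch. This is not the authors' system: it is the simplest two-element Windkessel (peripheral resistance R and compliance C only, omitting the inertance the PCS also models), integrated with forward Euler; all parameter values are illustrative assumptions.

```python
def windkessel2(q_in, R=1.0, C=1.5, dt=1e-3, p0=0.0):
    """Two-element Windkessel: C * dP/dt = Q(t) - P/R.

    q_in : sequence of inflow values Q(t), one per time step of size dt
    Returns the arterial pressure trace over the same steps.
    """
    p = p0
    trace = []
    for q in q_in:
        p += dt * (q - p / R) / C   # forward-Euler update of the ODE
        trace.append(p)
    return trace
```

With a constant inflow Q0 the pressure relaxes toward Q0·R with time constant R·C; driving the model with a pulsatile Q(t) instead yields the characteristic systolic rise and diastolic decay against which a VAD's assistance can be assessed.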
Physically-Based Modelling and Real-Time Simulation of Fluids.
NASA Astrophysics Data System (ADS)
Chen, Jim Xiong
1995-01-01
Simulating physically realistic complex fluid behaviors presents an extremely challenging problem for computer graphics researchers. Such behaviors include the effects of driving boats through water, blending differently colored fluids, rain falling and flowing on a terrain, fluids interacting in a Distributed Interactive Simulation (DIS), etc. Such capabilities are useful in computer art, advertising, education, entertainment, and training. We present a new method for physically-based modeling and real-time simulation of fluids in computer graphics and dynamic virtual environments. By solving the 2D Navier-Stokes equations using a CFD method, we map the surface into 3D using the corresponding pressures in the fluid flow field. This achieves realistic real-time fluid surface behaviors by employing the physical governing laws of fluids but avoiding extensive 3D fluid dynamics computations. To complement the surface behaviors, we calculate fluid volume and external boundary changes separately to achieve full 3D general fluid flow. To simulate physical activities in a DIS, we introduce a mechanism which uses a uniform time scale proportional to the clock-time and variable time-slicing to synchronize physical models such as fluids in the networked environment. Our approach can simulate many different fluid behaviors by changing the internal or external boundary conditions. It can model different kinds of fluids by varying the Reynolds number. It can simulate objects moving or floating in fluids. It can also produce synchronized general fluid flows in a DIS. Our model can serve as a testbed to simulate many other fluid phenomena which have never been successfully modeled previously.
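The abstract's central idea, driving a 3D surface from a cheap lower-dimensional field, is commonly realized in graphics with height-field methods. As a loose illustration only (a classic height-field wave update, not the author's CFD/pressure-mapping method), here is a 1D sketch with periodic boundaries; `h` is the surface height and `v` its vertical velocity.

```python
def step_heightfield(h, v, c=0.25):
    """One explicit step of a height-field wave: v += c * laplacian(h); h += v.

    Periodic boundaries are assumed; with no damping the total height
    (the fluid 'volume') is conserved exactly, which keeps the cheap
    surface model physically plausible in real time.
    """
    n = len(h)
    # Discrete Laplacian computed from the current heights before updating.
    lap = [h[(i - 1) % n] + h[(i + 1) % n] - 2.0 * h[i] for i in range(n)]
    for i in range(n):
        v[i] += c * lap[i]
        h[i] += v[i]
    return h, v
```

Disturbing one cell and stepping repeatedly produces outward-propagating ripples at a cost linear in the number of surface cells, which is why such surface-only updates reach real-time rates where full 3D fluid solves cannot.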
Modeling and Simulation Environment for Critical Infrastructure Protection
2006-06-20
Sub-grid drag model for immersed vertical cylinders in fluidized beds
Verma, Vikrant; Li, Tingwen; Dietiker, Jean -Francois; ...
2017-01-03
Immersed vertical cylinders are often used as heat exchangers in gas-solid fluidized beds. Computational Fluid Dynamics (CFD) simulations are computationally expensive for large-scale systems with bundles of cylinders. Therefore, sub-grid models are required to facilitate simulations on a coarse grid, where internal cylinders are treated as a porous medium. The influence of cylinders on the gas-solid flow tends to enhance segregation and affect the gas-solid drag. A correction to the gas-solid drag must be modeled using a suitable sub-grid constitutive relationship. In the past, Sarkar et al. developed a sub-grid drag model for horizontal cylinder arrays based on 2D simulations. However, the effect of a vertical cylinder arrangement was not considered due to computational complexities. In this study, highly resolved 3D simulations with vertical cylinders were performed in small periodic domains. These simulations were filtered to construct a sub-grid drag model that can then be implemented in coarse-grid simulations. The gas-solid drag was filtered for different solids fractions, and a significant reduction in drag was identified when compared with simulations without cylinders and with horizontal cylinders. Slip velocities increase significantly when vertical cylinders are present. Lastly, the vertical suspension drag due to vertical cylinders is insignificant; however, substantial horizontal suspension drag is observed, which is consistent with the finding for horizontal cylinders.
Advances in computational design and analysis of airbreathing propulsion systems
NASA Technical Reports Server (NTRS)
Klineberg, John M.
1989-01-01
The development of commercial and military aircraft depends, to a large extent, on engine manufacturers being able to achieve significant increases in propulsion capability through improved component aerodynamics, materials, and structures. The recent history of propulsion has been marked by efforts to develop computational techniques that can speed up the propulsion design process and produce superior designs. The availability of powerful supercomputers, such as the NASA Numerical Aerodynamic Simulator, and the potential for even higher performance offered by parallel computer architectures, have opened the door to the use of multi-dimensional simulations to study complex physical phenomena in propulsion systems that have previously defied analysis or experimental observation. An overview is provided of several NASA Lewis research efforts that are contributing toward the long-range goal of a numerical test-cell for the integrated, multidisciplinary design, analysis, and optimization of propulsion systems. Specific examples in Internal Computational Fluid Mechanics, Computational Structural Mechanics, Computational Materials Science, and High Performance Computing are cited and described in terms of current capabilities, technical challenges, and future research directions.
NASA Technical Reports Server (NTRS)
Tweedt, Daniel L.
2014-01-01
Computational Aerodynamic simulations of a 1215 ft/sec tip speed transonic fan system were performed at five different operating points on the fan operating line, in order to provide detailed internal flow field information for use with fan acoustic prediction methods presently being developed, assessed and validated. The fan system is a sub-scale, low-noise research fan/nacelle model that has undergone extensive experimental testing in the 9- by 15-foot Low Speed Wind Tunnel at the NASA Glenn Research Center. Details of the fan geometry, the computational fluid dynamics methods, the computational grids, and various computational parameters relevant to the numerical simulations are discussed. Flow field results for three of the five operating points simulated are presented in order to provide a representative look at the computed solutions. Each of the five fan aerodynamic simulations involved the entire fan system, which for this model did not include a split flow path with core and bypass ducts. As a result, it was only necessary to adjust fan rotational speed in order to set the fan operating point, leading to operating points that lie on a fan operating line and making mass flow rate a fully dependent parameter. The resulting mass flow rates are in good agreement with measurement values. Computed blade row flow fields at all fan operating points are, in general, aerodynamically healthy. Rotor blade and fan exit guide vane flow characteristics are good, including incidence and deviation angles, chordwise static pressure distributions, blade surface boundary layers, secondary flow structures, and blade wakes. Examination of the flow fields at all operating conditions reveals no excessive boundary layer separations or related secondary-flow problems.
Computational Electromagnetics (CEM) Laboratory: Simulation Planning Guide
NASA Technical Reports Server (NTRS)
Khayat, Michael A.
2011-01-01
The simulation process, milestones and inputs are unknowns to first-time users of the CEM Laboratory. The Simulation Planning Guide aids in establishing expectations for both NASA and non-NASA facility customers. The potential audience for this guide includes both internal and commercial spaceflight hardware/software developers. It is intended to assist their engineering personnel in simulation planning and execution. Material covered includes a roadmap of the simulation process, roles and responsibilities of facility and user, major milestones, facility capabilities, and inputs required by the facility. Samples of deliverables, facility interfaces, and inputs necessary to define scope, cost, and schedule are included as an appendix to the guide.
Park, Sung Hwan; Lee, Ji Min; Kim, Jong Shik
2013-01-01
An irregular performance of a mechanical-type constant power regulator is considered. In order to find the cause of an irregular discharge flow in the cut-off pressure region, modeling and numerical simulations are performed to observe the dynamic behavior of the internal parts of the constant power regulator system for a swashplate-type axial piston pump. The commercial numerical simulation software AMESim is used to model the mechanical-type regulator with the hydraulic pump and to simulate its performance. The validity of the simulation model of the constant power regulator system is verified by comparing simulation results with experiments. In order to find the cause of the irregular performance of the mechanical-type constant power regulator system, the behavior of the main components, such as the spool, sleeve, and counterbalance piston, is investigated using computer simulation. A shape modification of the counterbalance piston is proposed to improve the undesirable performance of the mechanical-type constant power regulator. The performance improvement is verified by computer simulation using the AMESim software. PMID:24282389
NASA Technical Reports Server (NTRS)
Russell, Louis M.; Thurman, Douglas R.; Simonyi, Patricia S.; Hippensteele, Steven A.; Poinsatte, Philip E.
1993-01-01
Visual and quantitative information was obtained on heat transfer and flow in a branched-duct test section that had several significant features of an internal cooling passage of a turbine blade. The objective of this study was to generate a set of experimental data that could be used to validate computer codes for internal cooling systems. Surface heat transfer coefficients and entrance flow conditions were measured at entrance Reynolds numbers of 45,000, 335,000, and 726,000. The heat transfer data were obtained using an Inconel heater sheet attached to the surface and coated with liquid crystals. Visual and quantitative flow field results using particle image velocimetry were also obtained for a plane at mid channel height for a Reynolds number of 45,000. The flow was seeded with polystyrene particles and illuminated by a laser light sheet. Computational results were determined for the same configurations and at matching Reynolds numbers; these surface heat transfer coefficients and flow velocities were computed with a commercially available code. The experimental and computational results were compared. Although some general trends did agree, there were inconsistencies in the temperature patterns as well as in the numerical results. These inconsistencies strongly suggest the need for further computational studies on complicated geometries such as the one studied.
Angelaki, Dora E
2017-01-01
Brainstem and cerebellar neurons implement an internal model to accurately estimate self-motion during externally generated ('passive') movements. However, these neurons show reduced responses during self-generated ('active') movements, indicating that predicted sensory consequences of motor commands cancel sensory signals. Remarkably, the computational processes underlying sensory prediction during active motion, and their relationship to internal model computations during passive movements, remain unknown. We construct a Kalman filter that incorporates motor commands into a previously established model of optimal passive self-motion estimation. The simulated sensory error and feedback signals match experimentally measured neuronal responses during active and passive head and trunk rotations and translations. We conclude that a single sensory internal model can optimally combine motor commands with vestibular and proprioceptive signals. Thus, although neurons carrying sensory prediction error or feedback signals show attenuated modulation, the sensory cues and internal model are both engaged and critically important for accurate self-motion estimation during active head movements. PMID:29043978
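The cancellation of predicted sensory consequences described in this abstract can be illustrated with a scalar Kalman filter. This is a toy sketch under stated assumptions (a 1D state with the motor command as control input), not the paper's full vestibular/proprioceptive model; the 'sensory prediction error' here is the filter's innovation, which is near zero for active (self-generated) motion and large for the same motion delivered passively.

```python
def kalman_step(x, P, u, z, A=1.0, B=1.0, Q=0.01, H=1.0, R=0.1):
    """One predict/update step of a scalar Kalman filter with control input u.

    Returns the updated estimate, updated variance, and the innovation
    (sensory prediction error) z - H * x_pred.
    """
    x_pred = A * x + B * u                 # prediction using the efference copy
    P_pred = A * P * A + Q
    innov = z - H * x_pred                 # measured minus predicted sensation
    K = P_pred * H / (H * P_pred * H + R)  # Kalman gain
    return x_pred + K * innov, (1.0 - K * H) * P_pred, innov
```

When the true state follows the motor command (x_true += u) and the sensor reads z = x_true, the innovation vanishes, mirroring the attenuated neural responses during active movement; with u = 0 and the same externally imposed motion, the innovation carries the full motion signal, as during passive movement.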
Rey-Martinez, Jorge; McGarvie, Leigh; Pérez-Fernández, Nicolás
2017-03-01
The objective was to develop a computerized model to simulate and predict the internal fluid thermodynamic behavior within both normal and hydropic horizontal semicircular ducts. This study used computational fluid dynamics software to simulate the effects of cooling and warming of two geometrical models, representing a normal and a hydropic duct of the horizontal semicircular canal, over 120 s. Temperature maps, vorticity, and velocity fields were successfully obtained to characterize the endolymphatic flow during the caloric test in the developed models. In the normal semicircular canal, a well-defined linear endolymphatic flow was obtained, whose direction depends only on the cooling or warming condition of the simulation. For the hydropic model a non-effective endolymphatic flow was predicted; in this model the velocity and vorticity fields show a non-linear flow, with vortices forming inside the hydropic duct. The obtained simulations thus support the underlying hypothesis that the hydrostatic caloric drive is dissipated by local convective flow in a hydropic duct.
Flexible Inhibitor Fluid-Structure Interaction Simulation in RSRM.
NASA Astrophysics Data System (ADS)
Wasistho, Bono
2005-11-01
We employ our tightly coupled fluid/structure/combustion simulation code 'Rocstar-3' for solid propellant rocket motors to study 3D flows past rigid and flexible inhibitors in the Reusable Solid Rocket Motor (RSRM). We perform high resolution simulations of a section of the rocket near the center joint slot at 100 seconds after ignition, using inflow conditions based on less detailed 3D simulations of the full RSRM. Our simulations include both inviscid and turbulent flows (using an LES dynamic subgrid-scale model), and explore the interaction between the inhibitor and the resulting fluid flow. The response of the solid components is computed by an implicit finite element solver. The internal mesh motion scheme in our block-structured fluid solver enables our code to handle significant changes in geometry. We compute turbulent statistics and determine the compound instabilities originating from the natural hydrodynamic instabilities and the inhibitor motion. The ultimate goal is to study the effect of inhibitor flexing on the turbulent field.
Development of an Efficient CFD Model for Nuclear Thermal Thrust Chamber Assembly Design
NASA Technical Reports Server (NTRS)
Cheng, Gary; Ito, Yasushi; Ross, Doug; Chen, Yen-Sen; Wang, Ten-See
2007-01-01
The objective of this effort is to develop an efficient and accurate computational methodology to predict both the detailed thermo-fluid environments and the global characteristics of the internal ballistics for a hypothetical solid-core nuclear thermal thrust chamber assembly (NTTCA). Several numerical and multi-physics thermo-fluid models, such as real fluid, chemically reacting, turbulence, conjugate heat transfer, porosity, and power generation, were incorporated into an unstructured-grid, pressure-based computational fluid dynamics solver as the underlying computational methodology. The numerical simulations of the detailed thermo-fluid environment of a single flow element provide a mechanism to estimate the thermal stress and the possible occurrence of mid-section corrosion of the solid core. In addition, the numerical results of the detailed simulation were employed to fine-tune the porosity model to mimic the pressure drop and thermal load of the coolant flow through a single flow element. The tuned porosity model enables an efficient simulation of the entire NTTCA system and an evaluation of its performance during the design cycle.
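Tuning a porosity model to reproduce a resolved pressure drop, as described above, typically amounts to fitting the two coefficients of a Darcy-Forchheimer law, Δp/L = A·v + B·v². The following least-squares fit is an illustrative sketch with synthetic data, not the authors' procedure; the function name and tolerances are assumptions.

```python
def fit_darcy_forchheimer(v, dp):
    """Least-squares fit of dp = A*v + B*v**2 via the 2x2 normal equations.

    v  : list of superficial velocities
    dp : list of corresponding pressure drops (per unit length)
    Returns (A, B): the linear (Darcy) and quadratic (Forchheimer) coefficients.
    """
    s2 = sum(x * x for x in v)
    s3 = sum(x ** 3 for x in v)
    s4 = sum(x ** 4 for x in v)
    t1 = sum(x * y for x, y in zip(v, dp))
    t2 = sum(x * x * y for x, y in zip(v, dp))
    det = s2 * s4 - s3 * s3
    A = (t1 * s4 - t2 * s3) / det
    B = (t2 * s2 - t1 * s3) / det
    return A, B
```

In a coarse-grid CFD run, the fitted A and B would then feed a momentum sink of the form S = -(A·v + B·v²) in the cells occupied by the porous region, so that the coarse model reproduces the pressure drop of the resolved single-element simulation.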
High Fidelity Simulations of Plume Impingement to the International Space Station
NASA Technical Reports Server (NTRS)
Lumpkin, Forrest E., III; Marichalar, Jeremiah; Stewart, Benedicte D.
2012-01-01
With the retirement of the Space Shuttle, the United States now depends on recently developed commercial spacecraft to supply the International Space Station (ISS) with cargo. These new vehicles supplement ones from international partners including the Russian Progress, the European Autonomous Transfer Vehicle (ATV), and the Japanese H-II Transfer Vehicle (HTV). Furthermore, to carry crew to the ISS and supplement the capability currently provided exclusively by the Russian Soyuz, new designs and a refinement to a cargo vehicle design are in work. Many of these designs include features such as nozzle scarfing or simultaneous firing of multiple thrusters resulting in complex plumes. This results in a wide variety of complex plumes impinging upon the ISS. Therefore, to ensure safe "proximity operations" near the ISS, the need for accurate and efficient high fidelity simulation of plume impingement to the ISS is as high as ever. A capability combining computational fluid dynamics (CFD) and the Direct Simulation Monte Carlo (DSMC) techniques has been developed to properly model the large density variations encountered as the plume expands from the high pressure in the combustion chamber to the near vacuum conditions at the orbiting altitude of the ISS. Details of the computational tools employed by this method, including recent software enhancements and the best practices needed to achieve accurate simulations, are discussed. Several recent examples of the application of this high fidelity capability are presented. These examples highlight many of the real world, complex features of plume impingement that occur when "visiting vehicles" operate in the vicinity of the ISS.
Welch, M C; Kwan, P W; Sajeev, A S M
2014-10-01
Agent-based modelling has proven to be a promising approach for developing rich simulations of complex phenomena that provide decision support functions across a broad range of areas, including the biological, social and agricultural sciences. This paper demonstrates how high performance computing technologies, namely General-Purpose Computing on Graphics Processing Units (GPGPU), and commercial Geographic Information Systems (GIS) can be applied to develop a national-scale, agent-based simulation of an incursion of Old World Screwworm fly (OWS fly) into the Australian mainland. The development of this simulation model leverages the massively data-parallel processing capabilities supported by NVidia's Compute Unified Device Architecture (CUDA) and the advanced spatial visualisation capabilities of GIS. These technologies have enabled the implementation of an individual-based, stochastic lifecycle and dispersal algorithm for the OWS fly invasion. The simulation model draws upon a wide range of biological data as input to stochastically determine the reproduction and survival of the OWS fly through the different stages of its lifecycle and the dispersal of gravid females. Through this model, a highly efficient computational platform has been developed for studying the effectiveness of control and mitigation strategies and their associated economic impact on livestock industries. Copyright © 2014 International Atomic Energy Agency 2014. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Huismann, Tyler D.
Due to the rapidly expanding role of electric propulsion (EP) devices, it is important to evaluate their integration with other spacecraft systems. Specifically, EP device plumes can play a major role in spacecraft integration, and as such, accurate characterization of plume structure bears on mission success. This dissertation addresses issues related to accurate prediction of plume structure in a particular type of EP device, a Hall thruster. This is done in two ways: first, by coupling current plume simulation models with current models that simulate a Hall thruster's internal plasma behavior; second, by improving plume simulation models and thereby increasing physical fidelity. These methods are assessed by comparing simulated results to experimental measurements. Assessment indicates the two methods improve plume modeling capabilities significantly: using far-field ion current density as a metric, these approaches used in conjunction improve agreement with measurements by a factor of 2.5, as compared to previous methods. Based on comparison to experimental measurements, recent computational work on discharge chamber modeling has been largely successful in predicting properties of internal thruster plasmas. This model can provide detailed information on plasma properties at a variety of locations. Frequently, experimental data is not available at many locations that are of interest regarding computational models. Excepting the presence of experimental data, there are limited alternatives for scientifically determining plasma properties that are necessary as inputs into plume simulations. Therefore, this dissertation focuses on coupling current models that simulate internal thruster plasma behavior with plume simulation models. Further, recent experimental work on atom-ion interactions has provided a better understanding of particle collisions within plasmas. This experimental work is used to update collision models in a current plume simulation code. 
Previous versions of the code treated the dependence between particles' pre-collision velocities and post-collision scattering angles as unknown. This dissertation updates several of these collision types by assuming a curve fit based on the measurements of atom-ion interactions, so that the previously unknown angular dependences are well characterized.
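As a rough illustration of the collision-model update described above, the sketch below contrasts the old assumption (scattering angle drawn independently of the pre-collision velocity) with a velocity-dependent angular distribution built from a curve fit. The functional form and the parameters `a` and `b` are hypothetical placeholders, not the dissertation's actual fits to atom-ion measurements.

```python
import math
import random

def isotropic_angle():
    """Old model: scattering angle independent of the pre-collision
    velocity (uniform over the unit sphere)."""
    return math.acos(2.0 * random.random() - 1.0)

def sample_scattering_angle(rel_speed, a=0.8, b=0.2):
    """Velocity-dependent model: sample a post-collision scattering
    angle (radians) from p(cos t) ~ exp(k cos t), with the forward-
    peaking strength k given by a curve fit in the pre-collision
    relative speed. Both the form and the parameters a, b are
    hypothetical stand-ins for the measured atom-ion fits."""
    k = a + b * rel_speed
    u = random.random()
    # Inverse-CDF sampling of p(cos t) proportional to exp(k cos t).
    cos_t = math.log(u * math.exp(k) + (1.0 - u) * math.exp(-k)) / k
    return math.acos(max(-1.0, min(1.0, cos_t)))
```

At higher relative speeds the distribution becomes more forward-peaked, which is the qualitative behavior the curve-fit update is meant to capture.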
Filter-fluorescer measurement of low-voltage simulator x-ray energy spectra
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baldwin, G.T.; Craven, R.E.
X-ray energy spectra of the Maxwell Laboratories MBS and the Physics International Pulserad 737 were measured using an eight-channel filter-fluorescer array. The PHOSCAT computer code was used to calculate the channel response functions, and the UFO code to unfold the spectra.
The 3rd International Workshop on Computational Electronics
NASA Astrophysics Data System (ADS)
Goodnick, Stephen M.
1994-09-01
The Third International Workshop on Computational Electronics (IWCE) was held at the Benson Hotel in downtown Portland, Oregon, on May 18, 19, and 20, 1994. The workshop was devoted to a broad range of topics in computational electronics related to the simulation of electronic transport in semiconductors and semiconductor devices, particularly those which use large computational resources. The workshop was supported by the National Science Foundation (NSF), the Office of Naval Research, and the Army Research Office, with local support from the Oregon Joint Graduate Schools of Engineering and the Oregon Center for Advanced Technology Education. There were over 100 participants in the Portland workshop, of whom more than one quarter represented research groups outside the United States, from Austria, Canada, France, Germany, Italy, Japan, Switzerland, and the United Kingdom. A total of 81 papers were presented at the workshop: 9 invited talks, 26 oral presentations, and 46 poster presentations. The emphasis of the contributions reflected the interdisciplinary nature of computational electronics, with researchers from the chemistry, computer science, mathematics, engineering, and physics communities participating in the workshop.
Multidimensional effects in the thermal response of fuel rod simulators. [PWR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dabbs, R.D.; Ott, L.J.
1980-01-01
One of the primary objectives of the Oak Ridge National Laboratory Pressurized-Water Reactor Blowdown Heat Transfer Separate-Effects Program is the determination of the transient surface temperature and surface heat flux of fuel pin simulators (FPSs) from internal thermocouple signals obtained during a loss-of-coolant experiment (LOCE) in the Thermal-Hydraulics Test Facility. This analysis requires the solution of the classical inverse heat conduction problem. The assumptions that allow the governing differential equation to be reduced to one dimension can introduce significant errors in the computed surface heat flux and surface temperature. The degree to which these computed variables are perturbed is addressed and quantified.
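The inverse heat conduction problem mentioned above can be sketched in miniature: march a 1D conduction model forward under a candidate surface heat flux, and pick the flux whose predicted interior thermocouple history best matches the measurement. The material properties, grid values, and brute-force inversion below are illustrative stand-ins, not the FPS analysis itself.

```python
import numpy as np

def forward_temps(q_surf, n=21, dx=1e-3, dt=0.01, steps=200,
                  alpha=1e-5, k=15.0, T0=300.0):
    """Explicit 1D conduction in a slab with an imposed surface heat
    flux q_surf (W/m^2) at x=0 and an insulated back face. Returns the
    temperature history at an interior 'thermocouple' node. Material
    values are illustrative, not FPS properties."""
    T = np.full(n, T0)
    r = alpha * dt / dx**2          # 0.1 here, stable for explicit FD
    tc = n // 2                     # assumed thermocouple location
    hist = []
    for _ in range(steps):
        Tn = T.copy()
        Tn[1:-1] = T[1:-1] + r * (T[2:] - 2.0 * T[1:-1] + T[:-2])
        # Flux boundary at x=0 via a ghost node: -k dT/dx = q_surf
        Tn[0] = T[0] + 2.0 * r * (T[1] - T[0] + q_surf * dx / k)
        Tn[-1] = Tn[-2]             # insulated back face
        T = Tn
        hist.append(T[tc])
    return np.array(hist)

def estimate_surface_flux(tc_history, candidates):
    """Inverse step: choose the constant surface flux whose forward
    prediction best matches the measured interior history."""
    errs = [np.sum((forward_temps(q) - tc_history)**2) for q in candidates]
    return candidates[int(np.argmin(errs))]
```

A real inverse solution recovers a time-varying flux and must regularize against noise; this sketch only shows the forward-model-matching structure of the problem.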
van Kempen, Bob J H; Ferket, Bart S; Hofman, Albert; Steyerberg, Ewout W; Colkesen, Ersen B; Boekholdt, S Matthijs; Wareham, Nicholas J; Khaw, Kay-Tee; Hunink, M G Myriam
2012-12-06
We developed a Monte Carlo Markov model designed to investigate the effects of modifying cardiovascular disease (CVD) risk factors on the burden of CVD. Internal, predictive, and external validity of the model have not yet been established. The Rotterdam Ischemic Heart Disease and Stroke Computer Simulation (RISC) model was developed using data covering 5 years of follow-up from the Rotterdam Study. To establish (1) internal and (2) predictive validity, the incidences of coronary heart disease (CHD), stroke, CVD death, and non-CVD death simulated by the model over a 13-year period were compared with those recorded for 3,478 participants in the Rotterdam Study with at least 13 years of follow-up. (3) External validity was verified using 10 years of follow-up data from the European Prospective Investigation of Cancer (EPIC)-Norfolk study of 25,492 participants, for whom CVD and non-CVD mortality were compared. At year 5, the observed incidences (with simulated incidences in parentheses) of CHD, stroke, and CVD and non-CVD mortality for the 3,478 Rotterdam Study participants were 5.30% (4.68%), 3.60% (3.23%), 4.70% (4.80%), and 7.50% (7.96%), respectively. At year 13, these percentages were 10.60% (10.91%), 9.90% (9.13%), 14.20% (15.12%), and 24.30% (23.42%). After recalibrating the model for the EPIC-Norfolk population, the 10-year observed (simulated) incidences of CVD and non-CVD mortality were 3.70% (4.95%) and 6.50% (6.29%). All observed incidences fell well within the 95% credibility intervals of the simulated incidences. We have confirmed the internal, predictive, and external validity of the RISC model. These findings provide a basis for analyzing the effects of modifying cardiovascular disease risk factors on the burden of CVD with the RISC model.
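A Monte Carlo Markov model of the kind validated above can be sketched as a per-individual state machine stepped year by year under annual transition probabilities, with cumulative incidences read off at the end. The states and probabilities below are invented for illustration; they are not the RISC model's fitted values.

```python
import random

# Illustrative annual transition probabilities (NOT the RISC model's
# fitted values): from 'well', a person may have a CHD event, a
# stroke, die of CVD, or die of another cause.
P = {
    "well":   [("chd", 0.010), ("stroke", 0.007),
               ("cvd_death", 0.008), ("other_death", 0.015)],
    "chd":    [("cvd_death", 0.040), ("other_death", 0.020)],
    "stroke": [("cvd_death", 0.050), ("other_death", 0.020)],
}

def simulate_cohort(n, years, seed=1):
    """Monte Carlo Markov simulation: track each individual's state
    year by year and return end-of-horizon state fractions, analogous
    to the simulated incidences compared against observations above."""
    rng = random.Random(seed)
    states = ["well"] * n
    for _ in range(years):
        for i, s in enumerate(states):
            if s.endswith("death"):
                continue                      # death states are absorbing
            u, acc = rng.random(), 0.0
            for nxt, p in P.get(s, []):
                acc += p
                if u < acc:
                    states[i] = nxt
                    break
    return {k: sum(s == k for s in states) / n
            for k in ("chd", "stroke", "cvd_death", "other_death")}
```

Validation then amounts to running many such cohorts and checking that observed incidences fall within the credibility intervals of the simulated ones.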
NASA Technical Reports Server (NTRS)
Bedrossian, Nazareth; Jang, Jiann-Woei; McCants, Edward; Omohundro, Zachary; Ring, Tom; Templeton, Jeremy; Zoss, Jeremy; Wallace, Jonathan; Ziegler, Philip
2011-01-01
Draper Station Analysis Tool (DSAT) is a computer program, built on commercially available software, for simulating and analyzing complex dynamic systems. Heretofore used in designing and verifying the guidance, navigation, and control systems of the International Space Station, DSAT has a modular architecture that lends itself to modification for application to other spacecraft or terrestrial systems. DSAT consists of user-interface, data-structures, simulation-generation, analysis, plotting, documentation, and help components. DSAT automates the construction of simulations and the process of analysis. DSAT provides a graphical user interface (GUI), plus a Web-enabled interface, similar to the GUI, that enables a remotely located user to gain access to the full capabilities of DSAT via the Internet and Web-browser software. Data structures are used to define the GUI, the Web-enabled interface, simulations, and analyses. Three data structures define the type of analysis to be performed: closed-loop simulation, frequency response, and/or stability margins. DSAT can be executed on almost any workstation, desktop, or laptop computer. DSAT provides better than an order of magnitude improvement in cost, schedule, and risk assessment for simulation-based design and verification of complex dynamic systems.
A One Dimensional, Time Dependent Inlet/Engine Numerical Simulation for Aircraft Propulsion Systems
NASA Technical Reports Server (NTRS)
Garrard, Doug; Davis, Milt, Jr.; Cole, Gary
1999-01-01
The NASA Lewis Research Center (LeRC) and the Arnold Engineering Development Center (AEDC) have developed a closely coupled computer simulation system that provides a one dimensional, high frequency inlet/engine numerical simulation for aircraft propulsion systems. The simulation system, operating under the LeRC-developed Application Portable Parallel Library (APPL), closely coupled a supersonic inlet with a gas turbine engine. The supersonic inlet was modeled using the Large Perturbation Inlet (LAPIN) computer code, and the gas turbine engine was modeled using the Aerodynamic Turbine Engine Code (ATEC). Both LAPIN and ATEC provide a one dimensional, compressible, time dependent flow solution by solving the one dimensional Euler equations for the conservation of mass, momentum, and energy. Source terms are used to model features such as bleed flows, turbomachinery component characteristics, and inlet subsonic spillage while unstarted. High frequency events, such as compressor surge and inlet unstart, can be simulated with a high degree of fidelity. The simulation system was exercised using a supersonic inlet with sixty percent of the supersonic area contraction occurring internally, and a GE J85-13 turbojet engine.
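The solver structure described above (1D Euler conservation laws plus source terms) can be illustrated with a minimal conservative update. The Lax-Friedrichs scheme and crude transmissive boundaries below are a generic sketch of that structure; the actual LAPIN and ATEC numerics are not reproduced here.

```python
import numpy as np

GAMMA = 1.4

def flux(U):
    """Physical flux for the 1D Euler equations; rows of U are
    [rho, rho*u, E] (mass, momentum, total energy)."""
    rho, mom, E = U
    u = mom / rho
    p = (GAMMA - 1.0) * (E - 0.5 * rho * u**2)
    return np.array([mom, mom * u + p, (E + p) * u])

def step_lax_friedrichs(U, dx, dt, source=None):
    """One conservative time step (Lax-Friedrichs) with an optional
    explicit source term. The source hook mirrors how effects such as
    bleed flows or turbomachinery characteristics enter as source
    terms in the text, without reproducing those models."""
    F = flux(U)
    Un = U.copy()
    Un[:, 1:-1] = (0.5 * (U[:, 2:] + U[:, :-2])
                   - 0.5 * dt / dx * (F[:, 2:] - F[:, :-2]))
    if source is not None:
        Un[:, 1:-1] += dt * source(U)[:, 1:-1]
    Un[:, 0], Un[:, -1] = Un[:, 1], Un[:, -2]  # crude transmissive BCs
    return Un
```

On a Sod-type shock tube this update stays conservative and positive, which is the property a coupled inlet/engine solver relies on when exchanging fluxes between components.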
Computer Simulation of Developmental Processes and ...
See the attached presentation slides. Dr. Knudsen has been invited to give a lecture at the XIV International Congress of Toxicology (IUTOX) in Merida, Mexico, October 2-6, 2016. He was invited to speak in a workshop on “Developmental Toxicology, Different Models, Different Endpoints” and will give a lecture entitled
Modeling flow and solute transport in irrigation furrows
USDA-ARS?s Scientific Manuscript database
This paper presents an internally coupled flow and solute transport model for free-draining irrigation furrows. Furrow hydraulics is simulated with a numerical zero-inertia model and solute transport is computed with a model based on a numerical solution of the cross-section averaged advection-dispe...
The Australian Computational Earth Systems Simulator
NASA Astrophysics Data System (ADS)
Mora, P.; Muhlhaus, H.; Lister, G.; Dyskin, A.; Place, D.; Appelbe, B.; Nimmervoll, N.; Abramson, D.
2001-12-01
Numerical simulation of the physics and dynamics of the entire earth system offers an outstanding opportunity for advancing earth system science and technology, but represents a major challenge due to the range of scales and physical processes involved, as well as the magnitude of the software engineering effort required. However, new simulation and computer technologies are bringing this objective within reach. Under a special competitive national funding scheme to establish new Major National Research Facilities (MNRF), the Australian government, together with a consortium of universities and research institutions, has funded construction of the Australian Computational Earth Systems Simulator (ACcESS). The Simulator, or computational virtual earth, will provide the Australian earth systems science community with the research infrastructure required for simulations of dynamical earth processes at scales ranging from microscopic to global. It will consist of thematic supercomputer infrastructure and an earth systems simulation software system. The Simulator models and software will be constructed over a five-year period by a multi-disciplinary team of computational scientists, mathematicians, earth scientists, civil engineers, and software engineers. The construction team will integrate numerical simulation models (3D discrete element/lattice solid models, particle-in-cell large-deformation finite-element methods, stress reconstruction models, multi-scale continuum models, etc.) with geophysical, geological, and tectonic models, through advanced software engineering and visualization technologies.
When fully constructed, the Simulator aims to provide the software and hardware infrastructure needed to model solid earth phenomena including global scale dynamics and mineralisation processes, crustal scale processes including plate tectonics, mountain building, interacting fault system dynamics, and micro-scale processes that control the geological, physical and dynamic behaviour of earth systems. ACcESS represents a part of Australia's contribution to the APEC Cooperation for Earthquake Simulation (ACES) international initiative. Together with other national earth systems science initiatives including the Japanese Earth Simulator and US General Earthquake Model projects, ACcESS aims to provide a driver for scientific advancement and technological breakthroughs including: quantum leaps in understanding of earth evolution at global, crustal, regional and microscopic scales; new knowledge of the physics of crustal fault systems required to underpin the grand challenge of earthquake prediction; new understanding and predictive capabilities of geological processes such as tectonics and mineralisation.
Simulation of Combustion Systems with Realistic g-jitter
NASA Technical Reports Server (NTRS)
Mell, William E.; McGrattan, Kevin B.; Baum, Howard R.
2003-01-01
In this project a transient, fully three-dimensional computer simulation code was developed to simulate the effects of realistic g-jitter on a number of combustion systems. The simulation code is capable of simulating flame spread on a solid, and nonpremixed or premixed gaseous combustion, in nonturbulent flow with simple combustion models. Simple combustion models were used to preserve computational efficiency, since this is meant to be an engineering code. The use of sophisticated turbulence models was also not pursued (a simple Smagorinsky-type model can be implemented if deemed appropriate), because if flow velocities are large enough for turbulence to develop in a reduced-gravity combustion scenario, it is unlikely that g-jitter disturbances (in NASA's reduced-gravity facilities) will play an important role in the flame dynamics. Acceleration disturbances of realistic orientation, magnitude, and time dependence can be easily included in the simulation. The simulation algorithm was based on techniques used in an existing large eddy simulation code which has successfully simulated fire dynamics in complex domains. A series of simulations with measured and predicted acceleration disturbances on the International Space Station (ISS) are presented. The results of this series of simulations suggested that a passive isolation system and appropriate scheduling of crew activity would provide a sufficiently "quiet" acceleration environment for spherical diffusion flames.
NASA Astrophysics Data System (ADS)
Harvey, Natalie J.; Huntley, Nathan; Dacre, Helen F.; Goldstein, Michael; Thomson, David; Webster, Helen
2018-01-01
Following the disruption to European airspace caused by the eruption of Eyjafjallajökull in 2010, there has been a move towards producing quantitative predictions of volcanic ash concentration using volcanic ash transport and dispersion simulators. However, there is no formal framework for determining the uncertainties of these predictions, and performing many simulations using these complex models is computationally expensive. In this paper a Bayesian linear emulation approach is applied to the Numerical Atmospheric-dispersion Modelling Environment (NAME) to better understand the influence of source and internal model parameters on the simulator output. Emulation is a statistical method for predicting the output of a computer simulator at new parameter choices without actually running the simulator. A multi-level emulation approach is applied using two configurations of NAME with different numbers of model particles. Information from many evaluations of the computationally faster configuration is combined with results from relatively few evaluations of the slower, more accurate, configuration. This approach is effective when it is not possible to run the accurate simulator many times and when there is little prior knowledge about the influence of parameters. The approach is applied to the mean ash column loading in 75 geographical regions on 14 May 2010. Through this analysis it has been found that the parameters that contribute most to the output uncertainty are initial plume rise height, mass eruption rate, free-tropospheric turbulence levels, and the precipitation threshold for wet deposition. This information can be used to inform future model development, observational campaigns, and routine monitoring. The analysis presented here suggests the need for further observational and theoretical research into the parameterisation of atmospheric turbulence. Furthermore, it can be used to identify the most important parameter perturbations for a small operational ensemble of simulations. The use of an emulator also identifies the input and internal parameters that do not contribute significantly to simulator uncertainty. Finally, the analysis highlights that the faster, less accurate configuration of NAME can, on its own, provide useful information for the problem of predicting average column load over large areas.
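The multi-level emulation idea, combining many cheap runs with a few expensive ones, can be sketched with simple least-squares emulators: emulate the fast simulator from its many evaluations, then emulate the slow-minus-fast discrepancy from the few slow evaluations. The toy "simulators" below are hypothetical functions standing in for the two NAME configurations, and plain polynomial regression stands in for the Bayes linear machinery.

```python
import numpy as np

# Hypothetical stand-ins for the two NAME configurations: a cheap,
# biased simulator and an accurate, expensive one.
def fast_sim(x):
    return np.sin(3.0 * x) + 0.3 * x

def slow_sim(x):
    return 1.1 * fast_sim(x) + 0.2 * x**2

# Many cheap evaluations, few expensive ones.
x_fast = np.linspace(0.0, 2.0, 200)
x_slow = np.linspace(0.05, 1.95, 15)

def fit_emulator(x, y, deg):
    """Least-squares polynomial emulator: predicts simulator output at
    new parameter choices without running the simulator."""
    return np.polynomial.Polynomial.fit(x, y, deg)

# Level 1: emulate the fast configuration from its many runs.
em_fast = fit_emulator(x_fast, fast_sim(x_fast), deg=6)
# Level 2: emulate the slow-minus-fast discrepancy from the few slow runs.
em_delta = fit_emulator(x_slow, slow_sim(x_slow) - em_fast(x_slow), deg=3)

def em_slow(x):
    """Multi-level emulator of the accurate configuration."""
    return em_fast(x) + em_delta(x)
```

Because the discrepancy is smoother than the simulator itself, a handful of expensive runs suffices for the correction term, which is the economy the multi-level approach exploits.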
SciDAC GSEP: Gyrokinetic Simulation of Energetic Particle Turbulence and Transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Zhihong
Energetic particle (EP) confinement is a key physics issue for the burning plasma experiment ITER, the crucial next step in the quest for clean and abundant energy, since ignition relies on self-heating by energetic fusion products (α-particles). Due to the strong coupling of EP with burning thermal plasmas, plasma confinement in the ignition regime is one of the most uncertain factors when extrapolating from existing fusion devices to the ITER tokamak. EP populations in current tokamaks are mostly produced by auxiliary heating such as neutral beam injection (NBI) and radio frequency (RF) heating. Remarkable progress in developing comprehensive EP simulation codes and understanding basic EP physics has been made by two concurrent SciDAC EP projects (GSEP) funded by the Department of Energy (DOE) Office of Fusion Energy Sciences (OFES), which have successfully established gyrokinetic turbulence simulation as a necessary paradigm shift for studying EP confinement in burning plasmas. Verification and validation have rapidly advanced through close collaborations between simulation, theory, and experiment. Furthermore, productive collaborations with computational scientists have enabled EP simulation codes to effectively utilize current petascale computers and emerging exascale computers. We review here key physics progress in the GSEP projects regarding verification and validation of gyrokinetic simulations, nonlinear EP physics, EP coupling with thermal plasmas, and reduced EP transport models. Advances in high performance computing through collaborations with computational scientists that enable these large-scale electromagnetic simulations are also highlighted. These results have been widely disseminated in numerous peer-reviewed publications, including many Phys. Rev. Lett. papers, and in many invited presentations at prominent fusion conferences such as the biennial International Atomic Energy Agency (IAEA) Fusion Energy Conference and the annual meeting of the American Physical Society, Division of Plasma Physics (APS-DPP).
DOE Office of Scientific and Technical Information (OSTI.GOV)
FINNEY, Charles E A; Edwards, Kevin Dean; Stoyanov, Miroslav K
2015-01-01
Combustion instabilities in dilute internal combustion engines are manifest as cyclic variability (CV) in engine performance measures such as integrated heat release or shaft work. Understanding the factors leading to CV is important in model-based control, especially with high dilution, where experimental studies have demonstrated that deterministic effects can become more prominent. Observation of enough consecutive engine cycles for significant statistical analysis is standard in experimental studies but is largely wanting in numerical simulations because of the computational time required to compute hundreds or thousands of consecutive cycles. We have proposed and begun implementation of an alternative approach to allow rapid simulation of long series of engine dynamics, based on a low-dimensional mapping of ensembles of single-cycle simulations which maps input parameters to output engine performance. This paper details the use of Titan at the Oak Ridge Leadership Computing Facility to investigate CV in a gasoline direct-injected spark-ignited engine with a moderately high rate of dilution achieved through external exhaust gas recirculation. The CONVERGE CFD software was used to perform single-cycle simulations with imposed variations of operating parameters and boundary conditions selected according to a sparse-grid sampling of the parameter space. Using an uncertainty quantification technique, the sampling scheme is chosen similarly to a design-of-experiments grid but uses functions designed to minimize the number of samples required to achieve a desired degree of accuracy. The simulations map input parameters to output metrics of engine performance for a single cycle, and by mapping over a large parameter space, results can be interpolated from within that space. This interpolation scheme forms the basis for a low-dimensional metamodel which can be used to mimic the dynamical behavior of corresponding high-dimensional simulations. Simulations of high-EGR spark-ignition combustion cycles within a parametric sampling grid were performed and analyzed statistically, and sensitivities of the physical factors leading to high CV are presented. With these results, the prospect of producing low-dimensional metamodels to describe engine dynamics at any point in the parameter space is discussed. Additionally, modifications to the methodology to account for nondeterministic effects in the numerical solution environment are proposed.
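The proposed metamodel approach can be caricatured in a few lines: sample single-cycle outputs over the input space once, build a cheap interpolating surrogate, and iterate it with a cycle-to-cycle feedback law to generate a long series of cycles without running consecutive CFD simulations. The cycle map and residual-gas feedback below are invented stand-ins, not CONVERGE results.

```python
import numpy as np

def single_cycle(residual_fraction):
    """Stand-in for one expensive CFD cycle (a hypothetical map, not a
    CONVERGE result): normalized heat release this cycle as a function
    of the residual-gas fraction carried over from the previous cycle."""
    return 1.0 - 0.9 * residual_fraction**0.5

# Offline 'design of experiments': sample the input space once.
x_train = np.linspace(0.0, 0.6, 13)
y_train = single_cycle(x_train)

def metamodel(x):
    """Cheap interpolating surrogate over the sampled parameter space."""
    return np.interp(x, x_train, y_train)

def iterate_cycles(n, x0=0.3, feedback=0.55):
    """Generate a long series of cycles by feeding each cycle's output
    back into the next cycle's input, instead of computing n
    consecutive CFD cycles. The feedback law is an assumed toy model."""
    outs, x = [], x0
    for _ in range(n):
        q = float(metamodel(x))
        outs.append(q)
        x = feedback * (1.0 - q)       # assumed residual-gas coupling
    return np.array(outs)
```

With a nonlinear cycle map and added stochastic perturbations, iterating such a surrogate is how long synthetic CV records could be produced for statistical analysis.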
NASA Technical Reports Server (NTRS)
Park, Brian Vandellyn
1993-01-01
The Neutral Body Posture experienced in microgravity creates a biomechanical equilibrium by enabling the internal forces within the body to find their own balance. A patented reclining chair based on this posture provides a minimal-stress environment for interfacing with computer systems for extended periods. When the chair is mounted on a 3- or 6-axis motion platform, a generic motion simulator for simulated digital environments is created. The Personal Motion Platform provides motion feedback to the occupant in synchronization with their movements inside the digital world, which enhances the simulation experience. Existing HMD-based simulation systems can be integrated into the turnkey system. Future developments are discussed.
Proceedings of the 2007 Antenna Applications Symposium, Volume 1
2007-12-01
[Garbled extraction from the proceedings; recoverable fragments reference: "Gyrator-Based Biquad Filters and Negative Impedance Converters for Microwaves," International Journal of RF and Microwave Computer-Aided Engineering; bandwidth curves based on half-power gain versus an "ideal" case; and measured and simulated active input impedance of a TEM horn using a two-port HP power network analyzer (PNA).]
Gate simulation of Compton Ar-Xe gamma-camera for radionuclide imaging in nuclear medicine
NASA Astrophysics Data System (ADS)
Dubov, L. Yu; Belyaev, V. N.; Berdnikova, A. K.; Bolozdynia, A. I.; Akmalova, Yu A.; Shtotsky, Yu V.
2017-01-01
Computer simulations of a cylindrical Compton Ar-Xe gamma camera are described in the current report. The detection efficiency of a cylindrical Ar-Xe Compton camera with an internal diameter of 40 cm is estimated as 1-3%, which is 10-100 times higher than that of a collimated Anger camera. It is shown that the cylindrical Compton camera can image a Tc-99m radiotracer distribution with a uniform spatial resolution of 20 mm through the whole field of view.
Simulation and mitigation of higher-order ionospheric errors in PPP
NASA Astrophysics Data System (ADS)
Zus, Florian; Deng, Zhiguo; Wickert, Jens
2017-04-01
We developed a rapid and precise algorithm to compute ionospheric phase advances in a realistic electron density field. The electron density field is derived from a plasmaspheric extension of the International Reference Ionosphere (Gulyaeva and Bilitza, 2012), and the magnetic field stems from the International Geomagnetic Reference Field. For specific station locations, elevation angles, and azimuth angles, the ionospheric phase advances are stored in a look-up table. The higher-order ionospheric residuals are computed by forming the standard linear combination of the ionospheric phase advances. In a simulation study we examine how the higher-order ionospheric residuals leak into estimated station coordinates, clocks, zenith delays, and tropospheric gradients in precise point positioning. The simulation study includes a few hundred globally distributed stations and covers the time period 1990-2015. We take a close look at the estimated zenith delays and tropospheric gradients, as they are considered a data source for meteorological and climate-related research. We also show how the by-product of this simulation study, the look-up tables, can be used to mitigate higher-order ionospheric errors in practice. Gulyaeva, T.L., and Bilitza, D. Towards ISO Standard Earth Ionosphere and Plasmasphere Model. In: New Developments in the Standard Model, edited by R.J. Larsen, pp. 1-39, NOVA, Hauppauge, New York, 2012, available at https://www.novapublishers.com/catalog/product_info.php?products_id=35812
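The linear-combination step mentioned above can be sketched directly: model the phase advance with a first-order 1/f² term plus a small higher-order 1/f³ term, then form the standard ionosphere-free combination, which eliminates the first-order term and leaves the higher-order residual. The 1/f³ scale `s3` below is purely illustrative, not a value from the IRI/IGRF computation.

```python
F1, F2 = 1575.42e6, 1227.60e6   # GPS L1/L2 carrier frequencies (Hz)
K = 40.3                        # first-order ionospheric constant

def phase_advance(stec, f, s3=1.0e9):
    """Ionospheric phase advance (m) for slant TEC `stec`
    (electrons/m^2) at carrier frequency f. The 1/f^3 term stands in
    for the magnetic-field-dependent higher-order contribution derived
    from IRI/IGRF in the text; its scale s3 is purely illustrative."""
    return K * stec / f**2 + s3 * stec / f**3

def ionosphere_free(L1, L2):
    """Standard first-order ionosphere-free linear combination of two
    phase observables."""
    return (F1**2 * L1 - F2**2 * L2) / (F1**2 - F2**2)
```

Applying the combination to the two modeled phase advances yields only the higher-order residual, which is the quantity the look-up tables tabulate and the PPP study propagates into station coordinates, clocks, and zenith delays.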
Simulation and Collaborative Learning in Political Science and Sociology Classrooms.
ERIC Educational Resources Information Center
Peters, Sandra; Saxon, Deborah
The program described here used cooperative, content-based computer writing projects to teach Japanese students at an intermediate level of English proficiency enrolled in first-year, English-language courses in political science/environmental issues and sociology/environmental issues in an international college program. The approach was taken to…
Research-Based Design of Pedagogical Agent Roles: A Review, Progress, and Recommendations
ERIC Educational Resources Information Center
Kim, Yanghee; Baylor, Amy L.
2016-01-01
In this paper we review the contribution of our original work titled "Simulating Instructional Roles Through Pedagogical Agents" published in the "International Journal of Artificial Intelligence and Education" (Baylor and Kim in "Computers and Human Behavior," 25(2), 450-457, 2005). Our original work operationalized…
Simulation of minimally invasive vascular interventions for training purposes.
Alderliesten, Tanja; Konings, Maurits K; Niessen, Wiro J
2004-01-01
To master the skills required to perform minimally invasive vascular interventions, proper training is essential. A computer simulation environment has been developed to provide such training. The simulation is based on an algorithm specifically developed to simulate the motion of a guide wire, the main instrument used during these interventions, in the human vasculature. In this paper, the design and model of the computer simulation environment are described, and first results obtained with phantom and patient data are presented. To simulate minimally invasive vascular interventions, a discrete representation of the guide wire is used, which allows modeling of guide wires with different physical properties. An algorithm for simulating the propagation of a guide wire within a vascular system, on the basis of the principle of minimization of energy, has been developed. Both longitudinal translation and rotation are incorporated as possibilities for manipulating the guide wire. The simulation is based on quasi-static mechanics. Two types of energy are introduced: internal energy related to the bending of the guide wire, and external energy resulting from the elastic deformation of the vessel wall. A series of experiments were performed on phantom and patient data. Simulation results are qualitatively compared with 3D rotational angiography data. The results indicate plausible behavior of the simulation.
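The energy-minimization propagation described above can be sketched with a discrete polyline wire: internal energy from squared turning angles at the joints, external energy as a penalty for pressing into the wall of a (here straight, rigid) tube, and a greedy search over candidate tip directions at each advance. All stiffness and penalty constants are illustrative, not the paper's model parameters.

```python
import math

def bending_energy(points, stiffness=1.0):
    """Internal energy of a discrete guide wire: sum of squared turning
    angles at interior joints, scaled by an assumed stiffness."""
    E = 0.0
    for i in range(1, len(points) - 1):
        ax, ay = (points[i][0] - points[i-1][0], points[i][1] - points[i-1][1])
        bx, by = (points[i+1][0] - points[i][0], points[i+1][1] - points[i][1])
        na, nb = math.hypot(ax, ay), math.hypot(bx, by)
        cos_a = max(-1.0, min(1.0, (ax * bx + ay * by) / (na * nb)))
        E += stiffness * math.acos(cos_a)**2
    return E

def wall_energy(points, radius=1.0, k_wall=50.0):
    """External energy: quadratic penalty when a joint leaves a straight
    tube of the given radius (a crude stand-in for elastic vessel
    deformation)."""
    return sum(k_wall * max(0.0, abs(y) - radius)**2 for _, y in points)

def advance_tip(points, step=1.0, n_cand=41):
    """Propagate the wire one step: try candidate tip directions and
    keep the minimum-energy configuration (greedy minimization)."""
    best, best_E = None, float("inf")
    for j in range(n_cand):
        theta = -math.pi / 2 + j * math.pi / (n_cand - 1)
        tip = (points[-1][0] + step * math.cos(theta),
               points[-1][1] + step * math.sin(theta))
        cand = points + [tip]
        E = bending_energy(cand) + wall_energy(cand)
        if E < best_E:
            best, best_E = cand, E
    return best
```

In a straight tube the minimum-energy advance is straight ahead; in a curved vessel the wall penalty forces the wire to trade bending energy against wall deformation, which is the balance the paper's quasi-static model resolves.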
Tetrahedral and polyhedral mesh evaluation for cerebral hemodynamic simulation: a comparison.
Spiegel, Martin; Redel, Thomas; Zhang, Y; Struffert, Tobias; Hornegger, Joachim; Grossman, Robert G; Doerfler, Arnd; Karmonik, Christof
2009-01-01
Computational fluid dynamics (CFD) based on patient-specific medical imaging data has found widespread use for visualizing and quantifying hemodynamics in cerebrovascular disease such as cerebral aneurysms or stenotic vessels. This paper focuses on optimizing mesh parameters for CFD simulation of cerebral aneurysms. Valid blood flow simulations strongly depend on mesh quality. Meshes with a coarse spatial resolution may lead to an inaccurate flow pattern. Meshes with a large number of elements will result in unnecessarily long computation times, which is undesirable should CFD be used for planning in the interventional setting. Most CFD simulations reported for these vascular pathologies have used tetrahedral meshes. We illustrate the use of polyhedral volume elements in comparison to tetrahedral meshing on two different geometries, a sidewall aneurysm of the internal carotid artery and a basilar bifurcation aneurysm. The spatial mesh resolution ranges between 5,119 and 228,118 volume elements. The evaluation of the different meshes was based on the wall shear stress, previously identified as one possible parameter for assessing aneurysm growth. Polyhedral meshes showed better accuracy, lower memory demand, shorter computation times, and faster convergence behavior (on average 369 fewer iterations).
Streaming parallel GPU acceleration of large-scale filter-based spiking neural networks.
Slażyński, Leszek; Bohte, Sander
2012-01-01
The arrival of graphics processing unit (GPU) cards suitable for massively parallel computing promises affordable large-scale neural network simulation previously only available at supercomputing facilities. While the raw numbers suggest that GPUs may outperform CPUs by at least an order of magnitude, the challenge is to develop fine-grained parallel algorithms to fully exploit the particulars of GPUs. Computation in a neural network is inherently parallel and thus a natural match for GPU architectures: given inputs, the internal state for each neuron can be updated in parallel. We show that for filter-based spiking neurons, like the Spike Response Model, the additive nature of membrane potential dynamics enables additional update parallelism. This also reduces the accumulation of numerical errors when using single-precision computation, the native precision of GPUs. We further show that optimizing simulation algorithms and data structures to the GPU's architecture has a large pay-off: for example, matching iterative neural updating to the memory architecture of the GPU speeds up this simulation step by a factor of three to five. With such optimizations, we can simulate in better-than-real-time plausible spiking neural networks of up to 50,000 neurons, processing over 35 million spiking events per second.
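The additive, filter-based update the authors exploit can be sketched in NumPy (with array vectorization standing in for the GPU's fine-grained parallelism): each step decays all membrane potentials in parallel, resets the neurons that fired, then adds all synaptic contributions in a single matrix product. The network size, weights, drive, and time constant below are arbitrary illustrative values, not the paper's benchmark configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000                          # neurons (illustrative network size)
THRESH = 1.0                      # firing threshold
DECAY = np.exp(-1.0 / 20.0)       # exponential response kernel, tau = 20 steps

W = rng.normal(0.0, 0.05, (N, N))  # random synaptic weights (illustrative)

def run(steps, ext=0.08):
    """Filter-based spiking update in the additive style the paper
    exploits: because each spike's exponentially decaying contribution
    adds linearly to the membrane potential, the whole state advances
    with one decay, one reset, and one matrix product per step."""
    v = np.zeros(N)
    total_spikes = 0
    for _ in range(steps):
        v = DECAY * v + ext           # parallel decay + external drive
        spikes = v >= THRESH
        v[spikes] = 0.0               # reset fired neurons
        v = v + W @ spikes            # additive synaptic update
        total_spikes += int(spikes.sum())
    return total_spikes
```

On a GPU, the decay and the matrix product map onto exactly the per-neuron and per-synapse parallelism described in the abstract; the additive form is what makes the single matrix product sufficient.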
NASA Technical Reports Server (NTRS)
Smith, Jeffrey D.; Twombly, I. Alexander; Maese, A. Christopher; Cagle, Yvonne; Boyle, Richard
2003-01-01
The International Space Station demonstrates the greatest capabilities of human ingenuity, international cooperation, and technology development. The complexity of this space structure is unprecedented, and training astronaut crews to maintain all its systems, as well as perform a multitude of research experiments, requires the most advanced training tools and techniques. Computer simulation and virtual environments are currently used by astronauts to train for robotic arm manipulations and extravehicular activities; but now, with the latest computer technologies and recent successes in areas of medical simulation, the capability exists to train astronauts for more hands-on research tasks using immersive virtual environments. We have developed a new technology, the Virtual Glovebox (VGX), for simulation of experimental tasks that astronauts will perform aboard the Space Station. The VGX may also be used by crew support teams for design of experiments, testing equipment integration capability, and optimizing the procedures astronauts will use. This is done through a 3D, desktop-sized, reach-in virtual environment that can simulate the microgravity environment in space. Additional features of the VGX allow for networking multiple users over the Internet and operation of tele-robotic devices through an intuitive user interface. Although the system was developed for astronaut training and assisting support crews, Earth-bound applications, many emphasizing homeland security, have also been identified. Examples include training experts to handle hazardous biological and/or chemical agents in a safe simulation, operation of tele-robotic systems for assessing and defusing threats such as bombs, and providing remote medical assistance to field personnel through a collaborative virtual environment. Thus, the emerging VGX simulation technology, while developed for space-based applications, can serve a dual use facilitating homeland security here on Earth.
Qinghua, Zhao; Jipeng, Li; Yongxing, Zhang; He, Liang; Xuepeng, Wang; Peng, Yan; Xiaofeng, Wu
2015-04-07
This study employed three-dimensional finite element modeling and biomechanical simulation to evaluate the stability and stress conduction of two postoperative internal fixation models, multilevel posterior instrumentation (MPI) and MPI with anterior instrumentation (MPAI), after en bloc resection of a cervicothoracic vertebral tumor. Mimics software and computed tomography (CT) images were used to build a three-dimensional (3D) model of vertebrae C5-T2 and to simulate en bloc resection of C7 for the MPI and MPAI models. The data and images were then imported into the ANSYS finite element system, where a 20 N distributed load (simulating body weight) and a 1 N·m torque about the neutral point were applied to simulate vertebral displacement and stress conduction and distribution under the motion modes of flexion, extension, lateral bending, and rotation. Both the MPI and MPAI models showed better stability: the displacement of the two adjacent vertebral bodies was less than that of the intact vertebral model, with no significant difference between the two constructs. In reducing the stress shielding effect, however, MPI was slightly better than MPAI. From a biomechanical point of view, either internal instrumentation after cervicothoracic tumor en bloc resection can achieve excellent stability, with no significant difference between them; but with its better stress conduction, MPI is more advantageous for postoperative reconstruction.
Data management and analysis for the Earth System Grid
NASA Astrophysics Data System (ADS)
Williams, D. N.; Ananthakrishnan, R.; Bernholdt, D. E.; Bharathi, S.; Brown, D.; Chen, M.; Chervenak, A. L.; Cinquini, L.; Drach, R.; Foster, I. T.; Fox, P.; Hankin, S.; Henson, V. E.; Jones, P.; Middleton, D. E.; Schwidder, J.; Schweitzer, R.; Schuler, R.; Shoshani, A.; Siebenlist, F.; Sim, A.; Strand, W. G.; Wilhelmi, N.; Su, M.
2008-07-01
The international climate community is expected to generate hundreds of petabytes of simulation data within the next five to seven years. This data must be accessed and analyzed by thousands of analysts worldwide in order to provide accurate and timely estimates of the likely impact of climate change on physical, biological, and human systems. Climate change is thus not only a scientific challenge of the first order but also a major technological challenge. In order to address this technological challenge, the Earth System Grid Center for Enabling Technologies (ESG-CET) has been established within the U.S. Department of Energy's Scientific Discovery through Advanced Computing (SciDAC)-2 program, with support from the offices of Advanced Scientific Computing Research and Biological and Environmental Research. ESG-CET's mission is to provide climate researchers worldwide with access to the data, information, models, analysis tools, and computational capabilities required to make sense of enormous climate simulation datasets. Its specific goals are to (1) make data more useful to climate researchers by developing Grid technology that enhances data usability; (2) meet specific distributed database, data access, and data movement needs of national and international climate projects; (3) provide a universal and secure web-based data access portal for broad multi-model data collections; and (4) provide a wide range of Grid-enabled climate data analysis tools and diagnostic methods to international climate centers and U.S. government agencies. Building on the successes of the previous Earth System Grid (ESG) project, which has enabled thousands of researchers to access tens of terabytes of data from a small number of ESG sites, ESG-CET is working to integrate a far larger number of distributed data providers, high-bandwidth wide-area networks, and remote computers in a highly collaborative problem-solving environment.
NASA Astrophysics Data System (ADS)
Bora, Ram Prasad; Prabhakar, Rajeev
2009-10-01
In this study, diffusion constants [translational (DT) and rotational (DR)], correlation times [rotational (τrot) and internal (τint)], and the intramolecular order parameters (S²) of the Alzheimer amyloid-β peptides Aβ40 and Aβ42 have been calculated from 150 ns molecular dynamics simulations in aqueous solution. The computed parameters have been compared with the experimentally measured values. The calculated DT of 1.61×10⁻⁶ cm²/s and 1.43×10⁻⁶ cm²/s for Aβ40 and Aβ42, respectively, at 300 K follows the correct trend defined by the Debye-Stokes-Einstein relation, namely that its value decreases as molecular weight increases. The estimated DR for Aβ40 and Aβ42 at 300 K are 0.085 and 0.071 ns⁻¹, respectively. The rotational (Crot(t)) and internal (Cint(t)) correlation functions of Aβ40 and Aβ42 were observed to decay on nano- and picosecond time scales, respectively. The significantly different decay times of these functions validate the factorization of the total correlation function (Ctot(t)) of Aβ peptides into Crot(t) and Cint(t). At both short and long time scales, the Clore-Szabo model used as Cint(t) provided the best behavior of Ctot(t) for both Aβ40 and Aβ42. In addition, an effective rotational correlation time of Aβ40 was computed at 18 °C, and the computed value (2.30 ns) is in close agreement with the experimental value of 2.45 ns. The computed S² parameters for the central hydrophobic core, the loop region, and the C-terminal domains of Aβ40 and Aβ42 are in accord with previous studies.
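The translational diffusion constants quoted above follow the 3D Einstein relation, MSD(t) = 6·DT·t. A minimal sketch of extracting DT from a mean-square-displacement curve (the MSD data here are synthetic, constructed to match the reported Aβ40 value; this is not the authors' analysis code):

```python
def translational_diffusion(msd_nm2, t_ns):
    """Estimate D_T (cm^2/s) from the long-time slope of the 3D
    mean-square displacement, MSD(t) = 6 * D_T * t (Einstein relation).
    Inputs are lists of MSD values (nm^2) and times (ns)."""
    # Least-squares slope through the origin: sum(t*msd) / sum(t*t)
    slope = sum(t * m for t, m in zip(t_ns, msd_nm2)) / sum(t * t for t in t_ns)
    d_nm2_per_ns = slope / 6.0
    return d_nm2_per_ns * 1e-5   # 1 nm^2/ns = 1e-5 cm^2/s

# Synthetic MSD data whose slope matches the reported Abeta40 value:
times = [float(t) for t in range(1, 101)]    # ns
msd = [6 * 0.161 * t for t in times]         # nm^2 (D_T = 0.161 nm^2/ns)
d_t = translational_diffusion(msd, times)    # ~1.61e-6 cm^2/s
```

In practice the fit would be restricted to the linear long-time regime of the MSD, which a real trajectory only approaches asymptotically.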
SOMAR-LES: A framework for multi-scale modeling of turbulent stratified oceanic flows
NASA Astrophysics Data System (ADS)
Chalamalla, Vamsi K.; Santilli, Edward; Scotti, Alberto; Jalali, Masoud; Sarkar, Sutanu
2017-12-01
A new multi-scale modeling technique, SOMAR-LES, is presented in this paper. Localized grid refinement gives SOMAR (the Stratified Ocean Model with Adaptive Resolution) access to small scales of the flow which are normally inaccessible to general circulation models (GCMs). SOMAR-LES drives a LES (Large Eddy Simulation) on SOMAR's finest grids, forced with large-scale forcing from the coarser grids. Three-dimensional simulations of internal tide generation, propagation and scattering are performed to demonstrate this multi-scale modeling technique. In the case of internal tide generation at a two-dimensional bathymetry, SOMAR-LES is able to balance the baroclinic energy budget and accurately model turbulence losses at only 10% of the computational cost required by a non-adaptive solver running at SOMAR-LES's fine grid resolution. This relative cost is significantly reduced in situations with intermittent turbulence or where the location of the turbulence is not known a priori, because SOMAR-LES does not require persistent, global, high resolution. To illustrate this point, we consider a three-dimensional bathymetry with grids adaptively refined along the tidally generated internal waves to capture remote mixing in regions of wave focusing. The computational cost in this case is found to be nearly 25 times smaller than that of a non-adaptive solver at comparable resolution. In the final test case, we consider the scattering of a mode-1 internal wave at isolated two-dimensional and three-dimensional topographies, and we compare the results with the numerical experiments of Legg (2014). We find good agreement with theoretical estimates. SOMAR-LES is less dissipative than the closure scheme employed by Legg (2014) near the bathymetry. Depending on the flow configuration and resolution employed, a reduction of more than an order of magnitude in computational cost is expected relative to traditional non-adaptive solvers.
Model implementation for dynamic computation of system cost
NASA Astrophysics Data System (ADS)
Levri, J.; Vaccari, D.
The Advanced Life Support (ALS) Program metric is the ratio of the equivalent system mass (ESM) of a mission based on International Space Station (ISS) technology to the ESM of that same mission based on ALS technology. ESM is a mission cost analog that converts the volume, power, cooling and crewtime requirements of a mission into mass units to compute an estimate of the life support system emplacement cost. Traditionally, ESM has been computed statically, using nominal values for system sizing. However, computation of ESM with static, nominal sizing estimates cannot capture the peak sizing requirements driven by system dynamics. In this paper, a dynamic model for a near-term Mars mission is described. The model is implemented in Matlab/Simulink for the purpose of dynamically computing ESM. This paper provides a general overview of the crew, food, biomass, waste, water and air blocks in the Simulink model. Dynamic simulations of the life support system track mass flow, volume and crewtime needs, as well as power and cooling requirement profiles. The mission's ESM is computed based upon the simulation responses. Ultimately, computed ESM values for various system architectures will feed into a non-derivative optimization search algorithm to predict parameter combinations that result in reduced objective function values.
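The static ESM computation described above can be sketched as a simple weighted sum; the equivalency factors and sizing numbers below are illustrative placeholders, not ALS Program values:

```python
def equivalent_system_mass(mass_kg, volume_m3, power_kw, cooling_kw,
                           crewtime_hr_per_day, duration_days,
                           v_eq=66.7, p_eq=237.0, c_eq=60.0, ct_eq=1.0):
    """Convert volume, power, cooling and crewtime needs into mass units.
    The equivalency factors (kg/m^3, kg/kW, kg per crew-hour) are
    illustrative placeholders; mission-specific factors differ."""
    return (mass_kg
            + volume_m3 * v_eq
            + power_kw * p_eq
            + cooling_kw * c_eq
            + crewtime_hr_per_day * duration_days * ct_eq)

# Static (nominal) sizing vs. sizing at the dynamic peak of the same subsystem:
nominal = equivalent_system_mass(500, 10, 2.0, 2.0, 0.5, 600)
peak = equivalent_system_mass(500, 10, 3.5, 3.5, 0.5, 600)   # peak power/cooling
```

The gap between `nominal` and `peak` is exactly what the paper's dynamic simulation is designed to capture: a subsystem must be emplaced for its peak requirement profile, not its nominal average.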
NASA Technical Reports Server (NTRS)
Smith, Jeffrey
2003-01-01
The Bio-Visualization, Imaging and Simulation (BioVIS) Technology Center at NASA's Ames Research Center is dedicated to developing and applying advanced visualization, computation and simulation technologies to support NASA Space Life Sciences research and the objectives of the Fundamental Biology Program. Research ranges from high resolution 3D cell imaging and structure analysis, virtual environment simulation of fine sensory-motor tasks, computational neuroscience and biophysics to biomedical/clinical applications. Computer simulation research focuses on the development of advanced computational tools for astronaut training and education. Virtual Reality (VR) and Virtual Environment (VE) simulation systems have become important training tools in many fields, from flight simulation to, more recently, surgical simulation. The type and quality of training provided by these computer-based tools ranges widely, but the value of real-time VE computer simulation as a method of preparing individuals for real-world tasks is well established. Astronauts routinely use VE systems for various training tasks, including Space Shuttle landings, robot arm manipulations and extravehicular activities (space walks). Currently, there are no VE systems to train astronauts for basic and applied research experiments, which are an important part of many missions. The Virtual Glovebox (VGX) is a prototype VE system for real-time physically-based simulation of the Life Sciences Glovebox, where astronauts will perform many complex tasks supporting research experiments aboard the International Space Station. The VGX consists of a physical display system utilizing dual LCD projectors and circular polarization to produce a desktop-sized 3D virtual workspace. Physically-based modeling tools (Arachi Inc.) provide real-time collision detection, rigid body dynamics, physical properties and force-based controls for objects. The human-computer interface consists of two magnetic tracking devices (Ascension Inc.) attached to instrumented gloves (Immersion Inc.) which co-locate the user's hands with hand/forearm representations in the virtual workspace. Force feedback is possible in a work volume defined by a Phantom Desktop device (SensAble Inc.). Graphics are written in OpenGL. The system runs on a 2.2 GHz Pentium 4 PC. The prototype VGX provides astronauts and support personnel with a real-time physically-based VE system to simulate basic research tasks both on Earth and in the microgravity of space. The immersive virtual environment of the VGX also makes it a useful tool for virtual engineering applications, including CAD development, procedure design and simulation of human-machine systems in a desktop-sized work volume.
Liu, Peter X.; Lai, Pinhua; Xu, Shaoping; Zou, Yanni
2018-01-01
To date, most virtual surgery simulation systems have been based on either a mesh or a meshless strategy for soft tissue modelling. To take full advantage of both mesh and meshless models, a novel coupled soft tissue cutting model is proposed. Specifically, the reconstructed virtual soft tissue consists of two essential components: a surface mesh, which is convenient for surface rendering, and internal meshless point elements, which are used to calculate the force feedback during cutting. To combine the two components seamlessly, virtual points are introduced. During the simulation of cutting, a Bezier curve is used to characterize a smooth and vivid incision on the surface mesh. At the same time, the deformation of internal soft tissue caused by the cutting operation is treated as displacements of the internal point elements. Furthermore, we discuss and prove the stability and convergence of the proposed approach theoretically. Real biomechanical tests verified the validity of the introduced model, and simulation experiments show that the proposed approach offers high computational efficiency and good visual effects, enabling cutting of soft tissue with high stability. PMID:29850006
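The smooth incision described above relies on Bezier curves. A self-contained sketch of evaluating such a curve by de Casteljau's algorithm (the control points are hypothetical surface-mesh coordinates for illustration, not data from the paper):

```python
def bezier(control_points, t):
    """Evaluate a Bezier curve at parameter t in [0, 1] using de Casteljau's
    algorithm: repeatedly interpolate between adjacent control points until
    a single point remains. Used here to trace a smooth incision path."""
    pts = [p[:] for p in control_points]
    while len(pts) > 1:
        pts = [[(1 - t) * a + t * b for a, b in zip(p, q)]
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# A cubic incision curve across a patch of the surface mesh (hypothetical
# 2D parametric coordinates of mesh vertices):
ctrl = [[0.0, 0.0], [0.3, 0.8], [0.7, -0.2], [1.0, 0.4]]
samples = [bezier(ctrl, i / 10) for i in range(11)]
```

Sampling the curve at small parameter increments yields the sequence of surface points along which mesh faces are split to form the incision.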
NCC Simulation Model: Simulating the operations of the network control center, phase 2
NASA Technical Reports Server (NTRS)
Benjamin, Norman M.; Paul, Arthur S.; Gill, Tepper L.
1992-01-01
The simulation of the network control center (NCC) is in the second phase of development. This phase seeks to further develop the work performed in phase one, which concentrated on the computer systems and interconnecting network. The focus of phase two is the implementation of the network message dialogues and the resources controlled by the NCC. These resources are requested, initiated, monitored and analyzed via network messages. In the NCC, network messages take the form of packets that are routed across the network. These packets are generated, encoded, decoded and processed by the network host processors that generate and service the message traffic on the network that connects these hosts. As a result, the message traffic is used to characterize the work done by the NCC and the connected network. Phase one of the model development represented the NCC as a network of bi-directional single-server queues and message-generating sources. The generators represented the external segment processors; the server-based queues represented the host processors. The NCC model consists of the internal and external processors which generate message traffic on the network that links these hosts. To fully realize the objective of phase two it is necessary to identify and model the processes in each internal processor. These processes live in the operating system of the internal host computers and handle tasks such as high-speed message exchange, the ISN and NFE interfaces, event monitoring, network monitoring, and message logging. Interprocess communication is achieved through operating system facilities. The overall performance of a host is determined by its ability to service messages generated by both internal and external processors.
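Phase one's representation of host processors as single-server queues can be illustrated with a toy discrete-event sketch. The exponential inter-arrival and service times are an assumption for illustration only, not the NCC model's actual traffic statistics:

```python
import random

def simulate_queue(arrival_rate, service_rate, n_messages, seed=1):
    """Toy single-server FIFO message queue: packets arrive with exponential
    inter-arrival times and receive exponential service times; returns the
    mean time a message spends in the system (waiting + service)."""
    random.seed(seed)
    t = free_at = total = 0.0
    for _ in range(n_messages):
        t += random.expovariate(arrival_rate)   # next packet arrives
        start = max(t, free_at)                 # waits if the server is busy
        free_at = start + random.expovariate(service_rate)
        total += free_at - t
    return total / n_messages

# For an M/M/1 queue, theory gives mean time-in-system 1/(mu - lambda) = 5.0:
mean_delay = simulate_queue(arrival_rate=0.8, service_rate=1.0, n_messages=50000)
```

Chaining several such queues, with the departures of one feeding the arrivals of the next, gives the kind of host-to-host message-traffic model the phase-one NCC simulation describes.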
NASA Astrophysics Data System (ADS)
Tang, Z. B.; Deng, Y. D.; Su, C. Q.; Yuan, X. H.
2015-06-01
In this study, a numerical model has been employed to analyze the internal flow field distribution in a heat exchanger applied for an automotive thermoelectric generator based on computational fluid dynamics. The model simulates the influence of factors relevant to the heat exchanger, including the automotive waste heat mass flow velocity, temperature, internal fins, and back pressure. The result is in good agreement with experimental test data. Sensitivity analysis of the inlet parameters shows that increase of the exhaust velocity, compared with the inlet temperature, makes little contribution (0.1 versus 0.19) to the heat transfer but results in a detrimental back pressure increase (0.69 versus 0.21). A configuration equipped with internal fins is proved to offer better thermal performance compared with that without fins. Finally, based on an attempt to improve the internal flow field, a more rational structure is obtained, offering a more homogeneous temperature distribution, higher average heat transfer coefficient, and lower back pressure.
Ardalan, Ali; Balikuddembe, Joseph Kimuli; Ingrassia, Pier Luigi; Carenzo, Luca; Della Corte, Francesco; Akbarisari, Ali; Djalali, Ahmadreza
2015-07-13
Disaster education needs innovative educational methods to be more effective than traditional approaches. One such method is virtual simulation. This article presents an experience of using virtual simulation methods to teach health professionals about disaster medicine in Iran. The workshop on the "Application of New Technologies in Disaster Management Simulation" was held in Tehran in January 2015. It was co-organized by the Disaster and Emergency Health Academy of Tehran University of Medical Sciences and the Research Center in Emergency and Disaster Medicine and Computer Science Applied to Medical Practice (CRIMEDIM) of the Università del Piemonte Orientale. Different simulators were used by the participants, who came from the health system and other relevant fields, both inside and outside Iran. As a result of the workshop, all concerned stakeholders are called on to support this new initiative of incorporating virtual training and exercise simulation into the field of disaster medicine, so that its professionals are endowed with field-based and practical skills in Iran and elsewhere. Virtual simulation technology is recommended for use in disaster management education. This requires capacity building of instructors and provision of technologies. International collaboration can facilitate this process.
Enabling Grid Computing resources within the KM3NeT computing model
NASA Astrophysics Data System (ADS)
Filippidis, Christos
2016-04-01
KM3NeT is a future European deep-sea research infrastructure hosting a new generation of neutrino detectors that - located at the bottom of the Mediterranean Sea - will open a new window on the universe and answer fundamental questions in both particle physics and astrophysics. International collaborative scientific experiments, like KM3NeT, are generating datasets which are increasing exponentially in both complexity and volume, making their analysis, archival, and sharing one of the grand challenges of the 21st century. These experiments, in their majority, adopt computing models consisting of different Tiers with several computing centres, providing a specific set of services for the different steps of data processing such as detector calibration, simulation and data filtering, reconstruction and analysis. The computing requirements are extremely demanding and usually span from serial to multi-parallel or GPU-optimized jobs. The collaborative nature of these experiments demands very frequent WAN data transfers and data sharing among individuals and groups. In order to support these demanding computing requirements we enabled Grid Computing resources, operated by EGI, within the KM3NeT computing model. In this study we describe our first advances in this field and the method by which KM3NeT users can utilize the EGI computing resources in a simulation-driven use-case.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duke, Daniel J.; Finney, Charles E. A.; Kastengren, Alan
Given the importance of the fuel-injection process on the combustion and emissions performance of gasoline direct injected engines, there has been significant recent interest in understanding the fluid dynamics within the injector, particularly around the needle and through the nozzles. Furthermore, the pressure losses and transients that occur in the flow passages above the needle are also of interest. Simulations of these injectors typically use the nominal design geometry, which does not always match the production geometry. Computed tomography (CT) using x-ray and neutron sources can be used to obtain the real geometry from production injectors, but there are trade-offs in using these techniques. X-ray CT provides high resolution, but cannot penetrate through the thicker parts of the injector. Neutron CT has excellent penetrating power but lower resolution. Here, we present results from a joint effort to characterize a gasoline direct injector representative of the Spray G injector as defined by the Engine Combustion Network. High-resolution (1.2 to 3 µm) x-ray CT measurements from the Advanced Photon Source at Argonne National Laboratory were combined with moderate-resolution (40 µm) neutron CT measurements from the High Flux Isotope Reactor at Oak Ridge National Laboratory to generate a complete internal geometry for the injector. This effort combined the strengths of both facilities' capabilities, with extremely fine spatially resolved features in the nozzles and injector tips and fine resolution of internal features of the needle along the length of the injector. Analysis of the resulting surface model of the internal fluid flow volumes of the injector reveals how the internal cross-sectional area and nozzle hole geometry differ slightly from the design dimensions. A simplified numerical simulation of the internal flow shows how deviations from the design geometry can alter the flow inside the sac and holes. The results of this study will provide computational modelers with very accurate solid and surface models for use in computational fluid dynamics studies, and give experimentalists increased insight into the operating characteristics of their injectors.
Current structure of strongly nonlinear interfacial solitary waves
NASA Astrophysics Data System (ADS)
Semin, Sergey; Kurkina, Oxana; Kurkin, Andrey; Talipova, Tatiana; Pelinovsky, Efim; Churaev, Egor
2015-04-01
The characteristics of highly nonlinear solitary internal waves (solitons) in a two-layer flow are computed within the fully nonlinear Navier-Stokes equations using the Massachusetts Institute of Technology general circulation model (MITgcm). The model is verified and adapted against data from laboratory experiments [Carr & Davies, 2006]. The paper also compares our calculations with computations performed in the framework of the fully nonlinear Bergen Ocean Model [Thiem et al., 2011], and compares the computed soliton parameters with the predictions of the weakly nonlinear theory based on the Gardner equation. The occurrence of reverse flow in the bottom layer directly behind the soliton is confirmed in the numerical simulations. Trajectories of Lagrangian particles in the internal soliton are computed at the surface, at the interface and near the bottom, and they differ completely at the different depths of the model domain. The largest displacement of Lagrangian particles is observed in the surface layer, where it can exceed the characteristic width of the soliton by more than a factor of two and a half. Fluid particles initially located along the middle of the pycnocline move along an elongated vertical loop over a distance of not more than one third of the width of the solitary wave. In the bottom layer the fluid first moves opposite to the direction of propagation of the internal wave; then, under the influence of the reverse flow, once the bulk of the soliton's velocity field ceases to influence the trajectory, it moves in the opposite direction. The displacement of fluid particles in the bottom layer does not exceed the half-width of the solitary wave. 1. Carr, M., and Davies, P.A. The motion of an internal solitary wave of depression over a fixed bottom boundary in a shallow, two-layer fluid. Phys. Fluids, 2006, vol. 18, No. 1, 1-10. 2. Thiem, O., Carr, M., Berntsen, J., and Davies, P.A. Numerical simulation of internal solitary wave-induced reverse flow and associated vortices in a shallow, two-layer fluid benthic boundary layer. Ocean Dynamics, 2011, vol. 61, No. 6, 857-872.
Khavrutskii, Ilja V; Wallqvist, Anders
2010-11-09
This paper introduces an efficient single-topology variant of Thermodynamic Integration (TI) for computing relative transformation free energies in a series of molecules with respect to a single reference state. The presented TI variant, which we refer to as Single-Reference TI (SR-TI), combines well-established molecular simulation methodologies into a practical computational tool. Augmented with Hamiltonian Replica Exchange (HREX), the SR-TI variant can deliver enhanced sampling in select degrees of freedom. The utility of the SR-TI variant is demonstrated in calculations of relative solvation free energies for a series of benzene derivatives with increasing complexity. Notably, the SR-TI variant with the HREX option provides converged results in the challenging case of an amide molecule with a high (13-15 kcal/mol) barrier to internal cis/trans interconversion using simulation times of only 1 to 4 ns.
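The TI estimator underlying SR-TI integrates the ensemble average of dU/dλ over the coupling parameter, ΔF = ∫₀¹ ⟨dU/dλ⟩ dλ. A minimal numerical sketch using the trapezoidal rule; the window averages below are made-up numbers, not values from the paper:

```python
def ti_free_energy(lambdas, du_dlambda_means):
    """Thermodynamic Integration: Delta F = integral_0^1 <dU/dlambda> dlambda,
    estimated with the trapezoidal rule over the simulated lambda windows."""
    return sum(0.5 * (y0 + y1) * (x1 - x0)
               for x0, x1, y0, y1 in zip(lambdas, lambdas[1:],
                                         du_dlambda_means, du_dlambda_means[1:]))

# Illustrative window averages of <dU/dlambda> in kcal/mol (made-up numbers):
lam = [0.0, 0.25, 0.5, 0.75, 1.0]
du = [-40.0, -28.0, -15.0, -6.0, -1.0]
delta_f = ti_free_energy(lam, du)   # kcal/mol
```

Production TI calculations would use more λ windows (or a higher-order quadrature) where ⟨dU/dλ⟩ is strongly curved, which is exactly where enhanced sampling such as HREX helps converge the window averages.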
Space station Simulation Computer System (SCS) study for NASA/MSFC. Volume 2: Concept document
NASA Technical Reports Server (NTRS)
1989-01-01
The Simulation Computer System (SCS) concept document describes and establishes requirements for the functional performance of the SCS system, including interface, logistic, and qualification requirements. The SCS is the computational, communications and display segment of the Marshall Space Flight Center (MSFC) Payload Training Complex (PTC). The PTC is the MSFC facility that will train onboard and ground operations personnel to operate the payloads and experiments on board the international Space Station Freedom. The concept document defines the requirements to be satisfied through the implementation of the system capability, provides the operational basis for allocating requirements to the system components, and enables the systems organization to assess whether the completed system complies with those requirements.
NASA Astrophysics Data System (ADS)
Recent advances in computational fluid dynamics are discussed in reviews and reports. Topics addressed include large-scale LESs for turbulent pipe and channel flows, numerical solutions of the Euler and Navier-Stokes equations on parallel computers, multigrid methods for steady high-Reynolds-number flow past sudden expansions, finite-volume methods on unstructured grids, supersonic wake flow on a blunt body, a grid-characteristic method for multidimensional gas dynamics, and CIC numerical simulation of a wave boundary layer. Consideration is given to vortex simulations of confined two-dimensional jets, supersonic viscous shear layers, spectral methods for compressible flows, shock-wave refraction at air/water interfaces, oscillatory flow in a two-dimensional collapsible channel, the growth of randomness in a spatially developing wake, and an efficient simplex algorithm for the finite-difference and dynamic linear-programming method in optimal potential control.
Impact of detector simulation in particle physics collider experiments
NASA Astrophysics Data System (ADS)
Daniel Elvira, V.
2017-06-01
Through the last three decades, accurate simulation of the interactions of particles with matter and modeling of detector geometries has proven to be of critical importance to the success of the international high-energy physics (HEP) experimental programs. For example, the detailed detector modeling and accurate physics of the Geant4-based simulation software of the CMS and ATLAS particle physics experiments at the European Organization for Nuclear Research (CERN) Large Hadron Collider (LHC) was a determining factor in these collaborations delivering physics results of outstanding quality faster than any hadron collider experiment before. This review article highlights the impact of detector simulation on particle physics collider experiments. It presents numerous examples of the use of simulation, from detector design and optimization, through software and computing development and testing, to cases where the use of simulation samples made a difference in the precision of the physics results and in publication turnaround, from data-taking to submission. It also presents estimates of the cost and economic impact of simulation in the CMS experiment. Future experiments will collect orders of magnitude more data with increasingly complex detectors, heavily taxing the performance of simulation and reconstruction software. Consequently, exploring solutions to speed up simulation and reconstruction software to satisfy the growing demand for computing resources in a time of flat budgets is a matter that deserves immediate attention. The article ends with a short discussion of the potential solutions being considered, based on leveraging core count growth in multicore machines, using new-generation coprocessors, and re-engineering HEP code for concurrency and parallel computing.
Parallelization of a Fully-Distributed Hydrologic Model using Sub-basin Partitioning
NASA Astrophysics Data System (ADS)
Vivoni, E. R.; Mniszewski, S.; Fasel, P.; Springer, E.; Ivanov, V. Y.; Bras, R. L.
2005-12-01
A primary obstacle towards advances in watershed simulations has been the limited computational capacity available to most models. The growing trend of model complexity, data availability and physical representation has not been matched by adequate developments in computational efficiency. This situation has created a serious bottleneck which limits existing distributed hydrologic models to small domains and short simulations. In this study, we present novel developments in the parallelization of a fully-distributed hydrologic model. Our work is based on the TIN-based Real-time Integrated Basin Simulator (tRIBS), which provides continuous hydrologic simulation using a multiple resolution representation of complex terrain based on a triangulated irregular network (TIN). While the use of TINs reduces computational demand, the sequential version of the model is currently limited over large basins (>10,000 km²) and long simulation periods (>1 year). To address this, a parallel MPI-based version of the tRIBS model has been implemented and tested using high performance computing resources at Los Alamos National Laboratory. Our approach utilizes domain decomposition based on sub-basin partitioning of the watershed. A stream reach graph based on the channel network structure is used to guide the sub-basin partitioning. Individual sub-basins or sub-graphs of sub-basins are assigned to separate processors to carry out internal hydrologic computations (e.g. rainfall-runoff transformation). Routed streamflow from each sub-basin forms the major hydrologic data exchange along the stream reach graph. Individual sub-basins also share subsurface hydrologic fluxes across adjacent boundaries. We demonstrate how the sub-basin partitioning provides computational feasibility and efficiency for a set of test watersheds in northeastern Oklahoma. We compare the performance of the sequential and parallelized versions to highlight the efficiency gained as the number of processors increases.
We also discuss how the coupled use of TINs and parallel processing can lead to feasible long-term simulations in regional watersheds while preserving basin properties at high-resolution.
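The sub-basin partitioning and reach-graph streamflow exchange described above can be sketched in a few lines. This is a minimal illustration, assuming a simple round-robin assignment of sub-basins to ranks; the function names and the toy data are invented for the example and are not the tRIBS implementation.

```python
def partition_subbasins(subbasins, n_procs):
    """Illustrative round-robin assignment of sub-basins to processor ranks."""
    return {basin: i % n_procs for i, basin in enumerate(subbasins)}

def route_streamflow(runoff, downstream, order):
    """Accumulate routed streamflow along the stream reach graph.

    `order` lists sub-basins upstream-to-downstream (topological order);
    `downstream` maps each sub-basin to the sub-basin it drains into.
    """
    flow = dict(runoff)
    for basin in order:
        receiver = downstream.get(basin)
        if receiver is not None:
            flow[receiver] += flow[basin]
    return flow

ranks = partition_subbasins(["A", "B", "C", "D"], 2)
flow = route_streamflow(
    {"A": 1.0, "B": 2.0, "C": 0.5, "D": 0.0},   # local rainfall-runoff per sub-basin
    {"A": "C", "B": "C", "C": "D"},             # reach graph: A and B drain to C, C to D
    ["A", "B", "C", "D"],
)
```

In the parallel model, each rank would carry out the internal hydrologic computations for its assigned sub-basins and exchange only the routed streamflow at reach-graph boundaries via MPI.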
Quantum decision-maker theory and simulation
NASA Astrophysics Data System (ADS)
Zak, Michail; Meyers, Ronald E.; Deacon, Keith S.
2000-07-01
A quantum device simulating the human decision-making process is introduced. It consists of quantum recurrent nets generating stochastic processes which represent the motor dynamics, and of classical neural nets describing the evolution of probabilities of these processes which represent the mental dynamics. The autonomy of the decision-making process is achieved by a feedback from the mental to motor dynamics which changes the stochastic matrix based upon the probability distribution. This feedback replaces unavailable external information by an internal knowledge base stored in the mental model in the form of probability distributions. As a result, the coupled motor-mental dynamics is described by a nonlinear version of Markov chains which can decrease entropy without an external source of information. Applications to common-sense-based decisions as well as to evolutionary games are discussed. An example exhibiting self-organization is computed using quantum computer simulation. Force-on-force and mutual aircraft engagements using the quantum decision-maker dynamics are considered.
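The central mechanism, a Markov chain whose stochastic matrix is regenerated from the current probability distribution, can be illustrated with a toy example. The feedback rule below is an invented stand-in for the knowledge-base feedback in the abstract, not the authors' quantum recurrent nets.

```python
import math

def feedback(p):
    """Toy mental-to-motor feedback: build a row-stochastic matrix that
    biases every state toward the currently most probable state."""
    n = len(p)
    winner = max(range(n), key=lambda i: p[i])
    row = [0.1] * n
    row[winner] = 1.0 - 0.1 * (n - 1)
    return [row[:] for _ in range(n)]   # same row for every state in this toy

def step(p):
    """p_{k+1} = p_k T(p_k): the matrix depends on p itself, making this
    a nonlinear version of a Markov chain, as in the abstract."""
    T = feedback(p)
    n = len(p)
    return [sum(p[i] * T[i][j] for i in range(n)) for j in range(n)]

def entropy(p):
    return -sum(x * math.log(x) for x in p if x > 0)

p = [0.5, 0.3, 0.2]
h0 = entropy(p)
for _ in range(5):
    p = step(p)
```

Iterating the map sharpens the distribution toward the favored state, so the entropy decreases without any external source of information, the qualitative behavior the abstract attributes to the coupled motor-mental dynamics.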
NASA Astrophysics Data System (ADS)
Tadokoro, Satoshi; Kitano, Hiroaki; Takahashi, Tomoichi; Noda, Itsuki; Matsubara, Hitoshi; Shinjoh, Atsushi; Koto, Tetsuo; Takeuchi, Ikuo; Takahashi, Hironao; Matsuno, Fumitoshi; Hatayama, Mitsunori; Nobe, Jun; Shimada, Susumu
2000-07-01
This paper introduces the RoboCup-Rescue Simulation Project, a contribution to the disaster mitigation, search, and rescue problem. A comprehensive urban disaster simulator is constructed on distributed computers. Heterogeneous intelligent agents such as fire fighters, victims, and volunteers conduct search and rescue activities in this virtual disaster world. A real-world interface integrates various sensor systems and infrastructure controllers in real cities with this virtual world. Real-time simulation is synchronized with actual disasters, computing the complex relationships between various damage factors and agent behaviors. A mission-critical man-machine interface provides portability and robustness for disaster mitigation centers, as well as augmented-reality interfaces for rescue in real disasters. It also provides a virtual-reality training function for the public. This diverse spectrum of RoboCup-Rescue contributes to the creation of a safer social system.
NASA Technical Reports Server (NTRS)
Tweedt, Daniel L.
2014-01-01
Computational aerodynamic simulations of an 840 ft/sec tip speed Advanced Ducted Propulsor fan system were performed at five different operating points on the fan operating line, in order to provide detailed internal flow field information for use with fan acoustic prediction methods presently being developed, assessed, and validated. The fan system is a sub-scale, low-noise research fan/nacelle model that has undergone extensive experimental testing in the 9- by 15-foot Low Speed Wind Tunnel at the NASA Glenn Research Center, resulting in high-quality, detailed aerodynamic and acoustic measurement data. Details of the fan geometry, the computational fluid dynamics methods, the computational grids, and various computational parameters relevant to the numerical simulations are discussed. Flow field results for three of the five operating conditions simulated are presented in order to provide a representative look at the computed solutions. Each of the five fan aerodynamic simulations involved the entire fan system, excluding a long core duct section downstream of the core inlet guide vane. As a result, only fan rotational speed and system bypass ratio, set by specifying static pressure downstream of the core inlet guide vane row, were adjusted in order to set the fan operating point, leading to operating points that lie on a fan operating line and making mass flow rate a fully dependent parameter. The resulting mass flow rates are in good agreement with measurement values. The computed blade row flow fields for all five fan operating points are, in general, aerodynamically healthy. Rotor blade and fan exit guide vane flow characteristics are good, including incidence and deviation angles, chordwise static pressure distributions, blade surface boundary layers, secondary flow structures, and blade wakes. Examination of the computed flow fields reveals no excessive boundary layer separations or related secondary-flow problems.
A few spanwise comparisons between computational and measurement data in the bypass duct show that they are in good agreement, thus providing a partial validation of the computational results.
International Space Station (ISS)
2001-02-01
The Marshall Space Flight Center (MSFC) is responsible for designing and building the life support systems that will provide the crew of the International Space Station (ISS) a comfortable environment in which to live and work. Scientists and engineers at the MSFC are working together to provide the ISS with systems that are safe, efficient, and cost-effective. These compact and powerful systems are collectively called the Environmental Control and Life Support Systems, or simply, ECLSS. This is an exterior view of the U.S. Laboratory Module Simulator containing the ECLSS Internal Thermal Control System (ITCS) testing facility at MSFC. At the bottom right are the data acquisition and control computers (in the blue equipment racks) that monitor the testing in the facility. The ITCS simulator facility duplicates the function, operation, and troubleshooting problems of the ITCS. The main function of the ITCS is to control the temperature of equipment and hardware installed in a typical ISS Payload Rack.
NASA Astrophysics Data System (ADS)
Peterson, J. L.; Bell, R.; Candy, J.; Guttenfelder, W.; Hammett, G. W.; Kaye, S. M.; LeBlanc, B.; Mikkelsen, D. R.; Smith, D. R.; Yuh, H. Y.
2012-05-01
The National Spherical Torus Experiment (NSTX) [M. Ono et al., Nucl. Fusion 40, 557 (2000)] can achieve high electron plasma confinement regimes that are super-critically unstable to the electron temperature gradient driven (ETG) instability. These plasmas, dubbed electron internal transport barriers (e-ITBs), occur when the magnetic shear becomes strongly negative. Using the gyrokinetic code GYRO [J. Candy and R. E. Waltz, J. Comput. Phys. 186, 545 (2003)], the first nonlinear ETG simulations of NSTX e-ITB plasmas reinforce this observation. Local simulations identify a strongly upshifted nonlinear critical gradient for thermal transport that depends on magnetic shear. Global simulations show e-ITB formation can occur when the magnetic shear becomes strongly negative. While the ETG-driven thermal flux at the outer edge of the barrier is large enough to be experimentally relevant, the turbulence cannot propagate past the barrier into the plasma interior.
Numerical Simulation of the Oscillations in a Mixer: An Internal Aeroacoustic Feedback System
NASA Technical Reports Server (NTRS)
Jorgenson, Philip C. E.; Loh, Ching Y.
2004-01-01
The space-time conservation element and solution element method is employed to numerically study the acoustic feedback system in a high temperature, high speed wind tunnel mixer. The computation captures the self-sustained feedback loop between reflecting Mach waves and the shear layer. This feedback loop results in violent instabilities that are suspected of causing damage to some tunnel components. The computed frequency is in good agreement with the available experimental data. The physical phenomena are explained based on the numerical results.
ATCA for Machines-- Advanced Telecommunications Computing Architecture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larsen, R.S.; /SLAC
2008-04-22
The Advanced Telecommunications Computing Architecture is a new industry open standard for electronics instrument modules and shelves being evaluated for the International Linear Collider (ILC). It is the first industrial standard designed for High Availability (HA). ILC availability simulations have shown clearly that the capabilities of ATCA are needed in order to achieve acceptable integrated luminosity. The ATCA architecture looks attractive for beam instruments and detector applications as well. This paper provides an overview of ongoing R&D including application of HA principles to power electronics systems.
Computer predictions of ground storage effects on performance of Galileo and ISPM generators
NASA Technical Reports Server (NTRS)
Chmielewski, A.
1983-01-01
Radioisotope Thermoelectric Generators (RTGs) that will supply electrical power to the Galileo and International Solar Polar Mission (ISPM) spacecraft are exposed to several degradation mechanisms during prolonged ground storage before launch. To assess the effect of storage on the RTG flight performance, a computer code has been developed which simulates all known degradation mechanisms that occur in an RTG during storage and flight. The modeling of these mechanisms and their impact on RTG performance are discussed.
Current problems in applied mathematics and mathematical physics
NASA Astrophysics Data System (ADS)
Samarskii, A. A.
Papers are presented on such topics as mathematical models in immunology, mathematical problems of medical computer tomography, classical orthogonal polynomials depending on a discrete variable, and boundary layer methods for singular perturbation problems in partial differential equations. Consideration is also given to the computer simulation of a supernova explosion, nonstationary internal waves in a stratified fluid, the description of turbulent flows by unsteady solutions of the Navier-Stokes equations, and the reduced Galerkin method for external diffraction problems using the spline approximation of fields.
Application of CFD codes to the design and development of propulsion systems
NASA Technical Reports Server (NTRS)
Lord, W. K.; Pickett, G. F.; Sturgess, G. J.; Weingold, H. D.
1987-01-01
The internal flows of aerospace propulsion engines have certain common features that are amenable to analysis through Computational Fluid Dynamics (CFD) computer codes. Although the application of CFD to engineering problems in engines was delayed by the complexities associated with internal flows, many codes with different capabilities are now being used as routine design tools. This is illustrated by examples, taken from the aircraft gas turbine engine, of flows calculated with potential flow, Euler flow, parabolized Navier-Stokes, and Navier-Stokes codes. Likely future directions of CFD applied to engine flows are described, and current barriers to continued progress are highlighted. The potential importance of the Numerical Aerodynamic Simulator (NAS) to the resolution of these difficulties is suggested.
NIMROD: A computational laboratory for studying nonlinear fusion magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Sovinec, C. R.; Gianakon, T. A.; Held, E. D.; Kruger, S. E.; Schnack, D. D.
2003-05-01
Nonlinear numerical studies of macroscopic modes in a variety of magnetic fusion experiments are made possible by the flexible high-order accurate spatial representation and semi-implicit time advance in the NIMROD simulation code [A. H. Glasser et al., Plasma Phys. Controlled Fusion 41, A747 (1999)]. Simulation of a resistive magnetohydrodynamics mode in a shaped toroidal tokamak equilibrium demonstrates computation with disparate time scales; simulations of discharge 87009 in the DIII-D tokamak [J. L. Luxon et al., Plasma Physics and Controlled Nuclear Fusion Research 1986 (International Atomic Energy Agency, Vienna, 1987), Vol. I, p. 159] confirm an analytic scaling for the temporal evolution of an ideal mode subject to plasma-β increasing beyond marginality; and a spherical torus simulation demonstrates nonlinear free-boundary capabilities. A comparison of numerical results on magnetic relaxation finds the n=1 mode and flux amplification in spheromaks to be very closely related to the m=1 dynamo modes and magnetic reversal in reversed-field pinch configurations. Advances in local and nonlocal closure relations developed for modeling kinetic effects in fluid simulation are also described.
Propulsion Simulations with the Unstructured-Grid CFD Tool TetrUSS
NASA Technical Reports Server (NTRS)
Deere, Karen A.; Pandya, Mohagna J.
2002-01-01
A computational investigation has been completed to assess the capability of the NASA Tetrahedral Unstructured Software System (TetrUSS) for simulation of exhaust nozzle flows. Three configurations were chosen for this study: (1) a fluidic jet effects model, (2) an isolated nacelle with a supersonic cruise nozzle, and (3) a fluidic pitch-thrust-vectoring nozzle. These configurations were chosen because existing data provided a means for measuring the ability of the TetrUSS flow solver USM3D for simulating complex nozzle flows. Fluidic jet effects model simulations were compared with structured-grid CFD (computational fluid dynamics) data at Mach numbers from 0.3 to 1.2 at nozzle pressure ratios up to 7.2. Simulations of an isolated nacelle with a supersonic cruise nozzle were compared with wind tunnel experimental data and structured-grid CFD data at Mach numbers of 0.9 and 1.2, with a nozzle pressure ratio of 5. Fluidic pitch-thrust-vectoring nozzle simulations were compared with static experimental data and structured-grid CFD data at static freestream conditions and nozzle pressure ratios from 3 to 10. A fluidic injection case was computed with the third configuration at a nozzle pressure ratio of 4.6 and a secondary pressure ratio of 0.7. Results indicate that USM3D with the S-A turbulence model provides accurate exhaust nozzle simulations at on-design conditions, but does not predict internal shock location at overexpanded conditions or pressure recovery along a boattail at transonic conditions.
NASA Astrophysics Data System (ADS)
Akai, Hisazumi; Oguchi, Tamio
2007-09-01
This special issue of Journal of Physics: Condensed Matter comprises selected papers from the 1st International Conference on Quantum Simulators and Design (QSD2006) held in Hiroshima, Japan, 3-6 December 2006. This conference was organized under the auspices of the Development of New Quantum Simulators and Quantum Design Grant-in-Aid for Scientific Research on Priority Areas, Ministry of Education, Culture, Sports, Science and Technology of Japan (MEXT), and Hiroshima University. Quantum design is a computational approach to the development of new materials with specified properties and functionalities. The basic ingredient is the use of quantum simulations to design a material that meets a given specification of properties and functionalities. For this to be successful, the quantum simulation should be highly reliable and be applicable to systems of realistic size. A central interest is, therefore, the development of new methods of quantum simulation and quantum design. This includes methods beyond the local density approximation of density functional theory (LDA), order-N methods, methods dealing with excitations and reactions, and so on, as well as the application of these methods to the design of new materials and devices. The field of quantum design has developed rapidly in the past few years, and this conference provided an international forum for experimental and theoretical researchers to exchange ideas. A total of 183 delegates from 8 countries participated in the conference. There were 18 invited talks, 16 oral presentations and 100 posters. There were many new ideas and we foresee dramatic progress in the coming years. The 2nd International Conference on Quantum Simulators and Design will be held in Tokyo, Japan, 31 May-3 June 2008.
Computational Analysis of Arc-Jet Wedge Tests Including Ablation and Shape Change
NASA Technical Reports Server (NTRS)
Goekcen, Tahir; Chen, Yih-Kanq; Skokova, Kristina A.; Milos, Frank S.
2010-01-01
Coupled fluid-material response analyses of arc-jet wedge ablation tests conducted in a NASA Ames arc-jet facility are considered. These tests were conducted using blunt wedge models placed in a free jet downstream of the 6-inch diameter conical nozzle in the Ames 60-MW Interaction Heating Facility. The fluid analysis includes computational Navier-Stokes simulations of the nonequilibrium flowfield in the facility nozzle and test box as well as the flowfield over the models. The material response analysis includes simulation of two-dimensional surface ablation and internal heat conduction, thermal decomposition, and pyrolysis gas flow. For ablating test articles undergoing shape change, the material response and fluid analyses are coupled in order to calculate the time dependent surface heating and pressure distributions that result from shape change. The ablating material used in these arc-jet tests was Phenolic Impregnated Carbon Ablator. Effects of the test article shape change on fluid and material response simulations are demonstrated, and computational predictions of surface recession, shape change, and in-depth temperatures are compared with the experimental measurements.
Analysis of internal flows relative to the space shuttle main engine
NASA Technical Reports Server (NTRS)
1987-01-01
Cooperative efforts between the Lockheed-Huntsville Computational Mechanics Group and the NASA-MSFC Computational Fluid Dynamics staff have resulted in improved capabilities for numerically simulating incompressible flows generic to the Space Shuttle Main Engine (SSME). A well-established and documented CFD code was obtained, modified, and applied to laminar and turbulent flows of the type occurring in the SSME Hot Gas Manifold. The INS3D code was installed on the NASA-MSFC CRAY-XMP computer system and is currently being used by NASA engineers. Studies to perform a transient analysis of the FPB were conducted. The COBRA/TRAC code is recommended for simulating the transient flow of oxygen into the LOX manifold. Property data for modifying the code to represent LOX/GOX flow were collected. The ALFA code was developed and recommended for representing the transient combustion in the preburner. These two codes will couple through the transient boundary conditions to simulate the startup and/or shutdown of the fuel preburner. A study, NAS8-37461, is currently being conducted to implement this modeling effort.
2005-02-01
[Meeting-program excerpt: STA events at the ASA (breakfast panel; STA dinner and N. Ty Smith Lecture); Computers in Anesthesia meeting (October); Friday, January 14 schedule including continental breakfast with exhibits and poster viewing, welcome and introductions, four workshops, and a roundtable on education research.]
Scramjet Combustor Simulations Using Reduced Chemical Kinetics for Practical Fuels
2003-12-01
…the aerospace industry in reducing prototype and testing costs and the time needed to bring products to market. Accurate simulation of chemical… JP-8 kinetics and soot models into the UNICORN CFD code (Montgomery et al., 2003a)… NSF Phase I and II SBIRs for development of a computer-assisted… Acronyms: …divided by diameter; QSS, quasi-steady state; REI, Reaction Engineering International; UNICORN, UNsteady Ignition and COmbustion with ReactioNs; VULCAN, Viscous Upwind aLgorithm for Complex flow ANalysis.
NASA Astrophysics Data System (ADS)
Kim, Yong W.
Various papers on shock waves are presented. The general topics addressed include: shock formation, focusing, and implosion; shock reflection and diffraction; turbulence; laser-produced plasmas and waves; ionization and shock-plasma interaction; chemical kinetics, pyrolysis, and soot formation; experimental facilities, techniques, and applications; ignition of detonation and combustion; particle entrainment and shock propagation through particle suspension; boundary layers and blast simulation; computational methods and numerical simulation.
Enabling Extreme Scale Earth Science Applications at the Oak Ridge Leadership Computing Facility
NASA Astrophysics Data System (ADS)
Anantharaj, V. G.; Mozdzynski, G.; Hamrud, M.; Deconinck, W.; Smith, L.; Hack, J.
2014-12-01
The Oak Ridge Leadership Computing Facility (OLCF), established at the Oak Ridge National Laboratory (ORNL) under the auspices of the U.S. Department of Energy (DOE), welcomes investigators from universities, government agencies, national laboratories, and industry who are prepared to perform breakthrough research across a broad domain of scientific disciplines, including earth and space sciences. Titan, the OLCF flagship system, is currently listed as #2 in the Top500 list of supercomputers in the world, and the largest available for open science. The computational resources are allocated primarily via the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program, sponsored by the U.S. DOE Office of Science. In 2014, over 2.25 billion core hours on Titan were awarded via INCITE projects, including 14% of the allocation toward earth sciences. The INCITE competition is also open to research scientists based outside the USA. In fact, international research projects account for 12% of the INCITE awards in 2014. The INCITE scientific review panel also includes 20% participation from international experts. Recent accomplishments in earth sciences at OLCF include the world's first continuous simulation of 21,000 years of earth's climate history (2009) and an unprecedented simulation of a magnitude 8 earthquake over 125 sq. miles. One of the ongoing international projects involves scaling the ECMWF Integrated Forecasting System (IFS) model to over 200K cores of Titan. ECMWF is a partner in the EU-funded Collaborative Research into Exascale Systemware, Tools and Applications (CRESTA) project. The significance of the research carried out within this project is the demonstration of techniques required to scale current generation Petascale-capable simulation codes towards the performance levels required for running on future Exascale systems.
One of the techniques pursued by ECMWF is to use Fortran2008 coarrays to overlap computations and communications and to reduce the total volume of data communicated. Use of Titan has enabled ECMWF to plan future scalability developments and resource requirements. We will also discuss the best practices developed over the years in navigating logistical, legal and regulatory hurdles involved in supporting the facility's diverse user community.
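The overlap pattern described above, communicating one chunk of data in the background while computing on the next, can be sketched generically. This is a Python threading sketch of the pattern, not Fortran 2008 coarrays; the `send` function is a hypothetical stand-in for a network transfer.

```python
from concurrent.futures import ThreadPoolExecutor

def send(chunk):
    """Hypothetical stand-in for communicating one chunk of data."""
    return sum(chunk)  # pretend this is a network transfer

def compute(chunk):
    """Local computation on one chunk."""
    return [x * x for x in chunk]

def overlapped(chunks):
    """Overlap communication of chunk i with computation on chunk i,
    so the processor is never idle waiting on the network."""
    results, sent = [], []
    pending = None
    with ThreadPoolExecutor(max_workers=1) as comm:
        for chunk in chunks:
            if pending is not None:
                sent.append(pending.result())    # collect previous transfer
            pending = comm.submit(send, chunk)   # communication in background
            results.append(compute(chunk))       # computation proceeds meanwhile
        sent.append(pending.result())
    return results, sent

results, sent = overlapped([[1, 2], [3, 4]])
```

The coarray work at ECMWF pursues the same goal, hiding communication latency behind computation and reducing the total volume of data communicated, at much larger scale.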
Parameter Sweep and Optimization of Loosely Coupled Simulations Using the DAKOTA Toolkit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elwasif, Wael R; Bernholdt, David E; Pannala, Sreekanth
2012-01-01
The increasing availability of large-scale computing capabilities has accelerated the development of high-fidelity coupled simulations. Such simulations typically involve the integration of models that implement various aspects of the complex phenomena under investigation. Coupled simulations are playing an integral role in fields such as climate modeling, earth systems modeling, rocket simulations, computational chemistry, fusion research, and many other computational fields. Model coupling provides scientists with systematic ways to virtually explore the physical, mathematical, and computational aspects of the problem. Such exploration is rarely done using a single execution of a simulation, but rather by aggregating the results from many simulation runs that, together, serve to bring to light novel knowledge about the system under investigation. Furthermore, it is often the case (particularly in engineering disciplines) that the study of the underlying system takes the form of an optimization regime, where the control parameter space is explored to optimize an objective function that captures system realizability, cost, performance, or a combination thereof. Novel and flexible frameworks that facilitate the integration of the disparate models into a holistic simulation are used to perform this research, while making efficient use of the available computational resources. In this paper, we describe the integration of the DAKOTA optimization and parameter sweep toolkit with the Integrated Plasma Simulator (IPS), a component-based framework for loosely coupled simulations. The integration allows DAKOTA to exploit the internal task and resource management of the IPS to dynamically instantiate simulation instances within a single IPS instance, allowing for greater control over the trade-off between efficiency of resource utilization and time to completion.
We present a case study showing the use of the combined DAKOTA-IPS system to aid in the design of a lithium ion battery (LIB) cell, by studying a coupled system involving the electrochemistry and ion transport at the lower length scales and thermal energy transport at the device scales. The DAKOTA-IPS system provides a flexible tool for use in optimization and parameter sweep studies involving loosely coupled simulations that is suitable for use in situations where changes to the constituent components in the coupled simulation are impractical due to intellectual property or code heritage issues.
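The control-parameter sweep pattern described above can be sketched as follows. Here `run_simulation` is a hypothetical stand-in for one loosely coupled simulation instance and the toy objective is invented; this is not the DAKOTA or IPS API.

```python
import itertools

def run_simulation(params):
    """Hypothetical stand-in for one coupled simulation run; returns an
    objective value (e.g. cost or performance) to be minimized."""
    x, y = params["x"], params["y"]
    return (x - 1.0) ** 2 + (y + 2.0) ** 2

def parameter_sweep(grid):
    """Evaluate every point on the Cartesian parameter grid and return the
    best one. A toolkit like DAKOTA would dispatch these runs concurrently
    through the framework's task manager rather than serially."""
    best = None
    for values in itertools.product(*grid.values()):
        params = dict(zip(grid.keys(), values))
        obj = run_simulation(params)
        if best is None or obj < best[1]:
            best = (params, obj)
    return best

best_params, best_obj = parameter_sweep(
    {"x": [0.0, 1.0, 2.0], "y": [-3.0, -2.0, -1.0]}
)
```

The design trade-off the abstract mentions is visible even in this sketch: instantiating many `run_simulation` calls within one framework instance lets the scheduler balance resource utilization against time to completion.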
Schappals, Michael; Mecklenfeld, Andreas; Kröger, Leif; Botan, Vitalie; Köster, Andreas; Stephan, Simon; García, Edder J; Rutkai, Gabor; Raabe, Gabriele; Klein, Peter; Leonhard, Kai; Glass, Colin W; Lenhard, Johannes; Vrabec, Jadran; Hasse, Hans
2017-09-12
Thermodynamic properties are often modeled by classical force fields which describe the interactions on the atomistic scale. Molecular simulations are used for retrieving thermodynamic data from such models, and many simulation techniques and computer codes are available for that purpose. In the present round robin study, the following fundamental question is addressed: Will different user groups working with different simulation codes obtain coinciding results within the statistical uncertainty of their data? A set of 24 simple simulation tasks is defined and solved by five user groups working with eight molecular simulation codes: DL_POLY, GROMACS, IMC, LAMMPS, ms2, NAMD, Tinker, and TOWHEE. Each task consists of the definition of (1) a pure fluid that is described by a force field and (2) the conditions under which that property is to be determined. The fluids are four simple alkanes: ethane, propane, n-butane, and iso-butane. All force fields consider internal degrees of freedom: OPLS, TraPPE, and a modified OPLS version with bond stretching vibrations. Density and potential energy are determined as a function of temperature and pressure on a grid which is specified such that all states are liquid. The user groups worked independently and reported their results to a central instance. The full set of results was disclosed to all user groups only at the end of the study. During the study, the central instance gave only qualitative feedback. The results reveal the challenges of carrying out molecular simulations. Several iterations were needed to eliminate gross errors. For most simulation tasks, the remaining deviations between the results of the different groups are acceptable from a practical standpoint, but they are often outside of the statistical errors of the individual simulation data. However, there are also cases where the deviations are unacceptable. 
This study highlights similarities between computer experiments and laboratory experiments, which are both subject not only to statistical error but also to systematic error.
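The consistency criterion at issue in the study, whether deviations between groups lie within their statistical uncertainties, can be stated as a simple agreement test. The function and the numerical values below are invented for illustration and are not taken from the round robin data.

```python
def agree(a, ua, b, ub, k=2.0):
    """Two reported values agree if their difference is within k times the
    combined statistical uncertainty (ua, ub are standard uncertainties)."""
    return abs(a - b) <= k * (ua ** 2 + ub ** 2) ** 0.5

# Invented example: two groups report a liquid density (g/cm^3) with
# statistical uncertainties from their simulations.
within = agree(0.500, 0.002, 0.503, 0.002)   # deviation inside error bars
outside = agree(0.500, 0.001, 0.510, 0.001)  # deviation suggests systematic error
```

A failure of this test, as found for several tasks in the study, signals a systematic difference between codes or setups rather than ordinary statistical scatter.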
A Note on the Incremental Validity of Aggregate Predictors.
ERIC Educational Resources Information Center
Day, H. D.; Marshall, David
Three computer simulations were conducted to show that very high aggregate predictive validity coefficients can occur when the across-case variability in absolute score stability occurring in both the predictor and criterion matrices is quite small. In light of the increase in internal consistency reliability achieved by the method of aggregation…
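For context on why aggregation inflates such coefficients, the classical reliability gain from aggregating parallel measures is given by the Spearman-Brown prophecy formula. This is a standard psychometric result, not necessarily the simulation model used in the note.

```python
def spearman_brown(r, k):
    """Reliability of an aggregate of k parallel measures, each with
    reliability r (Spearman-Brown prophecy formula)."""
    return k * r / (1 + (k - 1) * r)

# Aggregating four measures that each have reliability 0.5:
r_agg = spearman_brown(0.5, 4)
```

Even modestly reliable components yield a highly reliable aggregate, which is why aggregate predictive validity coefficients can be very high even when individual-level stability is small.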
Tian, Wei; Han, Xu; Zuo, Wangda; ...
2018-01-31
This paper presents a comprehensive review of the open literature on motivations, methods, and applications of linking stratified airflow simulation to building energy simulation (BES). First, we reviewed the motivations for coupling prediction models for building energy and indoor environment. This review classified the various exchanged data in different applications as interface data and state data, and found that choosing different data sets may lead to varying performance of stability, convergence, and speed for the co-simulation. Second, our review shows that an external coupling scheme is substantially more popular in implementations of co-simulation than an internal coupling scheme. The external coupling is shown to be generally faster in computational speed, as well as easier to implement, maintain, and expand than the internal coupling. Third, the external coupling can be carried out in different data synchronization schemes, including static coupling and dynamic coupling. In comparison, the static coupling that performs data exchange only once is computationally faster and more stable than the dynamic coupling. However, concerning accuracy, the dynamic coupling that requires multiple times of data exchange is more accurate than the static coupling. Furthermore, the review identified that the implementation of the external coupling can be achieved through customized interfaces, middleware, and standard interfaces. The customized interface is straightforward but may be limited to a specific coupling application. The middleware is versatile and user-friendly but usually limited in data synchronization schemes. The standard interface is versatile and promising, but may be difficult to implement. Current applications of the co-simulation are mainly energy performance evaluation and control studies. Finally, we discussed the limitations of the current research and provided an overview for future research.
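The static versus dynamic data-synchronization schemes contrasted above can be sketched with two toy models. Both model functions are invented stand-ins for an airflow solver and a BES engine; the coefficients are arbitrary.

```python
def airflow_model(heat_gain):
    """Toy stratified-airflow stand-in: zone air temperature (degC)
    resulting from a given heat gain (W)."""
    return 20.0 + 0.01 * heat_gain

def energy_model(air_temp):
    """Toy BES stand-in: heat gain (W) delivered at a given air temperature."""
    return 1000.0 - 20.0 * (air_temp - 20.0)

def static_coupling():
    """Data exchanged once: fast and stable, but the two models are left
    mutually inconsistent."""
    return airflow_model(energy_model(20.0))

def dynamic_coupling(iterations=30):
    """Data exchanged repeatedly until the two models reach a consistent
    (fixed-point) state: slower but more accurate."""
    temp = 20.0
    for _ in range(iterations):
        temp = airflow_model(energy_model(temp))
    return temp

t_static = static_coupling()
t_dynamic = dynamic_coupling()
```

The gap between `t_static` and `t_dynamic` illustrates the accuracy cost of exchanging data only once, which is the trade-off the review identifies between the two schemes.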
Radaelli, A G; Augsburger, L; Cebral, J R; Ohta, M; Rüfenacht, D A; Balossino, R; Benndorf, G; Hose, D R; Marzo, A; Metcalfe, R; Mortier, P; Mut, F; Reymond, P; Socci, L; Verhegghe, B; Frangi, A F
2008-07-19
This paper presents the results of the Virtual Intracranial Stenting Challenge (VISC) 2007, an international initiative whose aim was to establish the reproducibility of state-of-the-art haemodynamical simulation techniques in subject-specific stented models of intracranial aneurysms (IAs). IAs are pathological dilatations of the cerebral artery walls, which are associated with high mortality and morbidity rates due to subarachnoid haemorrhage following rupture. The deployment of a stent as flow diverter has recently been indicated as a promising treatment option, which has the potential to protect the aneurysm by reducing the action of haemodynamical forces and facilitating aneurysm thrombosis. The direct assessment of changes in aneurysm haemodynamics after stent deployment is hampered by limitations in existing imaging techniques and currently requires resorting to numerical simulations. Numerical simulations also have the potential to assist in the personalized selection of an optimal stent design prior to intervention. However, from the current literature it is difficult to assess the level of technological advancement and the reproducibility of haemodynamical predictions in stented patient-specific models. The VISC 2007 initiative engaged in the development of a multicentre-controlled benchmark to analyse differences induced by diverse grid generation and computational fluid dynamics (CFD) technologies. The challenge also represented an opportunity to provide a survey of available technologies currently adopted by international teams from both academic and industrial institutions for constructing computational models of stented aneurysms. 
The results demonstrate the ability of current strategies to consistently quantify the performance of three commercial intracranial stents, and help reinforce confidence in haemodynamical simulation, thus taking a step forward towards the introduction of simulation tools to support diagnostics and interventional planning.
NASA Technical Reports Server (NTRS)
Tweedt, Daniel L.
2014-01-01
Computational aerodynamic simulations of a 1484 ft/sec tip speed quiet high-speed fan system were performed at five different operating points on the fan operating line, in order to provide detailed internal flow field information for use with fan acoustic prediction methods presently being developed, assessed and validated. The fan system is a sub-scale, low-noise research fan/nacelle model that has undergone experimental testing in the 9- by 15-foot Low Speed Wind Tunnel at the NASA Glenn Research Center. Details of the fan geometry, the computational fluid dynamics methods, the computational grids, and various computational parameters relevant to the numerical simulations are discussed. Flow field results for three of the five operating points simulated are presented in order to provide a representative look at the computed solutions. Each of the five fan aerodynamic simulations involved the entire fan system, which includes a core duct and a bypass duct that merge upstream of the fan system nozzle. As a result, only the fan rotational speed and the system bypass ratio, set by means of a translating nozzle plug, were adjusted in order to set the fan operating point, leading to operating points that lie on a fan operating line and making mass flow rate a fully dependent parameter. The resulting mass flow rates are in good agreement with measured values. Computed blade row flow fields at all fan operating points are, in general, aerodynamically healthy. Rotor blade and fan exit guide vane flow characteristics are good, including incidence and deviation angles, chordwise static pressure distributions, blade surface boundary layers, secondary flow structures, and blade wakes. Examination of the computed flow fields reveals no excessive or critical boundary layer separations or related secondary-flow problems, with the exception of the hub boundary layer at the core duct entrance, where a significant flow separation is present.
The region of local flow recirculation, however, extends through a mixing plane, and the particular mixing-plane model used is now known to exaggerate the recirculation. In any case, the flow separation has relatively little impact on the computed rotor and FEGV flow fields.
Modeling a flexible representation machinery of human concept learning.
Matsuka, Toshihiko; Sakamoto, Yasuaki; Chouchourelou, Arieta
2008-01-01
It is widely acknowledged that categorically organized abstract knowledge plays a significant role in higher-order human cognition. Yet there are many open questions about how categories are internally represented in the mind. Traditionally, it has been assumed that there is a single innate internal representation system for categorical knowledge, such as exemplars, prototypes, or rules. However, results of recent empirical and computational studies collectively suggest that the human internal representation system is capable of exhibiting behaviors consistent with various types of internal representation schemes. We therefore hypothesized that the human representational system is a dynamic mechanism, capable of selecting a representation scheme that meets situational characteristics, including the complexity of the category structure. The present paper introduces a framework for a cognitive model that integrates robust and flexible internal representation machinery. Three simulation studies were conducted. The results showed that SUPERSET, our new model, successfully exhibited cognitive behaviors consistent with the three main theories of the human internal representation system. Furthermore, a simulation study on social cognitive behaviors showed that the model was capable of acquiring knowledge with high commonality, even for a category structure with numerous valid conceptualizations.
Passot, Jean-Baptiste; Luque, Niceto R.; Arleo, Angelo
2013-01-01
The cerebellum is thought to mediate sensorimotor adaptation through the acquisition of internal models of the body-environment interaction. These representations can be of two types, identified as forward and inverse models. The first predicts the sensory consequences of actions, while the second provides the correct commands to achieve desired state transitions. In this paper, we propose a composite architecture consisting of multiple cerebellar internal models to account for the adaptation performance of humans during sensorimotor learning. The proposed model takes inspiration from the cerebellar microcomplex circuit, and employs spiking neurons to process information. We investigate the intrinsic properties of the cerebellar circuitry subserving efficient adaptation properties, and we assess the complementary contributions of internal representations by simulating our model in a procedural adaptation task. Our simulation results suggest that the coupling of internal models enhances learning performance significantly (compared with independent forward and inverse models), and it allows for the reproduction of human adaptation capabilities. Furthermore, we provide a computational explanation for the performance improvement observed after one night of sleep in a wide range of sensorimotor tasks. We predict that internal model coupling is a necessary condition for the offline consolidation of procedural memories. PMID:23874289
NASA Astrophysics Data System (ADS)
Hobson, T.; Clarkson, V.
2012-09-01
As a result of continual space activity since the 1950s, there are now a large number of man-made Resident Space Objects (RSOs) orbiting the Earth. Because of the number of objects and their relative speeds, the possibility of destructive collisions involving important space assets is now of significant concern to users and operators of space-borne technologies. As a result, a growing number of international agencies are researching methods for improving techniques to maintain Space Situational Awareness (SSA). Computer simulation is a method commonly used by many countries to validate competing methodologies prior to full-scale adoption. The use of supercomputing and/or reduced-scale testing is often necessary to effectively simulate such a complex problem on today's computers. Recently the authors presented a simulation aimed at reducing the computational burden by selecting the minimum level of fidelity necessary for contrasting methodologies and by utilising multi-core CPU parallelism for increased computational efficiency. The resulting simulation runs on a single PC while maintaining the ability to effectively evaluate competing methodologies. Nonetheless, the ability to control the scale and expand upon the computational demands of the sensor management system is limited. In this paper, we examine the advantages of increasing the parallelism of the simulation by means of General Purpose computing on Graphics Processing Units (GPGPU). As many sub-processes pertaining to SSA management are independent, we demonstrate how parallelisation via GPGPU has the potential to significantly enhance not only research into techniques for maintaining SSA, but also the sophistication of existing space surveillance sensors and sensor management systems. Nonetheless, the use of GPGPU imposes certain limitations and adds to the implementation complexity, both of which require consideration to achieve an effective system.
We discuss these challenges and how they can be overcome. We further describe an application of the parallelised system where visibility prediction is used to enhance sensor management. This facilitates significant improvement in maximum catalogue error when RSOs become temporarily unobservable. The objective is to demonstrate the enhanced scalability and increased computational capability of the system.
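The per-object independence that makes SSA simulation amenable to GPGPU can be illustrated with a vectorised, data-parallel propagation of many idealised circular orbits (a NumPy sketch of the parallel pattern only, not the authors' simulation; the orbits are simplified two-body circular motion):

```python
import numpy as np

# Data-parallel sketch: advance many circular orbits at once with vectorised
# NumPy, the CPU analogue of the per-object GPGPU parallelism described in
# the paper. Every object is updated independently of all the others.

MU = 3.986004418e14  # Earth's gravitational parameter [m^3 / s^2]

def propagate_angles(radii_m, theta0, dt):
    # Mean motion n = sqrt(mu / r^3); each object's phase angle advances
    # independently, so the update maps directly onto parallel hardware.
    n = np.sqrt(MU / radii_m ** 3)
    return (theta0 + n * dt) % (2.0 * np.pi)

# Example: a LEO object and a GEO object propagated over one hour.
radii = np.array([7.0e6, 4.2164e7])
angles = propagate_angles(radii, np.array([0.0, 1.0]), dt=3600.0)
```

Propagating an object over exactly one orbital period returns its initial phase, which makes the sketch easy to sanity-check.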
A variational approach to multi-phase motion of gas, liquid and solid based on the level set method
NASA Astrophysics Data System (ADS)
Yokoi, Kensuke
2009-07-01
We propose a simple and robust numerical algorithm to deal with multi-phase motion of gas, liquid and solid based on the level set method [S. Osher, J.A. Sethian, Fronts propagating with curvature-dependent speed: algorithms based on Hamilton-Jacobi formulations, J. Comput. Phys. 79 (1988) 12; M. Sussman, P. Smereka, S. Osher, A level set approach for computing solutions to incompressible two-phase flow, J. Comput. Phys. 114 (1994) 146; J.A. Sethian, Level Set Methods and Fast Marching Methods, Cambridge University Press, 1999; S. Osher, R. Fedkiw, Level Set Methods and Dynamic Implicit Surfaces, Applied Mathematical Sciences, vol. 153, Springer, 2003]. In the Eulerian framework, to simulate the interaction between a moving solid object and an interfacial flow, we need to define at least two functions (level set functions) to distinguish three materials. In such simulations, the two functions in general overlap and/or disagree due to numerical errors such as numerical diffusion. In this paper, we resolve the problem using the idea of the active contour model [M. Kass, A. Witkin, D. Terzopoulos, Snakes: active contour models, International Journal of Computer Vision 1 (1988) 321; V. Caselles, R. Kimmel, G. Sapiro, Geodesic active contours, International Journal of Computer Vision 22 (1997) 61; G. Sapiro, Geometric Partial Differential Equations and Image Analysis, Cambridge University Press, 2001; R. Kimmel, Numerical Geometry of Images: Theory, Algorithms, and Applications, Springer-Verlag, 2003] introduced in the field of image processing.
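The basic level set idea referenced above can be sketched in a few lines: the interface is the zero contour of a signed distance function, and motion at constant normal speed simply shifts that function (a minimal illustration of the representation, not the paper's multi-phase algorithm):

```python
import numpy as np

# Minimal level set sketch: the interface is the zero contour of a signed
# distance function phi. Under motion with constant normal speed V, a
# signed-distance field simply shifts: phi(x, t) = phi0(x) - V * t.

def circle_sdf(x, y, cx=0.0, cy=0.0, r=1.0):
    # Signed distance to a circle: negative inside, positive outside.
    return np.hypot(x - cx, y - cy) - r

def advance_normal(phi, speed, dt):
    # Evolve the level set field one step under constant normal speed.
    return phi - speed * dt

# A circle of radius 1 grown at speed 0.5 for unit time becomes radius 1.5.
x, y = np.meshgrid(np.linspace(-2, 2, 201), np.linspace(-2, 2, 201))
phi = advance_normal(circle_sdf(x, y), speed=0.5, dt=1.0)
```

In a full multi-material simulation two such functions are evolved together, and it is precisely their overlap/disagreement under numerical diffusion that the paper's active-contour correction addresses.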
Haraldsson, Henrik; Kefayati, Sarah; Ahn, Sinyeob; Dyverfeldt, Petter; Lantz, Jonas; Karlsson, Matts; Laub, Gerhard; Ebbers, Tino; Saloner, David
2018-04-01
To measure the Reynolds stress tensor using 4D flow MRI, and to evaluate its contribution to computed pressure maps. A method to assess both velocity and Reynolds stress using 4D flow MRI is presented and evaluated. The Reynolds stress is compared via cross-sectional integrals of the Reynolds stress invariants. Pressure maps are computed using the pressure Poisson equation, both including and neglecting the Reynolds stress. Good agreement is seen for Reynolds stress between computational fluid dynamics, simulated MRI, and the MRI experiment. The Reynolds stress can significantly influence the computed pressure loss for simulated (e.g., -0.52% vs. -15.34% error; P < 0.001) and experimental (e.g., 306 ± 11 vs. 203 ± 6 Pa; P < 0.001) data. A 54% greater pressure loss is seen at the highest experimental flow rate when accounting for Reynolds stress (P < 0.001). 4D flow MRI with extended motion-encoding enables quantification of both the velocity and the Reynolds stress tensor. The additional information provided by this method improves the assessment of pressure gradients across a stenosis in the presence of turbulence. Unlike conventional methods, which are only valid if the flow is laminar, the proposed method is valid for both laminar and disturbed flow, a common presentation in diseased vessels. Magn Reson Med 79:1962-1971, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
New Challenges in Computational Thermal Hydraulics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yadigaroglu, George; Lakehal, Djamel
New needs and opportunities drive the development of novel computational methods for the design and safety analysis of light water reactors (LWRs). Some new methods are likely to be three dimensional. Coupling is expected between system codes, computational fluid dynamics (CFD) modules, and cascades of computations at scales ranging from the macro- or system scale to the micro- or turbulence scales, with the various levels continuously exchanging information back and forth. The ISP-42/PANDA and the international SETH project provide opportunities for testing applications of single-phase CFD methods to LWR safety problems. Although industrial single-phase CFD applications are commonplace, computational multifluid dynamics is still under development. However, first applications are appearing; the state of the art and its potential uses are discussed. The case study of condensation of steam/air mixtures injected from a downward-facing vent into a pool of water is a perfect illustration of a simulation cascade: At the top of the hierarchy of scales, system behavior can be modeled with a system code; at the central level, the volume-of-fluid method can be applied to predict large-scale bubbling behavior; at the bottom of the cascade, direct-contact condensation can be treated with direct numerical simulation, in which turbulent flow (in both the gas and the liquid), interfacial dynamics, and heat/mass transfer are directly simulated without resorting to models.
Adaptive scapula bone remodeling computational simulation: Relevance to regenerative medicine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharma, Gulshan B., E-mail: gbsharma@ucalgary.ca; University of Pittsburgh, Swanson School of Engineering, Department of Bioengineering, Pittsburgh, Pennsylvania 15213; University of Calgary, Schulich School of Engineering, Department of Mechanical and Manufacturing Engineering, Calgary, Alberta T2N 1N4
Shoulder arthroplasty success has been attributed to many factors, including bone quality, soft tissue balancing, surgeon experience, and implant design. Improved long-term success is primarily limited by glenoid implant loosening. Prosthesis design examines materials and shape and determines whether the design can withstand a lifetime of use. Finite element (FE) analyses have been extensively used to study stresses and strains produced in implants and bone. However, these static analyses capture only a moment in time, not the adaptive response to the altered environment produced by the therapeutic intervention. Computational analyses that integrate remodeling rules predict how bone will respond over time. Recent work has shown that subject-specific two- and three-dimensional adaptive bone remodeling models are feasible and valid. Feasibility and validation were achieved computationally by simulating bone remodeling using an intact human scapula: the scapular bone material properties were initially reset to be uniform, sequential loading was numerically simulated, and the bone remodeling simulation results were compared to the actual scapula's material properties. A three-dimensional scapula FE bone model was created using volumetric computed tomography images. Muscle and joint loads and boundary conditions were applied based on values reported in the literature. Internal bone remodeling was based on element strain-energy density. Initially, all bone elements were assigned a homogeneous density. All loads were applied for 10 iterations. After every iteration, each bone element's remodeling stimulus was compared to its corresponding reference stimulus and its material properties were modified. The simulation achieved convergence. At the end of the simulation the predicted and actual specimen bone apparent densities were plotted and compared. The locations of high and low predicted bone density were comparable to the actual specimen.
Predicted high densities were greater, and predicted low densities lower, than in the actual specimen. The differences were probably due to the applied muscle and joint reaction loads, the boundary conditions, and the values of the constants used; work is underway to study this. Nonetheless, the results demonstrate the validity and potential of three-dimensional bone remodeling simulation. Such adaptive predictions take physiological bone remodeling simulations one step closer to reality. Computational analyses are needed that integrate biological remodeling rules and predict how bone will respond over time. We expect the combination of computational static stress analyses and adaptive bone remodeling simulations to become effective tools for regenerative medicine research.
Adaptive scapula bone remodeling computational simulation: Relevance to regenerative medicine
NASA Astrophysics Data System (ADS)
Sharma, Gulshan B.; Robertson, Douglas D.
2013-07-01
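The iterative stimulus-versus-reference update described above can be sketched for a single element (the modulus-density power law, constants, load and iteration count below are illustrative assumptions, not the study's values):

```python
# Hedged single-element sketch of strain-energy-density-driven remodeling:
# the density is nudged each iteration until the remodeling stimulus matches
# a reference value. All constants are invented for illustration.

def remodel(rho, stress, s_ref, b=1.0, dt=0.5, rho_min=0.05, rho_max=1.8, c=3790.0):
    e = c * rho ** 2                   # assumed modulus-density power law
    sed = stress ** 2 / (2.0 * e)      # strain energy density (uniaxial)
    stimulus = sed / rho               # remodeling stimulus per unit density
    rho += b * (stimulus - s_ref) * dt # drive stimulus toward the reference
    return min(max(rho, rho_min), rho_max)  # physiological density bounds

def simulate(rho0, stress, s_ref, iters=300):
    # Repeated load application: each iteration compares the element's
    # stimulus to its reference and modifies the material properties.
    rho = rho0
    for _ in range(iters):
        rho = remodel(rho, stress, s_ref)
    return rho
```

Starting from a homogeneous density, the element converges to the density at which its stimulus equals the reference, echoing the convergent whole-scapula simulation described in the abstract.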
PREFACE: New trends in Computer Simulations in Physics and not only in physics
NASA Astrophysics Data System (ADS)
Shchur, Lev N.; Krashakov, Serge A.
2016-02-01
In this volume we have collected papers based on the presentations given at the International Conference on Computer Simulations in Physics and beyond (CSP2015), held in Moscow, September 6-10, 2015. We hope that this volume will be helpful and scientifically interesting for readers. The Conference was organized for the first time with the common efforts of the Moscow Institute for Electronics and Mathematics (MIEM) of the National Research University Higher School of Economics, the Landau Institute for Theoretical Physics, and the Science Center in Chernogolovka. The name of the Conference emphasizes the multidisciplinary nature of computational physics, whose methods are applied to a broad range of current research in science and society. The choice of venue was motivated by the multidisciplinary character of the MIEM, a formerly independent university which has recently become part of the National Research University Higher School of Economics. The Conference Computer Simulations in Physics and beyond (CSP) is planned to be organized every two years. This year's Conference featured 99 presentations, including 21 plenary and invited talks, ranging from the analysis of Irish myths with recent methods of statistical physics to computing with the novel quantum computers D-Wave and D-Wave 2. This volume covers various areas of computational physics and emerging subjects within the computational physics community. Each section was preceded by invited talks presenting the latest algorithms and methods in computational physics, as well as new scientific results. Both parallel and poster sessions paid special attention to numerical methods, applications and results. For all the abstracts presented at the conference please follow the link http://csp2015.ac.ru/files/book5x.pdf
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glotzer, S. C.; Kim, S.; Cummings, P. T.
This WTEC panel report assesses the international research and development activities in the field of Simulation-Based Engineering and Science (SBE&S). SBE&S involves the use of computer modeling and simulation to solve mathematical formulations of physical models of engineered and natural systems. SBE&S today has reached a level of predictive capability such that it now firmly complements the traditional pillars of theory and experimentation/observation. As a result, computer simulation is more pervasive today, and having more impact, than at any other time in human history. Many critical technologies on the horizon, including those to develop new energy sources and to shift the cost-benefit factors in healthcare, cannot be understood, developed, or utilized without simulation. A panel of experts reviewed and assessed the state of the art in SBE&S as well as levels of activity overseas in the broad thematic areas of life sciences and medicine, materials, and energy and sustainability; and in the crosscutting issues of next-generation hardware and algorithms; software development; engineering simulations; validation, verification, and uncertainty quantification; multiscale modeling and simulation; and SBE&S education. The panel hosted a U.S. baseline workshop, conducted a bibliometric analysis, consulted numerous experts and reports, and visited 59 institutions and companies throughout East Asia and Western Europe to explore the active research projects in those institutions, the computational infrastructure used for the projects, the funding schemes that enable the research, the collaborative interactions among universities, national laboratories, and corporate research centers, and workforce needs and development for SBE&S.
Computations of Drop Collision and Coalescence
NASA Technical Reports Server (NTRS)
Tryggvason, Gretar; Juric, Damir; Nas, Selman; Mortazavi, Saeed
1996-01-01
Computations of drop collisions, coalescence, and other problems involving drops are presented. The computations are made possible by a finite difference/front tracking technique that allows direct solutions of the Navier-Stokes equations for a multi-fluid system with complex, unsteady internal boundaries. This method has been used to examine the various collision modes for binary collisions of drops of equal size, mixing of two drops of unequal size, the behavior of a suspension of drops in linear and parabolic shear flows, and the thermal migration of several drops. The key results from these simulations are reviewed. Extensions of the method to phase change problems and preliminary results for boiling are also shown.
Stochastic simulation of ecohydrological interactions between vegetation and groundwater
NASA Astrophysics Data System (ADS)
Dwelle, M. C.; Ivanov, V. Y.; Sargsyan, K.
2017-12-01
The complex interactions between groundwater and vegetation in the Amazon rainforest may yield vital ecophysiological interactions in specific landscape niches such as buffering plant water stress during dry season or suppression of water uptake due to anoxic conditions. Representation of such processes is greatly impacted by both external and internal sources of uncertainty: inaccurate data and subjective choice of model representation. The models that can simulate these processes are complex and computationally expensive, and therefore make it difficult to address uncertainty using traditional methods. We use the ecohydrologic model tRIBS+VEGGIE and a novel uncertainty quantification framework applied to the ZF2 watershed near Manaus, Brazil. We showcase the capability of this framework for stochastic simulation of vegetation-hydrology dynamics. This framework is useful for simulation with internal and external stochasticity, but this work will focus on internal variability of groundwater depth distribution and model parameterizations. We demonstrate the capability of this framework to make inferences on uncertain states of groundwater depth from limited in situ data, and how the realizations of these inferences affect the ecohydrological interactions between groundwater dynamics and vegetation function. We place an emphasis on the probabilistic representation of quantities of interest and how this impacts the understanding and interpretation of the dynamics at the groundwater-vegetation interface.
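The probabilistic representation of a quantity of interest emphasised above can be illustrated with plain Monte Carlo over an uncertain groundwater depth (the response curve, the lognormal depth distribution and all parameters below are invented for illustration, not tRIBS+VEGGIE relations):

```python
import random
import statistics

# Toy Monte Carlo sketch: sample an uncertain groundwater depth, propagate
# each sample through an assumed water-stress response, and report the
# quantity of interest as a distribution rather than a single value.

def dry_season_stress(depth_m):
    # Assumed response: groundwater within ~3 m of the surface buffers
    # dry-season stress entirely; deeper water tables increase stress.
    return min(1.0, max(0.0, (depth_m - 3.0) / 7.0))

def stress_distribution(n=10000, seed=7):
    rng = random.Random(seed)
    # Assumed lognormal uncertainty on the water-table depth [m].
    samples = [dry_season_stress(rng.lognormvariate(1.0, 0.5)) for _ in range(n)]
    samples.sort()
    return {"mean": statistics.fmean(samples),
            "p90": samples[int(0.9 * n)]}   # 90th-percentile stress
```

Reporting the mean together with an upper percentile, rather than a single deterministic value, is the kind of probabilistic summary of ecohydrological response the abstract advocates.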
An intelligent interactive simulator of clinical reasoning in general surgery.
Wang, S.; el Ayeb, B.; Echavé, V.; Preiss, B.
1993-01-01
We introduce an interactive computer environment for teaching in general surgery and for diagnostic assistance. The environment consists of a knowledge-based system coupled with an intelligent interface that allows users to acquire conceptual knowledge and clinical reasoning techniques. Knowledge is represented internally within a probabilistic framework and externally through an interface inspired by Concept Graphics. Given a set of symptoms, the internal knowledge framework computes the most probable set of diseases as well as the best alternatives. The interface displays CGs illustrating the results and prompting essential facts of a medical situation or a process. The system is then ready to receive additional information or to suggest further investigation. Based on the new information, the system narrows the solutions with increased belief coefficients. PMID:8130508
An ABC estimate of pedigree error rate: application in dog, sheep and cattle breeds.
Leroy, G; Danchin-Burge, C; Palhiere, I; Baumung, R; Fritz, S; Mériaux, J C; Gautier, M
2012-06-01
On the basis of correlations between pairwise individual genealogical kinship coefficients and allele-sharing distances computed from genotyping data, we propose an approximate Bayesian computation (ABC) approach to assess pedigree file reliability through gene-dropping simulations. We explore the features of the method using simulated data sets and show that precision increases with the number of markers. An application is further made to five dog breeds, four sheep breeds and one cattle breed raised in France, displaying various characteristics and population sizes, using microsatellite or SNP markers. Depending on the breed, pedigree error estimates range between 1% and 9% in the dog breeds, 1% and 10% in the sheep breeds, and 4% in the cattle breed. © 2011 The Authors, Animal Genetics © 2011 Stichting International Foundation for Animal Genetics.
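An ABC rejection scheme of the kind described can be sketched with a stand-in gene-dropping simulator (the forward model, prior range, tolerance and noise level below are illustrative assumptions, not the paper's model):

```python
import random

# Hedged sketch of ABC rejection for a pedigree error rate: simulate the
# summary statistic under candidate error rates drawn from a prior, and keep
# the rates whose simulated statistic lies close to the observed one.

def simulate_correlation(error_rate, rng):
    # Stand-in forward model: more pedigree errors weaken the correlation
    # between genealogical kinship and marker-based allele sharing.
    return 0.9 * (1.0 - error_rate) + rng.gauss(0.0, 0.01)

def abc_rejection(observed, n_draws=20000, tol=0.01, seed=1):
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_draws):
        rate = rng.uniform(0.0, 0.2)   # assumed flat prior on the error rate
        if abs(simulate_correlation(rate, rng) - observed) < tol:
            accepted.append(rate)
    return sum(accepted) / len(accepted)   # posterior-mean estimate
```

With an observed correlation generated at a true error rate of 5%, the accepted draws concentrate around 0.05, which is the posterior summary an ABC analysis would report.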
NASA Technical Reports Server (NTRS)
1991-01-01
Analytical Design Service Corporation, Ann Arbor, MI, used NASTRAN (a NASA Structural Analysis program that analyzes a design and predicts how parts will perform) in tests of transmissions, engine cooling systems, internal engine parts, and body components. They also use it to design future automobiles. Analytical software can save millions by allowing computer simulated analysis of performance even before prototypes are built.
A university/industry panel will report on the progress and findings of a five-year project funded by the US Environmental Protection Agency. The project's end product will be a Web-based, 3D computer-simulated residential environment with a decision support system to assist...
Signature modelling and radiometric rendering equations in infrared scene simulation systems
NASA Astrophysics Data System (ADS)
Willers, Cornelius J.; Willers, Maria S.; Lapierre, Fabian
2011-11-01
The development and optimisation of modern infrared systems necessitates the use of simulation systems to create radiometrically realistic representations (e.g. images) of infrared scenes. Such simulation systems are used in signature prediction, the development of surveillance and missile sensors, signal/image processing algorithm development, and aircraft self-protection countermeasure system development and evaluation. Even the most cursory investigation reveals a multitude of factors affecting the infrared signatures of real-world objects. Factors such as spectral emissivity, spatial/volumetric radiance distribution, specular reflection, reflected direct sunlight, reflected ambient light, atmospheric degradation and more all affect the presentation of an object's instantaneous signature. The signature furthermore varies dynamically as a result of internal and external influences on the object, resulting from the heat balance comprising insolation, internal heat sources, aerodynamic heating (for airborne objects), conduction, convection and radiation. In order to accurately render the object's signature in a computer simulation, the rendering equations must therefore account for all the elements of the signature. In this overview paper, the signature models, rendering equations and application frameworks of three infrared simulation systems are reviewed and compared. The paper first considers the problem of infrared scene simulation within a framework for simulation validation. This approach provides concise definitions and a convenient context for considering signature models and their subsequent computer implementation. The primary radiometric requirements for an infrared scene simulator are presented next. The signature models and rendering equations implemented in OSMOSIS (Belgian Royal Military Academy), DIRSIG (Rochester Institute of Technology) and OSSIM (CSIR & Denel Dynamics) are reviewed.
Despite the three simulation systems' different application focus areas, their underlying physics-based approach is similar. The commonalities and differences between the systems are investigated in the context of their somewhat different application areas. The application of an infrared scene simulation system to the development of imaging missiles and missile countermeasures is briefly described. Flowing from the review of the available models and equations, recommendations are made to further enhance and improve the signature models and rendering equations in infrared scene simulators.
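As an illustration of the rendering terms listed in the abstract above, here is a minimal sketch of an apparent-radiance calculation: a Planck blackbody term scaled by emissivity plus a reflected-ambient term for an opaque, diffuse surface, attenuated by the atmosphere. The temperatures, emissivity and atmospheric transmittance are illustrative assumptions, not values from any of the reviewed simulators.

```python
import math

# Physical constants (SI units)
H = 6.62607015e-34   # Planck constant [J s]
C = 2.99792458e8     # speed of light [m/s]
KB = 1.380649e-23    # Boltzmann constant [J/K]

def planck_radiance(wavelength_m, temp_k):
    """Spectral radiance of a blackbody [W / (m^2 sr m)]."""
    a = 2.0 * H * C**2 / wavelength_m**5
    b = math.exp(H * C / (wavelength_m * KB * temp_k)) - 1.0
    return a / b

def apparent_radiance(wavelength_m, temp_k, emissivity, ambient_radiance, tau_atm):
    """Emitted plus reflected-ambient radiance, attenuated by the atmosphere.
    Assumes an opaque, diffuse surface, so reflectance = 1 - emissivity."""
    emitted = emissivity * planck_radiance(wavelength_m, temp_k)
    reflected = (1.0 - emissivity) * ambient_radiance
    return tau_atm * (emitted + reflected)

# Example: a 350 K surface observed at 4 um (midwave IR), 290 K ambient
L = apparent_radiance(4e-6, 350.0, emissivity=0.85,
                      ambient_radiance=planck_radiance(4e-6, 290.0),
                      tau_atm=0.7)
```

A full simulator adds the spectral, directional and dynamic terms the abstract enumerates; this sketch only shows how the emitted and reflected components combine.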
NASA Astrophysics Data System (ADS)
Sawicki, J.; Siedlaczek, P.; Staszczyk, A.
2018-03-01
A numerical three-dimensional model for computing the residual stresses generated in the cross section of 42CrMo4 steel after nitriding is presented. The diffusion process is analyzed by the finite-element method. The internal stresses are computed using the obtained profile of the distribution of the nitrogen concentration. The special features of the intricate geometry of the treated articles, including edges and angles, are considered. A comparative analysis of the results of the simulation and of the experimental measurement of residual stresses is performed by the Waisman-Philips method.
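The diffusion step described above can be illustrated with a one-dimensional explicit finite-difference solution of Fick's second law. The diffusivity, grid spacing and surface concentration below are placeholder values chosen for illustration, not the paper's 42CrMo4 nitriding data.

```python
def diffuse_1d(c, d_coeff, dx, dt, steps, surface_conc):
    """Explicit FTCS scheme for Fick's second law, dc/dt = D * d2c/dx2.
    Fixed concentration at the surface (x = 0), zero flux at depth."""
    c = list(c)
    r = d_coeff * dt / dx**2
    assert r <= 0.5, "explicit scheme stability requires D*dt/dx^2 <= 0.5"
    for _ in range(steps):
        c[0] = surface_conc          # surface held at the boundary value
        new = c[:]
        for i in range(1, len(c) - 1):
            new[i] = c[i] + r * (c[i+1] - 2*c[i] + c[i-1])
        new[-1] = new[-2]            # zero-flux inner boundary
        c = new
    return c

# Illustrative values only: 50 nodes, 10 um spacing, fixed surface concentration
profile = diffuse_1d([0.0]*50, d_coeff=1e-11, dx=1e-5, dt=2e-3,
                     steps=2000, surface_conc=1.0)
```

The resulting concentration profile decays monotonically with depth; in the paper's workflow such a profile feeds the subsequent residual-stress computation.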
NASA Technical Reports Server (NTRS)
Tam, Christopher K. W.; Fang, Jun; Kurbatskii, Konstantin A.
1996-01-01
A set of nonhomogeneous radiation and outflow conditions which automatically generate prescribed incoming acoustic or vorticity waves and, at the same time, are transparent to outgoing sound waves produced internally in a finite computation domain is proposed. This type of boundary condition is needed for the numerical solution of many exterior aeroacoustics problems. In computational aeroacoustics, the computation scheme must be as nondispersive and nondissipative as possible. It must also support waves with wave speeds which are nearly the same as those of the original linearized Euler equations. To meet these requirements, a high-order/large-stencil scheme is necessary. The proposed nonhomogeneous radiation and outflow boundary conditions are designed primarily for use in conjunction with such high-order/large-stencil finite difference schemes.
Computational Hemodynamic Simulation of Human Circulatory System under Altered Gravity
NASA Technical Reports Server (NTRS)
Kim, Chang Sung; Kiris, Cetin; Kwak, Dochan
2003-01-01
A computational hemodynamics approach is presented to simulate the blood flow through the human circulatory system under altered gravity conditions. Numerical techniques relevant to hemodynamics issues are introduced, including non-Newtonian modeling for flow characteristics governed by red blood cells, distensible wall motion due to the heart pulse, and capillary bed modeling for outflow boundary conditions. Gravitational body force terms are added to the Navier-Stokes equations to study the effects of gravity on internal flows. Six types of gravity benchmark problems are presented to provide a fundamental understanding of gravitational effects on the human circulatory system. For code validation, computed results are compared with steady and unsteady experimental data for non-Newtonian flows in a carotid bifurcation model and a curved circular tube, respectively. This computational approach is then applied to the blood circulation in the human brain as a target problem. A three-dimensional, idealized Circle of Willis configuration is developed with minor arteries truncated based on anatomical data. The simulations demonstrate not only the mechanism of the collateral circulation but also the effects of gravity on the distensible wall motion and the resultant flow patterns.
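The role of the gravitational body-force term can be conveyed with a back-of-the-envelope hydrostatic estimate: the pressure difference across a heart-to-brain blood column scales linearly with the gravity level. The column height and blood density below are textbook approximations, not parameters from the paper.

```python
RHO_BLOOD = 1060.0  # kg/m^3, typical blood density

def hydrostatic_drop(g, height_m, rho=RHO_BLOOD):
    """Pressure difference rho*g*h across a vertical fluid column, in mmHg."""
    pascals = rho * g * height_m
    return pascals / 133.322  # 1 mmHg = 133.322 Pa

# Heart-to-brain column of ~0.35 m under different gravity levels
for g in (0.0, 9.81, 3.0 * 9.81):   # microgravity, 1 g, 3 g
    dp = hydrostatic_drop(g, 0.35)
```

At 1 g this gives roughly 27 mmHg, which is why cerebral perfusion is sensitive to gravity level and posture; the full simulation of course resolves the distensible-wall flow rather than just this static term.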
Simulating the Gradually Deteriorating Performance of an RTG
NASA Technical Reports Server (NTRS)
Wood, Eric G.; Ewell, Richard C.; Patel, Jagdish; Hanks, David R.; Lozano, Juan A.; Snyder, G. Jeffrey; Noon, Larry
2008-01-01
Degra (now in version 3) is a computer program that simulates the performance of a radioisotope thermoelectric generator (RTG) over its lifetime. Degra is provided with a graphical user interface that is used to edit input parameters that describe the initial state of the RTG and the time-varying loads and environment to which it will be exposed. Performance is computed by modeling the flows of heat from the radioactive source and through the thermocouples, also allowing for losses, to determine the temperature drop across the thermocouples. This temperature drop is used to determine the open-circuit voltage, electrical resistance, and thermal conductance of the thermocouples. Output power can then be computed by relating the open-circuit voltage and the electrical resistance of the thermocouples to a specified time-varying load voltage. Degra accounts for the gradual deterioration of performance attributable primarily to decay of the radioactive source and secondarily to gradual deterioration of the thermoelectric material. To provide guidance to an RTG designer, given a minimum of input, Degra computes the dimensions, masses, and thermal conductances of important internal structures as well as the overall external dimensions and total mass.
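The load-line calculation sketched in the abstract (open-circuit voltage, internal resistance, specified load voltage) reduces to a few lines. The voltages and resistance below are illustrative assumptions, not Degra's actual model; the 87.7-year Pu-238 half-life is the standard value for the usual RTG heat source.

```python
def rtg_output_power(v_open_circuit, r_internal, v_load):
    """Electrical power delivered to a load held at v_load.
    The current follows from the drop across the internal resistance."""
    current = (v_open_circuit - v_load) / r_internal
    return v_load * current

def thermal_decay_factor(t_years, half_life_years=87.7):
    """Radioactive decay of a Pu-238 heat source relative to beginning of life."""
    return 0.5 ** (t_years / half_life_years)

# Illustrative: 30 V open-circuit, 2 ohm internal resistance, 28 V bus
p0 = rtg_output_power(30.0, 2.0, 28.0)   # beginning-of-life power, 28 W
```

Degra additionally models the temperature-dependent thermocouple properties and material degradation, so the open-circuit voltage and resistance themselves evolve over the mission.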
Fang, Juan; Gong, He; Kong, Lingyan; Zhu, Dong
2013-12-20
Bone can adjust its morphological structure to adapt to the changes of mechanical environment, i.e. the bone structure change is related to mechanical loading. This implies that osteoarthritis may be closely associated with knee joint deformity. The purposes of this paper were to simulate the internal bone mineral density (BMD) change in three-dimensional (3D) proximal tibia under different mechanical environments, as well as to explore the relationship between mechanical environment and bone morphological abnormity. The right proximal tibia was scanned with CT to reconstruct a 3D proximal tibia model in MIMICS, then it was imported to finite element software ANSYS to establish 3D finite element model. The internal structure of 3D proximal tibia of young normal people was simulated using quantitative bone remodeling theory in combination with finite element method, then based on the changing pattern of joint contact force on the tibial plateau in valgus knees, the mechanical loading was changed, and the simulated normal tibia structure was used as initial structure to simulate the internal structure of 3D proximal tibia for old people with 6° valgus deformity. Four regions of interest (ROIs) were selected in the proximal tibia to quantitatively analyze BMD and compare with the clinical measurements. The simulation results showed that the BMD distribution in 3D proximal tibia was consistent with clinical measurements in normal knees and that in valgus knees was consistent with the measurement of patients with osteoarthritis in clinics. It is shown that the change of mechanical environment is the main cause for the change of subchondral bone structure, and being under abnormal mechanical environment for a long time may lead to osteoarthritis. Besides, the simulation method adopted in this paper can more accurately simulate the internal structure of 3D proximal tibia under different mechanical environments. 
It helps to better understand the mechanism of osteoarthritis and provides theoretical basis and computational method for the prevention and treatment of osteoarthritis. It can also serve as basis for further study on periprosthetic BMD changes after total knee arthroplasty, and provide a theoretical basis for optimization design of prosthesis.
PMID:24359345
Pressure Oscillations and Structural Vibrations in Space Shuttle RSRM and ETM-3 Motors
NASA Technical Reports Server (NTRS)
Mason, D. R.; Morstadt, R. A.; Cannon, S. M.; Gross, E. G.; Nielsen, D. B.
2004-01-01
The complex interactions between internal motor pressure oscillations resulting from vortex shedding, the motor's internal acoustic modes, and the motor's structural vibration modes were assessed for the Space Shuttle four-segment booster Reusable Solid Rocket Motor and for the five-segment engineering test motor ETM-3. Two approaches were applied: 1) a predictive procedure based on numerically solving modal representations of a solid rocket motor's acoustic equations of motion and 2) a computational fluid dynamics two-dimensional axisymmetric large eddy simulation at discrete motor burn times.
Investigation of parabolic computational techniques for internal high-speed viscous flows
NASA Technical Reports Server (NTRS)
Anderson, O. L.; Power, G. D.
1985-01-01
A feasibility study was conducted to assess the applicability of an existing parabolic analysis (ADD: Axisymmetric Diffuser Duct), developed previously for subsonic viscous internal flows, to mixed supersonic/subsonic flows with heat addition simulating a SCRAMJET combustor. A study was conducted with the ADD code modified to include additional convection effects in the normal momentum equation when supersonic expansion and compression waves were present. It is concluded from the present study that, for the class of problems where strong viscous/inviscid interactions are present, a global iteration procedure is required.
Photographic coverage of STS-112 during EVA 3 in VR Lab.
2002-08-21
JSC2002-E-34618 (21 August 2002) --- Astronaut Piers J. Sellers, STS-112 mission specialist, uses virtual reality hardware in the Space Vehicle Mockup Facility at the Johnson Space Center (JSC) to rehearse some of his duties on the upcoming mission to the International Space Station (ISS). This type of virtual reality training allows the astronauts to wear a helmet and special gloves while looking at computer displays simulating actual movements around the various locations on the International Space Station (ISS) hardware with which they will be working.
1988-07-01
Mallik and Ghosh [7], Haddow, Barr and Mook [8], Miles [9], and Nayfeh [10]. Four first-order ordinary differential equations that describe the...amplitude- and phase-modulated responses in their analog-computer simulations of a special form of equations (1) and (2). Hatwal, Mallik and Ghosh [7]...BAJAJ 1978 Journal of Applied Mechanics 43, 895-902. Bifurcations in dynamical systems with internal resonance. 7. H. HATWAL, A. K. MALLIK and A
2011-02-01
provision of law, no person shall be subject to any penalty for failing to comply with a collection of information if it does not display a currently ...transition characteristics as well as the effectiveness of 2-D strip trips to simulate the joint between the nosecap and body of the vehicle and 3-D...diamond shaped trips, to simulate the fasteners on a closeout panel that will be on one side of the flight vehicle. In order to accomplish this, global
Signal decomposition for surrogate modeling of a constrained ultrasonic design space
NASA Astrophysics Data System (ADS)
Homa, Laura; Sparkman, Daniel; Wertz, John; Welter, John; Aldrin, John C.
2018-04-01
The U.S. Air Force seeks to improve the methods and measures by which the lifecycle of composite structures is managed. Nondestructive evaluation of damage - particularly internal damage resulting from impact - represents a significant input to that improvement. Conventional ultrasound can detect this damage; however, full 3D characterization has not been demonstrated. A proposed approach for robust characterization uses model-based inversion through fitting of simulated results to experimental data. One challenge with this approach is the high computational expense of the forward model to simulate the ultrasonic B-scans for each damage scenario. A potential solution is to construct a surrogate model from a subset of simulated ultrasonic scans built using a highly accurate, computationally expensive forward model. However, the dimensionality of these simulated B-scans makes interpolating between them a difficult and potentially infeasible problem. Thus, we propose using the chirplet decomposition to reduce the dimensionality of the data and allow for interpolation in the chirplet parameter space. By applying the chirplet decomposition, we are able to extract the salient features in the data and construct a surrogate forward model.
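To make the dimensionality-reduction idea concrete, the sketch below builds a real Gaussian chirplet atom and projects a synthetic echo onto a tiny hand-picked dictionary; the matched atom yields a much larger coefficient than mismatched ones, so a few chirplet parameters can summarize the trace. The atom parameterization, dictionary and signal are illustrative assumptions, not the authors' actual decomposition.

```python
import math

def chirplet(t, t0, sigma, f0, chirp_rate):
    """Real Gaussian chirplet atom: windowed tone with linearly swept frequency."""
    tau = t - t0
    env = math.exp(-tau * tau / (2.0 * sigma * sigma))
    return env * math.cos(2.0 * math.pi * (f0 * tau + 0.5 * chirp_rate * tau * tau))

def project(signal, times, atom_params):
    """Inner product of the sampled signal with one chirplet atom."""
    return sum(s * chirplet(t, *atom_params) for s, t in zip(signal, times))

# Synthetic "B-scan trace": one chirped echo sampled at 1 kHz for 1 s
times = [i * 1e-3 for i in range(1000)]
signal = [chirplet(t, 0.5, 0.05, 40.0, 20.0) for t in times]

dictionary = [(0.5, 0.05, 40.0, 20.0),   # matched atom
              (0.5, 0.05, 60.0, 20.0),   # wrong centre frequency
              (0.3, 0.05, 40.0, 20.0)]   # wrong arrival time
coeffs = [project(signal, times, p) for p in dictionary]
```

In practice the decomposition is fitted rather than looked up in a fixed dictionary, and interpolation for the surrogate then happens over the fitted chirplet parameters instead of the raw samples.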
Analysis of impact of general-purpose graphics processor units in supersonic flow modeling
NASA Astrophysics Data System (ADS)
Emelyanov, V. N.; Karpenko, A. G.; Kozelkov, A. S.; Teterina, I. V.; Volkov, K. N.; Yalozo, A. V.
2017-06-01
Computational methods are widely used in prediction of complex flowfields associated with off-normal situations in aerospace engineering. Modern graphics processing units (GPU) provide architectures and new programming models that make it possible to harness their large processing power and to design computational fluid dynamics (CFD) simulations at both high performance and low cost. Possibilities of the use of GPUs for the simulation of external and internal flows on unstructured meshes are discussed. The finite volume method is applied to solve three-dimensional unsteady compressible Euler and Navier-Stokes equations on unstructured meshes with high resolution numerical schemes. CUDA technology is used for the programming implementation of parallel computational algorithms. Solutions of some benchmark test cases on GPUs are reported, and the computed results are compared with experimental and computational data. Approaches to optimization of the CFD code related to the use of different types of memory are considered. Speedup of the solution on GPUs with respect to the solution on the central processing unit (CPU) is compared. Performance measurements show that the numerical schemes developed achieve a 20-50x speedup on GPU hardware compared to the CPU reference implementation. The results obtained provide a promising perspective for designing a GPU-based software framework for applications in CFD.
Initial Computations of Vertical Displacement Events with NIMROD
NASA Astrophysics Data System (ADS)
Bunkers, Kyle; Sovinec, C. R.
2014-10-01
Disruptions associated with vertical displacement events (VDEs) have potential for causing considerable physical damage to ITER and other tokamak experiments. We report on initial computations of generic axisymmetric VDEs using the NIMROD code [Sovinec et al., JCP 195, 355 (2004)]. An implicit thin-wall computation has been implemented to couple separate internal and external regions without numerical stability limitations. A simple rectangular cross-section domain generated with the NIMEQ code [Howell and Sovinec, CPC (2014)], modified to use a symmetry condition at the midplane, is used to test linear and nonlinear axisymmetric VDE computation. As current in simulated external coils for large-R/a cases is varied, there is a clear n = 0 stability threshold which lies below the decay-index criterion for the current-loop model of a tokamak to model VDEs [Mukhovatov and Shafranov, Nucl. Fusion 11, 605 (1971)]; a scan of wall distance indicates the offset is due to the influence of the conducting wall. Results with a vacuum region surrounding a resistive wall will also be presented. Initial nonlinear computations show large vertical displacement of an intact simulated tokamak. This effort is supported by U.S. Department of Energy Grant DE-FG02-06ER54850.
Analysis of Change in the Wind Speed Ratio according to Apartment Layout and Solutions
Hyung, Won-gil; Kim, Young-Moon; You, Ki-Pyo
2014-01-01
Apartment complexes in various forms are built in downtown areas. The arrangement of an apartment complex has great influence on the wind flow inside it. There are issues of residents' walking due to gust occurrence within apartment complexes, problems with pollutant emission due to airflow congestion, and heat island and cool island phenomena in apartment complexes. Currently, the forms of internal arrangements of apartment complexes are divided into the flat type and the tower type. In the present study, a wind tunnel experiment and computational fluid dynamics (CFD) simulation were performed with respect to internal wind flows in different apartment arrangement forms. Findings of the wind tunnel experiment showed that the internal form and arrangement of an apartment complex had significant influence on its internal airflow. The wind velocity around the buildings increased by up to 80% due to the proximity effects between the buildings. The CFD simulation for relaxing such wind flows indicated that the wind velocity was reduced by 40% or more when the paths between the lateral sides of the buildings were extended. PMID:24688430
The 6th International Conference on Computer Science and Computational Mathematics (ICCSCM 2017)
NASA Astrophysics Data System (ADS)
2017-09-01
The ICCSCM 2017 (The 6th International Conference on Computer Science and Computational Mathematics) has aimed to provide a platform to discuss computer science and mathematics related issues including Algebraic Geometry, Algebraic Topology, Approximation Theory, Calculus of Variations, Category Theory; Homological Algebra, Coding Theory, Combinatorics, Control Theory, Cryptology, Geometry, Difference and Functional Equations, Discrete Mathematics, Dynamical Systems and Ergodic Theory, Field Theory and Polynomials, Fluid Mechanics and Solid Mechanics, Fourier Analysis, Functional Analysis, Functions of a Complex Variable, Fuzzy Mathematics, Game Theory, General Algebraic Systems, Graph Theory, Group Theory and Generalizations, Image Processing, Signal Processing and Tomography, Information Fusion, Integral Equations, Lattices, Algebraic Structures, Linear and Multilinear Algebra; Matrix Theory, Mathematical Biology and Other Natural Sciences, Mathematical Economics and Financial Mathematics, Mathematical Physics, Measure Theory and Integration, Neutrosophic Mathematics, Number Theory, Numerical Analysis, Operations Research, Optimization, Operator Theory, Ordinary and Partial Differential Equations, Potential Theory, Real Functions, Rings and Algebras, Statistical Mechanics, Structure Of Matter, Topological Groups, Wavelets and Wavelet Transforms, 3G/4G Network Evolutions, Ad-Hoc, Mobile, Wireless Networks and Mobile Computing, Agent Computing & Multi-Agents Systems, All topics related to Image/Signal Processing, Any topics related to Computer Networks, Any topics related to ISO SC-27 and SC-17 standards, Any topics related to PKI (Public Key Infrastructures), Artificial Intelligence (A.I.) & Pattern/Image Recognition, Authentication/Authorization Issues, Biometric authentication and algorithms, CDMA/GSM Communication Protocols, Combinatorics, Graph Theory, and Analysis of Algorithms, Cryptography and Foundation of Computer Security, Database (D.B.)
Management & Information Retrievals, Data Mining, Web Image Mining, & Applications, Defining Spectrum Rights and Open Spectrum Solutions, E-Commerce, Ubiquitous, RFID Applications, Fingerprint/Hand/Biometrics Recognitions and Technologies, Foundations of High-performance Computing, IC-card Security, OTP, and Key Management Issues, IDS/Firewall, Anti-Spam mail, Anti-virus issues, Mobile Computing for E-Commerce, Network Security Applications, Neural Networks and Biomedical Simulations, Quality of Services and Communication Protocols, Quantum Computing, Coding, and Error Controls, Satellite and Optical Communication Systems, Theory of Parallel Processing and Distributed Computing, Virtual Visions, 3-D Object Retrievals, & Virtual Simulations, Wireless Access Security, etc. The success of ICCSCM 2017 is reflected in the papers received from authors around the world from several countries, which allowed a highly multinational and multicultural exchange of ideas and experience. The accepted papers of ICCSCM 2017 are published in this Book. Please check http://www.iccscm.com for further news. A conference such as ICCSCM 2017 can only become successful through a team effort, so herewith we want to thank the International Technical Committee and the Reviewers for their efforts in the review process as well as their valuable advice. We are thankful to all those who contributed to the success of ICCSCM 2017. The Secretary
Effects of internal gain assumptions in building energy calculations
NASA Astrophysics Data System (ADS)
Christensen, C.; Perkins, R.
1981-01-01
The utilization of direct solar gains in buildings can be affected by operating profiles, such as schedules for internal gains, thermostat controls, and ventilation rates. Building energy analysis methods use various assumptions about these profiles. The effects of typical internal gain assumptions in energy calculations are described. Heating and cooling loads from simulations using the DOE 2.1 computer code are compared for various internal gain inputs: typical hourly profiles, constant average profiles, and zero gain profiles. Prototype single-family-detached and multifamily-attached residential units are studied with various levels of insulation and infiltration. Small detached commercial buildings and attached zones in large commercial buildings are studied with various levels of internal gains. The results indicate that calculations of annual heating and cooling loads are sensitive to internal gains, but in most cases are relatively insensitive to hourly variations in internal gains.
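The sensitivity result above can be illustrated with a toy hourly heating-load balance that compares an evening-weighted internal-gain schedule against its constant average. The envelope coefficient, setpoint, gain levels and synthetic weather below are illustrative assumptions, not DOE 2.1 inputs; the point is that the annual totals come out close even though individual hours differ.

```python
import math

UA = 250.0    # overall envelope loss coefficient [W/K], illustrative
T_SET = 20.0  # heating setpoint [C]

def annual_heating_kwh(gains_profile_w, outdoor_temps_c):
    """Sum hourly heating loads: envelope loss minus useful internal gains."""
    total_wh = 0.0
    for hour, t_out in enumerate(outdoor_temps_c):
        gain = gains_profile_w[hour % 24]
        load = UA * (T_SET - t_out) - gain
        total_wh += max(0.0, load)   # no heating credit for overheated hours
    return total_wh / 1000.0

# Synthetic year of outdoor temperatures with seasonal and daily swings
temps = [5.0 + 10.0 * math.sin(2*math.pi*h/8760) + 4.0 * math.sin(2*math.pi*h/24)
         for h in range(8760)]

hourly = [200.0]*7 + [600.0]*16 + [200.0]   # evening-weighted gain profile [W]
flat = [sum(hourly) / 24.0] * 24            # constant average profile

q_hourly = annual_heating_kwh(hourly, temps)
q_flat = annual_heating_kwh(flat, temps)
```

The two annual totals differ only in the relatively few hours where gains exceed the envelope loss, which mirrors the paper's finding that annual loads are sensitive to the gain level but largely insensitive to its hourly shape.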
NASA Astrophysics Data System (ADS)
Zagorska, A.; Bliznakova, K.; Buchakliev, Z.
2015-09-01
In 2012, the International Commission on Radiological Protection recommended a reduction of the dose limits to the eye lens for occupational exposure. Recent studies have shown that in interventional rooms it is possible to reach these limits, especially without using protective equipment. The aim of this study was to calculate the scattered energy spectra distribution at the level of the operator's head. For this purpose, an in-house developed Monte Carlo-based computer application was used to design computational phantoms (patient and operator) and the acquisition geometry, as well as to simulate the photon transport through the designed system. The initial spectra for a 70 kV tube voltage and 8 different filtrations were calculated according to IPEM Report 78. An experimental study was carried out to verify the results from the simulations. The calculated scattered radiation distributions were compared to the initial spectra incident on the patient. Results showed that there is no large difference between the effective energies of the scattered spectra registered in front of the operator's head obtained from simulations of all 8 incident spectra. The results from the experimental study also agreed well with the simulations.
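The near-constant effective energy of the scattered spectra is consistent with Compton kinematics, which govern the energy of singly scattered photons reaching the operator. Below is a minimal sketch of the standard Compton scattering formula; the 40 keV mean energy for a 70 kV beam is assumed purely for illustration, not taken from the paper.

```python
import math

ELECTRON_REST_KEV = 511.0  # electron rest energy

def compton_scattered_energy(e_kev, angle_deg):
    """Photon energy after Compton scattering through angle_deg."""
    cos_t = math.cos(math.radians(angle_deg))
    return e_kev / (1.0 + (e_kev / ELECTRON_REST_KEV) * (1.0 - cos_t))

# Scatter toward the operator from an assumed 40 keV mean beam energy
e90 = compton_scattered_energy(40.0, 90.0)    # side scatter
e180 = compton_scattered_energy(40.0, 180.0)  # back scatter
```

At diagnostic energies the fractional energy loss per scatter is small (a few keV here), which is one reason the scattered spectra at the operator's head retain similar effective energies across filtrations.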
High resolution, MRI-based, segmented, computerized head phantom
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zubal, I.G.; Harrell, C.R.; Smith, E.O.
1999-01-01
The authors have created a high-resolution software phantom of the human brain which is applicable to voxel-based radiation transport calculations yielding nuclear medicine simulated images and/or internal dose estimates. A software head phantom was created from 124 transverse MRI images of a healthy normal individual. The transverse T2 slices, recorded in a 256x256 matrix from a GE Signa 2 scanner, have isotropic voxel dimensions of 1.5 mm and were manually segmented by the clinical staff. Each voxel of the phantom contains one of 62 index numbers designating anatomical, neurological, and taxonomical structures. The result is stored as a 256x256x128 byte array. Internal volumes compare favorably to those described in the ICRP Reference Man. The computerized array represents a high resolution model of a typical human brain and serves as a voxel-based anthropomorphic head phantom suitable for computer-based modeling and simulation calculations. It offers an improved realism over previous mathematically described software brain phantoms, and creates a reference standard for comparing results of newly emerging voxel-based computations. Such voxel-based computations lead the way to developing diagnostic and dosimetry calculations which can utilize patient-specific diagnostic images. However, such individualized approaches lack fast, automatic segmentation schemes for routine use; therefore, the high resolution, typical head geometry gives the most realistic patient model currently available.
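A segmented byte array like this supports very direct bookkeeping: structure volumes, for instance, follow from counting voxels per index and multiplying by the 1.5 mm isotropic voxel volume. The tiny flattened array below is synthetic, standing in for the real 256x256x128 phantom.

```python
from collections import Counter

VOXEL_MM = 1.5                                # isotropic voxel edge, as in the phantom
VOXEL_VOLUME_ML = (VOXEL_MM ** 3) / 1000.0    # mm^3 -> millilitres

def structure_volumes(segmented_voxels):
    """Map each segmentation index to its volume in millilitres."""
    counts = Counter(segmented_voxels)
    return {index: n * VOXEL_VOLUME_ML for index, n in counts.items()}

# Tiny synthetic flattened array: index 0 = background, 1 and 2 = structures
phantom = [0] * 500 + [1] * 300 + [2] * 200
volumes = structure_volumes(phantom)
```

The same per-index lookup is what lets radiation transport codes assign tissue composition and density voxel by voxel.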
Modeling Cross-Situational Word–Referent Learning: Prior Questions
Yu, Chen; Smith, Linda B.
2013-01-01
Both adults and young children possess powerful statistical computation capabilities—they can infer the referent of a word from highly ambiguous contexts involving many words and many referents by aggregating cross-situational statistical information across contexts. This ability has been explained by models of hypothesis testing and by models of associative learning. This article describes a series of simulation studies and analyses designed to understand the different learning mechanisms posited by the 2 classes of models and their relation to each other. Variants of a hypothesis-testing model and a simple or dumb associative mechanism were examined under different specifications of information selection, computation, and decision. Critically, these 3 components of the models interact in complex ways. The models illustrate a fundamental tradeoff between amount of data input and powerful computations: With the selection of more information, dumb associative models can mimic the powerful learning that is accomplished by hypothesis-testing models with fewer data. However, because of the interactions among the component parts of the models, the associative model can mimic various hypothesis-testing models, producing the same learning patterns but through different internal components. The simulations argue for the importance of a compositional approach to human statistical learning: the experimental decomposition of the processes that contribute to statistical learning in human learners and models with the internal components that can be evaluated independently and together. PMID:22229490
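A minimal version of the "dumb" associative mechanism discussed in the article is plain cross-situational co-occurrence counting: after enough ambiguous trials, the correct word-referent pairing accumulates the highest count. The toy vocabulary below is illustrative, not the authors' simulation setup.

```python
from collections import defaultdict

def train_associative(trials):
    """Accumulate word-referent co-occurrence counts across ambiguous trials."""
    assoc = defaultdict(float)
    for words, referents in trials:
        for w in words:
            for r in referents:
                assoc[(w, r)] += 1.0
    return assoc

def best_referent(assoc, word, referents):
    """Pick the referent with the strongest accumulated association."""
    return max(referents, key=lambda r: assoc[(word, r)])

# Each trial pairs several words with several candidate referents (ambiguous)
trials = [(["ball", "dog"], ["BALL", "DOG"]),
          (["ball", "cup"], ["BALL", "CUP"]),
          (["dog", "cup"], ["DOG", "CUP"])]
assoc = train_associative(trials)
```

No single trial disambiguates any word, yet the aggregated counts do; the article's point is that selection and decision components layered on such a mechanism can mimic hypothesis-testing models.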
Cross Domain Deterrence: Livermore Technical Report, 2014-2016
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnes, Peter D.; Bahney, Ben; Matarazzo, Celeste
2016-08-03
Lawrence Livermore National Laboratory (LLNL) is an original collaborator on the project titled “Deterring Complex Threats: The Effects of Asymmetry, Interdependence, and Multi-polarity on International Strategy,” (CDD Project) led by the UC Institute on Global Conflict and Cooperation at UCSD under PIs Jon Lindsay and Erik Gartzke , and funded through the DoD Minerva Research Initiative. In addition to participating in workshops and facilitating interaction among UC social scientists, LLNL is leading the computational modeling effort and assisting with empirical case studies to probe the viability of analytic, modeling and data analysis concepts. This report summarizes LLNL work on themore » CDD Project to date, primarily in Project Years 1-2, corresponding to Federal fiscal year 2015. LLNL brings two unique domains of expertise to bear on this Project: (1) access to scientific expertise on the technical dimensions of emerging threat technology, and (2) high performance computing (HPC) expertise, required for analyzing the complexity of bargaining interactions in the envisioned threat models. In addition, we have a small group of researchers trained as social scientists who are intimately familiar with the International Relations research. We find that pairing simulation scientists, who are typically trained in computer science, with domain experts, social scientists in this case, is the most effective route to developing powerful new simulation tools capable of representing domain concepts accurately and answering challenging questions in the field.« less
Parametric instability and wave turbulence driven by tidal excitation of internal waves
NASA Astrophysics Data System (ADS)
Le Reun, Thomas; Favier, Benjamin; Le Bars, Michael
2018-04-01
We investigate the stability of stratified fluid layers undergoing homogeneous and periodic tidal deformation. We first introduce a local model which allows us to study velocity and buoyancy fluctuations in a Lagrangian domain periodically stretched and sheared by the tidal base flow. While keeping only the key physical ingredients, such a model is efficient for simulating planetary regimes where tidal amplitudes and dissipation are small. With this model, we prove that tidal flows are able to drive parametric subharmonic resonances of internal waves, in a way reminiscent of the elliptical instability in rotating fluids. The growth rates computed via Direct Numerical Simulations (DNS) are in very good agreement with WKB analysis and Floquet theory. We also investigate the turbulence driven by this instability mechanism. With spatio-temporal analysis, we show that it is a weak internal wave turbulence occurring at small Froude and buoyancy Reynolds numbers. When the gap between the excitation and the Brunt-Väisälä frequencies is increased, the frequency spectrum of this wave turbulence displays a -2 power law reminiscent of the high-frequency branch of the Garrett and Munk spectrum (Garrett & Munk 1979), which has been measured in the oceans. In addition, we find that the mixing efficiency is altered compared to what is computed in the context of DNS of stratified turbulence excited at small Froude and large buoyancy Reynolds numbers, and is consistent with a superposition of waves.
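The Floquet computation mentioned above can be illustrated on the Mathieu equation, the textbook minimal model of parametric subharmonic resonance: integrate two independent solutions over one forcing period, form the monodromy matrix, and read the growth rate off its largest eigenvalue. This is a generic sketch with illustrative parameters, not the paper's stratified-flow stability problem.

```python
import math

def mathieu_growth_rate(delta, eps, n_steps=4000):
    """Floquet growth rate for x'' + (delta + eps*cos(t)) x = 0 over period 2*pi.
    Integrates two independent solutions with RK4 and takes the largest
    eigenvalue magnitude of the monodromy matrix (det = 1 here)."""
    def deriv(t, y):
        x, v = y
        return (v, -(delta + eps * math.cos(t)) * x)

    def integrate(y):
        t, h = 0.0, 2.0 * math.pi / n_steps
        for _ in range(n_steps):
            k1 = deriv(t, y)
            k2 = deriv(t + h/2, [y[i] + h/2 * k1[i] for i in range(2)])
            k3 = deriv(t + h/2, [y[i] + h/2 * k2[i] for i in range(2)])
            k4 = deriv(t + h, [y[i] + h * k3[i] for i in range(2)])
            y = [y[i] + h/6 * (k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) for i in range(2)]
            t += h
        return y

    a = integrate([1.0, 0.0])   # columns of the monodromy matrix
    b = integrate([0.0, 1.0])
    trace = a[0] + b[1]
    # Eigenvalues solve m^2 - trace*m + 1 = 0; growth occurs iff |trace| > 2
    disc = trace * trace / 4.0 - 1.0
    mag = (abs(trace) / 2.0 + math.sqrt(disc)) if disc > 0 else 1.0
    return math.log(mag) / (2.0 * math.pi)

# Subharmonic resonance: delta near 1/4 is unstable for small eps
unstable = mathieu_growth_rate(0.25, 0.1)
stable = mathieu_growth_rate(0.6, 0.1)
```

At the principal tongue (delta = 1/4) the computed rate is close to the perturbative estimate eps/(8*sqrt(delta)), while between tongues the Floquet multipliers sit on the unit circle and the rate is zero.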
NASA Technical Reports Server (NTRS)
Roddy, D.; Hatfield, D.; Hassig, P.; Rosenblatt, M.; Soderblom, L.; Dejong, E.
1992-01-01
We have completed computer simulations that model shock effects in the venusian atmosphere caused during the passage of two cometlike bodies 100 m and 1000 m in diameter and an asteroidlike body 10 km in diameter. Our objective is to examine hypervelocity-generated shock effects in the venusian atmosphere for bodies of different types and sizes in order to understand the following: (1) their deceleration and depth of penetration through the atmosphere; and (2) the onset of possible ground-surface shock effects such as splotches, craters, and ejecta formations. The three bodies were chosen to include both a range of general conditions applicable to Venus and three specific cases of current interest. These calculations use a new multiphase computer code (DICE-MAZ) designed by California Research & Technology for shock-dynamics simulations in complex environments. The code was tested and calibrated in large-scale explosion, cratering, and ejecta research. It treats a wide range of different multiphase conditions, including material types (vapor, melt, solid), particle-size distributions, and shock-induced dynamic changes in velocities, pressures, temperatures (internal energies), densities, and other related parameters, all of which were recorded in our calculations.
Bunderson, Nathan E.; Bingham, Jeffrey T.; Sohn, M. Hongchul; Ting, Lena H.; Burkholder, Thomas J.
2015-01-01
Neuromusculoskeletal models solve the basic problem of determining how the body moves under the influence of external and internal forces. Existing biomechanical modeling programs often emphasize dynamics with the goal of finding a feed-forward neural program to replicate experimental data or of estimating the force contributions of individual muscles. The computation of rigid-body dynamics, muscle forces, and activation of the muscles are often performed separately. We have developed an intrinsically forward computational platform (Neuromechanic, www.neuromechanic.com) that explicitly represents the interdependencies among rigid body dynamics, frictional contact, muscle mechanics, and neural control modules. This formulation has significant advantages for optimization and forward simulation, particularly with application to neural controllers with feedback or regulatory features. Explicit inclusion of all state dependencies allows calculation of system derivatives with respect to kinematic states as well as muscle and neural control states, thus affording a wealth of analytical tools, including linearization, stability analyses and calculation of initial conditions for forward simulations. In this review, we describe our algorithm for generating state equations and explain how they may be used in integration, linearization and stability analysis tools to provide structural insights into the neural control of movement. PMID:23027632
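The linearization step described above can be sketched numerically. The toy below (an assumption for illustration; Neuromechanic's actual state equations are far richer) builds a finite-difference Jacobian of a state-derivative function and checks local stability from its eigenvalues, with a damped pendulum standing in for a musculoskeletal state equation.

```python
import numpy as np

def jacobian_fd(f, x0, eps=1e-6):
    """Central-difference Jacobian of a state-derivative function f at state x0."""
    n = len(x0)
    J = np.zeros((n, n))
    for i in range(n):
        dx = np.zeros(n)
        dx[i] = eps
        J[:, i] = (f(x0 + dx) - f(x0 - dx)) / (2 * eps)
    return J

# Stand-in "neuromusculoskeletal" dynamics: a damped pendulum (angle, angular velocity).
g_over_l, damping = 9.81, 0.5

def f(x):
    theta, omega = x
    return np.array([omega, -g_over_l * np.sin(theta) - damping * omega])

# Linearize about the hanging equilibrium; eigenvalues with negative real
# parts imply local asymptotic stability of that equilibrium.
J = jacobian_fd(f, np.array([0.0, 0.0]))
stable = bool(np.all(np.linalg.eigvals(J).real < 0))
```

The same pattern (perturb each state, difference the derivatives, examine the spectrum) applies to any forward state equation, whatever its internal structure.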
Modeling of anomalous electron mobility in Hall thrusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koo, Justin W.; Boyd, Iain D.
Accurate modeling of the anomalous electron mobility is absolutely critical for successful simulation of Hall thrusters. In this work, existing computational models for the anomalous electron mobility are used to simulate the UM/AFRL P5 Hall thruster (a 5 kW laboratory model) in a two-dimensional axisymmetric hybrid particle-in-cell Monte Carlo collision code. Comparison to experimental results indicates that, while these computational models can be tuned to reproduce the correct thrust or discharge current, it is very difficult to match all integrated performance parameters (thrust, power, discharge current, etc.) simultaneously. Furthermore, multiple configurations of these computational models can produce reasonable integrated performance parameters. A semiempirical electron mobility profile is constructed from a combination of internal experimental data and modeling assumptions. This semiempirical electron mobility profile is used in the code and results in more accurate simulation of both the integrated performance parameters and the mean potential profile of the thruster. Results indicate that the anomalous electron mobility, while absolutely necessary in the near-field region, provides a substantially smaller contribution to the total electron mobility in the high Hall current region near the thruster exit plane.
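To make the competing mobility contributions concrete, the sketch below compares a classical collisional cross-field mobility with the empirical Bohm closure, one common model for the anomalous term. The values are illustrative order-of-magnitude choices, not the P5 thruster's semiempirical profile.

```python
import numpy as np

E_CHARGE, M_E = 1.602e-19, 9.109e-31  # electron charge (C) and mass (kg)

def mobility_classical(nu_m, B):
    """Collisional cross-field electron mobility perpendicular to B (m^2/V/s)."""
    omega_c = E_CHARGE * B / M_E                        # cyclotron frequency, rad/s
    return (E_CHARGE / (M_E * nu_m)) / (1.0 + (omega_c / nu_m) ** 2)

def mobility_bohm(B):
    """Bohm mobility, a common empirical closure for the anomalous term: 1/(16 B)."""
    return 1.0 / (16.0 * B)

# Illustrative Hall-thruster-like magnitudes: B ~ 150 G, nu_m ~ 1e7 1/s.
B, nu_m = 0.015, 1.0e7
mu_classical = mobility_classical(nu_m, B)
mu_anomalous = mobility_bohm(B)
```

In strongly magnetized regions the classical term alone is far too small to carry the observed current, which is why some anomalous contribution must be added before a simulation can match measured discharge currents.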
Kar, Julia; Quesada, Peter M
2012-08-01
Anterior cruciate ligament (ACL) injuries are commonly incurred by recreational and professional women athletes during non-contact jumping maneuvers in sports like basketball and volleyball, where ACL injuries are more frequent in females than in males. The in vivo calculation of ACL strain and internal force remains a numerical challenge. This study investigated the effects of increasing stop-jump height on neuromuscular and biomechanical properties of the knee and ACL, when performed by young female recreational athletes. The underlying hypothesis is that increasing stop-jump (platform) height increases knee valgus angles and external moments, which in turn increases ACL strain and internal force. Using numerical analysis tools comprised of Inverse Kinematics, Computed Muscle Control, and Forward Dynamics, a novel approach is presented for computing ACL strain and internal force based on (1) knee joint kinematics and (2) optimization of muscle activation, with the ACL inserted into the musculoskeletal model. Results showed increases in knee valgus external moments and angles with increasing stop-jump height. An increase in stop-jump height from 30 to 50 cm led to an increase in the average peak valgus external moment from 40.5 ± 3.2 to 43.2 ± 3.7 Nm, which coincided with increases in average peak ACL strain, from 9.3 ± 3.1 to 13.7 ± 1.1%, and average peak ACL internal force, from 1056.1 ± 71.4 to 1165.4 ± 123.8 N, for the right side, with comparable increases in the left. In effect, this study demonstrates a technique for estimating dynamic changes to knee and ACL variables by conducting musculoskeletal simulation on motion analysis data collected from actual stop-jump tasks performed by young recreational women athletes.
Quasi-Static Viscoelastic Finite Element Model of an Aircraft Tire
NASA Technical Reports Server (NTRS)
Johnson, Arthur R.; Tanner, John A.; Mason, Angela J.
1999-01-01
An elastic large displacement thick-shell mixed finite element is modified to allow for the calculation of viscoelastic stresses. Internal strain variables are introduced at the element's stress nodes and are employed to construct a viscous material model. First order ordinary differential equations relate the internal strain variables to the corresponding elastic strains at the stress nodes. The viscous stresses are computed from the internal strain variables using viscous moduli which are a fraction of the elastic moduli. The energy dissipated by the action of the viscous stresses is included in the mixed variational functional. The nonlinear quasi-static viscous equilibrium equations are then obtained. Previously developed Taylor expansions of the nonlinear elastic equilibrium equations are modified to include the viscous terms. A predictor-corrector time marching solution algorithm is employed to solve the algebraic-differential equations. The viscous shell element is employed to computationally simulate a stair-step loading and unloading of an aircraft tire in contact with a frictionless surface.
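The internal-strain-variable construction described above reduces to a compact one-dimensional sketch (hypothetical moduli; the actual element is a mixed thick-shell formulation). Each internal variable relaxes toward the elastic strain through a first-order ODE, the viscous stress is proportional to their difference, and an implicit update marches the system in time.

```python
import numpy as np

def stress_history(strain, dt, E, E_v, tau):
    """1-D internal-variable viscoelasticity: q' = (eps - q)/tau, with viscous
    stress E_v*(eps - q); backward-Euler time marching of the internal variable."""
    q, out = 0.0, []
    for eps in strain:
        q = (q + dt * eps / tau) / (1.0 + dt / tau)  # implicit internal-variable update
        out.append(E * eps + E_v * (eps - q))        # elastic + viscous stress
    return np.array(out)

# Step strain held for 20 relaxation times: the stress jumps to (E + E_v)*eps0,
# then relaxes toward the purely elastic value E*eps0.
dt, eps0 = 0.01, 1.0e-3
sigma = stress_history(np.full(2000, eps0), dt, E=1.0e9, E_v=0.2e9, tau=1.0)
```

The viscous modulus being a fraction of the elastic one, as in the abstract, appears here as E_v = 0.2 E.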
Efficient Calculation of Exact Exchange Within the Quantum Espresso Software Package
NASA Astrophysics Data System (ADS)
Barnes, Taylor; Kurth, Thorsten; Carrier, Pierre; Wichmann, Nathan; Prendergast, David; Kent, Paul; Deslippe, Jack
Accurate simulation of condensed matter at the nanoscale requires careful treatment of the exchange interaction between electrons. In the context of plane-wave DFT, these interactions are typically represented through the use of approximate functionals. Greater accuracy can often be obtained through the use of functionals that incorporate some fraction of exact exchange; however, evaluation of the exact exchange potential is often prohibitively expensive. We present an improved algorithm for the parallel computation of exact exchange in Quantum Espresso, an open-source software package for plane-wave DFT simulation. Through the use of aggressive load balancing and on-the-fly transformation of internal data structures, our code exhibits speedups of approximately an order of magnitude for practical calculations. Additional optimizations are presented targeting the many-core Intel Xeon-Phi ``Knights Landing'' architecture, which largely powers NERSC's new Cori system. We demonstrate the successful application of the code to difficult problems, including simulation of water at a platinum interface and computation of the X-ray absorption spectra of transition metal oxides.
Simulation of Complex Cracking in Plain Weave C/SiC Composite under Biaxial Loading
NASA Technical Reports Server (NTRS)
Cheng, Ron-Bin; Hsu, Su-Yuen
2012-01-01
Finite element analysis is performed on a mesh, based on the computed geometry of a plain weave C/SiC composite with assumed internal stacking, to reveal the pattern of internal damage due to biaxial normal cyclic loading. The simulation encompasses intertow matrix cracking, matrix cracking inside the tows, and separation at the tow-intertow matrix and tow-tow interfaces. All these dissipative behaviors are represented by traction-separation cohesive laws. Because the simulation is not aimed at quantitatively predicting the overall stress-strain relation, it does not take the actual process of fiber debonding into account. The fiber tows are represented by a simple rule-of-mixture model in which the reinforcing phase is a hypothetical one-dimensional material. Numerical results indicate that for the plain weave C/SiC composite, 1) matrix-crack initiation sites are primarily determined by large intertow matrix voids and interlayer tow-tow contacts, 2) the pattern of internal damage strongly depends on the loading path and initial stress, and 3) compressive loading inflicts virtually no damage evolution. KEY WORDS: ceramic matrix composite, plain weave, cohesive model, brittle failure, smeared crack model, progressive damage, meso-mechanical analysis, finite element.
NASA Technical Reports Server (NTRS)
Gotsis, Pascal K.; Chamis, Christos C.; Minnetyan, Levon
1996-01-01
Graphite/epoxy composite thin shell structures were simulated to investigate damage and fracture progression due to internal pressure and axial loading. Defective and defect-free structures (thin cylinders) were examined. The three different laminates examined had fiber orientations of (90/0/±θ)(sub s), where θ is 45, 60, and 75 deg. CODSTRAN, an integrated computer code that scales up constituent level properties to the structural level and accounts for all possible failure modes, was used to simulate composite degradation under loading. Damage initiation, growth, accumulation, and propagation to fracture were included in the simulation. Burst pressures for defective and defect-free shells were compared to evaluate damage tolerance. The results showed that damage initiation began with matrix failure, whereas damage and/or fracture progression occurred as a result of additional matrix failure and fiber fracture. In both thin cylinder cases examined (defective and defect-free), the optimum layup configuration was (90/0/±60)(sub s) because it had the best damage tolerance with respect to the burst pressure.
Rodgers, Christopher T; Robson, Matthew D
2016-02-01
Combining spectra from receive arrays, particularly X-nuclear spectra with low signal-to-noise ratios (SNRs), is challenging. We test whether data-driven combination methods are better than using computed coil sensitivities. Several combination algorithms are recast into the notation of Roemer's classic formula, showing that they differ primarily in their estimation of coil receive sensitivities. This viewpoint reveals two extensions of the whitened singular-value decomposition (WSVD) algorithm, using temporal or temporal + spatial apodization to improve the coil sensitivities, and thus the combined spectral SNR. Radiofrequency fields from an array were simulated and used to make synthetic spectra. These were combined with 10 algorithms. The combined spectra were then assessed in terms of their SNR. Validation used phantoms and cardiac (31)P spectra from five subjects at 3T. Combined spectral SNRs from simulations, phantoms, and humans showed the same trends. In phantoms, the combined SNR using computed coil sensitivities was lower than with WSVD combination whenever the WSVD SNR was >14 (or >11 with temporal apodization, or >9 with temporal + spatial apodization). These new apodized WSVD methods gave higher SNRs than other data-driven methods. In the human torso, at frequencies ≥49 MHz, data-driven combination is preferable to using computed coil sensitivities. Magn Reson Med 75:473-487, 2016. © 2015 The Authors. Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.
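The rank-1 structure that Roemer-style combination exploits can be sketched as follows. This toy uses synthetic data and an identity noise covariance, so the pre-whitening step of the actual WSVD algorithm is omitted; it estimates coil sensitivities and the combined spectrum jointly from the leading singular vectors of the coil-by-time data matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic multi-coil FIDs: one true signal seen through complex coil sensitivities.
n_coils, n_pts = 8, 256
t = np.arange(n_pts)
signal = np.exp(2j * np.pi * 0.05 * t - t / 80.0)
sens = rng.normal(size=n_coils) + 1j * rng.normal(size=n_coils)
noise = 0.05 * (rng.normal(size=(n_coils, n_pts)) + 1j * rng.normal(size=(n_coils, n_pts)))
data = np.outer(sens, signal) + noise

# SVD combination: the leading left singular vector estimates the coil
# sensitivities (up to a global phase), and the leading right singular vector
# is the combined FID.
U, s, Vh = np.linalg.svd(data, full_matrices=False)
combined = s[0] * Vh[0]

# Normalized correlation magnitude between the combined FID and the true signal.
corr = abs(np.vdot(combined, signal)) / (np.linalg.norm(combined) * np.linalg.norm(signal))
```

In real data the noise covariance across coils is not identity, which is exactly why the whitening (and, in the paper's extensions, apodization) steps matter.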
Arkheia: Data Management and Communication for Open Computational Neuroscience
Antolík, Ján; Davison, Andrew P.
2018-01-01
Two trends have been unfolding in computational neuroscience during the last decade. First, a shift of focus to increasingly complex and heterogeneous neural network models, with a concomitant increase in the level of collaboration within the field (whether direct or in the form of building on top of existing tools and results). Second, a general trend in science toward more open communication, both internally, with other potential scientific collaborators, and externally, with the wider public. This multi-faceted development toward more integrative approaches and more intense communication within and outside of the field poses major new challenges for modelers, as currently there is a severe lack of tools to help with automatic communication and sharing of all aspects of a simulation workflow to the rest of the community. To address this important gap in the current computational modeling software infrastructure, here we introduce Arkheia. Arkheia is a web-based open science platform for computational models in systems neuroscience. It provides an automatic, interactive, graphical presentation of simulation results, experimental protocols, and interactive exploration of parameter searches, in a web browser-based application. Arkheia is focused on automatic presentation of these resources with minimal manual input from users. Arkheia is written in a modular fashion with a focus on future development of the platform. The platform is designed in an open manner, with a clearly defined and separated API for database access, so that any project can write its own backend translating its data into the Arkheia database format. Arkheia is not a centralized platform, but allows any user (or group of users) to set up their own repository, either for public access by the general population, or locally for internal use. 
Overall, Arkheia provides users with an automatic means to communicate information about not only their models but also individual simulation results and the entire experimental context in an approachable graphical manner, thus facilitating the user's ability to collaborate in the field and outreach to a wider audience. PMID:29556187
NASA Astrophysics Data System (ADS)
Wayson, Michael B.; Bolch, Wesley E.
2018-04-01
Various computational tools are currently available that facilitate patient organ dosimetry in diagnostic nuclear medicine, yet they are typically restricted to reporting organ doses to ICRP-defined reference phantoms. The present study, while remaining computational phantom based, provides straightforward tools to adjust reference phantom organ dose for both internal photon and electron sources. A wide variety of monoenergetic specific absorbed fractions were computed using radiation transport simulations for tissue spheres of varying size and separation distance. Scaling methods were then constructed for both photon and electron self-dose and cross-dose, with data validation provided from patient-specific voxel phantom simulations, as well as via comparison to the scaling methodology given in MIRD Pamphlet No. 11. Photon and electron self-dose was found to be dependent on both radiation energy and sphere size. Photon cross-dose was found to be mostly independent of sphere size. Electron cross-dose was found to be dependent on sphere size when the spheres were in close proximity, owing to differences in electron range. The validation studies showed that this dataset was more effective than the MIRD 11 method at predicting patient-specific photon doses at both high and low energies, but gave similar results at photon energies between 100 keV and 1 MeV. The MIRD 11 method for electron self-dose scaling was accurate for lower energies but began to break down at higher energies. The photon cross-dose scaling methodology developed in this study showed gains in accuracy of up to 9% for actual patient studies, and the electron cross-dose scaling methodology showed gains in accuracy of up to 9% as well when only the bremsstrahlung component of the cross-dose was scaled. These dose scaling methods are readily available for incorporation into internal dosimetry software for diagnostic phantom-based organ dosimetry.
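The simplest member of the scaling family discussed above is the classic nonpenetrating-emission rule, in which organ self-dose scales inversely with organ mass. A hedged sketch (hypothetical S value and masses; the paper's photon scaling uses energy-dependent behavior rather than the fixed exponent shown):

```python
def scale_self_dose(S_ref, m_ref, m_patient, exponent=1.0):
    """Scale a reference-phantom self-dose S value to a patient organ mass.
    exponent=1.0 is the classic nonpenetrating (electron) rule; photon
    self-dose generally requires a gentler, energy-dependent adjustment."""
    return S_ref * (m_ref / m_patient) ** exponent

# A liver 20% heavier than the reference phantom's receives proportionally
# less self-dose from nonpenetrating emissions (units and values hypothetical).
S_patient = scale_self_dose(S_ref=1.0e-5, m_ref=1.8, m_patient=2.16)
```

Cross-dose, as the abstract notes, behaves differently: photon cross-dose is largely insensitive to target size, so a mass rescaling of this kind applies mainly to self-dose terms.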
Prediction of the properties of PVD/CVD coatings with the use of FEM analysis
NASA Astrophysics Data System (ADS)
Śliwa, Agata; Mikuła, Jarosław; Gołombek, Klaudiusz; Tański, Tomasz; Kwaśny, Waldemar; Bonek, Mirosław; Brytan, Zbigniew
2016-12-01
The aim of this paper is to present the results of the prediction of the properties of PVD/CVD coatings with the use of finite element method (FEM) analysis. The possibility of employing FEM in the evaluation of stress distribution in multilayer Ti/Ti(C,N)/CrN, Ti/Ti(C,N)/(Ti,Al)N, Ti/(Ti,Si)N/(Ti,Si)N, and Ti/DLC/DLC coatings, taking into account their deposition conditions on magnesium alloys, is discussed in the paper. The difference in internal stresses in the zone between the coating and the substrate is caused primarily by the mismatch between the mechanical and thermal properties of the substrate and the coating, and also by the structural changes that occur in these materials during the fabrication process, especially during the cooling that follows PVD and CVD treatment. The experimental stress values, determined from X-ray diffraction patterns, correspond to the modelled values, which in turn confirms the correctness of the accepted mathematical model. An FEM model was established for the purpose of building a computer simulation of the internal stresses in the coatings. The accuracy of the FEM model was verified by comparing the results of the computer simulation of the stresses with experimental results. The computer simulation of the stresses was carried out in the ANSYS environment. Structure observations, chemical composition measurements, and mechanical property characterisations of the investigated materials have been carried out to give a background for the discussion of the results recorded during the modelling process.
Computational discovery of metal-organic frameworks with high gas deliverable capacity
NASA Astrophysics Data System (ADS)
Bao, Yi
Metal-organic frameworks (MOFs) are a rapidly emerging class of nanoporous materials with largely tunable chemistry and diverse applications in gas storage, gas purification, catalysis, sensing and drug delivery. Efforts have been made for decades to develop new MOFs with desirable properties, both experimentally and computationally. To guide experimental synthesis, we here develop a computational methodology to explore MOFs with high gas deliverable capacity. This de novo design procedure applies known chemical reactions, considers the synthesizability and geometric requirements of organic linkers, and efficiently evolves a population of MOFs to optimize a desirable property. We identify 48 MOFs in nine networks with higher methane deliverable capacity at the 65-5.8 bar condition than the MOF-5 reference. In a more comprehensive work, we predict two sets of MOFs with high methane deliverable capacity at a 65-5.8 bar or a 35-5.8 bar loading-delivery condition. We also optimize a set of MOFs with high methane-accessible internal surface area to investigate the relationship between deliverable capacity and internal surface area. This methodology can be extended to MOFs with multiple types of linkers and multiple SBUs. Flexible MOFs may allow for sophisticated heat management strategies and also provide higher gas deliverable capacity than rigid frameworks. We investigate flexible MOFs, such as the MIL-53 families and the Fe(bdp) and Co(bdp) analogs, to understand the structural phase transition of frameworks and the resulting influence on the heat of adsorption. Challenges of simulating a system with a flexible host structure and incoming guest molecules are discussed. Preliminary results from isotherm simulation using the hybrid MC/MD simulation scheme on MIL-53(Cr) are presented. Suggestions for proceeding to understand the free energy profile of flexible MOFs are provided.
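Deliverable capacity, the objective optimized above, is simply the difference between the loadings at the storage and minimum-delivery pressures. The sketch below uses a single-site Langmuir isotherm with made-up parameters to show why the strongest-binding framework is not necessarily the best deliverer, the trade-off that makes this optimization nontrivial.

```python
def langmuir_loading(p, q_max, b):
    """Single-site Langmuir isotherm: loading at pressure p (bar)."""
    return q_max * b * p / (1.0 + b * p)

def deliverable_capacity(q_max, b, p_load=65.0, p_min=5.8):
    """Loading at the storage pressure minus the loading still retained at the
    minimum delivery pressure (the 65-5.8 bar condition in the abstract)."""
    return langmuir_loading(p_load, q_max, b) - langmuir_loading(p_min, q_max, b)

# A strongly binding framework loads more at 65 bar, but retains so much at
# 5.8 bar that it delivers less than a weaker binder (parameters hypothetical).
deliv_weak = deliverable_capacity(200.0, 0.02)   # weak binding
deliv_strong = deliverable_capacity(200.0, 0.5)  # strong binding
```

Real frameworks need multi-site or simulated isotherms, but the weak-versus-strong binding tension survives in the full treatment.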
Passive motion paradigm: an alternative to optimal control.
Mohan, Vishwanathan; Morasso, Pietro
2011-01-01
In recent years, optimal control theory (OCT) has emerged as the leading approach for investigating neural control of movement and motor cognition in two complementary research lines: behavioral neuroscience and humanoid robotics. In both cases, there are general problems that need to be addressed, such as the "degrees of freedom (DoFs) problem," the common core of production, observation, reasoning, and learning of "actions." OCT, directly derived from engineering design techniques of control systems, quantifies task goals as "cost functions" and uses the sophisticated formal tools of optimal control to obtain desired behavior (and predictions). We propose an alternative, "softer" approach, the passive motion paradigm (PMP), which we believe is closer to the biomechanics and cybernetics of action. The basic idea is that actions (overt as well as covert) are the consequences of an internal simulation process that "animates" the body schema with the attractor dynamics of force fields induced by the goal and task-specific constraints. This internal simulation offers the brain a way to dynamically link motor redundancy with task-oriented constraints "at runtime," hence solving the "DoFs problem" without explicit kinematic inversion and cost function computation. We argue that the function of such computational machinery is not only restricted to shaping motor output during action execution but also to provide the self with information on the feasibility, consequence, understanding and meaning of "potential actions." In this sense, taking into account recent developments in neuroscience (motor imagery, simulation theory of covert actions, mirror neuron system) and in embodied robotics, PMP offers a novel framework for understanding motor cognition that goes beyond the engineering control paradigm provided by OCT.
Therefore, the paper is at the same time a review of the PMP rationale, as a computational theory, and a perspective presentation of how to develop it for designing better cognitive architectures.
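The core PMP idea, goal-induced force fields animating the body schema without explicit kinematic inversion, can be sketched for a toy planar two-link arm (link lengths, gains, and the identity admittance below are all illustrative assumptions, not the authors' formulation). The joint angles relax under the Jacobian-transpose image of a task-space attractor force; no inverse kinematics is computed anywhere.

```python
import numpy as np

L1, L2 = 0.3, 0.25  # link lengths (m) of a toy planar 2-DoF arm

def fkin(q):
    """Forward kinematics: joint angles -> end-effector position."""
    return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                     L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

def jac(q):
    """Task-space Jacobian of the end-effector position."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

# Relaxation in the goal-induced force field: the attractor force in task
# space is mapped to joint space by the Jacobian transpose and integrated.
q, goal = np.array([0.3, 0.5]), np.array([0.35, 0.30])
for _ in range(10000):
    force = 20.0 * (goal - fkin(q))      # virtual attractor force field
    q = q + 0.01 * (jac(q).T @ force)    # identity admittance, explicit Euler

err = float(np.linalg.norm(fkin(q) - goal))
```

Redundant arms work identically: the force field selects one of the infinitely many joint solutions at runtime, which is the sense in which PMP dissolves the DoFs problem rather than solving it by inversion.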
Effects of Whole-Body Motion Simulation on Flight Skill Development.
1981-10-01
computation requirements, compared to the implementation allowing for a deviate internal model, provided further motivation for assuming a correct...We are left with two more likely explanations for the apparent trends: (1) subjects were motivated differently by the different task configurations...because of modeling constraints. The notion of task-related motivational differences are explored in Appendix E. Sensitivity analysis performed with
Computational upscaling of Drucker-Prager plasticity from micro-CT images of synthetic porous rock
NASA Astrophysics Data System (ADS)
Liu, Jie; Sarout, Joel; Zhang, Minchao; Dautriat, Jeremie; Veveakis, Emmanouil; Regenauer-Lieb, Klaus
2018-01-01
Quantifying rock physical properties is essential for the mining and petroleum industry. Microtomography provides a new way to quantify the relationship between the microstructure and the mechanical and transport properties of a rock. Studies reporting the use of microtomographic images to derive the permeability and elastic moduli of rocks are common; only rare studies have been devoted to yield and failure parameters using this technique. In this study, we simulate the macroscale plastic properties of a synthetic sandstone sample made of calcite-cemented quartz grains using the microscale information obtained from microtomography. The computations rely on the concept of representative volume elements (RVEs). The mechanical RVE is determined using the upper and lower bounds of finite-element computations for elasticity. We present computational upscaling methods from microphysical processes to extract the plasticity parameters of the RVE and compare results to experimental data. The yield stress, cohesion and internal friction angle of the matrix (solid part) of the rock were obtained with reasonable accuracy. Computations of plasticity for a series of models of different volume sizes showed almost overlapping stress-strain curves, suggesting that the mechanical RVE determined by elastic computations is also valid for plastic yielding. Furthermore, a series of models was created by self-similarly inflating/deflating the porous models, that is, keeping a similar structure while achieving different porosity values. The analysis of these models showed that yield stress, cohesion and internal friction angle linearly decrease with increasing porosity in the porosity range between 8 and 28 per cent. The internal friction angle decreases the most significantly, while cohesion remains stable.
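Once cohesion and internal friction angle have been extracted by the upscaling above, the Drucker-Prager yield surface they parameterize can be evaluated directly. A minimal sketch (using one common cone calibration against Mohr-Coulomb parameters and a tension-positive convention; the paper's finite-element computations are of course far more involved):

```python
import numpy as np

def drucker_prager_yield(stress, cohesion, friction_angle_deg):
    """Drucker-Prager yield function f = alpha*I1 + sqrt(J2) - k (f >= 0: yielding),
    with alpha, k matched to Mohr-Coulomb cohesion c and friction angle phi
    (compressive-meridian fit, tension-positive sign convention)."""
    phi = np.radians(friction_angle_deg)
    alpha = 2.0 * np.sin(phi) / (np.sqrt(3.0) * (3.0 - np.sin(phi)))
    k = 6.0 * cohesion * np.cos(phi) / (np.sqrt(3.0) * (3.0 - np.sin(phi)))
    I1 = np.trace(stress)
    s = stress - (I1 / 3.0) * np.eye(3)   # deviatoric part of the stress tensor
    J2 = 0.5 * np.sum(s * s)
    return alpha * I1 + np.sqrt(J2) - k

# Pure shear at 5 MPa stays elastic; at 20 MPa the matrix yields
# (hypothetical c = 10 MPa, phi = 30 deg, stresses in MPa).
tau5 = np.zeros((3, 3)); tau5[0, 1] = tau5[1, 0] = 5.0
tau20 = np.zeros((3, 3)); tau20[0, 1] = tau20[1, 0] = 20.0
f_elastic = drucker_prager_yield(tau5, 10.0, 30.0)
f_plastic = drucker_prager_yield(tau20, 10.0, 30.0)
```

The pressure term alpha*I1 captures the porosity-dependent strengthening under compression that the upscaled models quantify.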
A technique for evaluating the application of the pin-level stuck-at fault model to VLSI circuits
NASA Technical Reports Server (NTRS)
Palumbo, Daniel L.; Finelli, George B.
1987-01-01
Accurate fault models are required to conduct the experiments defined in validation methodologies for highly reliable fault-tolerant computers (e.g., computers with a probability of failure of 10 to the -9 for a 10-hour mission). Described is a technique by which a researcher can evaluate the capability of the pin-level stuck-at fault model to simulate true error-behavior symptoms in very large scale integrated (VLSI) digital circuits. The technique is based on a statistical comparison of the error behavior resulting from faults applied at the pins of, and internal to, a VLSI circuit. As an example of an application of the technique, the error behavior of a microprocessor simulation subjected to internal stuck-at faults is compared with the error behavior which results from pin-level stuck-at faults. The error behavior is characterized by the time between errors and the duration of errors. Based on this example data, the pin-level stuck-at fault model is found to deliver less than ideal performance. However, with respect to the class of faults which cause a system crash, the pin-level stuck-at fault model is found to provide a good modeling capability.
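The internal-versus-pin fault comparison can be pictured with a toy gate-level netlist; the two-gate circuit, the function name, and the `stuck` hook below are hypothetical illustrations, not the paper's apparatus:

```python
def eval_circuit(a, b, c, stuck=None):
    """Tiny netlist: n1 = a AND b; out = n1 XOR c.

    `stuck` is an optional (net_name, value) pair that forces a net to
    0 or 1 regardless of its driven value, modeling a stuck-at fault on
    either a pin ('a', 'b', 'c', 'out') or an internal net ('n1').
    """
    def pin(name, value):
        if stuck and stuck[0] == name:
            return stuck[1]          # fault overrides the driven value
        return value

    a, b, c = pin('a', a), pin('b', b), pin('c', c)
    n1 = pin('n1', a & b)
    return pin('out', n1 ^ c)
```

Comparing outputs with and without a fault over many input vectors gives the kind of error-behavior statistics (detection, time between errors) the technique aggregates.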
TOUGH3: A new efficient version of the TOUGH suite of multiphase flow and transport simulators
NASA Astrophysics Data System (ADS)
Jung, Yoojin; Pau, George Shu Heng; Finsterle, Stefan; Pollyea, Ryan M.
2017-11-01
The TOUGH suite of nonisothermal multiphase flow and transport simulators has been updated by various developers over many years to address a vast range of challenging subsurface problems. The increasing complexity of the simulated processes as well as the growing size of model domains that need to be handled call for an improvement in the simulator's computational robustness and efficiency. Moreover, modifications have been frequently introduced independently, resulting in multiple versions of TOUGH that (1) led to inconsistencies in feature implementation and usage, (2) made code maintenance and development inefficient, and (3) caused confusion among users and developers. TOUGH3, a new base version of TOUGH, addresses these issues. It consolidates both the serial (TOUGH2 V2.1) and parallel (TOUGH2-MP V2.0) implementations, enabling simulations to be performed on desktop computers and supercomputers using a single code. New PETSc parallel linear solvers are added to the existing serial solvers of TOUGH2 and the Aztec solver used in TOUGH2-MP. The PETSc solvers generally perform better than the Aztec solvers in parallel and the internal TOUGH3 linear solver in serial. TOUGH3 also incorporates many new features, addresses bugs, and improves the flexibility of data handling. Due to the improved capabilities and usability, TOUGH3 is more robust and efficient for solving tough and computationally demanding problems in diverse scientific and practical applications related to subsurface flow modeling.
Updated Panel-Method Computer Program
NASA Technical Reports Server (NTRS)
Ashby, Dale L.
1995-01-01
Panel code PMARC_12 (Panel Method Ames Research Center, version 12) computes potential-flow fields around complex three-dimensional bodies such as complete aircraft models. It contains several advanced features, including internal mathematical modeling of flow, a time-stepping wake model for simulating either steady or unsteady motion, Trefftz-plane computation of induced drag, computation of off-body and on-body streamlines, and computation of boundary-layer parameters by use of a two-dimensional integral boundary-layer method along surface streamlines. Investigators interested in visual representations of phenomena may want to consider obtaining the program GVS (ARC-13361), General Visualization System. GVS is a Silicon Graphics IRIS program created to support the scientific-visualization needs of PMARC_12. GVS is available separately from COSMIC. PMARC_12 is written in standard FORTRAN 77, with the exception of the NAMELIST extension used for input.
Computation and analysis of backward ray-tracing in aero-optics flow fields.
Xu, Liang; Xue, Deting; Lv, Xiaoyi
2018-01-08
A backward ray-tracing method is proposed for aero-optics simulation. Unlike forward tracing, the backward tracing direction is from the internal sensor to the distant target. Along this direction, the tracing passes in turn through the internal gas region, the aero-optics flow field, and the freestream. The coordinate value, the density, and the refractive index are calculated at each tracing step. A stopping criterion is developed to ensure the tracing stops at the outer edge of the aero-optics flow field. As a demonstration, the analysis is carried out for a typical blunt-nosed vehicle. The backward tracing method and stopping criterion greatly simplify the ray-tracing computations in the aero-optics flow field, and they can be extended to our active laser illumination aero-optics study because of the reciprocity principle.
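A minimal sketch of the backward-tracing idea, assuming a hypothetical Gaussian density bump as the flow field and a density-gradient threshold as a stand-in for the paper's stopping criterion; the Gladstone-Dale constant links density to refractive index for air:

```python
import numpy as np

GLADSTONE_DALE_K = 2.27e-4  # m^3/kg, approximate value for air

def density_field(x):
    # Hypothetical stand-in for a CFD aero-optics flow field:
    # a Gaussian density bump decaying toward the freestream value.
    rho_inf = 0.5
    return rho_inf + 0.8 * np.exp(-np.linalg.norm(x) ** 2 / 0.01)

def backward_trace(origin, direction, ds=1e-3, tol=1e-3, max_steps=10000):
    """March from the internal sensor outward along `direction`,
    accumulating optical path length n*ds with n = 1 + K*rho, and stop
    once the local density has settled (outer edge of the flow field)."""
    x = np.asarray(origin, float)
    d = np.asarray(direction, float) / np.linalg.norm(direction)
    opl = 0.0
    rho_prev = density_field(x)
    for _ in range(max_steps):
        x = x + ds * d
        rho = density_field(x)
        opl += (1.0 + GLADSTONE_DALE_K * rho) * ds
        if abs(rho - rho_prev) < tol * ds:   # density gradient ~ 0: stop
            break
        rho_prev = rho
    return x, opl
```

A real implementation would also bend the ray with the refractive-index gradient; this sketch only shows the marching direction and the stopping test.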
Additive Manufacturing and High-Performance Computing: a Disruptive Latent Technology
NASA Astrophysics Data System (ADS)
Goodwin, Bruce
2015-03-01
This presentation will discuss the relationship between recent advances in Additive Manufacturing (AM) technology, High-Performance Computing (HPC) simulation and design capabilities, and related advances in Uncertainty Quantification (UQ), and then examine their impacts upon national and international security. The presentation surveys how AM accelerates the fabrication process, while HPC combined with UQ provides a fast track for the engineering design cycle. The combination of AM and HPC/UQ almost eliminates the engineering design and prototype iterative cycle, thereby dramatically reducing cost of production and time-to-market. These methods thereby present significant benefits for US national interests, both civilian and military, in an age of austerity. Finally, considering cyber security issues and the advent of the "cloud," these disruptive, currently latent technologies may well enable proliferation and so challenge both nuclear and non-nuclear aspects of international security.
Using Numerical Modeling to Simulate Space Capsule Ground Landings
NASA Technical Reports Server (NTRS)
Heymsfield, Ernie; Fasanella, Edwin L.
2009-01-01
Experimental work is being conducted at the National Aeronautics and Space Administration's (NASA) Langley Research Center (LaRC) to investigate ground landing capabilities of the Orion crew exploration vehicle (CEV). The Orion capsule is NASA's replacement for the Space Shuttle. The Orion capsule will service the International Space Station and be used for future space missions to the Moon and to Mars. To evaluate the feasibility of Orion ground landings, a series of capsule impact tests is being performed at the NASA Langley Landing and Impact Research Facility (LandIR). The experimental results derived at LandIR provide a means to validate and calibrate the nonlinear dynamic finite element models that are also being developed during this study. Because of the high cost and time involvement intrinsic to full-scale testing, numerical simulations are favored over experimental work. Once a numerical model has been validated against actual test responses, impact simulations will be conducted to study multiple impact scenarios not practical to test. Twenty-one swing tests using the LandIR gantry were conducted from June 2007 through October 2007 to evaluate the Orion's impact response. Results for two capsule initial pitch angles, 0 deg and -15 deg, along with their computer simulations using LS-DYNA are presented in this article. A soil-vehicle friction coefficient of 0.45 was determined by comparing the test stopping distance with computer simulations. In addition, soil modeling accuracy is assessed by comparing vertical penetrometer impact tests with computer simulations for the soil model used during the swing tests.
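The friction-coefficient matching can be illustrated with elementary kinematics: under constant Coulomb friction a sliding body stops in d = v0^2/(2*mu*g), so mu can be backed out from a measured stopping distance. This is a simplified stand-in for the article's LS-DYNA comparison, not its actual procedure:

```python
def friction_from_stopping(v0, distance, g=9.81):
    """Infer a Coulomb friction coefficient from horizontal stopping
    distance under constant deceleration mu*g:
        d = v0**2 / (2 * mu * g)  =>  mu = v0**2 / (2 * g * d)
    v0 in m/s, distance in m."""
    return v0 ** 2 / (2.0 * g * distance)
```

In the study, the simulated stopping distance was swept over candidate mu values until it matched the test; the closed form above is the zeroth-order version of that fit.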
Simulating the influence of scatter and beam hardening in dimensional computed tomography
NASA Astrophysics Data System (ADS)
Lifton, J. J.; Carmignato, S.
2017-10-01
Cone-beam x-ray computed tomography (XCT) is a radiographic scanning technique that allows the non-destructive dimensional measurement of an object’s internal and external features. XCT measurements are influenced by a number of different factors that are poorly understood. This work investigates how non-linear x-ray attenuation caused by beam hardening and scatter influences XCT-based dimensional measurements through the use of simulated data. For the measurement task considered, both scatter and beam hardening are found to influence dimensional measurements when evaluated using the ISO50 surface determination method. On the other hand, only beam hardening is found to influence dimensional measurements when evaluated using an advanced surface determination method. Based on the results presented, recommendations on the use of beam hardening and scatter correction for dimensional XCT are given.
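The beam-hardening non-linearity the study investigates can be reproduced with a polychromatic Beer-Lambert model: because attenuation decreases with energy, the transmitted spectrum hardens with thickness and -log(signal) grows sub-linearly. The spectrum and mu(E) used below are invented toy values, not the paper's simulation setup:

```python
import numpy as np

def polychromatic_transmission(thickness, energies, spectrum, mu_of_E):
    """Detector signal for a polychromatic beam: a spectrum-weighted sum
    of exp(-mu(E) * t) over energy bins. With energy-dependent mu, the
    log-signal is non-linear in thickness (beam hardening)."""
    w = np.asarray(spectrum, float)
    w = w / w.sum()
    return float(np.sum(w * np.exp(-mu_of_E(np.asarray(energies, float)) * thickness)))
```

Doubling the thickness then attenuates by less than the square of the single-thickness transmission, which is exactly the cupping-type bias that distorts ISO50 surface determination.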
NASA Astrophysics Data System (ADS)
Class, G.; Meyder, R.; Stratmanns, E.
1985-12-01
The large data base for validation and development of computer codes for two-phase flow, generated at the COSIMA facility, is reviewed. The aim of COSIMA is to simulate the hydraulic, thermal, and mechanical conditions in the subchannel and the cladding of fuel rods in pressurized water reactors during the blowdown phase of a loss-of-coolant accident. In terms of fuel rod behavior, it is found that during blowdown under realistic conditions only small strains are reached. For cladding rupture, extremely high rod internal pressures are necessary. The behavior of fuel rod simulators and the effect of thermocouples attached to the cladding outer surface are clarified. Calculations performed with the codes RELAP and DRUFAN show satisfactory agreement with experiments. This can be improved by updating the phase separation models in the codes.
Xia, J J; Gateno, J; Teichgraeber, J F; Yuan, P; Chen, K-C; Li, J; Zhang, X; Tang, Z; Alfi, D M
2015-12-01
The success of craniomaxillofacial (CMF) surgery depends not only on the surgical techniques, but also on an accurate surgical plan. The adoption of computer-aided surgical simulation (CASS) has created a paradigm shift in surgical planning. However, planning an orthognathic operation using CASS differs fundamentally from planning using traditional methods. With this in mind, the Surgical Planning Laboratory of Houston Methodist Research Institute has developed a CASS protocol designed specifically for orthognathic surgery. The purpose of this article is to present an algorithm using virtual tools for planning a double-jaw orthognathic operation. This paper will serve as an operation manual for surgeons wanting to incorporate CASS into their clinical practice. Copyright © 2015 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
Airside HVAC BESTEST: HVAC Air-Distribution System Model Test Cases for ASHRAE Standard 140
DOE Office of Scientific and Technical Information (OSTI.GOV)
Judkoff, Ronald; Neymark, Joel; Kennedy, Mike D.
This paper summarizes recent work to develop new airside HVAC equipment model analytical verification test cases for ANSI/ASHRAE Standard 140, Standard Method of Test for the Evaluation of Building Energy Analysis Computer Programs. The analytical verification test method allows comparison of simulation results from a wide variety of building energy simulation programs with quasi-analytical solutions, further described below. Standard 140 is widely cited for evaluating software for use with performance-path energy efficiency analysis, in conjunction with well-known energy-efficiency standards including ASHRAE Standard 90.1, the International Energy Conservation Code, and other international standards. Airside HVAC equipment is a common area of modelling not previously explicitly tested by Standard 140. Integration of the completed test suite into Standard 140 is in progress.
Laparoscopic Skills Are Improved With LapMentor™ Training
Andreatta, Pamela B.; Woodrum, Derek T.; Birkmeyer, John D.; Yellamanchilli, Rajani K.; Doherty, Gerard M.; Gauger, Paul G.; Minter, Rebecca M.
2006-01-01
Objective: To determine if prior training on the LapMentor™ laparoscopic simulator leads to improved performance of basic laparoscopic skills in the animate operating room environment. Summary Background Data: Numerous influences have led to the development of computer-aided laparoscopic simulators: a need for greater efficiency in training, the unique and complex nature of laparoscopic surgery, and the increasing demand that surgeons demonstrate competence before proceeding to the operating room. The LapMentor™ simulator is expensive, however, and its use must be validated and justified prior to implementation into surgical training programs. Methods: Nineteen surgical interns were randomized to training on the LapMentor™ laparoscopic simulator (n = 10) or to a control group (no simulator training, n = 9). Subjects randomized to the LapMentor™ trained to expert criterion levels 2 consecutive times on 6 designated basic skills modules. All subjects then completed a series of laparoscopic exercises in a live porcine model, and performance was assessed independently by 2 blinded reviewers. Time, accuracy rates, and global assessments of performance were recorded with an interrater reliability between reviewers of 0.99. Results: LapMentor™ trained interns completed the 30° camera navigation exercise in significantly less time than control interns (166 ± 52 vs. 220 ± 39 seconds, P < 0.05); they also achieved higher accuracy rates in identifying the required objects with the laparoscope (96% ± 8% vs. 82% ± 15%, P < 0.05). Similarly, on the two-handed object transfer exercise, task completion time for LapMentor™ trained versus control interns was 130 ± 23 versus 184 ± 43 seconds (P < 0.01) with an accuracy rate of 98% ± 5% versus 80% ± 13% (P < 0.001). Additionally, LapMentor™ trained interns outperformed control subjects with regard to camera navigation skills, efficiency of motion, optimal instrument handling, perceptual ability, and performance of safe electrocautery. 
Conclusions: This study demonstrates that prior training on the LapMentor™ laparoscopic simulator leads to improved resident performance of basic skills in the animate operating room environment. This work marks the first prospective, randomized evaluation of the LapMentor™ simulator, and provides evidence that LapMentor™ training may lead to improved operating room performance. PMID:16772789
NASA Astrophysics Data System (ADS)
Tirapu Azpiroz, Jaione; Burr, Geoffrey W.; Rosenbluth, Alan E.; Hibbs, Michael
2008-03-01
In the Hyper-NA immersion lithography regime, the electromagnetic response of the reticle is known to deviate in a complicated manner from the idealized Thin-Mask-like behavior. Already, this is driving certain RET choices, such as the use of polarized illumination and the customization of reticle film stacks. Unfortunately, full 3-D electromagnetic mask simulations are computationally intensive. And while OPC-compatible mask electromagnetic field (EMF) models can offer a reasonable tradeoff between speed and accuracy for full-chip OPC applications, full understanding of these complex physical effects demands higher accuracy. Our paper describes recent advances in leveraging High Performance Computing as a critical step towards lithographic modeling of the full manufacturing process. In this paper, highly accurate full 3-D electromagnetic simulations of very large mask layouts are conducted in parallel with reasonable turnaround time, using a BlueGene/L supercomputer and a Finite-Difference Time-Domain (FDTD) code developed internally within IBM. A 3-D simulation of a large 2-D layout spanning 5μm×5μm at the wafer plane (and thus 20μm×20μm×0.5μm at the mask) results in a simulation with roughly 12.5GB of memory (grid size of 10nm at the mask, single-precision computation, about 30 bytes/grid point). FDTD is flexible and easily parallelizable, enabling full simulations of such a large layout in approximately an hour using one BlueGene/L "midplane" containing 512 dual-processor nodes with 256MB of memory per processor. Our scaling studies on BlueGene/L demonstrate that simulations up to 100μm × 100μm at the mask can be computed in a few hours. Finally, we will show that the use of a subcell technique permits accurate simulation of features smaller than the grid discretization, thus improving on the tradeoff between computational complexity and simulation accuracy.
We demonstrate the correlation of the real and quadrature components that comprise the Boundary Layer representation of the EMF behavior of a mask blank to intensity measurements of the mask diffraction patterns by an Aerial Image Measurement System (AIMS) with polarized illumination. We also discuss how this model can become a powerful tool for the assessment of the impact to the lithographic process of a mask blank.
NVSIM: UNIX-based thermal imaging system simulator
NASA Astrophysics Data System (ADS)
Horger, John D.
1993-08-01
For several years the Night Vision and Electronic Sensors Directorate (NVESD) has been using an internally developed forward looking infrared (FLIR) simulation program. In response to interest in the simulation part of these projects by other organizations, NVESD has been working on a new version of the simulation, NVSIM, that will be made generally available to the FLIR using community. NVSIM uses basic FLIR specification data, high resolution thermal input imagery and spatial domain image processing techniques to produce simulated image outputs from a broad variety of FLIRs. It is being built around modular programming techniques to allow simpler addition of more sensor effects. The modularity also allows selective inclusion and exclusion of individual sensor effects at run time. The simulation has been written in the industry standard ANSI C programming language under the widely used UNIX operating system to make it easily portable to a wide variety of computer platforms.
Computational simulation of extravehicular activity dynamics during a satellite capture attempt.
Schaffner, G; Newman, D J; Robinson, S K
2000-01-01
A more quantitative approach to the analysis of astronaut extravehicular activity (EVA) tasks is needed because of their increasing complexity, particularly in preparation for the on-orbit assembly of the International Space Station. Existing useful EVA computer analyses produce either high-resolution three-dimensional computer images based on anthropometric representations or empirically derived predictions of astronaut strength based on lean body mass and the position and velocity of body joints but do not provide multibody dynamic analysis of EVA tasks. Our physics-based methodology helps fill the current gap in quantitative analysis of astronaut EVA by providing a multisegment human model and solving the equations of motion in a high-fidelity simulation of the system dynamics. The simulation work described here improves on the realism of previous efforts by including three-dimensional astronaut motion, incorporating joint stops to account for the physiological limits of range of motion, and incorporating use of constraint forces to model interaction with objects. To demonstrate the utility of this approach, the simulation is modeled on an actual EVA task, namely, the attempted capture of a spinning Intelsat VI satellite during STS-49 in May 1992. Repeated capture attempts by an EVA crewmember were unsuccessful because the capture bar could not be held in contact with the satellite long enough for the capture latches to fire and successfully retrieve the satellite.
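The joint stops mentioned above can be sketched for a single revolute joint: integrate the joint angle and clamp it at physiological limits, canceling the velocity at the stop. This is a crude stand-in for the simulation's constraint-force treatment, and all numbers are illustrative:

```python
def integrate_joint(theta0, omega0, torque, inertia, dt, steps,
                    lo, hi, restitution=0.0):
    """Semi-implicit Euler for one revolute joint with joint stops:
    when the angle reaches a limit it is clamped and the angular
    velocity is reflected (restitution > 0) or zeroed (restitution = 0),
    a simple stand-in for range-of-motion constraint forces."""
    th, om = theta0, omega0
    for _ in range(steps):
        om += dt * torque / inertia
        th += dt * om
        if th < lo:
            th, om = lo, -restitution * om
        elif th > hi:
            th, om = hi, -restitution * om
    return th, om
```

Driving the joint with a constant torque parks it at the stop, which is the qualitative behavior the multisegment model needs so simulated limbs respect physiological range of motion.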
Dissociable roles of internal feelings and face recognition ability in facial expression decoding.
Zhang, Lin; Song, Yiying; Liu, Ling; Liu, Jia
2016-05-15
The problem of emotion recognition has been tackled by researchers in both affective computing and cognitive neuroscience. While affective computing relies on analyzing visual features from facial expressions, it has been proposed that humans recognize emotions by internally simulating the emotional states conveyed by others' expressions, in addition to perceptual analysis of facial features. Here we investigated whether and how our internal feelings contributed to the ability to decode facial expressions. In two independent large samples of participants, we observed that individuals who generally experienced richer internal feelings exhibited a higher ability to decode facial expressions, and the contribution of internal feelings was independent of face recognition ability. Further, using voxel-based morphometry, we found that the gray matter volume (GMV) of bilateral superior temporal sulcus (STS) and the right inferior parietal lobule was associated with facial expression decoding through the mediating effect of internal feelings, while the GMV of bilateral STS, precuneus, and the right central opercular cortex contributed to facial expression decoding through the mediating effect of face recognition ability. In addition, the clusters in bilateral STS involved in the two components were neighboring yet separate. Our results may provide clues about the mechanism by which internal feelings, in addition to face recognition ability, serve as an important instrument for humans in facial expression decoding. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Ballard, Jerrell R., Jr.; Howington, Stacy E.; Cinnella, Pasquale; Smith, James A.
2011-01-01
The temperature and moisture regimes in a forest are key components in the forest ecosystem dynamics. Observations and studies indicate that the internal temperature distribution and moisture content of the tree influence not only growth and development, but onset and cessation of cambial activity [1], resistance to insect predation [2], and even affect the population dynamics of the insects [3]. Moreover, temperature directly affects the uptake and metabolism of population from the soil into the tree tissue [4]. Additional studies show that soil and atmospheric temperatures are significant parameters that limit the growth of trees and impose treeline elevation limits [5]. Directional thermal infrared radiance effects have long been observed in natural backgrounds [6]. In earlier work, we illustrated the use of physically-based models to simulate directional effects in thermal imaging [7-8]. In this paper, we illustrate the use of physically-based models to simulate directional effects in thermal and net radiation in a deciduous forest using our recently developed three-dimensional, macro-scale computational tool that simulates the heat and mass transfer interaction in a soil-root-stem system (SRSS). The SRSS model includes the coupling of existing heat and mass transport tools to simulate the diurnal internal and external temperatures, internal fluid flow and moisture distribution, and heat flow in the system.
Element fracture technique for hypervelocity impact simulation
NASA Astrophysics Data System (ADS)
Zhang, Xiao-tian; Li, Xiao-gang; Liu, Tao; Jia, Guang-hui
2015-05-01
Hypervelocity impact dynamics is the theoretical support of spacecraft shielding against space debris. Numerical simulation has become an important approach for obtaining the ballistic limits of spacecraft shields. Currently, the most widely used algorithm for hypervelocity impact is smoothed particle hydrodynamics (SPH). Although the finite element method (FEM) is widely used in fracture mechanics and low-velocity impacts, the standard FEM can hardly simulate the debris cloud generated by hypervelocity impact. This paper presents a successful application of the node-separation technique for hypervelocity impact debris cloud simulation. The node-separation technique assigns individual/coincident nodes to the adjacent elements, and it applies constraints to the coincident node sets in the modeling step. In the explicit iteration, cracks are generated by releasing the constrained node sets that meet the fracture criterion. Additionally, distorted elements are identified from two aspects, self-piercing and phase change, and are deleted so that the constitutive computation can continue. FEM with the node-separation technique is used for thin-wall hypervelocity impact simulations. The internal structures of the debris cloud in the simulation output are compared with those in the test X-ray graphs under different material fracture criteria. It shows that the pressure criterion is more appropriate for hypervelocity impact. The internal structures of the debris cloud are also simulated and compared under different thickness-to-diameter ratios (t/D). The simulation outputs show the same spall pattern as the tests. Finally, the triple-plate impact case is simulated with node-separation FEM.
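The node-separation bookkeeping can be sketched in a few lines: adjacent elements carry coincident-but-distinct node copies tied together, and a tie is released when a fracture criterion (here a pressure threshold, mirroring the criterion the paper favors) is met. This illustrates only the data structure, not the authors' FEM code:

```python
def build_ties(elements):
    """Map each original node id to the set of (element, local-index)
    coincident copies; a tie with more than one copy is a constraint."""
    ties = {}
    for e, conn in enumerate(elements):
        for local, node in enumerate(conn):
            ties.setdefault(node, set()).add((e, local))
    return ties

def release(ties, pressures, p_crit):
    """Release tie sets whose nodal pressure exceeds the criterion,
    turning one shared node into independent per-element copies so a
    crack can open between the adjacent elements."""
    cracked = [n for n, p in pressures.items()
               if p > p_crit and len(ties.get(n, ())) > 1]
    for n in cracked:
        ties.pop(n)   # copies now move independently
    return cracked
```

In the explicit loop, the solver would evaluate the criterion each step and call the release; distorted-element deletion is a separate pass not shown here.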
NASA Astrophysics Data System (ADS)
Clarke, Peter; Varghese, Philip; Goldstein, David
2018-01-01
A discrete velocity method is developed for gas mixtures of diatomic molecules with both rotational and vibrational energy states. A full quantized model is described, and rotation-translation and vibration-translation energy exchanges are simulated using a Larsen-Borgnakke exchange model. Elastic and inelastic molecular interactions are modeled during every simulated collision to help produce smooth internal energy distributions. The method is verified by comparing simulations of homogeneous relaxation by our discrete velocity method to numerical solutions of the Jeans and Landau-Teller equations, and to direct simulation Monte Carlo. We compute the structure of a 1D shock using this method, and determine how the rotational energy distribution varies with spatial location in the shock and with position in velocity space.
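The Landau-Teller equation used for verification has the simple relaxation form dE/dt = (E_eq - E)/tau; a minimal explicit-Euler integration shows the exponential approach to equilibrium against which internal-energy relaxation results are checked (all values illustrative):

```python
def landau_teller_relax(e0, e_eq, tau, dt, steps):
    """Explicit Euler integration of the Landau-Teller relaxation
    dE/dt = (E_eq - E)/tau, returning the energy trajectory. The exact
    solution is E(t) = E_eq + (e0 - E_eq) * exp(-t / tau)."""
    e = e0
    out = [e]
    for _ in range(steps):
        e += dt * (e_eq - e) / tau
        out.append(e)
    return out
```

The Jeans equation for rotational relaxation has the same form with its own tau, so the same check applies to both internal modes.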
NASA Astrophysics Data System (ADS)
Zunz, Violette; Goosse, Hugues; Dubinkina, Svetlana
2013-04-01
The sea ice extent in the Southern Ocean has increased since 1979 but the causes of this expansion have not been firmly identified. In particular, the contribution of internal variability and external forcing to this positive trend has not been fully established. In this region, the lack of observations and the overestimation of internal variability of the sea ice by contemporary General Circulation Models (GCMs) make it difficult to understand the behaviour of the sea ice. Nevertheless, if its evolution is governed by the internal variability of the system and if this internal variability is in some way predictable, a suitable initialization method should lead to simulation results that better fit the reality. Current GCM decadal predictions are generally initialized through a nudging towards some observed fields. This relatively simple method does not seem to be appropriate for the initialization of sea ice in the Southern Ocean. The present study aims at identifying an initialization method that could improve the quality of the predictions of Southern Ocean sea ice at decadal timescales. We use LOVECLIM, an Earth-system Model of Intermediate Complexity that allows us to perform, within a reasonable computational time, the large amount of simulations required to test systematically different initialization procedures. These involve three data assimilation methods: a nudging, a particle filter and an efficient particle filter. In a first step, simulations are performed in an idealized framework, i.e. data from a reference simulation of LOVECLIM are used instead of observations, hereinafter called pseudo-observations. In this configuration, the internal variability of the model obviously agrees with that of the pseudo-observations. This allows us to get rid of the issues related to the overestimation of the internal variability by models compared to the observed one. 
This way, we can work out a suitable methodology to assess the efficiency of the initialization procedures tested. It also allows us to determine the upper limit of improvement that can be expected if more sophisticated initialization methods are used in decadal prediction simulations and if models have an internal variability agreeing with the observed one. Furthermore, since pseudo-observations are available everywhere at any time step, we also analyse the differences between simulations initialized with a complete dataset of pseudo-observations and the ones for which pseudo-observation data are not assimilated everywhere. In a second step, simulations are realized in a realistic framework, i.e. through the use of actual available observations. The same data assimilation methods are tested in order to check if more sophisticated methods can improve the reliability and the accuracy of decadal prediction simulations, even if they are performed with models that overestimate the internal variability of the sea ice extent in the Southern Ocean.
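Of the three assimilation methods listed, the particle filter's core step can be sketched as multinomial resampling of an ensemble by likelihood weight; this generic sketch is not LOVECLIM's implementation:

```python
import numpy as np

def resample(particles, weights, rng):
    """Multinomial resampling step of a basic particle filter:
    duplicate high-weight ensemble members and drop low-weight ones,
    keeping the ensemble size fixed."""
    w = np.asarray(weights, float)
    w = w / w.sum()                     # normalize to a distribution
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return [particles[i] for i in idx]
```

An "efficient" variant would perturb or reweight the duplicated members to avoid ensemble collapse; here only the plain resampling is shown.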
NASA Astrophysics Data System (ADS)
Brandt, Douglas; Hiller, John R.; Moloney, Michael J.
1995-10-01
The Consortium for Upper Level Physics Software (CUPS) has developed a comprehensive series of nine book/software packages that Wiley will publish in FY '95 and '96. CUPS is an international group of 27 physicists, all with extensive backgrounds in the research, teaching, and development of instructional software. The project is being supported by the National Science Foundation (PHY-9014548), and it has received other support from the IBM Corp., Apple Computer Corp., and George Mason University. The simulations being developed are: Astrophysics, Classical Mechanics, Electricity & Magnetism, Modern Physics, Nuclear and Particle Physics, Quantum Mechanics, Solid State, Thermal and Statistical, and Waves and Optics.
Propulsion system mathematical model for a lift/cruise fan V/STOL aircraft
NASA Technical Reports Server (NTRS)
Cole, G. L.; Sellers, J. F.; Tinling, B. E.
1980-01-01
A propulsion system mathematical model is documented that allows calculation of internal engine parameters during transient operation. A non-realtime digital computer simulation of the model is presented. It is used to investigate thrust response and modulation requirements as well as the impact of duty cycle on engine life and design criteria. Comparison of simulation results with steady-state cycle deck calculations showed good agreement. The model was developed for a specific 3-fan subsonic V/STOL aircraft application, but it can be adapted for use with any similar lift/cruise V/STOL configuration.
Virtual Reality Simulation of the International Space Welding Experiment
NASA Technical Reports Server (NTRS)
Phillips, James A.
1996-01-01
Virtual Reality (VR) is a set of breakthrough technologies that allow a human being to enter and fully experience a 3-dimensional, computer simulated environment. A true virtual reality experience meets three criteria: (1) It involves 3-dimensional computer graphics; (2) It includes real-time feedback and response to user actions; and (3) It must provide a sense of immersion. Good examples of a virtual reality simulator are the flight simulators used by all branches of the military to train pilots for combat in high performance jet fighters. The fidelity of such simulators is extremely high -- but so is the price tag, typically millions of dollars. Virtual reality teaching and training methods are manifestly effective, and we have therefore implemented a VR trainer for the International Space Welding Experiment. My role in the development of the ISWE trainer consisted of the following: (1) created texture-mapped models of the ISWE's rotating sample drum, technology block, tool stowage assembly, sliding foot restraint, and control panel; (2) developed C code for control panel button selection and rotation of the sample drum; (3) In collaboration with Tim Clark (Antares Virtual Reality Systems), developed a serial interface box for the PC and the SGI Indigo so that external control devices, similar to ones actually used on the ISWE, could be used to control virtual objects in the ISWE simulation; (4) In collaboration with Peter Wang (SFFP) and Mark Blasingame (Boeing), established the interference characteristics of the VIM 1000 head-mounted-display and tested software filters to correct the problem; (5) In collaboration with Peter Wang and Mark Blasingame, established software and procedures for interfacing the VPL DataGlove and the Polhemus 6DOF position sensors to the SGI Indigo serial ports. The majority of the ISWE modeling effort was conducted on a PC-based VR Workstation, described below.
Yildirim, Ilyas; Park, Hajeung; Disney, Matthew D.; Schatz, George C.
2013-01-01
One class of functionally important RNA comprises repeating transcripts that cause disease through various mechanisms. For example, expanded r(CAG) repeats can cause Huntington's and other diseases through translation of toxic proteins. Herein, the crystal structure of r[5ʹUUGGGC(CAG)3GUCC]2, a model of CAG-expanded transcripts, refined to 1.65 Å resolution, is disclosed; it shows both anti-anti and syn-anti orientations for 1×1 nucleotide AA internal loops. Molecular dynamics (MD) simulations using the Amber force field in explicit solvent were run for over 500 ns on model systems r(5ʹGCGCAGCGC)2 (MS1) and r(5ʹCCGCAGCGG)2 (MS2). In these MD simulations, both anti-anti and syn-anti AA base pairs appear to be stable. While anti-anti AA base pairs were dynamic and sampled multiple anti-anti conformations, no syn-anti↔anti-anti transformations were observed. Umbrella sampling simulations were run on MS2, and a 2D free-energy surface was created to extract transformation pathways. In addition, an explicit-solvent MD simulation of over 800 ns was run on r[5ʹGGGC(CAG)3GUCC]2, which closely represents the refined crystal structure. One of the terminal AA base pairs (syn-anti conformation) transformed to the anti-anti conformation, following the pathway predicted by the umbrella sampling simulations. Further analysis showed a binding pocket near AA base pairs in syn-anti conformations. The computational results, combined with the refined crystal structure, show that the global minimum conformation of 1×1 nucleotide AA internal loops in r(CAG) repeats is anti-anti, but syn-anti can be adopted depending on the environment. These results are important for understanding RNA dynamics-function relationships and for developing small molecules that target RNA dynamic ensembles. PMID:23441937
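The free-energy analysis above rests on the Boltzmann relation F(x) = -kT ln P(x). A minimal sketch of extracting an unbiased free-energy profile from sampled values of a collective variable (the combination of biased umbrella-sampling windows, e.g. via WHAM, is omitted; the synthetic data and kT value here are illustrative assumptions):

```python
import numpy as np

def free_energy_profile(samples, bins=50, kT=0.593):
    """F(x) = -kT ln P(x) from sampled values of a collective variable
    (kT ~ 0.593 kcal/mol at 298 K). Unbiased-histogram sketch only;
    combining biased umbrella-sampling windows would require WHAM."""
    hist, edges = np.histogram(samples, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    with np.errstate(divide="ignore"):
        F = -kT * np.log(hist)   # empty bins give +inf
    F -= F.min()                 # shift the global minimum to zero
    return centers, F

# Synthetic data: fluctuations around a single conformational minimum
rng = np.random.default_rng(0)
x = rng.normal(loc=180.0, scale=15.0, size=100_000)  # a torsion angle, deg
centers, F = free_energy_profile(x)
print(centers[np.argmin(F)])  # minimum lies near 180 degrees
```

A real analysis would also estimate statistical error per bin, since sparsely sampled regions dominate the uncertainty of the surface.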
Corrosion Monitors for Embedded Evaluation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robinson, Alex L.; Pfeifer, Kent B.; Casias, Adrian L.
2017-05-01
We have developed and characterized novel in-situ corrosion sensors to monitor and quantify the corrosive potential and history of localized environments. Embedded corrosion sensors can provide information to aid health assessments of internal electrical components including connectors, microelectronics, wires, and other susceptible parts. When combined with other data (e.g. temperature and humidity), theory, and computational simulation, the reliability of monitored systems can be predicted with higher fidelity.
Investigation of the effect of reducing scan resolution on simulated information-augmented sawing
Suraphan Thawornwong; Luis G. Occena; Daniel L. Schmoldt
2000-01-01
In the past few years, computed tomography (CT) scanning technology has been applied to the detection of internal defects in hardwood logs for the purpose of obtaining a priori information that can be used to arrive at better log breakdown or sawing decisions. Because sawyers today cannot even see the inside of the log until the log faces are revealed...
Lumber value differences from reduced CT spatial resolution and simulated log sawing
Suraphan Thawornwong; Luis G. Occena; Daniel L. Schmoldt
2003-01-01
In the past few years, computed tomography (CT) scanning technology has been applied to the detection of internal defects in hardwood logs for the purpose of obtaining a priori information that can be used to arrive at better log sawing decisions. Because sawyers currently cannot even see the inside of a log until the log faces are revealed by sawing, there is little...
ERIC Educational Resources Information Center
Herborn, Katharina; Mustafic, Maida; Greiff, Samuel
2017-01-01
Collaborative problem solving (CPS) assessment is a new academic research field with a number of educational implications. In 2015, the Programme for International Student Assessment (PISA) assessed CPS with a computer-simulated human-agent (H-A) approach that claimed to measure 12 individual CPS skills for the first time. After reviewing the…
Evaluating the Limits of Network Topology Inference Via Virtualized Network Emulation
2015-06-01
Figure 5.33: Hop-plot of five best reduction methods; KDD most closely matches the Internet plot. ... These monitors provide locations from which to perform network measurement experiments, primarily using ping. ... [Reference fragment: International Symposium on Modeling, Analysis and Simulation of Computer Telecommunication Systems. IEEE, 2001, pp. 346–353. [21] C. Jin, Q. Chen, and S...]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dennig, Yasmin
Sandia National Laboratories has a long history of significant contributions to the high performance computing community and industry. Our innovative computer architectures allowed the United States to become the first to break the teraFLOP barrier, propelling us into the international spotlight. Our advanced simulation and modeling capabilities have been integral in high-consequence US operations such as Operation Burnt Frost. Strong partnerships with industry leaders, such as Cray, Inc. and Goodyear, have enabled them to leverage our high performance computing (HPC) capabilities to gain a tremendous competitive edge in the marketplace. As part of our continuing commitment to providing modern computing infrastructure and systems in support of Sandia missions, we made a major investment in expanding Building 725 to serve as the new home of HPC systems at Sandia. Work is expected to be completed in 2018 and will result in a modern facility of approximately 15,000 square feet of computer center space. The facility will be ready to house the newest National Nuclear Security Administration/Advanced Simulation and Computing (NNSA/ASC) Prototype platform being acquired by Sandia, with delivery in late 2019 or early 2020. This new system will enable continuing advances by Sandia science and engineering staff in the areas of operating system R&D, operation cost effectiveness (power and innovative cooling technologies), user environment, and application code performance.
Ferraro, M; Foster, D H
1991-01-01
Under certain experimental conditions, visual discrimination performance in multielement images is closely related to visual identification performance: elements of the image are distinguished only insofar as they appear to have distinct, discrete, internal characterizations. This report is concerned with the detailed relationship between such internal characterizations and observable discrimination performance. Two types of general processes that might underlie discrimination are considered. The first is based on computing all possible internal image characterizations that could allow a correct decision, each characterization weighted by the probability of its occurrence and of a correct decision being made. The second process is based on computing the difference between the probabilities associated with the internal characterizations of the individual image elements, the difference quantified naturally with an l_p norm. The relationship between the two processes was investigated analytically and by Monte Carlo simulations over a plausible range of numbers n of the internal characterizations of each of the m elements in the image. The predictions of the two processes were found to be closely similar. The relationship was precisely one-to-one, however, only for n = 2, m = 3, 4, 6, and for n greater than 2, m = 3, 4, p = 2. For all other cases tested, a one-to-one relationship was shown to be impossible.
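The second process described above reduces to an l_p distance between probability vectors over internal characterizations. A small illustrative sketch (the two-state distributions and the simplified "different draws" decision rule are hypothetical, not taken from the report):

```python
import numpy as np

def lp_difference(p, q, p_norm=2):
    """l_p norm of the difference between the probability vectors of
    two image elements' internal characterizations."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(np.abs(p - q) ** p_norm) ** (1.0 / p_norm))

def mc_discrimination(p, q, trials=100_000, seed=0):
    """Monte Carlo estimate of the probability that independent draws
    from p and q yield different internal characterizations (a
    hypothetical, simplified decision rule)."""
    rng = np.random.default_rng(seed)
    a = rng.choice(len(p), size=trials, p=p)
    b = rng.choice(len(q), size=trials, p=q)
    return float(np.mean(a != b))

p = [0.9, 0.1]  # element 1: n = 2 internal characterizations
q = [0.5, 0.5]  # element 2
print(lp_difference(p, q))      # l_2 distance, ~0.566
print(mc_discrimination(p, q))  # ~0.5
```

Sweeping n and m with sketches like this is one way to reproduce the kind of comparison the report carries out analytically.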
AnimatLab: a 3D graphics environment for neuromechanical simulations.
Cofer, David; Cymbalyuk, Gennady; Reid, James; Zhu, Ying; Heitler, William J; Edwards, Donald H
2010-03-30
The nervous systems of animals evolved to exert dynamic control of behavior in response to the needs of the animal and changing signals from the environment. To understand the mechanisms of dynamic control requires a means of predicting how individual neural and body elements will interact to produce the performance of the entire system. AnimatLab is a software tool that provides an approach to this problem through computer simulation. AnimatLab enables a computational model of an animal's body to be constructed from simple building blocks, situated in a virtual 3D world subject to the laws of physics, and controlled by the activity of a multicellular, multicompartment neural circuit. Sensor receptors on the body surface and inside the body respond to external and internal signals and then excite central neurons, while motor neurons activate Hill muscle models that span the joints and generate movement. AnimatLab provides a common neuromechanical simulation environment in which to construct and test models of any skeletal animal, vertebrate or invertebrate. The use of AnimatLab is demonstrated in a neuromechanical simulation of human arm flexion and the myotactic and contact-withdrawal reflexes. Copyright (c) 2010 Elsevier B.V. All rights reserved.
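The Hill muscle models mentioned above map activation, fiber length, and contraction velocity to force. A minimal generic Hill-type sketch (the Gaussian force-length and hyperbolic force-velocity shapes and all constants are textbook-style assumptions, not AnimatLab's exact equations):

```python
import math

def hill_muscle_force(activation, length, velocity,
                      f_max=100.0, l_opt=1.0, v_max=10.0):
    """Generic Hill-type muscle force: activation in [0, 1], length and
    shortening velocity normalized to the optimal fiber length. Shapes
    and constants are illustrative assumptions."""
    f_l = math.exp(-(((length - l_opt) / 0.45) ** 2))  # force-length
    if velocity >= 0.0:  # shortening
        f_v = max(0.0, (v_max - velocity) / (v_max + 3.0 * velocity))
    else:                # lengthening: force rises above isometric
        f_v = 1.3
    return activation * f_max * f_l * f_v

# Fully activated muscle at optimal length, isometric contraction:
print(hill_muscle_force(1.0, 1.0, 0.0))  # -> 100.0
```

In a neuromechanical loop, motor-neuron firing would drive `activation`, while the physics engine feeds back `length` and `velocity` across each joint-spanning muscle.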
NASA Technical Reports Server (NTRS)
Smith, Marilyn J.; Lim, Joon W.; vanderWall, Berend G.; Baeder, James D.; Biedron, Robert T.; Boyd, D. Douglas, Jr.; Jayaraman, Buvana; Jung, Sung N.; Min, Byung-Young
2012-01-01
Over the past decade, there have been significant advancements in the accuracy of rotor aeroelastic simulations through the application of computational fluid dynamics methods coupled with computational structural dynamics codes (CFD/CSD). The HART II International Workshop database, which includes descent operating conditions with strong blade-vortex interactions (BVI), provides a unique opportunity to assess the ability of CFD/CSD to capture these physics. In addition to a baseline case with BVI, two additional cases with 3/rev higher harmonic blade root pitch control (HHC) are available for comparison. The collaboration during the workshop permits assessment of structured, unstructured, and hybrid overset CFD/CSD methods from across the globe on the dynamics, aerodynamics, and wake structure. Evaluation of the plethora of CFD/CSD methods indicates that the most important numerical variables associated with most accurately capturing BVI are a two-equation or detached eddy simulation (DES)-based turbulence model and a sufficiently small time step. An appropriate trade-off between grid fidelity and spatial accuracy schemes also appears to be pertinent for capturing BVI on the advancing rotor disk. Overall, the CFD/CSD methods generally fall within the same accuracy; cost-effective hybrid Navier-Stokes/Lagrangian wake methods provide accuracies within 50% of the full CFD/CSD methods for most parameters of interest, except for those highly influenced by torsion. The importance of modeling the fuselage is observed, and other computational requirements are discussed.
From transistor to trapped-ion computers for quantum chemistry.
Yung, M-H; Casanova, J; Mezzacapo, A; McClean, J; Lamata, L; Aspuru-Guzik, A; Solano, E
2014-01-07
Over the last few decades, quantum chemistry has progressed through the development of computational methods based on modern digital computers. However, these methods can hardly fulfill the exponentially-growing resource requirements when applied to large quantum systems. As pointed out by Feynman, this restriction is intrinsic to all computational models based on classical physics. Recently, the rapid advancement of trapped-ion technologies has opened new possibilities for quantum control and quantum simulations. Here, we present an efficient toolkit that exploits both the internal and motional degrees of freedom of trapped ions for solving problems in quantum chemistry, including molecular electronic structure, molecular dynamics, and vibronic coupling. We focus on applications that go beyond the capacity of classical computers, but may be realizable on state-of-the-art trapped-ion systems. These results allow us to envision a new paradigm of quantum chemistry that shifts from the current transistor to a near-future trapped-ion-based technology.
From transistor to trapped-ion computers for quantum chemistry
Yung, M.-H.; Casanova, J.; Mezzacapo, A.; McClean, J.; Lamata, L.; Aspuru-Guzik, A.; Solano, E.
2014-01-01
Over the last few decades, quantum chemistry has progressed through the development of computational methods based on modern digital computers. However, these methods can hardly fulfill the exponentially-growing resource requirements when applied to large quantum systems. As pointed out by Feynman, this restriction is intrinsic to all computational models based on classical physics. Recently, the rapid advancement of trapped-ion technologies has opened new possibilities for quantum control and quantum simulations. Here, we present an efficient toolkit that exploits both the internal and motional degrees of freedom of trapped ions for solving problems in quantum chemistry, including molecular electronic structure, molecular dynamics, and vibronic coupling. We focus on applications that go beyond the capacity of classical computers, but may be realizable on state-of-the-art trapped-ion systems. These results allow us to envision a new paradigm of quantum chemistry that shifts from the current transistor to a near-future trapped-ion-based technology. PMID:24395054
Computational Study of Droplet Trains Impacting a Smooth Solid Surface
NASA Astrophysics Data System (ADS)
Markt, David, Jr.; Pathak, Ashish; Raessi, Mehdi; Lee, Seong-Young; Zhao, Emma
2017-11-01
The study of droplet impingement is vital to understanding the fluid dynamics of fuel injection in modern internal combustion engines. One widely accepted model was proposed by Yarin and Weiss (JFM, 1995), developed from experiments of single trains of ethanol droplets impacting a substrate. The model predicts the onset of splashing and the mass ejected upon splashing. In this study, using an in-house 3D multiphase flow solver, the experiments of Yarin and Weiss were computationally simulated. The experimentally observed splashing threshold was captured by the simulations, thus validating the solver's ability to accurately simulate the splashing dynamics. Then, we performed simulations of cases with multiple droplet trains, which have high relevance to dense fuel sprays, where droplets impact within the spreading diameters of their neighboring droplets, leading to changes in splashing dynamics due to interactions of spreading films. For both single and multi-train simulations the amount of splashed mass was calculated as a function of time, allowing a quantitative comparison between the two cases. Furthermore, using a passive scalar the amount of splashed mass per impinging droplet was also calculated. This work is supported by the Department of Energy, Office of Energy Efficiency and Renewable Energy (EERE) and the Department of Defense, Tank and Automotive Research, Development, and Engineering Center (TARDEC), under Award Number DE-EE0007292.
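The Yarin-Weiss model predicts a critical impact velocity above which a droplet train splashes, often quoted in the form u* = C (sigma/rho)^(1/4) nu^(1/8) f^(3/8). A hedged sketch (the prefactor C and the fluid properties below are illustrative assumptions):

```python
def yarin_weiss_splash_velocity(sigma, rho, nu, f, c=18.0):
    """Critical impact velocity u* = C (sigma/rho)^(1/4) nu^(1/8) f^(3/8)
    for splashing of a droplet train impacting at frequency f.
    The prefactor C (~17-18 in the literature) is an assumption here.
    sigma: surface tension [N/m], rho: density [kg/m^3],
    nu: kinematic viscosity [m^2/s], f: impact frequency [1/s]."""
    return c * (sigma / rho) ** 0.25 * nu ** 0.125 * f ** 0.375

# Ethanol-like properties (illustrative): surface tension 0.022 N/m,
# density 789 kg/m^3, kinematic viscosity 1.5e-6 m^2/s, f = 10 kHz
u_crit = yarin_weiss_splash_velocity(0.022, 789.0, 1.5e-6, 1.0e4)
print(f"splashing expected above ~{u_crit:.1f} m/s")
```

Note the combination is dimensionally a velocity, and the threshold rises with impact frequency, which is one reason neighboring-train interactions can shift the splashing onset.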
Progressive mechanical indentation of large-format Li-ion cells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Hsin; Kumar, Abhishek; Simunovic, Srdjan
We used large-format Li-ion cells to study the mechanical responses of single cells of thickness 6.5 mm and stacks of three cells under compressive loading. We carried out various sequences of increasing-depth indentations using a 1.0 inch (25.4 mm) diameter steel ball with a steel plate as a rigid support surface. The indentation depths were between 0.025″ and 0.250″, with main indentation increment tests of 0.025″ steps. Increment steps of 0.100″ and 0.005″ were used to pinpoint the onset of internal short circuit, which occurred between 0.245″ and 0.250″. The indented cells were disassembled and inspected for internal damage. Load vs. time curves were compared with the developed computer models. Separator thinning leading to the short circuit was simulated using both isotropic and anisotropic mechanical properties. This study shows that separators behave differently when tested as a single layer vs. a stack in a typical pouch cell. The collective responses of the multiple layers must be taken into account in failure analysis. A model that resolves the details of the individual internal cell components was able to simulate the internal deformation of the large-format cells and the onset of failure, assumed to coincide with the onset of internal short circuit.
Lew, S; Hämäläinen, M S; Okada, Y
2017-12-01
To evaluate whether a full-coverage fetal-maternal scanner can noninvasively monitor ongoing electrophysiological activity of maternal and fetal organs. A simulation study was carried out for a scanner with an array of magnetic field sensors placed all around the torso from the chest to the hip within a horizontal magnetic shielding enclosure. The magnetic fields from internal organs and an external noise source were computed for a pregnant woman with a 35-week old fetus. Signal processing methods were used to reject the external and internal interferences, to visualize uterine activity, and to detect activity of fetal heart and brain. External interference was reduced by a factor of 1000, sufficient for detecting signals from internal organs when combined with passive and active shielding. The scanner rejects internal interferences better than partial-coverage arrays. It can be used to estimate currents around the uterus. It clearly detects spontaneous activity from the fetal heart and brain without averaging and weaker evoked brain activity at all fetal head positions after averaging. The simulated device will be able to monitor the ongoing activity of the fetal and maternal organs. This type of scanner may become a novel tool in fetal medicine. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.
Progressive mechanical indentation of large-format Li-ion cells
NASA Astrophysics Data System (ADS)
Wang, Hsin; Kumar, Abhishek; Simunovic, Srdjan; Allu, Srikanth; Kalnaus, Sergiy; Turner, John A.; Helmers, Jacob C.; Rules, Evan T.; Winchester, Clinton S.; Gorney, Philip
2017-02-01
Large-format Li-ion cells were used to study the mechanical responses of single cells of thickness 6.5 mm and stacks of three cells under compressive loading. Various sequences of increasing-depth indentations were carried out using a 1.0 inch (25.4 mm) diameter steel ball with a steel plate as a rigid support surface. The indentation depths were between 0.025″ and 0.250″, with main indentation increment tests of 0.025″ steps. Increment steps of 0.100″ and 0.005″ were used to pinpoint the onset of internal-short that occurred between 0.245″ and 0.250″. The indented cells were disassembled and inspected for internal damage. Load vs. time curves were compared with the developed computer models. Separator thinning leading to the short circuit was simulated using both isotropic and anisotropic mechanical properties. Our study shows that separators behave differently when tested as a single layer vs. a stack in a typical pouch cell. The collective responses of the multiple layers must be taken into account in failure analysis. A model that resolves the details of the individual internal cell components was able to simulate the internal deformation of the large format cells and the onset of failure assumed to coincide with the onset of internal short circuit.
Methods and computer readable medium for improved radiotherapy dosimetry planning
Wessol, Daniel E.; Frandsen, Michael W.; Wheeler, Floyd J.; Nigg, David W.
2005-11-15
Methods and computer readable media are disclosed for ultimately developing a dosimetry plan for a treatment volume irradiated during radiation therapy with a radiation source concentrated internally within a patient or incident from an external beam. The dosimetry plan is available in near "real-time" because of the novel geometric model construction of the treatment volume which in turn allows for rapid calculations to be performed for simulated movements of particles along particle tracks therethrough. The particles are exemplary representations of alpha, beta or gamma emissions emanating from an internal radiation source during various radiotherapies, such as brachytherapy or targeted radionuclide therapy, or they are exemplary representations of high-energy photons, electrons, protons or other ionizing particles incident on the treatment volume from an external source. In a preferred embodiment, a medical image of a treatment volume irradiated during radiotherapy having a plurality of pixels of information is obtained.
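The particle-track idea described above can be illustrated with a toy Monte Carlo: photons enter a homogeneous slab and a depth-binned tally records where they first interact. This is a minimal sketch of track-based dose tallying, not the patented near-real-time geometry engine:

```python
import math
import random

def track_particles(mu, depth_cm, n_bins, n_particles=50_000, seed=1):
    """Toy Monte Carlo dose tally: photons enter a homogeneous slab
    (attenuation coefficient mu, 1/cm) and deposit all of their energy
    at the first interaction site, sampled along each particle track."""
    rng = random.Random(seed)
    dz = depth_cm / n_bins
    dose = [0.0] * n_bins
    for _ in range(n_particles):
        z = -math.log(1.0 - rng.random()) / mu  # sampled free path, cm
        if z < depth_cm:
            dose[int(z / dz)] += 1.0
    return [d / n_particles for d in dose]

profile = track_particles(mu=0.2, depth_cm=10.0, n_bins=10)
# the deposited fraction falls off with depth roughly as exp(-mu z)
print(profile[0], profile[-1])
```

A real planning calculation replaces the homogeneous slab with the patient-derived geometric model, and tracks secondary particles as well.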
Collecting data from a sensor network in a single-board computer
NASA Astrophysics Data System (ADS)
Casciati, F.; Casciati, S.; Chen, Z.-C.; Faravelli, L.; Vece, M.
2015-07-01
The EU-FP7 project SPARTACUS, currently in progress, involves the international cooperation of several partners toward the design and implementation of satellite-based asset tracking for supporting emergency management in crisis operations. Because of the emergency environment, one has to rely on low-power wireless communication. Therefore, the communication hardware and software must be designed to match requirements which can only be foreseen at the level of more or less likely scenarios. The latter aspect suggests extensive use of a simulator (instead of a real network of sensors) to cover extreme situations. The former, the power-consumption constraint, suggests the use of a minimal computer (Raspberry Pi) as data collector. In this paper, the results of a broad simulation campaign are reported in order to investigate the accuracy of the received data and the global power consumption for each of the considered scenarios.
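A simulation campaign of the kind described can be caricatured as a loop over scenarios that accumulates delivery accuracy and energy cost. The loss model and per-packet energy figures below are illustrative assumptions, not SPARTACUS parameters:

```python
import random

def simulate_scenario(n_nodes, n_samples, drop_prob,
                      tx_mj=0.3, retry_mj=0.01, seed=0):
    """Toy stand-in for one scenario of a simulation campaign: each
    node sends n_samples readings to the single-board collector;
    packets are lost with probability drop_prob. The per-packet energy
    costs (tx_mj, retry_mj) are illustrative assumptions."""
    rng = random.Random(seed)
    sent = n_nodes * n_samples
    received = 0
    energy_mj = 0.0
    for _ in range(sent):
        energy_mj += tx_mj
        if rng.random() >= drop_prob:
            received += 1
        else:
            energy_mj += retry_mj  # assumed backoff/retry overhead
    return received / sent, energy_mj

ratio, energy = simulate_scenario(n_nodes=20, n_samples=100, drop_prob=0.05)
print(f"delivery ratio {ratio:.3f}, total energy {energy:.1f} mJ")
```

Sweeping `drop_prob` and the energy constants over the scenario set yields exactly the two quantities the paper reports: received-data accuracy and global power consumption.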
RACORO Extended-Term Aircraft Observations of Boundary-Layer Clouds
NASA Technical Reports Server (NTRS)
Vogelmann, Andrew M.; McFarquhar, Greg M.; Ogren, John A.; Turner, David D.; Comstock, Jennifer M.; Feingold, Graham; Long, Charles N.; Jonsson, Haflidi H.; Bucholtz, Anthony; Collins, Don R.;
2012-01-01
Small boundary-layer clouds are ubiquitous over many parts of the globe and strongly influence the Earth's radiative energy balance. However, our understanding of these clouds is insufficient to solve pressing scientific problems. For example, cloud feedback represents the largest uncertainty amongst all climate feedbacks in general circulation models (GCMs). Several issues complicate understanding boundary-layer clouds and simulating them in GCMs. The high spatial variability of boundary-layer clouds poses an enormous computational challenge, since their horizontal dimensions and internal variability occur at spatial scales much finer than the computational grids used in GCMs. Aerosol-cloud interactions further complicate boundary-layer cloud measurement and simulation. Additionally, aerosols influence processes such as precipitation and cloud lifetime. An added complication is that at small scales (order meters to tens of meters) distinguishing cloud from aerosol is increasingly difficult, due to the effects of aerosol humidification, cloud fragments, and photon scattering between clouds.
Simulation Enabled Safeguards Assessment Methodology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robert Bean; Trond Bjornard; Thomas Larson
2007-09-01
It is expected that nuclear energy will be a significant component of future energy supplies. New facilities, operating under a strengthened international nonproliferation regime, will be needed. There is good reason to believe that virtual engineering applied to facility design, as well as to safeguards system design, will reduce total project cost and improve efficiency in the design cycle. The Simulation Enabled Safeguards Assessment MEthodology (SESAME) has been developed as a software package to provide this capability for nuclear reprocessing facilities. The software architecture is specifically designed for distributed computing, collaborative design efforts, and modular construction to allow step improvements in functionality. Drag-and-drop wireframe construction allows the user to select the desired components from a component warehouse, render the system for 3D visualization, and, linked to a set of physics libraries and/or computational codes, conduct process evaluations of the system they have designed.
NASA Astrophysics Data System (ADS)
Zerkle, Ronald D.; Prakash, Chander
1995-03-01
This viewgraph presentation summarizes some CFD experience at GE Aircraft Engines for flows in the primary gaspath of a gas turbine engine and in turbine blade cooling passages. It is concluded that application of the standard k-epsilon turbulence model with wall functions is not adequate for accurate CFD simulation of aerodynamic performance and heat transfer in the primary gas path of a gas turbine engine. New models are required in the near-wall region which include more physics than wall functions. The two-layer modeling approach appears attractive because of its modest computational complexity. In addition, improved CFD simulation of film cooling and turbine blade internal cooling passages will require anisotropic turbulence models. New turbulence models must be practical in order to have a significant impact on the engine design process. A coordinated turbulence modeling effort between NASA centers would be beneficial to the gas turbine industry.
2-D and 3-D mixing flow analyses of a scramjet-afterbody configuration
NASA Technical Reports Server (NTRS)
Baysal, Oktay; Eleshaky, Mohamed E.; Engelund, Walter C.
1989-01-01
A cold simulant gas study of propulsion/airframe integration for a hypersonic vehicle powered by a scramjet engine is presented. The specific heat ratio of the hot exhaust gases are matched by utilizing a cold mixture of argon and Freon-12. Solutions are obtained for a hypersonic corner flow and a supersonic rectangular flow in order to provide the upstream boundary conditions. The computational test examples also provide a comparison of this flow with that of air as the expanding supersonic jet, where the specific heats are assumed to be constant. It is shown that the three-dimensional computational fluid capabilities developed for these types of flow may be utilized to augment the conventional wind tunnel studies of scramjet afterbody flows using cold simulant exhaust gases, which in turn can help in the design of a scramjet internal-external nozzle.
NASA Astrophysics Data System (ADS)
Hernández Vera, Mario; Wester, Roland; Gianturco, Francesco Antonio
2018-01-01
We construct velocity map images of the proton transfer reaction between helium and the molecular hydrogen ion H2+. We perform simulations of imaging experiments at one representative total collision energy, taking into account the inherent aberrations of the velocity mapping, in order to explore the feasibility of direct comparisons between theory and future experiments planned in our laboratory. The asymptotic angular distributions of the fragments in 3D velocity space are determined from the quantum state-to-state differential reactive cross sections and reaction probabilities, which are computed using the time-independent coupled-channel hyperspherical coordinate method. The calculations employ an earlier ab initio potential energy surface computed at the FCI/cc-pVQZ level of theory. The present simulations indicate that the planned experiments would be selective enough to differentiate between product distributions resulting from different initial internal states of the reactants.
Rapid Parallel Calculation of shell Element Based On GPU
NASA Astrophysics Data System (ADS)
Wang, Jian Hua; Li, Guang Yao; Li, Sheng; Li, Guang Yao
2010-06-01
Long computing times have bottlenecked the application of the finite element method. In this paper, an effective method to speed up FEM calculation using a modern graphics processing unit and programmable rendering (shader) tools is put forward. The method devises a representation of element information suited to the features of the GPU, converts all element calculations into a film-rendering process, carries out the internal-force calculation for all elements in this way, and overcomes the low level of parallelism previously achievable on a single computer. Studies show that this method can greatly improve efficiency and shorten calculation times. Results of emulation calculations for an elasticity problem with a large number of sheet-metal elements demonstrate that the GPU-based parallel simulation is faster than the CPU-based one. This approach is useful and efficient for solving engineering problems.
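The core of the GPU speedup described above is replacing a per-element loop with one data-parallel operation over all elements. A sketch using vectorized NumPy as a stand-in for a GPU kernel, with toy 1D two-node spring elements instead of shell elements (an assumption for brevity):

```python
import numpy as np

def internal_forces(k, u_elem):
    """Internal forces f_e = k * (u2 - u1) for many 1D two-node elements
    at once: the per-element loop becomes a single data-parallel array
    operation, the same restructuring a GPU kernel applies per thread
    (toy spring elements stand in for shell elements here)."""
    stretch = u_elem[:, 1] - u_elem[:, 0]  # relative displacement per element
    return k * stretch

# One million elements, each stretched by 0.001 with stiffness 210.0
u = np.zeros((1_000_000, 2))
u[:, 1] = 0.001
f = internal_forces(210.0, u)
print(f[0])  # per-element force, ~0.21
```

On a GPU the same mapping assigns one thread (or, in the paper's rendering formulation, one fragment) to each element, so wall time is governed by memory bandwidth rather than loop count.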
NASA Astrophysics Data System (ADS)
Passerini, Tiziano; Veneziani, Alessandro; Sangalli, Laura; Secchi, Piercesare; Vantini, Simone
2010-11-01
In cerebral blood circulation, the interplay of arterial geometrical features and flow dynamics is thought to play a significant role in the development of aneurysms. In the framework of the Aneurisk project, patient-specific morphology reconstructions were conducted with the open-source software VMTK (www.vmtk.org) on a set of computational angiography images provided by Ospedale Niguarda (Milano, Italy). Computational fluid dynamics (CFD) simulations were performed with a software based on the library LifeV (www.lifev.org). The joint statistical analysis of geometries and simulations highlights the possible association of certain spatial patterns of radius, curvature and shear load along the Internal Carotid Artery (ICA) with the presence, position and previous event of rupture of an aneurysm in the entire cerebral vasculature. Moreover, some possible landmarks are identified to be monitored for the assessment of a Potential Rupture Risk Index.
TH-E-18A-01: Developments in Monte Carlo Methods for Medical Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Badal, A; Zbijewski, W; Bolch, W
Monte Carlo simulation methods are widely used in medical physics research and are starting to be implemented in clinical applications such as radiation therapy planning systems. Monte Carlo simulations offer the capability to accurately estimate quantities of interest that are challenging to measure experimentally while taking into account the realistic anatomy of an individual patient. Traditionally, practical application of Monte Carlo simulation codes in diagnostic imaging was limited by the need for large computational resources or long execution times. However, recent advancements in high-performance computing hardware, combined with a new generation of Monte Carlo simulation algorithms and novel postprocessing methods, are allowing for the computation of relevant imaging parameters of interest, such as patient organ doses and scatter-to-primary ratios in radiographic projections, in just a few seconds using affordable computational resources. Programmable graphics processing units (GPUs), for example, provide a convenient, affordable platform for parallelized Monte Carlo executions that yield simulation speeds on the order of 10^7 x-rays/s. Even with GPU acceleration, however, Monte Carlo simulation times can be prohibitive for routine clinical practice. To reduce simulation times further, variance reduction techniques can be used to alter the probabilistic models underlying the x-ray tracking process, resulting in lower variance in the results without biasing the estimates. Other complementary strategies for further reductions in computation time are denoising of the Monte Carlo estimates and estimating (scoring) the quantity of interest at a sparse set of sampling locations (e.g., at a small number of detector pixels in a scatter simulation) followed by interpolation.
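The "sparse scoring plus interpolation" strategy works because scatter signals vary slowly across the detector. A hedged toy illustration (the scatter profile, pixel count, and history count below are invented for illustration; Poisson counting noise stands in for Monte Carlo variance):

```python
import numpy as np

rng = np.random.default_rng(0)
pixels = np.arange(256)
# Smooth, slowly varying "true" scatter profile across a 256-pixel detector row.
true_scatter = 100.0 + 40.0 * np.exp(-((pixels - 128) / 60.0) ** 2)

sparse = pixels[::16]                        # tally at every 16th pixel only
n_hist = 10_000                              # histories per scored pixel
# Poisson noise models the stochastic Monte Carlo tally at each scored pixel.
tally = rng.poisson(true_scatter[sparse] * n_hist) / n_hist

# Fill in the unscored pixels by linear interpolation.
estimate = np.interp(pixels, sparse, tally)
rel_err = np.abs(estimate - true_scatter) / true_scatter
```

Scoring 1/16 of the pixels cuts the tally cost accordingly, while the interpolated estimate stays within a few percent of the true profile because the signal is smooth.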
Beyond reduction of the computational resources required for performing Monte Carlo simulations in medical imaging, the use of accurate representations of patient anatomy is crucial to the virtual generation of medical images and accurate estimation of radiation dose and other imaging parameters. For this, detailed computational phantoms of the patient anatomy must be utilized and implemented within the radiation transport code. Computational phantoms presently come in one of three format types, and in one of four morphometric categories. Format types include stylized (mathematical equation-based), voxel (segmented CT/MR images), and hybrid (NURBS and polygon mesh surfaces). Morphometric categories include reference (small library of phantoms by age at 50th height/weight percentile), patient-dependent (larger library of phantoms at various combinations of height/weight percentiles), patient-sculpted (phantoms altered to match the patient's unique outer body contour), and finally, patient-specific (an exact representation of the patient with respect to both body contour and internal anatomy). The existence and availability of these phantoms represents a very important advance for the simulation of realistic medical imaging applications using Monte Carlo methods. New Monte Carlo simulation codes need to be thoroughly validated before they can be used to perform novel research. Ideally, the validation process would involve comparison of results with those of an experimental measurement, but accurate replication of experimental conditions can be very challenging. It is very common to validate new Monte Carlo simulations by replicating previously published simulation results of similar experiments. This process, however, is commonly problematic due to the lack of sufficient information in the published reports of previous work so as to be able to replicate the simulation in detail. 
To aid in this process, the AAPM Task Group 195 prepared a report in which six different imaging research experiments commonly performed using Monte Carlo simulations are described and their results provided. The simulation conditions of all six cases are provided in full detail, with all necessary data on material composition, source, geometry, scoring and other parameters provided. The results of these simulations when performed with the four most common publicly available Monte Carlo packages are also provided in tabular form. The Task Group 195 Report will be useful for researchers needing to validate their Monte Carlo work, and for trainees needing to learn Monte Carlo simulation methods. In this symposium we will review the recent advancements in high-performance computing hardware enabling the reduction in computational resources needed for Monte Carlo simulations in medical imaging. We will review variance reduction techniques commonly applied in Monte Carlo simulations of medical imaging systems and present implementation strategies for efficient combination of these techniques with GPU acceleration. Trade-offs involved in Monte Carlo acceleration by means of denoising and “sparse sampling” will be discussed. A method for rapid scatter correction in cone-beam CT (<5 min/scan) will be presented as an illustration of the simulation speeds achievable with optimized Monte Carlo simulations. We will also discuss the development, availability, and capability of the various combinations of computational phantoms for Monte Carlo simulation of medical imaging systems. Finally, we will review some examples of experimental validation of Monte Carlo simulations and will present the AAPM Task Group 195 Report. Learning Objectives: Describe the advances in hardware available for performing Monte Carlo simulations in high performance computing environments. 
Explain variance reduction, denoising and sparse sampling techniques available for reduction of computational time needed for Monte Carlo simulations of medical imaging. List and compare the computational anthropomorphic phantoms currently available for more accurate assessment of medical imaging parameters in Monte Carlo simulations. Describe experimental methods used for validation of Monte Carlo simulations in medical imaging. Describe the AAPM Task Group 195 Report and its use for validation and teaching of Monte Carlo simulations in medical imaging.
Numerical investigations in three-dimensional internal flows
NASA Astrophysics Data System (ADS)
Rose, William C.
1988-08-01
An investigation into the use of computational fluid dynamics (CFD) was performed to examine the heat transfer rates expected within the NASA Ames 100-megawatt arc heater nozzle. This nozzle was tentatively designed to provide research capability for a direct-connect combustion experiment related to the National Aero-Space Plane (NASP) aircraft, and is expected to simulate the flow field entering the combustor section. It was found that extremely fine grids, that is, very small mesh spacing near the wall, are required to accurately model the heat transfer process; in fact, the grid must contain a point within the laminar sublayer if results are to be taken directly from a numerical simulation code. In the present study, an alternative to this very fine mesh and its attendant increase in computational time was invoked, based on a wall-function method. It was shown that solutions could be obtained that give accurate indications of the surface heat transfer rate throughout the nozzle in approximately 1/100 of the computer time required to perform the simulation directly, without the wall-function implementation. Finally, the maximum heating value in the throat region of the proposed slit nozzle for the 100-megawatt arc heater was shown to be approximately 6 MW per square meter.
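The wall-function idea is to skip resolving the laminar sublayer and instead relate the velocity at the first off-wall grid point to the wall shear through the log law. The paper's exact formulation is not given; a minimal sketch using the classic log law u+ = (1/κ) ln(E y+) with standard constants, solved for the friction velocity by fixed-point iteration (the flow values are illustrative assumptions):

```python
import math

KAPPA, E = 0.41, 9.0   # standard log-law constants

def friction_velocity(u, y, nu, tol=1e-10):
    """Solve u / u_tau = (1/kappa) * ln(E * y * u_tau / nu) for u_tau.

    u: velocity at the first grid point, y: its wall distance, nu: viscosity.
    The fixed-point map is a contraction here, so plain iteration converges.
    """
    u_tau = 0.05 * u                       # rough initial guess
    for _ in range(200):
        u_tau_new = KAPPA * u / math.log(E * y * u_tau / nu)
        if abs(u_tau_new - u_tau) < tol:
            break
        u_tau = u_tau_new
    return u_tau

# Illustrative air-like values: 20 m/s at 1 mm from the wall.
u_tau = friction_velocity(u=20.0, y=1e-3, nu=1.5e-5)
```

The wall shear stress then follows as tau_w = rho * u_tau**2, without ever placing a grid point inside the sublayer; this is the source of the ~100x cost saving the abstract reports.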
Ören, Ünal; Hiller, Mauritius; Andersson, M
2017-04-28
A Monte Carlo-based stand-alone program, IDACstar (Internal Dose Assessment by Computer), was developed to perform radiation dose calculations using complex voxel simulations. To test the program, two irradiation situations were simulated: a hypothetical contamination case with 600 MBq of 99mTc and an extravasation case involving 370 MBq of 18F-FDG. The effective dose was estimated to be 0.042 mSv for the contamination case and 4.5 mSv for the extravasation case. IDACstar has demonstrated that dosimetry results for contamination or extravasation cases can be acquired with great ease. IDACstar provides an effective tool for radiation protection applications, allowing physicists at nuclear medicine departments to easily quantify the radiation risk of stochastic effects when a radiation accident has occurred.
Multiple Exposure of Rendezvous Docking Simulator - Gemini Program
1964-02-07
Multiple exposure of the Rendezvous Docking Simulator. Francis B. Smith described the simulator as follows: The rendezvous and docking operations of the Gemini spacecraft with the Agena and of the Apollo Command Module with the Lunar Excursion Module have been the subject of simulator studies for several years. This figure illustrates the Gemini-Agena rendezvous docking simulator at Langley. The Gemini spacecraft was supported in a gimbal system by an overhead crane and gantry arrangement which provided 6 degrees of freedom - roll, pitch, yaw, and translation in any direction - all controllable by the astronaut in the spacecraft. Here again the controls fed into a computer which in turn provided an input to the servos driving the spacecraft so that it responded to control motions in a manner which accurately simulated the Gemini spacecraft. -- Published in Barton C. Hacker and James M. Grimwood, On the Shoulders of Titans: A History of Project Gemini, NASA SP-4203; Francis B. Smith, Simulators for Manned Space Research, Paper presented at the 1966 IEEE International Convention, March 21-25, 1966.
NASA Astrophysics Data System (ADS)
Li, Ye; Yuan, Bing; Yang, Kai; Zhang, Xianren; Yan, Bing; Cao, Dapeng
2017-02-01
Nanoparticles (NPs) functionalized with charged ligands are of particular significance due to their potential drug/gene delivery and biomedical applications. However, the molecular mechanism of endocytosis of charged NPs by cells, especially the effect of NP-NP and NP-biomembrane interactions on the internalization pathways, is still poorly understood. In this work, we systematically investigate the internalization behavior of positively charged NPs by combining experiments with dissipative particle dynamics (DPD) simulations. We experimentally find a highly counterintuitive phenomenon: multiple positively charged NPs prefer to enter cells cooperatively even though like-charged NPs experience obvious electrostatic repulsion. Furthermore, we use DPD simulations to confirm the experimental findings and reveal that the cooperative endocytosis of like-charged NPs is caused by the interplay of particle size, the density of charged ligands on the particle surface, and the local concentration of NPs. Importantly, we not only observe the normal cooperative endocytosis of like-charged NPs at the cell biomembrane, as in the neutral-NP case, but also predict a ‘bud’ mode of cooperative endocytosis of like-charged NPs that is absent in the neutral-NP case. The results indicate that electrostatic repulsion between the positively charged nanoparticles plays an important role in the ‘bud’ cooperative endocytosis of like-charged NPs.
Lessons Learned from Numerical Simulations of the F-16XL Aircraft at Flight Conditions
NASA Technical Reports Server (NTRS)
Rizzi, Arthur; Jirasek, Adam; Lamar, John; Crippa, Simone; Badcock, Kenneth; Boelens, Okko
2009-01-01
Nine groups participating in the Cranked Arrow Wing Aerodynamics Project International (CAWAPI) have contributed steady and unsteady viscous simulations of a full-scale, semi-span model of the F-16XL aircraft. Three different categories of flight Reynolds/Mach number combinations were computed and compared with flight-test measurements for the purpose of code validation and improved understanding of the flight physics. Steady-state simulations were performed with several turbulence models of differing complexity that require no topology information and that overcome Boussinesq-assumption problems in vortical flows. Detached-eddy simulation (DES) and its successor, delayed detached-eddy simulation (DDES), have been used to compute the time-accurate flow development. Common structured and unstructured grids as well as individually adapted unstructured grids were used. Although discrepancies are observed in the comparisons, overall reasonable agreement is demonstrated for surface pressure distribution, local skin friction, and boundary-layer velocity profiles at subsonic speeds. The physical modeling, steady or unsteady, and the grid resolution both contribute to the discrepancies observed in the comparisons with flight data, but at this time it cannot be determined how much each contributes to the whole. Overall, the technology readiness of CFD simulation for the study of vehicle performance has matured since 2001 such that it can be used today with a reasonable level of confidence for complex configurations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nikolic, R J
This month's issue has the following articles: (1) Dawn of a New Era of Scientific Discovery - Commentary by Edward I. Moses; (2) At the Frontiers of Fundamental Science Research - Collaborators from national laboratories, universities, and international organizations are using the National Ignition Facility to probe key fundamental science questions; (3) Livermore Responds to Crisis in Post-Earthquake Japan - More than 70 Laboratory scientists provided round-the-clock expertise in radionuclide analysis and atmospheric dispersion modeling as part of the nation's support to Japan following the March 2011 earthquake and nuclear accident; (4) A Comprehensive Resource for Modeling, Simulation, and Experiments - A new Web-based resource called MIDAS is a central repository for material properties, experimental data, and computer models; and (5) Finding Data Needles in Gigabit Haystacks - Livermore computer scientists have developed a novel computer architecture based on 'persistent' memory to ease data-intensive computations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Othman, M. N. K.; Zuradzman, M. Razlan; Hazry, D.; Khairunizam, Wan; Shahriman, A. B.; Yaacob, S.; Ahmed, S. Faiz; Hussain, Abadalsalam T.
2014-12-04
This paper explains the analysis of the internal air flow velocity of a bladeless vertical takeoff and landing (VTOL) micro aerial vehicle (MAV) with a hemispherical body. In mechanical design, several analyses should be done before producing a prototype to ensure the product's effectiveness and efficiency. Two types of analysis can be applied in mechanical design: mathematical modeling and computational fluid dynamics. In this analysis, computational fluid dynamics (CFD) was used by means of the SolidWorks Flow Simulation software. The idea arose to overcome the problem of the ordinary quadrotor UAV, which is comparatively large because it uses four rotors whose propellers are exposed to the environment. The bladeless MAV body is designed to protect all electronic parts, which means it can be used in rainy conditions. It is also intended to increase the thrust produced by the ducted propeller compared with an exposed propeller. The analysis shows that the air flow velocity in the ducted area increases to twice the inlet velocity, which means the duct contributes to the increase in air velocity.
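The reported doubling of velocity in the duct is consistent with simple mass conservation. As a hedged arithmetic sketch (the inlet velocity and areas below are illustrative assumptions, not values from the paper), incompressible continuity gives:

```python
def duct_velocity(v_in, area_in, area_duct):
    """Incompressible continuity: A_in * v_in = A_duct * v_duct."""
    return v_in * (area_in / area_duct)

# Illustrative numbers: halving the flow area doubles the velocity.
v_duct = duct_velocity(v_in=5.0, area_in=0.02, area_duct=0.01)  # m/s, m^2
```

A factor-of-two area contraction yields the factor-of-two velocity increase the CFD analysis reports; the CFD adds what continuity alone cannot, namely the local flow structure and losses.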
NASA Technical Reports Server (NTRS)
Flourens, F.; Morel, T.; Gauthier, D.; Serafin, D.
1991-01-01
Numerical techniques such as Finite Difference Time Domain (FDTD) computer programs, which were first developed to analyze the external electromagnetic environment of an aircraft during wave illumination, a lightning event, or any kind of current injection, are now very powerful investigative tools. The program, called GORFF-VE, was extended to compute the inner electromagnetic fields generated by the penetration of the outer fields through large apertures in the all-metallic body. The internal fields can then drive the electrical response of a cable network. The coupling between the inside and the outside of the helicopter is implemented using Huygens' principle. Moreover, the spectacular increase in computer resources, in calculation speed and memory capacity, allows structures as complex as those of helicopters to be modeled accurately. This numerical model was exploited, first, to analyze the electromagnetic environment of an in-flight helicopter for several injection configurations, and second, to design a coaxial return path to simulate the lightning-aircraft interaction with a strong current injection. The E-field and current mappings are the result of these calculations.
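The FDTD machinery underlying such codes can be sketched in one dimension. This is a hedged toy (GORFF-VE itself is a 3D code and not public; the grid size, Courant number, and Gaussian soft source below are illustrative), using the standard staggered leapfrog update of the E and H fields in normalized units:

```python
import numpy as np

n, steps = 200, 150
ez = np.zeros(n)        # electric field at integer grid points
hy = np.zeros(n)        # magnetic field at half-offset points
c = 0.5                 # Courant number (<= 1 keeps the 1D scheme stable)

for t in range(steps):
    # H update, half a time step offset from the E update (Yee leapfrog).
    hy[:-1] += c * (ez[1:] - ez[:-1])
    # E update from the spatial difference of H.
    ez[1:] += c * (hy[1:] - hy[:-1])
    # Soft Gaussian source, a crude stand-in for a current-injection drive.
    ez[50] += np.exp(-((t - 30) / 8.0) ** 2)
```

The injected pulse splits and propagates in both directions at the grid speed; in the aircraft codes the same update runs in 3D over a discretized airframe, with apertures letting the exterior fields couple to the interior.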
The biomechanical effect of clavicular shortening on shoulder muscle function, a simulation study.
Hillen, Robert J; Bolsterlee, Bart; Veeger, Dirkjan H E J
2016-08-01
Malunion of the clavicle with shortening after midshaft fractures can give rise to long-term residual complaints. The cause of these complaints is as yet unclear. In this study we analysed data from an earlier experimental cadaveric study on changes in shoulder biomechanics with progressive shortening of the clavicle. The data were used in a musculoskeletal computer model to examine the effect of clavicle shortening on muscle function, expressed as maximal muscle moments for abduction and internal rotation. Clavicle shortening results in changes in maximal muscle moments around the shoulder girdle. At 3.6 cm of shortening, the mean maximal muscle moments decreased by 16% around the sternoclavicular joint for both ab- and adduction, increased by 37% around the acromioclavicular joint for adduction, and decreased by 32% for internal rotation around the glenohumeral joint in the resting position. Shortening of the clavicle affects muscle function in the shoulder in a computer model. This may explain the residual complaints after malunion with shortening. Basic Science Study. Biomechanics. Cadaveric data and computer model.
NASA Astrophysics Data System (ADS)
Zhou, H.; Yu, X.; Chen, C.; Zeng, L.; Lu, S.; Wu, L.
2016-12-01
In this research, we combined synchrotron-based X-ray micro-computed tomography (SR-μCT) with the three-dimensional lattice Boltzmann (LB) method to quantify how changes in pore space architecture affected the macroscopic hydraulic properties of two clayey soils amended with biochar. SR-μCT was used to characterize pore structures of the soils before and after biochar addition. The high-resolution soil pore structures were then directly used as internal boundary conditions for three-dimensional water flow simulations with the LB method, which was accelerated by graphics processing unit (GPU) parallel computing. It was shown that, due to the changes in soil pore geometry, the application of biochar increased the soil permeability by at least one order of magnitude and decreased the tortuosity by 20-30%. This work was the first physics-based modeling study of the effect of biochar amendment on soil hydraulic properties. The developed theories and techniques have promising potential for understanding the mechanisms of water and nutrient transport in soil at the pore scale.
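The step of turning a segmented tomogram into boundary conditions for an LB solver can be sketched simply. A hedged toy (the geometry below is a synthetic random 2D binary array, 1 = pore, 0 = solid, whereas the study uses real SR-μCT reconstructions): compute porosity and flag the solid sites adjacent to pore space, which is where bounce-back boundary conditions are applied in a lattice Boltzmann simulation.

```python
import numpy as np

rng = np.random.default_rng(1)
pore = (rng.random((64, 64)) < 0.6).astype(np.uint8)   # ~60% porosity toy map

porosity = pore.mean()

# Bounce-back mask: solid sites with at least one 4-connected pore neighbor.
solid = pore == 0
has_pore_neighbor = np.zeros_like(pore)
has_pore_neighbor[1:, :] |= pore[:-1, :]
has_pore_neighbor[:-1, :] |= pore[1:, :]
has_pore_neighbor[:, 1:] |= pore[:, :-1]
has_pore_neighbor[:, :-1] |= pore[:, 1:]
boundary = solid & (has_pore_neighbor == 1)
```

In the full workflow these masks, derived from the tomogram, are exactly the "internal boundary conditions" the abstract refers to; the flow solver then never needs an explicit surface mesh of the pore walls.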
Computational Challenges in the Analysis of Petrophysics Using Microtomography and Upscaling
NASA Astrophysics Data System (ADS)
Liu, J.; Pereira, G.; Freij-Ayoub, R.; Regenauer-Lieb, K.
2014-12-01
Microtomography provides detailed 3D internal structures of rocks at micrometer to tens-of-nanometers resolution and is quickly turning into a new technology for studying the petrophysical properties of materials. An important step is the upscaling of these properties, since micron or sub-micron resolution can only be achieved on samples at the millimeter scale or smaller. We present here a recently developed computational workflow for the analysis of microstructures, including the upscaling of material properties. Computations of properties are first performed using conventional material science simulations at the micro to nano scale. The subsequent upscaling of these properties is done by a novel renormalization procedure based on percolation theory. We have tested the workflow using different rock samples and biological and food science materials. We have also applied the technique to high-resolution time-lapse synchrotron CT scans. In this contribution we focus on the computational challenges that arise from the big-data problem of analyzing petrophysical properties and its subsequent upscaling. We discuss the following challenges: 1) Characterization of microtomography for extremely large data sets - our current capability. 2) Computational fluid dynamics simulations at the pore scale for permeability estimation - methods, computing cost, and accuracy. 3) Solid mechanical computations at the pore scale for estimating elasto-plastic properties - computational stability, cost, and efficiency. 4) Extracting critical exponents from derivative models for scaling laws - models, finite element meshing, and accuracy. Significant progress in each of these challenges is necessary to transform microtomography from the current research problem into a robust computational big-data tool for multi-scale scientific and engineering problems.
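Renormalization-style upscaling of a segmented tomogram can be illustrated with a toy coarse-graining rule. This is a hedged stand-in: the workflow above uses a percolation-theory-based renormalization, whereas the sketch below coarse-grains a binary pore map by a simple 2x2 majority rule (ties counted as pore), halving the resolution at each level:

```python
import numpy as np

def coarsen(grid):
    """Majority-rule 2x2 block renormalization of a 0/1 array."""
    h, w = grid.shape
    block_sums = grid.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))
    return (block_sums >= 2).astype(np.uint8)   # ties count as pore

rng = np.random.default_rng(7)
fine = (rng.random((64, 64)) < 0.7).astype(np.uint8)   # synthetic pore map

levels = [fine]
while levels[-1].shape[0] > 4:
    levels.append(coarsen(levels[-1]))          # 64 -> 32 -> 16 -> 8 -> 4
```

Each pass reduces the data volume by 4x, which is the essential mechanism for taming the big-data problem; the real procedure additionally preserves percolation behavior, which a plain majority rule does not guarantee.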
Large Eddy Simulation of a Supercritical Turbulent Mixing Layer
NASA Astrophysics Data System (ADS)
Sheikhi, Reza; Hadi, Fatemeh; Safari, Mehdi
2017-11-01
Supercritical turbulent flows are relevant to a wide range of applications such as supercritical power cycles, gas turbine combustors, rocket propulsion and internal combustion engines. Large eddy simulation (LES) analysis of such flows involves solving mass, momentum, energy and scalar transport equations with inclusion of generalized diffusion fluxes. These equations are combined with a real gas equation of state and the corresponding thermodynamic mixture variables. Subgrid scale models are needed for not only the conventional convective terms but also the additional high pressure effects arising due to the nonlinearity associated with generalized diffusion fluxes and real gas equation of state. In this study, LES is carried out to study the high pressure turbulent mixing of methane with carbon dioxide in a temporally developing mixing layer under supercritical condition. LES results are assessed by comparing with data obtained from direct numerical simulation (DNS) of the same layer. LES predictions agree favorably with DNS data and represent several key supercritical turbulent flow features such as high density gradient regions. Supported by DOE Grant SC0017097; computational support is provided by DOE National Energy Research Scientific Computing Center.
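A real-gas equation of state is central to the formulation described above. As a hedged sketch (the abstract does not state which cubic equation of state is used), the widely used Peng-Robinson form gives the pressure of, e.g., pure methane at supercritical conditions; the critical constants below are standard literature values:

```python
import math

R = 8.314462618   # universal gas constant, J/(mol K)

def peng_robinson_pressure(T, v, Tc, Pc, omega):
    """Peng-Robinson pressure for one mole; v is molar volume in m^3/mol."""
    a = 0.45724 * R**2 * Tc**2 / Pc
    b = 0.07780 * R * Tc / Pc
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1.0 + kappa * (1.0 - math.sqrt(T / Tc))) ** 2
    return R * T / (v - b) - a * alpha / (v * v + 2.0 * b * v - b * b)

# Methane: Tc = 190.6 K, Pc = 4.599 MPa, acentric factor omega = 0.011.
P = peng_robinson_pressure(T=300.0, v=1e-3, Tc=190.6, Pc=4.599e6, omega=0.011)
ideal = R * 300.0 / 1e-3    # ideal-gas pressure at the same state
```

The attractive term lowers the pressure below the ideal-gas value at this density; it is this nonlinearity, entering the filtered equations, that generates the extra subgrid terms the abstract mentions.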
Turbulent flow in a 180 deg bend: Modeling and computations
NASA Technical Reports Server (NTRS)
Kaul, Upender K.
1989-01-01
A low Reynolds number k-epsilon turbulence model was presented which yields accurate predictions of the kinetic energy near the wall. The model is validated with the experimental channel flow data of Kreplin and Eckelmann. The predictions are also compared with earlier results from direct simulation of turbulent channel flow. The model is especially useful for internal flows where the inflow boundary condition of epsilon is not easily prescribed. The model partly derives from some observations based on earlier direct simulation results of near-wall turbulence. The low Reynolds number turbulence model together with an existing curvature correction appropriate to spinning cylinder flows was used to simulate the flow in a U-bend with the same radius of curvature as the Space Shuttle Main Engine (SSME) Turn-Around Duct (TAD). The present computations indicate a space varying curvature correction parameter as opposed to a constant parameter as used in the spinning cylinder flows. Comparison with limited available experimental data is made. The comparison is favorable, but detailed experimental data is needed to further improve the curvature model.
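The paper's specific low-Reynolds-number k-epsilon formulation is not reproduced here; as a hedged illustration of the generic near-wall idea, the classic van Driest damping reduces the mixing length (and hence the eddy viscosity) smoothly to zero approaching the wall, recovering the undamped log-layer value far from it:

```python
import math

KAPPA, A_PLUS = 0.41, 26.0   # standard von Karman and van Driest constants

def mixing_length(y_plus):
    """Van Driest-damped mixing length, in wall units."""
    return KAPPA * y_plus * (1.0 - math.exp(-y_plus / A_PLUS))

near_wall = mixing_length(1.0)     # strongly damped inside the sublayer
log_layer = mixing_length(100.0)   # essentially the undamped kappa * y+
```

Low-Reynolds-number k-epsilon models achieve the same suppression through damping functions on the eddy viscosity and modified epsilon boundary conditions, which is what lets them predict the near-wall kinetic energy without wall functions.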
A fast mass spring model solver for high-resolution elastic objects
NASA Astrophysics Data System (ADS)
Zheng, Mianlun; Yuan, Zhiyong; Zhu, Weixu; Zhang, Guian
2017-03-01
Real-time simulation of elastic objects is of great importance for computer graphics and virtual reality applications. The fast mass spring model solver can achieve visually realistic simulation in an efficient way. Unfortunately, this method suffers from resolution limitations and a lack of mechanical realism for a surface geometry model, which greatly restricts its application. To tackle these problems, in this paper we propose a fast mass spring model solver for high-resolution elastic objects. First, we project the complex surface geometry model into a set of uniform grid cells serving as cages, via the mean value coordinates method, to reflect its internal structure and mechanical properties. Then, we replace the original Cholesky decomposition method in the fast mass spring model solver with a conjugate gradient method, which makes the solver more efficient for detailed surface geometry models. Finally, we propose a graphics processing unit accelerated parallel algorithm for the conjugate gradient method. Experimental results show that our method can realize efficient deformation simulation of 3D elastic objects with visual realism and physical fidelity, and has great potential for applications in computer animation.
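The solver substitution described above, replacing a direct Cholesky factorization with conjugate gradient (CG) iteration, works because the global step of the fast mass spring solver is a symmetric positive definite (SPD) linear system. A hedged sketch of plain CG on a small synthetic SPD system (not an actual spring network), of the kind that parallelizes well because its cost is dominated by matrix-vector products:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=500):
    """Solve A x = b for SPD A by the conjugate gradient method."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p    # new search direction, A-conjugate to p
        rs = rs_new
    return x

rng = np.random.default_rng(3)
B = rng.standard_normal((8, 8))
A = B @ B.T + 8.0 * np.eye(8)        # synthetic SPD test matrix
b = rng.standard_normal(8)
x = conjugate_gradient(A, b)
```

Unlike Cholesky, CG never forms or stores a factorization, so memory stays linear in the number of nonzeros and each iteration maps naturally onto the GPU, which is the design choice the paper exploits.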
Self-Evaluation of Decision-Making: A General Bayesian Framework for Metacognitive Computation
Fleming, Stephen M; Daw, Nathaniel D
2017-01-01
People are often aware of their mistakes, and report levels of confidence in their choices that correlate with objective performance. These metacognitive assessments of decision quality are important for the guidance of behavior, particularly when external feedback is absent or sporadic. However, a computational framework that accounts for both confidence and error detection is lacking. In addition, accounts of dissociations between performance and metacognition have often relied on ad hoc assumptions, precluding a unified account of intact and impaired self-evaluation. Here we present a general Bayesian framework in which self-evaluation is cast as a “second-order” inference on a coupled but distinct decision system, computationally equivalent to inferring the performance of another actor. Second-order computation may ensue whenever there is a separation between internal states supporting decisions and confidence estimates over space and/or time. We contrast second-order computation against simpler first-order models in which the same internal state supports both decisions and confidence estimates. Through simulations we show that second-order computation provides a unified account of different types of self-evaluation often considered in separate literatures, such as confidence and error detection, and generates novel predictions about the contribution of one’s own actions to metacognitive judgments. In addition, the model provides insight into why subjects’ metacognition may sometimes be better or worse than task performance. We suggest that second-order computation may underpin self-evaluative judgments across a range of domains. PMID: 28004960
Impact of detector simulation in particle physics collider experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elvira, V. Daniel
Through the last three decades, precise simulation of the interactions of particles with matter and modeling of detector geometries has proven to be of critical importance to the success of the international high-energy physics experimental programs. For example, the detailed detector modeling and accurate physics of the Geant4-based simulation software of the CMS and ATLAS particle physics experiments at the European Center of Nuclear Research (CERN) Large Hadron Collider (LHC) was a determinant factor for these collaborations to deliver physics results of outstanding quality faster than any hadron collider experiment ever before. This review article highlights the impact of detectormore » simulation on particle physics collider experiments. It presents numerous examples of the use of simulation, from detector design and optimization, through software and computing development and testing, to cases where the use of simulation samples made a difference in the accuracy of the physics results and publication turnaround, from data-taking to submission. It also presents the economic impact and cost of simulation in the CMS experiment. Future experiments will collect orders of magnitude more data, taxing heavily the performance of simulation and reconstruction software for increasingly complex detectors. Consequently, it becomes urgent to find solutions to speed up simulation software in order to cope with the increased demand in a time of flat budgets. The study ends with a short discussion on the potential solutions that are being explored, by leveraging core count growth in multicore machines, using new generation coprocessors, and re-engineering of HEP code for concurrency and parallel computing.« less
Impact of detector simulation in particle physics collider experiments
Elvira, V. Daniel
2017-06-01
Computational Methods for Configurational Entropy Using Internal and Cartesian Coordinates.
Hikiri, Simon; Yoshidome, Takashi; Ikeguchi, Mitsunori
2016-12-13
The configurational entropy of solute molecules is a crucially important quantity for studying various biophysical processes. Consequently, it is necessary to establish an efficient quantitative computational method to calculate configurational entropy as accurately as possible. In the present paper, we investigate the quantitative performance of the quasi-harmonic and related computational methods, including widely used methods implemented in popular molecular dynamics (MD) software packages, compared with the Clausius method, which is capable of accurately computing the change of configurational entropy upon a temperature change. In particular, we focused on the choice of coordinate system (i.e., internal or Cartesian coordinates). The Boltzmann-quasi-harmonic (BQH) method using internal coordinates outperformed all six methods examined here. The introduction of improper torsions improves the performance of the BQH method, and the anharmonicity of proper torsions in proteins is identified as the origin of its superior performance. In contrast, widely used methods implemented in MD packages show rather poor performance. In addition, the enhanced sampling of replica-exchange MD simulations was found to be efficient for the convergence of entropy calculations. The BQH method was also reasonably accurate for folding/unfolding transitions of a small protein, Chignolin. However, for the folding entropy, the independent term of the BQH method without the correlation term was the most accurate among the methods considered in this study, because the quasi-harmonic approximation of the correlation term was no longer valid for the divergent unfolded structures.
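The quasi-harmonic family of methods compared above shares one core step: diagonalize a coordinate covariance matrix from the MD trajectory and score each eigenmode with the quantum harmonic-oscillator entropy. A minimal sketch of that step in reduced units (kB = ħ = T = 1; an illustration of the generic quasi-harmonic idea, not the BQH method itself, which additionally works in internal coordinates):

```python
import numpy as np

def quasi_harmonic_entropy(cov, kT=1.0, hbar=1.0, kB=1.0):
    """Quasi-harmonic configurational entropy (reduced units) from a
    mass-weighted coordinate covariance matrix: each eigenvalue lam
    defines an effective mode frequency w = sqrt(kT / lam), scored with
    the quantum harmonic-oscillator entropy formula."""
    lam = np.linalg.eigvalsh(cov)
    lam = lam[lam > 1e-12]                       # drop null/rigid-body modes
    w = np.sqrt(kT / lam)
    a = hbar * w / kT
    # per-mode entropy: a/(e^a - 1) - ln(1 - e^-a)
    return kB * np.sum(a / np.expm1(a) - np.log1p(-np.exp(-a)))

# Example: wider fluctuations (larger covariance) imply higher entropy.
rng = np.random.default_rng(1)
x = rng.standard_normal((5000, 6))
s_narrow = quasi_harmonic_entropy(np.cov(0.5 * x, rowvar=False))
s_wide = quasi_harmonic_entropy(np.cov(2.0 * x, rowvar=False))
print(s_wide > s_narrow)
```

The choice of coordinates enters precisely here: the covariance matrix, and hence the eigenmodes, differ between Cartesian and internal-coordinate representations, which is what the paper's comparison probes.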
2009-01-01
University of California, Berkeley. In this session, Dennis Gannon of Indiana University described the use of high performance computing for storm...Software Development (Session Introduction) Dennis Gannon Indiana University Software for Mesoscale Storm Prediction: Using Supercomputers for On...Ho, D. Ierardi, I. Kolossvary, J. Klepeis, T. Layman, C. McLeavey , M. Moraes, R. Mueller, E. Priest, Y. Shan, J. Spengler, M. Theobald, B. Towles
TOP500 Supercomputers for June 2004
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack
2004-06-23
23rd Edition of TOP500 List of World's Fastest Supercomputers Released: Japan's Earth Simulator Enters Third Year in Top Position MANNHEIM, Germany; KNOXVILLE, Tenn.; and BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 23rd edition of the TOP500 list of the world's fastest supercomputers was released today (June 23, 2004) at the International Supercomputer Conference in Heidelberg, Germany.
Eddy Resolving Global Ocean Prediction including Tides
2013-09-30
Atlantic meridional overturning circulation in the subpolar North Atlantic. Journal of Geophysical Research, vol. 118, doi:10.1002/jgrc.20065. [published, refereed] ...global ocean circulation model was examined using results from years 2005-2009 of a seven and a half year 1/12.5° global simulation that resolves...internal tides, along with barotropic tides and the eddying general circulation. We examined tidal amplitudes computed using 18 183-day windows that
2012-12-04
CAPE CANAVERAL, Fla. – At the Kennedy Space Center Visitor Complex in Florida, sixth-grade students use a computer simulation to practice docking a spacecraft to the International Space Station. Between Nov. 26 and Dec. 7, 2012, about 5,300 sixth-graders in Brevard County, Florida, were bused to Kennedy's Visitor Complex for Brevard Space Week, an educational program designed to encourage interest in science, technology, engineering and mathematics (STEM) careers. Photo credit: NASA/Tim Jacobs
1986-10-01
developed by the AEH Group has the advantages of compactness, which makes it easily transportable; computer-controlled acquisition, signal processing...be available to a negatively charged aircraft. The experimental arrangement attempts to simulate the streamer propagation and growth in a quasi...separate foam configurations: the operational configuration of non-conductive foam and a second configuration which contained an experimental
NASA Technical Reports Server (NTRS)
Nguyen, Louis H.; Ramakrishnan, Jayant; Granda, Jose J.
2006-01-01
The assembly and operation of the International Space Station (ISS) require extensive testing and engineering analysis to verify that the Space Station system of systems would work together without any adverse interactions. Since the dynamic behavior of an entire Space Station cannot be tested on earth, math models of the Space Station structures and mechanical systems have to be built and integrated in computer simulations and analysis tools to analyze and predict what will happen in space. The ISS Centrifuge Rotor (CR) is one of many mechanical systems that need to be modeled and analyzed to verify the ISS integrated system performance on-orbit. This study investigates using Bond Graph modeling techniques as quick and simplified ways to generate models of the ISS Centrifuge Rotor. This paper outlines the steps used to generate simple and more complex models of the CR using Bond Graph Computer Aided Modeling Program with Graphical Input (CAMP-G). Comparisons of the Bond Graph CR models with those derived from Euler-Lagrange equations in MATLAB and those developed using multibody dynamic simulation at the National Aeronautics and Space Administration (NASA) Johnson Space Center (JSC) are presented to demonstrate the usefulness of the Bond Graph modeling approach for aeronautics and space applications.
Procedural wound geometry and blood flow generation for medical training simulators
NASA Astrophysics Data System (ADS)
Aras, Rifat; Shen, Yuzhong; Li, Jiang
2012-02-01
Efficient application of wound treatment procedures is vital in both emergency room and battle zone scenes. In order to train first responders for such situations, physical casualty simulation kits, which are composed of tens of individual items, are commonly used. Similar to any other training scenarios, computer simulations can be effective means for wound treatment training purposes. For immersive and high fidelity virtual reality applications, realistic 3D models are key components. However, creation of such models is a labor intensive process. In this paper, we propose a procedural wound geometry generation technique that parameterizes key simulation inputs to establish the variability of the training scenarios without the need of labor intensive remodeling of the 3D geometry. The procedural techniques described in this work are entirely handled by the graphics processing unit (GPU) to enable interactive real-time operation of the simulation and to relieve the CPU for other computational tasks. The visible human dataset is processed and used as a volumetric texture for the internal visualization of the wound geometry. To further enhance the fidelity of the simulation, we also employ a surface flow model for blood visualization. This model is realized as a dynamic texture that is composed of a height field and a normal map and animated at each simulation step on the GPU. The procedural wound geometry and the blood flow model are applied to a thigh model and the efficiency of the technique is demonstrated in a virtual surgery scene.
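The normal-map half of the dynamic blood texture described above can be sketched on the CPU (the paper computes it per frame on the GPU; the ripple height field here is an illustrative assumption): the normal at each texel follows from the height-field gradient as n ∝ (−∂h/∂x, −∂h/∂y, 1).

```python
import numpy as np

def normals_from_heightfield(h, scale=1.0):
    """Per-texel surface normals from a height field, as used for the
    normal-map component of a dynamic surface-flow texture."""
    dhdy, dhdx = np.gradient(h)                  # finite-difference slopes
    n = np.stack([-dhdx, -dhdy, np.full_like(h, 1.0 / scale)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

# A small ripple height field standing in for the animated blood surface.
y, x = np.mgrid[0:64, 0:64] / 64.0
h = 0.05 * np.sin(8 * np.pi * x)
n = normals_from_heightfield(h)
print(n.shape)   # (64, 64, 3)
```

In a shader the same arithmetic runs per fragment, with the height field advanced each simulation step before the normals are re-derived for lighting.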
3D Hydrodynamic Simulation of Classical Novae Explosions
NASA Astrophysics Data System (ADS)
Kendrick, Coleman J.
2015-01-01
This project investigates the formation and lifecycle of classical novae and determines how parameters such as white dwarf mass, companion star mass, and separation affect the evolution of the rotating binary system. These parameters affect the accretion rate, the frequency of the nova explosions, and the light curves. Each particle in the simulation represents a volume of hydrogen gas and is initialized randomly in the outer shell of the companion star. The forces on each particle include gravity, centrifugal, Coriolis, friction, and Langevin forces; the friction and Langevin forces model the viscosity and internal pressure of the gas. A velocity Verlet method with a one-second time step is used to compute the velocities and positions of the particles. A new particle-recycling method was developed, which was critical for computing an accurate and stable accretion rate and for keeping the particle count reasonable. I used C++ and OpenCL to create my simulations and ran them on two Nvidia GTX 580s. My simulations used up to 1 million particles and required up to 10 hours to complete. My simulation results for novae U Scorpii and DD Circinus are consistent with professional hydrodynamic simulations and observational data (light curves and outburst frequencies). When the white dwarf mass is increased, the time between explosions decreases dramatically. My model was used to make the first prediction for the next outburst of nova DD Circinus. My simulations also show that the companion star blocks the expanding gas shell, leading to an asymmetrical expanding shell.
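The velocity Verlet scheme named above can be sketched on a toy force (a unit harmonic oscillator rather than the full nova particle system, and without the friction and Langevin terms); the symplectic position/half-velocity update is what keeps the energy error bounded over long runs.

```python
import numpy as np

def velocity_verlet(acc, x0, v0, dt, steps):
    """Velocity Verlet integrator: x advances with the old acceleration,
    v with the average of old and new accelerations."""
    x, v = np.asarray(x0, float), np.asarray(v0, float)
    a = acc(x)
    traj = [x.copy()]
    for _ in range(steps):
        x = x + v * dt + 0.5 * a * dt**2
        a_new = acc(x)
        v = v + 0.5 * (a + a_new) * dt
        a = a_new
        traj.append(x.copy())
    return np.array(traj), v

# Toy check: a unit harmonic oscillator stays on its energy surface.
traj, v = velocity_verlet(lambda x: -x, x0=[1.0], v0=[0.0], dt=0.01, steps=10_000)
energy = 0.5 * v**2 + 0.5 * traj[-1]**2
print(abs(energy[0] - 0.5) < 1e-4)
```

With a one-second time step, as in the project, the same loop simply swaps in the gravitational, centrifugal, Coriolis, friction, and Langevin accelerations.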
Stochastic simulation of spatially correlated geo-processes
Christakos, G.
1987-01-01
In this study, developments in the theory of stochastic simulation are discussed. The unifying element is the notion of Radon projection in Euclidean spaces. This notion provides a natural way of reconstructing the real process from a corresponding process observable on a reduced dimensionality space, where analysis is theoretically easier and computationally tractable. Within this framework, the concept of space transformation is defined and several of its properties, which are of significant importance within the context of spatially correlated processes, are explored. The turning bands operator is shown to follow from this. This strengthens considerably the theoretical background of the geostatistical method of simulation, and some new results are obtained in both the space and frequency domains. The inverse problem is solved generally and the applicability of the method is extended to anisotropic as well as integrated processes. Some ill-posed problems of the inverse operator are discussed. Effects of the measurement error and impulses at origin are examined. Important features of the simulated process as described by geomechanical laws, the morphology of the deposit, etc., may be incorporated in the analysis. The simulation may become a model-dependent procedure and this, in turn, may provide numerical solutions to spatial-temporal geologic models. Because the spatial simulation may be technically reduced to unidimensional simulations, various techniques of generating one-dimensional realizations are reviewed. To link theory and practice, an example is computed in detail. © 1987 International Association for Mathematical Geology.
NASA Astrophysics Data System (ADS)
DeBeer, Chris M.; Pomeroy, John W.
2017-10-01
The spatial heterogeneity of mountain snow cover and ablation is important in controlling patterns of snow cover depletion (SCD), meltwater production, and runoff, yet is not well-represented in most large-scale hydrological models and land surface schemes. Analyses were conducted in this study to examine the influence of various representations of snow cover and melt energy heterogeneity on both simulated SCD and stream discharge from a small alpine basin in the Canadian Rocky Mountains. Simulations were performed using the Cold Regions Hydrological Model (CRHM), where point-scale snowmelt computations were made using a snowpack energy balance formulation and applied to spatial frequency distributions of snow water equivalent (SWE) on individual slope-, aspect-, and landcover-based hydrological response units (HRUs) in the basin. Hydrological routines were added to represent the vertical and lateral transfers of water through the basin and channel system. From previous studies it is understood that the heterogeneity of late winter SWE is a primary control on patterns of SCD. The analyses here showed that spatial variation in applied melt energy, mainly due to differences in net radiation, has an important influence on SCD at multiple scales and on basin discharge, and cannot be neglected without serious error in the prediction of these variables. A single basin SWE distribution using the basin-wide mean SWE (SWE‾) and coefficient of variation (CV; standard deviation/mean) was found to represent the fine-scale spatial heterogeneity of SWE sufficiently well. Simulations that accounted for differences in SWE‾ among HRUs but neglected the sub-HRU heterogeneity of SWE were found to yield similar discharge results as simulations that included this heterogeneity, while SCD was poorly represented, even at the basin level.
Finally, applying point-scale snowmelt computations based on a single SWE depth for each HRU (thereby neglecting spatial differences in internal snowpack energetics over the distributions) was found to yield similar SCD and discharge results as simulations that resolved internal energy differences. Spatial/internal snowpack melt energy effects are more pronounced at times earlier in spring before the main period of snowmelt and SCD, as shown in previously published work. The paper discusses the importance of these findings as they apply to the warranted complexity of snowmelt process simulation in cold mountain environments, and shows how the end-of-winter SWE distribution represents an effective means of resolving snow cover heterogeneity at multiple scales for modelling, even in steep and complex terrain.
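The role the SWE distribution plays in snow cover depletion can be sketched with one simple distributional assumption (a lognormal parameterized by the basin mean and CV; the paper's SWE frequency distributions need not be lognormal, and the numbers below are illustrative): the snow-covered area fraction after a cumulative melt M is simply P(SWE > M).

```python
from math import erf, log, sqrt

def snow_covered_fraction(melt, mean_swe, cv):
    """Snow-covered area fraction remaining after cumulative melt `melt`
    (same units as SWE), assuming pre-melt SWE is lognormal with the
    given basin mean and coefficient of variation."""
    sigma2 = log(1.0 + cv**2)                  # lognormal shape from CV
    mu = log(mean_swe) - 0.5 * sigma2          # matches the target mean
    z = (log(melt) - mu) / sqrt(sigma2)
    return 0.5 - 0.5 * erf(z / sqrt(2.0))      # 1 - lognormal CDF at melt

# Depletion curve: cover shrinks as melt proceeds (hypothetical basin,
# mean SWE = 300 mm, CV = 0.5).
sca = [snow_covered_fraction(m, mean_swe=300.0, cv=0.5) for m in (50, 150, 300, 600)]
print([round(s, 2) for s in sca])
```

This is why the basin-wide mean and CV alone can capture much of the fine-scale heterogeneity: under a fixed distributional form they fully determine the depletion curve.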
Physics Computing '92: Proceedings of the 4th International Conference
NASA Astrophysics Data System (ADS)
de Groot, Robert A.; Nadrchal, Jaroslav
1993-04-01
The Table of Contents for the book is as follows: * Preface * INVITED PAPERS * Ab Initio Theoretical Approaches to the Structural, Electronic and Vibrational Properties of Small Clusters and Fullerenes: The State of the Art * Neural Multigrid Methods for Gauge Theories and Other Disordered Systems * Multicanonical Monte Carlo Simulations * On the Use of the Symbolic Language Maple in Physics and Chemistry: Several Examples * Nonequilibrium Phase Transitions in Catalysis and Population Models * Computer Algebra, Symmetry Analysis and Integrability of Nonlinear Evolution Equations * The Path-Integral Quantum Simulation of Hydrogen in Metals * Digital Optical Computing: A New Approach of Systolic Arrays Based on Coherence Modulation of Light and Integrated Optics Technology * Molecular Dynamics Simulations of Granular Materials * Numerical Implementation of a K.A.M. Algorithm * Quasi-Monte Carlo, Quasi-Random Numbers and Quasi-Error Estimates * What Can We Learn from QMC Simulations * Physics of Fluctuating Membranes * Plato, Apollonius, and Klein: Playing with Spheres * Steady States in Nonequilibrium Lattice Systems * CONVODE: A REDUCE Package for Differential Equations * Chaos in Coupled Rotators * Symplectic Numerical Methods for Hamiltonian Problems * Computer Simulations of Surfactant Self Assembly * High-dimensional and Very Large Cellular Automata for Immunological Shape Space * A Review of the Lattice Boltzmann Method * Electronic Structure of Solids in the Self-interaction Corrected Local-spin-density Approximation * Dedicated Computers for Lattice Gauge Theory Simulations * Physics Education: A Survey of Problems and Possible Solutions * Parallel Computing and Electronic-Structure Theory * High Precision Simulation Techniques for Lattice Field Theory * CONTRIBUTED PAPERS * Case Study of Microscale Hydrodynamics Using Molecular Dynamics and Lattice Gas Methods * Computer Modelling of the Structural and Electronic Properties of the Supported Metal Catalysis * 
Ordered Particle Simulations for Serial and MIMD Parallel Computers * "NOLP" -- Program Package for Laser Plasma Nonlinear Optics * Algorithms to Solve Nonlinear Least Square Problems * Distribution of Hydrogen Atoms in Pd-H Computed by Molecular Dynamics * A Ray Tracing of Optical System for Protein Crystallography Beamline at Storage Ring-SIBERIA-2 * Vibrational Properties of a Pseudobinary Linear Chain with Correlated Substitutional Disorder * Application of the Software Package Mathematica in Generalized Master Equation Method * Linelist: An Interactive Program for Analysing Beam-foil Spectra * GROMACS: A Parallel Computer for Molecular Dynamics Simulations * GROMACS Method of Virial Calculation Using a Single Sum * The Interactive Program for the Solution of the Laplace Equation with the Elimination of Singularities for Boundary Functions * Random-Number Generators: Testing Procedures and Comparison of RNG Algorithms * Micro-TOPIC: A Tokamak Plasma Impurities Code * Rotational Molecular Scattering Calculations * Orthonormal Polynomial Method for Calibrating of Cryogenic Temperature Sensors * Frame-based System Representing Basis of Physics * The Role of Massively Data-parallel Computers in Large Scale Molecular Dynamics Simulations * Short-range Molecular Dynamics on a Network of Processors and Workstations * An Algorithm for Higher-order Perturbation Theory in Radiative Transfer Computations * Hydrostochastics: The Master Equation Formulation of Fluid Dynamics * HPP Lattice Gas on Transputers and Networked Workstations * Study on the Hysteresis Cycle Simulation Using Modeling with Different Functions on Intervals * Refined Pruning Techniques for Feed-forward Neural Networks * Random Walk Simulation of the Motion of Transient Charges in Photoconductors * The Optical Hysteresis in Hydrogenated Amorphous Silicon * Diffusion Monte Carlo Analysis of Modern Interatomic Potentials for He * A Parallel Strategy for Molecular Dynamics Simulations of Polar Liquids on 
Transputer Arrays * Distribution of Ions Reflected on Rough Surfaces * The Study of Step Density Distribution During Molecular Beam Epitaxy Growth: Monte Carlo Computer Simulation * Towards a Formal Approach to the Construction of Large-scale Scientific Applications Software * Correlated Random Walk and Discrete Modelling of Propagation through Inhomogeneous Media * Teaching Plasma Physics Simulation * A Theoretical Determination of the Au-Ni Phase Diagram * Boson and Fermion Kinetics in One-dimensional Lattices * Computational Physics Course on the Technical University * Symbolic Computations in Simulation Code Development and Femtosecond-pulse Laser-plasma Interaction Studies * Computer Algebra and Integrated Computing Systems in Education of Physical Sciences * Coordinated System of Programs for Undergraduate Physics Instruction * Program Package MIRIAM and Atomic Physics of Extreme Systems * High Energy Physics Simulation on the T_Node * The Chapman-Kolmogorov Equation as Representation of Huygens' Principle and the Monolithic Self-consistent Numerical Modelling of Lasers * Authoring System for Simulation Developments * Molecular Dynamics Study of Ion Charge Effects in the Structure of Ionic Crystals * A Computational Physics Introductory Course * Computer Calculation of Substrate Temperature Field in MBE System * Multimagnetical Simulation of the Ising Model in Two and Three Dimensions * Failure of the CTRW Treatment of the Quasicoherent Excitation Transfer * Implementation of a Parallel Conjugate Gradient Method for Simulation of Elastic Light Scattering * Algorithms for Study of Thin Film Growth * Algorithms and Programs for Physics Teaching in Romanian Technical Universities * Multicanonical Simulation of 1st order Transitions: Interface Tension of the 2D 7-State Potts Model * Two Numerical Methods for the Calculation of Periodic Orbits in Hamiltonian Systems * Chaotic Behavior in a Probabilistic Cellular Automata? 
* Wave Optics Computing by a Networked-based Vector Wave Automaton * Tensor Manipulation Package in REDUCE * Propagation of Electromagnetic Pulses in Stratified Media * The Simple Molecular Dynamics Model for the Study of Thermalization of the Hot Nucleon Gas * Electron Spin Polarization in PdCo Alloys Calculated by KKR-CPA-LSD Method * Simulation Studies of Microscopic Droplet Spreading * A Vectorizable Algorithm for the Multicolor Successive Overrelaxation Method * Tetragonality of the CuAu I Lattice and Its Relation to Electronic Specific Heat and Spin Susceptibility * Computer Simulation of the Formation of Metallic Aggregates Produced by Chemical Reactions in Aqueous Solution * Scaling in Growth Models with Diffusion: A Monte Carlo Study * The Nucleus as the Mesoscopic System * Neural Network Computation as Dynamic System Simulation * First-principles Theory of Surface Segregation in Binary Alloys * Data Smooth Approximation Algorithm for Estimating the Temperature Dependence of the Ice Nucleation Rate * Genetic Algorithms in Optical Design * Application of 2D-FFT in the Study of Molecular Exchange Processes by NMR * Advanced Mobility Model for Electron Transport in P-Si Inversion Layers * Computer Simulation for Film Surfaces and its Fractal Dimension * Parallel Computation Techniques and the Structure of Catalyst Surfaces * Educational SW to Teach Digital Electronics and the Corresponding Text Book * Primitive Trinomials (Mod 2) Whose Degree is a Mersenne Exponent * Stochastic Modelisation and Parallel Computing * Remarks on the Hybrid Monte Carlo Algorithm for the φ4 Model * An Experimental Computer Assisted Workbench for Physics Teaching * A Fully Implicit Code to Model Tokamak Plasma Edge Transport * EXPFIT: An Interactive Program for Automatic Beam-foil Decay Curve Analysis * Mapping Technique for Solving General, 1-D Hamiltonian Systems * Freeway Traffic, Cellular Automata, and Some (Self-Organizing) Criticality * Photonuclear Yield Analysis by Dynamic 
Programming * Incremental Representation of the Simply Connected Planar Curves * Self-convergence in Monte Carlo Methods * Adaptive Mesh Technique for Shock Wave Propagation * Simulation of Supersonic Coronal Streams and Their Interaction with the Solar Wind * The Nature of Chaos in Two Systems of Ordinary Nonlinear Differential Equations * Considerations of a Window-shopper * Interpretation of Data Obtained by RTP 4-Channel Pulsed Radar Reflectometer Using a Multi Layer Perceptron * Statistics of Lattice Bosons for Finite Systems * Fractal Based Image Compression with Affine Transformations * Algorithmic Studies on Simulation Codes for Heavy-ion Reactions * An Energy-Wise Computer Simulation of DNA-Ion-Water Interactions Explains the Abnormal Structure of Poly[d(A)]:Poly[d(T)] * Computer Simulation Study of Kosterlitz-Thouless-Like Transitions * Problem-oriented Software Package GUN-EBT for Computer Simulation of Beam Formation and Transport in Technological Electron-Optical Systems * Parallelization of a Boundary Value Solver and its Application in Nonlinear Dynamics * The Symbolic Classification of Real Four-dimensional Lie Algebras * Short, Singular Pulses Generation by a Dye Laser at Two Wavelengths Simultaneously * Quantum Monte Carlo Simulations of the Apex-Oxygen-Model * Approximation Procedures for the Axial Symmetric Static Einstein-Maxwell-Higgs Theory * Crystallization on a Sphere: Parallel Simulation on a Transputer Network * FAMULUS: A Software Product (also) for Physics Education * MathCAD vs. 
FAMULUS -- A Brief Comparison * First-principles Dynamics Used to Study Dissociative Chemisorption * A Computer Controlled System for Crystal Growth from Melt * A Time Resolved Spectroscopic Method for Short Pulsed Particle Emission * Green's Function Computation in Radiative Transfer Theory * Random Search Optimization Technique for One-criteria and Multi-criteria Problems * Hartley Transform Applications to Thermal Drift Elimination in Scanning Tunneling Microscopy * Algorithms of Measuring, Processing and Interpretation of Experimental Data Obtained with Scanning Tunneling Microscope * Time-dependent Atom-surface Interactions * Local and Global Minima on Molecular Potential Energy Surfaces: An Example of N3 Radical * Computation of Bifurcation Surfaces * Symbolic Computations in Quantum Mechanics: Energies in Next-to-solvable Systems * A Tool for RTP Reactor and Lamp Field Design * Modelling of Particle Spectra for the Analysis of Solid State Surface * List of Participants
Wei, Xuelei; Dong, Fuhui
2011-12-01
To review recent advances in the research and application of computer-aided forming techniques for constructing bone tissue engineering scaffolds. The literature concerning computer-aided forming techniques for constructing bone tissue engineering scaffolds in recent years was reviewed extensively and summarized. Several studies over the last decade have focused on computer-aided forming techniques for bone scaffold construction using various scaffold materials, based on computer-aided design (CAD) and bone scaffold rapid prototyping (RP). CAD includes medical CAD, STL, and reverse design; reverse design can fully simulate normal bone tissue and could be very useful for CAD. RP techniques include fused deposition modeling, three-dimensional printing, selective laser sintering, three-dimensional bioplotting, and low-temperature deposition manufacturing. These techniques provide a new way to construct bone tissue engineering scaffolds with complex internal structures. With the rapid development of molding and forming techniques, computer-aided forming techniques are expected to provide ideal bone tissue engineering scaffolds.
Performance assessment of KORAT-3D on the ANL IBM-SP computer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alexeyev, A.V.; Zvenigorodskaya, O.A.; Shagaliev, R.M.
1999-09-01
The TENAR code is currently being developed at the Russian Federal Nuclear Center (VNIIEF) as a coupled dynamics code for the simulation of transients in VVER and RBMK systems and other nuclear systems. The neutronic module in this code system is KORAT-3D. This module is also one of the most computationally intensive components of the code system. A parallel version of KORAT-3D has been implemented to achieve the goal of obtaining transient solutions in reasonable computational time, particularly for RBMK calculations that involve the application of >100,000 nodes. An evaluation of the KORAT-3D code performance was recently undertaken on the Argonne National Laboratory (ANL) IBM ScalablePower (SP) parallel computer located in the Mathematics and Computer Science Division of ANL. At the time of the study, the ANL IBM-SP computer had 80 processors. This study was conducted under the auspices of a technical staff exchange program sponsored by the International Nuclear Safety Center (INSC).
Alternative modeling methods for plasma-based Rf ion sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Veitzer, Seth A., E-mail: veitzer@txcorp.com; Kundrapu, Madhusudhan, E-mail: madhusnk@txcorp.com; Stoltz, Peter H., E-mail: phstoltz@txcorp.com
Rf-driven ion sources for accelerators and many industrial applications benefit from detailed numerical modeling and simulation of plasma characteristics. For instance, modeling of the Spallation Neutron Source (SNS) internal antenna H{sup −} source has indicated that a large plasma velocity is induced near bends in the antenna where structural failures are often observed. This could lead to improved designs and ion source performance based on simulation and modeling. However, there are significant separations of time and spatial scales inherent to Rf-driven plasma ion sources, which makes it difficult to model ion sources with explicit, kinetic Particle-In-Cell (PIC) simulation codes. In particular, if both electron and ion motions are to be explicitly modeled, then the simulation time step must be very small, and total simulation times must be large enough to capture the evolution of the plasma ions, as well as extending over many Rf periods. Additional physics processes such as plasma chemistry and surface effects such as secondary electron emission increase the computational requirements in such a way that even fully parallel explicit PIC models cannot be used. One alternative method is to develop fluid-based codes coupled with electromagnetics in order to model ion sources. Time-domain fluid models can simulate plasma evolution, plasma chemistry, and surface physics models with reasonable computational resources by not explicitly resolving electron motions, which thereby leads to an increase in the time step. This is achieved by solving fluid motions coupled with electromagnetics using reduced-physics models, such as single-temperature magnetohydrodynamics (MHD), extended, gas dynamic, and Hall MHD, and two-fluid MHD models. We show recent results on modeling the internal antenna H{sup −} ion source for the SNS at Oak Ridge National Laboratory using the fluid plasma modeling code USim.
We demonstrate plasma temperature equilibration in two-temperature MHD models for the SNS source and present simulation results showing plasma evolution over many Rf periods for different plasma temperatures. We perform the calculations in parallel, on unstructured meshes, using finite-volume solvers in order to obtain results in reasonable time.
Alternative modeling methods for plasma-based Rf ion sources.
Veitzer, Seth A; Kundrapu, Madhusudhan; Stoltz, Peter H; Beckwith, Kristian R C
2016-02-01
Rf-driven ion sources for accelerators and many industrial applications benefit from detailed numerical modeling and simulation of plasma characteristics. For instance, modeling of the Spallation Neutron Source (SNS) internal antenna H(-) source has indicated that a large plasma velocity is induced near bends in the antenna where structural failures are often observed. This could lead to improved designs and ion source performance based on simulation and modeling. However, there are significant separations of time and spatial scales inherent to Rf-driven plasma ion sources, which makes it difficult to model ion sources with explicit, kinetic Particle-In-Cell (PIC) simulation codes. In particular, if both electron and ion motions are to be explicitly modeled, then the simulation time step must be very small, and total simulation times must be large enough to capture the evolution of the plasma ions, as well as extending over many Rf periods. Additional physics processes such as plasma chemistry and surface effects such as secondary electron emission increase the computational requirements in such a way that even fully parallel explicit PIC models cannot be used. One alternative method is to develop fluid-based codes coupled with electromagnetics in order to model ion sources. Time-domain fluid models can simulate plasma evolution, plasma chemistry, and surface physics models with reasonable computational resources by not explicitly resolving electron motions, which thereby leads to an increase in the time step. This is achieved by solving fluid motions coupled with electromagnetics using reduced-physics models, such as single-temperature magnetohydrodynamics (MHD), extended, gas dynamic, and Hall MHD, and two-fluid MHD models. We show recent results on modeling the internal antenna H(-) ion source for the SNS at Oak Ridge National Laboratory using the fluid plasma modeling code USim. 
We demonstrate plasma temperature equilibration in two-temperature MHD models for the SNS source and present simulation results showing plasma evolution over many Rf periods for different plasma temperatures. We perform the calculations in parallel, on unstructured meshes, using finite-volume solvers in order to obtain results in reasonable time.
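The two-temperature equilibration described in this abstract can be sketched as a simple relaxation integration. The relaxation time and the assumption of equal heat capacities below are illustrative only, not values from the USim model:

```python
# Hedged sketch of electron-ion temperature equilibration: both species
# relax toward a common temperature at a rate set by an assumed constant
# relaxation time tau_eq (in a real plasma it varies with density and Te).
def equilibrate(Te, Ti, tau_eq, dt, steps):
    """Explicitly integrate dTe/dt = -(Te - Ti)/tau_eq and its mirror."""
    for _ in range(steps):
        dT = dt * (Te - Ti) / tau_eq
        Te -= dT   # electrons cool
        Ti += dT   # ions heat (equal heat capacities assumed)
    return Te, Ti

# Start far from equilibrium; the mean temperature (5.5) is conserved
# by construction because the exchange terms are equal and opposite.
Te, Ti = equilibrate(10.0, 1.0, tau_eq=1.0, dt=0.01, steps=2000)
```

After many relaxation times both temperatures converge to the conserved mean, which is the behavior a two-temperature MHD model adds on top of single-temperature MHD.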
Lee, Soo Hoon; Kim, Dong Hoon; Kang, Tae-Sin; Kang, Changwoo; Jeong, Jin Hee; Kim, Seong Chun; Kim, Dong Seob
2015-08-01
This study was conducted to evaluate the appropriateness of the chest compression (CC) depth recommended in the current guidelines using simulated external CCs, and to characterize the optimal CC depth for adults by body mass index (BMI). Adult patients who underwent chest computed tomography as a screening test for latent pulmonary diseases in the health care center were enrolled in this study. We calculated the internal anteroposterior (AP) diameter (IAPD) and external AP diameter (EAPD) of the chest across BMI groups (<18.50, 18.50-24.99, 25.00-29.99, and ≥30.00 kg/m(2)) for simulated CC depths. We also calculated the proportion of patients with a residual internal chest depth of less than 20 mm for each simulated CC depth. There was a statistically significant difference in the chest EAPD and IAPD measured at the lower half of the sternum among the BMI groups (EAPD: R(2) = 0.638, P < .001; IAPD: R(2) = 0.297, P < .001). For one-half external AP CC, 100% of the patients, regardless of BMI, had a calculated residual internal chest depth of less than 20 mm. For one-fourth external AP CC, no patients had a calculated residual internal chest depth of less than 20 mm. For one-third external AP CC, only 6.48% of the patients had a calculated residual internal chest depth of less than 20 mm. The current CC depth (≥50 mm), expressed only as an absolute measurement rather than as a fraction of the chest depth, is not appropriate for uniform application to all adults. In addition, in terms of safety and efficacy, a simulated CC depth between approximately one-fourth and one-third of the EAPD might be appropriate. Copyright © 2015 Elsevier Inc. All rights reserved.
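The residual-depth arithmetic behind the study's comparison reduces to one line. The EAPD and IAPD values below are assumed for illustration and are not patient data from the paper:

```python
# Residual internal chest depth left after compressing by a fraction of
# the external AP diameter; a residual below 20 mm is treated as unsafe.
def residual_depth_mm(eapd_mm, iapd_mm, fraction):
    return iapd_mm - fraction * eapd_mm

EAPD, IAPD = 230.0, 130.0   # assumed adult chest diameters (mm)
fractions = {"1/2": 0.5, "1/3": 1.0 / 3.0, "1/4": 0.25}
residuals = {k: residual_depth_mm(EAPD, IAPD, f) for k, f in fractions.items()}
unsafe = [k for k, r in residuals.items() if r < 20.0]
```

For these assumed diameters only the one-half EAPD compression leaves less than 20 mm of internal depth, mirroring the ordering reported in the abstract.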
Fast Boundary Element Method for acoustics with the Sparse Cardinal Sine Decomposition
NASA Astrophysics Data System (ADS)
Alouges, François; Aussal, Matthieu; Parolin, Emile
2017-07-01
This paper presents the newly proposed Sparse Cardinal Sine Decomposition method, which allows fast convolution on unstructured grids. We focus on its use when coupled with finite element techniques to solve acoustic problems with the (compressed) Boundary Element Method. In addition, we compare the computational performance of two equivalent Matlab® and Python implementations of the method. We show validation test cases in order to assess the precision of the approach. Finally, the performance of the method is illustrated by computing the acoustic target strength of a realistic submarine from the Benchmark Target Strength Simulation international workshop.
International Instrumentation Symposium, 38th, Las Vegas, NV, Apr. 26-30, 1992, Proceedings
NASA Astrophysics Data System (ADS)
The present volume on aerospace instrumentation discusses computer applications, blast and shock, implementation of the Clean Air Act amendments, and thermal systems. Attention is given to measurement uncertainty/flow measurement, data acquisition and processing, force/acceleration/motion measurements, and hypersonics/reentry vehicle systems. Topics addressed include wind tunnels, real-time systems, and pressure effects. Also discussed are a distributed data and control system for space simulation and thermal testing, a stepwise shockwave velocity determinator, computer tracking and decision making, the use of silicon diodes for detecting the liquid-vapor interface in hydrogen, and practical methods for analysis of uncertainty propagation.
A Collection of Technical Papers
NASA Technical Reports Server (NTRS)
1995-01-01
Papers presented at the 6th Space Logistics Symposium covered such areas as: The International Space Station; The Hubble Space Telescope; Launch site computer simulation; Integrated logistics support; The Baikonur Cosmodrome; Probabilistic tools for high confidence repair; A simple space station rescue vehicle; Integrated Traffic Model for the International Space Station; Packaging the maintenance shop; Leading edge software support; Storage information management system; Consolidated maintenance inventory logistics planning; Operation concepts for a single stage to orbit vehicle; Mission architecture for human lunar exploration; Logistics of a lunar based solar power satellite scenario; Just in time in space; NASA acquisitions/logistics; Effective transition management; Shuttle logistics; and Revitalized space operations through total quality control management.
Nuclear and Particle Physics Simulations: The Consortium of Upper-Level Physics Software
NASA Astrophysics Data System (ADS)
Bigelow, Roberta; Moloney, Michael J.; Philpott, John; Rothberg, Joseph
1995-06-01
The Consortium for Upper Level Physics Software (CUPS) has developed a comprehensive series of nine book/software packages that Wiley will publish in FY '95 and '96. CUPS is an international group of 27 physicists, all with extensive backgrounds in research, teaching, and the development of instructional software. The project is supported by the National Science Foundation (PHY-9014548), and it has received additional support from the IBM Corp., Apple Computer Corp., and George Mason University. The simulations being developed are: Astrophysics, Classical Mechanics, Electricity & Magnetism, Modern Physics, Nuclear and Particle Physics, Quantum Mechanics, Solid State, Thermal and Statistical Physics, and Waves and Optics.
A digital computer simulation and study of a direct-energy-transfer power-conditioning system
NASA Technical Reports Server (NTRS)
Burns, W. W., III; Owen, H. A., Jr.; Wilson, T. G.; Rodriguez, G. E.; Paulkovich, J.
1975-01-01
An investigation of the behavior of the power-conditioning system as a whole is a necessity to ensure the integrity of the aggregate system in the case of space applications. An approach for conducting such an investigation is considered. A description is given of the application of a general digital analog simulator program to the study of an aggregate power-conditioning system which is being developed for use on the International Ultraviolet Explorer spacecraft. The function of the direct energy transfer system studied involves a coupling of a solar array through a main distribution bus to the spacecraft electrical loads.
Kwon, Ohin; Woo, Eung Je; Yoon, Jeong-Rock; Seo, Jin Keun
2002-02-01
We developed a new image reconstruction algorithm for magnetic resonance electrical impedance tomography (MREIT). MREIT is a new EIT imaging technique integrated into a magnetic resonance imaging (MRI) system. Based on the assumption that the internal current density distribution is obtained using the MRI technique, the new image reconstruction algorithm, called the J-substitution algorithm, produces cross-sectional static images of resistivity (or conductivity) distributions. Computer simulations show that the spatial resolution of the resistivity image is comparable to that of MRI. MREIT provides accurate high-resolution cross-sectional resistivity images, making resistivity values of various human tissues available for many biomedical applications.
Fluid Dynamics Lagrangian Simulation Model
NASA Astrophysics Data System (ADS)
Hyman, Ellis
1994-02-01
The work performed by Science Applications International Corporation (SAIC) on this contract, Fluid Dynamics Lagrangian Simulation Model, Contract Number N00014-89-C-2106, SAIC Project Number 01-0157-03-0768, focused on a number of research topics in fluid dynamics. The work was in support of the programs of NRL's Laboratory for Computational Physics and Fluid Dynamics and covered the period from 10 September 1989 to 9 December 1993. In the following sections, we describe each of the efforts and the results obtained. Much of the research work has resulted in journal publications. These are included in Appendices of this report for which the reader is referred for complete details.
Terascale direct numerical simulations of turbulent combustion using S3D
NASA Astrophysics Data System (ADS)
Chen, J. H.; Choudhary, A.; de Supinski, B.; DeVries, M.; Hawkes, E. R.; Klasky, S.; Liao, W. K.; Ma, K. L.; Mellor-Crummey, J.; Podhorszki, N.; Sankaran, R.; Shende, S.; Yoo, C. S.
2009-01-01
Computational science is paramount to the understanding of underlying processes in internal combustion engines of the future that will utilize non-petroleum-based alternative fuels, including carbon-neutral biofuels, and burn in new combustion regimes that will attain high efficiency while minimizing emissions of particulates and nitrogen oxides. Next-generation engines will likely operate at higher pressures, with greater amounts of dilution and utilize alternative fuels that exhibit a wide range of chemical and physical properties. Therefore, there is a significant role for high-fidelity simulations, direct numerical simulations (DNS), specifically designed to capture key turbulence-chemistry interactions in these relatively uncharted combustion regimes, and in particular, that can discriminate the effects of differences in fuel properties. In DNS, all of the relevant turbulence and flame scales are resolved numerically using high-order accurate numerical algorithms. As a consequence terascale DNS are computationally intensive, require massive amounts of computing power and generate tens of terabytes of data. Recent results from terascale DNS of turbulent flames are presented here, illustrating its role in elucidating flame stabilization mechanisms in a lifted turbulent hydrogen/air jet flame in a hot air coflow, and the flame structure of a fuel-lean turbulent premixed jet flame. Computing at this scale requires close collaborations between computer and combustion scientists to provide optimized scaleable algorithms and software for terascale simulations, efficient collective parallel I/O, tools for volume visualization of multiscale, multivariate data and automating the combustion workflow. The enabling computer science, applied to combustion science, is also required in many other terascale physics and engineering simulations. 
In particular, performance monitoring is used to identify the performance of key kernels in the DNS code S3D, especially memory-intensive loops. Through the careful application of loop transformations, data reuse in cache is exploited, thereby reducing memory bandwidth needs and hence improving S3D's nodal performance. To enhance collective parallel I/O in S3D, an MPI-I/O caching design is used to construct a two-stage write-behind method for improving the performance of write-only operations. The simulations generate tens of terabytes of data requiring analysis. Interactive exploration of the simulation data is enabled by multivariate time-varying volume visualization. The visualization highlights spatial and temporal correlations between multiple reactive scalar fields using an intuitive user interface based on parallel coordinates and time histograms. Finally, an automated combustion workflow is designed using Kepler to manage large-scale data movement, data morphing, and archival, and to provide a graphical display of run-time diagnostics.
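The loop-tiling transformation mentioned for S3D's cache reuse can be illustrated schematically. This is a generic Python sketch of the transformation's loop structure, not S3D's Fortran kernels, and Python itself gains no speed from it; the payoff comes in compiled code where tiles fit in cache:

```python
# Blocked (tiled) matrix transpose: the two outer loops walk cache-sized
# tiles, the two inner loops stay within a tile, so each tile of the
# source and destination is touched while it is still cache-resident.
def transpose_blocked(A, n, tile=8):
    out = [[0] * n for _ in range(n)]
    for ii in range(0, n, tile):                      # tile loops
        for jj in range(0, n, tile):
            for i in range(ii, min(ii + tile, n)):    # intra-tile loops
                for j in range(jj, min(jj + tile, n)):
                    out[j][i] = A[i][j]
    return out

A = [[10 * i + j for j in range(10)] for i in range(10)]
T = transpose_blocked(A, 10)
```

The transformation changes only the iteration order, never the result, which is why it can be applied safely to hot loops identified by performance monitoring.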
Wall shear stress in intracranial aneurysms and adjacent arteries☆
Wang, Fuyu; Xu, Bainan; Sun, Zhenghui; Wu, Chen; Zhang, Xiaojun
2013-01-01
Hemodynamic parameters play an important role in aneurysm formation and growth. However, it is difficult to directly observe a rapidly growing de novo aneurysm in a patient. To investigate possible associations between hemodynamic parameters and the formation and growth of intracranial aneurysms, the present study constructed a computational model of a case with an internal carotid artery aneurysm and an anterior communicating artery aneurysm, based on the CT angiography findings of a patient. To simulate the formation of the anterior communicating artery aneurysm and the growth of the internal carotid artery aneurysm, we then constructed a model that virtually removed the anterior communicating artery aneurysm, and a further two models that also progressively decreased the size of the internal carotid artery aneurysm. Computational simulations of the fluid dynamics of the four models were performed under pulsatile flow conditions, and wall shear stress was compared among the different models. In the three aneurysm growth models, increasing size of the aneurysm was associated with an increased area of low wall shear stress, a significant decrease in wall shear stress at the dome of the aneurysm, and a significant change in the wall shear stress of the parent artery. The wall shear stress of the anterior communicating artery remained low, and was significantly lower than the wall shear stress at the bifurcation of the internal carotid artery or the bifurcation of the middle cerebral artery. After formation of the anterior communicating artery aneurysm, the wall shear stress at the dome of the internal carotid artery aneurysm increased significantly, and the wall shear stress in the upstream arteries also changed significantly. These findings indicate that low wall shear stress may be associated with the initiation and growth of aneurysms, and that aneurysm formation and growth may influence hemodynamic parameters in the local and adjacent arteries. PMID:25206394
Torner, Benjamin; Konnigk, Lucas; Hallier, Sebastian; Kumar, Jitendra; Witte, Matthias; Wurm, Frank-Hendrik
2018-06-01
Numerical flow analysis (computational fluid dynamics) in combination with the prediction of blood damage is an important procedure for investigating the hemocompatibility of a blood pump, since blood trauma due to shear stresses remains a problem in these devices. Today, numerical damage prediction is conducted using unsteady Reynolds-averaged Navier-Stokes simulations; investigations with large eddy simulations are rarely performed for blood pumps. Hence, the aim of the study is to examine the viscous shear stresses of a large eddy simulation in a blood pump and compare the results with an unsteady Reynolds-averaged Navier-Stokes simulation. The simulations were carried out at two operating points of a blood pump. The flow was simulated on a 100M-element mesh for the large eddy simulation and a 20M-element mesh for the unsteady Reynolds-averaged Navier-Stokes simulation. As a first step, the large eddy simulation was verified by analyzing internal dissipative losses within the pump. Then, the pump characteristics and the mean and turbulent viscous shear stresses were compared between the two simulation methods. The verification showed that the large eddy simulation is able to reproduce the significant portion of dissipative losses, which is a global indication that the equivalent viscous shear stresses are adequately resolved. The comparison with the unsteady Reynolds-averaged Navier-Stokes simulation revealed that the hydraulic parameters were in agreement, but differences in the shear stresses were found. The results show the potential of the large eddy simulation as a high-quality comparative case to check the suitability of a chosen Reynolds-averaged Navier-Stokes setup and turbulence model. Furthermore, the results suggest that large eddy simulations are superior to unsteady Reynolds-averaged Navier-Stokes simulations when instantaneous stresses are applied for blood damage prediction.
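The viscous shear stresses compared in this study follow from the velocity-gradient field. A minimal numpy sketch for a simple shear flow, with an assumed blood-like viscosity and shear rate rather than the pump's actual operating conditions, is:

```python
import numpy as np

# tau = mu * du/dy for simple shear u_x = gamma * y; for this flow the
# exact stress is mu * gamma everywhere, so the discrete gradient can be
# checked against a known answer.  All values are assumed.
mu = 3.5e-3                      # dynamic viscosity of blood, Pa s (assumed)
gamma = 100.0                    # shear rate, 1/s (assumed)
y = np.linspace(0.0, 1e-3, 50)   # wall-normal coordinate, m
u = gamma * y                    # velocity profile u_x(y)
tau = mu * np.gradient(u, y)     # viscous shear stress profile, Pa
```

In a real CFD post-processor the same idea is applied to the full strain-rate tensor on the simulation mesh; the 1D profile here only shows the core operation.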
Computer-Aided System Engineering and Analysis (CASE/A) Programmer's Manual, Version 5.0
NASA Technical Reports Server (NTRS)
Knox, J. C.
1996-01-01
The Computer Aided System Engineering and Analysis (CASE/A) Version 5.0 Programmer's Manual provides the programmer and user with information regarding the internal structure of the CASE/A 5.0 software system. CASE/A 5.0 is a trade study tool that provides modeling/simulation capabilities for analyzing environmental control and life support systems and active thermal control systems. CASE/A has been successfully used in studies such as the evaluation of carbon dioxide removal in the space station. CASE/A modeling provides a graphical and command-driven interface for the user. This interface allows the user to construct a model by placing equipment components in a graphical layout of the system hardware, then connect the components via flow streams and define their operating parameters. Once the equipment is placed, the simulation time and other control parameters can be set to run the simulation based on the model constructed. After completion of the simulation, graphical plots or text files can be obtained for evaluation of the simulation results over time. Additionally, users have the capability to control the simulation and extract information at various times in the simulation (e.g., control equipment operating parameters over the simulation time or extract plot data) by using "User Operations (OPS) Code." This OPS code is written in FORTRAN with a canned set of utility subroutines for performing common tasks. CASE/A version 5.0 software runs under the VAX VMS(Trademark) environment. It utilizes the Tektronix 4014(Trademark) graphics display system and the VT100(Trademark) text manipulation/display system.
The Flostation - an Immersive Cyberspace System
NASA Technical Reports Server (NTRS)
Park, Brian
2006-01-01
A flostation is a computer-controlled apparatus that, along with one or more computers and other computer-controlled equipment, is part of an immersive cyberspace system. The system is immersive in two senses of the word: (1) it supports the body in a modified form of the neutral posture experienced in zero gravity, and (2) it is equipped with computer-controlled display equipment that helps to give the occupant of the chair a feeling of immersion in an environment that the system is designed to simulate. Neutral immersion was conceived during the Gemini program as a means of training astronauts for working in a zero-gravity environment. Current derivatives include neutral-buoyancy tanks and the KC-135 airplane, each of which mimics the effects of zero gravity. While these have performed well in simulating the shorter-duration flights typical of the space program to date, a training device that can take astronauts to the next level will be needed for simulating longer-duration flights such as those aboard the International Space Station. The flostation is expected to satisfy this need. The flostation could also be adapted and replicated for use in commercial ventures ranging from home entertainment to medical treatment. The use of neutral immersion in the flostation enables the occupant to recline in an optimal posture of rest and meditation. This posture combines savasana (known to practitioners of yoga) with a modified form of the neutral posture assumed by astronauts in outer space. As the occupant relaxes, awareness of the physical body is reduced. The neutral body posture, which can be maintained for hours without discomfort, is extended to the eyes, ears, and hands. The occupant can be surrounded with a full-field-of-view visual display and nearphone sound, and can be stimulated with full-body vibration and motion cueing.
Once fully immersed, the occupant can use neutral hand controllers (that is, hand-posture sensors) to control various aspects of the simulated environment.
Gaussian polarizable-ion tight binding.
Boleininger, Max; Guilbert, Anne Ay; Horsfield, Andrew P
2016-10-14
To interpret ultrafast dynamics experiments on large molecules, computer simulation is required due to the complex response to the laser field. We present a method capable of efficiently computing the static electronic response of large systems to external electric fields. This is achieved by extending the density-functional tight binding method to include larger basis sets and by multipole expansion of the charge density into electrostatically interacting Gaussian distributions. Polarizabilities for a range of hydrocarbon molecules are computed for a multipole expansion up to quadrupole order, giving excellent agreement with experimental values, with average errors similar to those from density functional theory, but at a small fraction of the cost. We apply the model in conjunction with the polarizable-point-dipoles model to estimate the internal fields in amorphous poly(3-hexylthiophene-2,5-diyl).
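Static polarizabilities like those validated in this work are commonly checked by finite differences of the induced dipole in a small applied field. The toy "molecule" below is a single harmonic bound charge, a hypothetical stand-in (not the paper's tight-binding model) for which alpha = q^2/k exactly:

```python
# Central-difference polarizability alpha = d(mu)/dE for a toy model:
# a charge q bound by a spring of stiffness k displaces x = q*E/k in a
# field E, so the induced dipole is mu = q*x and alpha = q*q/k (= 0.75).
q, k = 1.5, 3.0

def induced_dipole(E):
    x = q * E / k              # equilibrium displacement in field E
    return q * x

dE = 1e-3                      # small probe field for the finite difference
alpha = (induced_dipole(dE) - induced_dipole(-dE)) / (2.0 * dE)
```

For an electronic-structure code the same central-difference recipe is applied with the dipole computed self-consistently at each field value.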
Gaussian polarizable-ion tight binding
NASA Astrophysics Data System (ADS)
Boleininger, Max; Guilbert, Anne AY; Horsfield, Andrew P.
2016-10-01
To interpret ultrafast dynamics experiments on large molecules, computer simulation is required due to the complex response to the laser field. We present a method capable of efficiently computing the static electronic response of large systems to external electric fields. This is achieved by extending the density-functional tight binding method to include larger basis sets and by multipole expansion of the charge density into electrostatically interacting Gaussian distributions. Polarizabilities for a range of hydrocarbon molecules are computed for a multipole expansion up to quadrupole order, giving excellent agreement with experimental values, with average errors similar to those from density functional theory, but at a small fraction of the cost. We apply the model in conjunction with the polarizable-point-dipoles model to estimate the internal fields in amorphous poly(3-hexylthiophene-2,5-diyl).
Bayesian experimental design for models with intractable likelihoods.
Drovandi, Christopher C; Pettitt, Anthony N
2013-12-01
In this paper we present a methodology for designing experiments for efficiently estimating the parameters of models with computationally intractable likelihoods. The approach combines a commonly used methodology for robust experimental design, based on Markov chain Monte Carlo sampling, with approximate Bayesian computation (ABC) to ensure that no likelihood evaluations are required. The utility function considered for precise parameter estimation is based upon the precision of the ABC posterior distribution, which we form efficiently via the ABC rejection algorithm based on pre-computed model simulations. Our focus is on stochastic models and, in particular, we investigate the methodology for Markov process models of epidemics and macroparasite population evolution. The macroparasite example involves a multivariate process and we assess the loss of information from not observing all variables. © 2013, The International Biometric Society.
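The ABC rejection step that the paper builds its utility evaluation on can be sketched compactly. The simulator, observed statistic, and tolerance below are toy assumptions, but the structure (a bank of pre-computed model simulations, filtered by distance to the data) follows the abstract:

```python
import random

# ABC rejection sketch: draw parameters from the prior, pre-compute one
# simulation per draw, then form the approximate posterior by keeping
# draws whose simulated summary statistic falls within eps of the data.
random.seed(0)

def simulate(theta, n=20):
    """Toy stochastic model: number of successes in n Bernoulli trials."""
    return sum(random.random() < theta for _ in range(n))

prior_draws = [random.uniform(0.0, 1.0) for _ in range(20000)]
bank = [(t, simulate(t)) for t in prior_draws]   # pre-computed simulations

observed, eps = 12, 1                            # data and ABC tolerance
posterior = [t for t, s in bank if abs(s - observed) <= eps]
post_mean = sum(posterior) / len(posterior)      # near 12/20 = 0.6
```

Because the bank is computed once, the same simulations can be reused to score many candidate designs, which is what makes the design-utility search in the paper affordable.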
Casha, Aaron R; Camilleri, Liberato; Manché, Alexander; Gatt, Ruben; Attard, Daphne; Gauci, Marilyn; Camilleri-Podesta, Marie-Therese; Mcdonald, Stuart; Grima, Joseph N
2015-11-01
The human rib cage resembles a masonry dome in shape. Masonry domes have a particular construction that mimics stress distribution. Rib cortical thickness and bone density were analyzed to determine whether the morphology of the rib cage is sufficiently similar to a shell dome for internal rib structure to be predicted mathematically. A finite element analysis (FEA) simulation was used to measure stresses on the internal and external surfaces of a chest-shaped dome. Inner and outer rib cortical thickness and bone density were measured in the mid-axillary lines of seven cadaveric rib cages using computerized tomography scanning. Paired t tests and Pearson correlation were used to relate cortical thickness and bone density to stress. FEA modeling showed that the stress was 82% higher on the internal than the external surface, with a gradual decrease in internal and external wall stresses from the base to the apex. The inner cortex was more radio-dense, P < 0.001, and thicker, P < 0.001, than the outer cortex. Inner cortical thickness was related to internal stress, r = 0.94, P < 0.001, inner cortical bone density to internal stress, r = 0.87, P = 0.003, and outer cortical thickness to external stress, r = 0.65, P = 0.035. Mathematical models were developed relating internal and external cortical thicknesses and bone densities to rib level. The internal anatomical features of ribs, including the inner and outer cortical thicknesses and bone densities, are similar to the stress distribution in dome-shaped structures modeled using FEA computer simulations of a thick-walled dome pressure vessel. Fixation of rib fractures should include the stronger internal cortex. © 2015 Wiley Periodicals, Inc.
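The inner-versus-outer stress asymmetry reported above is characteristic of thick-walled pressure vessels. A closed-form Lamé sketch for a sphere under internal pressure, an idealization of the chest dome with assumed geometry (not the study's FEA model), shows the same ordering:

```python
# Lame solution for a thick-walled sphere under internal pressure p:
# tangential (hoop) stress at radius r, with inner radius a and outer b.
def hoop_stress(p, a, b, r):
    return p * a**3 / (b**3 - a**3) * (1.0 + b**3 / (2.0 * r**3))

p, a, b = 1.0, 1.0, 1.2        # unit pressure, 20% wall thickness (assumed)
inner = hoop_stress(p, a, b, a)
outer = hoop_stress(p, a, b, b)
# inner > outer: the internal surface carries the higher stress, in line
# with the FEA finding (the exact ratio depends on the wall thickness).
```

The ratio inner/outer grows with wall thickness, so the 82% figure in the abstract reflects the particular dome geometry modeled there.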
The SCEC Community Modeling Environment(SCEC/CME): A Collaboratory for Seismic Hazard Analysis
NASA Astrophysics Data System (ADS)
Maechling, P. J.; Jordan, T. H.; Minster, J. B.; Moore, R.; Kesselman, C.
2005-12-01
The SCEC Community Modeling Environment (SCEC/CME) Project is an NSF-supported Geosciences/IT partnership that is actively developing an advanced information infrastructure for system-level earthquake science in Southern California. This partnership includes SCEC, USC's Information Sciences Institute (ISI), the San Diego Supercomputer Center (SDSC), the Incorporated Research Institutions for Seismology (IRIS), and the U.S. Geological Survey. The goal of the SCEC/CME is to develop seismological applications and information technology (IT) infrastructure to support the development of Seismic Hazard Analysis (SHA) programs and other geophysical simulations. The SHA application programs developed on the Project include a Probabilistic Seismic Hazard Analysis system called OpenSHA. OpenSHA computational elements that are currently available include a collection of attenuation relationships and several Earthquake Rupture Forecasts (ERFs). Geophysicists in the collaboration have also developed Anelastic Wave Models (AWMs) using both finite-difference and finite-element approaches. Earthquake simulations using these codes have been run for a variety of earthquake sources. Rupture Dynamic Model (RDM) codes have also been developed that simulate friction-based fault slip. The SCEC/CME collaboration has also developed IT software and hardware infrastructure to support the development, execution, and analysis of these SHA programs. To support computationally expensive simulations, we have constructed a grid-based scientific workflow system. Using the SCEC grid, project collaborators can submit computations from the SCEC/CME servers to High Performance Computers at USC and TeraGrid High Performance Computing Centers. Data generated and archived by the SCEC/CME is stored in a digital library system, the Storage Resource Broker (SRB). This system provides a robust and secure system for maintaining the association between the data sets and their metadata.
To provide an easy-to-use system for constructing SHA computations, a browser-based workflow assembly web portal has been developed. Users can compose complex SHA calculations, specifying SCEC/CME data sets as inputs to calculations and calling SCEC/CME computational programs to process the data and the output. Knowledge-based software tools have been implemented that utilize ontological descriptions of SHA software and data to validate workflows created with this pathway assembly tool. Data visualization software developed by the collaboration supports analysis and validation of data sets. Several programs have been developed to visualize SCEC/CME data, including GMT-based map-making software for PSHA codes, 4D wavefield propagation visualization software based on OpenGL, and 3D Geowall-based visualization of earthquakes, faults, and seismic wave propagation. The SCEC/CME Project also helps to sponsor the SCEC UseIT Intern Program. The UseIT Intern Program provides research opportunities in both Geosciences and Information Technology to undergraduate students in a variety of fields. The UseIT group has developed a 3D data visualization tool, called SCEC-VDO, as part of this undergraduate research program.
Curtin, Lindsay B; Finn, Laura A; Czosnowski, Quinn A; Whitman, Craig B; Cawley, Michael J
2011-08-10
To assess the impact of computer-based simulation on the achievement of student learning outcomes during mannequin-based simulation. Participants were randomly assigned to rapid response teams of 5-6 students, and teams were then randomly assigned to complete either computer-based or mannequin-based simulation cases first. In both simulations, students used their critical thinking skills and selected interventions independent of facilitator input. A predetermined rubric was used to record and assess students' performance in the mannequin-based simulations. Feedback and student performance scores were generated by the software in the computer-based simulations. More of the teams in the group that completed the computer-based simulation before the mannequin-based simulation achieved the primary outcome for the exercise, which was survival of the simulated patient (41.2% vs. 5.6%). The majority of students (>90%) recommended the continuation of simulation exercises in the course. Students in both groups felt the computer-based simulation should be completed prior to the mannequin-based simulation. The use of computer-based simulation prior to mannequin-based simulation improved the achievement of learning goals and outcomes. In addition to improving participants' skills, completing the computer-based simulation first may improve participants' confidence in the more real-life setting achieved in the mannequin-based simulation.
NASA Technical Reports Server (NTRS)
Jain, Abhinandan
2011-01-01
Ndarts software provides algorithms for computing quantities associated with the dynamics of articulated, rigid-link, multibody systems. It is designed as a general-purpose dynamics library that can be used for the modeling of robotic platforms, space vehicles, molecular dynamics, and other such applications. The architecture and algorithms in Ndarts are based on the Spatial Operator Algebra (SOA) theory for computational multibody and robot dynamics developed at JPL. It uses minimal, internal coordinate models. The algorithms are low-order, recursive scatter/gather algorithms. In comparison with the earlier Darts++ software, this version has a more general and cleaner design needed to support a larger class of computational dynamics needs. It includes a frames infrastructure, allows algorithms to operate on subgraphs of the system, and implements lazy and deferred computation for better efficiency. Dynamics modeling modules such as Ndarts are core building blocks of control and simulation software for space, robotic, mechanism, bio-molecular, and material systems modeling.
A New Evaluation Method of Stored Heat Effect of Reinforced Concrete Wall of Cold Storage
NASA Astrophysics Data System (ADS)
Nomura, Tomohiro; Murakami, Yuji; Uchikawa, Motoyuki
Today it has become imperative to save energy by intermittently operating the refrigerator of a cold storage constructed with externally insulated reinforced concrete walls. The aim of this paper is to develop an evaluation method capable of numerically calculating the interval time for which the refrigerator can remain stopped when the reinforced concrete wall is used as a heat storage source. Experiments with concrete models were performed in order to examine the time variation of the internal temperature after the refrigerator stopped. In addition, a simulation method using three-dimensional unsteady FEM on a personal computer was introduced to easily analyze the internal temperature variation. Using this method, it is possible to obtain the time variation of the internal temperature and to calculate the interval time for stopping the refrigerator.
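A one-dimensional explicit finite-difference sketch conveys the idea behind the paper's 3D unsteady FEM: march the heat equation through the wall after the refrigerator stops and read off the internal-face temperature. The diffusivity, geometry, and temperatures below are assumed values, not the paper's:

```python
# Explicit FTCS scheme for the heat equation in a concrete wall.  The
# exterior face is held at ambient temperature; the interior face is
# treated as adiabatic once the refrigerator stops.
alpha = 7e-7                        # thermal diffusivity of concrete, m^2/s (assumed)
L_wall, n = 0.2, 21                 # wall thickness (m) and grid points
dx = L_wall / (n - 1)
dt = 0.4 * dx * dx / alpha          # satisfies the explicit stability limit (r = 0.4)
T = [0.0] * n                       # wall initially at storage temperature, degC
T[0] = 30.0                         # exterior face suddenly at ambient 30 degC

for _ in range(int(6 * 3600 / dt)): # march 6 hours of wall-clock time
    Tn = T[:]
    for i in range(1, n - 1):
        Tn[i] = T[i] + alpha * dt / dx**2 * (T[i + 1] - 2.0 * T[i] + T[i - 1])
    Tn[-1] = Tn[-2]                 # adiabatic interior face (refrigerator off)
    T = Tn

internal_temp = T[-1]               # temperature at the cold-room face, degC
```

The stopping interval in the paper is then the time until `internal_temp` reaches an allowable storage limit; this sketch only shows the marching step that makes that calculation possible.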
NASA Astrophysics Data System (ADS)
Kumar, Rakesh; Li, Zheng; Levin, Deborah A.
2011-05-01
In this work, we propose a new heat accommodation model to simulate freely expanding homogeneous condensation flows of gaseous carbon dioxide using a new approach, the statistical Bhatnagar-Gross-Krook method. The motivation for the present work comes from the earlier work of Li et al. [J. Phys. Chem. 114, 5276 (2010)], in which condensation models were proposed and used in the direct simulation Monte Carlo method to simulate the flow of carbon dioxide from supersonic expansions of small nozzles into near-vacuum conditions. Simulations conducted for stagnation pressures of one and three bar were compared with the measurements of gas and cluster number densities, cluster size, and carbon dioxide rotational temperature obtained by Ramos et al. [Phys. Rev. A 72, 3204 (2005)]. Due to the high computational cost of the direct simulation Monte Carlo method, comparison between simulations and data could only be performed for these stagnation pressures, with good agreement obtained beyond the condensation onset point, in the farfield. As the stagnation pressure increases, the degree of condensation also increases; therefore, to improve the modeling of condensation onset, one must be able to simulate higher stagnation pressures. In simulations of an expanding flow of argon through a nozzle, Kumar et al. [AIAA J. 48, 1531 (2010)] found that the statistical Bhatnagar-Gross-Krook method provides the same accuracy as the direct simulation Monte Carlo method, but at half the computational cost. In this work, the statistical Bhatnagar-Gross-Krook method was modified to account for internal degrees of freedom in multi-species polyatomic gases. With the computational approach in hand, we developed and tested a new heat accommodation model for a polyatomic system to properly account for the heat release of condensation. We then developed condensation models in the framework of the statistical Bhatnagar-Gross-Krook method.
Simulations were found to agree well with the experiment for all stagnation pressure cases (1-5 bar), validating the accuracy of the Bhatnagar-Gross-Krook based condensation model in capturing the physics of condensation.
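The contrast with DSMC that motivates the statistical BGK approach can be illustrated with a toy particle scheme: instead of computing pairwise collisions, each particle is redrawn from the local Maxwellian with a relaxation probability. The sketch below is a minimal monatomic, single-species version, not the authors' polyatomic condensation implementation, and all parameter values are illustrative.

```python
import numpy as np

# Toy statistical-BGK particle step (monatomic, single species; NOT the
# authors' polyatomic/condensation implementation). Instead of computing
# pairwise collisions as DSMC does, each particle is redrawn from the local
# Maxwellian with probability 1 - exp(-dt/tau).
rng = np.random.default_rng(3)

def bgk_step(v, dt, tau):
    u = v.mean(axis=0)                      # local bulk velocity
    T = ((v - u) ** 2).mean()               # kinetic temperature (kB*T/m)
    relax = rng.random(len(v)) < 1.0 - np.exp(-dt / tau)
    v = v.copy()
    v[relax] = u + np.sqrt(T) * rng.normal(size=(relax.sum(), 3))
    return v

# Two counter-streaming beams relax toward a single Maxwellian.
v = rng.normal(0.0, 1.0, size=(10000, 3))
v[:5000, 0] += 2.0
v[5000:, 0] -= 2.0
for _ in range(50):
    v = bgk_step(v, dt=1.0, tau=1.0)
```

Because the relaxation probability is evaluated per particle, the step needs no collision-pair selection, which is the kind of saving the abstract reports relative to DSMC.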
OpenWorm: an open-science approach to modeling Caenorhabditis elegans.
Szigeti, Balázs; Gleeson, Padraig; Vella, Michael; Khayrulin, Sergey; Palyanov, Andrey; Hokanson, Jim; Currie, Michael; Cantarelli, Matteo; Idili, Giovanni; Larson, Stephen
2014-01-01
OpenWorm is an international collaboration with the aim of understanding how the behavior of Caenorhabditis elegans (C. elegans) emerges from its underlying physiological processes. The project has developed a modular simulation engine to create computational models of the worm. The modularity of the engine makes it possible to easily modify the model, incorporate new experimental data and test hypotheses. The modeling framework incorporates both biophysical neuronal simulations and a novel fluid-dynamics-based soft-tissue simulation for physical environment-body interactions. The project's open-science approach is aimed at overcoming the difficulties of integrative modeling within a traditional academic environment. In this article the rationale is presented for creating the OpenWorm collaboration, the tools and resources developed thus far are outlined and the unique challenges associated with the project are discussed.
Numerical simulation of pressure fluctuation in 1000MW Francis turbine under small opening condition
NASA Astrophysics Data System (ADS)
Gong, R. Z.; Wang, H. G.; Yao, Y.; Shu, L. F.; Huang, Y. J.
2012-11-01
In order to study the cause of abnormal vibration in large Francis turbines under small opening conditions, a CFD method was adopted to analyze the flow field and pressure fluctuation. Numerical simulation was performed with the commercial CFD code ANSYS FLUENT 12 using the DES method. After an effective validation of the computational results, the behaviour of the internal flow field under the small opening condition was analyzed. Pressure fluctuations in different working modes were obtained by unsteady CFD simulation, and the results were compared to study their changes. Radial force fluctuation was also analyzed. The results show that the unstable flow under the small opening condition increases turbine instability in reverse pump mode and is one possible reason for the abnormal oscillation.
Simulated Data for High Temperature Composite Design
NASA Technical Reports Server (NTRS)
Chamis, Christos C.; Abumeri, Galib H.
2006-01-01
The paper describes an effective formal method that can be used to simulate design properties for composites, inclusive of all the effects that influence those properties. The method integrates computer codes that include composite micromechanics, composite macromechanics, laminate theory, structural analysis, and a multi-factor interaction model. Demonstrations of the method include sample examples of static, thermal, and fracture reliability for a unidirectional metal matrix composite, as well as rupture strength and fatigue strength for a high-temperature superalloy. Typical results obtained for a unidirectional composite show that the thermal properties are more sensitive to internal local damage; the longitudinal properties degrade slowly with temperature; and the transverse and shear properties degrade rapidly with temperature, as do the rupture strength and fatigue strength of superalloys.
NASA Astrophysics Data System (ADS)
Matha, Denis; Sandner, Frank; Schlipf, David
2014-12-01
Design verification of wind turbines is performed by simulating the design load cases (DLCs) defined in the IEC 61400-1 and -3 standards or equivalent guidelines. Because of the resulting large number of necessary load simulations, a method is presented here to significantly reduce the computational effort for DLC simulations by introducing a reduced nonlinear model with simplified hydro- and aerodynamics. The advantage of the formulation is that the nonlinear ODE system contains only basic mathematical operations and no iterations or internal loops, which makes it computationally very efficient. Global turbine extreme and fatigue loads, such as rotor thrust, tower base bending moment, and mooring line tension, as well as platform motions, are outputs of the model. They can be used to identify critical and less critical load situations, which can then be analysed with a higher-fidelity tool, speeding up the design process. Results from these reduced-model DLC simulations are presented and compared to higher-fidelity models. Results in the frequency and time domains, as well as extreme and fatigue load predictions, demonstrate good agreement between the reduced and advanced models, allowing less critical DLC simulations to be efficiently excluded and the most critical subset of cases for a given design to be identified. Additionally, the model is applicable to brute-force optimization of floater control system parameters.
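A reduced model of this kind can be miniaturized to a single pitch degree of freedom to show the flavor of the approach. The sketch below is not the authors' formulation; the inertia, stiffness, damping, and rotor parameters are invented round numbers, and the only nonlinearity kept is the quadratic thrust on the rotor-apparent wind.

```python
import numpy as np

# Minimal 1-DOF sketch in the spirit of the reduced model (NOT the authors'
# formulation): platform/tower pitch theta driven by a simplified quadratic
# rotor thrust, advanced with plain arithmetic only, no iterations or
# internal loops. All parameter values are invented round numbers.
I, k, c = 1.0e10, 2.0e9, 1.0e9           # inertia, stiffness, damping
h = 90.0                                  # hub lever arm [m]
rho, A, Ct = 1.225, np.pi * 63.0 ** 2, 0.8

def simulate(wind, dt=0.05):
    theta = omega = peak = 0.0
    for v in wind:
        v_rel = v - h * omega                          # rotor-apparent wind
        thrust = 0.5 * rho * A * Ct * v_rel * abs(v_rel)
        omega += dt * (h * thrust - k * theta - c * omega) / I
        theta += dt * omega                            # semi-implicit Euler
        peak = max(peak, abs(theta))
    return theta, peak

theta_end, theta_max = simulate(np.full(12000, 11.0))  # 10 min steady wind
```

Feeding thousands of turbulent wind seeds through such a loop is cheap, which is what allows the critical subset of DLCs to be screened before running a higher-fidelity tool.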
Passive Motion Paradigm: An Alternative to Optimal Control
Mohan, Vishwanathan; Morasso, Pietro
2011-01-01
In recent years, optimal control theory (OCT) has emerged as the leading approach for investigating neural control of movement and motor cognition in two complementary research lines: behavioral neuroscience and humanoid robotics. In both cases, there are general problems that need to be addressed, such as the “degrees of freedom (DoFs) problem,” the common core of production, observation, reasoning, and learning of “actions.” OCT, directly derived from engineering design techniques for control systems, quantifies task goals as “cost functions” and uses the sophisticated formal tools of optimal control to obtain desired behavior (and predictions). We propose an alternative, “softer” approach, the passive motion paradigm (PMP), which we believe is closer to the biomechanics and cybernetics of action. The basic idea is that actions (overt as well as covert) are the consequences of an internal simulation process that “animates” the body schema with the attractor dynamics of force fields induced by the goal and task-specific constraints. This internal simulation offers the brain a way to dynamically link motor redundancy with task-oriented constraints “at runtime,” hence solving the “DoFs problem” without explicit kinematic inversion and cost function computation. We argue that the function of such computational machinery is not restricted to shaping motor output during action execution but also provides the self with information on the feasibility, consequence, understanding, and meaning of “potential actions.” In this sense, taking into account recent developments in neuroscience (motor imagery, the simulation theory of covert actions, the mirror neuron system) and in embodied robotics, PMP offers a novel framework for understanding motor cognition that goes beyond the engineering control paradigm provided by OCT.
Therefore, the paper is at the same time a review of the PMP rationale, as a computational theory, and a perspective presentation of how to develop it for designing better cognitive architectures. PMID:22207846
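The core PMP computation, a goal-induced force field mapped to joint space through the Jacobian transpose with no kinematic inversion, can be sketched for a planar two-link arm. Link lengths, stiffness, and the goal below are illustrative choices, not values from the paper.

```python
import numpy as np

# Toy passive-motion-paradigm loop for a planar two-link arm: the goal sets
# up an attractor force field at the end-effector, which the Jacobian
# transpose maps to joint space, so the arm settles on a solution without
# any explicit kinematic inversion or cost function.
L1, L2 = 0.3, 0.25

def fk(q):
    """Forward kinematics: end-effector position."""
    return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                     L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

def jac(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

q = np.array([0.3, 0.5])                 # initial joint angles
goal = np.array([0.1, 0.4])              # reachable target
K, dt = 10.0, 0.01
for _ in range(6000):
    F = K * (goal - fk(q))               # force field pulling the hand
    q = q + dt * jac(q).T @ F            # "animate" the body schema
```

The same loop works unchanged for redundant arms: extra joints simply share the motion induced by the field, which is the "runtime" resolution of the DoFs problem the abstract describes.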
A Fokker-Planck based kinetic model for diatomic rarefied gas flows
NASA Astrophysics Data System (ADS)
Gorji, M. Hossein; Jenny, Patrick
2013-06-01
A Fokker-Planck based kinetic model is presented here, which also accounts for the internal energy modes characteristic of diatomic gas molecules. The model is based on a Fokker-Planck approximation of the Boltzmann equation for monatomic molecules, with phenomenological principles employed for the derivation. It is shown that the model honors the equipartition theorem in equilibrium and fulfills the Landau-Teller relaxation equations for internal degrees of freedom. The objective behind this approximate kinetic model is accuracy at reasonably low computational cost. This can be achieved because the resulting stochastic differential equations are continuous in time; therefore, no collisions between the simulated particles have to be calculated. Moreover, because of the devised energy-conserving time integration scheme, it is not required to resolve the collisional scales, i.e., the mean collision time and the mean free path of molecules. This, of course, gives rise to much more efficient simulations than other particle methods, especially the conventional direct simulation Monte Carlo (DSMC) method, for small and moderate Knudsen numbers. To examine the new approach, first the computational cost of the model was compared with that of DSMC, where a significant speed-up could be obtained for small Knudsen numbers. Second, the structure of a high-Mach-number shock (in nitrogen) was studied, and the good performance of the model for such out-of-equilibrium conditions was demonstrated. Finally, a hypersonic flow of nitrogen over a wedge was studied, where good agreement with DSMC (with a level-to-level transition model) for vibrational and translational temperatures is shown.
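Because the model's stochastic differential equations are continuous in time, a particle's velocity can be advanced with an exact Ornstein-Uhlenbeck update rather than by sampling collisions, so the time step need not resolve the mean collision time. The toy single-species monatomic relaxation below illustrates this; it is not the authors' diatomic model and uses reduced units.

```python
import numpy as np

# Exact Ornstein-Uhlenbeck update for a Fokker-Planck velocity process,
# dv = -v/tau dt + sqrt(2 T_eq / tau) dW  (reduced units, m = kB = 1).
# Toy monatomic relaxation toward equilibrium, not the authors' model.
rng = np.random.default_rng(2)

def fp_step(v, tau, T_eq, dt):
    a = np.exp(-dt / tau)                       # exact decay factor
    sigma = np.sqrt(T_eq * (1.0 - a ** 2))      # exact noise amplitude
    return a * v + sigma * rng.normal(size=v.shape)

v = rng.normal(0.0, 3.0, size=(10000, 3))   # hot initial state, T = 9
for _ in range(100):
    v = fp_step(v, tau=1.0, T_eq=1.0, dt=0.1)
```

The update is exact for any dt, which is the property that lets such methods take steps far larger than a collision time.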
NASA Technical Reports Server (NTRS)
Derkevorkian, Armen; Peterson, Lee; Kolaini, Ali R.; Hendricks, Terry J.; Nesmith, Bill J.
2016-01-01
An analytic approach is demonstrated to reveal potential pyroshock-driven dynamic effects causing power losses in the Thermo-Electric (TE) module bars of the Mars Science Laboratory (MSL) Multi-Mission Radioisotope Thermoelectric Generator (MMRTG). This study utilizes high-fidelity finite element analysis with SIERRA/PRESTO codes to estimate wave propagation effects due to large-amplitude, suddenly-applied pyroshock loads in the MMRTG. A high-fidelity model of the TE module bar was created with approximately 30 million degrees-of-freedom (DOF). First, a quasi-static preload was applied on top of the TE module bar; then transient tri-axial acceleration inputs were simultaneously applied on the preloaded module. The applied input acceleration signals were measured during MMRTG shock qualification tests performed at the Jet Propulsion Laboratory. An explicit finite element solver in the SIERRA/PRESTO computational environment, along with a 3000-processor parallel super-computing framework at NASA-AMES, was used for the simulation. The simulation results were investigated both qualitatively and quantitatively. The predicted shock wave propagation results provide detailed structural responses throughout the TE module bar, and key insights into the dynamic response (i.e., loads, displacements, accelerations) of critical internal spring/piston compression systems, TE materials, and internal component interfaces in the MMRTG TE module bar. They also provide confidence in the viability of this high-fidelity modeling scheme to accurately predict shock wave propagation patterns within complex structures. This analytic approach is envisioned for modeling shock-sensitive hardware susceptible to intense shock environments positioned near shock separation devices in modern space vehicles and systems.
Energy Exascale Earth System Model (E3SM) Project Strategy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bader, D.
The E3SM project will assert and maintain an international scientific leadership position in the development of Earth system and climate models at the leading edge of scientific knowledge and computational capabilities. With its collaborators, it will demonstrate its leadership by using these models to achieve the goal of designing, executing, and analyzing climate and Earth system simulations that address the most critical scientific questions for the nation and DOE.
(YIP 2011) Unsteady Output-based Adaptive Simulation of Separated and Transitional Flows
2015-03-19
Investigator, Aerospace Eng., U. Michigan; Marco Ceze, Ph.D. student/postdoctoral associate, Aerospace Eng., U. Michigan; Steven Kast, Ph.D. student, Aerospace Eng., U. Michigan. [13] S. M. Kast, M. A. Ceze, and K. J. Fidkowski. Output-adaptive solution strategies for unsteady aerodynamics on deformable domains. Seventh International Conference on Computational Fluid Dynamics, ICCFD7-3802, 2012. [14] S. M. Kast and K. J. Fidkowski. Output-based mesh adaptation for high order
NASA Astrophysics Data System (ADS)
Tomaro, Robert F.
1998-07-01
The present research is aimed at developing a higher-order, spatially accurate scheme for both steady and unsteady flow simulations using unstructured meshes. The resulting scheme must work on a variety of general problems to ensure the creation of a flexible, reliable and accurate aerodynamic analysis tool. To calculate the flow around complex configurations, unstructured grids and the associated flow solvers have been developed. Efficient simulations require the minimum use of computer memory and computational time. Unstructured flow solvers typically require more computer memory than a structured flow solver due to the indirect addressing of the cells. The approach taken in the present research was to modify an existing three-dimensional unstructured flow solver, first to decrease the computational time required for a solution and then to increase the spatial accuracy. The terms required to simulate flow involving non-stationary grids were also implemented. First, an implicit solution algorithm was implemented to replace the existing explicit procedure. Several test cases, including internal and external, inviscid and viscous, two-dimensional, three-dimensional and axisymmetric problems, were simulated for comparison between the explicit and implicit solution procedures. The increased efficiency and robustness of the modified code due to the implicit algorithm were demonstrated. Two unsteady test cases, a plunging airfoil and a wing undergoing bending and torsion, were simulated using the implicit algorithm modified to include the terms required for a moving and/or deforming grid. Second, a higher-than-second-order spatially accurate scheme was developed and implemented into the baseline code. Third- and fourth-order spatially accurate schemes were implemented and tested. The original dissipation was modified to include higher-order terms, and was further modified near shock waves to limit pre- and post-shock oscillations.
The unsteady cases were repeated using the higher-order spatially accurate code. The new solutions were compared with those obtained using the second-order spatially accurate scheme. Finally, the increased efficiency of using an implicit solution algorithm in a production Computational Fluid Dynamics flow solver was demonstrated for steady and unsteady flows. A third- and fourth-order spatially accurate scheme has been implemented creating a basis for a state-of-the-art aerodynamic analysis tool.
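The payoff of replacing the explicit procedure with an implicit one can be seen on a scalar stiff model problem: backward Euler remains stable at step sizes where forward Euler diverges. This is a textbook illustration, not the thesis solver itself.

```python
# Scalar stiff model problem x' = lam * x. At dt far beyond the explicit
# stability limit (|1 + lam*dt| <= 1), forward Euler blows up while
# backward Euler remains stable. Illustration only, not the CFD solver.
lam = -1000.0                 # fast decay rate (stiff)
dt, steps = 0.01, 100         # forward Euler would need dt < 2/|lam| = 0.002
x_exp = x_imp = 1.0
for _ in range(steps):
    x_exp = (1.0 + dt * lam) * x_exp      # forward Euler: factor -9 per step
    x_imp = x_imp / (1.0 - dt * lam)      # backward Euler: factor 1/11
```

The same trade governs the CFD case: the implicit scheme buys much larger stable time steps at the price of solving a linear system each step.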
An Isopycnal Box Model with predictive deep-ocean structure for biogeochemical cycling applications
NASA Astrophysics Data System (ADS)
Goodwin, Philip
2012-07-01
To simulate global ocean biogeochemical tracer budgets, a model must accurately determine both the volume and the surface origins of each water mass. Water-mass volumes are dynamically linked to the ocean circulation in General Circulation Models, but at the cost of a high computational load. In computationally efficient Box Models, the water-mass volumes are simply prescribed and do not vary when the circulation transport rates or water-mass densities are perturbed. A new computationally efficient Isopycnal Box Model is presented in which the sub-surface box volumes are internally calculated from the prescribed circulation using a diffusive conceptual model of the thermocline, in which upwelling of cold dense water is balanced by a downward diffusion of heat. The volumes of the sub-surface boxes are set so that the density stratification satisfies an assumed link between diapycnal diffusivity, κd, and buoyancy frequency, N: κd = c/N^α, where c and α are user-prescribed parameters. In contrast to conventional Box Models, the volumes of the sub-surface ocean boxes in the Isopycnal Box Model are dynamically linked to the circulation and automatically respond to circulation perturbations. This dynamical link allows an important facet of ocean biogeochemical cycling to be simulated in a highly computationally efficient model framework.
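The construction can be sketched in a few lines: integrate the implied buoyancy profile upward from the abyss, then slice it at prescribed interface densities to obtain box volumes. The sketch below assumes the one-dimensional advective-diffusive balance w db/dz = d/dz(κd db/dz) with κd = c/N^α and made-up round-number parameters; it is an illustration, not the paper's model code.

```python
import numpy as np

# Diffusive thermocline sketch: upwelling w of dense water balances
# downward diffusion of buoyancy b,
#   w * db/dz = d/dz(kappa_d * db/dz),  kappa_d = c / N**alpha,  N**2 = db/dz.
# Integrating once (zero deep-flux constant) gives w*b = c * N**(2 - alpha),
# i.e. db/dz = (w * b / c)**(2 / (2 - alpha)). Parameters are round numbers.

def stratification_profile(w=1e-7, c=1e-4, alpha=0.0,
                           b_bottom=1e-4, depth=4000.0, nz=4000):
    """Integrate buoyancy b(z) upward from the abyss (explicit Euler)."""
    dz = depth / nz
    z = np.linspace(-depth, 0.0, nz + 1)
    b = np.empty(nz + 1)
    b[0] = b_bottom
    p = 2.0 / (2.0 - alpha)
    for i in range(nz):
        b[i + 1] = b[i] + dz * (w * b[i] / c) ** p
    return z, b

def box_volumes(z, b, b_interfaces, area=3.6e14):
    """Slice the profile at prescribed buoyancy interfaces -> box volumes."""
    z_int = np.interp(b_interfaces, b, z)       # b is monotone increasing
    edges = np.concatenate(([z[0]], z_int, [z[-1]]))
    return area * np.diff(edges)

z, b = stratification_profile()
vols = box_volumes(z, b, b_interfaces=[2e-4, 5e-4, 1e-3])
```

For α = 0 (constant κd) the profile reduces to the classic exponential thermocline with e-folding depth κd/w, which makes a convenient sanity check. Perturbing w or c changes the profile and hence the box volumes, which is the dynamical link the abstract describes.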
Managing complexity in simulations of land surface and near-surface processes
Coon, Ethan T.; Moulton, J. David; Painter, Scott L.
2016-01-12
Increasing computing power and the growing role of simulation in Earth systems science have led to an increase in the number and complexity of processes in modern simulators. We present a multiphysics framework that specifies interfaces for coupled processes and automates weak and strong coupling strategies to manage this complexity. Process management is enabled by viewing the system of equations as a tree, where individual equations are associated with leaf nodes and coupling strategies with internal nodes. A dynamically generated dependency graph connects a variable to its dependencies, streamlining and automating model evaluation, easing model development, and ensuring models are modular and flexible. Additionally, the dependency graph is used to ensure that data requirements are consistent between all processes in a given simulation. Here we discuss the design and implementation of these concepts within the Arcos framework, and demonstrate their use for verification testing and hypothesis evaluation in numerical experiments.
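The two organizing ideas, a process tree with equations at the leaves and coupling strategies at internal nodes, plus a dependency graph for derived variables, can be sketched in a few lines. The class and variable names below are invented for illustration and are not the actual Arcos API.

```python
# Minimal sketch (names invented, NOT the actual Arcos API): a process tree
# whose leaves advance individual equations and whose internal nodes apply a
# coupling strategy, plus a dependency graph that evaluates a derived
# variable only via its declared dependencies.

class Leaf:
    """One physical process (a single equation)."""
    def __init__(self, name, advance_fn):
        self.name, self._advance = name, advance_fn
    def advance(self, state, dt):
        self._advance(state, dt)

class WeakCoupler:
    """Internal node: operator-split (sequential) coupling of children."""
    def __init__(self, children):
        self.children = children
    def advance(self, state, dt):
        for child in self.children:
            child.advance(state, dt)

class DependencyGraph:
    """var -> (dependencies, function); evaluated recursively on demand."""
    def __init__(self):
        self.rules = {}
    def register(self, var, deps, fn):
        self.rules[var] = (deps, fn)
    def evaluate(self, var, state):
        deps, fn = self.rules[var]
        vals = [self.evaluate(d, state) if d in self.rules else state[d]
                for d in deps]
        return fn(*vals)

# Toy coupling: an energy and a flow process sharing one state dict.
state = {"T": 280.0, "theta": 0.3}
model = WeakCoupler([
    Leaf("energy", lambda s, dt: s.update(T=s["T"] + 0.1 * dt)),
    Leaf("flow",   lambda s, dt: s.update(theta=s["theta"] - 0.01 * dt)),
])
model.advance(state, 1.0)

graph = DependencyGraph()
graph.register("k_rel", ["theta"], lambda th: th ** 3)   # derived variable
graph.register("flux",  ["k_rel"], lambda k: -2.0 * k)   # depends on k_rel
```

Swapping `WeakCoupler` for a strongly coupled node (one that assembles and solves its children's equations together) leaves the rest of the tree untouched, which is the modularity the abstract emphasizes.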
New Tooling System for Forming Aluminum Beverage Can End Shell
NASA Astrophysics Data System (ADS)
Yamazaki, Koetsu; Otsuka, Takayasu; Han, Jing; Hasegawa, Takashi; Shirasawa, Taketo
2011-08-01
This paper proposes a new tooling system for forming the shells of aluminum beverage can ends. First, the forming process of a conventional tooling system was simulated using three-dimensional finite element models. The simulation results were confirmed to be consistent with those of axisymmetric models, so simulations for further study were performed using axisymmetric models to save computational time. A comparison shows that thinning of the shell formed by the proposed tooling system is improved by about 3.6%. The influences of the tool uppermost surface profiles and tool initial positions in the new tooling system were investigated, and a design optimization method based on the numerical simulations was then applied to search for optimum design points, in order to minimize thinning subject to constraints on the geometrical dimensions of the shell. Finally, the performance of the shell subjected to internal pressure was confirmed to meet design requirements.
NASA Astrophysics Data System (ADS)
Bednar, Earl; Drager, Steven L.
2007-04-01
The objective of quantum information processing is to harness the paradigm shift offered by quantum computing to solve classically hard, computationally challenging problems. Some computationally challenging problems of interest include rapid image processing, rapid optimization of logistics, protecting information, secure distributed simulation, and massively parallel computation. Currently, one important problem with quantum information processing is that quantum computers are difficult to realize due to poor scalability and the great presence of errors. Therefore, we have supported the development of Quantum eXpress and QuIDD Pro, two quantum computer simulators running on classical computers for the development and testing of new quantum algorithms and processes. This paper examines the different methods used by these two quantum computing simulators. It reviews both simulators, highlighting each simulator's background, interface, and special features. It also demonstrates the implementation of current quantum algorithms on each simulator. It concludes with summary comments on both simulators.
The computation of equating errors in international surveys in education.
Monseur, Christian; Berezner, Alla
2007-01-01
Since the IEA's Third International Mathematics and Science Study, one of the major objectives of international surveys in education has been to report trends in achievement. The names of the two current IEA surveys reflect this growing interest: Trends in International Mathematics and Science Study (TIMSS) and Progress in International Reading Literacy Study (PIRLS). Similarly, a central concern of the OECD's PISA is with trends in outcomes over time. To facilitate trend analyses these studies link their tests using common-item equating in conjunction with item response modelling methods. IEA and PISA policies differ in terms of reporting the error associated with trends. In IEA surveys, the standard errors of the trend estimates do not include the uncertainty associated with the linking step, while PISA does include a linking error component in the standard errors of trend estimates. In other words, PISA implicitly acknowledges that trend estimates partly depend on the selected common items, while the IEA's surveys do not recognise this source of error. Failing to recognise the linking error leads to an underestimation of the standard errors and thus increases the Type I error rate, thereby resulting in the reporting of significant changes in achievement when in fact these are not significant. The growing interest of policy makers in trend indicators and the impact of the evaluation of educational reforms appear to be incompatible with such underestimation. However, the procedure implemented by PISA raises a few issues about the underlying assumptions for the computation of the equating error. After a brief introduction, this paper describes the procedure PISA implemented to compute the linking error. The underlying assumptions of this procedure are then discussed. Finally, an alternative method based on replication techniques is presented, evaluated in a simulation study, and then applied to the PISA 2000 data.
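The replication idea for the linking error can be illustrated with a jackknife over the common-item set: treat the link constant as a statistic of the chosen items and resample items to estimate its sampling error. The sketch below uses synthetic item drifts and simple mean-mean equating, not the actual PISA procedure.

```python
import numpy as np

# Jackknife sketch of a linking error: the link constant depends on which
# common items were selected, so delete one item at a time and measure how
# much the link moves. Item drifts are synthetic; the link is plain
# mean-mean equating, not the PISA procedure.
rng = np.random.default_rng(1)
n = 20
shift_per_item = rng.normal(0.02, 0.05, size=n)   # per-item difficulty drift

def link_constant(items):
    return items.mean()                  # mean-mean equating

full = link_constant(shift_per_item)
jk = np.array([link_constant(np.delete(shift_per_item, i)) for i in range(n)])
linking_error = np.sqrt((n - 1) / n * ((jk - full) ** 2).sum())
```

For the mean-mean link the jackknife standard error reduces exactly to s/sqrt(n), the standard error of the item mean; richer linking functions require the full replication machinery, which is the point of the alternative method discussed in the paper.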
González-Suárez, Ana; Trujillo, Macarena; Burdío, Fernando; Andaluz, Anna; Berjano, Enrique
2014-08-01
The aim of this study was to assess, by means of computer simulations, whether the heat-sink effect inside a large vessel (the portal vein) could protect the vessel wall from thermal damage close to an internally cooled electrode during radiofrequency (RF)-assisted resection. First, in vivo experiments were conducted to validate the computational model by comparing the experimental and computational thermal lesion shapes created around the vessels. Computer simulations were then carried out to study the effect of different factors, such as device-tissue contact, vessel position, and vessel-device distance, on temperature distributions and thermal lesion shapes near a large vessel, specifically the portal vein. The geometries of the thermal lesions around the vessels in the in vivo experiments were in agreement with the computational results. The thermal lesion shape created around the portal vein was significantly modified by the heat-sink effect in all the cases considered. Thermal damage to the portal vein wall was inversely related to the vessel-device distance. It was also more pronounced when the device-tissue contact surface was reduced or when the vessel was parallel to the device or perpendicular to its distal end (blade zone), the vessel wall being damaged at distances of less than 4.25 mm. The computational findings suggest that the heat-sink effect could protect the portal vein wall at distances equal to or greater than 5 mm, regardless of its position relative to the RF-based device.
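A one-dimensional steady Pennes-type balance is enough to see the heat-sink mechanism: clamping the far boundary to blood temperature (a large perfused vessel) pulls the whole temperature profile down relative to an insulated boundary. The sketch below is illustrative only, with round-number tissue parameters; the study's model is three-dimensional, time-dependent, and experimentally validated.

```python
import numpy as np

# 1-D steady Pennes-type balance k*T'' = w_cb*(T - Tb): RF heating enters as
# a flux q0 at x = 0; the far boundary at x = L is either a large vessel
# wall clamped to blood temperature Tb, or insulated (no vessel).
# All parameter values are illustrative round numbers.

def pennes_1d(vessel_at_far_end, L=0.01, n=101, k=0.5, w_cb=2000.0,
              q0=1000.0, Tb=37.0):
    dx = L / (n - 1)
    A = np.zeros((n, n))
    b = np.zeros(n)
    A[0, 0], A[0, 1], b[0] = 1.0, -1.0, q0 * dx / k   # -k dT/dx = q0 at x=0
    for i in range(1, n - 1):
        A[i, i - 1] = A[i, i + 1] = k / dx ** 2
        A[i, i] = -2.0 * k / dx ** 2 - w_cb
        b[i] = -w_cb * Tb
    if vessel_at_far_end:                   # perfused vessel wall: T = Tb
        A[n - 1, n - 1], b[n - 1] = 1.0, Tb
    else:                                   # insulated far boundary
        A[n - 1, n - 1], A[n - 1, n - 2] = 1.0, -1.0
    return np.linalg.solve(A, b)

T_vessel = pennes_1d(True)
T_no_vessel = pennes_1d(False)
```

With the vessel present the wall sits at blood temperature by construction and the tissue between device and vessel runs cooler, which is the protective effect the simulations quantify in three dimensions.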
Anigstein, Robert; Erdman, Michael C.; Ansari, Armin
2017-01-01
The detonation of a radiological dispersion device or other radiological incidents could result in the dispersion of radioactive materials and intakes of radionuclides by affected individuals. Transportable radiation monitoring instruments could be used to measure photon radiation from radionuclides in the body for triaging individuals and assigning priorities to their bioassay samples for further assessments. Computer simulations and experimental measurements are required for these instruments to be used for assessing intakes of radionuclides. Count rates from calibrated sources of 60Co, 137Cs, and 241Am were measured on three instruments: a survey meter containing a 2.54 × 2.54-cm NaI(Tl) crystal, a thyroid probe using a 5.08 × 5.08-cm NaI(Tl) crystal, and a portal monitor incorporating two 3.81 × 7.62 × 182.9-cm polyvinyltoluene plastic scintillators. Computer models of the instruments and of the calibration sources were constructed, using engineering drawings and other data provided by the manufacturers. Count rates on the instruments were simulated using the Monte Carlo radiation transport code MCNPX. The computer simulations were within 16% of the measured count rates for all 20 measurements without using empirical radionuclide-dependent scaling factors, as reported by others. The weighted root-mean-square deviations (differences between measured and simulated count rates, added in quadrature and weighted by the variance of the difference) were 10.9% for the survey meter, 4.2% for the thyroid probe, and 0.9% for the portal monitor. These results validate earlier MCNPX models of these instruments that were used to develop calibration factors that enable these instruments to be used for assessing intakes and committed doses from several gamma-emitting radionuclides. PMID:27115229
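The Monte Carlo principle behind such count-rate simulations can be shown at toy scale: sample isotropic emission directions from a point source and score the fraction of rays that intersect the detector face. This estimates only the geometric efficiency of an idealized disk detector, a far smaller task than the full MCNPX transport model with scattering, attenuation, and detector response.

```python
import numpy as np

# Toy Monte Carlo estimate of the geometric efficiency of a disk detector
# viewed by an on-axis isotropic point source: sample directions, score the
# fraction whose ray crosses the detector face. Geometry factor only; no
# scattering, attenuation, or detector response (unlike MCNPX).
rng = np.random.default_rng(4)

def geometric_efficiency(dist, radius, n=500_000):
    cos_t = rng.uniform(-1.0, 1.0, n)          # isotropic: cos(theta) uniform
    up = cos_t > 0.0                           # only rays toward the detector
    tan_t = np.sqrt(1.0 - cos_t[up] ** 2) / cos_t[up]
    hits = dist * tan_t <= radius              # radial offset at plane z=dist
    return hits.sum() / n

eff = geometric_efficiency(dist=10.0, radius=2.54)
```

For this geometry the estimate can be checked against the closed-form solid-angle fraction 0.5 * (1 - d / sqrt(d**2 + r**2)), a sanity check analogous to the measured-versus-simulated comparisons in the paper.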
GENOA-PFA: Progressive Fracture in Composites Simulated Computationally
NASA Technical Reports Server (NTRS)
Murthy, Pappu L. N.
2000-01-01
GENOA-PFA is a commercial version of the Composite Durability Structural Analysis (CODSTRAN) computer program that simulates the progression of damage ultimately leading to fracture in polymer-matrix-composite (PMC) material structures under various loading and environmental conditions. GENOA-PFA offers several capabilities not available in other programs developed for this purpose, making it preferable for use in analyzing the durability and damage tolerance of complex PMC structures in which the fiber reinforcements occur in two- and three-dimensional weaves and braids. GENOA-PFA implements a progressive-fracture methodology based on the idea that a structure fails when flaws that may initially be small (even microscopic) grow and/or coalesce to a critical dimension where the structure no longer has an adequate safety margin to avoid catastrophic global fracture. Damage is considered to progress through five stages: (1) initiation, (2) growth, (3) accumulation (coalescence of propagating flaws), (4) stable propagation (up to the critical dimension), and (5) unstable or very rapid propagation (beyond the critical dimension) to catastrophic failure. The computational simulation of progressive failure involves formal procedures for identifying the five different stages of damage and for relating the amount of damage at each stage to the overall behavior of the deteriorating structure. In GENOA-PFA, mathematical modeling of the composite physical behavior involves an integration of simulations at multiple, hierarchical scales ranging from the macroscopic (lamina, laminate, and structure) to the microscopic (fiber, matrix, and fiber/matrix interface), as shown in the figure. The code includes algorithms to simulate the progression of damage from various source defects, including (1) through-the-thickness cracks and (2) voids with edge, pocket, internal, or mixed-mode delaminations.
NASA Astrophysics Data System (ADS)
Ramotar, Lokendra; Rohrauer, Greg L.; Filion, Ryan; MacDonald, Kathryn
2017-03-01
The development of a dynamic thermal battery model for hybrid and electric vehicles is presented. A thermal equivalent circuit model is created that aims to capture and understand the heat propagation from the cells through the entire pack and to the environment, using a production vehicle battery pack for model validation. The model includes the production hardware and the liquid battery thermal management system components, using their physical and geometric properties to calculate the thermal resistances (conduction, convection, and radiation) of components along with their associated heat capacities. Various heat sources and sinks comprise the remaining model elements. Analog equivalent circuit simulations using PSpice are compared to experimental results to validate internal temperature nodes and the heat rates measured through various elements, which are then employed to refine the model further. Agreement with experimental results indicates that the proposed method allows a comprehensive real-time battery pack analysis at little computational expense compared to other types of computer-based simulations. Elevated road and ambient conditions in Mesa, Arizona are simulated for a parked vehicle with varying quiescent cooling rates to examine the effect on the diurnal battery temperature during longer-term static exposure. A typical daily driving schedule is also simulated and examined.
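A minimal version of such a thermal equivalent circuit is two lumped nodes (cell core and case) joined by thermal resistances and integrated explicitly. The capacities, resistances, and heat load below are illustrative round numbers, not the production pack's values.

```python
# Two-node thermal equivalent circuit (cell core -> case -> ambient) in the
# spirit of the abstract's RC-network model; all values are illustrative
# round numbers, not the production pack's parameters.

def simulate(q_gen, t_amb, dt=1.0, steps=100000,
             c_core=800.0, c_case=2000.0,        # heat capacities [J/K]
             r_core_case=0.5, r_case_amb=2.0):   # thermal resistances [K/W]
    T_core = T_case = t_amb
    for _ in range(steps):
        q1 = (T_core - T_case) / r_core_case     # conduction core -> case
        q2 = (T_case - t_amb) / r_case_amb       # convection case -> ambient
        T_core += dt * (q_gen - q1) / c_core
        T_case += dt * (q1 - q2) / c_case
    return T_core, T_case

T_core, T_case = simulate(q_gen=10.0, t_amb=25.0)
```

At steady state the generated heat flows through both resistances in series, so the core sits at t_amb + q_gen * (r_core_case + r_case_amb); the full pack model is the same idea with many more nodes and temperature-dependent sources.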
The Strata-1 experiment on small body regolith segregation
NASA Astrophysics Data System (ADS)
Fries, Marc; Abell, Paul; Brisset, Julie; Britt, Daniel; Colwell, Joshua; Dove, Adrienne; Durda, Dan; Graham, Lee; Hartzell, Christine; Hrovat, Kenneth; John, Kristen; Karrer, Dakotah; Leonard, Matthew; Love, Stanley; Morgan, Joseph; Poppin, Jayme; Rodriguez, Vincent; Sánchez-Lana, Paul; Scheeres, Dan; Whizin, Akbar
2018-01-01
The Strata-1 experiment studies the mixing and segregation dynamics of regolith on small bodies by exposing a suite of regolith simulants to the microgravity environment aboard the International Space Station (ISS) for one year. This will improve our understanding of regolith dynamics and properties on small asteroids, and may assist in interpreting analyses of samples from missions to small bodies such as OSIRIS-REx, Hayabusa-1 and -2, and future missions. The Strata-1 experiment consists of four evacuated tubes partially filled with regolith simulants. The simulants were chosen to represent models of regolith covering a range of complexity and tailored to inform and improve computational studies. Dedicated cameras regularly image the four tubes as the simulants move in response to the ambient vibrational environment, and the imagery is downlinked to the Strata-1 science team about every two months. Analyses performed on the imagery include evaluating the extent of segregation of the Strata-1 samples and comparing the observations to computational models. After Strata-1's return to Earth, x-ray tomography and optical microscopy will be used to study the post-flight simulant distribution. Strata-1 is also a pathfinder for the new "1E" ISS payload class, which is intended to simplify and accelerate the emplacement of experiments on board the ISS.
Cross-entropy optimization for neuromodulation.
Brar, Harleen K; Yunpeng Pan; Mahmoudi, Babak; Theodorou, Evangelos A
2016-08-01
This study presents a reinforcement learning approach for optimizing the proportional-integral gains of the feedback controller in a computational model of epilepsy. The chaotic oscillator model provides a feedback control systems view of the dynamics of an epileptic brain, with an internal feedback controller representing the natural seizure suppression mechanism within the brain circuitry. Normal and pathological brain activity is simulated in this model by adjusting the feedback gain values of the internal controller. With insufficient gains, the internal controller cannot provide enough feedback to the brain dynamics, causing an increase in correlation between different brain sites. This increase in synchronization destabilizes the brain dynamics, which is representative of an epileptic seizure. To compensate for an insufficient internal controller, an external controller is designed using a proportional-integral feedback control strategy. Instead of hand-tuning the gains, a cross-entropy optimization algorithm is applied to the chaotic oscillator network model to learn the optimal feedback gains for the external controller, providing sufficient control to the pathological brain and preventing seizure generation. The correlation between the dynamics of neural activity at different brain sites is calculated for experimental data, showing that the network of chaotic oscillators simulates dynamics similar to those of epileptic neural activity.
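The cross-entropy step itself is simple: sample candidate gains from a Gaussian, score each by a closed-loop cost, and refit the Gaussian to the elite samples. The sketch below applies it to a scalar unstable plant dx/dt = a*x + u under PI control rather than the paper's chaotic oscillator network; the plant, horizon, population size, and elite fraction are all assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(gains, a=1.0, dt=0.01, steps=500):
    """Integrated squared error for a PI-stabilized scalar plant dx/dt = a*x + u."""
    kp, ki = gains
    x, integ, J = 1.0, 0.0, 0.0
    for _ in range(steps):
        u = -kp * x - ki * integ
        x += dt * (a * x + u)
        integ += dt * x
        J += dt * x * x
        if not np.isfinite(x) or abs(x) > 1e6:
            return 1e9   # unstable gains get a large penalty
    return J

def cross_entropy_opt(n_iter=30, pop=50, elite=10):
    """Iteratively refit a Gaussian over (kp, ki) to the elite samples."""
    mean, std = np.zeros(2), np.full(2, 5.0)
    for _ in range(n_iter):
        samples = rng.normal(mean, std, size=(pop, 2))
        scores = np.array([cost(s) for s in samples])
        best = samples[np.argsort(scores)[:elite]]
        mean, std = best.mean(axis=0), best.std(axis=0) + 1e-3
    return mean
```

The learned gains should drive the closed-loop cost far below that of the uncontrolled (zero-gain) plant.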
Microwave quantum logic gates for trapped ions.
Ospelkaus, C; Warring, U; Colombe, Y; Brown, K R; Amini, J M; Leibfried, D; Wineland, D J
2011-08-10
Control over physical systems at the quantum level is important in fields as diverse as metrology, information processing, simulation and chemistry. For trapped atomic ions, the quantized motional and internal degrees of freedom can be coherently manipulated with laser light. Similar control is difficult to achieve with radio-frequency or microwave radiation: the essential coupling between internal degrees of freedom and motion requires significant field changes over the extent of the atoms' motion, but such changes are negligible at these frequencies for freely propagating fields. An exception is in the near field of microwave currents in structures smaller than the free-space wavelength, where stronger gradients can be generated. Here we first manipulate coherently (on timescales of 20 nanoseconds) the internal quantum states of ions held in a microfabricated trap. The controlling magnetic fields are generated by microwave currents in electrodes that are integrated into the trap structure. We also generate entanglement between the internal degrees of freedom of two atoms with a gate operation suitable for general quantum computation; the entangled state has a fidelity of 0.76(3), where the uncertainty denotes standard error of the mean. Our approach, which involves integrating the quantum control mechanism into the trapping device in a scalable manner, could be applied to quantum information processing, simulation and spectroscopy.
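The entangling operation can be illustrated numerically with an ideal Mølmer-Sørensen-type unitary exp(-i*theta/2 * XX), the class of geometric phase gate that such near-field gradients can drive. This decoherence-free sketch reaches unit fidelity by construction, unlike the measured 0.76(3); the gate form and angle are illustrative assumptions.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
XX = np.kron(X, X)

def ms_gate(theta):
    """Ideal Molmer-Sorensen-type entangling unitary exp(-i*theta/2 * XX);
    since XX @ XX = I, the matrix exponential reduces to cos/sin terms."""
    return np.cos(theta / 2) * np.eye(4) - 1j * np.sin(theta / 2) * XX

# acting on |00> at theta = pi/2 yields the Bell state (|00> - i|11>)/sqrt(2)
psi = ms_gate(np.pi / 2) @ np.array([1, 0, 0, 0], dtype=complex)
bell = np.array([1, 0, 0, -1j], dtype=complex) / np.sqrt(2)
fidelity = abs(bell.conj() @ psi) ** 2
```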
Multilevel Monte Carlo and improved timestepping methods in atmospheric dispersion modelling
NASA Astrophysics Data System (ADS)
Katsiolides, Grigoris; Müller, Eike H.; Scheichl, Robert; Shardlow, Tony; Giles, Michael B.; Thomson, David J.
2018-02-01
A common way to simulate the transport and spread of pollutants in the atmosphere is via stochastic Lagrangian dispersion models. Mathematically, these models describe turbulent transport processes with stochastic differential equations (SDEs). The computational bottleneck is the Monte Carlo algorithm, which simulates the motion of a large number of model particles in a turbulent velocity field; for each particle, a trajectory is calculated with a numerical timestepping method. Choosing an efficient numerical method is particularly important in operational emergency-response applications, such as tracking radioactive clouds from nuclear accidents or predicting the impact of volcanic ash clouds on international aviation, where accurate and timely predictions are essential. In this paper, we investigate the application of the Multilevel Monte Carlo (MLMC) method to simulate the propagation of particles in a representative one-dimensional dispersion scenario in the atmospheric boundary layer. MLMC can be shown to result in asymptotically superior computational complexity and reduced computational cost when compared to the Standard Monte Carlo (StMC) method, which is currently used in atmospheric dispersion modelling. To reduce the absolute cost of the method also in the non-asymptotic regime, it is equally important to choose the best possible numerical timestepping method on each level. To investigate this, we also compare the standard symplectic Euler method, which is used in many operational models, with two improved timestepping algorithms based on SDE splitting methods.
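The MLMC telescoping sum E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}] can be sketched on a toy SDE. The sketch below uses an Ornstein-Uhlenbeck process with Euler-Maruyama timestepping and couples the fine and coarse paths by summing the fine Brownian increments; the SDE, payoff X(T), and level parameters are assumed for illustration and are far simpler than the boundary-layer dispersion model.

```python
import numpy as np

rng = np.random.default_rng(1)

def level_estimator(level, n_paths, T=1.0, x0=1.0, sigma=0.5, h0=0.25):
    """Estimate E[P_l - P_{l-1}] for payoff P = X(T) of dX = -X dt + sigma dW,
    coupling fine and coarse paths through shared Brownian increments."""
    hf = h0 / 2**level
    nf = int(round(T / hf))
    dWf = rng.normal(0.0, np.sqrt(hf), size=(n_paths, nf))
    xf = np.full(n_paths, x0)
    for i in range(nf):
        xf = xf + hf * (-xf) + sigma * dWf[:, i]
    if level == 0:
        return xf.mean()
    hc = 2 * hf
    dWc = dWf[:, 0::2] + dWf[:, 1::2]   # coarse increments from the fine ones
    xc = np.full(n_paths, x0)
    for i in range(nf // 2):
        xc = xc + hc * (-xc) + sigma * dWc[:, i]
    return (xf - xc).mean()

def mlmc_estimate(L=4, n_paths=20000):
    """Telescoping sum over levels; coarse levels carry most of the variance."""
    return sum(level_estimator(l, n_paths) for l in range(L + 1))
```

For this linear SDE the exact answer is E[X(1)] = exp(-1), which the estimator should reproduce to within statistical and discretization error.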
Ponnath, Abhilash
2010-01-01
Sensitivity to acoustic amplitude modulation in crickets differs between species and depends on carrier frequency (e.g., calling song vs. bat-ultrasound bands). Using computational tools, we explore how Ca2+-dependent mechanisms underlying selective attention can contribute to such differences in amplitude modulation sensitivity. For omega neuron 1 (ON1), selective attention is mediated by Ca2+-dependent feedback: internal [Ca2+] increases with excitation, activating a Ca2+-dependent after-hyperpolarizing current. We propose that the Ca2+ removal rate and the size of the after-hyperpolarizing current can determine ON1's temporal modulation transfer function (TMTF). This is tested using a conductance-based simulation calibrated to responses in vivo. The model shows that parameter values that simulate responses to single pulses are sufficient to simulate responses to modulated stimuli: no special modulation-sensitive mechanisms are necessary, as the high- and low-pass portions of the TMTF are due to Ca2+-dependent spike frequency adaptation and post-synaptic potential depression, respectively. Furthermore, variance in these two biophysical parameters is sufficient to produce TMTFs of varying bandwidth, shifting amplitude modulation sensitivity as observed in different species and in response to different carrier frequencies. Thus, the hypothesis that the size of the after-hyperpolarizing current and the rate of Ca2+ removal can affect amplitude modulation sensitivity is computationally validated. PMID:20559640
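The proposed mechanism can be caricatured with a leaky integrate-and-fire unit carrying a Ca2+-gated after-hyperpolarizing current: each spike increments an internal calcium variable whose slow decay suppresses subsequent firing. All parameters below are assumed round numbers, not the calibrated ON1 conductances; the point is only that spike-frequency adaptation (lengthening inter-spike intervals) emerges from the Ca2+ feedback alone.

```python
def adapting_neuron(I_ext=1.5, t_end=1000.0, dt=0.1,
                    tau_m=10.0, tau_ca=200.0, g_ahp=0.05, v_th=1.0):
    """Leaky integrate-and-fire unit with a Ca-dependent AHP current:
    each spike increments [Ca], which decays with tau_ca and gates an
    outward current g_ahp * ca (all parameters assumed, units arbitrary)."""
    v, ca, spikes = 0.0, 0.0, []
    for i in range(int(t_end / dt)):
        v += dt * (-v + I_ext - g_ahp * ca) / tau_m   # membrane integration
        ca -= dt * ca / tau_ca                        # slow Ca removal
        if v >= v_th:
            v = 0.0        # reset after spike
            ca += 1.0      # Ca influx per spike
            spikes.append(i * dt)
    return spikes
```

With constant drive, the inter-spike intervals grow as calcium accumulates, which is exactly the spike-frequency-adaptation signature behind the high-pass portion of the TMTF.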
Stochastic Effects in Computational Biology of Space Radiation Cancer Risk
NASA Technical Reports Server (NTRS)
Cucinotta, Francis A.; Pluth, Janis; Harper, Jane; O'Neill, Peter
2007-01-01
Estimating risk from space radiation poses important questions about the radiobiology of protons and heavy ions. We are considering systems biology models to study radiation-induced repair foci (RIRF) at low doses, in which fewer than one track on average traverses the cell, and the subsequent DNA damage processing and signal transduction events. Computational approaches for describing protein regulatory networks coupled to DNA and oxidative damage sites include systems of differential equations, stochastic equations, and Monte Carlo simulations. We review recent developments in the mathematical description of protein regulatory networks and possible approaches to simulating radiation effects. These include robustness, which states that regulatory networks maintain their functions against external and internal perturbations due to the compensating properties of redundancy and molecular feedback controls, and modularity, which leads to general theorems for considering molecules that interact through a regulatory mechanism without exchange of matter, leading to a block-diagonal reduction of the connecting pathways. Identifying rate-limiting steps, robustness, and modularity in pathways perturbed by radiation damage is shown to be a valid technique for reducing large molecular systems to realistic computer simulations. Other techniques studied are the use of steady-state analysis and the introduction of composite molecules or rate constants to represent small collections of reactants. Applications of these techniques to describe the spatial and temporal distributions of RIRF and cell populations following low-dose irradiation are described.
Interaction of hydraulic and buckling mechanisms in blowout fractures.
Nagasao, Tomohisa; Miyamoto, Junpei; Jiang, Hua; Tamaki, Tamotsu; Kaneko, Tsuyoshi
2010-04-01
The etiology of blowout fractures is generally attributed to two mechanisms: an increase in the pressure of the orbital contents (the hydraulic mechanism) and direct transmission of impacts on the orbital walls (the buckling mechanism). The present study aims to elucidate whether an interaction exists between these two mechanisms. We performed a simulation experiment using 10 computer-aided design (CAD) skull models. We applied destructive energy to the orbits of the 10 models in three different ways. First, to simulate the pure hydraulic mechanism, energy was applied solely to the internal walls of the orbit. Second, to simulate the pure buckling mechanism, energy was applied solely to the inferior rim of the orbit. Third, to simulate the combined effect of the hydraulic and buckling mechanisms, energy was applied to both the internal wall and the inferior rim of the orbit. After applying the energy, we calculated the areas of the regions where fracture occurred in the models, and then compared these areas among the three energy application patterns. When the hydraulic and buckling mechanisms work simultaneously, fracture occurs over wider areas of the orbital walls than when each mechanism works separately. The hydraulic and buckling mechanisms interact, enhancing each other's effect. This should be taken into consideration when examining patients in whom a blowout fracture is suspected.
Crash Models for Advanced Automotive Batteries: A Review of the Current State of the Art
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turner, John A.; Allu, Srikanth; Gorti, Sarma B.
Safety is a critical aspect of lithium-ion (Li-ion) battery design. Impact/crash conditions can trigger a complex interplay of mechanical contact, heat generation and electrical discharge, which can result in adverse thermal events. The cause of these thermal events has been linked to internal contact between the opposite electrodes, i.e. an internal short circuit. The severity of the outcome is influenced by the configuration of the internal short circuit and the battery state. Different loading conditions and battery states may lead to micro (soft) shorts, where material burnout due to generated heat eliminates contact between the electrodes, or persistent (hard) shorts, which can lead to more significant thermal events and potentially damage the entire battery system and beyond. Experimental characterization of individual battery components for the onset of internal shorts is limited, since it is impractical to canvass all possible variations in battery state of charge, operating conditions, and impact loading in a timely manner. This report provides a survey of modeling and simulation approaches and documents a project initiated and funded by DOT/NHTSA to improve modeling and simulation capabilities in order to design tests that provide leading indicators of failure in batteries. In this project, ORNL has demonstrated a computational infrastructure for conducting impact simulations of Li-ion batteries using models that resolve internal structures and electro-thermo-chemical and mechanical conditions. Initial comparisons to abuse experiments on cells and cell strings conducted at ORNL and the Naval Surface Warfare Center (NSWC) at Carderock, MD for parameter estimation and model validation have been performed. This research has provided insight into the mechanisms of deformation in batteries (at both the cell and electrode level) and their relationship to battery safety.
Estimation of ligament strains and joint moments in the ankle during a supination sprain injury.
Wei, Feng; Fong, Daniel Tik-Pui; Chan, Kai-Ming; Haut, Roger C
2015-01-01
This study presents the ankle ligament strains and ankle joint moments during an accidental injury event diagnosed as a grade I anterior talofibular ligament (ATaFL) sprain. A male athlete accidentally sprained his ankle while performing a cutting motion in a laboratory setting. The kinematic data were input to a three-dimensional rigid-body foot model for simulation analyses. Maximum strains in 20 ligaments were evaluated in simulations that investigated various combinations of the reported ankle joint motions. Temporal strains in the ATaFL and the calcaneofibular ligament (CaFL) were then compared, and the three-dimensional ankle joint moments were evaluated from the model. The ATaFL and CaFL were highly strained when the inversion motion was simulated (10% for the ATaFL and 12% for the CaFL). These ligament strains increased significantly when plantarflexion or internal rotation motions, or both, were added in a temporal fashion (up to 20% for the ATaFL and 16% for the CaFL). Interestingly, at the time of peak ATaFL strain the plantarflexion angle was not large, yet apparently important. This computational simulation study suggested that an inversion moment of approximately 23 N m, plus an internal rotation moment of approximately 11 N m and a small plantarflexion moment, may have generated a strain of 15-20% in the ATaFL to produce a grade I ligament injury in the athlete's ankle. This injury simulation study demonstrated the potentially important roles of plantarflexion and internal rotation, when combined with a large inversion motion, in producing a grade I ATaFL injury in the ankle of this athlete.
Simulation in International Relations Education.
ERIC Educational Resources Information Center
Starkey, Brigid A.; Blake, Elizabeth L.
2001-01-01
Discusses the educational implications of simulations in international relations. Highlights include the development of international relations simulations; the role of technology; the International Communication and Negotiation Simulations (ICONS) project at the University of Maryland; evolving information technology; and simulating real-world…
Tortuosity Computations of Porous Materials using the Direct Simulation Monte Carlo
NASA Technical Reports Server (NTRS)
Borner, A.; Ferguson, C.; Panerai, F.; Mansour, Nagi N.
2017-01-01
Low-density carbon fiber preforms, used as thermal protection system (TPS) materials for planetary entry systems, have permeable, highly porous microstructures consisting of interlaced fibers. Internal gas transport in TPS is important in modeling the penetration of hot boundary-layer gases and the in-depth transport of pyrolysis and ablation products. The gas effective diffusion coefficient of a porous material must be known before gas transport can be modeled in material response solvers; however, there is little available data for the rigid fibrous insulators used in heritage TPS. The tortuosity factor, which reflects the efficiency of the percolation paths, can be computed from the effective diffusion coefficient of a gas inside a porous material and is based on the microstructure of the material. It is well known that the tortuosity factor is a strong function of the Knudsen number. Due to the small characteristic scales of porous media used in TPS applications (typical pore size of the order of 50 microns), the transport of gases can occur in the rarefied and transitional regimes, at Knudsen numbers above 1. A proper way to model the gas dynamics at these conditions is to solve the Boltzmann equation using particle-based methods that account for the movement and collisions of atoms and molecules. In this work we adopt, for the first time, the Direct Simulation Monte Carlo (DSMC) method to compute the tortuosity factor of fibrous media in the rarefied regime. To enable realistic simulations of the actual transport of gases in the porous medium, digitized computational grids are obtained from X-ray micro-tomography imaging of real TPS materials. The SPARTA DSMC solver is used for the simulations.
Effective diffusion coefficients and tortuosity factors are obtained by computing the mean-square displacement of diffusing particles. We first apply the method to compute the tortuosity factors as a function of the Knudsen number for computationally designed materials such as random cylindrical fibers and packed beds of spheres with prescribed porosity. Results are compared to literature values obtained using random-walk methods in the rarefied and transitional regimes and a finite-volume method for the continuum regime. We then compute tortuosity factors for a real carbon fiber material with a transverse isotropic structure (FiberForm), quantifying differences between through-thickness and in-plane tortuosities in various Knudsen regimes.
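The mean-square-displacement route to tortuosity can be sketched with a 2D lattice random walk among randomly blocked sites, a toy stand-in for DSMC particles in a tomography-derived fiber geometry: the effective diffusivity is the MSD slope, and the tortuosity factor is the free-space diffusivity divided by it. The grid size, walker count, and obstacle fraction are assumed.

```python
import numpy as np

rng = np.random.default_rng(2)

def tortuosity_2d(obstacle_frac, n_walkers=2000, n_steps=400, L=64):
    """Tortuosity factor D_free / D_eff from the mean-square displacement
    of lattice random walkers among randomly blocked sites (periodic)."""
    blocked = rng.random((L, L)) < obstacle_frac
    pos = rng.integers(0, L, size=(n_walkers, 2))
    for k in range(n_walkers):                 # start only on open sites
        while blocked[pos[k, 0] % L, pos[k, 1] % L]:
            pos[k] = rng.integers(0, L, size=2)
    start = pos.copy()
    moves = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])
    for _ in range(n_steps):
        trial = pos + moves[rng.integers(0, 4, size=n_walkers)]
        ok = ~blocked[trial[:, 0] % L, trial[:, 1] % L]   # reject blocked moves
        pos[ok] = trial[ok]
    msd = ((pos - start) ** 2).sum(axis=1).mean()
    D_eff = msd / (4.0 * n_steps)              # 2D: MSD = 4 D t
    return 0.25 / D_eff                        # free lattice walk has D = 1/4
```

Without obstacles the factor should come out near 1; adding obstacles lengthens the percolation paths and drives it above 1, mirroring the trend the DSMC computations quantify in real fiber geometries.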
Prognostic characteristics of the lowest-mode internal waves in the Sea of Okhotsk
NASA Astrophysics Data System (ADS)
Kurkin, Andrey; Kurkina, Oxana; Zaytsev, Andrey; Rybin, Artem; Talipova, Tatiana
2017-04-01
The nonlinear dynamics of short-period internal waves on ocean shelves is well described by generalized nonlinear evolutionary models of Korteweg-de Vries type. Parameters of these models, such as the long-wave propagation speed and the nonlinear and dispersive coefficients, can be calculated from hydrological data (sea water density stratification), and therefore have geographical and seasonal variations. The internal wave parameters for the basin of the Sea of Okhotsk are computed on the basis of the recent version of the hydrological data source GDEM V3.0. The geographical and seasonal variability of the internal wave characteristics is investigated. It is shown that annually or seasonally averaged data can be used for the linear parameters. The nonlinear parameters are more sensitive to temporal averaging of hydrological data, and detailed data are preferable. The zones where the nonlinear parameters change sign (the so-called "turning points") are identified. Possible internal waveforms appearing in the process of internal tide transformation, including solitary waves changing polarity, are simulated for the hydrological conditions of the Sea of Okhotsk shelf to demonstrate different scenarios of internal wave adjustment, transformation, refraction and cylindrical divergence.
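For a two-layer stratification the KdV coefficients have well-known closed forms (Boussinesq, rigid-lid, long-wave approximations), which make the "turning point" explicit: the nonlinear coefficient is proportional to (h1 - h2) and changes sign where the interface sits at mid-depth. The layer depths and density jump below are illustrative, not the GDEM-derived Sea of Okhotsk profiles, which require continuous-stratification eigenvalue computations.

```python
import numpy as np

G = 9.81  # gravity, m/s^2

def kdv_coefficients(h1, h2, drho_over_rho=3e-3):
    """Two-layer KdV coefficients: linear long-wave speed c, nonlinear
    coefficient alpha, dispersive coefficient beta (Boussinesq limit)."""
    c = np.sqrt(G * drho_over_rho * h1 * h2 / (h1 + h2))
    alpha = 1.5 * c * (h1 - h2) / (h1 * h2)
    beta = c * h1 * h2 / 6.0
    return c, alpha, beta

def turning_depth(h1, depths):
    """First total depth H in the given sequence at which alpha is no longer
    negative, i.e. a 'turning point' where internal solitary waves switch
    from depressions to elevations (h2 = H - h1 is the lower-layer depth)."""
    for H in depths:
        _, alpha, _ = kdv_coefficients(h1, H - h1)
        if alpha >= 0.0:
            return H
    return None
```

With a 20 m upper layer, scanning from deep to shallow water places the sign change at H = 2 * h1 = 40 m, the two-layer analogue of the turning-point zones mapped in the paper.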
A breakthrough for experiencing and understanding simulated physics
NASA Technical Reports Server (NTRS)
Watson, Val
1988-01-01
The use of computer simulation in physics research is discussed, focusing on improvements to graphics workstations. Simulation capabilities and applications of enhanced visualization tools are outlined. The elements of an ideal computer simulation are presented, and the potential for improving various simulation elements is examined. The human-computer interface and simulation models are considered. Recommendations are made for changes in computer simulation practices and for applications of simulation technology in education.
Prototyping and Simulation of Robot Group Intelligence using Kohonen Networks.
Wang, Zhijun; Mirdamadi, Reza; Wang, Qing
2016-01-01
Intelligent agents such as robots can form ad hoc networks and replace human beings in many dangerous scenarios, such as a complicated disaster-relief site. This project prototypes and builds a computer simulator to simulate robot kinetics, unsupervised learning using Kohonen networks, and group intelligence when an ad hoc network is formed. Each robot is modeled as an object with a simple set of attributes and methods that define its internal states and the possible actions it may take under certain circumstances. As a result, simple, reliable, and affordable robots can be deployed to form the network. The simulator treats a group of robots as an unsupervised learning unit and tests the learning results under scenarios of different complexity. The simulation results show that a group of robots can demonstrate highly collaborative behavior on a complex terrain. This study could provide a software simulation platform for testing the individual and group capabilities of robots before they are designed and manufactured, and so has the potential to reduce the cost and improve the efficiency of robot design and building.
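The unsupervised-learning component can be sketched with a minimal 1D Kohonen map: for each sample, find the best-matching unit (BMU) and pull it and its grid neighbours toward the sample, shrinking the learning rate and neighbourhood over time. The grid size, schedules, and two-cluster test data are assumptions, not the project's simulator internals.

```python
import numpy as np

rng = np.random.default_rng(3)

def train_som(data, grid=8, n_iter=2000, lr0=0.5, sigma0=3.0):
    """Train a 1D Kohonen map: each unit holds a weight vector; the BMU and
    its chain neighbours (Gaussian falloff) move toward each sample, with the
    learning rate and neighbourhood width decaying linearly over training."""
    w = rng.random((grid, data.shape[1]))
    for t in range(n_iter):
        x = data[rng.integers(len(data))]
        bmu = np.argmin(((w - x) ** 2).sum(axis=1))
        frac = t / n_iter
        lr = lr0 * (1.0 - frac)
        sigma = sigma0 * (1.0 - frac) + 0.5
        dist = np.abs(np.arange(grid) - bmu)
        h = np.exp(-dist**2 / (2.0 * sigma**2))
        w += lr * h[:, None] * (x - w)
    return w
```

After training on clustered data, the quantization error (mean distance from each sample to its BMU) should be small, the usual check that the map has organized itself over the input distribution.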
NASA Astrophysics Data System (ADS)
Lee, Choonik
A series of realistic voxel computational phantoms of pediatric patients was developed and then used for radiation risk assessment in various exposure scenarios. High-resolution computed tomography images of live patients were utilized for the development of five voxel phantoms of pediatric patients: 9-month male, 4-year female, 8-year female, 11-year male, and 14-year male. The phantoms were first developed as head and torso phantoms and then extended into whole-body phantoms by utilizing computed tomography images of a healthy adult volunteer. The whole-body phantom series was modified to have the same anthropometrics as the most recent reference data reported by the International Commission on Radiological Protection. The phantoms, named the University of Florida series B, are the first complete set of pediatric voxel phantoms having reference organ masses and total heights. As part of the dosimetry study, skeletal tissue dosimetry methods were investigated for a better understanding of the radiation dose to the active bone marrow and bone endosteum. All of the currently available methodologies were inter-compared and benchmarked against the paired-image radiation transport model. The dosimetric characteristics of the phantoms were investigated using Monte Carlo simulation of broad parallel external photon beams in anterior-posterior, posterior-anterior, left-lateral, right-lateral, rotational, and isotropic geometries. Organ dose conversion coefficients were calculated for an extensive range of photon energies and compared with those of the conventional stylized pediatric phantoms of Oak Ridge National Laboratory. Multi-slice helical computed tomography exams were simulated using a Monte Carlo simulation code for various exam protocols: head, chest, abdomen, pelvis, and chest-abdomen-pelvis studies. The results provide realistic estimates of the effective doses for frequently used protocols in pediatric radiology.
The results were crucial in understanding the radiation risks of patients undergoing computed tomography. Finally, nuclear medicine simulations were performed by calculating specific absorbed fractions for multiple target-source organ pairs via Monte Carlo simulation. Specific absorbed fractions were calculated for both photons and electrons so that they can be used to calculate radionuclide S-values. All of the results are tabulated for future use, and an example dose assessment was performed for selected nuclides administered in nuclear medicine.
Pan, Yuxi; Qiu, Rui; Gao, Linfeng; Ge, Chaoyong; Zheng, Junzheng; Xie, Wenzhang; Li, Junli
2014-09-21
With the rapidly growing number of CT examinations, the consequent radiation risk has attracted increasing attention. The average dose in each organ during CT scans can only be obtained by using Monte Carlo simulation with computational phantoms. Since children tend to have higher radiation sensitivity than adults, the radiation dose of pediatric CT examinations requires special attention and needs to be assessed accurately. So far, studies on organ doses from CT exposures for pediatric patients are still limited. In this work, a 1-year-old computational phantom was constructed. The body contour was obtained from the CT images of a 1-year-old physical phantom, and the internal organs were deformed from an existing Chinese reference adult phantom. To ensure that the organ locations in the 1-year-old computational phantom were consistent with those of the physical phantom, the organ locations were manually adjusted one by one, and the organ masses were adjusted to the corresponding Chinese reference values. Moreover, a CT scanner model was developed using the Monte Carlo technique, and the 1-year-old computational phantom was applied to estimate organ doses derived from simulated CT exposures. As a result, a database including doses to 36 organs and tissues from 47 single axial scans was built. It has been verified by calculation that the doses of axial scans are close to those of helical scans; therefore, this database can be applied to helical scans as well. Organ doses were calculated using the database and compared with those obtained from measurements made in the physical phantom for helical scans. The differences between simulation and measurement were less than 25% for all organs. The results show that the 1-year-old phantom developed in this work can be used to calculate organ doses in CT exposures, and that the dose database provides a method for estimating 1-year-old patient doses in a variety of CT examinations.
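The Monte Carlo organ-dose idea can be caricatured in one dimension: photons enter a stack of unit-thickness slabs standing in for organs, interaction depths are sampled from the exponential attenuation law, and energy is tallied in the slab where the interaction occurs (single energy, no scatter). The attenuation coefficient and geometry are assumed; a real CT dose calculation transports photons through the voxel phantom with full interaction physics.

```python
import numpy as np

rng = np.random.default_rng(5)

def slab_doses(n_photons=50_000, mu=0.2, n_slabs=6):
    """Tally the fraction of photons absorbed in each unit-thickness slab,
    sampling free paths from the exponential law p(d) = mu * exp(-mu * d)."""
    depth = rng.exponential(1.0 / mu, size=n_photons)
    s = depth.astype(int)                       # slab index = integer part of depth
    counts = np.bincount(s[s < n_slabs], minlength=n_slabs)
    return counts / n_photons
```

Successive slab doses should fall off by a factor of about exp(-mu), the same depth-dose behaviour a phantom-based tally resolves organ by organ.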
Effects of Ensemble Configuration on Estimates of Regional Climate Uncertainties
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldenson, N.; Mauger, G.; Leung, L. R.
Internal variability in the climate system can contribute substantial uncertainty to climate projections, particularly at regional scales. Internal variability can be quantified using large ensembles of simulations that are identical but for perturbed initial conditions. Here we compare methods for quantifying internal variability. Our study region spans the west coast of North America, which is strongly influenced by El Niño and other large-scale dynamics through their contribution to large-scale internal variability. Using a statistical framework to simultaneously account for multiple sources of uncertainty, we find that internal variability can be quantified consistently using a large ensemble or an ensemble of opportunity that includes small ensembles from multiple models and climate scenarios. The latter also produces estimates of uncertainty due to model differences. We conclude that projection uncertainties are best assessed using small single-model ensembles from as many model-scenario pairings as computationally feasible, which has implications for ensemble design in large modeling efforts.
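The comparison can be sketched with synthetic data: internal variability is the across-member spread after removing each ensemble's mean (forced) signal, and small ensembles from several "models" can be pooled. The member counts, noise level, and forced offsets below are assumptions, not actual model output.

```python
import numpy as np

rng = np.random.default_rng(4)

def internal_variability(members):
    """Across-member variance at each time, averaged over times, after the
    ensemble-mean (forced) signal is removed; members: (n_members, n_times)."""
    return np.sqrt(np.var(members, axis=0, ddof=1).mean())

# synthetic example: three "models" with different forced offsets but the
# same internal variability (sigma = 0.4); 5 members each vs one 40-member run
sigma = 0.4
small = [off + rng.normal(0.0, sigma, size=(5, 50)) for off in (0.0, 1.0, 2.0)]
pooled = np.sqrt(np.mean([internal_variability(e) ** 2 for e in small]))
large = internal_variability(rng.normal(0.5, sigma, size=(40, 50)))
```

The pooled small-ensemble estimate and the large-ensemble estimate should agree, which is the consistency result the study reports for real ensembles of opportunity.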
Symplectic molecular dynamics simulations on specially designed parallel computers.
Borstnik, Urban; Janezic, Dusanka
2005-01-01
We have developed a computer program for molecular dynamics (MD) simulation that implements the Split Integration Symplectic Method (SISM) and is designed to run on specialized parallel computers. The MD integration is performed by the SISM, which treats high-frequency vibrational motion analytically and thus enables the use of longer simulation time steps. The low-frequency motion is treated numerically on specially designed parallel computers, which decreases the computational time of each simulation step. Together, these approaches require fewer and cheaper steps, enabling fast MD simulations. We study the computational performance of MD simulations of molecular systems on specialized computers and provide a comparison to standard personal computers. The combination of the SISM with two specialized parallel computers is an effective way to increase the speed of MD simulations, up to 16-fold over a single PC processor.
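The splitting idea behind the SISM can be sketched for a single stiff degree of freedom: the fast harmonic part is advanced analytically as an exact phase-space rotation, while a weak anharmonic force is applied as symmetric half-kicks (Strang splitting). The frequency, coupling, and quartic perturbation are assumed toy values; the actual SISM operates on full molecular Hamiltonians.

```python
import math

def sism_step(q, p, dt, omega=10.0, eps=0.1):
    """One step of a split symplectic integrator: the stiff harmonic part
    H_fast = p^2/2 + omega^2 q^2/2 is advanced exactly (phase-space
    rotation); the weak anharmonic force -eps*q^3 enters as half-kicks."""
    p -= 0.5 * dt * eps * q**3                                  # half kick
    c, s = math.cos(omega * dt), math.sin(omega * dt)
    q, p = c * q + s * p / omega, -omega * s * q + c * p        # exact rotation
    p -= 0.5 * dt * eps * q**3                                  # half kick
    return q, p

def energy(q, p, omega=10.0, eps=0.1):
    return 0.5 * p**2 + 0.5 * omega**2 * q**2 + 0.25 * eps * q**4
```

Because the scheme is symplectic and the fast part is exact, the energy error stays bounded over long runs even when the time step resolves the fast period only coarsely, which is what permits the longer steps the abstract describes.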
Asakura, Nobuhiko; Inui, Toshio
2016-01-01
Two apparently contrasting theories have been proposed to account for the development of children's theory of mind (ToM): theory-theory and simulation theory. We present a Bayesian framework that rationally integrates both theories for false belief reasoning. This framework exploits two internal models for predicting the belief states of others: one of self and one of others. These internal models are responsible for simulation-based and theory-based reasoning, respectively. The framework further takes into account empirical studies of a developmental ToM scale (e.g., Wellman and Liu, 2004): developmental progressions of various mental state understandings leading up to false belief understanding. By representing the internal models and their interactions as a causal Bayesian network, we formalize the model of children's false belief reasoning as probabilistic computations on the Bayesian network. This model probabilistically weighs and combines the two internal models and predicts children's false belief ability as a multiplicative effect of their early-developed abilities to understand the mental concepts of diverse beliefs and knowledge access. Specifically, the model predicts that children's proportion of correct responses on a false belief task can be closely approximated as the product of their proportions correct on the diverse belief and knowledge access tasks. To validate this prediction, we illustrate that our model provides good fits to a variety of ToM scale data for preschool children. We discuss the implications and extensions of our model for a deeper understanding of developmental progressions of children's ToM abilities. PMID:28082941
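The multiplicative prediction drops out of a minimal causal Bayesian network in which a correct false-belief response requires both diverse-belief (DB) and knowledge-access (KA) understanding, i.e. a deterministic AND gate over independent parents. The sketch below enumerates the joint distribution; this network structure is a simplification of the paper's model, and the input proportions are illustrative.

```python
from itertools import product

def false_belief_prob(p_db, p_ka):
    """Marginal P(FB correct) in a minimal Bayesian network where a correct
    false-belief response requires BOTH diverse-belief (DB) and
    knowledge-access (KA) understanding (deterministic AND gate)."""
    total = 0.0
    for db, ka in product([0, 1], repeat=2):
        p = (p_db if db else 1 - p_db) * (p_ka if ka else 1 - p_ka)
        total += p * (1.0 if db and ka else 0.0)   # AND gate
    return total
```

By construction false_belief_prob(p_db, p_ka) equals p_db * p_ka, which is the product rule the model fits to the ToM-scale data.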
Application of a Modular Particle-Continuum Method to Partially Rarefied, Hypersonic Flow
NASA Astrophysics Data System (ADS)
Deschenes, Timothy R.; Boyd, Iain D.
2011-05-01
The Modular Particle-Continuum (MPC) method is used to simulate partially-rarefied, hypersonic flow over a sting-mounted planetary probe configuration. This hybrid method uses computational fluid dynamics (CFD) to solve the Navier-Stokes equations in regions that are continuum, while using direct simulation Monte Carlo (DSMC) in portions of the flow that are rarefied. The MPC method uses state-based coupling to pass information between the two flow solvers and decouples both time-step and mesh densities required by each solver. It is parallelized for distributed memory systems using dynamic domain decomposition and internal energy modes can be consistently modeled to be out of equilibrium with the translational mode in both solvers. The MPC results are compared to both full DSMC and CFD predictions and available experimental measurements. By using DSMC in only regions where the flow is nonequilibrium, the MPC method is able to reproduce full DSMC results down to the level of velocity and rotational energy probability density functions while requiring a fraction of the computational time.
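Hybrid methods of this kind need a criterion for deciding which cells are continuum (CFD) and which are rarefied (DSMC). A common choice in the hybrid literature is a gradient-length-local Knudsen number; the 1-D sketch below uses that criterion with an illustrative threshold, and is an assumption about the flavor of test used, not the MPC method's actual implementation:

```python
def continuum_breakdown_flags(x, density, mean_free_path, threshold=0.05):
    """Flag cells for DSMC treatment using a gradient-length-local Knudsen
    number, Kn_GLL = lambda * |dQ/dx| / Q, evaluated here on density in 1-D
    with central differences. Cells above the threshold are 'rarefied'."""
    flags = []
    for i in range(len(x)):
        j0, j1 = max(i - 1, 0), min(i + 1, len(x) - 1)
        grad = (density[j1] - density[j0]) / (x[j1] - x[j0])
        kn_gll = mean_free_path[i] * abs(grad) / density[i]
        flags.append(kn_gll > threshold)
    return flags

# Hypothetical profile: a density drop in the middle of the domain
flags = continuum_breakdown_flags([0.0, 1.0, 2.0, 3.0],
                                  [1.0, 1.0, 0.5, 0.5],
                                  [0.3] * 4)
```

Only the cells spanning the steep gradient are flagged, which is what confines the expensive DSMC solver to the nonequilibrium regions.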
Computational Model of Heat Transfer on the ISS
NASA Technical Reports Server (NTRS)
Torian, John G.; Rischar, Michael L.
2008-01-01
SCRAM Lite (SCRAM signifies Station Compact Radiator Analysis Model) is a computer program for analyzing convective and radiative heat-transfer and heat-rejection performance of coolant loops and radiators, respectively, in the active thermal-control systems of the International Space Station (ISS). SCRAM Lite is a derivative of prior versions of SCRAM but is more robust. SCRAM Lite computes thermal operating characteristics of active heat-transport and heat-rejection subsystems for the major ISS configurations from Flight 5A through completion of assembly. The program performs integrated analysis of both internal and external coolant loops of the various ISS modules and of an external active thermal control system, which includes radiators and the coolant loops that transfer heat to the radiators. The SCRAM Lite run time is of the order of one minute per day of mission time. The overall objective of the SCRAM Lite simulation is to process input profiles of equipment-rack, crew-metabolic, and other heat loads to determine flow rates, coolant supply temperatures, and available radiator heat-rejection capabilities. Analyses are performed for timelines of activities, orbital parameters, and attitudes for mission times ranging from a few hours to several months.
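At the core of any radiator heat-rejection analysis is a radiative balance between the panel surface and its effective environmental sink. A minimal sketch with assumed, illustrative panel values (not SCRAM Lite's actual models, which also include the coolant-loop convection):

```python
STEFAN_BOLTZMANN = 5.670e-8  # W m^-2 K^-4

def radiator_heat_rejection(area_m2, emissivity, t_surface_k, t_sink_k):
    """Net radiative heat rejection of a panel:
    Q = eps * sigma * A * (Ts^4 - Tsink^4)."""
    return emissivity * STEFAN_BOLTZMANN * area_m2 * (
        t_surface_k ** 4 - t_sink_k ** 4)

# Hypothetical 10 m^2 panel, emissivity 0.9, surface at 280 K,
# effective sink temperature 200 K
q = radiator_heat_rejection(10.0, 0.9, 280.0, 200.0)  # roughly 2.3 kW
```

The strong fourth-power dependence on surface temperature is why the available rejection capability varies so much with coolant supply temperature and orbital attitude.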
Seedorf, Jens; Schmidt, Ralf-Gunther
2017-08-01
Research that investigates bioaerosol emissions from animal transport vehicles (ATVs) and their importance in the spread of harmful airborne agents while the ATVs travel on roads is limited. To investigate the dynamical behaviour of theoretically released particles from a moving ATV, the open-source computational fluid dynamics (CFD) software OpenFOAM was used to calculate the external and internal air flow fields with passive and forced ventilated openings of a common ATV moving at a speed of 80 km/h. In addition to a computed flow rate of approximately 40,000 m³/h crossing the interior of the ATV, the visualization of the trajectories demonstrated distinct patterns in the spatial distribution of potentially released bioaerosols in the vicinity of the ATV. Although the front openings show the highest airflow to the outside, recirculation of air masses between the interior of the ATV and the atmosphere also occurs, which complicates the emission and dispersion characterizations. To specify future emission rates of ATVs, a database of bioaerosol concentrations within the ATV is necessary, in conjunction with high-performance computing resources, to simulate the potential dispersion of bioaerosols in the environment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Candel, A.; Kabel, A.; Lee, L.
Over the past years, SLAC's Advanced Computations Department (ACD), under SciDAC sponsorship, has developed a suite of 3D (2D) parallel higher-order finite element (FE) codes, T3P (T2P) and Pic3P (Pic2P), aimed at accurate, large-scale simulation of wakefields and particle-field interactions in radio-frequency (RF) cavities of complex shape. The codes are built on the FE infrastructure that supports SLAC's frequency domain codes, Omega3P and S3P, to utilize conformal tetrahedral (triangular) meshes, higher-order basis functions and quadratic geometry approximation. For time integration, they adopt an unconditionally stable implicit scheme. Pic3P (Pic2P) extends T3P (T2P) to treat charged-particle dynamics self-consistently using the PIC (particle-in-cell) approach, the first such implementation on a conformal, unstructured grid using Whitney basis functions. Examples from applications to the International Linear Collider (ILC), Positron Electron Project-II (PEP-II), Linac Coherent Light Source (LCLS) and other accelerators will be presented to compare the accuracy and computational efficiency of these codes versus their counterparts using structured grids.
NASA Astrophysics Data System (ADS)
Gruber, Ralph; Periaux, Jaques; Shaw, Richard Paul
Recent advances in computational mechanics are discussed in reviews and reports. Topics addressed include spectral superpositions on finite elements for shear banding problems, strain-based finite plasticity, numerical simulation of hypersonic viscous continuum flow, constitutive laws in solid mechanics, dynamics problems, fracture mechanics and damage tolerance, composite plates and shells, contact and friction, metal forming and solidification, coupling problems, and adaptive FEMs. Consideration is given to chemical flows, convection problems, free boundaries and artificial boundary conditions, domain-decomposition and multigrid methods, combustion and thermal analysis, wave propagation, mixed and hybrid FEMs, integral-equation methods, optimization, software engineering, and vector and parallel computing.
NASA Technical Reports Server (NTRS)
Spjeldvik, W. N.
1981-01-01
Computer simulations of processes which control the relative abundances of ions in the trapping regions of geospace are compared with observations from discriminating ion detectors. Energy losses due to Coulomb collisions between ions and exospheric neutrals are considered, along with charge exchange losses and internal charge exchanges. The time evolution of energetic ion fluxes of equatorially mirroring ions under radial diffusion is modelled to include geomagnetic and geoelectric fluctuations. Limits to the validity of diffusion transport theory are discussed, and the simulation is noted to contain provisions for six ionic charge states and the source effect on the radiation belt oxygen ion distributions. Comparisons are made with ion flux data gathered on Explorer 45 and ISEE-1 spacecraft and results indicate that internal charge exchanges cause the radiation belt ion charge state to be independent of source charge rate characteristics, and relative charge state distribution is independent of the radially diffusive transport rate below the charge state redistribution zone.
Bifurcated helical core equilibrium states in tokamaks
NASA Astrophysics Data System (ADS)
Cooper, W. A.; Chapman, I. T.; Schmitz, O.; Turnbull, A. D.; Tobias, B. J.; Lazarus, E. A.; Turco, F.; Lanctot, M. J.; Evans, T. E.; Graves, J. P.; Brunetti, D.; Pfefferlé, D.; Reimerdes, H.; Sauter, O.; Halpern, F. D.; Tran, T. M.; Coda, S.; Duval, B. P.; Labit, B.; Pochelon, A.; Turnyanskiy, M. R.; Lao, L.; Luce, T. C.; Buttery, R.; Ferron, J. R.; Hollmann, E. M.; Petty, C. C.; van Zeeland, M.; Fenstermacher, M. E.; Hanson, J. M.; Lütjens, H.
2013-07-01
Tokamaks with weak to moderate reversed central shear in which the minimum inverse rotational transform (safety factor) qmin is in the neighbourhood of unity can trigger bifurcated magnetohydrodynamic equilibrium states, one of which is similar to a saturated ideal internal kink mode. Peaked prescribed pressure profiles reproduce the ‘snake’ structures observed in many tokamaks, which has led to a novel explanation of the snake as a bifurcated equilibrium state. Snake equilibrium structures are computed in simulations of the tokamak à configuration variable (TCV), DIII-D and mega amp spherical torus (MAST) tokamaks. The internal helical deformations only weakly modulate the plasma-vacuum interface, which is more sensitive to ripple and resonant magnetic perturbations. On the other hand, the external perturbations do not alter the helical core deformation in a significant manner. The confinement of fast particles in MAST simulations deteriorates with the amplitude of the helical core distortion. These three-dimensional bifurcated solutions constitute a paradigm shift that motivates the application of tools developed for stellarator research in tokamak physics investigations.
Modelling cell motility and chemotaxis with evolving surface finite elements
Elliott, Charles M.; Stinner, Björn; Venkataraman, Chandrasekhar
2012-01-01
We present a mathematical and a computational framework for the modelling of cell motility. The cell membrane is represented by an evolving surface, with the movement of the cell determined by the interaction of various forces that act normal to the surface. We consider external forces such as those that may arise owing to inhomogeneities in the medium and a pressure that constrains the enclosed volume, as well as internal forces that arise from the reaction of the cells' surface to stretching and bending. We also consider a protrusive force associated with a reaction–diffusion system (RDS) posed on the cell membrane, with cell polarization modelled by this surface RDS. The computational method is based on an evolving surface finite-element method. The general method can account for the large deformations that arise in cell motility and allows the simulation of cell migration in three dimensions. We illustrate applications of the proposed modelling framework and numerical method by reporting on numerical simulations of a model for eukaryotic chemotaxis and a model for the persistent movement of keratocytes in two and three space dimensions. Movies of the simulated cells can be obtained from http://homepages.warwick.ac.uk/∼maskae/CV_Warwick/Chemotaxis.html. PMID:22675164
The Navy/NASA Engine Program (NNEP89): A user's manual
NASA Technical Reports Server (NTRS)
Plencner, Robert M.; Snyder, Christopher A.
1991-01-01
An engine simulation computer code called NNEP89 was written to perform 1-D steady state thermodynamic analysis of turbine engine cycles. By using a very flexible method of input, a set of standard components are connected at execution time to simulate almost any turbine engine configuration that the user could imagine. The code was used to simulate a wide range of engine cycles from turboshafts and turboprops to air turborockets and supersonic cruise variable cycle engines. Off-design performance is calculated through the use of component performance maps. A chemical equilibrium model is incorporated to adequately predict chemical dissociation as well as model virtually any fuel. NNEP89 is written in standard FORTRAN77 with clear structured programming and extensive internal documentation. The standard FORTRAN77 programming allows it to be installed onto most mainframe computers and workstations without modification. The NNEP89 code was derived from the Navy/NASA Engine Program (NNEP). NNEP89 provides many improvements and enhancements to the original NNEP code and incorporates features which make it easier to use for the novice user. This is a comprehensive user's guide for the NNEP89 code.
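The idea of connecting standard components at execution time can be sketched as a chain of station-state transformations assembled from an input list. The component names, loss factors, and thermodynamic simplifications below are illustrative assumptions, not NNEP89's actual component models or input format:

```python
def inlet(state):
    """Hypothetical inlet: small total-pressure loss."""
    state = dict(state)
    state["pt"] *= 0.98
    return state

def compressor(state, pr=10.0, eff=0.85):
    """Hypothetical compressor: pressure ratio pr, ideal-gas temperature
    rise corrected by an adiabatic efficiency (gamma = 1.4)."""
    state = dict(state)
    state["pt"] *= pr
    state["tt"] *= 1.0 + (pr ** 0.2857 - 1.0) / eff
    return state

def burner(state, tt_out=1600.0):
    """Hypothetical burner: pressure loss plus fixed exit total temperature."""
    state = dict(state)
    state["pt"] *= 0.95
    state["tt"] = tt_out
    return state

def run_cycle(components, state):
    """Chain the components in the order listed in the 'input deck'."""
    for comp in components:
        state = comp(state)
    return state

# The cycle configuration is data, so rearranging this list simulates a
# different engine without changing any component code.
cycle = [inlet, compressor, burner]
out = run_cycle(cycle, {"pt": 101325.0, "tt": 288.15})
```

Keeping the configuration as data rather than code is what lets a single set of standard components cover turboshafts, turboprops, and variable cycle engines alike.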
Numerical Simulations For the F-16XL Aircraft Configuration
NASA Technical Reports Server (NTRS)
Elmiligui, Alaa A.; Abdol-Hamid, Khaled; Cavallo, Peter A.; Parlette, Edward B.
2014-01-01
Numerical simulations of flow around the F-16XL are presented as a contribution to the Cranked Arrow Wing Aerodynamic Project International II (CAWAPI-II). The NASA Tetrahedral Unstructured Software System (TetrUSS) is used to perform numerical simulations. This CFD suite, developed and maintained by NASA Langley Research Center, includes an unstructured grid generation program called VGRID, a postprocessor named POSTGRID, and the flow solver USM3D. The CRISP CFD package is utilized to provide error estimates and grid adaption for verification of USM3D results. A subsonic, high angle-of-attack case, flight condition (FC) 25, is computed and analyzed. Three turbulence models are used in the calculations: the one-equation Spalart-Allmaras (SA), the two-equation shear stress transport (SST), and the k-epsilon (k-ε) models. Computational results and surface static pressure profiles are presented and compared with flight data. Solution verification is performed using formal grid refinement studies, the solution of Error Transport Equations, and adaptive mesh refinement. The current study shows that the USM3D solver coupled with CRISP CFD can be used in an engineering environment in predicting vortex-flow physics on a complex configuration at flight Reynolds numbers.
Launch Vehicle Systems Analysis
NASA Technical Reports Server (NTRS)
Olds, John R.
1999-01-01
This report summarizes the key accomplishments of Georgia Tech's Space Systems Design Laboratory (SSDL) under NASA Grant NAG8-1302 from NASA - Marshall Space Flight Center. The report consists of this summary white paper, copies of technical papers written under this grant, and several viewgraph-style presentations. During the course of this grant four main tasks were completed: (1) Simulated Combined-Cycle Rocket Engine Analysis Module (SCCREAM), a computer analysis tool for predicting the performance of various RBCC engine configurations; (2) Hyperion, a single-stage-to-orbit vehicle capable of delivering 25,000-pound payloads to the International Space Station orbit; (3) Bantam-X support, a small payload mission; and (4) international trajectory support for interplanetary human Mars missions.
Break modeling for RELAP5 analyses of ISP-27 Bethsy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petelin, S.; Gortnar, O.; Mavko, B.
This paper presents pre- and posttest analyses of International Standard Problem (ISP) 27 on the Bethsy facility and separate RELAP5 break model tests considering the measured boundary condition at break inlet. This contribution also demonstrates modifications which have assured the significant improvement of model response in posttest simulations. Calculations were performed using the RELAP5/MOD2/36.05 and RELAP5/MOD3.5M5 codes on the MicroVAX, SUN, and CONVEX computers. Bethsy is an integral test facility that simulates a typical 900-MW (electric) Framatome pressurized water reactor. The ISP-27 scenario involves a 2-in. cold-leg break without HPSI and with delayed operator procedures for secondary system depressurization.
Solar Proton Transport within an ICRU Sphere Surrounded by a Complex Shield: Combinatorial Geometry
NASA Technical Reports Server (NTRS)
Wilson, John W.; Slaba, Tony C.; Badavi, Francis F.; Reddell, Brandon D.; Bahadori, Amir A.
2015-01-01
The 3DHZETRN code, with improved neutron and light ion (Z ≤ 2) transport procedures, was recently developed and compared to Monte Carlo (MC) simulations using simplified spherical geometries. It was shown that 3DHZETRN agrees with the MC codes to the extent they agree with each other. In the present report, the 3DHZETRN code is extended to enable analysis in general combinatorial geometry. A more complex shielding structure with internal parts surrounding a tissue sphere is considered and compared against MC simulations. It is shown that even in the more complex geometry, 3DHZETRN agrees well with the MC codes and maintains a high degree of computational efficiency.
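Transport through combinatorial geometry ultimately reduces to computing the path length a ray accumulates inside each geometric primitive. A minimal sketch of that primitive operation for a sphere (an illustrative building block, not 3DHZETRN's actual geometry engine):

```python
import math

def sphere_chord_length(origin, direction, center, radius):
    """Path length of a ray (with unit direction vector) through a sphere;
    returns 0.0 if the ray misses. Solves the quadratic |o + t*d - c|^2 = r^2
    and measures the forward portion of the intersection interval."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    b = 2.0 * (ox * direction[0] + oy * direction[1] + oz * direction[2])
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * c
    if disc <= 0.0:
        return 0.0                     # ray misses (or grazes) the sphere
    t1 = (-b - math.sqrt(disc)) / 2.0  # entry parameter
    t2 = (-b + math.sqrt(disc)) / 2.0  # exit parameter
    return max(t2, 0.0) - max(t1, 0.0)
```

Summing such chord lengths over the bodies a ray crosses, weighted by each material's properties, gives the areal density along that direction, which is the geometric input the transport solution needs.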
Runway Scheduling for Charlotte Douglas International Airport
NASA Technical Reports Server (NTRS)
Malik, Waqar A.; Lee, Hanbong; Jung, Yoon C.
2016-01-01
This paper describes the runway scheduler that was used in the 2014 SARDA human-in-the-loop simulations for CLT. The algorithm considers multiple runways and computes optimal runway times for departures and arrivals. In this paper, we plan to run additional simulations on the standalone MRS algorithm and compare its performance against an FCFS heuristic, in which aircraft are assigned runway slots in order of their positions in the FCFS sequence. Several traffic scenarios corresponding to current-day traffic levels and demand profiles will be generated. We also plan to examine the effect of increased traffic levels (1.2x and 1.5x) and observe trends in algorithm performance.
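The FCFS baseline against which such schedulers are compared is straightforward: aircraft take the runway in order of readiness, subject to a minimum separation. A sketch for a single runway, with an assumed fixed separation interval (real separation requirements vary by aircraft weight class and operation type):

```python
def fcfs_schedule(ready_times, separation_s=90):
    """FCFS heuristic: assign runway times in order of readiness, enforcing
    a fixed minimum separation (illustrative value) between consecutive
    operations on a single runway."""
    order = sorted(range(len(ready_times)), key=lambda i: ready_times[i])
    runway_times = [0.0] * len(ready_times)
    prev = float("-inf")
    for i in order:
        t = max(ready_times[i], prev + separation_s)
        runway_times[i] = t
        prev = t
    return runway_times

# Two closely spaced aircraft followed by a later one: the second absorbs
# a separation delay, the third is unaffected.
times = fcfs_schedule([0, 30, 200])
```

An optimizing scheduler improves on this baseline chiefly by reordering aircraft when a swap reduces total delay, which FCFS by construction never does.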
Nuclear Engine System Simulation (NESS) version 2.0
NASA Technical Reports Server (NTRS)
Pelaccio, Dennis G.; Scheil, Christine M.; Petrosky, Lyman J.
1993-01-01
The topics are presented in viewgraph form and include the following; nuclear thermal propulsion (NTP) engine system analysis program development; nuclear thermal propulsion engine analysis capability requirements; team resources used to support NESS development; expanded liquid engine simulations (ELES) computer model; ELES verification examples; NESS program development evolution; past NTP ELES analysis code modifications and verifications; general NTP engine system features modeled by NESS; representative NTP expander, gas generator, and bleed engine system cycles modeled by NESS; NESS program overview; NESS program flow logic; enabler (NERVA type) nuclear thermal rocket engine; prismatic fuel elements and supports; reactor fuel and support element parameters; reactor parameters as a function of thrust level; internal shield sizing; and reactor thermal model.
STS-105 Crew Training in VR Lab
2001-03-15
JSC2001-00754 (15 March 2001) --- Astronaut Patrick G. Forrester, STS-105 mission specialist, uses specialized gear in the virtual reality lab at the Johnson Space Center (JSC) to train for his duties aboard the Space Shuttle Discovery. This type of virtual reality training allows the astronauts to wear a helmet and special gloves while looking at computer displays simulating actual movements around the various locations on the International Space Station (ISS) hardware with which they will be working.
International Space Station (ISS)
2007-05-21
STS-118 astronaut and mission specialist Dafydd R. “Dave” Williams, representing the Canadian Space Agency, uses Virtual Reality Hardware in the Space Vehicle Mock Up Facility at the Johnson Space Center to rehearse some of his duties for the upcoming mission. This type of virtual reality training allows the astronauts to wear special gloves and other gear while looking at computer displays simulating actual movements around the various locations on the station hardware with which they will be working.
European Scientific Notes. Volume 35, Number 7,
1981-07-31
... simulated the entire processor down to gate level on a PDP-11/45 computer ... cores, semiconductor PROMs, etc., packaged on EUROCARDS, can be interfaced ... approaching retirement were used to generate internal heat when irradiated. It was found that ... DERMO will undoubtedly continue to be France's leading ... of importance. It plays an important role not only in mapping and defining the ... parameters; such a doublet will focus a bundle of rays incident parallel ...
Basic Research in Digital Stochastic Model Algorithmic Control.
1980-11-01
Contents include: IDCOM description; basic control computation; gradient algorithm; simulation model; model modifications; summary ... constraints, and (3) control trajectory computation. Internal Model of the System: the multivariable system to be controlled is represented by a ... more flexible and adaptive, since the model, criteria, and sampling rates can be adjusted on-line. This flexibility comes from the use of the impulse ...
Fire and the Related Effects of Nuclear Explosions. 1982 Asilomar Conference,
1982-11-01
SRI International, 333 Ravenswood Ave., Menlo Park, CA 94025. ... Objective and Scope: 1. Define the events and modules to be used in the Urban Fire Demonstration Model along with appropriate data elements, control and data ... (1) a specific control structure to sequence and organize the various computational elements of an interactive simulation, and (2) emphasis ...
Pearce, Marcus T
2018-05-11
Music perception depends on internal psychological models derived through exposure to a musical culture. It is hypothesized that this musical enculturation depends on two cognitive processes: (1) statistical learning, in which listeners acquire internal cognitive models of statistical regularities present in the music to which they are exposed; and (2) probabilistic prediction based on these learned models that enables listeners to organize and process their mental representations of music. To corroborate these hypotheses, I review research that uses a computational model of probabilistic prediction based on statistical learning (the information dynamics of music (IDyOM) model) to simulate data from empirical studies of human listeners. The results show that a broad range of psychological processes involved in music perception (expectation, emotion, memory, similarity, segmentation, and meter) can be understood in terms of a single, underlying process of probabilistic prediction using learned statistical models. Furthermore, IDyOM simulations of listeners from different musical cultures demonstrate that statistical learning can plausibly predict causal effects of differential cultural exposure to musical styles, providing a quantitative model of cultural distance. Understanding the neural basis of musical enculturation will benefit from close coordination between empirical neuroimaging and computational modeling of underlying mechanisms, as outlined here. © 2018 The Authors. Annals of the New York Academy of Sciences published by Wiley Periodicals, Inc. on behalf of New York Academy of Sciences.
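The two processes described above, statistical learning followed by probabilistic prediction, can be illustrated with a toy bigram model that learns note-transition counts from exposure and then quantifies surprise as information content. This is a sketch in the spirit of IDyOM, not the actual IDyOM implementation (which uses variable-order models, smoothing, and multiple viewpoints):

```python
import math
from collections import defaultdict

class BigramModel:
    """Toy statistical-learning model: count note-to-note transitions,
    then report surprise as information content IC = -log2 P(next | prev).
    No smoothing: an unseen transition would raise an error here."""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, sequence):
        """Statistical learning: accumulate bigram counts from exposure."""
        for prev, nxt in zip(sequence, sequence[1:]):
            self.counts[prev][nxt] += 1

    def information_content(self, prev, nxt):
        """Probabilistic prediction: surprise of hearing nxt after prev."""
        total = sum(self.counts[prev].values())
        p = self.counts[prev][nxt] / total
        return -math.log2(p)

# Hypothetical "musical culture": a short corpus of note names
model = BigramModel()
model.train(["C", "D", "E", "C", "D", "G"])
```

After this exposure, D always follows C, so hearing D after C carries zero information content, while E after D (heard half the time) carries one bit; training two such models on different corpora gives a crude quantitative notion of cultural distance.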
Kinematic analysis of anterior cruciate ligament reconstruction in total knee arthroplasty
Liu, Hua-Wei; Ni, Ming; Zhang, Guo-Qiang; Li, Xiang; Chen, Hui; Zhang, Qiang; Chai, Wei; Zhou, Yong-Gang; Chen, Ji-Ying; Liu, Yu-Liang; Cheng, Cheng-Kung; Wang, Yan
2016-01-01
Background: This study aims to retain normal knee kinematics after knee replacement surgeries by reconstructing the anterior cruciate ligament during total knee arthroplasty. Method: We use computational simulation tools to establish four dynamic knee models: a normal knee model, a posterior cruciate ligament-retaining knee model, a posterior cruciate ligament-substituting knee model, and an anterior cruciate ligament-reconstructing knee model. Our proposed method utilizes magnetic resonance images to reconstruct solid bones and attachments of ligaments, and assembles femoral and tibial components according to representative literature and operational specifications. Dynamic data of axial tibial rotation and femoral translation from full extension to 135° were measured for analyzing the motion of the knee models. Findings: The computational simulation results show that, compared with the posterior cruciate ligament-retained and posterior cruciate ligament-substituted knee models, reconstructing the anterior cruciate ligament improves the posterior movement of the lateral condyle and medial condyle and the tibial internal rotation through a full range of flexion. The maximum posterior translations of the lateral condyle and medial condyle and the maximum tibial internal rotation of the anterior cruciate ligament-reconstructed knee are 15.3 mm, 4.6 mm, and 20.6° at 135° of flexion. Interpretation: Reconstructing the anterior cruciate ligament in total knee arthroplasty has been shown to be a more efficient way of maintaining normal knee kinematics compared with posterior cruciate ligament-retained and posterior cruciate ligament-substituted total knee arthroplasty. PMID:27347334
A unified dislocation density-dependent physical-based constitutive model for cold metal forming
NASA Astrophysics Data System (ADS)
Schacht, K.; Motaman, A. H.; Prahl, U.; Bleck, W.
2017-10-01
Dislocation-density-dependent physical-based constitutive models of metal plasticity, while computationally efficient and history-dependent, can accurately account for varying process parameters such as strain, strain rate and temperature; different loading modes such as continuous deformation, creep and relaxation; microscopic metallurgical processes; and varying chemical composition within an alloy family. Since these models are founded on the essential phenomena dominating the deformation, they have a larger range of usability and validity. They are also suitable for manufacturing-chain simulations, since they can efficiently compute the cumulative effect of the various manufacturing processes by following the material state through the entire manufacturing chain, including interpass periods, and give a realistic prediction of the material behavior and final product properties. In the physical-based constitutive model of cold metal plasticity introduced in this study, physical processes influencing cold and warm plastic deformation in polycrystalline metals are described using physical/metallurgical internal variables such as dislocation density and effective grain size. The evolution of these internal variables is calculated using adequate equations that describe the physical processes dominating the material behavior during cold plastic deformation. For validation, the model is numerically implemented in a general implicit isotropic elasto-viscoplasticity algorithm as a user-defined material subroutine (UMAT) in ABAQUS/Standard and used for finite element simulation of upsetting tests and a complete cold forging cycle of a case-hardenable MnCr steel family.
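A common core of such models is a Kocks–Mecking-type evolution law for dislocation density, mapped to flow stress through the Taylor relation. The sketch below integrates that pair of equations with explicitly illustrative parameter values; it is an assumed generic form, not the specific constitutive model of the study:

```python
import math

def flow_stress_curve(strain_max, d_eps=1e-4, rho0=1e12,
                      k1=3e8, k2=10.0, alpha=0.3, M=3.06, G=80e9, b=2.5e-10):
    """Integrate a Kocks-Mecking-type dislocation density evolution,
        d(rho)/d(eps) = k1*sqrt(rho) - k2*rho,
    (storage minus dynamic recovery) by forward Euler, then map density to
    flow stress via the Taylor relation sigma = alpha*M*G*b*sqrt(rho).
    All parameter values are illustrative, not fitted to any alloy."""
    rho, eps = rho0, 0.0
    while eps < strain_max:
        rho += d_eps * (k1 * math.sqrt(rho) - k2 * rho)
        eps += d_eps
    return alpha * M * G * b * math.sqrt(rho)
```

Because the state variable is the dislocation density itself rather than the accumulated strain, the same update can be resumed across successive forming steps of a manufacturing chain, which is the history-dependence the abstract emphasizes.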
Training astronauts using three-dimensional visualisations of the International Space Station.
Rycroft, M; Houston, A; Barker, A; Dahlstron, E; Lewis, N; Maris, N; Nelles, D; Bagaoutdinov, R; Bodrikov, G; Borodin, Y; Cheburkov, M; Ivanov, D; Karpunin, P; Katargin, R; Kiselyev, A; Kotlayarevsky, Y; Schetinnikov, A; Tylerov, F
1999-03-01
Recent advances in personal computer technology have led to the development of relatively low-cost software to generate high-resolution three-dimensional images. The capability both to rotate and zoom in on these images superposed on appropriate background images enables high-quality movies to be created. These developments have been used to produce realistic simulations of the International Space Station on CD-ROM. This product is described and its potentialities demonstrated. With successive launches, the ISS is gradually built up, and visualised over a rotating Earth against the star background. It is anticipated that this product's capability will be useful when training astronauts to carry out EVAs around the ISS. Simulations inside the ISS are also very realistic. These should prove invaluable when familiarising the ISS crew with their future workplace and home. Operating procedures can be taught and perfected. "What if" scenario models can be explored and this facility should be useful when training the crew to deal with emergency situations which might arise. This CD-ROM product will also be used to make the general public more aware of, and hence enthusiastic about, the International Space Station programme.
NASA Technical Reports Server (NTRS)
Allgood, Daniel C.; Graham, Jason S.; McVay, Greg P.; Langford, Lester L.
2008-01-01
A unique assessment of acoustic similarity scaling laws and acoustic analogy methodologies in predicting the far-field acoustic signature from a sub-scale altitude rocket test facility at the NASA Stennis Space Center was performed. A directional, point-source similarity analysis was implemented for predicting the acoustic far-field. In this approach, experimental acoustic data obtained from "similar" rocket engine tests were appropriately scaled using key geometric and dynamic parameters. The accuracy of this engineering-level method is discussed by comparing the predictions with acoustic far-field measurements obtained. In addition, a CFD solver was coupled with a Lilley's acoustic analogy formulation to determine the improvement of using a physics-based methodology over an experimental correlation approach. In the current work, steady-state Reynolds-averaged Navier-Stokes calculations were used to model the internal flow of the rocket engine and altitude diffuser. These internal flow simulations provided the necessary realistic input conditions for external plume simulations. The CFD plume simulations were then used to provide the spatial turbulent noise source distributions in the acoustic analogy calculations. Preliminary findings of these studies will be discussed.
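The directional point-source similarity idea amounts to rescaling a measured far-field level by the geometric and dynamic ratios between the reference test and the new configuration. The sketch below uses spherical spreading plus a diameter-ratio term as an assumed form of such a correction; the actual correlation used in the study may differ:

```python
import math

def scale_spl(spl_ref_db, r_ref, r, d_ref, d):
    """Similarity-scaled sound pressure level: start from a reference
    measurement, apply spherical spreading with distance and an assumed
    20*log10 diameter-ratio correction for source scale."""
    return (spl_ref_db
            - 20.0 * math.log10(r / r_ref)     # spherical spreading
            + 20.0 * math.log10(d / d_ref))    # source-scale correction

# Hypothetical numbers: 120 dB measured at 1 unit distance, predicted at 2
spl = scale_spl(120.0, 1.0, 2.0, 1.0, 1.0)
```

Doubling the observer distance at fixed scale drops the level by about 6 dB, which is the spherical-spreading behavior any such engineering correlation must reproduce.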
NASA Technical Reports Server (NTRS)
Santana, Erico Soriano Martins; Mueller, Carlos
2003-01-01
The occurrence of flight delays in Brazil, mostly verified on the ground (airfield), is responsible for serious disruptions at the airport level, but also for a chain of problems throughout the airport system, affecting the airspace as well. The present study develops a simulation-based analysis of delay and travel times on the airfield of Sao Paulo International Airport/Guarulhos (AISP/GRU). Different airport physical and operational scenarios were analyzed by means of simulation. SIMMOD Plus 4.0, a computational tool developed to represent aircraft operation in the airspace and on the airside of airports, was used to perform these analyses. The study focused mainly on aircraft operations on the ground: on the airport runway, taxi-lanes and aprons. The visualization of the operations with increasing demand facilitated the analyses. The results generated in this work certify the viability of the methodology; they also indicate solutions capable of resolving the delay problem through travel-time analysis, thus diminishing costs for users, mainly the airport authority. The study also indicates alternatives for airport operations, assisting the decision-making process and the appropriate timing of the proposed changes in the existing infrastructure.
Progressive Fracture of Composite Structures
NASA Technical Reports Server (NTRS)
Chamis, Christos C.; Minnetyan, Levon
2008-01-01
A new approach is described for evaluating fracture in composite structures. This approach is independent of classical fracture mechanics parameters such as fracture toughness. It relies on computational simulation and is programmed in a stand-alone integrated computer code. It is multiscale and multifunctional because it includes composite mechanics for the composite behavior and finite element analysis for predicting the structural response. It contains seven modules: layered composite mechanics (micro, macro, laminate), finite element analysis, an updating scheme, local fracture, global fracture, stress-based failure modes, and fracture progression. The computer code is called CODSTRAN (Composite Durability Structural ANalysis). It is used in the present paper to evaluate the global fracture of four composite shell problems and one composite built-up structure. Results show that global fracture of both the composite shells and the built-up composite structure is enhanced when internal pressure is combined with shear loads.
Training Surgical Residents With a Haptic Robotic Central Venous Catheterization Simulator.
Pepley, David F; Gordon, Adam B; Yovanoff, Mary A; Mirkin, Katelin A; Miller, Scarlett R; Han, David C; Moore, Jason Z
Ultrasound-guided central venous catheterization (CVC) is a common surgical procedure with complication rates ranging from 5 to 21 percent. Training is typically performed using manikins that do not simulate anatomical variations such as obesity and abnormal vessel positioning. The goal of this study was to develop and validate the effectiveness of a new virtual reality and force-haptic based simulation platform for CVC of the right internal jugular vein. A CVC simulation platform was developed using a haptic robotic arm, a 3D position tracker, and computer visualization. The haptic robotic arm simulated needle insertion forces based on cadaver experiments. The 3D position tracker was used as a mock ultrasound device with realistic visualization on a computer screen. Upon completion of a practice simulation, performance feedback is given to the user through a graphical user interface, including scoring factors based on good CVC practice. The effectiveness of the system was evaluated by training 13 first-year surgical residents using the virtual reality haptic-based training system over a 3-month period. The participants' performance increased from 52% to 96% on the baseline training scenario, approaching the average score of an expert surgeon: 98%. This also resulted in improvement in positive CVC practices, including a 61% decrease in the distance between the final needle tip position and the vein center, a decrease in mean insertion attempts from 1.92 to 1.23, and a 12% increase in time spent aspirating the syringe throughout the procedure. A virtual reality haptic robotic simulator for CVC was successfully developed. Surgical residents training on the simulator improved to near-expert levels after three robotic training sessions. This suggests that the system could act as an effective training device for CVC. Copyright © 2017 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gustavsen, Arlid; Kohler, Christian; Dalehaug, Arvid
2008-12-01
This paper assesses the accuracy of the simplified frame cavity conduction/convection and radiation models presented in ISO 15099 and used in software for rating and labeling window products. Temperatures and U-factors for typical horizontal window frames with internal cavities are compared; results from Computational Fluid Dynamics (CFD) simulations with detailed radiation modeling are used as a reference. Four different frames were studied. Two were made of polyvinyl chloride (PVC) and two of aluminum. For each frame, six different simulations were performed, two with a CFD code and four with a building-component thermal-simulation tool using the Finite Element Method (FEM). This FEM tool addresses convection using correlations from ISO 15099; it addresses radiation with either correlations from ISO 15099 or with a detailed, view-factor-based radiation model. Calculations were performed using the CFD code with and without fluid flow in the window frame cavities; the calculations without fluid flow were performed to verify that the CFD code and the building-component thermal-simulation tool produced consistent results. With the FEM code, the practice of subdividing small frame cavities was examined: in some cases cavities were not subdivided, in some cases cavities with interconnections smaller than five millimeters (mm) were subdivided (per ISO 15099), and in some cases cavities with interconnections smaller than seven mm were subdivided (a breakpoint that has been suggested in other studies). For the various frames, the calculated U-factors were found to be quite comparable (the maximum difference between the reference CFD simulation and the other simulations was 13.2 percent). A maximum difference of 8.5 percent was found between the CFD simulation and the FEM simulation using ISO 15099 procedures. The ISO 15099 correlation works best for frames with high U-factors. For more efficient frames, the relative differences among the various simulations are larger.
Temperatures were also compared at selected locations on the frames; small differences were found in the results from model to model. Finally, the effectiveness of the ISO cavity radiation algorithms was examined by comparing results from these algorithms to detailed radiation calculations (from both programs). Our results suggest that improvements in cavity heat transfer calculations can be obtained by using detailed radiation modeling (i.e., view-factor or ray-tracing models), and that incorporation of these strategies may be more important for improving the accuracy of results than the use of CFD modeling for horizontal cavities.
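The cavity-subdivision rule compared in this study (the 5 mm ISO 15099 breakpoint versus the suggested 7 mm one) amounts to a simple geometric test; the sketch below assumes a cavity described only by the widths of its interconnections:

```python
# Sketch of the frame-cavity subdivision rule discussed above: a cavity is
# split into separate cavities wherever its interconnection (throat) width
# falls below a breakpoint -- 5 mm per ISO 15099, 7 mm per other studies.
# The geometry model (a cavity as a list of throat widths) is a simplification.

def subdivide_cavity(throat_widths_mm, breakpoint_mm=5.0):
    """Return the number of sub-cavities after applying the breakpoint rule.

    throat_widths_mm: widths of the interconnections along one cavity.
    A cavity with k closed-off throats becomes k + 1 sub-cavities.
    """
    closed = sum(1 for w in throat_widths_mm if w < breakpoint_mm)
    return closed + 1
```

Moving the breakpoint from 5 mm to 7 mm closes more throats and therefore yields more, smaller cavities, which is what changes the convection correlation inputs between the compared cases.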
Kendon, Vivien M; Nemoto, Kae; Munro, William J
2010-08-13
We briefly review what a quantum computer is, what it promises to do for us and why it is so hard to build one. Among the first applications anticipated to bear fruit is the quantum simulation of quantum systems. While most quantum computation is an extension of classical digital computation, quantum simulation differs fundamentally in how the data are encoded in the quantum computer. To perform a quantum simulation, the Hilbert space of the system to be simulated is mapped directly onto the Hilbert space of the (logical) qubits in the quantum computer. This type of direct correspondence is how data are encoded in a classical analogue computer. There is no binary encoding, and increasing precision becomes exponentially costly: an extra bit of precision doubles the size of the computer. This has important consequences for both the precision and error-correction requirements of quantum simulation, and significant open questions remain about its practicality. It also means that the quantum version of analogue computers, continuous-variable quantum computers, becomes an equally efficient architecture for quantum simulation. Lessons from past use of classical analogue computers can help us to build better quantum simulators in future.
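The encoding argument above can be made concrete with a short worked example; the functions are illustrative arithmetic, not a quantum-computing API:

```python
import math

# Worked example of the encoding argument (illustrative arithmetic, not an
# API): direct quantum simulation maps a simulated Hilbert space of dimension
# d onto n = ceil(log2(d)) qubits, whereas each extra bit of *amplitude
# precision* in an analogue-style encoding doubles the number of
# distinguishable levels, i.e. doubles the machine size.

def qubits_needed(hilbert_dim):
    """Logical qubits required to hold a Hilbert space of this dimension."""
    return math.ceil(math.log2(hilbert_dim))

def analogue_levels(bits_of_precision):
    """Distinguishable amplitude levels: one extra bit doubles the count."""
    return 2 ** bits_of_precision
```

Ten qubits suffice for a 1000-dimensional simulated system, but going from 10 to 11 bits of analogue amplitude precision doubles the resource count, which is the exponential precision cost the abstract refers to.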
Student Ability, Confidence, and Attitudes Toward Incorporating a Computer into a Patient Interview.
Ray, Sarah; Valdovinos, Katie
2015-05-25
To improve pharmacy students' ability to effectively incorporate a computer into a simulated patient encounter, and to improve their awareness of barriers to, attitudes toward, and confidence in using a computer during simulated patient encounters. Students completed a survey that assessed their awareness of, confidence in, and attitudes towards computer use during simulated patient encounters. Students were evaluated with a rubric on their ability to incorporate a computer into a simulated patient encounter. Students were resurveyed and reevaluated after instruction. Students improved in their ability to effectively incorporate computer usage into a simulated patient encounter. They also became more aware of barriers regarding such usage, improved their attitudes toward them, and gained more confidence in their ability to use a computer during simulated patient encounters. Instruction can improve pharmacy students' ability to incorporate a computer into simulated patient encounters. This skill is critical to developing efficiency while maintaining rapport with patients.
Experimental, Theoretical, and Computational Investigation of Separated Nozzle Flows
NASA Technical Reports Server (NTRS)
Hunter, Craig A.
2004-01-01
A detailed experimental, theoretical, and computational study of separated nozzle flows has been conducted. Experimental testing was performed at the NASA Langley 16-Foot Transonic Tunnel Complex. As part of a comprehensive static performance investigation, force, moment, and pressure measurements were made and schlieren flow visualization was obtained for a sub-scale, non-axisymmetric, two-dimensional, convergent-divergent nozzle. In addition, two-dimensional numerical simulations were run using the computational fluid dynamics code PAB3D with two-equation turbulence closure and algebraic Reynolds stress modeling. For reference, experimental and computational results were compared with theoretical predictions based on one-dimensional gas dynamics and an approximate integral momentum boundary layer method. Experimental results from this study indicate that off-design overexpanded nozzle flow was dominated by shock-induced boundary layer separation, which was divided into two distinct flow regimes: three-dimensional separation with partial reattachment, and fully detached two-dimensional separation. The test nozzle was observed to go through a marked transition in passing from one regime to the other. In all cases, separation provided a significant increase in static thrust efficiency compared to the ideal prediction. Results indicate that with controlled separation, the entire overexpanded range of nozzle performance would be within 10% of the peak thrust efficiency. By offering savings in weight and complexity over a conventional mechanical exhaust system, this may allow a fixed geometry nozzle to cover an entire flight envelope. The computational simulation was in excellent agreement with experimental data over most of the test range, and did a good job of modeling internal flow and thrust performance.
An exception occurred at low nozzle pressure ratios, where the two-dimensional computational model was inconsistent with the three-dimensional separation observed in the experiment. In general, the computation captured the physics of the shock boundary layer interaction and shock induced boundary layer separation in the nozzle, though there were some differences in shock structure compared to experiment. Though minor, these differences could be important for studies involving flow control or thrust vectoring of separated nozzles. Combined with other observations, this indicates that more detailed, three-dimensional computational modeling needs to be conducted to more realistically simulate shock-separated nozzle flows.
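The one-dimensional gas-dynamics reference calculation mentioned above can be sketched as follows; the bisection solver, the γ = 1.4 working gas, and the ~0.4 separation threshold are assumptions for illustration:

```python
import math

# One-dimensional ideal-gas reference calculation of the kind used above:
# given the area ratio A/A* of a convergent-divergent nozzle, find the
# supersonic exit Mach number from the area-Mach relation, then the
# pressure ratio p/p0. A crude empirical criterion (wall pressure below
# ~0.4 of ambient -- an assumed threshold) flags shock-induced separation
# in overexpanded operation.

GAMMA = 1.4  # ratio of specific heats, assumed (air)

def area_ratio(M, g=GAMMA):
    """A/A* as a function of Mach number (isentropic area-Mach relation)."""
    t = (2.0 / (g + 1.0)) * (1.0 + 0.5 * (g - 1.0) * M * M)
    return (1.0 / M) * t ** ((g + 1.0) / (2.0 * (g - 1.0)))

def exit_mach(a_ratio, g=GAMMA):
    """Supersonic root of the area-Mach relation, found by bisection."""
    lo, hi = 1.0 + 1e-9, 50.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if area_ratio(mid, g) < a_ratio:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def pressure_ratio(M, g=GAMMA):
    """Static-to-total pressure ratio p/p0 at Mach M."""
    return (1.0 + 0.5 * (g - 1.0) * M * M) ** (-g / (g - 1.0))

def separated(p_exit, p_amb, threshold=0.4):
    """Crude overexpansion separation flag (assumed empirical threshold)."""
    return p_exit < threshold * p_amb
```

For an area ratio of 1.6875 the supersonic solution is M = 2 exactly (for γ = 1.4), a standard check value for this relation.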
NASA Astrophysics Data System (ADS)
Rapaka, Narsimha R.; Sarkar, Sutanu
2016-10-01
A sharp-interface Immersed Boundary Method (IBM) is developed to simulate density-stratified turbulent flows in complex geometry using a Cartesian grid. The basic numerical scheme corresponds to a central second-order finite difference method, third-order Runge-Kutta integration in time for the advective terms, and an alternating direction implicit (ADI) scheme for the viscous and diffusive terms. The solver developed here allows for both direct numerical simulation (DNS) and large eddy simulation (LES) approaches. Methods to enhance the mass conservation and numerical stability of the solver for simulating high Reynolds number flows are discussed. Convergence with second-order accuracy is demonstrated in flow past a cylinder. The solver is validated against past laboratory and numerical results in flow past a sphere and in channel flow with and without stratification. Since topographically generated internal waves are believed to produce a substantial fraction of turbulent mixing in the ocean, we are motivated to examine oscillating tidal flow over a triangular obstacle to assess the ability of this computational model to represent nonlinear internal waves and turbulence. Results of laboratory-scale (of the order of a few meters) simulations show that the wave energy flux, mean flow properties, and turbulent kinetic energy agree well with our previous results obtained using a body-fitted grid (BFG). The deviation of IBM results from BFG results is found to increase with increasing nonlinearity in the wave field, associated with either increasing steepness of the topography relative to the internal wave propagation angle or with the amplitude of the oscillatory forcing. LES is performed on a large-scale ridge, of the order of a few kilometers in length, that has the same geometrical shape and the same non-dimensional values of the governing flow and environmental parameters as the laboratory-scale topography, but a significantly larger Reynolds number.
A non-linear drag law is utilized in the large-scale application to parameterize turbulent losses due to bottom friction at high Reynolds number. The large scale problem exhibits qualitatively similar behavior to the laboratory scale problem with some differences: slightly larger intensification of the boundary flow and somewhat higher non-dimensional values for the energy fluxed away by the internal wave field. The phasing of wave breaking and turbulence exhibits little difference between small-scale and large-scale obstacles as long as the important non-dimensional parameters are kept the same. We conclude that IBM is a viable approach to the simulation of internal waves and turbulence in high Reynolds number stratified flows over topography.
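The time-stepping combination described above (second-order central differences, three-stage Runge-Kutta for advection, implicit treatment of diffusion) can be sketched on a 1-D periodic model problem; the dense linear solve below stands in for the ADI sweeps and is not how a production solver would work at scale:

```python
import numpy as np

# 1-D periodic advection-diffusion sketch of the time stepping described
# above: second-order central differences in space, three-stage (SSP)
# Runge-Kutta for the advective term, and an implicit backward-Euler solve
# standing in for the ADI treatment of diffusion. The dense solve is for
# illustration only; a real solver would sweep tridiagonal systems.

def step(u, c, nu, dx, dt):
    """Advance u one time step; c = advection speed, nu = diffusivity."""
    n = u.size

    def adv(v):  # -c du/dx with second-order central differences
        return -c * (np.roll(v, -1) - np.roll(v, 1)) / (2.0 * dx)

    # Three-stage SSP Runge-Kutta for the advective term
    u1 = u + dt * adv(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * adv(u1))
    ua = u / 3.0 + (2.0 / 3.0) * (u2 + dt * adv(u2))

    # Implicit diffusion: solve (I - dt*nu*L) u_new = ua, L = periodic Laplacian
    eye = np.eye(n)
    L = (np.roll(eye, -1, axis=1) - 2.0 * eye + np.roll(eye, 1, axis=1)) / dx**2
    return np.linalg.solve(eye - dt * nu * L, ua)
```

On a periodic grid this combination conserves the mean of u to round-off while the diffusive term damps the amplitude, the two basic sanity checks for such a scheme.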
Gemini Rendezvous Docking Simulator
1964-05-11
Gemini Rendezvous Docking Simulator suspended from the roof of the Langley Research Center's aircraft hangar. Francis B. Smith wrote: The rendezvous and docking operation of the Gemini spacecraft with the Agena and of the Apollo Command Module with the Lunar Excursion Module have been the subject of simulator studies for several years. This figure illustrates the Gemini-Agena rendezvous docking simulator at Langley. The Gemini spacecraft was supported in a gimbal system by an overhead crane and gantry arrangement which provided 6 degrees of freedom - roll, pitch, yaw, and translation in any direction - all controllable by the astronaut in the spacecraft. Here again the controls fed into a computer which in turn provided an input to the servos driving the spacecraft so that it responded to control motions in a manner which accurately simulated the Gemini spacecraft. -- Published in Barton C. Hacker and James M. Grimwood, On the Shoulders of Titans: A History of Project Gemini, NASA SP-4203. Francis B. Smith, Simulators for Manned Space Research, paper presented at the 1966 IEEE International Convention, March 21-25, 1966.
NASA Astrophysics Data System (ADS)
Ho, Teck Seng; Charles, Christine; Boswell, Roderick W.
2016-12-01
This paper presents computational fluid dynamics simulations of the cold gas operation of the Pocket Rocket and Mini Pocket Rocket radiofrequency electrothermal microthrusters, replicating experiments performed in both sub-Torr and vacuum environments. This work takes advantage of flow velocity choking to circumvent the invalidity of modelling vacuum regions within a CFD simulation, while still preserving the accuracy of the desired results in the internal regions of the microthrusters. Simulated results for the plenum stagnation pressure are in precise agreement with experimental measurements when slip boundary conditions with the correct tangential momentum accommodation coefficients for each gas are used. Thrust and specific impulse are calculated by integrating the flow profiles at the exit of the microthrusters, and are in good agreement with experimental pendulum thrust balance measurements and theoretical expectations. For low thrust conditions where experimental instruments are not sufficiently sensitive, these cold gas simulations provide additional data points against which experimental results can be verified and extrapolated. The cold gas simulations presented in this paper will be used as a benchmark for comparison with future plasma simulations of the Pocket Rocket microthruster.
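The exit-plane integration used above for thrust and specific impulse can be sketched as follows; the axisymmetric profiles passed in are made-up examples, not Pocket Rocket data:

```python
import numpy as np

# Exit-plane integration sketch for thrust and specific impulse (profiles
# are made-up axisymmetric examples, not Pocket Rocket data): thrust is the
# momentum flux plus the pressure-imbalance term integrated over the exit,
# and Isp follows from the integrated mass flow.

G0 = 9.80665  # standard gravity, m/s^2

def _trapz(y, x):
    """Plain trapezoidal rule (avoids NumPy version differences)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * (x[1:] - x[:-1])))

def thrust_and_isp(r, rho, u, p, p_amb):
    """r, rho, u, p: radial profiles at the exit plane, SI units."""
    dA = 2.0 * np.pi * r                            # axisymmetric area weight
    mdot = _trapz(rho * u * dA, r)                  # mass flow, kg/s
    F = _trapz((rho * u**2 + (p - p_amb)) * dA, r)  # thrust, N
    return F, F / (mdot * G0)                       # (thrust, Isp in seconds)
```

For a uniform, pressure-matched profile the integration reduces to F = ρu²πR² and Isp = u/g0, which makes the routine easy to sanity-check.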
Development of simulation computer complex specification
NASA Technical Reports Server (NTRS)
1973-01-01
The Training Simulation Computer Complex Study was one of three studies contracted in support of preparations for procurement of a shuttle mission simulator for shuttle crew training. The subject study was concerned with definition of the software loads to be imposed on the computer complex to be associated with the shuttle mission simulator, and with the development of procurement specifications based on the resulting computer requirements. These procurement specifications cover the computer hardware and system software as well as the data conversion equipment required to interface the computer to the simulator hardware. The development of the necessary hardware and software specifications required the execution of a number of related tasks, which included: (1) simulation software sizing, (2) computer requirements definition, (3) data conversion equipment requirements definition, (4) system software requirements definition, (5) a simulation management plan, (6) a background survey, and (7) preparation of the specifications.
Lacerda, Luis M; Sperl, Jonathan I; Menzel, Marion I; Sprenger, Tim; Barker, Gareth J; Dell'Acqua, Flavio
2016-12-01
Diffusion spectrum imaging (DSI) is an imaging technique that has been successfully applied to resolve white matter crossings in the human brain. However, its accuracy in complex microstructure environments has not been well characterized. Here we have simulated different tissue configurations, sampling schemes, and processing steps to evaluate DSI performance under realistic biophysical conditions. A novel approach to compute the orientation distribution function (ODF) has also been developed to include biophysical constraints, namely integration ranges compatible with axial fiber diffusivities. The simulations identified several DSI configurations that consistently show aliasing artifacts caused by fast diffusion components, for both isotropic diffusion and fiber configurations. The proposed method for ODF computation showed some improvement in reducing such artifacts and improving the ability to resolve crossings, while keeping the quantitative nature of the ODF. In this study, we identified an important limitation of current DSI implementations, specifically the presence of aliasing due to fast diffusion components like those from pathological tissues, which are not well characterized and can lead to artifactual fiber reconstructions. To minimize this issue, a new way of computing the ODF was introduced, which removes most of these artifacts and offers improved angular resolution. Magn Reson Med 76:1837-1847, 2016. © 2015 The Authors. Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.
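The constrained-ODF idea above, restricting the radial integration range of the propagator, can be illustrated with a 1-D Gaussian displacement profile; the propagator model and all parameters are purely illustrative:

```python
import numpy as np

# Illustration of the constrained-ODF idea above: the ODF along a direction
# is the radial integral of the propagator weighted by r^2, and truncating
# the upper limit to displacements compatible with plausible axial
# diffusivities suppresses fast-diffusion contributions. The Gaussian
# propagator and all parameters are purely illustrative.

def _trapz(y, x):
    """Plain trapezoidal rule (avoids NumPy version differences)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * (x[1:] - x[:-1])))

def odf_value(diffusivity, big_delta, r_max, n=512):
    """Truncated radial ODF integral for a 1-D Gaussian displacement
    profile with the given diffusivity (m^2/s) and diffusion time (s)."""
    r = np.linspace(0.0, r_max, n)
    sigma2 = 2.0 * diffusivity * big_delta
    P = np.exp(-r**2 / (2.0 * sigma2)) / np.sqrt(2.0 * np.pi * sigma2)
    return _trapz(P * r**2, r)
```

Truncating the upper limit barely changes the integral for a slow component but sharply reduces it for a fast one, which is the intended suppression of fast-diffusion contributions.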
Computational simulation of concurrent engineering for aerospace propulsion systems
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Singhal, S. N.
1992-01-01
Results are summarized of an investigation to assess the infrastructure available and the technology readiness in order to develop computational simulation methods/software for concurrent engineering. These results demonstrate that development of computational simulations methods for concurrent engineering is timely. Extensive infrastructure, in terms of multi-discipline simulation, component-specific simulation, system simulators, fabrication process simulation, and simulation of uncertainties - fundamental in developing such methods, is available. An approach is recommended which can be used to develop computational simulation methods for concurrent engineering for propulsion systems and systems in general. Benefits and facets needing early attention in the development are outlined.
Computational simulation for concurrent engineering of aerospace propulsion systems
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Singhal, S. N.
1993-01-01
Results are summarized for an investigation to assess the infrastructure available and the technology readiness in order to develop computational simulation methods/software for concurrent engineering. These results demonstrate that development of computational simulation methods for concurrent engineering is timely. Extensive infrastructure, in terms of multi-discipline simulation, component-specific simulation, system simulators, fabrication process simulation, and simulation of uncertainties--fundamental to develop such methods, is available. An approach is recommended which can be used to develop computational simulation methods for concurrent engineering of propulsion systems and systems in general. Benefits and issues needing early attention in the development are outlined.
NASA Technical Reports Server (NTRS)
Druyan, Leonard M.
2012-01-01
Climate modeling is a very broad topic, so a single volume can only offer a small sampling of relevant research activities. This volume of 14 chapters includes descriptions of a variety of modeling studies for a variety of geographic regions by an international roster of authors. The climate research community generally uses the rubric "climate models" to refer to organized sets of computer instructions that produce simulations of climate evolution. The code is based on physical relationships that describe the shared variability of meteorological parameters such as temperature, humidity, precipitation rate, circulation, radiation fluxes, etc. Three-dimensional climate models are integrated over time in order to compute the temporal and spatial variations of these parameters. Model domains can be global or regional, and the horizontal and vertical resolutions of the computational grid vary from model to model. Considering the entire climate system requires accounting for interactions between solar insolation and atmospheric, oceanic, and continental processes, the latter including land hydrology and vegetation. Model simulations may concentrate on one or more of these components, but the most sophisticated models will estimate the mutual interactions of all of these environments. Advances in computer technology have prompted investments in more complex model configurations that consider more phenomena interactions than were possible with yesterday's computers. However, not every attempt to add to the computational layers is rewarded by better model performance. Extensive research is required to test and document any advantages gained by greater sophistication in model formulation. One purpose for publishing climate model research results is to present purported advances for evaluation by the scientific community.
3-D Analysis of Flanged Joints Through Various Preload Methods Using ANSYS
NASA Astrophysics Data System (ADS)
Murugan, Jeyaraj Paul; Kurian, Thomas; Jayaprakash, Janardhan; Sreedharapanickar, Somanath
2015-10-01
Flanged joints are employed in aerospace solid rocket motor hardware for the integration of various systems or subsystems. Hence, the design of flanged joints is very important in ensuring the integrity of the motor while functioning. As these joints are subjected to high loads due to internal pressure acting inside the motor chamber, an appropriate preload is required to be applied to the joint before subjecting it to external load. Preload, also known as clamp load, is applied on the fastener and helps to hold the mating flanges together. Generally, preload is simulated as a thermal load, and the exact preload is obtained through a number of iterations. In fact, more iterations are required when the material nonlinearity of the bolt is considered. This way of simulation takes more computational time to generate the required preload. Nowadays, most commercial software packages use pretension elements for simulating the preload. This element does not require iterations to induce the preload; the problem can be solved with a single iteration. This approach takes less computational time, and thus one can easily study the characteristics of the joint by varying the preload. When the structure contains a larger number of joints with different sizes of fasteners, pretension elements are preferable to the thermal-load approach for simulating each fastener size. This paper covers the details of analyses carried out to simulate the preload through various options, viz. thermal load, the initial state command, and the pretension element, using the ANSYS finite element package.
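The contrast drawn above between the iterative thermal-load method and the direct pretension element can be reduced to a single-bolt spring model; all stiffnesses and the target preload below are invented illustrative numbers, not the motor joint:

```python
# Single-bolt spring-model sketch of the two preload approaches contrasted
# above (all numbers invented for illustration). With the thermal-load
# method, the cooldown dT must be iterated until the bolt force matches the
# target preload; a pretension element imposes the force directly.

E = 200e9        # bolt modulus, Pa (assumed)
A = 1.2e-4       # bolt stress area, m^2 (assumed)
ALPHA = 1.2e-5   # thermal expansion coefficient, 1/K (assumed)
K_JOINT = 4.0e8  # clamped-flange stiffness, N/m (assumed)
L = 0.05         # bolt grip length, m (assumed)

def bolt_force(dT):
    """Bolt force from cooling the bolt by dT against the flange stiffness."""
    k_bolt = E * A / L
    free_contraction = ALPHA * dT * L
    # Bolt and flange act as springs in series sharing the thermal misfit
    return free_contraction * k_bolt * K_JOINT / (k_bolt + K_JOINT)

def dT_for_preload(target, tol=1.0):
    """Iterate the equivalent temperature drop (the thermal-load method)."""
    dT = 1.0
    for _ in range(50):
        f = bolt_force(dT)
        if abs(f - target) < tol:
            break
        dT *= target / f  # for this linear model, converges in one update
    return dT
```

For a linear-elastic bolt the iteration converges immediately; the abstract's point is that bolt material nonlinearity breaks this proportionality and forces repeated solves, which the pretension element avoids.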
Acoustic wave simulation using an overset grid for the global monitoring system
NASA Astrophysics Data System (ADS)
Kushida, N.; Le Bras, R.
2017-12-01
The International Monitoring System of the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) has been monitoring hydro-acoustic and infrasound waves over the globe. Because of the complex natures of the oceans and the atmosphere, computer simulation can play an important role in understanding the observed signals. In this regard, methods that depend on partial differential equations and require minimal modelling are preferable. So far, to the best of our knowledge, acoustic wave propagation simulations based on partial differential equations on such a large scale have not been performed (pp. 147-161 of ref. [1], [2]). The main difficulties in building such simulation codes are: (1) accounting for the inhomogeneity of the medium, including background flows, (2) the high aspect ratio of the computational domain, and (3) stability during long time integration. To overcome these difficulties, we employ a two-dimensional finite difference (FDM) scheme in spherical coordinates with the Yin-Yang overset grid [3], solving the governing equations of acoustic waves introduced by Ostashev et al. [4]. A comparison with real hydro-acoustic recordings will be presented at the conference. [1] Paul C. Etter: Underwater Acoustic Modeling and Simulation, Fourth Edition, CRC Press, 2013. [2] Lian Wang et al.: Review of Underwater Acoustic Propagation Models, NPL Report AC 12, 2014. [3] A. Kageyama and T. Sato: "Yin-Yang grid": An overset grid in spherical geometry, Geochem. Geophys. Geosyst., 5, Q09005, 2004. [4] Vladimir E. Ostashev et al.: Equations for finite-difference, time-domain simulation of sound propagation in moving inhomogeneous media and numerical implementation, Acoustical Society of America, DOI: 10.1121/1.1841531, 2005.
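The Yin-Yang overset mapping referenced above ([3]) reduces to a Cartesian axis permutation; the sketch below assumes the (x, y, z) → (−x, z, y) form given by Kageyama and Sato, under which the map is its own inverse:

```python
import math

# Sketch of the Yin-Yang overset-grid mapping (after Kageyama & Sato 2004):
# the Yang grid is the Yin grid with Cartesian axes permuted as
# (x, y, z) -> (-x, z, y). Applying the map twice returns the original
# point, so the same routine converts in both directions (used when
# exchanging boundary data between the two overlapping grids).

def yin_to_yang(colat, lon):
    """Spherical (colatitude, longitude), radians, in Yin -> Yang coords."""
    x = math.sin(colat) * math.cos(lon)
    y = math.sin(colat) * math.sin(lon)
    z = math.cos(colat)
    xp, yp, zp = -x, z, y  # the axis permutation
    return math.acos(zp), math.atan2(yp, xp)
```

Because the permutation is an involution, a point mapped to Yang coordinates and back lands exactly where it started, which is a convenient correctness check for the grid-to-grid interpolation stencils.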
Activities of NICT space weather project
NASA Astrophysics Data System (ADS)
Murata, Ken T.; Nagatsuma, Tsutomu; Watari, Shinichi; Shinagawa, Hiroyuki; Ishii, Mamoru
NICT (National Institute of Information and Communications Technology) has been in charge of space weather forecast service in Japan for more than 20 years. The main target region of the space weather is the geo-space in the vicinity of the Earth where human activities are dominant. In the geo-space, serious damages of satellites, international space stations and astronauts take place caused by energetic particles or electromagnetic disturbances: the origin of the causes is dynamically changing of solar activities. Positioning systems via GPS satellites are also im-portant recently. Since the most significant effect of positioning error comes from disturbances of the ionosphere, it is crucial to estimate time-dependent modulation of the electron density profiles in the ionosphere. NICT is one of the 13 members of the ISES (International Space Environment Service), which is an international assembly of space weather forecast centers under the UNESCO. With help of geo-space environment data exchanging among the member nations, NICT operates daily space weather forecast service every day to provide informa-tion on forecasts of solar flare, geomagnetic disturbances, solar proton event, and radio-wave propagation conditions in the ionosphere. The space weather forecast at NICT is conducted based on the three methodologies: observations, simulations and informatics (OSI model). For real-time or quasi real-time reporting of space weather, we conduct our original observations: Hiraiso solar observatory to monitor the solar activity (solar flare, coronal mass ejection, and so on), domestic ionosonde network, magnetometer HF radar observations in far-east Siberia, and south-east Asia low-latitude ionosonde network (SEALION). Real-time observation data to monitor solar and solar-wind activities are obtained through antennae at NICT from ACE and STEREO satellites. 
We operate a middle-class supercomputer (NEC SX-8R) to maintain real-time computer simulations of the Sun and solar wind, the magnetosphere and the ionosphere. The three simulations are directly or indirectly coupled to each other, driven by real-time observation data, to reproduce a virtual geo-space region on the supercomputer. Informatics is a new methodology for making precise space weather forecasts: based on new information and communication technologies (ICT), it provides more information in both quality and quantity. At NICT, we have been developing a cloud-computing system named the "space weather cloud," built on a high-speed network (JGN2+). Huge-scale distributed storage (1 PB), cluster computers, visualization systems and other resources are expected to yield new findings and services in space weather forecasting. The final goal of the NICT space weather service is to predict near-future space weather conditions and disturbances that cause satellite malfunctions, telecommunication problems, and GPS navigation errors. In the present talk, we introduce our recent activities in space weather services and discuss how we plan to develop them from the viewpoints of both space science and practical use.
Effects of Internal Waves on Sound Propagation in the Shallow Waters of the Continental Shelves
2016-09-01
Internal waves in the experiment area were largely generated by tidal forcing. Compared to simulations without internal waves, simulations accounting for the effects of … IN THE SHALLOW WATERS OF THE CONTINENTAL SHELVES … 1. Internal Tides—Internal Waves Generated by Tidal Forcing
A computational neural model of goal-directed utterance selection.
Klein, Michael; Kamp, Hans; Palm, Guenther; Doya, Kenji
2010-06-01
It is generally agreed that much of human communication is motivated by extra-linguistic goals: we often make utterances in order to get others to do something, or to make them support our cause, or adopt our point of view, etc. However, thus far a computational foundation for this view on language use has been lacking. In this paper we propose such a foundation using Markov Decision Processes. We borrow computational components from the field of action selection and motor control, where a neurobiological basis of these components has been established. In particular, we make use of internal models (i.e., next-state transition functions defined on current state action pairs). The internal model is coupled with reinforcement learning of a value function that is used to assess the desirability of any state that utterances (as well as certain non-verbal actions) can bring about. This cognitive architecture is tested in a number of multi-agent game simulations. In these computational experiments an agent learns to predict the context-dependent effects of utterances by interacting with other agents that are already competent speakers. We show that the cognitive architecture can account for acquiring the capability of deciding when to speak in order to achieve a certain goal (instead of performing a non-verbal action or simply doing nothing), whom to address and what to say. Copyright 2010 Elsevier Ltd. All rights reserved.
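The core loop (an internal model of utterance effects plus reinforcement learning of a value function) can be illustrated with a deliberately tiny sketch. Everything below (the single decision state, the three actions, their costs and success probabilities) is invented for illustration; it is not the paper's architecture:

```python
import random

# Toy goal-directed utterance selection in an MDP framing: the agent can
# speak (ask a competent listener for help), act non-verbally, or stay
# silent, and learns action values by Q-learning. All numbers are invented.
SPEAK, ACT_SELF, DO_NOTHING = 0, 1, 2
ACTIONS = [SPEAK, ACT_SELF, DO_NOTHING]
GAMMA, ALPHA, EPS = 0.9, 0.1, 0.2

def step(action, rng):
    """Return (reward, done) for the single non-terminal state."""
    if action == SPEAK:          # cheap utterance, usually fulfilled
        done = rng.random() < 0.9
        return (-0.1 + (1.0 if done else 0.0), done)
    if action == ACT_SELF:       # costly non-verbal action, always works
        return (-0.5 + 1.0, True)
    return (-0.01, False)        # stay silent, goal not reached

def train(episodes=5000, seed=0):
    rng = random.Random(seed)
    q = [0.0, 0.0, 0.0]          # Q-values for the lone decision state
    for _ in range(episodes):
        done = False
        while not done:
            a = rng.choice(ACTIONS) if rng.random() < EPS else q.index(max(q))
            r, done = step(a, rng)
            target = r + (0.0 if done else GAMMA * max(q))
            q[a] += ALPHA * (target - q[a])
    return q

q = train()
print(q, q.index(max(q)))   # speaking should end up with the highest value
```

With these made-up costs, asking for help dominates acting alone, mirroring the paper's point that deciding *when to speak* falls out of value comparison.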
Paloncýová, Markéta; Langer, Michal; Otyepka, Michal
2018-04-10
Carbon dots (CDs), one of the youngest members of the carbon nanostructure family, are now widely experimentally studied for their tunable fluorescence properties, bleaching resistance, and biocompatibility. Their interaction with biomolecular systems has also been explored experimentally. However, many atomistic details still remain unresolved. Molecular dynamics (MD) simulations enabling atomistic and femtosecond resolutions simultaneously are a well-established tool of computational chemistry which can provide useful insights into investigated systems. Here we present a full procedure for performing MD simulations of CDs. We developed a builder for generating CDs of a desired size and with various oxygen-containing surface functional groups. Further, we analyzed the behavior of various CDs differing in size, surface functional groups, and degrees of functionalization by MD simulations. These simulations showed that surface functionalized CDs are stable in a water environment through the formation of an extensive hydrogen bonding network. We also analyzed the internal dynamics of individual layers of CDs and evaluated the role of surface functional groups on CD stability. We observed that carboxyl groups interconnected the neighboring layers and decreased the rate of internal rotations. Further, we monitored changes in the CD shape caused by an excess of charged carboxyl groups or carbonyl groups. In addition to simulations in water, we analyzed the behavior of CDs in the organic solvent DMF, which decreased the stability of pure CDs but increased the level of interlayer hydrogen bonding. We believe that the developed protocol, builder, and parameters will facilitate future studies addressing various aspects of structural features of CDs and nanocomposites containing CDs.
2004-12-31
Research, 58(1), 47-77. Herl, H. E., O'Neil, H. F., Jr., Chung, G., & Schacter, J. (1999). Reliability and validity of a computer-based knowledge mapping … simulation: A meta-analysis. International Journal of Instructional Media, 26(1), 71-85. Leemkuil, H., de Jong, T., de Hoog, R., & Christoph, N. (2003) … FINAL REPORT ON PLAN FOR THE ASSESSMENT AND EVALUATION OF INDIVIDUAL AND TEAM PROFICIENCIES DEVELOPED BY THE DARWARS ENVIRONMENTS. Harold F. O'Neil
2005-03-01
International Conference on Computers, Communications and Networks, 153-161, Lafayette, LA. Deitel, H.M. and P.J. Deitel. 2003. C++ How to Program … of this study is to provide an additional performance evaluation technique for the TNT program of the Naval Postgraduate School. The current approach … case are the PAMAS and DBTMA protocols. Toh (2002) illustrates how these approaches succeed in solving the problem. In order to address all the
Improved definition of crustal anomalies for Magsat data
NASA Technical Reports Server (NTRS)
1981-01-01
A scheme was developed for separating the portions of the magnetic field measured by the Magsat 1 satellite that arise from internal and external sources. To test this method, a set of sample coefficients was used to compute field values along a simulated satellite orbit. These data were then used to attempt to recover the original coefficients. Matrix inversion and recursive least squares methods were used to solve for the input coefficients, and the accuracies of the two methods are compared.
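The two recovery approaches mentioned (batch matrix inversion via the normal equations versus recursive least squares) can be sketched on a toy linear model. The dimensions, noise level and synthetic "orbit" samples below are assumptions for illustration, not Magsat values:

```python
import numpy as np

# Recover coefficients x from y = A @ x + noise two ways: a batch
# normal-equations solve and a one-observation-at-a-time recursive
# least squares (RLS) update. All sizes and noise are invented.
rng = np.random.default_rng(1)
n_obs, n_coef = 200, 8
x_true = rng.standard_normal(n_coef)            # "field coefficients"
A = rng.standard_normal((n_obs, n_coef))        # samples along a mock orbit
y = A @ x_true + 1e-3 * rng.standard_normal(n_obs)

# Batch solution via the normal equations (matrix inversion).
x_batch = np.linalg.solve(A.T @ A, A.T @ y)

# Recursive least squares: refine the estimate observation by observation.
x_rls = np.zeros(n_coef)
P = 1e6 * np.eye(n_coef)                        # large initial covariance
for a, yi in zip(A, y):
    k = P @ a / (1.0 + a @ P @ a)               # gain vector
    x_rls += k * (yi - a @ x_rls)               # correct with the residual
    P -= np.outer(k, a @ P)                     # shrink the covariance

print(np.max(np.abs(x_batch - x_true)), np.max(np.abs(x_rls - x_true)))
```

Both estimates converge to the same least-squares solution; RLS simply avoids storing and inverting the full system, which is attractive for long satellite data streams.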
Relaxation of the residual defect structure in deformed polycrystals under ultrasonic action
NASA Astrophysics Data System (ADS)
Murzaev, R. T.; Bachurin, D. V.; Nazarov, A. A.
2017-07-01
Using numerical computer simulation, the behavior of disordered dislocation systems under the action of a monochromatic standing sound wave has been investigated in a grain of a model two-dimensional polycrystal containing nonequilibrium grain boundaries. It has been found that the presence of grain boundaries markedly affects the behavior of dislocations. The relaxation process and the changes in the level of internal stresses caused by the rearrangement of the dislocation structure under ultrasonic action have been studied.
STS-105 Crew Training in VR Lab
2001-03-15
JSC2001-00748 (15 March 2001) --- Astronaut Patrick G. Forrester, STS-105 mission specialist, prepares to use specialized gear in the virtual reality lab at the Johnson Space Center (JSC) to train for his duties aboard the Space Shuttle Discovery. This type of virtual reality training allows the astronauts to wear a helmet and special gloves while looking at computer displays simulating actual movements around the various locations on the International Space Station (ISS) hardware with which they will be working.
STS-111 Training in VR lab with Expedition IV and V Crewmembers
2001-10-18
JSC2001-E-39083 (18 October 2001) --- Astronaut Franklin R. Chang-Diaz, STS-111 mission specialist, uses specialized gear in the virtual reality lab at the Johnson Space Center (JSC) to train for his duties aboard the Space Shuttle Endeavour. This type of virtual reality training allows the astronauts to wear a helmet and special gloves while looking at computer displays simulating actual movements around the various locations on the International Space Station (ISS) hardware with which they will be working.
STS-120 crew along with Expedition crew members Dan Tani and Sandra Magnus
2007-08-09
JSC2007-E-41535 (9 Aug. 2007) --- Astronaut Douglas H. Wheelock, STS-120 mission specialist, uses virtual reality hardware in the Space Vehicle Mockup Facility at Johnson Space Center to rehearse some of his duties on the upcoming mission to the International Space Station. This type of virtual reality training allows the astronauts to wear special gloves and other gear while looking at computer displays simulating actual movements around the various locations on the station hardware with which they will be working.
STS-134 crew and Expedition 24/25 crew member Shannon Walker
2010-03-25
JSC2010-E-043660 (25 March 2010) --- NASA astronaut Greg Chamitoff, STS-134 mission specialist, uses virtual reality hardware in the Space Vehicle Mock-up Facility at NASA's Johnson Space Center to rehearse some of his duties on the upcoming mission to the International Space Station. This type of virtual reality training allows the astronauts to wear a helmet and special gloves while looking at computer displays simulating actual movements around the various locations on the station hardware with which they will be working.
STS-134 crew and Expedition 24/25 crew member Shannon Walker
2010-03-25
JSC2010-E-043685 (25 March 2010) --- NASA astronaut Michael Fincke, STS-134 mission specialist, uses virtual reality hardware in the Space Vehicle Mock-up Facility at NASA's Johnson Space Center to rehearse some of his duties on the upcoming mission to the International Space Station. This type of virtual reality training allows the astronauts to wear a helmet and special gloves while looking at computer displays simulating actual movements around the various locations on the station hardware with which they will be working.
2005-02-03
JSC2005-E-04513 (3 Feb. 2005) --- European Space Agency (ESA) astronaut Christer Fuglesang, STS-116 mission specialist, uses virtual reality hardware in the Space Vehicle Mockup Facility at the Johnson Space Center to rehearse some of his duties on the upcoming mission to the International Space Station. This type of virtual reality training allows the astronauts to wear a helmet and special gloves while looking at computer displays simulating actual movements around the various locations on the station hardware with which they will be working.
STS-120 crew along with Expedition crew members Dan Tani and Sandra Magnus
2007-08-09
JSC2007-E-41537 (9 Aug. 2007) --- Astronaut Douglas H. Wheelock, STS-120 mission specialist, uses virtual reality hardware in the Space Vehicle Mockup Facility at Johnson Space Center to rehearse some of his duties on the upcoming mission to the International Space Station. This type of virtual reality training allows the astronauts to wear special gloves and other gear while looking at computer displays simulating actual movements around the various locations on the station hardware with which they will be working.
2013-08-22
software. Using this weapon, two ways of sending the trigger-fire response to the D-Flow software were proposed. One was to integrate a wireless game … Logitech International, S.A., Romanel-sur-Morges, Switzerland) and the Xbox 360 wireless controller for Windows (Microsoft, Redmond, WA). The circuit board … power the game controller on and off so that the batteries do not drain (though these devices will time out after approximately 10 minutes of
Computer modeling of test particle acceleration at oblique shocks
NASA Technical Reports Server (NTRS)
Decker, Robert B.
1988-01-01
The present evaluation of the basic techniques and illustrative results of charged-particle modeling numerical codes suitable for particle acceleration at oblique, fast-mode collisionless shocks emphasizes the treatment of ions as test particles, calculating particle dynamics through numerical integration along exact phase-space orbits. Attention is given to the acceleration of particles at planar, infinitesimally thin shocks, as well as to plasma simulations in which low-energy ions are injected and accelerated at quasi-perpendicular shocks with internal structure.
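Numerical integration of exact test-particle orbits is commonly done with a scheme such as the Boris pusher. The sketch below uses illustrative uniform fields to show the idea; the paper's shock geometry and field model are not reproduced here:

```python
import numpy as np

# Minimal test-particle integrator (Boris scheme) for ion motion in
# prescribed E and B fields. Field values, time step and initial
# conditions are illustrative assumptions only.
def boris_push(x, v, q_m, E, B, dt, steps):
    """Advance position x and velocity v; q_m is the charge-to-mass ratio."""
    traj = [x.copy()]
    for _ in range(steps):
        v_minus = v + 0.5 * q_m * E * dt          # half electric kick
        t = 0.5 * q_m * B * dt                    # magnetic rotation vector
        s = 2.0 * t / (1.0 + t @ t)
        v_prime = v_minus + np.cross(v_minus, t)
        v_plus = v_minus + np.cross(v_prime, s)   # full magnetic rotation
        v = v_plus + 0.5 * q_m * E * dt           # second half kick
        x = x + v * dt
        traj.append(x.copy())
    return np.array(traj), v

E = np.zeros(3)
B = np.array([0.0, 0.0, 1.0])                     # uniform field along z
x0 = np.zeros(3)
v0 = np.array([1.0, 0.0, 0.1])
traj, v_end = boris_push(x0, v0, q_m=1.0, E=E, B=B, dt=0.05, steps=2000)
print(np.linalg.norm(v0), np.linalg.norm(v_end))  # gyration conserves speed
```

The scheme's exact energy conservation in a pure magnetic field is what makes it a workhorse for long orbit integrations near shocks.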
NASA Astrophysics Data System (ADS)
Papa, Mauricio; Shenoi, Sujeet
The information infrastructure -- comprising computers, embedded devices, networks and software systems -- is vital to day-to-day operations in every sector: information and telecommunications, banking and finance, energy, chemicals and hazardous materials, agriculture, food, water, public health, emergency services, transportation, postal and shipping, government and defense. Global business and industry, governments, indeed society itself, cannot function effectively if major components of the critical information infrastructure are degraded, disabled or destroyed. Critical Infrastructure Protection II describes original research results and innovative applications in the interdisciplinary field of critical infrastructure protection. Also, it highlights the importance of weaving science, technology and policy in crafting sophisticated, yet practical, solutions that will help secure information, computer and network assets in the various critical infrastructure sectors. Areas of coverage include: - Themes and Issues - Infrastructure Security - Control Systems Security - Security Strategies - Infrastructure Interdependencies - Infrastructure Modeling and Simulation This book is the second volume in the annual series produced by the International Federation for Information Processing (IFIP) Working Group 11.10 on Critical Infrastructure Protection, an international community of scientists, engineers, practitioners and policy makers dedicated to advancing research, development and implementation efforts focused on infrastructure protection. The book contains a selection of twenty edited papers from the Second Annual IFIP WG 11.10 International Conference on Critical Infrastructure Protection held at George Mason University, Arlington, Virginia, USA in the spring of 2008.
The new agreement of the international RIGA consensus conference on nasal airway function tests.
Vogt, K; Bachmann-Harildstad, G; Lintermann, A; Nechyporenko, A; Peters, F; Wernecke, K D
2018-01-21
The report reflects an agreement based on the consensus conference of the International Standardization Committee on the Objective Assessment of the Nasal Airway in Riga, 2nd Nov. 2016. The aim of the conference was to address the existing nasal airway function tests and to take into account physical, mathematical and technical correctness as a base of international standardization, as well as the requirements of the Council Directive 93/42/EEC of 14 June 1993 concerning medical devices. Rhinomanometry, acoustic rhinometry, peak nasal inspiratory flow, Odiosoft-Rhino, optical rhinometry, 24-h measurements, computational fluid dynamics, nasometry and the mirror test were evaluated for important diagnostic criteria, which are the precision of the equipment, including calibration and the software applied; validity, with sensitivity, specificity, positive and negative predictive values; reliability, with intra-individual and inter-individual reproducibility; and responsiveness in clinical studies. For rhinomanometry, the logarithmic effective resistance was set as the parameter of high diagnostic relevance. In acoustic rhinometry, the area of interest for the minimal cross-sectional area will need further standardization. Peak nasal inspiratory flow is a reproducible and fast test, which showed a high range of mean values in different studies. The state of the art with computational fluid dynamics for the simulation of the airway still depends on high-performance computing hardware and will, after standardization of the software and of the hardware for imaging protocols, certainly deliver a better understanding of the nasal airway flux.
Understanding Emergency Care Delivery Through Computer Simulation Modeling.
Laker, Lauren F; Torabi, Elham; France, Daniel J; Froehle, Craig M; Goldlust, Eric J; Hoot, Nathan R; Kasaie, Parastu; Lyons, Michael S; Barg-Walkow, Laura H; Ward, Michael J; Wears, Robert L
2018-02-01
In 2017, Academic Emergency Medicine convened a consensus conference entitled, "Catalyzing System Change through Health Care Simulation: Systems, Competency, and Outcomes." This article, a product of the breakout session on "understanding complex interactions through systems modeling," explores the role that computer simulation modeling can and should play in research and development of emergency care delivery systems. This article discusses areas central to the use of computer simulation modeling in emergency care research. The four central approaches to computer simulation modeling are described (Monte Carlo simulation, system dynamics modeling, discrete-event simulation, and agent-based simulation), along with problems amenable to their use and relevant examples to emergency care. Also discussed is an introduction to available software modeling platforms and how to explore their use for research, along with a research agenda for computer simulation modeling. Through this article, our goal is to enhance adoption of computer simulation, a set of methods that hold great promise in addressing emergency care organization and design challenges. © 2017 by the Society for Academic Emergency Medicine.
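As a concrete taste of discrete-event simulation, one of the four approaches described, here is a minimal single-provider queue model of patient flow; the arrival and service rates are invented for illustration and do not come from the article:

```python
import random

# Minimal single-server emergency-department queue: exponential
# inter-arrival and service times, one provider, first-come first-served.
# Rates (patients per unit time) are illustrative assumptions.
def simulate_ed(n_patients=10000, arrival_rate=0.8, service_rate=1.0, seed=42):
    rng = random.Random(seed)
    t_arrive, waits = 0.0, []
    provider_free_at = 0.0
    for _ in range(n_patients):
        t_arrive += rng.expovariate(arrival_rate)   # next patient arrives
        start = max(t_arrive, provider_free_at)     # wait if provider busy
        waits.append(start - t_arrive)
        provider_free_at = start + rng.expovariate(service_rate)
    return sum(waits) / len(waits)

avg_wait = simulate_ed()
# M/M/1 theory gives a mean queueing wait of rho/(mu - lambda) = 4.0 here;
# the sample mean fluctuates around that value.
print(avg_wait)
```

Even this toy model reproduces the hallmark nonlinearity of emergency care queues: pushing utilization from 0.8 toward 1.0 makes waits explode, which is exactly the kind of system behavior these methods are used to explore.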
Uniform rovibrational collisional N2 bin model for DSMC, with application to atmospheric entry flows
NASA Astrophysics Data System (ADS)
Torres, E.; Bondar, Ye. A.; Magin, T. E.
2016-11-01
A state-to-state model for internal energy exchange and molecular dissociation allows for high-fidelity DSMC simulations. Elementary reaction cross sections for the N2(v, J) + N system were previously extracted from a quantum-chemical database originally compiled at NASA Ames Research Center. Due to the high computational cost of simulating the full range of inelastic collision processes (approximately 23 million reactions), a coarse-grain model, called the Uniform RoVibrational Collisional (URVC) bin model, can be used instead. This makes it possible to reduce the original 9390 rovibrational levels of N2 to 10 energy bins. In the present work, this reduced model is used to simulate a 2D flow configuration that more closely reproduces the conditions of high-speed entry into Earth's atmosphere. For this purpose, the URVC bin model had to be adapted for integration into the "Rarefied Gas Dynamics Analysis System" (RGDAS), a separate high-performance DSMC code capable of handling complex geometries and parallel computation. RGDAS was developed at the Institute of Theoretical and Applied Mechanics in Novosibirsk, Russia, for use by the European Space Agency (ESA) and shares many features with the well-known SMILE code developed by the same group. We show that the previously developed reduced mechanism can be implemented in RGDAS, and the results exhibit nonequilibrium effects consistent with those observed in previous 1D simulations.
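The coarse-graining step can be sketched as follows: assign each rovibrational level to one of ten uniform-width energy bins and give each bin a degeneracy-weighted representative energy. The synthetic level energies and weights below merely stand in for the real N2 database:

```python
import numpy as np

# Uniform-energy binning sketch in the spirit of the URVC bin model:
# 9390 synthetic "levels" are lumped into 10 equal-width energy bins.
# Energies and degeneracies here are random placeholders, not N2 data.
rng = np.random.default_rng(7)
n_levels, n_bins = 9390, 10
energies = np.sort(rng.uniform(0.0, 9.75, n_levels))   # eV, synthetic
degens = rng.integers(1, 10, n_levels)                 # 2J+1-like weights

edges = np.linspace(energies[0], energies[-1], n_bins + 1)
bin_idx = np.clip(np.searchsorted(edges, energies, side="right") - 1,
                  0, n_bins - 1)                       # bin of each level

# Degeneracy-weighted average energy and total weight per bin.
bin_energy = np.array([
    np.average(energies[bin_idx == b], weights=degens[bin_idx == b])
    for b in range(n_bins)
])
bin_weight = np.array([degens[bin_idx == b].sum() for b in range(n_bins)])
print(bin_energy)   # one representative energy per bin
```

In a DSMC code, collision cross sections are then tabulated against these ten bin states instead of the millions of level-to-level transitions.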
Rathgeber, Silke; Pakula, Tadeusz; Urban, Volker
2004-08-22
We investigated the generation dependent shape and internal structure of star-burst dendrimers under good solvent conditions using small angle x-ray scattering and molecular modeling. Measurements have been performed on poly(amidoamine) dendrimers with generations ranging from g=0 up to g=8 at low concentrations in methanol. We described the static form factor P(q) by a model taking into account the compact, globular shape as well as the loose, polymeric character of dendrimers. Monomer distributions within dendrimers are of special interest for potential applications and have been characterized by the pair correlation function gamma(r), as well as by the monomer and end-group density profiles, rho(r) and rho(e)(r), respectively. Monomer density profiles and gamma(r) can be derived from P(q) by modeling and via a model-independent approach using the inverse Fourier transformation algorithm first introduced by Glatter. Experimental results are compared with computer simulations performed for single dendrimers of various generations using the cooperative motion algorithm. The simulation gives direct access to gamma(r) and rho(r), allows an independent determination of P(q), and yields, in addition to the scattering experiment, information about the distribution of the end groups. Excellent qualitative agreement between experiment and simulation has been found. (c) 2004 American Institute of Physics.
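The model-independent route relies on the Fourier relation between the intensity form factor P(q) and the pair distance distribution p(r). As a self-contained check, the sketch below integrates the known p(r) of a homogeneous sphere and compares the result with the analytic sphere form factor; in an actual analysis the dendrimer p(r) or gamma(r) would take its place:

```python
import numpy as np

# Verify P(q) = integral of p(r) * sin(qr)/(qr) dr (normalized to P(0)=1)
# for a homogeneous sphere of radius R, whose p(r) and P(q) are both known.
R = 1.0
r = np.linspace(0.0, 2.0 * R, 4001)
gamma = 1.0 - 3.0 * r / (4.0 * R) + r**3 / (16.0 * R**3)  # sphere overlap fn
p_r = r**2 * gamma                                        # distance distribution

def trapz(y, x):
    """Plain trapezoidal rule (avoids NumPy-version differences)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def P_numeric(q):
    integrand = p_r * np.sinc(q * r / np.pi)   # sin(qr)/(qr), safe at r=0
    return trapz(integrand, r) / trapz(p_r, r)

def P_analytic(q):
    x = q * R
    return (3.0 * (np.sin(x) - x * np.cos(x)) / x**3) ** 2

for q in (0.5, 1.0, 2.0):
    print(q, P_numeric(q), P_analytic(q))      # the two columns should agree
```

Inverting the same relation (going from measured P(q) back to p(r)) is the ill-posed step that Glatter's indirect Fourier transformation regularizes.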
Comparison of DSMC and CFD Solutions of Fire II Including Radiative Heating
NASA Technical Reports Server (NTRS)
Liechty, Derek S.; Johnston, Christopher O.; Lewis, Mark J.
2011-01-01
The ability to compute rarefied, ionized hypersonic flows is becoming more important as missions such as Earth reentry, landing high mass payloads on Mars, and the exploration of the outer planets and their satellites are being considered. These flows may also contain significant radiative heating. To prepare for these missions, NASA is developing the capability to simulate rarefied, ionized flows and to then calculate the resulting radiative heating to the vehicle's surface. In this study, the DSMC codes DAC and DS2V are used to obtain charge-neutral ionization solutions. NASA's direct simulation Monte Carlo code DAC is currently being updated to include the ability to simulate charge-neutral ionized flows, take advantage of the recently introduced Quantum-Kinetic chemistry model, and include electronic energy levels as an additional internal energy mode. The Fire II flight test is used in this study to assess these new capabilities. The 1634-second data point was chosen so that comparisons could be made to computational fluid dynamics solutions. The Knudsen number at this point in time is such that the DSMC simulations are still tractable and the CFD computations are at the edge of what is considered valid. It is shown that there can be considerable variability in the vibrational temperature inferred from DSMC solutions and that, given how radiative heating is computed, the electronic temperature is much better suited for radiative calculations. To include the radiative portion of heating, the flow-field solutions are post-processed by the non-equilibrium radiation code HARA. Acceptable agreement between CFD and DSMC flow-field solutions is demonstrated, and the progress of the updates to DAC, along with an appropriate radiative heating solution, is discussed. In addition, future plans to generate more high fidelity radiative heat transfer solutions are discussed.
Design and construction of miniature artificial ecosystem based on dynamic response optimization
NASA Astrophysics Data System (ADS)
Hu, Dawei; Liu, Hong; Tong, Ling; Li, Ming; Hu, Enzhu
The miniature artificial ecosystem (MAES) is a combination of man, silkworm, salad and microalgae to partially regenerate O2, sanitary water and food while simultaneously disposing of CO2 and wastes; it therefore has a fundamental life support function. In order to enhance the safety and reliability of MAES and eliminate the influence of internal variations and external disturbances, it was necessary to configure MAES as a closed-loop control system, and it can be considered a prototype for a future bioregenerative life support system. However, MAES is a complex system possessing a large number of parameters, intricate nonlinearities, time-varying factors and uncertainties, so it is difficult to design and construct a prototype merely by conducting experiments by trial and error. Our research presents an effective way to resolve this problem by means of dynamic response optimization. First, a mathematical model of MAES, consisting of first-order nonlinear ordinary differential equations with parameters, was developed based on the relevant mechanisms and experimental data; second, a simulation model of MAES was derived on the MatLab/Simulink platform for model validation and further digital simulations; third, reference trajectories of the desired dynamic response of the system outputs were specified according to prescribed requirements; and finally, optimization of the initial values, tuned parameter and independent parameters was carried out using a genetic algorithm and an advanced direct search method, together with parallel computing, through computer simulations. The results showed that all parameters and configurations of MAES were determined after a series of computer experiments, and that its transient response performance and steady-state characteristics closely matched the reference curves.
Since the prototype is a physical system that represents the mathematical model with reasonable accuracy, the process of designing and constructing a prototype of MAES is the reverse of mathematical modeling and must be assisted by the results of computer simulation.
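The optimization step can be caricatured in a few lines: integrate a small ODE model and recover its parameters by matching a reference trajectory with a basic genetic algorithm. The logistic model, target values and GA settings below are all invented stand-ins, not the MAES model:

```python
import random

# Toy dynamic-response optimization: fit the parameters (r, K) of a
# forward-Euler logistic model (standing in for, e.g., microalgae biomass)
# to a reference trajectory using a simple genetic algorithm.
def simulate(r, K, x0=0.1, dt=0.1, steps=100):
    x, traj = x0, []
    for _ in range(steps):
        x += dt * r * x * (1.0 - x / K)    # forward-Euler logistic step
        traj.append(x)
    return traj

TARGET = simulate(r=0.8, K=2.0)            # "reference trajectory"

def cost(params):
    return sum((a - b) ** 2 for a, b in zip(simulate(*params), TARGET))

def genetic_fit(pop_size=40, gens=60, seed=3):
    rng = random.Random(seed)
    pop = [(rng.uniform(0.1, 2.0), rng.uniform(0.5, 5.0))
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        parents = pop[: pop_size // 4]     # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            (r1, k1), (r2, k2) = rng.sample(parents, 2)
            r = 0.5 * (r1 + r2) + rng.gauss(0.0, 0.05)  # crossover + mutation
            k = 0.5 * (k1 + k2) + rng.gauss(0.0, 0.05)
            children.append((max(r, 1e-3), max(k, 1e-3)))
        pop = parents + children
    return min(pop, key=cost)

r_fit, k_fit = genetic_fit()
print(r_fit, k_fit)   # should approach the reference r=0.8, K=2.0
```

The real MAES problem replaces the one-equation model with the coupled nonlinear ODE system and adds parallel evaluation of the population, but the loop structure is the same.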
NASA Astrophysics Data System (ADS)
Teng, Fei; Fang, Guohong; Xu, Xiaoqing
2017-09-01
A parameterized internal tide dissipation term and a self-attraction and loading (SAL) tide term are introduced in a barotropic numerical model to investigate the dynamics of the semidiurnal tidal constituents M2 and S2 in the Bohai Sea, Yellow Sea and East China Sea (BYECS). The optimal parameters for bottom friction and internal dissipation are obtained through a series of numerical computations. Numerical simulation shows that the tide-generating force contributes 1.2% of the M2 power for the entire BYECS and up to 2.8% for the East China Sea deep basin. The SAL tide contributes 4.4% of the M2 power for the BYECS and up to 9.3% for the East China Sea deep basin. Bottom friction plays a major role in dissipating tidal energy in the shelf regions, and the internal tide effect is important in the deep water regions. Numerical experiments show that artificial removal of the tide-generating force in the BYECS can cause a significant difference (as much as 30 cm) in model output. Artificial removal of the SAL tide in the BYECS can cause an even greater difference, up to 40 cm. This indicates that the SAL tide should be taken into account in numerical simulations, especially if the tide-generating force is considered.
ERIC Educational Resources Information Center
Zillesen, Pieter G. van Schaick
This paper introduces a hardware and software independent model for producing educational computer simulation environments. The model, which is based on the results of 32 studies of educational computer simulation program production, implies that educational computer simulation environments are specified, constructed, tested, implemented, and…
The Learning Effects of Computer Simulations in Science Education
ERIC Educational Resources Information Center
Rutten, Nico; van Joolingen, Wouter R.; van der Veen, Jan T.
2012-01-01
This article reviews the (quasi)experimental research of the past decade on the learning effects of computer simulations in science education. The focus is on two questions: how use of computer simulations can enhance traditional education, and how computer simulations are best used in order to improve learning processes and outcomes. We report on…
NASA Astrophysics Data System (ADS)
Nebashi, Ryusuke; Sakimura, Noboru; Sugibayashi, Tadahiko
2017-08-01
We evaluated the soft-error tolerance and energy consumption of an embedded computer with magnetic random access memory (MRAM) using two computer simulators. One is a central processing unit (CPU) simulator of a typical embedded computer system. We simulated the radiation-induced single-event-upset (SEU) probability in a spin-transfer-torque MRAM cell and also the failure rate of a typical embedded computer due to SEU errors in its main memory. The other is a delay tolerant network (DTN) system simulator. It simulates the power dissipation of the wireless sensor network nodes of the system using a revised CPU simulator and a network simulator. We demonstrated that the SEU effect on the embedded computer with 1 Gbit MRAM-based working memory is less than 1 failure in time (FIT). We also demonstrated that the energy consumption of the DTN sensor node with MRAM-based working memory can be reduced to 1/11 of its original value. These results indicate that MRAM-based working memory enhances the disaster tolerance of embedded computers.
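The FIT bookkeeping behind such a claim is simple arithmetic (1 FIT = 1 failure per 10^9 device-hours); the per-bit rate below is a made-up placeholder rather than the paper's measured MRAM value:

```python
# Back-of-envelope memory FIT estimate: total FIT scales linearly with
# the number of bits. The per-bit rate is a hypothetical placeholder.
per_bit_fit = 1e-10          # assumed SEU failures per bit per 1e9 hours
n_bits = 1 * 2**30           # 1 Gbit working memory
memory_fit = per_bit_fit * n_bits
print(memory_fit)            # about 0.107 FIT, i.e. below 1 FIT
```

The linearity is the point: a per-bit rate around 1e-9 FIT would already push a 1 Gbit array past the 1 FIT threshold.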
Fluid-solid coupled simulation of the ignition transient of solid rocket motor
NASA Astrophysics Data System (ADS)
Li, Qiang; Liu, Peijin; He, Guoqiang
2015-05-01
The first period of the solid rocket motor operation is the ignition transient, which involves complex processes and, according to chronological sequence, can be divided into several stages, namely, igniter jet injection, propellant heating and ignition, flame spreading, chamber pressurization and solid propellant deformation. The ignition transient should be comprehensively analyzed because it significantly influences the overall performance of the solid rocket motor. A numerical approach is presented in this paper for simulating the fluid-solid interaction problems in the ignition transient of the solid rocket motor. In the proposed procedure, the time-dependent numerical solutions of the governing equations of internal compressible fluid flow are loosely coupled with those of the geometrical nonlinearity problems to determine the propellant mechanical response and deformation. The well-known Zeldovich-Novozhilov model was employed to model propellant ignition and combustion. The fluid-solid coupling interface data interpolation scheme and coupling instance for different computational agents were also reported. Finally, numerical validation was performed, and the proposed approach was applied to the ignition transient of one laboratory-scale solid rocket motor. For the application, the internal ballistics were obtained from the ground hot firing test, and comparisons were made. Results show that the integrated framework allows us to perform coupled simulations of the propellant ignition, strong unsteady internal fluid flow, and propellant mechanical response in SRMs with satisfactory stability and efficiency and presents a reliable and accurate solution to complex multi-physics problems.
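The loose coupling strategy can be sketched with two toy "agents" that exchange data once per time step: a fluid update produces a pressure load, and a quasi-static solid turns it into a deformation that feeds back into the next fluid step. The models and constants below are invented to show only the staggered structure, not the SRM physics:

```python
# Minimal staggered (loosely coupled) fluid-solid time stepping:
# fluid step -> pressure load -> solid step -> geometry update -> repeat.
# The two single-variable "solvers" and all constants are invented.
def coupled_ignition(steps=5000, dt=1e-3):
    p, u = 0.0, 0.0                  # chamber pressure, wall displacement
    for _ in range(steps):
        # "Fluid" agent: mass addition vs. outflow; the effective chamber
        # volume grows with the current deformation u.
        dp = 10.0 - 2.0 * p * (1.0 + u)
        p += dt * dp
        # "Solid" agent: quasi-static linear elastic response to the load.
        u = p / 50.0
    return p, u

p_end, u_end = coupled_ignition()
print(p_end, u_end)   # settles at the mutually consistent steady state
```

In a real SRM simulation each scalar update becomes a full CFD or FEM solve, and the interface exchange additionally requires the interpolation scheme mentioned in the abstract, but the time-marching skeleton is this loop.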
NASA Technical Reports Server (NTRS)
Ahmad, Rashid A.; McCool, Alex (Technical Monitor)
2001-01-01
An enhanced performance solid rocket booster concept for the space shuttle system has been proposed. The concept booster will have strong commonality with the existing, proven, reliable four-segment Space Shuttle Reusable Solid Rocket Motors (RSRM) with individual component design (nozzle, insulator, etc.) optimized for a five-segment configuration. Increased performance is desirable to further enhance safety/reliability and/or increase payload capability. Performance increase will be achieved by adding a fifth propellant segment to the current four-segment booster and opening the throat to accommodate the increased mass flow while maintaining current pressure levels. One development concept under consideration is the static test of a "standard" RSRM with a fifth propellant segment inserted and appropriate minimum motor modifications. Feasibility studies are being conducted to assess the potential for any significant departure in component performance/loading from the well-characterized RSRM. An area of concern is the aft motor (submerged nozzle inlet, aft dome, etc.) where the altered internal flow resulting from the performance enhancing features (25% increase in mass flow rate, higher Mach numbers, modified subsonic nozzle contour) may result in increased component erosion and char. To assess this issue and to define the minimum design changes required to successfully static test a fifth segment RSRM engineering test motor, internal flow studies have been initiated. Internal aero-thermal environments were quantified in terms of conventional convective heating and discrete phase alumina particle impact/concentration and accretion calculations via Computational Fluid Dynamics (CFD) simulation. Two sets of comparative CFD simulations of the RSRM and the five-segment motor (FSM) concept were conducted with the commercial CFD code FLUENT. The first simulation involved a two-dimensional axi-symmetric model of the full motor, initial grain RSRM.
The second set of analyses included three-dimensional models of the RSRM and FSM aft motors with four-degree vectored nozzles.
Lee, Sangyun; Liang, Ruibin; Voth, Gregory A; Swanson, Jessica M J
2016-02-09
An important challenge in the simulation of biomolecular systems is a quantitative description of the protonation and deprotonation process of amino acid residues. Despite the seeming simplicity of adding or removing a positively charged hydrogen nucleus, simulating the actual protonation/deprotonation process is inherently difficult. It requires both the explicit treatment of the excess proton, including its charge defect delocalization and Grotthuss shuttling through inhomogeneous moieties (water and amino residues), and extensive sampling of coupled condensed phase motions. In a recent paper (J. Chem. Theory Comput. 2014, 10, 2729-2737), a multiscale approach was developed to map high-level quantum mechanics/molecular mechanics (QM/MM) data into a multiscale reactive molecular dynamics (MS-RMD) model in order to describe amino acid deprotonation in bulk water. In this article, we extend the fitting approach (called FitRMD) to create MS-RMD models for ionizable amino acids within proteins. The resulting models are shown to faithfully reproduce the free energy profiles of the reference QM/MM Hamiltonian for proton transport (PT) inside an example protein, the ClC-ec1 H+/Cl− antiporter. Moreover, we show that the resulting MS-RMD models are computationally efficient enough to then characterize more complex 2-dimensional free energy surfaces due to slow degrees of freedom such as water hydration of internal protein cavities that can be inherently coupled to the excess proton charge translocation. The FitRMD method is thus shown to be an effective way to map ab initio level accuracy into a much more computationally efficient reactive MD method in order to explicitly simulate and quantitatively describe amino acid protonation/deprotonation in proteins.
Bertocci, Gina E; Brown, Nathan P; Mich, Patrice M
2017-01-01
OBJECTIVE To evaluate effects of an orthosis on biomechanics of a cranial cruciate ligament (CrCL)-deficient canine stifle joint by use of a 3-D quasistatic rigid-body pelvic limb computer model simulating the stance phase of gait and to investigate influences of orthosis hinge stiffness (durometer). SAMPLE A previously developed computer simulation model for a healthy 33-kg 5-year-old neutered Golden Retriever. PROCEDURES A custom stifle joint orthosis was implemented in the CrCL-deficient pelvic limb computer simulation model. Ligament loads, relative tibial translation, and relative tibial rotation in the orthosis-stabilized stifle joint (baseline scenario; high-durometer hinge) were determined and compared with values for CrCL-intact and CrCL-deficient stifle joints. Sensitivity analysis was conducted to evaluate the influence of orthosis hinge stiffness on model outcome measures. RESULTS The orthosis decreased loads placed on the caudal cruciate and lateral collateral ligaments and increased load placed on the medial collateral ligament, compared with loads for the CrCL-intact stifle joint. Ligament loads were decreased in the orthosis-managed CrCL-deficient stifle joint, compared with loads for the CrCL-deficient stifle joint. Relative tibial translation and rotation decreased but were not eliminated after orthosis management. Increased orthosis hinge stiffness reduced tibial translation and rotation, whereas decreased hinge stiffness increased internal tibial rotation, compared with values for the baseline scenario. CONCLUSIONS AND CLINICAL RELEVANCE Stifle joint biomechanics were improved following orthosis implementation, compared with biomechanics of the CrCL-deficient stifle joint. Orthosis hinge stiffness influenced stifle joint biomechanics. An orthosis may be a viable option to stabilize a CrCL-deficient canine stifle joint.
Transient Three-Dimensional Side Load Analysis of Out-of-Round Film Cooled Nozzles
NASA Technical Reports Server (NTRS)
Wang, Ten-See; Lin, Jeff; Ruf, Joe; Guidos, Mike
2010-01-01
The objective of this study is to investigate the effect of nozzle out-of-roundness on the transient startup side loads. The out-of-roundness could be the result of asymmetric loads induced by hardware attached to the nozzle, asymmetric internal stresses induced by previous tests, and/or deformation, such as creep, from previous tests. The rocket engine studied encompasses a regeneratively cooled thrust chamber and a film-cooled nozzle extension with film coolant distributed from a turbine exhaust manifold. The computational methodology is based on an unstructured-grid, pressure-based computational fluid dynamics formulation and a transient inlet history based on an engine system simulation. Transient startup computations were performed for four degrees of nozzle ovalization: perfectly round, slightly out-of-round, moderately out-of-round, and significantly out-of-round. The computed side load physics caused by the nozzle out-of-roundness and its effect on nozzle side load are reported and discussed.
RNA secondary structure prediction using soft computing.
Ray, Shubhra Sankar; Pal, Sankar K
2013-01-01
Prediction of RNA structure is invaluable in creating new drugs and understanding genetic diseases. Several deterministic algorithms and soft computing-based techniques have been developed over more than a decade to determine the structure from a known RNA sequence. Soft computing gained importance with the need to obtain approximate solutions for RNA sequences by considering the issues related to kinetic effects, cotranscriptional folding, and estimation of certain energy parameters. A brief description of some of the soft computing-based techniques developed for RNA secondary structure prediction is presented along with their relevance. The basic concepts of RNA and its different structural elements, like helix, bulge, hairpin loop, internal loop, and multiloop, are described. These are followed by different methodologies employing genetic algorithms, artificial neural networks, and fuzzy logic. The role of various metaheuristics, like simulated annealing, particle swarm optimization, ant colony optimization, and tabu search, is also discussed. A relative comparison among different techniques in predicting 12 known RNA secondary structures is presented as an example. Future challenging issues are then mentioned.
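As a concrete point of reference, the deterministic side of this comparison can be sketched with the classic Nussinov dynamic program, which maximizes the number of nested base pairs. This is an illustrative textbook sketch, not code from the survey; the `min_loop` parameter and the pairing set (Watson-Crick plus G-U wobble) are common default choices:

```python
# Canonical base pairs, including the G-U wobble pair.
PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def nussinov(seq, min_loop=3):
    """Maximum number of nested base pairs in seq, enforcing a minimum
    hairpin loop size of min_loop unpaired bases."""
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):        # subsequence length - 1
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]                # case: j is unpaired
            for k in range(i, j - min_loop):   # case: j pairs with k
                if (seq[k], seq[j]) in PAIRS:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1] if n else 0
```

Soft-computing methods replace this exhaustive recursion with stochastic search (e.g., simulated annealing over candidate pair sets), trading optimality under the simple objective for the ability to fold in kinetic and energetic considerations the DP cannot express.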
Multigrid calculation of internal flows in complex geometries
NASA Technical Reports Server (NTRS)
Smith, K. M.; Vanka, S. P.
1992-01-01
The development, validation, and application of a general purpose multigrid solution algorithm and computer program for the computation of elliptic flows in complex geometries is presented. This computer program combines several desirable features including a curvilinear coordinate system, collocated arrangement of the variables, and Full Multi-Grid/Full Approximation Scheme (FMG/FAS). Provisions are made for the inclusion of embedded obstacles and baffles inside the flow domain. The momentum and continuity equations are solved in a decoupled manner and a pressure corrective equation is used to update the pressures such that the fluxes at the cell faces satisfy local mass continuity. Despite the computational overhead required in the restriction and prolongation phases of the multigrid cycling, the superior convergence results in reduced overall CPU time. The numerical scheme and selected results of several validation flows are presented. Finally, the procedure is applied to study the flowfield in a side-inlet dump combustor and twin jet impingement from a simulated aircraft fuselage.
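The correction-scheme machinery that FMG/FAS builds on can be shown in a much simpler setting. The following is a hypothetical minimal sketch (not the paper's solver) of a 1-D geometric multigrid V-cycle for -u'' = f with homogeneous Dirichlet boundaries, using Gauss-Seidel smoothing, full-weighting restriction, and linear prolongation:

```python
import numpy as np

def smooth(u, f, h, iters=3):
    """Gauss-Seidel sweeps on the interior points of -u'' = f."""
    for _ in range(iters):
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
    return u

def v_cycle(u, f, h):
    """One V-cycle on a grid of 2^k + 1 points with spacing h."""
    u = smooth(u, f, h)                     # pre-smoothing
    if len(u) <= 3:
        return u                            # coarsest grid: smoothing solves it
    # Residual r = f - A u for the 3-point stencil of -u''.
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] + (u[:-2] - 2.0 * u[1:-1] + u[2:]) / (h * h)
    # Full-weighting restriction to the coarse grid.
    rc = np.zeros((len(u) - 1) // 2 + 1)
    rc[1:-1] = 0.25 * (r[1:-2:2] + 2.0 * r[2:-1:2] + r[3::2])
    # Recursively solve the coarse-grid error equation A e = r.
    ec = v_cycle(np.zeros_like(rc), rc, 2.0 * h)
    # Linear prolongation of the coarse correction back to the fine grid.
    e = np.zeros_like(u)
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    return smooth(u + e, f, h)              # post-smoothing
```

The FAS variant used in the paper transfers the full solution (not just the error) between levels so the scheme also handles the nonlinear momentum equations, and the "full multigrid" part seeds each fine grid from a converged coarse-grid solution.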
Computational discovery of extremal microstructure families
Chen, Desai; Skouras, Mélina; Zhu, Bo; Matusik, Wojciech
2018-01-01
Modern fabrication techniques, such as additive manufacturing, can be used to create materials with complex custom internal structures. These engineered materials exhibit a much broader range of bulk properties than their base materials and are typically referred to as metamaterials or microstructures. Although metamaterials with extraordinary properties have many applications, designing them is very difficult and is generally done by hand. We propose a computational approach to discover families of microstructures with extremal macroscale properties automatically. Using efficient simulation and sampling techniques, we compute the space of mechanical properties covered by physically realizable microstructures. Our system then clusters microstructures with common topologies into families. Parameterized templates are eventually extracted from families to generate new microstructure designs. We demonstrate these capabilities on the computational design of mechanical metamaterials and present five auxetic microstructure families with extremal elastic material properties. Our study opens the way for the completely automated discovery of extremal microstructures across multiple domains of physics, including applications reliant on thermal, electrical, and magnetic properties. PMID:29376124
NASA Astrophysics Data System (ADS)
Holmes, Shawn Yvette
A simulation was created to emulate two Racial Ethical Sensitivity Test (REST) videos (Brabeck et al., 2000). The REST is a reliable assessment of ethical sensitivity to racial and gender intolerant behaviors in educational settings. Quantitative and qualitative analysis of the REST was performed using the Quick-REST survey and an interview protocol. The purpose of this study was to improve science educators' ability to recognize instances of racial and gender intolerant behaviors by leveraging the immersive qualities of simulations. The fictitious Hazelton High School virtual environment was created by the researcher and compared with the traditional REST. The study investigated whether computer simulations can influence the ethical sensitivity of preservice and inservice science teachers to racial and gender intolerant behaviors in school settings. The post-test-only research design involved 32 third-year science education students enrolled in science education classes at several southeastern universities and 31 science teachers from the same locale, some of whom were part of an NSF project. Participant samples were assigned to the video control group or the simulation experimental group. This resulted in four comparison groups: preservice video, preservice simulation, inservice video, and inservice simulation. Participants experienced two REST scenarios in the appropriate format and then responded to Quick-REST survey questions for both scenarios. Additionally, the simulation groups answered in-simulation and post-simulation questions. Nonparametric analysis of the Quick-REST ascertained differences between comparison groups. Cronbach's alpha was calculated for internal consistency. The REST interview protocol was used to analyze recognition of intolerant behaviors in the in-simulation prompts. Post-simulation prompts were analyzed for emergent themes concerning the effect of the simulation on responses.
The preservice video group had a significantly higher mean rank score than the other comparison groups. There were no significant differences across the remaining groups. Qualitative analyses of in-simulation prompts suggest that both preservice and inservice participants are unlikely to take action in an intolerant environment. Themes that emerged in the post-simulation responses indicated participants viewed the simulation as a reflective, interactive, personal, and organic environment.
Spectral-element Seismic Wave Propagation on CUDA/OpenCL Hardware Accelerators
NASA Astrophysics Data System (ADS)
Peter, D. B.; Videau, B.; Pouget, K.; Komatitsch, D.
2015-12-01
Seismic wave propagation codes are essential tools to investigate a variety of wave phenomena in the Earth. Furthermore, they can now be used for seismic full-waveform inversions in regional- and global-scale adjoint tomography. Although these seismic wave propagation solvers are crucial ingredients for improving the resolution of tomographic images to answer important questions about the nature of Earth's internal processes and subsurface structure, their practical application is often limited by high computational costs. They thus need high-performance computing (HPC) facilities to improve the current state of knowledge. At present, numerous large HPC systems embed many-core architectures such as graphics processing units (GPUs) to enhance numerical performance. Such hardware accelerators can be programmed using either the CUDA programming environment or the OpenCL language standard. CUDA software development targets NVIDIA graphics cards, while OpenCL has been adopted by additional hardware accelerators, e.g., AMD graphics cards, ARM-based processors, and Intel Xeon Phi coprocessors. For seismic wave propagation simulations using the open-source spectral-element code package SPECFEM3D_GLOBE, we incorporated an automatic source-to-source code generation tool (BOAST) which allows us to use meta-programming of all computational kernels for forward and adjoint runs. Using our BOAST kernels, we generate optimized source code for both the CUDA and OpenCL languages within the source code package. Seismic wave simulations are thus now able to fully utilize CUDA and OpenCL hardware accelerators. We show benchmarks of forward seismic wave propagation simulations using SPECFEM3D_GLOBE on CUDA/OpenCL GPUs, validating results and comparing performance for different simulations and hardware usages.
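The single-source idea behind this workflow can be illustrated with a deliberately tiny sketch. This is not BOAST (a full meta-programming and autotuning framework); it is a hypothetical Python template showing how one kernel description can be specialized into both CUDA and OpenCL source, which is the essence of targeting both backends from one place:

```python
# One kernel template; the backend-specific syntax is confined to a
# small dictionary of substitutions. Doubled braces are literal braces.
KERNEL_TEMPLATE = """{qualifier} void saxpy({global_q} float *y,
                     {global_q} const float *x,
                     const float a, const int n) {{
  int i = {index_expr};
  if (i < n) y[i] = a * x[i] + y[i];
}}"""

TARGETS = {
    "cuda": {
        "qualifier": "__global__",
        "global_q": "",  # CUDA pointer args need no address-space qualifier
        "index_expr": "blockIdx.x * blockDim.x + threadIdx.x",
    },
    "opencl": {
        "qualifier": "__kernel",
        "global_q": "__global",
        "index_expr": "get_global_id(0)",
    },
}

def generate(target: str) -> str:
    """Emit saxpy kernel source for the requested backend."""
    return KERNEL_TEMPLATE.format(**TARGETS[target])
```

A real generator like BOAST additionally builds the kernels from an abstract syntax representation, compiles and benchmarks the variants, and picks the fastest; the sketch above only shows the source-specialization step.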