Thermodynamic and transport properties of nitrogen fluid: Molecular theory and computer simulations
NASA Astrophysics Data System (ADS)
Eskandari Nasrabad, A.; Laghaei, R.
2018-04-01
Computer simulations and various theories are applied to compute the thermodynamic and transport properties of nitrogen fluid. To model the nitrogen interaction, an existing potential in the literature is modified to obtain a close agreement between the simulation results and experimental data for the orthobaric densities. We use the Generic van der Waals theory to calculate the mean free volume and apply the results within the modified Cohen-Turnbull relation to obtain the self-diffusion coefficient. Compared to experimental data, excellent results are obtained via computer simulations for the orthobaric densities, the vapor pressure, the equation of state, and the shear viscosity. We analyze the results of the theory and computer simulations for the various thermophysical properties.
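The free-volume route to self-diffusion described above can be sketched in a few lines. This is a minimal illustration of the Cohen-Turnbull functional form only; the constants `d0`, `gamma`, and `v_star` are illustrative placeholders, not the paper's fitted nitrogen values.

```python
import math

def cohen_turnbull_diffusion(v_free, v_star=1.0, gamma=0.5, d0=1.0e-8):
    """Self-diffusion coefficient from a Cohen-Turnbull free-volume form,
    D = d0 * exp(-gamma * v_star / v_free), where v_free is the mean free
    volume (e.g., from the generic van der Waals theory) and v_star a
    characteristic molecular volume. All constants are placeholders."""
    return d0 * math.exp(-gamma * v_star / v_free)

# Diffusion grows as the mean free volume grows (e.g., at lower density).
print(cohen_turnbull_diffusion(0.2) < cohen_turnbull_diffusion(0.8))  # True
```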
Quantum chemistry simulation on quantum computers: theories and experiments.
Lu, Dawei; Xu, Boruo; Xu, Nanyang; Li, Zhaokai; Chen, Hongwei; Peng, Xinhua; Xu, Ruixue; Du, Jiangfeng
2012-07-14
It has been claimed that quantum computers can simulate quantum systems efficiently, with resources that scale only polynomially in system size. Traditionally, such simulations are carried out numerically on classical computers, which inevitably face an exponential growth in required resources as the quantum system grows. Quantum computers avoid this problem and thus offer a possible route to simulating large quantum systems. In this paper, we first discuss the ideas of quantum simulation, the background of quantum simulators, their categories, and developments in both theory and experiment. We then give a brief introduction to quantum chemistry as evaluated on classical computers, followed by typical procedures for applying quantum simulation to quantum chemistry. We review not only theoretical proposals but also proof-of-principle experimental implementations on small quantum computers, including the evaluation of static molecular eigenenergies and the simulation of chemical reaction dynamics. Although experimental development still lags behind theory, we offer prospects and suggestions for future experiments. We anticipate that in the near future quantum simulation will become a powerful tool for quantum chemistry, surpassing classical computation.
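The classical baseline that a quantum simulator competes with is direct diagonalization of the molecular Hamiltonian, which is exact but scales exponentially with system size. A minimal sketch, using a toy 2x2 Hamiltonian in arbitrary units (not a real molecular Hamiltonian):

```python
import numpy as np

# Classically compute the static eigenenergies that a quantum simulator
# would estimate via phase estimation. The matrix below is an illustrative
# two-level stand-in, not a real molecular Hamiltonian.
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])

energies, states = np.linalg.eigh(H)
ground_energy = energies[0]
print(ground_energy)  # -sqrt(1 + 0.25) = -1.118...
```

For n qubits the matrix is 2^n x 2^n, which is exactly the exponential wall the abstract describes.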
ERIC Educational Resources Information Center
Singh, Gurmukh
2012-01-01
The present article is aimed primarily at advanced college and university undergraduates in chemistry and physics education, computational physics and chemistry, and computer science. A recent software system, MS Visual Studio .NET 2010, is employed to perform computer simulations for modeling Bohr's quantum theory of…
NASA Technical Reports Server (NTRS)
Parzen, Benjamin
1992-01-01
The theory of oscillator analysis in the immittance domain should be read in conjunction with the additional theory presented here. The combined theory enables the computer simulation of the steady state oscillator. The simulation makes the calculation of the oscillator total steady state performance practical, including noise at all oscillator locations. Some specific precision oscillators are analyzed.
Fiber Composite Sandwich Thermostructural Behavior: Computational Simulation
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Aiello, R. A.; Murthy, P. L. N.
1986-01-01
Several computational levels of progressive sophistication/simplification are described to computationally simulate composite sandwich hygral, thermal, and structural behavior. The computational levels of sophistication include: (1) three-dimensional detailed finite element modeling of the honeycomb, the adhesive and the composite faces; (2) three-dimensional finite element modeling of the honeycomb assumed to be an equivalent continuous, homogeneous medium, the adhesive and the composite faces; (3) laminate theory simulation where the honeycomb (metal or composite) is assumed to consist of plies with equivalent properties; and (4) derivations of approximate, simplified equations for thermal and mechanical properties by simulating the honeycomb as an equivalent homogeneous medium. The approximate equations are combined with composite hygrothermomechanical and laminate theories to provide a simple and effective computational procedure for simulating the thermomechanical/thermostructural behavior of fiber composite sandwich structures.
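Level (4) above replaces the honeycomb with an equivalent homogeneous medium described by simple closed-form properties. As a hedged illustration of that idea, the sketch below uses the standard thin-wall estimate for the relative density of a regular hexagonal honeycomb; this is a textbook closed form offered as an example, not the report's own derivation:

```python
import math

def honeycomb_equivalent_density(rho_wall, t_wall, cell_size):
    """Equivalent density of a thin-walled regular hexagonal honeycomb
    treated as a homogeneous medium: rho_eff ~ rho_wall * (2/sqrt(3)) * (t/l),
    with wall thickness t and cell edge length l. Illustrative only."""
    return rho_wall * (2.0 / math.sqrt(3.0)) * (t_wall / cell_size)

# Aluminum-like walls, 0.1 mm thick, 10 mm cells (illustrative numbers).
print(honeycomb_equivalent_density(2700.0, 0.1, 10.0))  # ~31 kg/m^3
```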
Simulating Serious Games: A Discrete-Time Computational Model Based on Cognitive Flow Theory
ERIC Educational Resources Information Center
Westera, Wim
2018-01-01
This paper presents a computational model for simulating how people learn from serious games. While avoiding the combinatorial explosion of a game's micro-states, the model offers a meso-level pathfinding approach, guided by cognitive flow theory and various concepts from the learning sciences. It extends a basic, existing model by exposing…
Shegog, Ross; Bartholomew, L Kay; Gold, Robert S; Pierrel, Elaine; Parcel, Guy S; Sockrider, Marianna M; Czyzewski, Danita I; Fernandez, Maria E; Berlin, Nina J; Abramson, Stuart
2006-01-01
Translating behavioral theories, models, and strategies to guide the development and structure of computer-based health applications is well recognized as valuable, although it remains a continuing challenge for program developers. A stepped approach to translating behavioral theory into the design of simulations that teach chronic disease management to children is described. The translation steps are to: 1) define target behaviors and their determinants, 2) identify theoretical methods to optimize behavioral change, and 3) choose educational strategies to apply these methods effectively and combine them into a cohesive computer-based simulation for health education. Asthma is used to exemplify a chronic health management problem, and a computer-based asthma management simulation (Watch, Discover, Think and Act) that has been evaluated and shown to improve asthma self-management in children exemplifies the application of theory to practice. Impact and outcome evaluation studies have indicated the effectiveness of these steps in providing increased rigor and accountability, suggesting their utility for educators and developers seeking to apply simulations to enhance self-management behaviors in patients.
Understanding Islamist political violence through computational social simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watkins, Jennifer H; Mackerrow, Edward P; Patelli, Paolo G
Understanding the process that enables political violence is of great value in reducing the future demand for and support of violent opposition groups. Methods are needed that allow alternative scenarios and counterfactuals to be scientifically researched. Computational social simulation shows promise in developing 'computer experiments' that would be unfeasible or unethical in the real world. Additionally, the process of modeling and simulation reveals and challenges assumptions that may not be noted in theories, exposes areas where data are not available, and provides a rigorous, repeatable, and transparent framework for analyzing the complex dynamics of political violence. This paper demonstrates the computational modeling process using two simulation techniques: system dynamics and agent-based modeling. The benefits and drawbacks of both techniques are discussed. In developing these social simulations, we discovered that the social science concepts and theories needed to accurately simulate the associated psychological and social phenomena were lacking.
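The agent-based technique mentioned above can be illustrated with a deliberately tiny model. The sketch below is a Granovetter-style threshold cascade, where an agent joins a movement once the active fraction of the population reaches a personal threshold; the thresholds and update rule are illustrative and are not taken from the paper.

```python
def run_threshold_model(n=100, steps=50):
    """Minimal agent-based sketch of mobilization: each agent activates
    once the fraction of the population already active reaches its
    personal threshold (illustrative thresholds, one initiator)."""
    thresholds = [i / n for i in range(n)]     # evenly spread thresholds
    active = [t == 0.0 for t in thresholds]    # a single initiator
    for _ in range(steps):
        frac = sum(active) / n
        active = [a or t <= frac for a, t in zip(active, thresholds)]
    return sum(active) / n

print(run_threshold_model())  # one new recruit per step -> 0.51 after 50 steps
```

Even this toy shows the appeal of 'computer experiments': one can rerun the cascade under counterfactual threshold distributions that could never be imposed on a real population.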
Application of control theory to dynamic systems simulation
NASA Technical Reports Server (NTRS)
Auslander, D. M.; Spear, R. C.; Young, G. E.
1982-01-01
Control theory is applied to dynamic systems simulation. Theory and methodology applicable to controlled ecological life support systems are considered. Spatial effects on system stability, the design of control systems with uncertain parameters, and an interactive computing language (PARASOL-II) designed for dynamic system simulation, report-quality graphics, data acquisition, and simple real-time control are discussed.
Simulation of Nonlinear Instabilities in an Attachment-Line Boundary Layer
NASA Technical Reports Server (NTRS)
Joslin, Ronald D.
1996-01-01
The linear and the nonlinear stability of disturbances that propagate along the attachment line of a three-dimensional boundary layer is considered. The spatially evolving disturbances in the boundary layer are computed by direct numerical simulation (DNS) of the unsteady, incompressible Navier-Stokes equations. Disturbances are introduced either by forcing at the inflow or by applying suction and blowing at the wall. Quasi-parallel linear stability theory and a nonparallel theory yield notably different stability characteristics for disturbances near the critical Reynolds number; the DNS results confirm the latter theory. Previously, a weakly nonlinear theory and computations revealed a high wave-number region of subcritical disturbance growth. More recent computations have failed to achieve this subcritical growth. The present computational results indicate the presence of subcritically growing disturbances; the results support the weakly nonlinear theory. Furthermore, an explanation is provided for the previous theoretical and computational discrepancy. In addition, the present results demonstrate that steady suction can be used to stabilize disturbances that otherwise grow subcritically along the attachment line.
Approaches to Classroom-Based Computational Science.
ERIC Educational Resources Information Center
Guzdial, Mark
Computational science includes the use of computer-based modeling and simulation to define and test theories about scientific phenomena. The challenge for educators is to develop techniques for implementing computational science in the classroom. This paper reviews some previous work on the use of simulation alone (without modeling), modeling…
Real-time dynamics of lattice gauge theories with a few-qubit quantum computer
NASA Astrophysics Data System (ADS)
Martinez, Esteban A.; Muschik, Christine A.; Schindler, Philipp; Nigg, Daniel; Erhard, Alexander; Heyl, Markus; Hauke, Philipp; Dalmonte, Marcello; Monz, Thomas; Zoller, Peter; Blatt, Rainer
2016-06-01
Gauge theories are fundamental to our understanding of interactions between the elementary constituents of matter as mediated by gauge bosons. However, computing the real-time dynamics in gauge theories is a notorious challenge for classical computational methods. This has recently stimulated theoretical effort, using Feynman’s idea of a quantum simulator, to devise schemes for simulating such theories on engineered quantum-mechanical devices, with the difficulty that gauge invariance and the associated local conservation laws (Gauss laws) need to be implemented. Here we report the experimental demonstration of a digital quantum simulation of a lattice gauge theory, by realizing (1 + 1)-dimensional quantum electrodynamics (the Schwinger model) on a few-qubit trapped-ion quantum computer. We are interested in the real-time evolution of the Schwinger mechanism, describing the instability of the bare vacuum due to quantum fluctuations, which manifests itself in the spontaneous creation of electron-positron pairs. To make efficient use of our quantum resources, we map the original problem to a spin model by eliminating the gauge fields in favour of exotic long-range interactions, which can be directly and efficiently implemented on an ion trap architecture. We explore the Schwinger mechanism of particle-antiparticle generation by monitoring the mass production and the vacuum persistence amplitude. Moreover, we track the real-time evolution of entanglement in the system, which illustrates how particle creation and entanglement generation are directly related. Our work represents a first step towards quantum simulation of high-energy theories using atomic physics experiments—the long-term intention is to extend this approach to real-time quantum simulations of non-Abelian lattice gauge theories.
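The digital-simulation strategy in this abstract (map the gauge theory to spins, then apply the time evolution as a sequence of gates) can be caricatured classically. The sketch below Trotterizes the evolution of a toy two-spin Hamiltonian and checks it against the exact propagator; the Hamiltonian `H = J*XX + h*(Z1+Z2)` is an illustrative stand-in, not the Schwinger-model spin Hamiltonian of the experiment.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def expm_hermitian(H, t):
    """exp(-i H t) for Hermitian H via eigendecomposition."""
    w, v = np.linalg.eigh(H)
    return v @ np.diag(np.exp(-1j * w * t)) @ v.conj().T

J, h, t = 1.0, 0.5, 1.0
H1 = J * np.kron(X, X)                       # interaction term
H2 = h * (np.kron(Z, I2) + np.kron(I2, Z))   # local field terms
H = H1 + H2

psi0 = np.zeros(4, dtype=complex)
psi0[0] = 1.0                                # start in |00>, the "bare vacuum"

exact = expm_hermitian(H, t) @ psi0

n = 200                                      # number of Trotter steps
step = expm_hermitian(H1, t / n) @ expm_hermitian(H2, t / n)
trotter = np.linalg.matrix_power(step, n) @ psi0

fidelity = np.abs(np.vdot(exact, trotter)) ** 2
print(fidelity)  # close to 1; first-order Trotter error shrinks as 1/n
```

On a quantum device each Trotter step becomes a short gate sequence, which is what makes the real-time dynamics tractable there.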
Scalco, Andrea; Ceschi, Andrea; Sartori, Riccardo
2018-01-01
Computer simulations are likely to assume a greater role in the near future in investigating and understanding reality (Rand & Rust, 2011). In particular, agent-based models (ABMs) represent a method of investigating social phenomena that blends the knowledge of the social sciences with the advantages of virtual simulations. Within this context, developing algorithms able to recreate the reasoning engine of autonomous virtual agents is one of the most fragile aspects, and it is crucial to ground such models in well-supported psychological theoretical frameworks. For this reason, the present work discusses the application of the theory of planned behavior (TPB; Ajzen, 1991) to agent-based modeling: it is argued that this framework may be more helpful than others for developing a valid representation of human behavior in computer simulations. Accordingly, the current contribution considers issues related to applying the TPB model inside computer simulations and suggests potential solutions, in the hope of helping to shorten the distance between the fields of psychology and computer science.
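A TPB-driven agent is often reduced to a weighted combination of the theory's three antecedents. The sketch below shows one such reduction; the weights, the [0, 1] scoring, and the decision threshold are illustrative modeling assumptions, not values from the paper or from Ajzen's theory.

```python
def tpb_intention(attitude, subjective_norm, perceived_control,
                  w_att=0.4, w_norm=0.3, w_pbc=0.3):
    """Behavioral intention as a weighted sum of the three TPB antecedents,
    each scored in [0, 1]. The weights are placeholders a modeler would
    calibrate against survey data."""
    return (w_att * attitude + w_norm * subjective_norm
            + w_pbc * perceived_control)

def agent_acts(intention, threshold=0.5):
    """A virtual agent performs the behavior once intention crosses a
    (hypothetical) decision threshold."""
    return intention >= threshold

print(agent_acts(tpb_intention(0.9, 0.8, 0.7)))  # -> True
```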
Building an adiabatic quantum computer simulation in the classroom
NASA Astrophysics Data System (ADS)
Rodríguez-Laguna, Javier; Santalla, Silvia N.
2018-05-01
We present a didactic introduction to adiabatic quantum computation (AQC) via the explicit construction of a classical simulator of quantum computers. This constitutes a suitable route to introduce several important concepts for advanced undergraduates in physics: quantum many-body systems, quantum phase transitions, disordered systems, spin-glasses, and computational complexity theory.
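The heart of such a classroom simulator is diagonalizing the interpolated Hamiltonian H(s) = (1 - s) H_x + s H_z and watching the spectral gap, which controls how slowly the sweep must run to stay adiabatic. A single-qubit sketch, with illustrative Hamiltonians rather than the article's construction:

```python
import numpy as np

Hx = -np.array([[0.0, 1.0], [1.0, 0.0]])   # initial transverse-field term
Hz = -np.array([[1.0, 0.0], [0.0, -1.0]])  # final "problem" Hamiltonian

gaps = []
for s in np.linspace(0.0, 1.0, 101):
    w = np.linalg.eigvalsh((1.0 - s) * Hx + s * Hz)
    gaps.append(w[1] - w[0])

print(min(gaps))  # minimum gap sqrt(2) ~ 1.414, reached at s = 0.5
```

Replacing the 2x2 matrices with tensor products of Pauli operators turns the same loop into a many-body demonstration of quantum phase transitions and gap closing.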
An analysis of the 70-meter antenna hydrostatic bearing by means of computer simulation
NASA Technical Reports Server (NTRS)
Bartos, R. D.
1993-01-01
Recently, the computer program 'A Computer Solution for Hydrostatic Bearings with Variable Film Thickness,' used to design the hydrostatic bearing of the 70-meter antennas, was modified to improve the accuracy with which it predicts the film height profile and oil pressure distribution between the hydrostatic bearing pad and the runner. This article describes the modified program, the theory underlying its computations, and the computer simulation results, along with a discussion of those results.
ERIC Educational Resources Information Center
Navarro, Aaron B.
1981-01-01
Presents a program in Level II BASIC for a TRS-80 computer that simulates a Turing machine and discusses the nature of the device. The program is run interactively and is designed to be used as an educational tool by computer science or mathematics students studying computational or automata theory. (MP)
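The same teaching device translates naturally to a modern language. Below is a minimal Turing machine interpreter in Python in the spirit of the article's Level II BASIC program (the rule format and the bit-flipping example machine are our own illustrative choices):

```python
def run_turing(tape, rules, state="q0", head=0, blank="_", max_steps=1000):
    """Minimal Turing machine interpreter. rules maps (state, symbol) to
    (write, move, next_state); move is -1 for left, +1 for right. Cells
    outside the input read as the blank symbol."""
    cells = dict(enumerate(tape))
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += move
    return "".join(cells[i] for i in sorted(cells))

# A one-state machine that flips bits until it reads a blank, then halts.
rules = {
    ("q0", "0"): ("1", +1, "q0"),
    ("q0", "1"): ("0", +1, "q0"),
    ("q0", "_"): ("_", +1, "halt"),
}
print(run_turing("0110", rules))  # -> 1001_
```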
Computer Simulation of Laboratory Experiments: An Unrealized Potential.
ERIC Educational Resources Information Center
Magin, D. J.; Reizes, J. A.
1990-01-01
Discussion of the use of computer simulation for laboratory experiments in undergraduate engineering education focuses on work at the University of New South Wales in the instructional design and software development of a package simulating a heat exchange device. The importance of integrating theory, design, and experimentation is also discussed.…
Elementary Teachers' Simulation Adoption and Inquiry-Based Use Following Professional Development
ERIC Educational Resources Information Center
Gonczi, Amanda; Maeng, Jennifer; Bell, Randy
2017-01-01
The purpose of this study was to characterize and compare 64 elementary science teachers' computer simulation use prior to and following professional development (PD) aligned with Innovation Adoption Theory. The PD highlighted computer simulation affordances that elementary teachers might find particularly useful. Qualitative and quantitative…
NASA Astrophysics Data System (ADS)
Wagner, R.; Norman, M. L.
Here we present a working example of a Basic SkyNode serving theoretical data. The data is taken from the Simulated Cluster Archive (SCA), a set of simulated X-ray clusters, where each cluster was computed using four different physics models. The LCA Theory SkyNode (LCATheory) tables contain columns of the integrated physical properties of the clusters at various redshifts. The ease of setting up a Theory SkyNode is an important result, because it represents a clear way to present theory data to the Virtual Observatory. Also, our Theory SkyNode provides a prototype for additional simulated object catalogs, which will be created from other simulations by our group, and hopefully others.
A computational model for simulating text comprehension.
Lemaire, Benoît; Denhière, Guy; Bellissens, Cédrick; Jhean-Larose, Sandra
2006-11-01
In the present article, we outline the architecture of a computer program for simulating the process by which humans comprehend texts. The program is based on psycholinguistic theories about human memory and text comprehension processes, such as the construction-integration model (Kintsch, 1998), the latent semantic analysis theory of knowledge representation (Landauer & Dumais, 1997), and the predication algorithms (Kintsch, 2001; Lemaire & Bianco, 2003), and it is intended to help psycholinguists investigate the way humans comprehend texts.
Investigating the Effectiveness of Computer Simulations for Chemistry Learning
ERIC Educational Resources Information Center
Plass, Jan L.; Milne, Catherine; Homer, Bruce D.; Schwartz, Ruth N.; Hayward, Elizabeth O.; Jordan, Trace; Verkuilen, Jay; Ng, Florrie; Wang, Yan; Barrientos, Juan
2012-01-01
Are well-designed computer simulations an effective tool to support student understanding of complex concepts in chemistry when integrated into high school science classrooms? We investigated scaling up the use of a sequence of simulations of kinetic molecular theory and associated topics of diffusion, gas laws, and phase change, which we designed…
Mechanisms of Developmental Change in Infant Categorization
ERIC Educational Resources Information Center
Westermann, Gert; Mareschal, Denis
2012-01-01
Computational models are tools for testing mechanistic theories of learning and development. Formal models allow us to instantiate theories of cognitive development in computer simulations. Model behavior can then be compared to real performance. Connectionist models, loosely based on neural information processing, have been successful in…
The application of the integral equation theory to study the hydrophobic interaction
Mohorič, Tomaž; Urbic, Tomaz; Hribar-Lee, Barbara
2014-01-01
The Wertheim integral equation theory was tested against newly obtained Monte Carlo computer simulations to describe the potential of mean force between two hydrophobic particles. Excellent agreement was obtained between the theoretical and simulation results. Further, the Wertheim integral equation theory with the polymer Percus-Yevick closure describes the solvation structure qualitatively correctly (with respect to the experimental data) under conditions where simulation results of sufficient accuracy are difficult to obtain. PMID:24437891
Theory and Simulation of Multicomponent Osmotic Systems
Karunaweera, Sadish; Gee, Moon Bae; Weerasinghe, Samantha; Smith, Paul E.
2012-01-01
Most cellular processes occur in systems containing a variety of components many of which are open to material exchange. However, computer simulations of biological systems are almost exclusively performed in systems closed to material exchange. In principle, the behavior of biomolecules in open and closed systems will be different. Here, we provide a rigorous framework for the analysis of experimental and simulation data concerning open and closed multicomponent systems using the Kirkwood-Buff (KB) theory of solutions. The results are illustrated using computer simulations for various concentrations of the solutes Gly, Gly2 and Gly3 in both open and closed systems, and in the absence or presence of NaCl as a cosolvent. In addition, KB theory is used to help rationalize the aggregation properties of the solutes. Here one observes that the picture of solute association described by the KB integrals, which are directly related to the solution thermodynamics, and that provided by more physical clustering approaches are different. It is argued that the combination of KB theory and simulation data provides a simple and powerful tool for the analysis of complex multicomponent open and closed systems. PMID:23329894
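The KB integrals at the center of this framework are computed from radial distribution functions, G_ij = 4π ∫ (g_ij(r) − 1) r² dr. A minimal numerical sketch using an illustrative hard-sphere-like g(r) (not data from the paper's Gly/NaCl simulations):

```python
import numpy as np

def kb_integral(r, g):
    """Kirkwood-Buff integral G = 4*pi * Int[(g(r) - 1) * r^2] dr,
    evaluated by the trapezoidal rule on arrays r and g."""
    y = (g - 1.0) * r ** 2
    return 4.0 * np.pi * float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(r)))

r = np.linspace(0.0, 10.0, 2001)
g_hard = (r >= 1.0).astype(float)  # crude excluded-volume rdf (illustrative)
print(kb_integral(r, g_hard))      # near -4*pi/3, the excluded volume
```

Positive G_ij indicates net accumulation of species j around i (e.g., solute aggregation); negative values indicate depletion, which is how the KB picture ties structure to solution thermodynamics.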
Empirical improvements for estimating earthquake response spectra with random‐vibration theory
Boore, David; Thompson, Eric M.
2012-01-01
The stochastic method of ground‐motion simulation is often used in combination with the random‐vibration theory to directly compute ground‐motion intensity measures, thereby bypassing the more computationally intensive time‐domain simulations. Key to the application of random‐vibration theory to simulate response spectra is determining the duration (Drms) used in computing the root‐mean‐square oscillator response. Boore and Joyner (1984) originally proposed an equation for Drms , which was improved upon by Liu and Pezeshk (1999). Though these equations are both substantial improvements over using the duration of the ground‐motion excitation for Drms , we document systematic differences between the ground‐motion intensity measures derived from the random‐vibration and time‐domain methods for both of these Drms equations. These differences are generally less than 10% for most magnitudes, distances, and periods of engineering interest. Given the systematic nature of the differences, however, we feel that improved equations are warranted. We empirically derive new equations from time‐domain simulations for eastern and western North America seismological models. The new equations improve the random‐vibration simulations over a wide range of magnitudes, distances, and oscillator periods.
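The core random-vibration shortcut is to estimate a response peak from an rms level and a peak factor instead of simulating time histories. The sketch below demonstrates that idea on a plain stationary Gaussian signal; the classic sqrt(2 ln N) peak factor and the synthetic signal are illustrative, not the Boore-Joyner or Liu-Pezeshk Drms formulations.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
x = rng.standard_normal(n)  # stand-in for a stationary oscillator response

rms = np.sqrt(np.mean(x ** 2))
peak_estimate = rms * np.sqrt(2.0 * np.log(n))  # classic Gaussian peak factor
peak_actual = np.max(np.abs(x))

print(peak_actual / peak_estimate)  # typically close to 1 for long records
```

In the stochastic method proper, the rms comes from the Fourier amplitude spectrum and the duration Drms, which is exactly where the improved equations of this paper enter.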
2017-02-13
Atomistic- and Meso-Scale Computational Simulations for Developing Multi-Timescale Theory (report AFRL-RV-PS-TR-2016-0161, Air Force Research Laboratory, Kirtland AFB, NM 87117-5776). Approved for public release; distribution is unlimited.
Cyberpsychology: a human-interaction perspective based on cognitive modeling.
Emond, Bruno; West, Robert L
2003-10-01
This paper argues for the relevance of cognitive modeling and cognitive architectures to cyberpsychology. From a human-computer interaction point of view, cognitive modeling can benefit both theory and model building and the design and evaluation of socio-technical systems for usability. Cognitive modeling research applied to human-computer interaction has two complementary objectives: (1) to develop theories and computational models of human interactive behavior with information and collaborative technologies, and (2) to use the computational models as building blocks for the design, implementation, and evaluation of interactive technologies. From the perspective of building theories and models, cognitive modeling offers the possibility of anchoring cyberpsychology theories and models in cognitive architectures. From the perspective of the design and evaluation of socio-technical systems, cognitive models can provide the basis for simulated users, which can play an important role in usability testing. As an example of the application of cognitive modeling to technology design, the paper presents a simulation of interactive behavior with five different adaptive menu algorithms: random, fixed, stacked, frequency based, and activation based. Results of the simulation indicate that fixed menu positions seem to offer the best support for classification-like tasks such as filing e-mails. This research is part of the Human-Computer Interaction and Broadband Visual Communication research programs at the National Research Council of Canada, in collaboration with the Carleton Cognitive Modeling Lab at Carleton University.
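A simulated-user comparison of menu algorithms can be boiled down to a few lines. The sketch below contrasts a fixed menu with a frequency-ordered adaptive menu under a naive position-based cost model; the item set, task stream, and cost model are our own illustrative assumptions, not the paper's simulation.

```python
def simulate(tasks, adaptive):
    """Total search cost (menu position of each selected item) over a task
    stream, for a fixed menu versus a frequency-ordered adaptive menu."""
    items = ["a", "b", "c", "d", "e"]
    counts = {i: 0 for i in items}
    total = 0
    for t in tasks:
        menu = sorted(items, key=lambda i: -counts[i]) if adaptive else items
        total += menu.index(t)  # cost = position of the target in the menu
        counts[t] += 1
    return total

tasks = ["e"] * 10 + ["a"] * 2
print(simulate(tasks, adaptive=False), simulate(tasks, adaptive=True))
```

Note that this toy counts only pointing cost, so the adaptive menu wins; the paper's richer cognitive model also captures the relearning cost of shifting positions, which is why fixed menus came out best for classification-like tasks.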
ERIC Educational Resources Information Center
Lorton, Paul, Jr.
EXPER-SIM (Experiment Simulation) is an instructional approach (with supporting computer programs) which allows an instructor to build a theory based model of how data would occur if an experiment were actually conducted in a world where the theory held true. The LESS version of EXPER-SIM was adapted to run on the Hewlett-Packard 2000E timesharing…
Datta, Subhra; Ghosal, Sandip; Patankar, Neelesh A
2006-02-01
Electroosmotic flow in a straight micro-channel of rectangular cross-section is computed numerically for several situations where the wall zeta-potential is not constant but has a specified spatial variation. The results of the computation are compared with an earlier published asymptotic theory based on the lubrication approximation: the assumption that any axial variations take place on a long length scale compared to a characteristic channel width. The computational results are found to be in excellent agreement with the theory even when the scale of axial variations is comparable to the channel width. In the opposite limit when the wavelength of fluctuations is much shorter than the channel width, the lubrication theory fails to describe the solution either qualitatively or quantitatively. In this short wave limit the solution is well described by Ajdari's theory for electroosmotic flow between infinite parallel plates (Ajdari, A., Phys. Rev. E 1996, 53, 4996-5005.) The infinitely thin electric double layer limit is assumed in the theory as well as in the simulation.
Physics Computing '92: Proceedings of the 4th International Conference
NASA Astrophysics Data System (ADS)
de Groot, Robert A.; Nadrchal, Jaroslav
1993-04-01
The Table of Contents for the book is as follows: * Preface * INVITED PAPERS * Ab Initio Theoretical Approaches to the Structural, Electronic and Vibrational Properties of Small Clusters and Fullerenes: The State of the Art * Neural Multigrid Methods for Gauge Theories and Other Disordered Systems * Multicanonical Monte Carlo Simulations * On the Use of the Symbolic Language Maple in Physics and Chemistry: Several Examples * Nonequilibrium Phase Transitions in Catalysis and Population Models * Computer Algebra, Symmetry Analysis and Integrability of Nonlinear Evolution Equations * The Path-Integral Quantum Simulation of Hydrogen in Metals * Digital Optical Computing: A New Approach of Systolic Arrays Based on Coherence Modulation of Light and Integrated Optics Technology * Molecular Dynamics Simulations of Granular Materials * Numerical Implementation of a K.A.M. Algorithm * Quasi-Monte Carlo, Quasi-Random Numbers and Quasi-Error Estimates * What Can We Learn from QMC Simulations * Physics of Fluctuating Membranes * Plato, Apollonius, and Klein: Playing with Spheres * Steady States in Nonequilibrium Lattice Systems * CONVODE: A REDUCE Package for Differential Equations * Chaos in Coupled Rotators * Symplectic Numerical Methods for Hamiltonian Problems * Computer Simulations of Surfactant Self Assembly * High-dimensional and Very Large Cellular Automata for Immunological Shape Space * A Review of the Lattice Boltzmann Method * Electronic Structure of Solids in the Self-interaction Corrected Local-spin-density Approximation * Dedicated Computers for Lattice Gauge Theory Simulations * Physics Education: A Survey of Problems and Possible Solutions * Parallel Computing and Electronic-Structure Theory * High Precision Simulation Techniques for Lattice Field Theory * CONTRIBUTED PAPERS * Case Study of Microscale Hydrodynamics Using Molecular Dynamics and Lattice Gas Methods * Computer Modelling of the Structural and Electronic Properties of the Supported Metal Catalysis * 
Ordered Particle Simulations for Serial and MIMD Parallel Computers * "NOLP" -- Program Package for Laser Plasma Nonlinear Optics * Algorithms to Solve Nonlinear Least Square Problems * Distribution of Hydrogen Atoms in Pd-H Computed by Molecular Dynamics * A Ray Tracing of Optical System for Protein Crystallography Beamline at Storage Ring-SIBERIA-2 * Vibrational Properties of a Pseudobinary Linear Chain with Correlated Substitutional Disorder * Application of the Software Package Mathematica in Generalized Master Equation Method * Linelist: An Interactive Program for Analysing Beam-foil Spectra * GROMACS: A Parallel Computer for Molecular Dynamics Simulations * GROMACS Method of Virial Calculation Using a Single Sum * The Interactive Program for the Solution of the Laplace Equation with the Elimination of Singularities for Boundary Functions * Random-Number Generators: Testing Procedures and Comparison of RNG Algorithms * Micro-TOPIC: A Tokamak Plasma Impurities Code * Rotational Molecular Scattering Calculations * Orthonormal Polynomial Method for Calibrating of Cryogenic Temperature Sensors * Frame-based System Representing Basis of Physics * The Role of Massively Data-parallel Computers in Large Scale Molecular Dynamics Simulations * Short-range Molecular Dynamics on a Network of Processors and Workstations * An Algorithm for Higher-order Perturbation Theory in Radiative Transfer Computations * Hydrostochastics: The Master Equation Formulation of Fluid Dynamics * HPP Lattice Gas on Transputers and Networked Workstations * Study on the Hysteresis Cycle Simulation Using Modeling with Different Functions on Intervals * Refined Pruning Techniques for Feed-forward Neural Networks * Random Walk Simulation of the Motion of Transient Charges in Photoconductors * The Optical Hysteresis in Hydrogenated Amorphous Silicon * Diffusion Monte Carlo Analysis of Modern Interatomic Potentials for He * A Parallel Strategy for Molecular Dynamics Simulations of Polar Liquids on 
Transputer Arrays * Distribution of Ions Reflected on Rough Surfaces * The Study of Step Density Distribution During Molecular Beam Epitaxy Growth: Monte Carlo Computer Simulation * Towards a Formal Approach to the Construction of Large-scale Scientific Applications Software * Correlated Random Walk and Discrete Modelling of Propagation through Inhomogeneous Media * Teaching Plasma Physics Simulation * A Theoretical Determination of the Au-Ni Phase Diagram * Boson and Fermion Kinetics in One-dimensional Lattices * Computational Physics Course on the Technical University * Symbolic Computations in Simulation Code Development and Femtosecond-pulse Laser-plasma Interaction Studies * Computer Algebra and Integrated Computing Systems in Education of Physical Sciences * Coordinated System of Programs for Undergraduate Physics Instruction * Program Package MIRIAM and Atomic Physics of Extreme Systems * High Energy Physics Simulation on the T_Node * The Chapman-Kolmogorov Equation as Representation of Huygens' Principle and the Monolithic Self-consistent Numerical Modelling of Lasers * Authoring System for Simulation Developments * Molecular Dynamics Study of Ion Charge Effects in the Structure of Ionic Crystals * A Computational Physics Introductory Course * Computer Calculation of Substrate Temperature Field in MBE System * Multimagnetical Simulation of the Ising Model in Two and Three Dimensions * Failure of the CTRW Treatment of the Quasicoherent Excitation Transfer * Implementation of a Parallel Conjugate Gradient Method for Simulation of Elastic Light Scattering * Algorithms for Study of Thin Film Growth * Algorithms and Programs for Physics Teaching in Romanian Technical Universities * Multicanonical Simulation of 1st order Transitions: Interface Tension of the 2D 7-State Potts Model * Two Numerical Methods for the Calculation of Periodic Orbits in Hamiltonian Systems * Chaotic Behavior in a Probabilistic Cellular Automata? 
* Wave Optics Computing by a Networked-based Vector Wave Automaton * Tensor Manipulation Package in REDUCE * Propagation of Electromagnetic Pulses in Stratified Media * The Simple Molecular Dynamics Model for the Study of Thermalization of the Hot Nucleon Gas * Electron Spin Polarization in PdCo Alloys Calculated by KKR-CPA-LSD Method * Simulation Studies of Microscopic Droplet Spreading * A Vectorizable Algorithm for the Multicolor Successive Overrelaxation Method * Tetragonality of the CuAu I Lattice and Its Relation to Electronic Specific Heat and Spin Susceptibility * Computer Simulation of the Formation of Metallic Aggregates Produced by Chemical Reactions in Aqueous Solution * Scaling in Growth Models with Diffusion: A Monte Carlo Study * The Nucleus as the Mesoscopic System * Neural Network Computation as Dynamic System Simulation * First-principles Theory of Surface Segregation in Binary Alloys * Data Smooth Approximation Algorithm for Estimating the Temperature Dependence of the Ice Nucleation Rate * Genetic Algorithms in Optical Design * Application of 2D-FFT in the Study of Molecular Exchange Processes by NMR * Advanced Mobility Model for Electron Transport in P-Si Inversion Layers * Computer Simulation for Film Surfaces and its Fractal Dimension * Parallel Computation Techniques and the Structure of Catalyst Surfaces * Educational SW to Teach Digital Electronics and the Corresponding Text Book * Primitive Trinomials (Mod 2) Whose Degree is a Mersenne Exponent * Stochastic Modelisation and Parallel Computing * Remarks on the Hybrid Monte Carlo Algorithm for the φ4 Model * An Experimental Computer Assisted Workbench for Physics Teaching * A Fully Implicit Code to Model Tokamak Plasma Edge Transport * EXPFIT: An Interactive Program for Automatic Beam-foil Decay Curve Analysis * Mapping Technique for Solving General, 1-D Hamiltonian Systems * Freeway Traffic, Cellular Automata, and Some (Self-Organizing) Criticality * Photonuclear Yield Analysis by Dynamic 
Programming * Incremental Representation of the Simply Connected Planar Curves * Self-convergence in Monte Carlo Methods * Adaptive Mesh Technique for Shock Wave Propagation * Simulation of Supersonic Coronal Streams and Their Interaction with the Solar Wind * The Nature of Chaos in Two Systems of Ordinary Nonlinear Differential Equations * Considerations of a Window-shopper * Interpretation of Data Obtained by RTP 4-Channel Pulsed Radar Reflectometer Using a Multi Layer Perceptron * Statistics of Lattice Bosons for Finite Systems * Fractal Based Image Compression with Affine Transformations * Algorithmic Studies on Simulation Codes for Heavy-ion Reactions * An Energy-Wise Computer Simulation of DNA-Ion-Water Interactions Explains the Abnormal Structure of Poly[d(A)]:Poly[d(T)] * Computer Simulation Study of Kosterlitz-Thouless-Like Transitions * Problem-oriented Software Package GUN-EBT for Computer Simulation of Beam Formation and Transport in Technological Electron-Optical Systems * Parallelization of a Boundary Value Solver and its Application in Nonlinear Dynamics * The Symbolic Classification of Real Four-dimensional Lie Algebras * Short, Singular Pulses Generation by a Dye Laser at Two Wavelengths Simultaneously * Quantum Monte Carlo Simulations of the Apex-Oxygen-Model * Approximation Procedures for the Axial Symmetric Static Einstein-Maxwell-Higgs Theory * Crystallization on a Sphere: Parallel Simulation on a Transputer Network * FAMULUS: A Software Product (also) for Physics Education * MathCAD vs. 
FAMULUS -- A Brief Comparison * First-principles Dynamics Used to Study Dissociative Chemisorption * A Computer Controlled System for Crystal Growth from Melt * A Time Resolved Spectroscopic Method for Short Pulsed Particle Emission * Green's Function Computation in Radiative Transfer Theory * Random Search Optimization Technique for One-criteria and Multi-criteria Problems * Hartley Transform Applications to Thermal Drift Elimination in Scanning Tunneling Microscopy * Algorithms of Measuring, Processing and Interpretation of Experimental Data Obtained with Scanning Tunneling Microscope * Time-dependent Atom-surface Interactions * Local and Global Minima on Molecular Potential Energy Surfaces: An Example of N3 Radical * Computation of Bifurcation Surfaces * Symbolic Computations in Quantum Mechanics: Energies in Next-to-solvable Systems * A Tool for RTP Reactor and Lamp Field Design * Modelling of Particle Spectra for the Analysis of Solid State Surface * List of Participants
ERIC Educational Resources Information Center
Robison, Elizabeth Sharon
2012-01-01
Nursing education is experiencing a transition in how students are exposed to clinical situations. Technology, specifically human patient computer simulation, is replacing human exposure in clinical education (Nehring, 2010b). Kaakinen and Arwood (2009) discuss the need to apply learning theories to instructional designs involving simulation for…
The 6th International Conference on Computer Science and Computational Mathematics (ICCSCM 2017)
NASA Astrophysics Data System (ADS)
2017-09-01
The ICCSCM 2017 (The 6th International Conference on Computer Science and Computational Mathematics) aimed to provide a platform to discuss computer science and mathematics-related issues, including Algebraic Geometry, Algebraic Topology, Approximation Theory, Calculus of Variations, Category Theory; Homological Algebra, Coding Theory, Combinatorics, Control Theory, Cryptology, Geometry, Difference and Functional Equations, Discrete Mathematics, Dynamical Systems and Ergodic Theory, Field Theory and Polynomials, Fluid Mechanics and Solid Mechanics, Fourier Analysis, Functional Analysis, Functions of a Complex Variable, Fuzzy Mathematics, Game Theory, General Algebraic Systems, Graph Theory, Group Theory and Generalizations, Image Processing, Signal Processing and Tomography, Information Fusion, Integral Equations, Lattices, Algebraic Structures, Linear and Multilinear Algebra; Matrix Theory, Mathematical Biology and Other Natural Sciences, Mathematical Economics and Financial Mathematics, Mathematical Physics, Measure Theory and Integration, Neutrosophic Mathematics, Number Theory, Numerical Analysis, Operations Research, Optimization, Operator Theory, Ordinary and Partial Differential Equations, Potential Theory, Real Functions, Rings and Algebras, Statistical Mechanics, Structure of Matter, Topological Groups, Wavelets and Wavelet Transforms, 3G/4G Network Evolutions, Ad-Hoc, Mobile, Wireless Networks and Mobile Computing, Agent Computing & Multi-Agents Systems, All topics related to Image/Signal Processing, Any topics related to Computer Networks, Any topics related to ISO SC-27 and SC-17 standards, Any topics related to PKI (Public Key Infrastructures), Artificial Intelligence (A.I.) & Pattern/Image Recognition, Authentication/Authorization Issues, Biometric authentication and algorithms, CDMA/GSM Communication Protocols, Combinatorics, Graph Theory, and Analysis of Algorithms, Cryptography and Foundation of Computer Security, Data Base (D.B.) 
Management & Information Retrievals, Data Mining, Web Image Mining, & Applications, Defining Spectrum Rights and Open Spectrum Solutions, E-Commerce, Ubiquitous, RFID, Applications, Fingerprint/Hand/Biometrics Recognitions and Technologies, Foundations of High-performance Computing, IC-card Security, OTP, and Key Management Issues, IDS/Firewall, Anti-Spam mail, Anti-virus issues, Mobile Computing for E-Commerce, Network Security Applications, Neural Networks and Biomedical Simulations, Quality of Services and Communication Protocols, Quantum Computing, Coding, and Error Controls, Satellite and Optical Communication Systems, Theory of Parallel Processing and Distributed Computing, Virtual Visions, 3-D Object Retrievals, & Virtual Simulations, Wireless Access Security, etc. The success of ICCSCM 2017 is reflected in the papers received from authors around the world, allowing a highly multinational and multicultural exchange of ideas and experience. The accepted papers of ICCSCM 2017 are published in this book. Please check http://www.iccscm.com for further news. A conference such as ICCSCM 2017 can only succeed through a team effort, so we thank the International Technical Committee and the Reviewers for their efforts in the review process as well as their valuable advice. We are thankful to all those who contributed to the success of ICCSCM 2017. The Secretary
Two inviscid computational simulations of separated flow about airfoils
NASA Technical Reports Server (NTRS)
Barnwell, R. W.
1976-01-01
Two inviscid computational simulations of separated flow about airfoils are described. The basic computational method is the line relaxation finite-difference method. Viscous separation is approximated with inviscid free-streamline separation. The point of separation is specified, and the pressure in the separation region is calculated. In the first simulation, the empiricism of constant pressure in the separation region is employed. This empiricism is easier to implement with the present method than with singularity methods. In the second simulation, acoustic theory is used to determine the pressure in the separation region. The results of both simulations are compared with experiment.
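The line relaxation finite-difference method named in this abstract can be illustrated on a simpler problem. The sketch below is a generic illustration (Laplace's equation on a rectangle with hypothetical boundary values, not the NASA airfoil code): each vertical grid line is solved implicitly with the Thomas tridiagonal algorithm while sweeping line by line across the grid.

```python
def line_relax_laplace(nx, ny, sweeps=200):
    """Solve Laplace's equation on an nx-by-ny grid (u = 1 on the top edge,
    0 on the others) by line relaxation: each vertical grid line is solved
    implicitly with the Thomas tridiagonal algorithm, sweeping line by line."""
    u = [[0.0] * ny for _ in range(nx)]
    for i in range(nx):
        u[i][ny - 1] = 1.0  # top boundary
    for _ in range(sweeps):
        for i in range(1, nx - 1):
            # Interior of line i satisfies:
            # u[i][j-1] - 4*u[i][j] + u[i][j+1] = -(u[i-1][j] + u[i+1][j])
            n = ny - 2
            a, b, c = [1.0] * n, [-4.0] * n, [1.0] * n
            d = [-(u[i - 1][j] + u[i + 1][j]) for j in range(1, ny - 1)]
            d[0] -= u[i][0]        # known bottom boundary value
            d[-1] -= u[i][ny - 1]  # known top boundary value
            for k in range(1, n):  # Thomas algorithm: forward elimination
                m = a[k] / b[k - 1]
                b[k] -= m * c[k - 1]
                d[k] -= m * d[k - 1]
            x = [0.0] * n          # back substitution
            x[-1] = d[-1] / b[-1]
            for k in range(n - 2, -1, -1):
                x[k] = (d[k] - c[k] * x[k + 1]) / b[k]
            for j in range(1, ny - 1):
                u[i][j] = x[j - 1]
    return u

u = line_relax_laplace(12, 12)  # interior values converge to the harmonic solution
```

Solving a whole line at once propagates boundary information across the grid faster than point-by-point relaxation, which is the appeal of the line relaxation approach the abstract mentions.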
NASA Astrophysics Data System (ADS)
Marzari, Nicola
The last 30 years have seen the steady and exhilarating development of powerful quantum-simulation engines for extended systems, dedicated to the solution of the Kohn-Sham equations of density-functional theory, often augmented by density-functional perturbation theory, many-body perturbation theory, time-dependent density-functional theory, dynamical mean-field theory, and quantum Monte Carlo. Their implementation on massively parallel architectures, now leveraging also GPUs and accelerators, has started a massive effort in the prediction from first principles of many or of complex materials properties, leading the way to the exascale through the combination of HPC (high-performance computing) and HTC (high-throughput computing). Challenges and opportunities abound: complementing hardware and software investments and design; developing the materials' informatics infrastructure needed to encode knowledge into complex protocols and workflows of calculations; managing and curating data; resisting the complacency that we have already reached the predictive accuracy needed for materials design, or a robust level of verification of the different quantum engines. In this talk I will provide an overview of these challenges, with the ultimate prize being the computational understanding, prediction, and design of properties and performance for novel or complex materials and devices.
Coping with the Stigma of Mental Illness: Empirically-Grounded Hypotheses from Computer Simulations
ERIC Educational Resources Information Center
Kroska, Amy; Har, Sarah K.
2011-01-01
This research demonstrates how affect control theory and its computer program, "Interact", can be used to develop empirically-grounded hypotheses regarding the connection between cultural labels and behaviors. Our demonstration focuses on propositions in the modified labeling theory of mental illness. According to the MLT, negative societal…
Investigations in Computer-Aided Instruction and Computer-Aided Controls. Final Report.
ERIC Educational Resources Information Center
Rosenberg, R.C.; And Others
These research projects, designed to delve into certain relationships between humans and computers, are focused on computer-assisted instruction and on man-computer interaction. One study demonstrates that within the limits of formal engineering theory, a computer simulated laboratory (Dynamic Systems Laboratory) can be built in which freshmen…
Computer simulations of rapid granular flows of spheres interacting with a flat, frictional boundary
DOE Office of Scientific and Technical Information (OSTI.GOV)
Louge, M.Y.
This paper employs computer simulations to test the theory of Jenkins [J. Applied Mech. 59, 120 (1992)] for the interaction between a rapid granular flow of spheres and a flat, frictional wall. It examines the boundary conditions that relate the shear stress and energy flux at the wall to the normal stress, slip velocity, and fluctuation energy, and to the parameters that characterize a collision. It is found that while the theory captures the trends of the boundary conditions at low friction, it does not anticipate their behavior at large friction. A critical evaluation of Jenkins' assumptions suggests where his theory may be improved.
Determination of partial molar volumes from free energy perturbation theory.
Vilseck, Jonah Z; Tirado-Rives, Julian; Jorgensen, William L
2015-04-07
Partial molar volume is an important thermodynamic property that gives insights into molecular size and intermolecular interactions in solution. Theoretical frameworks for determining the partial molar volume (V°) of a solvated molecule generally apply Scaled Particle Theory or Kirkwood-Buff theory. With the current abilities to perform long molecular dynamics and Monte Carlo simulations, more direct methods are gaining popularity, such as computing V° directly as the difference in computed volume from two simulations, one with a solute present and another without. Thermodynamically, V° can also be determined as the pressure derivative of the free energy of solvation in the limit of infinite dilution. Both approaches are considered herein with the use of free energy perturbation (FEP) calculations to compute the necessary free energies of solvation at elevated pressures. Absolute and relative partial molar volumes are computed for benzene and benzene derivatives using the OPLS-AA force field. The mean unsigned error for all molecules is 2.8 cm³ mol⁻¹. The present methodology should find use in many contexts such as the development and testing of force fields for use in computer simulations of organic and biomolecular systems, as a complement to related experimental studies, and to develop a deeper understanding of solute-solvent interactions.
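The pressure-derivative route described in this abstract can be sketched numerically. The free-energy inputs below are hypothetical, chosen only to give a value of the right magnitude for benzene in water (experimentally about 83 cm³/mol); they are not the paper's FEP results.

```python
def partial_molar_volume(dg_low, dg_high, p_low, p_high):
    """Finite-difference estimate of V° = dΔG_solv/dP in cm^3/mol, from
    solvation free energies (kcal/mol) at two pressures (atm)."""
    kcal_per_mol_atm_to_cm3 = 4184.0 / 101325.0 * 1e6  # unit conversion
    slope = (dg_high - dg_low) / (p_high - p_low)      # kcal mol^-1 atm^-1
    return slope * kcal_per_mol_atm_to_cm3

# Hypothetical FEP outputs at two pressures (made-up numbers):
v0 = partial_molar_volume(dg_low=-0.90, dg_high=-0.85, p_low=1.0, p_high=25.0)
# v0 ≈ 86 cm^3/mol with these illustrative inputs
```

In practice the derivative would be taken over several pressures with error estimates, but the arithmetic is exactly this simple slope.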
A Review of Computer-Based Human Behavior Representations and Their Relation to Military Simulations
2003-08-01
described by Emery and Trist (1960), activity theory introduced by Vygotsky in the 1930s and formalized by Leont’ev (1979), and situated cognition theory. Models reviewed include Adaptive Resonance Theory (ART) and Cognitive Complexity Theory (CCT).
Evaluation of the chondral modeling theory using FE-simulation and numeric shape optimization
Plochocki, Jeffrey H; Ward, Carol V; Smith, Douglas E
2009-01-01
The chondral modeling theory proposes that hydrostatic pressure within articular cartilage regulates joint size, shape, and congruence through regional variations in rates of tissue proliferation. The purpose of this study is to develop a computational model using a nonlinear two-dimensional finite element analysis in conjunction with numeric shape optimization to evaluate the chondral modeling theory. The model employed in this analysis is generated from an MR image of the medial portion of the tibiofemoral joint in a subadult male. Stress-regulated morphological changes are simulated until skeletal maturity and evaluated against the chondral modeling theory. The computed results are found to support the chondral modeling theory. The shape-optimized model exhibits increased joint congruence, broader stress distributions in articular cartilage, and a relative decrease in joint diameter. The results for the computational model correspond well with experimental data and provide valuable insights into the mechanical determinants of joint growth. The model also provides a crucial first step toward developing a comprehensive model that can be employed to test the influence of mechanical variables on joint conformation. PMID:19438771
Adaptive Core Simulation Employing Discrete Inverse Theory - Part I: Theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abdel-Khalik, Hany S.; Turinsky, Paul J.
2005-07-15
Use of adaptive simulation is intended to improve the fidelity and robustness of important core attribute predictions such as core power distribution, thermal margins, and core reactivity. Adaptive simulation utilizes a selected set of past and current reactor measurements of reactor observables, i.e., in-core instrumentation readings, to adapt the simulation in a meaningful way. A meaningful adaption will result in high-fidelity and robust adapted core simulator models. To perform adaption, we propose an inverse theory approach in which the multitudes of input data to core simulators, i.e., reactor physics and thermal-hydraulic data, are to be adjusted to improve agreement with measured observables while keeping core simulator models unadapted. At first glance, devising such adaption for typical core simulators with millions of input and observables data would spawn not only several prohibitive challenges but also numerous disparaging concerns. The challenges include the computational burdens of the sensitivity-type calculations required to construct Jacobian operators for the core simulator models. Also, the computational burdens of the uncertainty-type calculations required to estimate the uncertainty information of core simulator input data present a demanding challenge. The concerns, however, are mainly related to the reliability of the adjusted input data. The methodologies of adaptive simulation are well established in the literature of data adjustment. We adopt the same general framework for data adjustment; however, we refrain from solving the fundamental adjustment equations in a conventional manner. We demonstrate the use of our so-called Efficient Subspace Methods (ESMs) to overcome the computational and storage burdens associated with the core adaption problem. We illustrate the successful use of ESM-based adaptive techniques for a typical boiling water reactor core simulator adaption problem.
Applications of a general random-walk theory for confined diffusion.
Calvo-Muñoz, Elisa M; Selvan, Myvizhi Esai; Xiong, Ruichang; Ojha, Madhusudan; Keffer, David J; Nicholson, Donald M; Egami, Takeshi
2011-01-01
A general random walk theory for diffusion in the presence of nanoscale confinement is developed and applied. The random-walk theory contains two parameters describing confinement: a cage size and a cage-to-cage hopping probability. The theory captures the correct nonlinear dependence of the mean square displacement (MSD) on observation time for intermediate times. Because of its simplicity, the theory also has modest computational requirements and is thus able to simulate systems with very low diffusivities for sufficiently long time to reach the infinite-time-limit regime where the Einstein relation can be used to extract the self-diffusivity. The theory is applied to three practical cases in which the degree of order in confinement varies. The three systems include diffusion of (i) polyatomic molecules in metal-organic frameworks, (ii) water in proton exchange membranes, and (iii) liquid and glassy iron. For all three cases, the comparison between theory and the results of molecular dynamics (MD) simulations indicates that the theory can describe the observed diffusion behavior with a small fraction of the computational expense. The confined-random-walk theory fit to the MSDs of very short MD simulations is capable of accurately reproducing the MSDs of much longer MD simulations. Furthermore, the values of the parameter for cage size correspond to the physical dimensions of the systems and the cage-to-cage hopping probability corresponds to the activation barrier for diffusion, indicating that the two parameters in the theory are not simply fitted values but correspond to real properties of the physical system.
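The two-parameter picture (a cage size and a hop probability) can be simulated directly. The sketch below is a toy 1D illustration of the qualitative MSD behavior, with made-up parameter values; it is not the paper's model or code.

```python
import random

def confined_walk_msd(n_walkers, n_steps, cage_size, hop_prob, seed=1):
    """1D cage-hopping walk: each walker jitters uniformly inside its cage
    and, with probability hop_prob per step, hops to a neighboring cage."""
    rng = random.Random(seed)
    msd = [0.0] * n_steps
    for _ in range(n_walkers):
        cage = 0
        for t in range(n_steps):
            if rng.random() < hop_prob:
                cage += rng.choice((-1, 1))
            # position = cage center + uniform jitter within the cage
            x = cage * cage_size + rng.uniform(-cage_size / 2, cage_size / 2)
            msd[t] += x * x
    return [m / n_walkers for m in msd]

# Short times: MSD plateaus near the intra-cage variance (L^2/12);
# long times: MSD grows linearly (Einstein regime, slope ~ hop_prob * L^2).
msd = confined_walk_msd(n_walkers=2000, n_steps=200, cage_size=1.0, hop_prob=0.05)
```

The crossover from the plateau to linear growth is the "nonlinear dependence of the MSD on observation time for intermediate times" that the abstract describes.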
Some theoretical issues on computer simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barrett, C.L.; Reidys, C.M.
1998-02-01
The subject of this paper is the development of mathematical foundations for a theory of simulation. Sequentially updated cellular automata (sCA) over arbitrary graphs are employed as a paradigmatic framework. In the development of the theory, the authors focus on the properties of causal dependencies among local mappings in a simulation. The main object of study is the mapping between a graph representing the dependencies among entities of a simulation and a graph representing the equivalence classes of systems obtained by all possible updates.
ERIC Educational Resources Information Center
Kelderman, Henk
1992-01-01
Describes algorithms used in the computer program LOGIMO for obtaining maximum likelihood estimates of the parameters in loglinear models. These algorithms are also useful for the analysis of loglinear item-response theory models. Presents modified versions of the iterative proportional fitting and Newton-Raphson algorithms. Simulated data…
First-principles simulations of heat transport
NASA Astrophysics Data System (ADS)
Puligheddu, Marcello; Gygi, Francois; Galli, Giulia
2017-11-01
Advances in understanding heat transport in solids were recently reported by both experiment and theory. However, an efficient and predictive quantum simulation framework to investigate thermal properties of solids, with the same complexity as classical simulations, has not yet been developed. Here we present a method to compute the thermal conductivity of solids by performing ab initio molecular dynamics at close to equilibrium conditions, which only requires calculations of first-principles trajectories and atomic forces, thus avoiding direct computation of heat currents and energy densities. In addition, the method requires much shorter sequential simulation times than ordinary molecular dynamics techniques, making it applicable within density functional theory. We discuss results for a representative oxide, MgO, at different temperatures and for ordered and nanostructured morphologies, showing the performance of the method in different conditions.
Cosmic ray diffusion: Report of the Workshop in Cosmic Ray Diffusion Theory
NASA Technical Reports Server (NTRS)
Birmingham, T. J.; Jones, F. C.
1975-01-01
A workshop in cosmic ray diffusion theory was held at Goddard Space Flight Center on May 16-17, 1974. Topics discussed and summarized are: (1) cosmic ray measurements as related to diffusion theory; (2) quasi-linear theory, nonlinear theory, and computer simulation of cosmic ray pitch-angle diffusion; and (3) magnetic field fluctuation measurements as related to diffusion theory.
Simulating chemistry using quantum computers.
Kassal, Ivan; Whitfield, James D; Perdomo-Ortiz, Alejandro; Yung, Man-Hong; Aspuru-Guzik, Alán
2011-01-01
The difficulty of simulating quantum systems, well known to quantum chemists, prompted the idea of quantum computation. One can avoid the steep scaling associated with the exact simulation of increasingly large quantum systems on conventional computers, by mapping the quantum system to another, more controllable one. In this review, we discuss to what extent the ideas in quantum computation, now a well-established field, have been applied to chemical problems. We describe algorithms that achieve significant advantages for the electronic-structure problem, the simulation of chemical dynamics, protein folding, and other tasks. Although theory is still ahead of experiment, we outline recent advances that have led to the first chemical calculations on small quantum information processors.
Control Theory and Statistical Generalizations.
ERIC Educational Resources Information Center
Powers, William T.
1990-01-01
Contrasts modeling methods in control theory to the methods of statistical generalizations in empirical studies of human or animal behavior. Presents a computer simulation that predicts behavior based on variables (effort and rewards) determined by the invariable (desired reward). Argues that control theory methods better reflect relationships to…
Nasrabad, Afshin Eskandari; Laghaei, Rozita; Eu, Byung Chan
2005-04-28
In previous work on the density fluctuation theory of transport coefficients of liquids, it was necessary to use empirical self-diffusion coefficients to calculate the transport coefficients (e.g., shear viscosity of carbon dioxide). In this work, the necessity of empirical input of the self-diffusion coefficients in the calculation of shear viscosity is removed, and the theory is thus made a self-contained molecular theory of transport coefficients of liquids, although it contains an empirical parameter in the subcritical regime. The required self-diffusion coefficients of liquid carbon dioxide are calculated by using the modified free volume theory, in which the generic van der Waals equation of state and Monte Carlo simulations are combined to accurately compute the mean free volume by means of statistical mechanics. They have been computed as a function of density along four different isotherms and isobars. A Lennard-Jones site-site interaction potential was used to model the molecular carbon dioxide interaction. The density and temperature dependence of the theoretical self-diffusion coefficients are shown to be in excellent agreement with experimental data when the minimum critical free volume is identified with the molecular volume. The self-diffusion coefficients thus computed are then used to compute the density and temperature dependence of the shear viscosity of liquid carbon dioxide by employing the density fluctuation theory formula for shear viscosity as reported in an earlier paper (J. Chem. Phys. 2000, 112, 7118). The theoretical shear viscosity is shown to be robust and yields excellent density and temperature dependence for carbon dioxide. The pair correlation function appearing in the theory has been computed by Monte Carlo simulations.
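The free-volume route to self-diffusion used here has the Cohen-Turnbull form, D = A exp(-γ v*/v_f). The sketch below only illustrates that functional form; the prefactor A, overlap factor γ, and volume values are arbitrary assumptions, not quantities from the paper (which obtains v_f from the generic van der Waals equation of state and Monte Carlo simulation).

```python
import math

def cohen_turnbull_diffusion(prefactor, v_star, v_free, gamma=1.0):
    """Cohen-Turnbull free-volume form: D = A * exp(-gamma * v* / v_f)."""
    return prefactor * math.exp(-gamma * v_star / v_free)

# Hypothetical prefactor and volumes (arbitrary but consistent units):
d_dense = cohen_turnbull_diffusion(prefactor=1e-8, v_star=30.0, v_free=5.0)
d_open = cohen_turnbull_diffusion(prefactor=1e-8, v_star=30.0, v_free=15.0)
# Diffusivity rises steeply as the mean free volume per molecule grows.
```

The exponential sensitivity to v_f is why an accurate statistical-mechanical mean free volume, rather than an empirical fit, matters for a self-contained theory.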
Computational Science: A Research Methodology for the 21st Century
NASA Astrophysics Data System (ADS)
Orbach, Raymond L.
2004-03-01
Computational simulation - a means of scientific discovery that employs computer systems to simulate a physical system according to laws derived from theory and experiment - has attained peer status with theory and experiment. Important advances in basic science are accomplished by a new "sociology" for ultrascale scientific computing capability (USSCC), a fusion of sustained advances in scientific models, mathematical algorithms, computer architecture, and scientific software engineering. Expansion of current capabilities by factors of 100 - 1000 opens up new vistas for scientific discovery: long-term climatic variability and change, macroscopic material design from correlated behavior at the nanoscale, design and optimization of magnetic confinement fusion reactors, strong interactions on a computational lattice through quantum chromodynamics, and stellar explosions and element production. The "virtual prototype," made possible by this expansion, can markedly reduce time-to-market for industrial applications such as jet engines and safer, cleaner, more fuel-efficient cars. In order to develop USSCC, the National Energy Research Scientific Computing Center (NERSC) announced the competition "Innovative and Novel Computational Impact on Theory and Experiment" (INCITE), with no requirement for current DOE sponsorship. Fifty-nine proposals for grand challenge scientific problems were submitted for a small number of awards. The successful grants, and their preliminary progress, will be described.
Quantitative evaluation of simulated functional brain networks in graph theoretical analysis.
Lee, Won Hee; Bullmore, Ed; Frangou, Sophia
2017-02-01
There is increasing interest in the potential of whole-brain computational models to provide mechanistic insights into resting-state brain networks. It is therefore important to determine the degree to which computational models reproduce the topological features of empirical functional brain networks. We used empirical connectivity data derived from diffusion spectrum and resting-state functional magnetic resonance imaging data from healthy individuals. Empirical and simulated functional networks, constrained by structural connectivity, were defined based on 66 brain anatomical regions (nodes). Simulated functional data were generated using the Kuramoto model in which each anatomical region acts as a phase oscillator. Network topology was studied using graph theory in the empirical and simulated data. The difference (relative error) between graph theory measures derived from empirical and simulated data was then estimated. We found that simulated data can be used with confidence to model graph measures of global network organization at different dynamic states and highlight the sensitive dependence of the solutions obtained in simulated data on the specified connection densities. This study provides a method for the quantitative evaluation and external validation of graph theory metrics derived from simulated data that can be used to inform future study designs.
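The Kuramoto dynamics the study uses can be sketched in a few lines. The network, frequencies, and coupling below are a toy illustration (a fully connected 10-node graph, not the 66-region connectome), and the integration is plain Euler, not the study's pipeline.

```python
import math
import random

def kuramoto_order(adj, omega, coupling, dt=0.01, steps=2000, seed=0):
    """Euler-integrate Kuramoto phase oscillators on a coupling matrix and
    return the final order parameter r in [0, 1] (1 = full synchrony)."""
    rng = random.Random(seed)
    n = len(adj)
    theta = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    for _ in range(steps):
        dtheta = [
            omega[i] + coupling * sum(
                adj[i][j] * math.sin(theta[j] - theta[i]) for j in range(n)
            )
            for i in range(n)
        ]
        theta = [t + dt * d for t, d in zip(theta, dtheta)]
    re = sum(math.cos(t) for t in theta) / n
    im = sum(math.sin(t) for t in theta) / n
    return math.hypot(re, im)

# Fully connected toy network, identical natural frequencies, strong
# coupling: the phases lock and r approaches 1.
adj = [[0 if i == j else 1 for j in range(10)] for i in range(10)]
r = kuramoto_order(adj, omega=[1.0] * 10, coupling=1.0)
```

In the study's setting, the adjacency matrix would be the empirical structural connectome and the resulting phase time series would be converted to simulated functional connectivity before graph measures are computed.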
Exact and efficient simulation of concordant computation
NASA Astrophysics Data System (ADS)
Cable, Hugo; Browne, Daniel E.
2015-11-01
Concordant computation is a circuit-based model of quantum computation for mixed states, which assumes that all correlations within the register are discord-free (i.e. the correlations are essentially classical) at every step of the computation. The question of whether concordant computation always admits efficient simulation by a classical computer was first considered by Eastin in arXiv:quant-ph/1006.4402v1, where an answer in the affirmative was given for circuits consisting only of one- and two-qubit gates. Building on this work, we develop the theory of classical simulation of concordant computation. We present a new framework for understanding such computations, argue that a larger class of concordant computations admit efficient simulation, and provide alternative proofs for the main results of arXiv:quant-ph/1006.4402v1 with an emphasis on the exactness of simulation, which is crucial for this model. We include detailed analysis of the arithmetic complexity for solving equations in the simulation, as well as extensions to larger gates and qudits. We explore the limitations of our approach, and discuss the challenges faced in developing efficient classical simulation algorithms for all concordant computations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nascimento, Daniel R.; DePrince, A. Eugene, E-mail: deprince@chem.fsu.edu
2015-12-07
We present a combined cavity quantum electrodynamics/ab initio electronic structure approach for simulating plasmon-molecule interactions in the time domain. The simple Jaynes-Cummings-type model Hamiltonian typically utilized in such simulations is replaced with one in which the molecular component of the coupled system is treated in a fully ab initio way, resulting in a computationally efficient description of general plasmon-molecule interactions. Mutual polarization effects are easily incorporated within a standard ground-state Hartree-Fock computation, and time-dependent simulations carry the same formal computational scaling as real-time time-dependent Hartree-Fock theory. As a proof of principle, we apply this generalized method to the emergence of a Fano-like resonance in coupled molecule-plasmon systems; this feature is quite sensitive to the nanoparticle-molecule separation and the orientation of the molecule relative to the polarization of the external electric field.
Emotion-affected decision making in human simulation.
Zhao, Y; Kang, J; Wright, D K
2006-01-01
Human modelling is an interdisciplinary research field. The topic, emotion-affected decision making, was originally a cognitive psychology issue, but is now recognized as an important research direction for both computer science and biomedical modelling. The main aim of this paper is to attempt to bridge the gap between psychology and bioengineering in emotion-affected decision making. The work is based on Ortony's theory of emotions and bounded rationality theory, and attempts to connect the emotion process with decision making. A computational emotion model is proposed, and the initial framework of this model in virtual human simulation within the platform of Virtools is presented.
The application of the thermodynamic perturbation theory to study the hydrophobic hydration.
Mohoric, Tomaz; Urbic, Tomaz; Hribar-Lee, Barbara
2013-07-14
The thermodynamic perturbation theory was tested against newly obtained Monte Carlo computer simulations to describe the major features of the hydrophobic effect in a simple 3D-Mercedes-Benz water model: the temperature and hydrophobe size dependence on entropy, enthalpy, and free energy of transfer of a simple hydrophobic solute into water. An excellent agreement was obtained between the theoretical and simulation results. Further, the thermodynamic perturbation theory qualitatively correctly (with respect to the experimental data) describes the solvation thermodynamics under conditions where the simulation results are difficult to obtain with good enough accuracy, e.g., at high pressures.
Fractals: To Know, to Do, to Simulate.
ERIC Educational Resources Information Center
Talanquer, Vicente; Irazoque, Glinda
1993-01-01
Discusses the development of fractal theory and suggests fractal aggregates as an attractive alternative for introducing fractal concepts. Describes methods for producing metallic fractals and a computer simulation for drawing fractals. (MVL)
Hierarchical optimization for neutron scattering problems
Bao, Feng; Archibald, Rick; Bansal, Dipanshu; ...
2016-03-14
In this study, we present a scalable optimization method for neutron scattering problems that determines confidence regions of simulation parameters in lattice dynamics models used to fit neutron scattering data for crystalline solids. The method uses physics-based hierarchical dimension reduction in both the computational simulation domain and the parameter space. We demonstrate for silicon that after a few iterations the method converges to parameter values (interatomic force constants) computed with density functional theory simulations.
Probabilistic Simulation for Nanocomposite Characterization
NASA Technical Reports Server (NTRS)
Chamis, Christos C.; Coroneos, Rula M.
2007-01-01
A unique probabilistic theory is described to predict the properties of nanocomposites. The simulation is based on composite micromechanics with progressive substructuring down to a nanoscale slice of a nanofiber where all the governing equations are formulated. These equations have been programmed in a computer code. That computer code is used to simulate the uniaxial strength properties of a mononanofiber laminate. The results are presented graphically and discussed with respect to their practical significance. These results show smooth distributions.
Integration of Ausubelian Learning Theory and Educational Computing.
ERIC Educational Resources Information Center
Heinze-Fry, Jane A.; And Others
1984-01-01
Examines possible benefits when Ausubelian learning approaches are integrated into computer-assisted instruction, presenting an example of this integration in a computer program dealing with introductory ecology concepts. The four program parts (tutorial, interactive concept mapping, simulations, and vee-mapping) are described. (JN)
Quantitative, steady-state properties of Catania's computational model of the operant reserve.
Berg, John P; McDowell, J J
2011-05-01
Catania (2005) found that a computational model of the operant reserve (Skinner, 1938) produced realistic behavior in initial, exploratory analyses. Although Catania's operant reserve computational model demonstrated potential to simulate varied behavioral phenomena, the model was not systematically tested. The current project replicated and extended the Catania model, clarified its capabilities through systematic testing, and determined the extent to which it produces behavior corresponding to matching theory. Significant departures from both classic and modern matching theory were found in behavior generated by the model across all conditions. The results suggest that a simple, dynamic operant model of the reflex reserve does not simulate realistic steady state behavior.
Mazilu, I; Mazilu, D A; Melkerson, R E; Hall-Mejia, E; Beck, G J; Nshimyumukiza, S; da Fonseca, Carlos M
2016-03-01
We present exact and approximate results for a class of cooperative sequential adsorption models using matrix theory, mean-field theory, and computer simulations. We validate our models with two customized experiments using ionically self-assembled nanoparticles on glass slides. We also address the limitations of our models and their range of applicability. The exact results obtained using matrix theory can be applied to a variety of two-state systems with cooperative effects.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dang, Liem X.; Vo, Quynh N.; Nilsson, Mikael
We report one of the first simulations using a classical rate theory approach to predict the mechanism of the exchange process between water and aqueous uranyl ions. Using our water and ion-water polarizable force fields and molecular dynamics techniques, we computed the potentials of mean force for the uranyl ion-water pair as a function of pressure at ambient temperature. Subsequently, these simulated potentials of mean force were used to calculate rate constants using transition rate theory; the time-dependent transmission coefficients were also examined using the reactive flux method and Grote-Hynes treatments of the dynamic response of the solvent. The computed activation volumes using transition rate theory and the corrected rate constants are positive; thus the mechanism of this particular water exchange is a dissociative process. We discuss our rate theory results and compare them with previous studies in which non-polarizable force fields were used. This work was supported by the US Department of Energy, Office of Science, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences, and Biosciences. The calculations were carried out using computer resources provided by the Office of Basic Energy Sciences.
Orr, Mark G; Thrush, Roxanne; Plaut, David C
2013-01-01
The reasoned action approach, although ubiquitous in health behavior theory (e.g., Theory of Reasoned Action/Planned Behavior), does not adequately address two key dynamical aspects of health behavior: learning and the effect of immediate social context (i.e., social influence). To remedy this, we put forth a computational implementation of the Theory of Reasoned Action (TRA) using artificial-neural networks. Our model re-conceptualized behavioral intention as arising from a dynamic constraint satisfaction mechanism among a set of beliefs. In two simulations, we show that constraint satisfaction can simultaneously incorporate the effects of past experience (via learning) with the effects of immediate social context to yield behavioral intention, i.e., intention is dynamically constructed from both an individual's pre-existing belief structure and the beliefs of others in the individual's social context. In a third simulation, we illustrate the predictive ability of the model with respect to empirically derived behavioral intention. As the first known computational model of health behavior, it represents a significant advance in theory towards understanding the dynamics of health behavior. Furthermore, our approach may inform the development of population-level agent-based models of health behavior that aim to incorporate psychological theory into models of population dynamics.
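The constraint-satisfaction mechanism described in this abstract can be illustrated with a toy recurrent network; the unit count, weight values, and tanh update rule below are illustrative assumptions for the sketch, not the authors' implementation:

```python
import numpy as np

# Toy constraint-satisfaction network: three "belief" units linked by
# symmetric constraint weights (positive = mutually supporting,
# negative = conflicting). All values are illustrative placeholders.
W = np.array([[0.0,  1.0, -1.0],
              [1.0,  0.0, -1.0],
              [-1.0, -1.0, 0.0]])

def settle(x, steps=50):
    """Repeatedly update activations until the network relaxes into a
    state that satisfies the constraints; the sign pattern of the
    settled state plays the role of the behavioral intention."""
    for _ in range(steps):
        x = np.tanh(W @ x)
    return x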
A new paradigm for atomically detailed simulations of kinetics in biophysical systems.
Elber, Ron
2017-01-01
The kinetics of biochemical and biophysical events determine the course of life processes and have attracted considerable interest and research. For example, modeling of biological networks and cellular responses relies on the availability of information on rate coefficients. Atomically detailed simulations hold the promise of supplementing experimental data to obtain a more complete kinetic picture. However, simulations at biological time scales are challenging. Typical computer resources are insufficient to provide the ensemble of trajectories at the correct length that is required for straightforward calculations of time scales. In recent years, new technologies have emerged that make atomically detailed simulations of rate coefficients possible. Instead of computing complete trajectories from reactants to products, these approaches launch a large number of short trajectories at different positions. Since the trajectories are short, they are computed trivially in parallel on modern computer architectures. The starting and termination positions of the short trajectories are chosen, following statistical mechanics theory, to enhance efficiency. These trajectories are analyzed. The analysis produces accurate estimates of time scales as long as hours. The theory of Milestoning that exploits the use of short trajectories is discussed, and several applications are described.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCaskey, Alexander J.
There is a lack of state-of-the-art quantum computing simulation software that scales on heterogeneous systems like Titan. Tensor Network Quantum Virtual Machine (TNQVM) provides a quantum simulator that leverages a distributed network of GPUs to simulate quantum circuits in a manner that leverages recent results from tensor network theory.
Simulation/Gaming and the Acquisition of Communicative Competence in Another Language.
ERIC Educational Resources Information Center
Garcia-Carbonell, Amparo; Rising, Beverly; Montero, Begona; Watts, Frances
2001-01-01
Discussion of communicative competence in second language acquisition focuses on a theoretical and practical meshing of simulation and gaming methodology with theories of foreign language acquisition, including task-based learning, interaction, and comprehensible input. Describes experiments conducted with computer-assisted simulations in…
A two-dimensional model of water: Theory and computer simulations
NASA Astrophysics Data System (ADS)
Urbič, T.; Vlachy, V.; Kalyuzhnyi, Yu. V.; Southall, N. T.; Dill, K. A.
2000-02-01
We develop an analytical theory for a simple model of liquid water. We apply Wertheim's thermodynamic perturbation theory (TPT) and integral equation theory (IET) for associative liquids to the MB model, which is among the simplest models of water. Water molecules are modeled as 2-dimensional Lennard-Jones disks with three hydrogen bonding arms arranged symmetrically, resembling the Mercedes-Benz (MB) logo. The MB model qualitatively predicts both the anomalous properties of pure water and the anomalous solvation thermodynamics of nonpolar molecules. IET is based on the orientationally averaged version of the Ornstein-Zernike equation. This is one of the main approximations in the present work. IET correctly predicts the pair correlation function of the model water at high temperatures. Both TPT and IET are in semi-quantitative agreement with the Monte Carlo values of the molar volume, isothermal compressibility, thermal expansion coefficient, and heat capacity. A major advantage of these theories is that they require orders of magnitude less computer time than the Monte Carlo simulations.
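An MB-style pair energy can be sketched as below; the Gaussian bonding form, well depths, and widths are generic illustrative choices, not the parameters used in the paper:

```python
import numpy as np

def lj(r, eps=1.0, sigma=0.7):
    """Lennard-Jones disk-disk term (reduced units, illustrative values)."""
    x = (sigma / r) ** 6
    return 4 * eps * (x * x - x)

def hb(r, th_i, th_j, eps_hb=-1.0, r_hb=1.0, w=0.085):
    """Hydrogen-bond term: strongest when one of the three arms of
    molecule i points along the interparticle axis toward an arm of
    molecule j and the separation equals r_hb."""
    g = lambda x: np.exp(-x * x / (2 * w * w))
    best = 0.0
    for k in range(3):                 # three arms, 120 degrees apart
        for l in range(3):
            ai = np.cos(th_i + 2 * np.pi * k / 3)  # arm-axis alignment, i
            aj = np.cos(th_j + 2 * np.pi * l / 3)  # arm-axis alignment, j
            best = min(best, eps_hb * g(r - r_hb) * g(ai - 1) * g(aj + 1))
    return best

def mb_energy(r, th_i, th_j):
    """Total MB-style pair energy of two oriented disks."""
    return lj(r) + hb(r, th_i, th_j)
```

With these placeholder parameters, a perfectly aligned pair at the bond distance, `mb_energy(1.0, 0.0, np.pi)`, is lower in energy than any misaligned orientation at the same separation.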
Bypassing the malfunction junction in warm dense matter simulations
NASA Astrophysics Data System (ADS)
Cangi, Attila; Pribram-Jones, Aurora
2015-03-01
Simulation of warm dense matter requires computational methods that capture both quantum and classical behavior efficiently under high-temperature and high-density conditions. The state-of-the-art approach to model electrons and ions under those conditions is density functional theory molecular dynamics, but this method's computational cost skyrockets as temperatures and densities increase. We propose finite-temperature potential functional theory as an in-principle-exact alternative that suffers no such drawback. In analogy to the zero-temperature theory developed previously, we derive an orbital-free free energy approximation through a coupling-constant formalism. Our density approximation and its associated free energy approximation demonstrate the method's accuracy and efficiency. A.C. has been partially supported by NSF Grant CHE-1112442. A.P.J. is supported by DOE Grant DE-FG02-97ER25308.
Intention, emotion, and action: a neural theory based on semantic pointers.
Schröder, Tobias; Stewart, Terrence C; Thagard, Paul
2014-06-01
We propose a unified theory of intentions as neural processes that integrate representations of states of affairs, actions, and emotional evaluation. We show how this theory provides answers to philosophical questions about the concept of intention, psychological questions about human behavior, computational questions about the relations between belief and action, and neuroscientific questions about how the brain produces actions. Our theory of intention ties together biologically plausible mechanisms for belief, planning, and motor control. The computational feasibility of these mechanisms is shown by a model that simulates psychologically important cases of intention.
ERIC Educational Resources Information Center
Urhahne, Detlef; Nick, Sabine; Schanze, Sascha
2009-01-01
In a series of three experimental studies, the effectiveness of three-dimensional computer simulations to aid the understanding of chemical structures and their properties was investigated. Arguments for the usefulness of three-dimensional simulations were derived from Mayer's generative theory of multimedia learning. Simulations might lead to a…
Unified-theory-of-reinforcement neural networks do not simulate the blocking effect.
Calvin, Nicholas T; McDowell, J J
2015-11-01
For the last 20 years the unified theory of reinforcement (Donahoe et al., 1993) has been used to develop computer simulations to evaluate its plausibility as an account for behavior. The unified theory of reinforcement states that operant and respondent learning occurs via the same neural mechanisms. As part of a larger project to evaluate the operant behavior predicted by the theory, this project was the first replication of neural network models based on the unified theory of reinforcement. In the process of replicating these neural network models it became apparent that a previously published finding, namely, that the networks simulate the blocking phenomenon (Donahoe et al., 1993), was a misinterpretation of the data. We show that the apparent blocking produced by these networks is an artifact of the inability of these networks to generate the same conditioned response to multiple stimuli. The piecemeal approach to evaluate the unified theory of reinforcement via simulation is critiqued and alternatives are discussed.
Marsalek, Ondrej; Markland, Thomas E
2016-02-07
Path integral molecular dynamics simulations, combined with an ab initio evaluation of interactions using electronic structure theory, incorporate the quantum mechanical nature of both the electrons and nuclei, which are essential to accurately describe systems containing light nuclei. However, path integral simulations have traditionally required a computational cost around two orders of magnitude greater than treating the nuclei classically, making them prohibitively costly for most applications. Here we show that the cost of path integral simulations can be dramatically reduced by extending our ring polymer contraction approach to ab initio molecular dynamics simulations. By using density functional tight binding as a reference system, we show that our ring polymer contraction scheme gives rapid and systematic convergence to the full path integral density functional theory result. We demonstrate the efficiency of this approach in ab initio simulations of liquid water and the reactive protonated and deprotonated water dimer systems. We find that the vast majority of the nuclear quantum effects are accurately captured using contraction to just the ring polymer centroid, which requires the same number of density functional theory calculations as a classical simulation. Combined with a multiple time step scheme using the same reference system, which allows the time step to be increased, this approach is as fast as a typical classical ab initio molecular dynamics simulation and 35× faster than a full path integral calculation, while still exactly including the quantum sampling of nuclei. This development thus offers a route to routinely include nuclear quantum effects in ab initio molecular dynamics simulations at negligible computational cost.
Polymer Composites Corrosive Degradation: A Computational Simulation
NASA Technical Reports Server (NTRS)
Chamis, Christos C.; Minnetyan, Levon
2007-01-01
A computational simulation of polymer composites corrosive durability is presented. The corrosive environment is assumed to manage the polymer composite degradation on a ply-by-ply basis. The degradation is correlated with a measured pH factor and is represented by voids, temperature and moisture which vary parabolically for voids and linearly for temperature and moisture through the laminate thickness. The simulation is performed by a computational composite mechanics computer code which includes micro, macro, combined stress failure and laminate theories. This accounts for starting the simulation from constitutive material properties and up to the laminate scale which exposes the laminate to the corrosive environment. Results obtained for one laminate indicate that the ply-by-ply degradation degrades the laminate to the last one or the last several plies. Results also demonstrate that the simulation is applicable to other polymer composite systems as well.
Computation of magnetic suspension of maglev systems using dynamic circuit theory
NASA Technical Reports Server (NTRS)
He, J. L.; Rote, D. M.; Coffey, H. T.
1992-01-01
Dynamic circuit theory is applied to several magnetic suspensions associated with maglev systems. These suspension systems are the loop-shaped coil guideway, the figure-eight-shaped null-flux coil guideway, and the continuous sheet guideway. Mathematical models, which can be used for the development of computer codes, are provided for each of these suspension systems. The differences and similarities of the models in using dynamic circuit theory are discussed in the paper. The paper emphasizes the transient and dynamic analysis and computer simulation of maglev systems. In general, the method discussed here can be applied to many electrodynamic suspension system design concepts. It is also suited for the computation of the performance of maglev propulsion systems. Numerical examples are presented in the paper.
NASA Technical Reports Server (NTRS)
Jones, D. W.
1971-01-01
The navigation and guidance process for the Jupiter, Saturn and Uranus planetary encounter phases of the 1977 Grand Tour interior mission was simulated. Reference approach navigation accuracies were defined and the relative information content of the various observation types was evaluated. Reference encounter guidance requirements were defined, sensitivities to assumed simulation model parameters were determined and the adequacy of the linear estimation theory was assessed. A linear sequential estimator was used to provide an estimate of the augmented state vector, consisting of the six state variables of position and velocity plus the three components of a planet position bias. The guidance process was simulated using a nonspherical model of the execution errors. Computation algorithms which simulate the navigation and guidance process were derived from theory and implemented into two research-oriented computer programs, written in FORTRAN.
Towards a Sufficient Theory of Transition in Cognitive Development.
ERIC Educational Resources Information Center
Wallace, J. G.
The work reported aims at the construction of a sufficient theory of transition in cognitive development. The method of theory construction employed is computer simulation of cognitive process. The core of the model of transition presented comprises self-modification processes that, as a result of continuously monitoring an exhaustive record of…
A Test of Two Theories in the Initial Process Stage of Coalition Formation.
ERIC Educational Resources Information Center
Flaherty, John F.; Arenson, Sidney J.
1978-01-01
Males and females participated in a coalition formation procedure by interacting with a computer program that simulated a pachisi game situation. Female partner preference data supported a weighted probability model of coalition formation over a bargaining theory. Male partner preference data did not support either theory. (Author)
A Pilot Study of the Naming Transaction Shell
1991-06-01
effective computer-based instructional design. AIDA will take established theories of knowledge, learning, and instruction and incorporate the theories...felt that anyone could learn to use the system both in design and delivery modes. Traditional course development (non-computer instruction) for the...students were studying and learning the material in the text. This often resulted in wasted effort in the simulator. By ensuring that the students knew the
ERIC Educational Resources Information Center
Schmitt, T. A.; Sass, D. A.; Sullivan, J. R.; Walker, C. M.
2010-01-01
Imposed time limits on computer adaptive tests (CATs) can result in examinees having difficulty completing all items, thus compromising the validity and reliability of ability estimates. In this study, the effects of speededness were explored in a simulated CAT environment by varying examinee response patterns to end-of-test items. Expectedly,…
Computer Models of Personality: Implications for Measurement
ERIC Educational Resources Information Center
Cranton, P. A.
1976-01-01
Current research on computer models of personality is reviewed and categorized under five headings: (1) models of belief systems; (2) models of interpersonal behavior; (3) models of decision-making processes; (4) prediction models; and (5) theory-based simulations of specific processes. The use of computer models in personality measurement is…
Probabilistic Simulation for Nanocomposite Fracture
NASA Technical Reports Server (NTRS)
Chamis, Christos C.
2010-01-01
A unique probabilistic theory is described to predict the uniaxial strengths and fracture properties of nanocomposites. The simulation is based on composite micromechanics with progressive substructuring down to a nanoscale slice of a nanofiber where all the governing equations are formulated. These equations have been programmed in a computer code. That computer code is used to simulate uniaxial strengths and fracture of a nanofiber laminate. The results are presented graphically and discussed with respect to their practical significance. These results show smooth distributions from low probability to high.
Reinforce Networking Theory with OPNET Simulation
ERIC Educational Resources Information Center
Guo, Jinhua; Xiang, Weidong; Wang, Shengquan
2007-01-01
As networking systems have become more complex and expensive, hands-on experiments based on networking simulation have become essential for teaching the key computer networking topics to students. The simulation approach is the most cost effective and highly useful because it provides a virtual environment for an assortment of desirable features…
Using a Commercial Simulator to Teach Sorption Separations
ERIC Educational Resources Information Center
Wankat, Phillip C.
2006-01-01
The commercial simulator Aspen Chromatography was used in the computer laboratory of a dual-level course. The lab assignments used a cookbook approach to teach basic simulator operation and open-ended exploration to understand adsorption. The students learned theory better than in previous years despite having less lecture time. Students agreed…
DOE Office of Scientific and Technical Information (OSTI.GOV)
D'Ariano, Giacomo Mauro
2010-05-04
I will argue that the proposal of establishing operational foundations of Quantum Theory should have top priority, and that Lucien Hardy's program on Quantum Gravity should be paralleled by an analogous program on Quantum Field Theory (QFT), which needs to be reformulated, notwithstanding its experimental success. In this paper, after reviewing recently suggested operational 'principles of the quantumness', I address the problem of whether Quantum Theory and Special Relativity are unrelated theories, or instead, if the one implies the other. I show how Special Relativity can indeed be derived from causality of Quantum Theory, within the computational paradigm 'the universe is a huge quantum computer', reformulating QFT as a Quantum-Computational Field Theory (QCFT). In QCFT Special Relativity emerges from the fabric of the computational network, which also naturally embeds gauge invariance. In this scheme even the quantization rule and the Planck constant can in principle be derived as emergent from the underlying causal tapestry of space-time. In this way Quantum Theory remains the only theory operating the huge computer of the universe. Is the computational paradigm only a speculative tautology (theory as simulation of reality), or does it have a scientific value? The answer will come from Occam's razor, depending on the mathematical simplicity of QCFT. Here I will just start scratching the surface of QCFT, analyzing simple field theories, including Dirac's. The number of problems and unmotivated recipes that plague QFT strongly motivates us to undertake the QCFT project, since QCFT makes all such problems manifest, and forces a re-foundation of QFT.
NASA Astrophysics Data System (ADS)
Zhao, Yinjian
2017-09-01
Aiming at a high simulation accuracy, a Particle-Particle (PP) Coulombic molecular dynamics model is implemented to study the electron-ion temperature relaxation. In this model, the Coulomb's law is directly applied in a bounded system with two cutoffs at both short and long length scales. By increasing the range between the two cutoffs, it is found that the relaxation rate deviates from the BPS theory and approaches the LS theory and the GMS theory. Also, the effective minimum and maximum impact parameters (bmin* and bmax*) are obtained. For the simulated plasma condition, bmin* is about 6.352 times smaller than the Landau length (bC), and bmax* is about 2 times larger than the Debye length (λD), where bC and λD are used in the LS theory. Surprisingly, the effective relaxation time obtained from the PP model is very close to the LS theory and the GMS theory, even though the effective Coulomb logarithm is two times greater than the one used in the LS theory. Besides, this work shows that the PP model (commonly known as computationally expensive) is becoming practicable via GPU parallel computing techniques.
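A pair force with short- and long-range cutoffs, as described in this abstract, can be sketched as follows; the cutoff values and the Coulomb prefactor are placeholders in reduced units, not the parameters of the simulated plasma:

```python
import numpy as np

K = 1.0  # Coulomb prefactor in reduced units (assumption)

def pp_coulomb_force(r_vec, q1, q2, b_min=0.01, b_max=10.0):
    """Direct particle-particle Coulomb force on charge q1 due to q2,
    applied only when the separation lies between the short-range
    cutoff b_min and the long-range cutoff b_max."""
    r = np.linalg.norm(r_vec)
    if r < b_min or r > b_max:      # outside the window: no direct force
        return np.zeros_like(r_vec)
    return K * q1 * q2 * r_vec / r**3
```

Shrinking `b_min` and growing `b_max` widens the window between the two cutoffs, the quantity scanned in the study.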
Song, Lingchun; Han, Jaebeom; Lin, Yen-lin; Xie, Wangshen; Gao, Jiali
2009-10-29
The explicit polarization (X-Pol) method has been examined using ab initio molecular orbital theory and density functional theory. The X-Pol potential was designed to provide a novel theoretical framework for developing next-generation force fields for biomolecular simulations. Importantly, the X-Pol potential is a general method, which can be employed with any level of electronic structure theory. The present study illustrates the implementation of the X-Pol method using ab initio Hartree-Fock theory and hybrid density functional theory. The computational results are illustrated by considering a set of bimolecular complexes of small organic molecules and ions with water. The computed interaction energies and hydrogen bond geometries are in good accord with CCSD(T) calculations and B3LYP/aug-cc-pVDZ optimizations.
NASA Astrophysics Data System (ADS)
Schwörer, Magnus; Lorenzen, Konstantin; Mathias, Gerald; Tavan, Paul
2015-03-01
Recently, a novel approach to hybrid quantum mechanics/molecular mechanics (QM/MM) molecular dynamics (MD) simulations has been suggested [Schwörer et al., J. Chem. Phys. 138, 244103 (2013)]. Here, the forces acting on the atoms are calculated by grid-based density functional theory (DFT) for a solute molecule and by a polarizable molecular mechanics (PMM) force field for a large solvent environment composed of several 103-105 molecules as negative gradients of a DFT/PMM hybrid Hamiltonian. The electrostatic interactions are efficiently described by a hierarchical fast multipole method (FMM). Adopting recent progress of this FMM technique [Lorenzen et al., J. Chem. Theory Comput. 10, 3244 (2014)], which particularly entails a strictly linear scaling of the computational effort with the system size, and adapting this revised FMM approach to the computation of the interactions between the DFT and PMM fragments of a simulation system, here, we show how one can further enhance the efficiency and accuracy of such DFT/PMM-MD simulations. The resulting gain of total performance, as measured for alanine dipeptide (DFT) embedded in water (PMM) by the product of the gains in efficiency and accuracy, amounts to about one order of magnitude. We also demonstrate that the jointly parallelized implementation of the DFT and PMM-MD parts of the computation enables the efficient use of high-performance computing systems. The associated software is available online.
Parsing partial molar volumes of small molecules: a molecular dynamics study.
Patel, Nisha; Dubins, David N; Pomès, Régis; Chalikian, Tigran V
2011-04-28
We used molecular dynamics (MD) simulations in conjunction with the Kirkwood-Buff theory to compute the partial molar volumes for a number of small solutes of various chemical natures. We repeated our computations using modified pair potentials, first, in the absence of the Coulombic term and, second, in the absence of the Coulombic and the attractive Lennard-Jones terms. Comparison of our results with experimental data and the volumetric results of Monte Carlo simulation with hard sphere potentials and scaled particle theory-based computations led us to conclude that, for small solutes, the partial molar volume computed with the Lennard-Jones potential in the absence of the Coulombic term nearly coincides with the cavity volume. On the other hand, MD simulations carried out with the pair interaction potentials containing only the repulsive Lennard-Jones term produce unrealistically large partial molar volumes of solutes that are close to their excluded volumes. Our simulation results are in good agreement with the reported schemes for parsing partial molar volume data on small solutes. In particular, our determined interaction volumes and thermal-volume thicknesses for individual compounds are in good agreement with empirical estimates. This work is the first computational study that supports and lends credence to the practical algorithms of parsing partial molar volume data that are currently in use for molecular interpretations of volumetric data.
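The Kirkwood-Buff route used in the study above can be sketched numerically: at infinite dilution the partial molar volume follows from V° = kT·κ_T − G_uv, where G_uv is the solute-solvent Kirkwood-Buff integral of the radial distribution function. A minimal sketch, assuming an idealized hard-exclusion RDF in place of actual simulation data (the contact distance `sigma` is illustrative):

```python
import math

def kb_integral(r, g):
    """Kirkwood-Buff integral G = ∫ (g(r) - 1) 4πr² dr via the trapezoidal rule."""
    total = 0.0
    for i in range(len(r) - 1):
        f0 = (g[i] - 1.0) * 4.0 * math.pi * r[i] ** 2
        f1 = (g[i + 1] - 1.0) * 4.0 * math.pi * r[i + 1] ** 2
        total += 0.5 * (f0 + f1) * (r[i + 1] - r[i])
    return total

# Toy solute-water RDF: hard exclusion up to sigma, bulk structure ignored beyond.
sigma = 3.0                                    # Å, hypothetical contact distance
r = [i * 0.01 for i in range(1501)]            # radial grid, 0 .. 15 Å
g = [0.0 if ri < sigma else 1.0 for ri in r]   # idealized g_uv(r)

G_uv = kb_integral(r, g)
# With this idealized RDF, −G_uv reduces to the excluded (cavity) volume
# 4πσ³/3 ≈ 113.1 Å³, illustrating the cavity-volume result quoted above.
print(-G_uv)
```

With a real g_uv(r) from MD, the oscillatory solvation shells beyond contact contribute to G_uv and shift V° away from the bare cavity volume.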
ERIC Educational Resources Information Center
Weems, Scott A.; Reggia, James A.
2006-01-01
The Wernicke-Lichtheim-Geschwind (WLG) theory of the neurobiological basis of language is of great historical importance, and it continues to exert a substantial influence on most contemporary theories of language in spite of its widely recognized limitations. Here, we suggest that neurobiologically grounded computational models based on the WLG…
Deterministic theory of Monte Carlo variance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ueki, T.; Larsen, E.W.
1996-12-31
The theoretical estimation of variance in Monte Carlo transport simulations, particularly those using variance reduction techniques, is a substantially unsolved problem. In this paper, the authors describe a theory that predicts the variance in a variance reduction method proposed by Dwivedi. Dwivedi's method combines the exponential transform with angular biasing. The key element of this theory is a new modified transport problem, containing the Monte Carlo weight w as an extra independent variable, which simulates Dwivedi's Monte Carlo scheme. The (deterministic) solution of this modified transport problem yields an expression for the variance. The authors give computational results that validate this theory.
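The variance behavior the theory predicts can be illustrated empirically on a toy problem. The sketch below is not Dwivedi's scheme; it applies a plain exponential-transform-style importance sampling (stretched free-flight distribution plus a likelihood-ratio weight) to a slab-penetration probability and compares sample variances. The cross section, thickness, and biasing rate are all illustrative:

```python
import math, random

random.seed(1)

def analog(sigma, d, n):
    # Analog scoring: tally 1 if the sampled free flight exceeds thickness d.
    scores = [1.0 if random.expovariate(sigma) > d else 0.0 for _ in range(n)]
    m = sum(scores) / n
    var = sum((s - m) ** 2 for s in scores) / (n - 1)
    return m, var

def transformed(sigma, sigma_star, d, n):
    # Exponential-transform-style biasing: sample from a stretched exponential
    # with rate sigma_star < sigma and carry the likelihood-ratio weight w(x).
    scores = []
    for _ in range(n):
        x = random.expovariate(sigma_star)
        w = (sigma / sigma_star) * math.exp(-(sigma - sigma_star) * x)
        scores.append(w if x > d else 0.0)
    m = sum(scores) / n
    var = sum((s - m) ** 2 for s in scores) / (n - 1)
    return m, var

sigma, d, n = 1.0, 5.0, 20000
exact = math.exp(-sigma * d)        # deep-penetration probability, ≈ 6.7e-3
m0, v0 = analog(sigma, d, n)
m1, v1 = transformed(sigma, 1.0 / d, d, n)
print(exact, m0, v0, m1, v1)        # the transformed estimator has a much smaller variance
```

A deterministic theory of the kind described above would predict v1 in advance, from the weight-dependent modified transport equation, rather than estimating it from the sampled scores.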
A Probabilistic Framework for the Validation and Certification of Computer Simulations
NASA Technical Reports Server (NTRS)
Ghanem, Roger; Knio, Omar
2000-01-01
The paper presents a methodology for quantifying, propagating, and managing the uncertainty in the data required to initialize computer simulations of complex phenomena. The purpose of the methodology is to permit the quantitative assessment of a certification level to be associated with the predictions from the simulations, as well as the design of a data acquisition strategy to achieve a target level of certification. The value of a methodology that can address the above issues is obvious, especially in light of the trend in the availability of computational resources, as well as the trend in sensor technology. These two trends make it possible to probe physical phenomena both with physical sensors, as well as with complex models, at previously inconceivable levels. With these new abilities arises the need to develop the knowledge to integrate the information from sensors and computer simulations. This is achieved in the present work by tracing both activities back to a level of abstraction that highlights their commonalities, thus allowing them to be manipulated in a mathematically consistent fashion. In particular, the mathematical theory underlying computer simulations has long been associated with partial differential equations and functional analysis concepts such as Hilbert spaces and orthogonal projections. By relying on a probabilistic framework for the modeling of data, a Hilbert space framework emerges that permits the modeling of coefficients in the governing equations as random variables, or equivalently, as elements in a Hilbert space. This permits the development of an approximation theory for probabilistic problems that parallels that of deterministic approximation theory. According to this formalism, the solution of the problem is identified by its projection on a basis in the Hilbert space of random variables, as opposed to more traditional techniques where the solution is approximated by its first or second-order statistics.
The present representation, in addition to capturing significantly more information than the traditional approach, facilitates the linkage between different interacting stochastic systems as is typically observed in real-life situations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marsalek, Ondrej; Markland, Thomas E., E-mail: tmarkland@stanford.edu
Path integral molecular dynamics simulations, combined with an ab initio evaluation of interactions using electronic structure theory, incorporate the quantum mechanical nature of both the electrons and nuclei, which are essential to accurately describe systems containing light nuclei. However, path integral simulations have traditionally required a computational cost around two orders of magnitude greater than treating the nuclei classically, making them prohibitively costly for most applications. Here we show that the cost of path integral simulations can be dramatically reduced by extending our ring polymer contraction approach to ab initio molecular dynamics simulations. By using density functional tight binding as a reference system, we show that our ring polymer contraction scheme gives rapid and systematic convergence to the full path integral density functional theory result. We demonstrate the efficiency of this approach in ab initio simulations of liquid water and the reactive protonated and deprotonated water dimer systems. We find that the vast majority of the nuclear quantum effects are accurately captured using contraction to just the ring polymer centroid, which requires the same number of density functional theory calculations as a classical simulation. Combined with a multiple time step scheme using the same reference system, which allows the time step to be increased, this approach is as fast as a typical classical ab initio molecular dynamics simulation and 35× faster than a full path integral calculation, while still exactly including the quantum sampling of nuclei. This development thus offers a route to routinely include nuclear quantum effects in ab initio molecular dynamics simulations at negligible computational cost.
Status of the Electroforming Shield Design (ESD) project
NASA Technical Reports Server (NTRS)
Fletcher, R. E.
1977-01-01
The utilization of a digital computer to augment electrodeposition/electroforming processes in which nonconducting shielding controls local cathodic current distribution is reported. The primary underlying philosophy of the physics of electrodeposition was presented. The technical approach taken to analytically simulate electrolytic tank variables was also included. A FORTRAN computer program has been developed and implemented. The program utilized finite element techniques and electrostatic theory to simulate electropotential fields and ionic transport.
Application of High Performance Computing for Simulations of N-Dodecane Jet Spray with Evaporation
2016-11-01
A critical component of maintaining the performance and durability of a diesel engine is the formation of the fuel-air mixture by the diesel jet spray. This effort applies high performance computing to simulations of evaporating n-dodecane jet sprays, with the goals of incorporating improved models into future simulations of turbulent jet sprays and developing a predictive theory for comparison to laboratory measurements of turbulent diesel sprays.
A prototype of behavior selection mechanism based on emotion
NASA Astrophysics Data System (ADS)
Zhang, Guofeng; Li, Zushu
2007-12-01
Working in a bionic methodology, rather than the more familiar design methodology, and summarizing psychological research on emotion, we propose a biological mechanism of emotion, the role of emotion selection in creature evolution, and an animal framework that includes emotion, similar to the classical control structure. Consulting Prospect Theory, we build Emotion Characteristic Functions (ECFs) that compute emotion; two further emotion principles are added: higher emotion is preferred, and middle emotion makes the brain run more efficiently. Together these yield an emotional behavior-selection mechanism. A simulation of the proposed mechanism was designed and carried out on the Alife Swarm software platform. In this simulation, a virtual grassland ecosystem is realized with two kinds of artificial animals: herbivores and predators. These artificial animals execute four types of behavior in their lives: wandering, escaping, finding food, and finding a sex partner. According to theories of animal ethology, escaping from a predator takes priority over all other behaviors because it ensures survival, finding food is the second most important behavior, finding a sex partner is the third, and wandering is the last. Keeping this behavior order, and based on our behavior characteristic function theory, specific emotion-computing functions are built into the artificial autonomous animals. The result of the simulation confirms the behavior selection mechanism.
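The paper's ECFs are not specified in the abstract; a schematic of the priority-ordered, emotion-driven selection it describes might look like the following, where the emotion names, intensities, and threshold are illustrative assumptions rather than the authors' functions:

```python
# Schematic priority-based behavior selection for an artificial herbivore.
# The ordering follows the abstract (escape > find food > find mate > wander);
# the numeric intensities and the 0.1 threshold are illustrative only.

def select_behavior(percepts):
    """Map perceived stimuli to emotion intensities, pick the behavior
    bound to the strongest emotion, and fall back to wandering."""
    emotions = {
        "fear":   1.0 if percepts.get("predator_near") else 0.0,
        "hunger": percepts.get("hunger", 0.0),
        "desire": percepts.get("mate_near", 0.0),
    }
    behavior_of = {"fear": "escape", "hunger": "find_food", "desire": "find_mate"}
    name, level = max(emotions.items(), key=lambda kv: kv[1])
    return behavior_of[name] if level > 0.1 else "wander"

print(select_behavior({"predator_near": True, "hunger": 0.9}))  # escape wins
print(select_behavior({"hunger": 0.6, "mate_near": 0.3}))       # find_food
print(select_behavior({}))                                      # wander
```

Because fear saturates at the top of the scale, escape dominates whenever a predator is perceived, reproducing the ethological priority ordering described above.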
Ion distributions in electrolyte confined by multiple dielectric interfaces
NASA Astrophysics Data System (ADS)
Jing, Yufei; Zwanikken, Jos W.; Jadhao, Vikram; de La Cruz, Monica
2014-03-01
The distribution of ions at dielectric interfaces between liquids characterized by different dielectric permittivities is crucial to nanoscale assembly processes in many biological and synthetic materials such as cell membranes, colloids and oil-water emulsions. The knowledge of ionic structure of these systems is also exploited in energy storage devices such as double-layer super-capacitors. The presence of multiple dielectric interfaces often complicates computing the desired ionic distributions via simulations or theory. Here, we use coarse-grained models to compute the ionic distributions in a system of electrolyte confined by two planar dielectric interfaces using Car-Parrinello molecular dynamics simulations and liquid state theory. We compute the density profiles for various electrolyte concentrations, stoichiometric ratios and dielectric contrasts. We explain the trends in these profiles and discuss their effects on the behavior of the confined charged fluid.
Statistical computation of tolerance limits
NASA Technical Reports Server (NTRS)
Wheeler, J. T.
1993-01-01
Based on a new theory, two computer codes were developed specifically to calculate the exact statistical tolerance limits for normal distributions with unknown means and variances for the one-sided and two-sided cases for the tolerance factor, k. The quantity k is defined equivalently in terms of the noncentral t-distribution by the probability equation. Two of the four mathematical methods employ the theory developed for the numerical simulation. Several algorithms for numerically integrating and iteratively root-solving the working equations are written to augment the program simulation. The program codes generate tables of k associated with varying values of the proportion and sample size for each given probability, to show the accuracy obtained for small sample sizes.
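The exact one-sided factor k solves the noncentral-t probability equation mentioned above, which requires numerical integration or root-solving as in the codes described. A standard closed-form approximation, computable with only the normal quantile function from the Python standard library, gives a feel for the quantity (the exact tabulated value for n = 10 at 95%/95% is about 2.91):

```python
from statistics import NormalDist

def tolerance_factor_one_sided(n, p=0.95, gamma=0.95):
    """Approximate one-sided normal tolerance factor k for a sample of size n,
    covering proportion p with confidence gamma (the classical closed-form
    approximation; the exact k comes from the noncentral t-distribution)."""
    z_p = NormalDist().inv_cdf(p)       # quantile for the covered proportion
    z_g = NormalDist().inv_cdf(gamma)   # quantile for the confidence level
    a = 1.0 - z_g ** 2 / (2.0 * (n - 1))
    b = z_p ** 2 - z_g ** 2 / n
    return (z_p + (z_p ** 2 - a * b) ** 0.5) / a

# One-sided 95%/95% factor for n = 10; the limit is x̄ + k·s (or x̄ − k·s).
print(round(tolerance_factor_one_sided(10), 3))
```

The approximation undershoots the exact value slightly for small n, which is precisely the regime the exact codes above were written to handle.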
Atmospheric simulation using a liquid crystal wavefront-controlling device
NASA Astrophysics Data System (ADS)
Brooks, Matthew R.; Goda, Matthew E.
2004-10-01
Test and evaluation of laser warning devices is important due to the increased use of laser devices in aerial applications. This research consists of an atmospheric aberrating system to enable in-lab testing of various detectors and sensors. This system employs laser light at 632.8 nm from a Helium-Neon source and a spatial light modulator (SLM) to cause phase changes using a birefringent liquid crystal material. Measuring outgoing radiation from the SLM using a CCD target board and Shack-Hartmann wavefront sensor reveals an acceptable resemblance of system output to expected atmospheric theory. Over three turbulence scenarios, an error analysis reveals that turbulence data matches theory. A wave optics computer simulation is created analogous to the lab-bench design. Phase data, intensity data, and the computer simulation affirm lab-bench results so that the aberrating SLM system can be operated confidently.
Lee, Sanghun; Park, Sung Soo
2011-11-03
Dielectric constants of electrolytic organic solvents are calculated employing nonpolarizable Molecular Dynamics simulations with the Electronic Continuum (MDEC) model and Density Functional Theory. The molecular polarizabilities are obtained at the B3LYP/6-311++G(d,p) level of theory to estimate high-frequency refractive indices, while the densities and dipole moment fluctuations are computed using nonpolarizable MD simulations. The dielectric constants reproduced from these procedures are shown to provide a reliable approach for estimating the experimental data. In addition, two representative solvents that have similar molecular weights but different dielectric properties, i.e., ethyl methyl carbonate and propylene carbonate, are compared using MD simulations, and distinctly different dielectric behaviors are observed at short as well as at long times.
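The dipole-fluctuation route that approaches like MDEC build on is the standard linear-response formula ε = ε∞ + (⟨M²⟩ − ⟨M⟩²)/(3ε₀VkT), where M is the total dipole moment of the simulation box. A minimal sketch with a synthetic dipole series standing in for an MD trajectory (the box size, temperature, ε∞, and dipole statistics are all illustrative):

```python
import random

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
KB = 1.380649e-23         # Boltzmann constant, J/K
DEBYE = 3.33564e-30       # 1 Debye in C·m

def dielectric_constant(M_debye, eps_inf, volume_m3, T):
    """Static dielectric constant from total-dipole fluctuations:
    eps = eps_inf + (<M^2> - <M>^2) / (3 eps0 V kB T), M given in Debye."""
    M = [m * DEBYE for m in M_debye]
    mean = sum(M) / len(M)
    var = sum((m - mean) ** 2 for m in M) / len(M)
    return eps_inf + var / (3.0 * EPS0 * volume_m3 * KB * T)

# Synthetic total-dipole samples (Debye) for a hypothetical 30 Å box at 300 K;
# a real calculation would read M(t) from the nonpolarizable MD trajectory.
random.seed(0)
M_samples = [random.gauss(0.0, 25.0) for _ in range(5000)]
V = (30e-10) ** 3
eps = dielectric_constant(M_samples, 2.0, V, 300.0)
print(eps)
```

In the MDEC scheme, ε∞ itself comes from the DFT polarizabilities via the refractive index, which is what couples the two levels of theory described above.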
Nucleic acids: theory and computer simulation, Y2K.
Beveridge, D L; McConnell, K J
2000-04-01
Molecular dynamics simulations on DNA and RNA that include solvent are now being performed under realistic environmental conditions of water activity and salt. Improvements to force-fields and treatments of long-range interactions have significantly increased the reliability of simulations. New studies of sequence effects, axis bending, solvation and conformational transitions have appeared.
ERIC Educational Resources Information Center
Jafari, Mina; Welden, Alicia Rae; Williams, Kyle L.; Winograd, Blair; Mulvihill, Ellen; Hendrickson, Heidi P.; Lenard, Michael; Gottfried, Amy; Geva, Eitan
2017-01-01
In this paper, we report on the implementation of a novel compute-to-learn pedagogy, which is based upon the theories of situated cognition and meaningful learning. The "compute-to-learn" pedagogy is designed to simulate an authentic research experience as part of the undergraduate curriculum, including project development, teamwork,…
NASA Astrophysics Data System (ADS)
Aboona, Bassam; Holt, Jeremy
2017-09-01
Chiral effective field theory provides a modern framework for understanding the structure and dynamics of nuclear many-body systems. Recent works have had much success in applying the theory to describe the ground- and excited-state properties of light and medium-mass atomic nuclei when combined with ab initio numerical techniques. Our aim is to extend the application of chiral effective field theory to describe the nuclear equation of state required for supercomputer simulations of core-collapse supernovae. Given the large range of densities, temperatures, and proton fractions probed during stellar core collapse, microscopic calculations of the equation of state require large computational resources on the order of one million CPU hours. We investigate the use of graphics processing units (GPUs) to significantly reduce the computational cost of these calculations, which will enable a more accurate and precise description of this important input to numerical astrophysical simulations. Cyclotron Institute at Texas A&M, NSF Grant: PHY 1659847, DOE Grant: DE-FG02-93ER40773.
NASA Astrophysics Data System (ADS)
Arendt, V.; Shalchi, A.
2018-06-01
We explore numerically the transport of energetic particles in a turbulent magnetic field configuration. A test-particle code is employed to compute running diffusion coefficients as well as particle distribution functions in the different directions of space. Our numerical findings are compared with models commonly used in diffusion theory such as Gaussian distribution functions and solutions of the cosmic ray Fokker-Planck equation. Furthermore, we compare the running diffusion coefficients across the mean magnetic field with solutions obtained from the time-dependent version of the unified non-linear transport theory. In most cases we find that particle distribution functions are indeed of Gaussian form as long as a two-component turbulence model is employed. For turbulence setups with reduced dimensionality, however, the Gaussian distribution can no longer be obtained. It is also shown that the unified non-linear transport theory agrees with simulated perpendicular diffusion coefficients as long as the pure two-dimensional model is excluded.
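The two diagnostics described above, running diffusion coefficients and tests of Gaussianity of the particle distribution, can be illustrated on the simplest possible transport model. The sketch below uses an unbiased 1-D random walk as a stand-in for the test-particle code; step size, time step, and ensemble size are illustrative:

```python
import random

random.seed(7)

def running_diffusion(n_particles=2000, n_steps=400, dt=1.0, step=1.0):
    """Running diffusion coefficient d(t) = <x²>/(2t) for an unbiased 1-D
    random walk, plus the excess kurtosis of the final displacements
    (≈ 0 for a Gaussian distribution)."""
    x = [0.0] * n_particles
    d_run = []
    for s in range(1, n_steps + 1):
        for i in range(n_particles):
            x[i] += step if random.random() < 0.5 else -step
        msd = sum(xi * xi for xi in x) / n_particles
        d_run.append(msd / (2.0 * s * dt))
    m2 = sum(xi ** 2 for xi in x) / n_particles
    m4 = sum(xi ** 4 for xi in x) / n_particles
    return d_run, m4 / m2 ** 2 - 3.0

d_run, excess_kurtosis = running_diffusion()
# d(t) plateaus at step²/(2·dt) = 0.5; near-zero excess kurtosis
# signals the Gaussian distribution expected in the diffusive regime.
print(d_run[-1], excess_kurtosis)
```

In the turbulence problem above the interesting physics is exactly the departure from this picture: reduced-dimensionality turbulence produces non-Gaussian distributions, which a kurtosis-style diagnostic would flag.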
NASA Technical Reports Server (NTRS)
Seldner, K.
1976-01-01
The development of control systems for jet engines requires a real-time computer simulation. The simulation provides an effective tool for evaluating control concepts and problem areas prior to actual engine testing. The development and use of a real-time simulation of the Pratt and Whitney F100-PW100 turbofan engine is described. The simulation was used in a multi-variable optimal controls research program using linear quadratic regulator theory. The simulation is used to generate linear engine models at selected operating points and evaluate the control algorithm. To reduce the complexity of the design, it is desirable to reduce the order of the linear model. A technique to reduce the order of the model is discussed. Selected results between high and low order models are compared. The LQR control algorithms can be programmed on a digital computer. This computer will control the engine simulation over the desired flight envelope.
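The LQR design step described above reduces, for each linearized operating-point model, to solving a Riccati equation for the optimal feedback gain. A minimal sketch for a scalar (first-order) model, with hypothetical numbers standing in for a reduced-order engine mode:

```python
def dlqr_scalar(a, b, q, r, iters=500):
    """Infinite-horizon discrete-time LQR gain for a scalar model
    x[k+1] = a·x[k] + b·u[k] with cost sum(q·x² + r·u²), found by
    fixed-point iteration of the Riccati equation."""
    p = q
    for _ in range(iters):
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    return a * b * p / (r + b * b * p)   # optimal feedback u = -K·x

# Hypothetical slightly unstable linearized mode (a > 1), e.g. a spool-speed
# deviation from a trim point; q and r weight state error versus control effort.
K = dlqr_scalar(a=1.05, b=0.1, q=1.0, r=0.5)
a_cl = 1.05 - 0.1 * K
print(K, abs(a_cl) < 1.0)   # the closed loop a − bK is stable
```

For the full multi-variable engine problem, a and b become matrices and the same value iteration runs on the matrix Riccati equation; the model-order reduction discussed above shrinks those matrices before this step.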
Bardhan, Jaydeep P; Knepley, Matthew G; Anitescu, Mihai
2009-03-14
The importance of electrostatic interactions in molecular biology has driven extensive research toward the development of accurate and efficient theoretical and computational models. Linear continuum electrostatic theory has been surprisingly successful, but the computational costs associated with solving the associated partial differential equations (PDEs) preclude the theory's use in most dynamical simulations. Modern generalized-Born models for electrostatics can reproduce PDE-based calculations to within a few percent and are extremely computationally efficient but do not always faithfully reproduce interactions between chemical groups. Recent work has shown that a boundary-integral-equation formulation of the PDE problem leads naturally to a new approach called boundary-integral-based electrostatics estimation (BIBEE) to approximate electrostatic interactions. In the present paper, we prove that the BIBEE method can be used to rigorously bound the actual continuum-theory electrostatic free energy. The bounds are validated using a set of more than 600 proteins. Detailed numerical results are presented for structures of the peptide met-enkephalin taken from a molecular-dynamics simulation. These bounds, in combination with our demonstration that the BIBEE methods accurately reproduce pairwise interactions, suggest a new approach toward building a highly accurate yet computationally tractable electrostatic model.
Dynamics of Numerics & Spurious Behaviors in CFD Computations. Revised
NASA Technical Reports Server (NTRS)
Yee, Helen C.; Sweby, Peter K.
1997-01-01
The global nonlinear behavior of finite discretizations for constant time steps and fixed or adaptive grid spacings is studied using tools from dynamical systems theory. Detailed analysis of commonly used temporal and spatial discretizations for simple model problems is presented. The role of dynamics in the understanding of long time behavior of numerical integration and the nonlinear stability, convergence, and reliability of using time-marching approaches for obtaining steady-state numerical solutions in computational fluid dynamics (CFD) is explored. The study is complemented with examples of spurious behavior observed in steady and unsteady CFD computations. The CFD examples were chosen to illustrate non-apparent spurious behavior that was difficult to detect without extensive grid and temporal refinement studies and some knowledge from dynamical systems theory. Studies revealed the various possible dangers of misinterpreting numerical simulation of realistic complex flows that are constrained by available computing power. In large scale computations where the physics of the problem under study is not well understood and numerical simulations are the only viable means of solution, extreme care must be taken in both computation and interpretation of the numerical data. The goal of this paper is to explore the important role that dynamical systems theory can play in the understanding of the global nonlinear behavior of numerical algorithms and to aid the identification of the sources of numerical uncertainties in CFD.
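The spurious asymptotic behavior discussed above can be demonstrated on a model problem of exactly the kind the paper analyzes. The sketch below applies explicit Euler to the logistic ODE u' = αu(1 − u); the time steps are chosen to show the dt-dependent bifurcation (this is an illustration of the phenomenon, not the paper's CFD examples):

```python
def euler_logistic(u0, dt, n_steps, alpha=1.0):
    """Explicit Euler for u' = alpha·u·(1 - u); the exact solution tends
    monotonically to the steady state u = 1 for any u0 in (0, 1)."""
    u = u0
    for _ in range(n_steps):
        u = u + dt * alpha * u * (1.0 - u)
    return u

# Small time step: the iteration reaches the true steady state u = 1.
print(euler_logistic(0.1, dt=0.5, n_steps=200))

# Large time step (alpha·dt > 2): u = 1 is destabilized as a fixed point of
# the map, and the iteration settles on a spurious period-2 oscillation
# that corresponds to no trajectory of the underlying ODE.
print(euler_logistic(0.1, dt=2.2, n_steps=200),
      euler_logistic(0.1, dt=2.2, n_steps=201))
```

The danger highlighted in the paper is that in a CFD code such a spurious state can look like a plausible converged solution; only a time-step refinement study (or dynamical-systems analysis of the discretization) exposes it.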
Optimized Materials From First Principles Simulations: Are We There Yet?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Galli, G; Gygi, F
2005-07-26
In the past thirty years, the use of scientific computing has become pervasive in all disciplines: collection and interpretation of most experimental data is carried out using computers, and physical models in computable form, with various degrees of complexity and sophistication, are utilized in all fields of science. However, full prediction of physical and chemical phenomena based on the basic laws of Nature, using computer simulations, is a revolution still in the making, and it involves some formidable theoretical and computational challenges. We illustrate the progress and successes obtained in recent years in predicting fundamental properties of materials in condensed phases and at the nanoscale, using ab-initio, quantum simulations. We also discuss open issues related to the validation of the approximate, first principles theories used in large scale simulations, and the resulting complex interplay between computation and experiment. Finally, we describe some applications, with focus on nanostructures and liquids, both at ambient and under extreme conditions.
Effective potential kinetic theory for strongly coupled plasmas
NASA Astrophysics Data System (ADS)
Baalrud, Scott D.; Daligault, Jérôme
2016-11-01
The effective potential theory (EPT) is a recently proposed method for extending traditional plasma kinetic and transport theory into the strongly coupled regime. Validation from experiments and molecular dynamics simulations has shown it to be accurate up to the onset of liquid-like correlation parameters (corresponding to Γ ≃ 10-50 for the one-component plasma, depending on the process of interest). Here, this theory is briefly reviewed along with comparisons between the theory and molecular dynamics simulations for self-diffusivity and viscosity of the one-component plasma. A number of new results are also provided, including calculations of friction coefficients, energy exchange rates, stopping power, and mobility. The theory is also cast in the Landau and Fokker-Planck kinetic forms, which may prove useful for enabling efficient kinetic computations.
Thermodynamics of ferrofluids in applied magnetic fields.
Elfimova, Ekaterina A; Ivanov, Alexey O; Camp, Philip J
2013-10-01
The thermodynamic properties of ferrofluids in applied magnetic fields are examined using theory and computer simulation. The dipolar hard sphere model is used. The second and third virial coefficients (B(2) and B(3)) are evaluated as functions of the dipolar coupling constant λ and the Langevin parameter α. The formula for B(3) for a system in an applied field is different from that in the zero-field case, and a derivation is presented. The formulas are compared to results from Mayer-sampling calculations, and the trends with increasing λ and α are examined. Very good agreement between theory and computation is demonstrated for the realistic values λ≤2. The analytical formulas for the virial coefficients are incorporated into various forms of virial expansion, designed to minimize the effects of truncation. The theoretical results for the equation of state are compared against results from Monte Carlo simulations. In all cases, the so-called logarithmic free energy theory is seen to be superior. In this theory, the virial expansion of the Helmholtz free energy is re-summed into a logarithmic function. Its success is due to the approximate representation of high-order terms in the virial expansion, while retaining the exact low-concentration behavior. The theory also yields the magnetization, and a comparison with simulation results and a competing modified mean-field theory shows excellent agreement. Finally, the putative field-dependent critical parameters for the condensation transition are obtained and compared against existing simulation results for the Stockmayer fluid. Dipolar hard spheres do not undergo the transition, but the presence of isotropic attractions, as in the Stockmayer fluid, gives rise to condensation even in zero field.
A comparison of the relative changes in critical parameters with increasing field strength shows excellent agreement between theory and simulation, showing that the theoretical treatment of the dipolar interactions is robust.
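The Mayer-sampling machinery referenced above can be sketched on the hard-core reference system, i.e. the λ → 0 limit of the dipolar hard sphere model, where B₂ = −2π∫(e^(−βu(r)) − 1)r²dr has the exact value 2πσ³/3. The field- and λ-dependent coefficients in the paper require the dipolar interaction and orientational averaging on top of this skeleton:

```python
import math, random

random.seed(3)

def b2_hard_spheres(sigma=1.0, r_max=2.0, n_samples=200000):
    """Second virial coefficient B2 = -2π ∫ (e^{-βu(r)} - 1) r² dr by plain
    Monte Carlo over r in [0, r_max]; for hard spheres the Mayer function
    f(r) = e^{-βu} - 1 is -1 inside contact and 0 outside."""
    acc = 0.0
    for _ in range(n_samples):
        r = random.uniform(0.0, r_max)
        mayer_f = -1.0 if r < sigma else 0.0
        acc += mayer_f * r * r
    # uniform-sampling estimate of the integral is (mean integrand) * r_max
    return -2.0 * math.pi * (acc / n_samples) * r_max

b2 = b2_hard_spheres()
print(b2, 2.0 * math.pi / 3.0)   # Monte Carlo estimate vs exact 2πσ³/3
```

For the dipolar system at finite λ and α, the sampled configuration includes two orientations as well as r, and the Boltzmann weight carries the field term, which is where the field-dependent B(3) formula derived in the paper departs from the zero-field case.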
Determination of partial molar volumes from free energy perturbation theory
Vilseck, Jonah Z.; Tirado-Rives, Julian
2016-01-01
Partial molar volume is an important thermodynamic property that gives insights into molecular size and intermolecular interactions in solution. Theoretical frameworks for determining the partial molar volume (V°) of a solvated molecule generally apply Scaled Particle Theory or Kirkwood-Buff theory. With the current abilities to perform long molecular dynamics and Monte Carlo simulations, more direct methods are gaining popularity, such as computing V° directly as the difference in computed volume from two simulations, one with a solute present and another without. Thermodynamically, V° can also be determined as the pressure derivative of the free energy of solvation in the limit of infinite dilution. Both approaches are considered herein with the use of free energy perturbation (FEP) calculations to compute the necessary free energies of solvation at elevated pressures. Absolute and relative partial molar volumes are computed for benzene and benzene derivatives using the OPLS-AA force field. The mean unsigned error for all molecules is 2.8 cm³ mol⁻¹. The present methodology should find use in many contexts such as the development and testing of force fields for use in computer simulations of organic and biomolecular systems, as a complement to related experimental studies, and to develop a deeper understanding of solute-solvent interactions. PMID:25589343
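The pressure-derivative route described above, V° = (∂ΔG_solv/∂P)_T at infinite dilution, reduces numerically to a finite difference of FEP results at elevated pressures. A minimal sketch in which the ΔG values are synthetic stand-ins for FEP output (the numbers are chosen to land near the ~83 cm³ mol⁻¹ scale measured for benzene in water); the real work is the unit bookkeeping:

```python
def partial_molar_volume(dG_minus, dG_plus, dP_bar):
    """Central-difference estimate of V° = (dG_solv/dP)_T.
    Inputs: ΔG_solv in kcal/mol at P - ΔP and P + ΔP, with ΔP in bar.
    Returns V° in cm³/mol (1 kcal/mol = 41840 cm³·bar/mol)."""
    dG_dP = (dG_plus - dG_minus) / (2.0 * dP_bar)   # kcal mol⁻¹ bar⁻¹
    return dG_dP * 41840.0

# Hypothetical benzene-like inputs: ΔG_solv rises by about 1 kcal/mol per
# 500 bar of applied pressure around ambient conditions.
v = partial_molar_volume(dG_minus=-0.87 - 0.993, dG_plus=-0.87 + 0.993,
                         dP_bar=500.0)
print(v)   # ≈ 83 cm³/mol
```

In practice the statistical noise of the two FEP free energies is divided by 2ΔP, so ΔP must be large enough for the signal to dominate, while staying in the regime where ΔG_solv is effectively linear in P.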
Computation in generalised probabilistic theories
NASA Astrophysics Data System (ADS)
Lee, Ciarán M.; Barrett, Jonathan
2015-08-01
From the general difficulty of simulating quantum systems using classical systems, and in particular the existence of an efficient quantum algorithm for factoring, it is likely that quantum computation is intrinsically more powerful than classical computation. At present, the best upper bound known for the power of quantum computation is that BQP ⊆ AWPP, where AWPP is a classical complexity class (known to be included in PP, hence PSPACE). This work investigates limits on computational power that are imposed by simple physical, or information theoretic, principles. To this end, we define a circuit-based model of computation in a class of operationally-defined theories more general than quantum theory, and ask: what is the minimal set of physical assumptions under which the above inclusions still hold? We show that given only an assumption of tomographic locality (roughly, that multipartite states and transformations can be characterized by local measurements), efficient computations are contained in AWPP. This inclusion still holds even without assuming a basic notion of causality (where the notion is, roughly, that probabilities for outcomes cannot depend on future measurement choices). Following Aaronson, we extend the computational model by allowing post-selection on measurement outcomes. Aaronson showed that the corresponding quantum complexity class, PostBQP, is equal to PP. Given only the assumption of tomographic locality, the inclusion in PP still holds for post-selected computation in general theories. Hence in a world with post-selection, quantum theory is optimal for computation in the space of all operational theories. We then consider whether one can obtain relativized complexity results for general theories. It is not obvious how to define a sensible notion of a computational oracle in the general framework that reduces to the standard notion in the quantum case.
Nevertheless, it is possible to define computation relative to a 'classical oracle'. Then, we show there exists a classical oracle relative to which efficient computation in any theory satisfying the causality assumption does not include NP.
Error Estimation and Uncertainty Propagation in Computational Fluid Mechanics
NASA Technical Reports Server (NTRS)
Zhu, J. Z.; He, Guowei; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
Numerical simulation has now become an integral part of the engineering design process. Critical design decisions are routinely made based on the simulation results and conclusions. Verification and validation of the reliability of the numerical simulation is therefore vitally important in the engineering design processes. We propose to develop theories and methodologies that can automatically provide quantitative information about the reliability of the numerical simulation by estimating numerical approximation error, computational model induced errors and the uncertainties contained in the mathematical models so that the reliability of the numerical simulation can be verified and validated. We also propose to develop and implement methodologies and techniques that can control the error and uncertainty during the numerical simulation so that the reliability of the numerical simulation can be improved.
NASA Astrophysics Data System (ADS)
Bruni, Marco; Thomas, Daniel B.; Wands, David
2014-02-01
We present the first calculation of an intrinsically relativistic quantity, the leading-order correction to Newtonian theory, in fully nonlinear cosmological large-scale structure studies. Traditionally, nonlinear structure formation in standard ΛCDM cosmology is studied using N-body simulations, based on Newtonian gravitational dynamics on an expanding background. When one derives the Newtonian regime in a way that is a consistent approximation to the Einstein equations, the first relativistic correction to the usual Newtonian scalar potential is a gravitomagnetic vector potential, giving rise to frame dragging. At leading order, this vector potential does not affect the matter dynamics, so it can be computed from Newtonian N-body simulations. We explain how we compute the vector potential from simulations in ΛCDM and examine its magnitude relative to the scalar potential, finding that the power spectrum of the vector potential is of order 10^-5 times the scalar power spectrum over the range of nonlinear scales we consider. On these scales the vector potential is up to two orders of magnitude larger than the value predicted by second-order perturbation theory extrapolated to the same scales. We also discuss some possible observable effects and future developments.
Simulation of X-ray absorption spectra with orthogonality constrained density functional theory.
Derricotte, Wallace D; Evangelista, Francesco A
2015-06-14
Orthogonality constrained density functional theory (OCDFT) [F. A. Evangelista, P. Shushkov and J. C. Tully, J. Phys. Chem. A, 2013, 117, 7378] is a variational time-independent approach for the computation of electronic excited states. In this work we extend OCDFT to compute core-excited states and generalize the original formalism to determine multiple excited states. Benchmark computations on a set of 13 small molecules and 40 excited states show that unshifted OCDFT/B3LYP excitation energies have a mean absolute error of 1.0 eV. Contrary to time-dependent DFT, OCDFT excitation energies for first- and second-row elements are computed with near-uniform accuracy. OCDFT core excitation energies are insensitive to the choice of the functional and the amount of Hartree-Fock exchange. We show that OCDFT is a powerful tool for the assignment of X-ray absorption spectra of large molecules by simulating the gas-phase near-edge spectrum of adenine and thymine.
NASA Astrophysics Data System (ADS)
Lindsey, Rebecca; Goldman, Nir; Fried, Laurence
2017-06-01
Atomistic modeling of chemistry at extreme conditions remains a challenge, despite continuing advances in computing resources and simulation tools. While first-principles methods provide a powerful predictive tool, the time and length scales associated with chemistry at extreme conditions (ns and μm, respectively) largely preclude the extension of such models to molecular dynamics. In this work, we develop a simulation approach that retains the accuracy of density functional theory (DFT) while decreasing computational effort by several orders of magnitude. We generate n-body descriptions of atomic interactions by mapping forces arising from short DFT trajectories onto simple Chebyshev polynomial series. We examine the importance of including greater-than-two-body interactions and model transferability to different state points, and discuss approaches to ensure a smooth and reasonable model shape outside the distance domain sampled by the DFT training set. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
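The force-matching scheme described above can be sketched in miniature. The snippet below is a hypothetical illustration of my own (not the authors' code): it represents a pair force as a Chebyshev series on a finite distance domain, fitting it here to a known Lennard-Jones force, whereas the actual method fits the coefficients to forces from short DFT trajectories.

```python
import math

def lj_force(r, eps=1.0, sigma=1.0):
    """Lennard-Jones pair force F(r) = -dU/dr (stand-in for DFT force data)."""
    sr6 = (sigma / r) ** 6
    return 24.0 * eps * (2.0 * sr6 ** 2 - sr6) / r

def cheb_fit(f, a, b, n):
    """Coefficients of the n-term Chebyshev interpolant of f on [a, b]."""
    nodes = [math.cos(math.pi * (k + 0.5) / n) for k in range(n)]
    fvals = [f(0.5 * (b - a) * x + 0.5 * (b + a)) for x in nodes]
    coeffs = [2.0 / n * sum(fvals[k] * math.cos(math.pi * j * (k + 0.5) / n)
                            for k in range(n)) for j in range(n)]
    coeffs[0] *= 0.5
    return coeffs

def cheb_eval(coeffs, a, b, r):
    """Evaluate the Chebyshev series at r via the Clenshaw recurrence."""
    x = (2.0 * r - a - b) / (b - a)
    b1 = b2 = 0.0
    for c in reversed(coeffs[1:]):
        b1, b2 = 2.0 * x * b1 - b2 + c, b1
    return x * b1 - b2 + coeffs[0]

# Fit the force on the sampled distance domain [0.95, 2.5] (in units of sigma).
coeffs = cheb_fit(lj_force, 0.95, 2.5, 40)
```

Outside the fitted domain the series is untrustworthy, which is exactly the model-shape issue the abstract raises; production schemes constrain the short-range and long-range behavior separately.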
Characterizing Representational Learning: A Combined Simulation and Tutorial on Perturbation Theory
ERIC Educational Resources Information Center
Kohnle, Antje; Passante, Gina
2017-01-01
Analyzing, constructing, and translating between graphical, pictorial, and mathematical representations of physics ideas and reasoning flexibly through them ("representational competence") is a key characteristic of expertise in physics but is a challenge for learners to develop. Interactive computer simulations and University of…
SAFSIM theory manual: A computer program for the engineering simulation of flow systems
NASA Astrophysics Data System (ADS)
Dobranich, Dean
1993-12-01
SAFSIM (System Analysis Flow SIMulator) is a FORTRAN computer program for simulating the integrated performance of complex flow systems. SAFSIM provides sufficient versatility to allow the engineering simulation of almost any system, from a backyard sprinkler system to a clustered nuclear reactor propulsion system. In addition to versatility, speed and robustness are primary SAFSIM development goals. SAFSIM contains three basic physics modules: (1) a fluid mechanics module with flow network capability; (2) a structure heat transfer module with multiple convection and radiation exchange surface capability; and (3) a point reactor dynamics module with reactivity feedback and decay heat capability. Any or all of the physics modules can be implemented, as the problem dictates. SAFSIM can be used for compressible and incompressible, single-phase, multicomponent flow systems. Both the fluid mechanics and structure heat transfer modules employ a one-dimensional finite element modeling approach. This document contains a description of the theory incorporated in SAFSIM, including the governing equations, the numerical methods, and the overall system solution strategies.
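The flow-network idea can be illustrated with a toy solver of my own (not SAFSIM, and linear rather than SAFSIM's general finite element formulation): for pipes with linear conductances, mass conservation at each interior node gives a linear system for the nodal pressures.

```python
def solve_network(n_nodes, pipes, fixed):
    """Solve nodal pressures in a linear flow network.

    pipes: list of (i, j, C) with flow Q_ij = C * (P_i - P_j).
    fixed: dict {node: prescribed pressure} for boundary nodes.
    Each interior node satisfies: sum of flows in = sum of flows out.
    """
    unknown = [i for i in range(n_nodes) if i not in fixed]
    idx = {node: k for k, node in enumerate(unknown)}
    m = len(unknown)
    A = [[0.0] * m for _ in range(m)]
    rhs = [0.0] * m
    for i, j, C in pipes:
        for a_node, b_node in ((i, j), (j, i)):
            if a_node in idx:
                A[idx[a_node]][idx[a_node]] += C
                if b_node in idx:
                    A[idx[a_node]][idx[b_node]] -= C
                else:
                    rhs[idx[a_node]] += C * fixed[b_node]
    # Gaussian elimination with partial pivoting.
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m):
                A[r][c] -= f * A[col][c]
            rhs[r] -= f * rhs[col]
    x = [0.0] * m
    for r in range(m - 1, -1, -1):
        s = rhs[r] - sum(A[r][c] * x[c] for c in range(r + 1, m))
        x[r] = s / A[r][r]
    P = dict(fixed)
    P.update({node: x[idx[node]] for node in unknown})
    return P

# Two pipes in series: node 0 held at P = 100, node 2 at P = 0.
pressures = solve_network(3, [(0, 1, 2.0), (1, 2, 1.0)], {0: 100.0, 2: 0.0})
```

For the two pipes in series, conservation at node 1 gives 2*(100 - P1) = P1, so P1 = 200/3.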
Relativistic interpretation of Newtonian simulations for cosmic structure formation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fidler, Christian; Tram, Thomas; Crittenden, Robert
2016-09-01
The standard numerical tools for studying the non-linear collapse of matter are Newtonian N-body simulations. Previous work has shown that these simulations are in accordance with General Relativity (GR) up to first order in perturbation theory, provided that the effects from radiation can be neglected. In this paper we show that the present-day matter density receives more than 1% corrections from radiation on large scales if Newtonian simulations are initialised before z = 50. We provide a relativistic framework in which unmodified Newtonian simulations are compatible with linear GR even in the presence of radiation. Our idea is to use GR perturbation theory to keep track of the evolution of relativistic species and of the relativistic space-time consistent with the Newtonian trajectories computed in N-body simulations. If metric potentials are sufficiently small, they can be computed using a first-order Einstein-Boltzmann code such as CLASS. We make this idea rigorous by defining a class of GR gauges, the Newtonian motion gauges, which are defined such that matter particles follow Newtonian trajectories. We construct a simple example of a relativistic space-time within which unmodified Newtonian simulations can be interpreted.
Stimulation from Simulation? A Teaching Model of Hillslope Hydrology for Use on Microcomputers.
ERIC Educational Resources Information Center
Burt, Tim; Butcher, Dave
1986-01-01
The design and use of a simple computer model which simulates hillslope hydrology are described in a teaching context. The model shows that a relatively complex environmental system can be constructed on the basis of a simple but realistic theory, thus allowing students to simulate the hydrological response of real hillslopes. (Author/TRS)
ERIC Educational Resources Information Center
Schenk, Robert E.
Intended for use with college students in introductory macroeconomics or American economic history courses, these two computer simulations of two basic macroeconomic models--a simple Keynesian-type model and a quantity-theory-of-money model--present largely incompatible explanations of the Great Depression. Written in Basic, the simulations are…
NASA Astrophysics Data System (ADS)
Wang, DeLiang; Terman, David
1995-01-01
A novel class of locally excitatory, globally inhibitory oscillator networks (LEGION) is proposed and investigated analytically and by computer simulation. The model of each oscillator corresponds to a standard relaxation oscillator with two time scales. The network exhibits a mechanism of selective gating, whereby an oscillator jumping up to its active phase rapidly recruits the oscillators stimulated by the same pattern, while preventing other oscillators from jumping up. We show analytically that with the selective gating mechanism the network rapidly achieves both synchronization within blocks of oscillators that are stimulated by connected regions and desynchronization between different blocks. Computer simulations demonstrate LEGION's promising ability for segmenting multiple input patterns in real time. This model lays a physical foundation for the oscillatory correlation theory of feature binding, and may provide an effective computational framework for scene segmentation and figure/ground segregation.
Cognitive Tools for Assessment and Learning in a High Information Flow Environment.
ERIC Educational Resources Information Center
Lajoie, Susanne P.; Azevedo, Roger; Fleiszer, David M.
1998-01-01
Describes the development of a simulation-based intelligent tutoring system for nurses working in a surgical intensive care unit. Highlights include situative learning theories and models of instruction, modeling expertise, complex decision making, linking theories of learning to the design of computer-based learning environments, cognitive task…
The expansion of polarization charge layers into magnetized vacuum - Theory and computer simulations
NASA Technical Reports Server (NTRS)
Galvez, Miguel; Borovsky, Joseph E.
1991-01-01
The formation and evolution of polarization charge layers on cylindrical plasma streams moving in vacuum are investigated using analytic theory and 2D electrostatic particle-in-cell computer simulations. It is shown that the behavior of the electron charge layer goes through three stages. An early time expansion is driven by electrostatic repulsion of electrons in the charge layer. At the intermediate stage, the simulations show that the electron-charge-layer expansion is halted by the positively charged plasma stream. Electrons close to the stream are pulled back to the stream and a second electron expansion follows in time. At the late stage, the expansion of the ion charge layer along the magnetic field lines accompanies the electron expansion to form an ambipolar expansion. It is found that the velocities of these electron-ion expansions greatly exceed the velocities of ambipolar expansions which are driven by plasma temperatures.
Whitley, Heather D.; Scullard, Christian R.; Benedict, Lorin X.; ...
2014-12-04
Here, we present a discussion of kinetic theory treatments of linear electrical and thermal transport in hydrogen plasmas, for a regime of interest to inertial confinement fusion applications. In order to assess the accuracy of one of the more involved of these approaches, classical Lenard-Balescu theory, we perform classical molecular dynamics simulations of hydrogen plasmas using 2-body quantum statistical potentials and compute both electrical and thermal conductivity from our particle trajectories using the Kubo approach. Our classical Lenard-Balescu results employing the identical statistical potentials agree well with the simulations.
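The Kubo approach mentioned above obtains a transport coefficient as the time integral of an equilibrium current autocorrelation function. The sketch below is a self-contained stand-in of my own: instead of MD trajectories it uses a synthetic, exponentially correlated "current" whose correlation-time integral is known, so the Green-Kubo machinery can be checked end to end.

```python
import math
import random

def autocorrelation(x, max_lag):
    """C(k) = <x(t) x(t+k)>, averaged over available time origins."""
    n = len(x)
    return [sum(x[t] * x[t + k] for t in range(n - k)) / (n - k)
            for k in range(max_lag + 1)]

def kubo_integral(acf, dt):
    """Trapezoidal time integral of the autocorrelation function."""
    return dt * (0.5 * acf[0] + sum(acf[1:-1]) + 0.5 * acf[-1])

# Synthetic stationary current: Ornstein-Uhlenbeck process with unit
# variance and correlation time tau, so the ACF integral should be ~tau.
rng = random.Random(1)
tau, dt, nsteps = 1.0, 0.05, 50000
a = math.exp(-dt / tau)
s = math.sqrt(1.0 - a * a)
j, series = 0.0, []
for _ in range(nsteps):
    j = a * j + s * rng.gauss(0.0, 1.0)
    series.append(j)

acf = autocorrelation(series, int(10 * tau / dt))
estimate = kubo_integral(acf, dt)   # expect roughly tau = 1.0
# For electrical conductivity the Green-Kubo prefactor V/(kB*T)
# would multiply this integral of the charge-current autocorrelation.
```

The choice of the integration cutoff (here ten correlation times) is the usual practical compromise between truncation bias and statistical noise in the ACF tail.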
NASA Astrophysics Data System (ADS)
John, Christopher; Spura, Thomas; Habershon, Scott; Kühne, Thomas D.
2016-04-01
We present a simple and accurate computational method which facilitates ab initio path-integral molecular dynamics simulations, where the quantum-mechanical nature of the nuclei is explicitly taken into account, at essentially no additional computational cost in comparison to the corresponding calculation using classical nuclei. The predictive power of the proposed quantum ring-polymer contraction method is demonstrated by computing various static and dynamic properties of liquid water at ambient conditions using density functional theory. This development will enable routine inclusion of nuclear quantum effects in ab initio molecular dynamics simulations of condensed-phase systems.
A theoretical framework for strain-related trabecular bone maintenance and adaptation.
Ruimerman, R; Hilbers, P; van Rietbergen, B; Huiskes, R
2005-04-01
It is assumed that the density and morphology of trabecular bone are partially controlled by mechanical forces. How these effects are expressed in the local metabolic functions of osteoclast resorption and osteoblast formation is not known. In order to investigate possible mechano-biological pathways for these mechanisms, we have proposed a mathematical theory (Nature 405 (2000) 704). This theory is based on hypothetical osteocyte stimulation of osteoblast bone formation, as an effect of elevated strain in the bone matrix, and on a role for microcracks and disuse in promoting osteoclast resorption. Applied in a 2-D finite element analysis (FEA) model, the theory explained the formation of trabecular patterns. In this article we present a 3-D FEA model based on the same theory and investigate its ability to predict the morphological effects of metabolic reactions to mechanical loads. The computations simulated the development of trabecular morphological details during growth reasonably realistically, relative to measurements in growing pigs. They confirmed that the proposed mechanisms also inherently lead to optimal stress transfer. Alternative loading directions produced new trabecular orientations. Reduction of load reduced trabecular thickness, connectivity, and mass in the simulation, as is seen in disuse osteoporosis. Simulating the effects of estrogen deficiency through increased osteoclast resorption frequencies likewise produced osteoporotic morphologies, as seen in post-menopausal osteoporosis. We conclude that the theory provides a suitable computational framework for investigating hypothetical relationships between bone loading and metabolic expressions.
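A drastically simplified scalar caricature of strain-regulated remodeling (my own toy, not the authors' 3-D FEA model, which resolves osteocyte signaling on actual trabecular architecture) already shows the disuse behavior described above: density relaxes to the point where the mechanical stimulus matches a set point, so a lower load yields a lower equilibrium density.

```python
def remodel(load, k_ref=1.0, rate=0.1, rho0=1.0, steps=2000):
    """Evolve a scalar bone density rho by d(rho)/dt = rate * (S - k_ref),
    with stimulus S = load / rho**2: under stress-controlled loading the
    strain-energy stimulus per unit density falls as density grows."""
    rho = rho0
    for _ in range(steps):
        stimulus = load / rho ** 2
        rho += rate * (stimulus - k_ref)
        rho = max(rho, 0.01)      # density cannot become negative
    return rho

rho_normal = remodel(load=4.0)    # steady state at sqrt(load / k_ref) = 2
rho_disuse = remodel(load=1.0)    # reduced load drives bone loss
```

Setting the update to zero gives the fixed point rho* = sqrt(load / k_ref), so halving the load stimulus lowers the equilibrium density, the scalar analogue of disuse osteoporosis in the abstract.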
ERIC Educational Resources Information Center
Moore, John W., Ed.
1988-01-01
Describes five computer software packages; four for MS-DOS Systems and one for Apple II. Included are SPEC20, an interactive simulation of a Bausch and Lomb Spectronic-20; a database for laboratory chemicals and programs for visualizing Boltzmann-like distributions, orbital plot for the hydrogen atom and molecular orbital theory. (CW)
Any Ontological Model of the Single Qubit Stabilizer Formalism must be Contextual
NASA Astrophysics Data System (ADS)
Lillystone, Piers; Wallman, Joel J.
Quantum computers allow us to easily solve some problems classical computers find hard. Non-classical improvements in computational power should be due to some non-classical property of quantum theory. Contextuality, a more general notion of non-locality, is a necessary, but not sufficient, resource for quantum speed-up. Proofs of contextuality can be constructed for the classically simulable stabilizer formalism. Previous proofs of stabilizer contextuality are known for 2 or more qubits, for example the Mermin-Peres magic square. In the work presented we extend these results and prove that any ontological model of the single qubit stabilizer theory must be contextual, as defined by R. Spekkens, and give a relation between our result and the Mermin-Peres square. By demonstrating that contextuality is present in the qubit stabilizer formalism we provide further insight into the contextuality present in quantum theory. Understanding the contextuality of classical sub-theories will allow us to better identify the physical properties of quantum theory required for computational speed up. This research was supported by CIFAR, the Government of Ontario, and the Government of Canada through NSERC and Industry Canada.
Computation of the radiation amplitude of oscillons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fodor, Gyula; Forgacs, Peter; LMPT, CNRS-UMR 6083, Universite de Tours, Parc de Grandmont, 37200 Tours
2009-03-15
The radiation loss of small-amplitude oscillons (very long-living, spatially localized, time-dependent solutions) in one-dimensional scalar field theories is computed in the small-amplitude expansion analytically using matched asymptotic series expansions and Borel summation. The amplitude of the radiation is beyond all orders in perturbation theory and the method used has been developed by Segur and Kruskal in Phys. Rev. Lett. 58, 747 (1987). Our results are in good agreement with those of long-time numerical simulations of oscillons.
Statistical substantiation of the van der Waals theory of inhomogeneous fluids
NASA Astrophysics Data System (ADS)
Baidakov, V. G.; Protsenko, S. P.; Chernykh, G. G.; Boltachev, G. Sh.
2002-04-01
Computer experiments on simulation of thermodynamic properties and structural characteristics of a Lennard-Jones fluid in one- and two-phase models have been performed for the purpose of checking the basic concepts of the van der Waals theory. Calculations have been performed by the method of molecular dynamics at cutoff radii of the intermolecular potential r_c,1 = 2.6σ and r_c,2 = 6.78σ. The phase equilibrium parameters, surface tension, and density distribution have been determined in a two-phase model with a flat liquid-vapor interface. A strong dependence of these properties on the value of r_c is shown. The (p, ρ, T) properties and correlation functions have been calculated in a homogeneous model for a stable and a metastable fluid. An equation of state for a Lennard-Jones fluid describing the stable, metastable, and labile regions has been built. It is shown that at T ≥ 1.1 the properties of a flat interface can, within the computer-experimental error, be described by the van der Waals square-gradient theory with an influence parameter κ independent of the density. Taking into account the density dependence of κ through the second moment of the direct correlation function deteriorates the agreement of the theory with the computer-simulation data. The contribution of terms of higher order than (∇ρ)^2 to the Helmholtz free energy of an inhomogeneous system has been considered. It is shown that taking into account terms proportional to (∇ρ)^4 leaves no way of obtaining agreement between theory and simulation data, while taking into consideration terms proportional to (∇ρ)^6 makes it possible to describe all the properties of a flat interface with adequate accuracy over the temperature range from the triple to the critical point.
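The square-gradient expression at the core of the van der Waals theory can be checked numerically in a toy setting: assuming a tanh density profile of width parameter d across the interface, the surface tension σ = ∫ κ (dρ/dz)^2 dz has the closed form κ(ρ_l - ρ_v)^2 / (6d). The profile and parameters below are illustrative, not the paper's Lennard-Jones data.

```python
import math

def drho_dz(z, delta, d):
    """Derivative of the profile rho(z) = rho_mid + (delta/2)*tanh(z/(2d)),
    where delta = rho_liquid - rho_vapor."""
    return delta / (4.0 * d) * (1.0 / math.cosh(z / (2.0 * d))) ** 2

def surface_tension(kappa, delta, d, zmax=10.0, n=4000):
    """Trapezoidal integral of kappa * (drho/dz)^2 across the interface."""
    h = 2.0 * zmax / n
    zs = [-zmax + h * i for i in range(n + 1)]
    vals = [kappa * drho_dz(z, delta, d) ** 2 for z in zs]
    return h * (0.5 * vals[0] + sum(vals[1:-1]) + 0.5 * vals[-1])

kappa, delta, d = 1.0, 0.7, 0.5
sigma_numeric = surface_tension(kappa, delta, d)
sigma_exact = kappa * delta ** 2 / (6.0 * d)
```

The closed form follows from ∫ sech^4(u) du = 4/3 over the real line, and the numerical quadrature reproduces it essentially exactly.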
Quantum simulation from the bottom up: the case of rebits
NASA Astrophysics Data System (ADS)
Enshan Koh, Dax; Yuezhen Niu, Murphy; Yoder, Theodore J.
2018-05-01
Typically, quantum mechanics is thought of as a linear theory with unitary evolution governed by the Schrödinger equation. While this is technically true and useful for a physicist, with regards to computation it is an unfortunately narrow point of view. Just as a classical computer can simulate highly nonlinear functions of classical states, so too can the more general quantum computer simulate nonlinear evolutions of quantum states. We detail one particular simulation of nonlinearity on a quantum computer, showing how the entire class of -unitary evolutions (on n qubits) can be simulated using a unitary, real-amplitude quantum computer (consisting of n + 1 qubits in total). These operators can be represented as the sum of a linear and antilinear operator, and add an intriguing new set of nonlinear quantum gates to the toolbox of the quantum algorithm designer. Furthermore, a subgroup of these nonlinear evolutions, called the -Cliffords, can be efficiently classically simulated, by making use of the fact that Clifford operators can simulate non-Clifford (in fact, non-linear) operators. This perspective of using the physical operators that we have to simulate non-physical ones that we do not is what we call bottom-up simulation, and we give some examples of its broader implications.
Density functional theory calculation of refractive indices of liquid-forming silicon oil compounds
NASA Astrophysics Data System (ADS)
Lee, Sanghun; Park, Sung Soo; Hagelberg, Frank
2012-02-01
A combination of quantum chemical calculation and molecular dynamics simulation is applied to compute refractive indices of liquid-forming silicon oils. The densities of these species are obtained from molecular dynamics simulations based on the NPT ensemble while the molecular polarizabilities are evaluated by density functional theory. This procedure is shown to yield results well compatible with available experimental data, suggesting that it represents a robust and economic route for determining the refractive indices of liquid-forming organic complexes containing silicon.
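The procedure above, density from molecular dynamics plus molecular polarizability from DFT, feeds into the Lorentz-Lorenz (Clausius-Mossotti) relation to give the refractive index. As a sketch, the snippet below applies that relation to liquid water using rough literature values (my assumption for illustration; the paper itself treats silicone oils).

```python
import math

def refractive_index(number_density, polarizability_volume):
    """Lorentz-Lorenz: (n^2 - 1)/(n^2 + 2) = (4*pi/3) * N * alpha',
    with N in molecules/A^3 and the polarizability volume alpha' in A^3."""
    x = 4.0 * math.pi / 3.0 * number_density * polarizability_volume
    return math.sqrt((1.0 + 2.0 * x) / (1.0 - x))

# Liquid water, approximate values: density 0.997 g/cm^3, M = 18.015 g/mol,
# mean polarizability volume ~1.45 A^3.
avogadro = 6.02214e23
n_density = 0.997 * avogadro / 18.015 * 1e-24   # molecules per A^3
n_water = refractive_index(n_density, 1.45)      # experiment: ~1.333
```

In the paper's workflow the number density would come from NPT molecular dynamics and the polarizability from a DFT calculation, rather than from tabulated values as here.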
Further studies using matched filter theory and stochastic simulation for gust loads prediction
NASA Technical Reports Server (NTRS)
Scott, Robert C.; Pototzky, Anthony S.; Perry, Boyd, III
1993-01-01
This paper describes two analysis methods -- one deterministic, the other stochastic -- for computing maximized and time-correlated gust loads for aircraft with nonlinear control systems. The first method is based on matched filter theory; the second is based on stochastic simulation. The paper summarizes the methods, discusses the selection of gust intensity for each method and presents numerical results. A strong similarity between the results from the two methods is seen to exist for both linear and nonlinear configurations.
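The matched-filter half of the method rests on a simple fact from linear systems theory: among all unit-energy excitations of a linear system, the time-reversed impulse response maximizes the peak response, and that maximum equals the impulse response's energy norm (Cauchy-Schwarz). The toy below (my own sketch with a generic damped-oscillator response, not the paper's aircraft gust model) verifies this bound.

```python
import math
import random

def convolve(x, h):
    """Discrete convolution y[n] = sum_k x[k] * h[n - k]."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

# Impulse response of a lightly damped oscillator, a stand-in for a
# linear gust-to-load transfer function.
h = [math.exp(-0.05 * k) * math.sin(0.4 * k) for k in range(80)]
h_norm = math.sqrt(sum(v * v for v in h))

# Matched excitation: time-reversed, unit-energy impulse response.
matched = [v / h_norm for v in reversed(h)]
peak_matched = max(convolve(matched, h))   # attains the bound h_norm

# No other unit-energy input can exceed that peak.
rng = random.Random(0)
best_random = 0.0
for _ in range(20):
    x = [rng.gauss(0.0, 1.0) for _ in range(80)]
    norm = math.sqrt(sum(v * v for v in x))
    x = [v / norm for v in x]
    best_random = max(best_random, max(convolve(x, h)))
```

For nonlinear control systems this optimality argument no longer holds exactly, which is why the paper pairs the matched-filter method with stochastic simulation.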
Conceptual strategies and inter-theory relations: The case of nanoscale cracks
NASA Astrophysics Data System (ADS)
Bursten, Julia R.
2018-05-01
This paper introduces a new account of inter-theory relations in physics, which I call the conceptual strategies account. Using the example of a multiscale computer simulation model of nanoscale crack propagation in silicon, I illustrate this account and contrast it with existing reductive, emergent, and handshaking approaches. The conceptual strategies account develops the notion that relations among physical theories, and among their models, are constrained but not dictated by limitations from physics, mathematics, and computation, and that conceptual reasoning within those limits is required both to generate and to understand the relations between theories. Conceptual strategies result in a variety of types of relations between theories and models. These relations are themselves epistemic objects, like theories and models, and as such are an under-recognized part of the epistemic landscape of science.
The Modeling of Human Intelligence in the Computer as Demonstrated in the Game of DIPLOMAT.
ERIC Educational Resources Information Center
Collins, James Edward; Paulsen, Thomas Dean
An attempt was made to develop human-like behavior in the computer. A theory of the human learning process was described. A computer game was presented which simulated the human capabilities of reasoning and learning. The program was required to make intelligent decisions based on past experiences and critical analysis of the present situation.…
Hard sphere perturbation theory for fluids with soft-repulsive-core potentials
NASA Astrophysics Data System (ADS)
Ben-Amotz, Dor; Stell, George
2004-03-01
The thermodynamic properties of fluids with very soft repulsive-core potentials, resembling those of some liquid metals, are predicted with unprecedented accuracy using a new first-order thermodynamic perturbation theory. This theory is an extension of Mansoori-Canfield/Rasaiah-Stell (MCRS) perturbation theory, obtained by including a configuration-integral correction recently identified by Mon, who evaluated it by computer simulation. In this work we derive an analytic expression for Mon's correction in terms of the radial distribution function of the soft-core fluid, g0(r), approximated using Lado's self-consistent extension of Weeks-Chandler-Andersen (WCA) theory. Comparisons with WCA and MCRS predictions show that our new extended-MCRS theory outperforms other first-order theories when applied to fluids with very soft inverse-power potentials (n ≤ 6), and predicts free energies that are within 0.3 kT of simulation results up to the fluid freezing point.
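The logic of first-order thermodynamic perturbation theory, approximating the free energy of the full system by the reference free energy plus the reference-ensemble average of the perturbation, can be verified exactly in one dimension. This toy is mine (not the MCRS or extended-MCRS theory itself): it perturbs a harmonic "reference" with a small quartic term.

```python
import math

def trapz(f, a, b, n):
    """Trapezoidal rule on n panels."""
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + h * i) for i in range(1, n)) + 0.5 * f(b))

beta, lam = 1.0, 0.01
u0 = lambda x: 0.5 * x * x      # reference potential (harmonic)
u1 = lambda x: x ** 4           # perturbing potential

z0 = trapz(lambda x: math.exp(-beta * u0(x)), -8.0, 8.0, 16000)
z = trapz(lambda x: math.exp(-beta * (u0(x) + lam * u1(x))), -8.0, 8.0, 16000)
f0, f_exact = -math.log(z0) / beta, -math.log(z) / beta

# First-order estimate: F ~= F0 + lam * <u1>_0, the average taken in the
# unperturbed (reference) ensemble; here <x^4>_0 = 3 for the unit Gaussian.
avg_u1 = trapz(lambda x: u1(x) * math.exp(-beta * u0(x)), -8.0, 8.0, 16000) / z0
f_first = f0 + lam * avg_u1
```

The first-order estimate overshoots by the (negative) second-order fluctuation term, but for a small perturbation it lands far closer to the exact free energy than the bare reference does.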
TSC all-employee meeting - January 19, 2011
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bishop, Alan
2011-01-25
Annual presentation on TSC accomplishments and state of the Directorate. This information is general knowledge and intended for all employees within the Theory, Simulation and Computation Directorate.
Cognitive Load Theory vs. Constructivist Approaches: Which Best Leads to Efficient, Deep Learning?
ERIC Educational Resources Information Center
Vogel-Walcutt, J. J.; Gebrim, J. B.; Bowers, C.; Carper, T. M.; Nicholson, D.
2011-01-01
Computer-assisted learning, in the form of simulation-based training, is heavily focused upon by the military. Because computer-based learning offers highly portable, reusable, and cost-efficient training options, the military has dedicated significant resources to the investigation of instructional strategies that improve learning efficiency…
2011-12-01
[Entry damaged in extraction. The surviving abstract fragment concerns accelerated and replica-exchange molecular dynamics (REMD) reproducing the energy landscape of explicit-solvent simulations of proteins; the fragment cites Mongan, J.; McCammon, J. A., "Accelerated molecular dynamics: a promising and efficient simulation method for biomolecules," J. Chem. Phys. 2004, 120(24), and Abraham, M. J.; Gready, J. E., "Ensuring mixing efficiency of replica-exchange molecular dynamics simulations," J. Chemical Theory and Computation.]
Design and Performance Frameworks for Constructing Problem-Solving Simulations
ERIC Educational Resources Information Center
Stevens, Rons; Palacio-Cayetano, Joycelin
2003-01-01
Rapid advancements in hardware, software, and connectivity are helping to shorten the times needed to develop computer simulations for science education. These advancements, however, have not been accompanied by corresponding theories of how best to design and use these technologies for teaching, learning, and testing. Such design frameworks…
Simulation of Fatigue Behavior of High Temperature Metal Matrix Composites
NASA Technical Reports Server (NTRS)
Tong, Mike T.; Singhal, Suren N.; Chamis, Christos C.; Murthy, Pappu L. N.
1996-01-01
A generalized, relatively new approach is described for the computational simulation of the fatigue behavior of high temperature metal matrix composites (HT-MMCs). The theory is embedded in a special-purpose computer code. The effectiveness of the computer code in predicting the fatigue behavior of HT-MMCs is demonstrated by applying it to a silicon-fiber/titanium-matrix HT-MMC. Comparative results are shown for mechanical fatigue, thermal fatigue, and thermomechanical (in-phase and out-of-phase) fatigue, as well as for the effects of oxidizing environments on fatigue life. These results show that the new approach reproduces available experimental data remarkably well.
Graph-based linear scaling electronic structure theory.
Niklasson, Anders M N; Mniszewski, Susan M; Negre, Christian F A; Cawkwell, Marc J; Swart, Pieter J; Mohd-Yusof, Jamal; Germann, Timothy C; Wall, Michael E; Bock, Nicolas; Rubensson, Emanuel H; Djidjev, Hristo
2016-06-21
We show how graph theory can be combined with quantum theory to calculate the electronic structure of large complex systems. The graph formalism is general and applicable to a broad range of electronic structure methods and materials, including challenging systems such as biomolecules. The methodology combines well-controlled accuracy, low computational cost, and natural low-communication parallelism. This combination addresses substantial shortcomings of linear scaling electronic structure theory, in particular with respect to quantum-based molecular dynamics simulations.
The Million-Body Problem: Particle Simulations in Astrophysics
Rasio, Fred
2018-05-21
Computer simulations using particles play a key role in astrophysics. They are widely used to study problems across the entire range of astrophysical scales, from the dynamics of stars, gaseous nebulae, and galaxies, to the formation of the largest-scale structures in the universe. The 'particles' can be anything from elementary particles to macroscopic fluid elements, entire stars, or even entire galaxies. Using particle simulations as a common thread, this talk will present an overview of computational astrophysics research currently done in our theory group at Northwestern. Topics will include stellar collisions and the gravothermal catastrophe in dense star clusters.
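The basic machinery behind such particle simulations can be shown in a few lines: direct O(N^2) summation of pairwise gravitational accelerations plus a symplectic leapfrog step. This is a generic sketch, not code from the group described; symplectic integration is what keeps the energy error bounded over many orbits.

```python
import math

def accelerations(pos, masses, G=1.0, soft=0.0):
    """Direct O(N^2) summation of pairwise gravitational accelerations (2-D)."""
    n = len(pos)
    acc = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r2 = dx * dx + dy * dy + soft * soft
            inv_r3 = 1.0 / (r2 * math.sqrt(r2))
            acc[i][0] += G * masses[j] * dx * inv_r3
            acc[i][1] += G * masses[j] * dy * inv_r3
    return acc

def leapfrog(pos, vel, masses, dt, nsteps):
    """Kick-drift-kick leapfrog: symplectic and second-order accurate."""
    acc = accelerations(pos, masses)
    for _ in range(nsteps):
        for i in range(len(pos)):
            vel[i][0] += 0.5 * dt * acc[i][0]
            vel[i][1] += 0.5 * dt * acc[i][1]
            pos[i][0] += dt * vel[i][0]
            pos[i][1] += dt * vel[i][1]
        acc = accelerations(pos, masses)
        for i in range(len(pos)):
            vel[i][0] += 0.5 * dt * acc[i][0]
            vel[i][1] += 0.5 * dt * acc[i][1]
    return pos, vel

def energy(pos, vel, masses, G=1.0):
    """Total energy: kinetic plus pairwise gravitational potential."""
    ke = sum(0.5 * m * (v[0] ** 2 + v[1] ** 2) for m, v in zip(masses, vel))
    pe = 0.0
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            r = math.hypot(pos[j][0] - pos[i][0], pos[j][1] - pos[i][1])
            pe -= G * masses[i] * masses[j] / r
    return ke + pe

# Equal-mass binary on a circular orbit of unit separation (G = 1).
masses = [0.5, 0.5]
pos = [[-0.5, 0.0], [0.5, 0.0]]
vel = [[0.0, -0.5], [0.0, 0.5]]
e0 = energy(pos, vel, masses)
pos, vel = leapfrog(pos, vel, masses, dt=0.01, nsteps=2000)
e1 = energy(pos, vel, masses)
```

Real million-body runs replace the O(N^2) sum with tree or fast-multipole methods, but the structure of the time integration is the same.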
Structural Composites Corrosive Management by Computational Simulation
NASA Technical Reports Server (NTRS)
Chamis, Christos C.; Minnetyan, Levon
2006-01-01
A simulation of corrosive management of polymer-composite durability is presented. The corrosive environment is assumed to manage polymer-composite degradation on a ply-by-ply basis. The degradation is correlated with a measured pH factor and is represented by voids, temperature, and moisture, which vary parabolically (voids) and linearly (temperature and moisture) through the laminate thickness. The simulation is performed by a computational composite mechanics computer code which includes micromechanics, macromechanics, combined-stress failure, and laminate theories. This allows the simulation to start from constitutive material properties and build up to the laminate scale, at which the laminate is exposed to the corrosive environment. Results obtained for one laminate indicate that the ply-by-ply managed degradation degrades the laminate down to the last ply or the last several plies. Results also demonstrate that the simulation is applicable to other polymer composite systems as well.
Computational Intelligence for Medical Imaging Simulations.
Chang, Victor
2017-11-25
This paper describes how to simulate medical imaging with computational intelligence in order to explore areas that cannot easily be reached in traditional ways, including gene and protein simulations related to cancer development and immunity. The paper presents simulations and virtual inspections of BIRC3, BIRC6, CCL4, KLKB1 and CYP2A6, with their outputs and explanations, as well as brain-segment intensity due to dancing. Our proposed MapReduce framework with the fusion algorithm can simulate medical imaging. The concept is similar to digital surface theories: it simulates how biological units assemble into larger units, up to the formation of the entire biological subject. The M-Fusion and M-Update functions of the fusion algorithm achieve good performance, processing and visualizing up to 40 GB of data within 600 s. We conclude that computational intelligence can provide effective and efficient healthcare research through simulation and visualization.
Phase diagram of two-dimensional hard rods from fundamental mixed measure density functional theory
NASA Astrophysics Data System (ADS)
Wittmann, René; Sitta, Christoph E.; Smallenburg, Frank; Löwen, Hartmut
2017-10-01
A density functional theory for the bulk phase diagram of two-dimensional orientable hard rods is proposed and tested against Monte Carlo computer simulation data. In detail, an explicit density functional is derived from fundamental mixed measure theory and freely minimized numerically for hard discorectangles. The phase diagram, which involves stable isotropic, nematic, smectic, and crystalline phases, is obtained and shows good agreement with the simulation data. Our functional is valid for a multicomponent mixture of hard particles with arbitrary convex shapes and provides a reliable starting point to explore various inhomogeneous situations of two-dimensional hard rods and their Brownian dynamics.
A Schema Theory Account of Some Cognitive Processes in Complex Learning. Technical Report No. 81.
ERIC Educational Resources Information Center
Munro, Allen; Rigney, Joseph W.
Procedural semantics models have diminished the distinction between data structures and procedures in computer simulations of human intelligence. This development has theoretical consequences for models of cognition. One type of procedural semantics model, called schema theory, is presented, and a variety of cognitive processes are explained in…
Recursive renormalization group theory based subgrid modeling
NASA Technical Reports Server (NTRS)
Zhou, YE
1991-01-01
The goal of this work is to advance the knowledge and understanding of turbulence theory. Specific problems include studies of subgrid models to understand the effects of unresolved small-scale dynamics on the large-scale motion which, if successful, might substantially reduce the number of degrees of freedom that must be computed in turbulence simulation.
Curtis, Evan T; Jamieson, Randall K
2018-04-01
Current theory has divided memory into multiple systems, resulting in a fractionated account of human behaviour. By an alternative perspective, memory is a single system. However, debate over the details of different single-system theories has overshadowed the converging agreement among them, slowing the reunification of memory. Evidence in favour of dividing memory often takes the form of dissociations observed in amnesia, where amnesic patients are impaired on some memory tasks but not others. The dissociations are taken as evidence for separate explicit and implicit memory systems. We argue against this perspective. We simulate two key dissociations between classification and recognition in a computational model of memory, A Theory of Nonanalytic Association. We assume that amnesia reflects a quantitative difference in the quality of encoding. We also present empirical evidence that replicates the dissociations in healthy participants, simulating amnesic behaviour by reducing study time. In both analyses, we successfully reproduce the dissociations. We integrate our computational and empirical successes with the success of alternative models and manipulations and argue that our demonstrations, taken in concert with similar demonstrations with similar models, provide converging evidence for a more general set of single-system analyses that support the conclusion that a wide variety of memory phenomena can be explained by a unified and coherent set of principles.
SELECTED ANNOTATED BIBLIOGRAPHY ON SYSTEMS OF THEORETICAL DEVICES,
NASA Astrophysics Data System (ADS)
Tripathi, Anurag; Khakhar, D. V.
2010-04-01
We study smooth, slightly inelastic particles flowing under gravity on a bumpy inclined plane using event-driven and discrete-element simulations. Shallow layers (ten particle diameters) are used to enable simulation with the event-driven method within reasonable computational times. Steady flows are obtained in a narrow range of angles (13°-14.5°); lower angles result in stopping of the flow and higher angles in continuous acceleration. The flow is relatively dense, with solid volume fraction ν ≈ 0.5, and significant layering of particles is observed. We derive expressions for the stress, heat flux, and dissipation for the hard and soft particle models from first principles. The computed mean velocity, temperature, stress, dissipation, and heat flux profiles of hard particles are compared to soft particle results for different values of the stiffness constant (k). The value of the stiffness constant for which results for hard and soft particles are identical is found to be k ≥ 2×10^6 mg/d, where m is the mass of a particle, g is the acceleration due to gravity, and d is the particle diameter. We compare the simulation results to constitutive relations obtained from the kinetic theory of Jenkins and Richman [J. T. Jenkins and M. W. Richman, Arch. Ration. Mech. Anal. 87, 355 (1985)] for pressure, dissipation, viscosity, and thermal conductivity. We find that all the quantities are very well predicted by kinetic theory for volume fractions ν < 0.5. At higher densities, obtained for thicker layers (H = 15d and H = 20d), the kinetic theory does not give accurate predictions. Deviations of the kinetic theory predictions from simulation results are relatively small for dissipation and heat flux; the most significant deviations are observed for shear viscosity and pressure. The results indicate the range of applicability of soft particle simulations and kinetic theory for dense flows.
NASA Technical Reports Server (NTRS)
Otto, John C.; Paraschivoiu, Marius; Yesilyurt, Serhat; Patera, Anthony T.
1995-01-01
Engineering design and optimization efforts using computational systems rapidly become resource intensive. The goal of the surrogate-based approach is to perform a complete optimization with limited resources. In this paper we present a Bayesian-validated approach that informs the designer as to how well the surrogate performs; in particular, our surrogate framework provides precise (albeit probabilistic) bounds on the errors incurred in the surrogate-for-simulation substitution. The theory and algorithms of our computer-simulation surrogate framework are first described. The utility of the framework is then demonstrated through two illustrative examples: maximization of the flowrate of fully developed flow in trapezoidal ducts, and design of an axisymmetric body that achieves a target Stokes drag.
Lattice dynamics calculations based on density-functional perturbation theory in real space
NASA Astrophysics Data System (ADS)
Shang, Honghui; Carbogno, Christian; Rinke, Patrick; Scheffler, Matthias
2017-06-01
A real-space formalism for density-functional perturbation theory (DFPT) is derived and applied for the computation of harmonic vibrational properties in molecules and solids. The practical implementation using numeric atom-centered orbitals as basis functions is demonstrated for the all-electron Fritz Haber Institute ab initio molecular simulations (FHI-aims) package. The convergence of the calculations with respect to numerical parameters is carefully investigated, and a systematic comparison with finite-difference approaches is performed both for finite (molecular) and extended (periodic) systems. Finally, scalability tests on massively parallel computer systems demonstrate the computational efficiency of the implementation.
Xiao, Bo; Imel, Zac E; Georgiou, Panayiotis; Atkins, David C; Narayanan, Shrikanth S
2016-05-01
Empathy is an important psychological process that facilitates human communication and interaction. Enhancement of empathy has profound significance in a range of applications. In this paper, we review emerging directions of research on computational analysis of empathy expression and perception as well as empathic interactions, including their simulation. We summarize the work on empathic expression analysis by the targeted signal modalities (e.g., text, audio, and facial expressions). We categorize empathy simulation studies into theory-based emotion space modeling or application-driven user and context modeling. We summarize challenges in computational study of empathy including conceptual framing and understanding of empathy, data availability, appropriate use and validation of machine learning techniques, and behavior signal processing. Finally, we propose a unified view of empathy computation and offer a series of open problems for future research.
NASA Astrophysics Data System (ADS)
Cheng, Shengfeng; Wen, Chengyuan; Egorov, Sergei
2015-03-01
Molecular dynamics simulations and self-consistent field theory calculations are employed to study the interactions between a nanoparticle and a polymer brush at various densities of chains grafted to a plane. Simulations with both implicit and explicit solvent are performed. In either case the nanoparticle is loaded to the brush at a constant velocity. Then a series of simulations are performed to compute the force exerted on the nanoparticle that is fixed at various distances from the grafting plane. The potential of mean force is calculated and compared to the prediction based on a self-consistent field theory. Our simulations show that the explicit solvent leads to effects that are not captured in simulations with implicit solvent, indicating the importance of including explicit solvent in molecular simulations of such systems. Our results also demonstrate an interesting correlation between the force on the nanoparticle and the density profile of the brush. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Tesla K40 GPU used for this research.
MoCog1: A computer simulation of recognition-primed human decision making, considering emotions
NASA Technical Reports Server (NTRS)
Gevarter, William B.
1992-01-01
The successful results of the first stage of a research effort to develop a versatile computer model of motivated human cognitive behavior are reported. Most human decision making appears to be an experience-based, relatively straightforward, largely automatic response to situations, utilizing cues and opportunities perceived from the current environment. The development of the architecture and computer program associated with such 'recognition-primed' decision making, extended to consider emotions, is described. The resultant computer program (MoCog1) was successfully utilized as a vehicle to simulate earlier findings on how an individual's implicit theories orient the individual toward particular goals, with resultant cognitions, affects, and behavior in response to the environment.
MoCog1: A computer simulation of recognition-primed human decision making
NASA Technical Reports Server (NTRS)
Gevarter, William B.
1991-01-01
The results of the first stage of a research effort to develop a 'sophisticated' computer model of human cognitive behavior are described. Most human decision making is an experience-based, relatively straightforward, largely automatic response to internal goals and drives, utilizing cues and opportunities perceived from the current environment. The development of the architecture and computer program (MoCog1) associated with such 'recognition-primed' decision making is discussed. The resultant computer program was successfully utilized as a vehicle to simulate earlier findings on how an individual's implicit theories orient the individual toward particular goals, with resultant cognitions, affects, and behavior in response to the environment.
Tempel, David G; Aspuru-Guzik, Alán
2012-01-01
We prove that the theorems of TDDFT can be extended to a class of qubit Hamiltonians that are universal for quantum computation. The theorems of TDDFT applied to universal Hamiltonians imply that single-qubit expectation values can be used as the basic variables in quantum computation and information theory, rather than wavefunctions. From a practical standpoint this opens the possibility of approximating observables of interest in quantum computations directly in terms of single-qubit quantities (i.e. as density functionals). Additionally, we also demonstrate that TDDFT provides an exact prescription for simulating universal Hamiltonians with other universal Hamiltonians that have different, and possibly easier-to-realize two-qubit interactions. This establishes the foundations of TDDFT for quantum computation and opens the possibility of developing density functionals for use in quantum algorithms.
2D Quantum Simulation of MOSFET Using the Non Equilibrium Green's Function Method
NASA Technical Reports Server (NTRS)
Svizhenko, Alexel; Anantram, M. P.; Govindan, T. R.; Yan, Jerry (Technical Monitor)
2000-01-01
The objectives summarized in this viewgraph presentation include: (1) development of a quantum mechanical simulator for ultra-short-channel MOSFET simulation, including theory, physical approximations, and computer code; (2) exploration of physics that is not accessible by semiclassical methods; (3) benchmarking of semiclassical and classical methods; and (4) study of other two-dimensional devices and molecular structures, from discretized Hamiltonians to tight-binding Hamiltonians.
Electromagnetic Showers at High Energy
ERIC Educational Resources Information Center
Loos, J. S.; Dawson, S. L.
1978-01-01
Some of the properties of electromagnetic showers observed in an experimental study are illustrated. Experimental data and results from quantum electrodynamics are discussed. Data and theory are compared using computer simulation. (BB)
Measuring uncertainty by extracting fuzzy rules using rough sets
NASA Technical Reports Server (NTRS)
Worm, Jeffrey A.
1991-01-01
Despite the advancements in the computer industry over the past 30 years, one major deficiency remains: computers are not designed to handle terms in which uncertainty is present. To deal with uncertainty, techniques other than classical logic must be developed. The methods of statistical analysis, the Dempster-Shafer theory, rough set theory, and fuzzy set theory are examined as candidate solutions to this problem. The fundamentals of these theories are combined to approach an optimal solution. By incorporating principles from these theories, a decision-making process may be simulated by extracting two sets of fuzzy rules: certain rules and possible rules. From these rules, a corresponding measure of how strongly each rule is believed is constructed. From this, the degree to which a fuzzy diagnosis is definable in terms of a set of fuzzy attributes is studied.
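The distinction between certain and possible rules described above comes from the rough-set lower and upper approximations of a decision class. The following is a minimal illustrative sketch (a made-up fault-diagnosis table, not the report's actual system): objects indiscernible on the condition attributes form blocks, blocks wholly inside the target class yield certain rules, and blocks merely overlapping it yield possible rules.

```python
# Toy rough-set illustration: lower approximation -> certain rules,
# upper approximation -> possible rules. The table is hypothetical.

def partition(objects, attrs):
    """Group objects into indiscernibility classes over the given attributes."""
    blocks = {}
    for name, row in objects.items():
        key = tuple(row[a] for a in attrs)
        blocks.setdefault(key, set()).add(name)
    return list(blocks.values())

def approximations(objects, attrs, target):
    """Return (lower, upper) approximations of the target set of objects."""
    lower, upper = set(), set()
    for block in partition(objects, attrs):
        if block <= target:      # block wholly inside target: certain membership
            lower |= block
        if block & target:       # block overlaps target: possible membership
            upper |= block
    return lower, upper

# Hypothetical diagnosis table: condition attributes per object.
table = {
    "x1": {"temp": "high", "noise": "yes"},
    "x2": {"temp": "high", "noise": "yes"},
    "x3": {"temp": "high", "noise": "no"},
    "x4": {"temp": "high", "noise": "no"},
}
faulty = {"x1", "x2", "x3"}      # objects actually diagnosed as faulty

low, up = approximations(table, ["temp", "noise"], faulty)
print(sorted(low), sorted(up))   # certain members vs. possible members
```

Objects x3 and x4 are indiscernible yet only x3 is faulty, so they fall in the boundary region: the rule "temp=high, noise=no" is only a possible rule, while "temp=high, noise=yes" is certain.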
ERIC Educational Resources Information Center
Morrison, Robert G.; Doumas, Leonidas A. A.; Richland, Lindsey E.
2011-01-01
Theories accounting for the development of analogical reasoning tend to emphasize either the centrality of relational knowledge accretion or changes in information processing capability. Simulations in LISA (Hummel & Holyoak, 1997, 2003), a neurally inspired computer model of analogical reasoning, allow us to explore how these factors may…
A Theory for the Neural Basis of Language Part 2: Simulation Studies of the Model
ERIC Educational Resources Information Center
Baron, R. J.
1974-01-01
Computer simulation studies of the proposed model are presented. Processes demonstrated are (1) verbally directed recall of visual experience; (2) understanding of verbal information; (3) aspects of learning and forgetting; (4) the dependence of recognition and understanding on context; and (5) elementary concepts of sentence production. (Author)
NASA Technical Reports Server (NTRS)
Baker, A. J.; Iannelli, G. S.; Manhardt, Paul D.; Orzechowski, J. A.
1993-01-01
This report documents the user input and output data requirements for the FEMNAS finite element Navier-Stokes code for real-gas simulations of external aerodynamics flowfields. The code was developed for the Configuration Aerodynamics Branch of NASA ARC under SBIR Phase 2 contract NAS2-124568 by Computational Mechanics Corporation (COMCO). The report is in two volumes: Volume 1 contains the theory for the derived finite element algorithm and describes the test cases used to validate the computer program described in the Volume 2 user guide.
Recent developments in the theory of protein folding: searching for the global energy minimum.
Scheraga, H A
1996-04-16
Statistical mechanical theories and computer simulation are being used to gain an understanding of the fundamental features of protein folding. A major obstacle in the computation of protein structures is the multiple-minima problem arising from the existence of many local minima in the multidimensional energy landscape of the protein. This problem has been surmounted for small open-chain and cyclic peptides, and for regular-repeating sequences of models of fibrous proteins. Progress is being made in resolving this problem for globular proteins.
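The multiple-minima problem described in this abstract can be illustrated generically by a stochastic global-search method. The sketch below uses simulated annealing on a made-up rugged one-dimensional "energy landscape" (not a real protein potential, and not Scheraga's actual algorithms): uphill moves are accepted with probability exp(-ΔE/T), which lets the search escape local minima that would trap a pure descent.

```python
import math, random

# Illustrative only: simulated annealing on a rugged 1-D energy function.
# The function and all parameters are made up for demonstration.

def energy(x):
    # Quadratic well centered at x = 2 plus oscillations creating local minima;
    # the global minimum sits near x = 2.2.
    return 0.5 * (x - 2.0) ** 2 + 2.0 * math.sin(5.0 * x)

def anneal(x, T=5.0, cooling=0.999, steps=20000, seed=1):
    rng = random.Random(seed)
    e = energy(x)
    best_x, best_e = x, e
    for _ in range(steps):
        x_new = x + rng.gauss(0.0, 0.5)          # random trial move
        e_new = energy(x_new)
        # Accept downhill always; accept uphill with Boltzmann probability.
        if e_new < e or rng.random() < math.exp(-(e_new - e) / T):
            x, e = x_new, e_new
            if e < best_e:
                best_x, best_e = x, e
        T *= cooling                             # gradually cool the system
    return best_x, best_e

# Start far from the global minimum, in the basin of a local one.
x_min, e_min = anneal(x=-4.0)
print(round(x_min, 2), round(e_min, 2))
```

A deterministic minimizer started at x = -4 would settle into the nearest local well; the annealing run locates the global basin near x ≈ 2.2.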
A stochastic Markov chain approach for tennis: Monte Carlo simulation and modeling
NASA Astrophysics Data System (ADS)
Aslam, Kamran
This dissertation describes the computational formulation of probability density functions (pdfs) that facilitate head-to-head match simulations in tennis, along with ranking systems developed from their use. A background on the statistical method used to develop the pdfs, the Monte Carlo method, and the resulting rankings is included, along with a discussion of ranking methods currently used both in professional sports and in other applications. Using an analytical theory developed by Newton and Keller in [34] that defines a tennis player's probability of winning a game, set, match, and single-elimination tournament, a computational simulation has been developed in Matlab that allows further modeling not previously possible with the analytical theory alone. Such experimentation consists of the exploration of non-iid effects, considers the varying importance of points in a match, and allows an unlimited number of matches to be simulated between unlikely opponents. The results of these studies have provided pdfs that accurately model an individual tennis player's ability, along with a realistic, fair, and mathematically sound platform for ranking players.
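The building block of such analyses is the Newton-Keller closed form for the probability of winning a game given an iid point-win probability p. The sketch below (illustrative Python, not the dissertation's Matlab code) states that formula and checks it against a direct Monte Carlo simulation of games, the same cross-validation idea in miniature.

```python
import random

# Newton-Keller closed form vs. Monte Carlo estimate of P(win game)
# when each point is won independently with probability p.

def p_game(p):
    """Closed-form P(win game): win 4-0/4-1/4-2, or reach deuce and win."""
    q = 1.0 - p
    p_deuce = p * p / (1.0 - 2.0 * p * q)         # win from deuce (40-40)
    return p**4 * (1 + 4*q + 10*q*q) + 20 * p**3 * q**3 * p_deuce

def simulate_game(p, rng):
    """Play one game point by point; return True if the p-player wins."""
    a = b = 0
    while True:
        if rng.random() < p:
            a += 1
        else:
            b += 1
        if a >= 4 and a - b >= 2:
            return True
        if b >= 4 and b - a >= 2:
            return False

rng = random.Random(0)
p = 0.55
n = 200_000
mc = sum(simulate_game(p, rng) for _ in range(n)) / n
print(p_game(p), mc)   # the two estimates agree to within a few thousandths
```

Note how a modest edge per point (p = 0.55) amplifies to roughly a 0.62 probability of winning the game; chaining the same construction through sets and matches amplifies it further.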
Modeling plastic deformation of post-irradiated copper micro-pillars
NASA Astrophysics Data System (ADS)
Crosby, Tamer; Po, Giacomo; Ghoniem, Nasr M.
2014-12-01
We present here an application of a fundamentally new theoretical framework for description of the simultaneous evolution of radiation damage and plasticity that can describe both in situ and ex situ deformation of structural materials [1]. The theory is based on the variational principle of maximum entropy production rate; with constraints on dislocation climb motion that are imposed by point defect fluxes as a result of irradiation. The developed theory is implemented in a new computational code that facilitates the simulation of irradiated and unirradiated materials alike in a consistent fashion [2]. Discrete Dislocation Dynamics (DDD) computer simulations are presented here for irradiated fcc metals that address the phenomenon of dislocation channel formation in post-irradiated copper. The focus of the simulations is on the role of micro-pillar boundaries and the statistics of dislocation pinning by stacking-fault tetrahedra (SFTs) on the onset of dislocation channel and incipient surface crack formation. The simulations show that the spatial heterogeneity in the distribution of SFTs naturally leads to localized plastic deformation and incipient surface fracture of micro-pillars.
Gamut relativity: a new computational approach to brightness and lightness perception.
Vladusich, Tony
2013-01-09
This article deconstructs the conventional theory that "brightness" and "lightness" constitute perceptual dimensions corresponding to the physical dimensions of luminance and reflectance, and builds in its place the theory that brightness and lightness correspond to computationally defined "modes," rather than dimensions, of perception. According to the theory, called gamut relativity, "blackness" and "whiteness" constitute the perceptual dimensions (forming a two-dimensional "blackness-whiteness" space) underlying achromatic color perception (black, white, and gray shades). These perceptual dimensions are postulated to be related to the neural activity levels in the ON and OFF channels of vision. The theory unifies and generalizes a number of extant concepts in the brightness and lightness literature, such as simultaneous contrast, anchoring, and scission, and quantitatively simulates several challenging perceptual phenomena, including the staircase Gelb effect and the effects of task instructions on achromatic color-matching behavior, all with a single free parameter. The theory also provides a new conception of achromatic color constancy in terms of the relative distances between points in blackness-whiteness space. The theory suggests a host of striking conclusions, the most important of which is that the perceptual dimensions of vision should be generically specified according to the computational properties of the brain, rather than in terms of "reified" physical dimensions. This new approach replaces the computational goal of estimating absolute physical quantities ("inverse optics") with the goal of computing object properties relatively.
NASA Technical Reports Server (NTRS)
Joslin, Ronald D.; Streett, Craig L.; Chang, Chau-Lyan
1992-01-01
Spatially evolving instabilities in a boundary layer on a flat plate are computed by direct numerical simulation (DNS) of the incompressible Navier-Stokes equations. In a truncated physical domain, a nonstaggered mesh is used for the grid. A Chebyshev-collocation method is used normal to the wall; finite difference and compact difference methods are used in the streamwise direction; and a Fourier series is used in the spanwise direction. For time stepping, implicit Crank-Nicolson and explicit Runge-Kutta schemes are used with a time-splitting method. The influence-matrix technique is used to solve the pressure equation. At the outflow boundary, the buffer-domain technique is used to prevent convective wave reflection or upstream propagation of information from the boundary. Results of the DNS are compared with those from both linear stability theory (LST) and parabolized stability equation (PSE) theory. Computed disturbance amplitudes and phases are in very good agreement with those of LST (for small inflow disturbance amplitudes). A measure of the sensitivity of the inflow condition is demonstrated with both LST and PSE theory used to approximate inflows. Although the DNS numerics are very different from those of PSE theory, the results are in good agreement. A small discrepancy that does occur is likely a result of the variation in PSE boundary-condition treatment in the far field. Finally, a small-amplitude wave triad is forced at the inflow, and simulation results are compared with those of LST. Again, very good agreement is found between DNS and LST results for the 3-D simulations, the implication being that the disturbance amplitudes are sufficiently small that nonlinear interactions are negligible.
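One ingredient named in this abstract, Chebyshev collocation for the wall-normal direction, can be shown in isolation. The sketch below builds the standard Chebyshev differentiation matrix on Gauss-Lobatto points (the textbook construction popularized by Trefethen, not the DNS code itself) and verifies its spectral accuracy on a known function.

```python
import numpy as np

# Standard Chebyshev collocation differentiation matrix on [-1, 1].

def cheb(N):
    """Return the (N+1)x(N+1) differentiation matrix D and Gauss-Lobatto points x."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)            # Gauss-Lobatto points
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))     # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                         # diagonal: rows sum to zero
    return D, x

D, x = cheb(16)
# Differentiate sin(x); the error decays spectrally (far faster than any
# finite-difference scheme of fixed order).
err = np.max(np.abs(D @ np.sin(x) - np.cos(x)))
print(err)
```

With only 17 points the derivative of sin is reproduced to near machine precision, which is why spectral collocation is attractive for resolving boundary-layer profiles normal to the wall.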
Cognitive Development in Children: Five Monographs of the Society for Research in Child Development.
ERIC Educational Resources Information Center
Society for Research in Child Development.
Five conference reports that originally appeared as monographs of the Society for Research in Child Development concern cognition in young children. Included in a section on thought are articles on Piaget and his theories, computer simulation on human thinking, and an information processing theory of intellectual development. The development of…
An Explanatory Item Response Theory Approach for a Computer-Based Case Simulation Test
ERIC Educational Resources Information Center
Kahraman, Nilüfer
2014-01-01
Problem: Practitioners working with multiple-choice tests have long utilized Item Response Theory (IRT) models to evaluate the performance of test items for quality assurance. The use of similar applications for performance tests, however, is often encumbered due to the challenges encountered in working with complicated data sets in which local…
Multi-scale simulations of space problems with iPIC3D
NASA Astrophysics Data System (ADS)
Lapenta, Giovanni; Bettarini, Lapo; Markidis, Stefano
The implicit Particle-in-Cell method for the computer simulation of space plasma, and its implementation in a three-dimensional parallel code called iPIC3D, are presented. The implicit integration in time of the Vlasov-Maxwell system removes the numerical stability constraints and enables kinetic plasma simulations at magnetohydrodynamic scales. Simulations of magnetic reconnection in plasma are presented to show the effectiveness of the algorithm. In particular, we show a number of simulations of large-scale 3D systems using the physical mass ratio for hydrogen. Most notably, one simulation treats kinetically a box of tens of Earth radii in each direction and was conducted using about 16000 processors of the Pleiades NASA computer. The work is conducted in collaboration with the MMS-IDS theory team from the University of Colorado (M. Goldman, D. Newman, and L. Andersson). Reference: S. Markidis, G. Lapenta, and Rizwan-uddin, "Multi-scale simulations of plasma with iPIC3D," Mathematics and Computers in Simulation, available online 17 October 2009, http://dx.doi.org/10.1016/j.matcom.2009.08.038
NASA Astrophysics Data System (ADS)
Buongiorno Nardelli, Marco
High-Throughput Quantum-Mechanics computation of materials properties by ab initio methods has become the foundation of an effective approach to materials design, discovery and characterization. This data driven approach to materials science currently presents the most promising path to the development of advanced technological materials that could solve or mitigate important social and economic challenges of the 21st century. In particular, the rapid proliferation of computational data on materials properties presents the possibility to complement and extend materials property databases where the experimental data is lacking and difficult to obtain. Enhanced repositories such as AFLOWLIB open novel opportunities for structure discovery and optimization, including uncovering of unsuspected compounds, metastable structures and correlations between various properties. The practical realization of these opportunities depends almost exclusively on the design of efficient algorithms for electronic structure simulations of realistic material systems beyond the limitations of the current standard theories. In this talk, I will review recent progress in theoretical and computational tools, and in particular, discuss the development and validation of novel functionals within Density Functional Theory and of local basis representations for effective ab-initio tight-binding schemes. Marco Buongiorno Nardelli is a pioneer in the development of computational platforms for theory/data/applications integration rooted in his profound and extensive expertise in the design of electronic structure codes and in his vision for sustainable and innovative software development for high-performance materials simulations.
His research activities range from the design and discovery of novel materials for 21st century applications in renewable energy, environment, nano-electronics and devices, and the development of advanced electronic structure theories and high-throughput techniques in materials genomics and computational materials design, to an active role as a community scientific software developer (QUANTUM ESPRESSO, WanT, AFLOWpi).
The cyclotron maser theory of AKR and Z-mode radiation. [Auroral Kilometric Radiation
NASA Technical Reports Server (NTRS)
Wu, C. S.
1985-01-01
The cyclotron maser mechanism which may be responsible for the generation of auroral kilometric radiation and Z-mode radiation is discussed. Emphasis is placed on the basic concepts of the cyclotron maser theory, particularly the relativistic effect of the cyclotron resonance condition. Recent development of the theory is reviewed. Finally, the results of a computer simulation study which helps to understand the nonlinear saturation of the maser instability are reported.
Scientific Visualization and Computational Science: Natural Partners
NASA Technical Reports Server (NTRS)
Uselton, Samuel P.; Lasinski, T. A. (Technical Monitor)
1995-01-01
Scientific visualization is developing rapidly, stimulated by computational science, which is gaining acceptance as a third mode of inquiry complementing theory and experiment. Computational science is based on numerical simulations of mathematical models derived from theory. But each individual simulation is like a hypothetical experiment: initial conditions are specified, and the result is a record of the observed conditions. Experiments can be simulated for situations that cannot really be created or controlled. Results impossible to measure can be computed. Even for observable values, computed samples are typically much denser. Numerical simulations also extend scientific exploration where the mathematics is analytically intractable. Numerical simulations are used to study phenomena from subatomic to intergalactic scales and from abstract mathematical structures to pragmatic engineering of everyday objects. But computational science methods would be almost useless without visualization. The obvious reason is that the huge amounts of data produced require the high bandwidth of the human visual system, and interactivity adds to the power. Visualization systems also provide a single context for all the activities involved, from debugging the simulations, to exploring the data, to communicating the results. Most of the presentations today have their roots in image processing, where the fundamental task is: given an image, extract information about the scene. Visualization has developed from computer graphics, and the inverse task: given a scene description, make an image. Visualization extends the graphics paradigm by expanding the possible input. The goal is still to produce images; the difficulty is that the input is not a scene description displayable by standard graphics methods. Visualization techniques must either transform the data into a scene description or extend graphics techniques to display this odd input.
Computational science is a fertile field for visualization research because the results vary so widely and include things that have no known appearance. The amount of data creates additional challenges for both hardware and software systems. Evaluations of visualization should ultimately reflect the insight gained into the scientific phenomena. So making good visualizations requires consideration of characteristics of the user and the purpose of the visualization. Knowledge about human perception and graphic design is also relevant. It is this breadth of knowledge that stimulates proposals for multidisciplinary visualization teams and intelligent visualization assistant software. Visualization is an immature field, but computational science is stimulating research on a broad front.
Three-dimensional computer model for the atmospheric general circulation experiment
NASA Technical Reports Server (NTRS)
Roberts, G. O.
1984-01-01
An efficient, flexible, three-dimensional, hydrodynamic, computer code has been developed for a spherical cap geometry. The code will be used to simulate NASA's Atmospheric General Circulation Experiment (AGCE). The AGCE is a spherical, baroclinic experiment which will model the large-scale dynamics of our atmosphere; it has been proposed to NASA for future Spacelab flights. In the AGCE a radial dielectric body force will simulate gravity, with hot fluid tending to move outwards. In order that this force be dominant, the AGCE must be operated in a low gravity environment such as Spacelab. The full potential of the AGCE will only be realized by working in conjunction with an accurate computer model. Proposed experimental parameter settings will be checked first using model runs. Then actual experimental results will be compared with the model predictions. This interaction between experiment and theory will be very valuable in determining the nature of the AGCE flows and hence their relationship to analytical theories and actual atmospheric dynamics.
Structure and osmotic pressure of ionic microgel dispersions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hedrick, Mary M.; Department of Chemistry and Biochemistry, North Dakota State University, Fargo, North Dakota 58108-6050; Chung, Jun Kyung
We investigate structural and thermodynamic properties of aqueous dispersions of ionic microgels—soft colloidal gel particles that exhibit unusual phase behavior. Starting from a coarse-grained model of microgel macroions as charged spheres that are permeable to microions, we perform simulations and theoretical calculations using two complementary implementations of Poisson-Boltzmann (PB) theory. Within a one-component model, based on a linear-screening approximation for effective electrostatic pair interactions, we perform molecular dynamics simulations to compute macroion-macroion radial distribution functions, static structure factors, and macroion contributions to the osmotic pressure. For the same model, using a variational approximation for the free energy, we compute both macroion and microion contributions to the osmotic pressure. Within a spherical cell model, which neglects macroion correlations, we solve the nonlinear PB equation to compute microion distributions and osmotic pressures. By comparing the one-component and cell model implementations of PB theory, we demonstrate that the linear-screening approximation is valid for moderately charged microgels. By further comparing cell model predictions with simulation data for osmotic pressure, we chart the cell model’s limits in predicting osmotic pressures of salty dispersions.
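The "one-component model based on a linear-screening approximation" mentioned above is commonly implemented with a screened-Coulomb (Yukawa/DLVO-type) effective pair potential. As a hedged sketch only: the form below is the standard one for impermeable charged spheres, not the microgel-specific permeable-particle potential of this paper, and all parameter values are illustrative reduced units.

```python
import math

def debye_kappa(lambda_b, ion_densities):
    """Inverse Debye screening length: kappa^2 = 4*pi*lambda_B*sum(rho_i*z_i^2)."""
    return math.sqrt(4.0 * math.pi * lambda_b *
                     sum(rho * z * z for rho, z in ion_densities))

def yukawa_pair(r, Z, a, lambda_b, kappa):
    """Linear-screening (DLVO-type) effective pair energy, in kT units,
    between two charged spheres of valence Z and radius a."""
    pref = Z * Z * lambda_b * (math.exp(kappa * a) / (1.0 + kappa * a)) ** 2
    return pref * math.exp(-kappa * r) / r

# illustrative reduced units (particle diameter = 1)
kappa = debye_kappa(0.01, [(0.05, 1), (0.05, -1)])  # 1:1 salt, two species
u2 = yukawa_pair(2.0, 100, 0.5, 0.01, kappa)
u3 = yukawa_pair(3.0, 100, 0.5, 0.01, kappa)
```

A molecular dynamics run with this pair energy is what produces the radial distribution functions and structure factors referred to in the abstract; the potential is repulsive and monotonically screened with distance.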
Suzuoka, Daiki; Takahashi, Hideaki; Ishiyama, Tatsuya; Morita, Akihiro
2012-12-07
We have developed a method of molecular simulations utilizing a polarizable force field in combination with the theory of energy representation (ER) for the purpose of establishing an efficient and accurate methodology to compute solvation free energies. The standard version of the ER method is, however, based on the assumption that the solute-solvent interaction is pairwise additive for its construction. A crucial step in the present method is to introduce an intermediate state in the solvation process to treat separately the many-body interaction associated with the polarizable model. The intermediate state is chosen so that the solute-solvent interaction can be formally written in the pairwise form, though the solvent molecules are interacting with each other with polarizable charges dependent on the solvent configuration. It is then possible to extract the free energy contribution δμ due to the many-body interaction between solute and solvent from the total solvation free energy Δμ. It is shown that the free energy δμ can be computed by an extension of the recent development implemented in quantum mechanical/molecular mechanical simulations. To assess the numerical robustness of the approach, we computed the solvation free energies of a water and a methanol molecule in water solvent, where two paths for the solvation processes were examined by introducing different intermediate states. The solvation free energies of a water molecule associated with the two paths were obtained as -5.3 and -5.8 kcal/mol. Those of a methanol molecule were determined as -3.5 and -3.7 kcal/mol. These results of the ER simulations were also compared with those computed by a numerically exact approach. It was demonstrated that the present approach produces solvation free energies with accuracy comparable to that of the thermodynamic integration (TI) method, within a tenth of the computational time used for the TI simulations.
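The thermodynamic integration (TI) reference method invoked at the end of this abstract can be illustrated on a toy system. The sketch below is not the ER method itself: it applies TI, ΔF = ∫₀¹ ⟨∂U/∂λ⟩_λ dλ, to a 1-D harmonic oscillator whose λ-averages are known exactly, so the quadrature can be checked against the closed form ΔF = ln(k1/k0)/(2β).

```python
import math

def ti_free_energy(k0, k1, beta=1.0, n=101):
    """Thermodynamic integration for a 1-D harmonic oscillator with
    U_lam(x) = 0.5*((1-lam)*k0 + lam*k1)*x**2.
    Here dU/dlam = 0.5*(k1-k0)*x**2 and <x**2>_lam = 1/(beta*k_lam)
    exactly, so the integrand is (k1-k0)/(2*beta*k_lam)."""
    h = 1.0 / (n - 1)
    total = 0.0
    for i in range(n):  # trapezoidal rule over lambda in [0, 1]
        lam = i * h
        k_lam = (1.0 - lam) * k0 + lam * k1
        integrand = (k1 - k0) / (2.0 * beta * k_lam)
        w = 0.5 if i in (0, n - 1) else 1.0
        total += w * integrand * h
    return total

# Analytic result for this toy: Delta F = ln(k1/k0) / (2*beta)
dF = ti_free_energy(1.0, 4.0)
exact = math.log(4.0) / 2.0
```

In a real TI calculation the bracketed average at each λ would come from sampling (which is why TI is expensive), whereas here it is available in closed form.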
Elastic constants from microscopic strain fluctuations
Sengupta; Nielaba; Rao; Binder
2000-02-01
Fluctuations of the instantaneous local Lagrangian strain epsilon(ij)(r,t), measured with respect to a static "reference" lattice, are used to obtain accurate estimates of the elastic constants of model solids from atomistic computer simulations. The measured strains are systematically coarse-grained by averaging them within subsystems (of size L(b)) of a system (of total size L) in the canonical ensemble. Using a simple finite size scaling theory we predict the behavior of the fluctuations
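The fluctuation route to elastic constants can be sketched in its simplest scalar form: for a single strain component, equipartition gives ⟨ε²⟩ = kT/(VC), so an elastic constant C can be estimated by inverting sampled strain fluctuations. The toy below uses synthetic Gaussian strains and illustrative constants; it is not the coarse-graining and finite-size-scaling procedure of the paper.

```python
import math
import random

random.seed(1)
kT, V, C_true = 1.0, 100.0, 50.0
sigma = math.sqrt(kT / (V * C_true))   # equilibrium strain standard deviation

# Synthetic "measured" strains; a simulation would supply these instead.
samples = [random.gauss(0.0, sigma) for _ in range(50000)]

var = sum(e * e for e in samples) / len(samples)   # <epsilon^2>
C_est = kT / (V * var)                             # fluctuation estimate of C
```

In the paper the strains are additionally coarse-grained over subsystems of size L_b, and the L_b-dependence of the fluctuations is extrapolated with finite-size scaling; the inversion step shown here is the common core.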
Simulating motivated cognition
NASA Technical Reports Server (NTRS)
Gevarter, William B.
1991-01-01
A research effort to develop a sophisticated computer model of human behavior is described. A computer framework of motivated cognition was developed. Motivated cognition focuses on the motivations or affects that provide the context and drive in human cognition and decision making. A conceptual architecture of the human decision-making approach from the perspective of information processing in the human brain is developed in diagrammatic form. A preliminary version of such a diagram is presented. This architecture is then used as a vehicle for successfully constructing a computer program simulating Dweck and Leggett's findings, which relate how an individual's implicit theories orient them toward particular goals, with resultant cognitions, affects, and behavior.
A computer simulation approach to measurement of human control strategy
NASA Technical Reports Server (NTRS)
Green, J.; Davenport, E. L.; Engler, H. F.; Sears, W. E., III
1982-01-01
Human control strategy is measured through use of a psychologically-based computer simulation which reflects a broader theory of control behavior. The simulation is called the human operator performance emulator, or HOPE. HOPE was designed to emulate control learning in a one-dimensional preview tracking task and to measure control strategy in that setting. When given a numerical representation of a track and information about current position in relation to that track, HOPE generates positions for a stick controlling the cursor to be moved along the track. In other words, HOPE generates control stick behavior corresponding to that which might be used by a person learning preview tracking.
NASA Astrophysics Data System (ADS)
Pala, M. G.; Esseni, D.
2018-03-01
This paper presents the theory, implementation, and application of a quantum transport modeling approach based on the nonequilibrium Green's function formalism and a full-band empirical pseudopotential Hamiltonian. We here propose to employ a hybrid real-space/plane-wave basis that results in a significant reduction of the computational complexity compared to a full plane-wave basis. To this purpose, we provide a theoretical formulation in the hybrid basis of the quantum confinement, the self-energies of the leads, and the coupling between the device and the leads. After discussing the theory and the implementation of the new simulation methodology, we report results for complete, self-consistent simulations of different electron devices, including a silicon Esaki diode, a thin-body silicon field effect transistor (FET), and a germanium tunnel FET. The simulated transistors have technologically relevant geometrical features with a semiconductor film thickness of about 4 nm and a channel length ranging from 10 to 17 nm. We believe that the newly proposed formalism may find applications also in transport models based on ab initio Hamiltonians, as those employed in density functional theory methods.
Xiao, Bo; Imel, Zac E.; Georgiou, Panayiotis; Atkins, David C.; Narayanan, Shrikanth S.
2017-01-01
Empathy is an important psychological process that facilitates human communication and interaction. Enhancement of empathy has profound significance in a range of applications. In this paper, we review emerging directions of research on computational analysis of empathy expression and perception as well as empathic interactions, including their simulation. We summarize the work on empathic expression analysis by the targeted signal modalities (e.g., text, audio, facial expressions). We categorize empathy simulation studies into theory-based emotion space modeling or application-driven user and context modeling. We summarize challenges in computational study of empathy including conceptual framing and understanding of empathy, data availability, appropriate use and validation of machine learning techniques, and behavior signal processing. Finally, we propose a unified view of empathy computation, and offer a series of open problems for future research. PMID:27017830
Sosso, Gabriele C; Chen, Ji; Cox, Stephen J; Fitzner, Martin; Pedevilla, Philipp; Zen, Andrea; Michaelides, Angelos
2016-06-22
The nucleation of crystals in liquids is one of nature's most ubiquitous phenomena, playing an important role in areas such as climate change and the production of drugs. As the early stages of nucleation involve exceedingly small time and length scales, atomistic computer simulations can provide unique insights into the microscopic aspects of crystallization. In this review, we take stock of the numerous molecular dynamics simulations that, in the past few decades, have unraveled crucial aspects of crystal nucleation in liquids. We put into context the theoretical framework of classical nucleation theory and the state-of-the-art computational methods by reviewing simulations of such processes as ice nucleation and the crystallization of molecules in solutions. We shall see that molecular dynamics simulations have provided key insights into diverse nucleation scenarios, ranging from colloidal particles to natural gas hydrates, and that, as a result, the general applicability of classical nucleation theory has been repeatedly called into question. We have attempted to identify the most pressing open questions in the field. We believe that, by improving (i) existing interatomic potentials and (ii) currently available enhanced sampling methods, the community can move toward accurate investigations of realistic systems of practical interest, thus bringing simulations a step closer to experiments.
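The classical nucleation theory framework that this review repeatedly tests can be summarized in a few lines: the free energy of a spherical nucleus is a competition between a bulk term proportional to r³ and a surface term proportional to r², giving a critical radius r* = 2γ/(ρΔμ) and a barrier ΔG* = 16πγ³/(3(ρΔμ)²). A minimal sketch in illustrative reduced units:

```python
import math

def cnt_barrier(gamma, d_mu, rho):
    """Classical nucleation theory for a spherical nucleus:
    dG(r) = -(4/3)*pi*r**3 * rho * d_mu + 4*pi*r**2 * gamma,
    with gamma the interfacial free energy, d_mu the chemical-potential
    gain per particle, and rho the number density of the new phase.
    Returns the critical radius r* and barrier height dG*."""
    r_star = 2.0 * gamma / (rho * d_mu)
    dg_star = 16.0 * math.pi * gamma ** 3 / (3.0 * (rho * d_mu) ** 2)
    return r_star, dg_star

def dG(r, gamma, d_mu, rho):
    """Free energy of forming a nucleus of radius r."""
    return (-(4.0 / 3.0) * math.pi * r ** 3 * rho * d_mu
            + 4.0 * math.pi * r ** 2 * gamma)

gamma, d_mu, rho = 0.5, 0.2, 1.0       # illustrative reduced units
r_star, dg_star = cnt_barrier(gamma, d_mu, rho)
```

The simulations surveyed in the review probe precisely where this picture breaks down, e.g. when the nucleus is too small for a sharp interface or a bulk γ to be meaningful.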
Human-computer interaction in multitask situations
NASA Technical Reports Server (NTRS)
Rouse, W. B.
1977-01-01
Human-computer interaction in multitask decision-making situations is considered, and it is proposed that humans and computers have overlapping responsibilities. Queueing theory is employed to model this dynamic approach to the allocation of responsibility between human and computer. Results of simulation experiments are used to illustrate the effects of several system variables including number of tasks, mean time between arrivals of action-evoking events, human-computer speed mismatch, probability of computer error, probability of human error, and the level of feedback between human and computer. Current experimental efforts are discussed and the practical issues involved in designing human-computer systems for multitask situations are considered.
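The queueing-theoretic modeling mentioned above can be illustrated with a minimal discrete-event simulation. The sketch below is a plain M/M/1 queue with a single server, not Rouse's full human-computer allocation model; its mean sojourn time can be checked against the textbook result W = 1/(mu - lambda).

```python
import random

def mm1_mean_sojourn(lam, mu, n_tasks=50000, seed=0):
    """Discrete-event simulation of an M/M/1 queue: action-evoking events
    arrive at rate lam; a single server (human or computer) works at rate
    mu.  Returns the mean time a task spends waiting plus being served."""
    rng = random.Random(seed)
    t_arrive = 0.0      # arrival time of the current task
    server_free = 0.0   # time at which the server next becomes free
    total = 0.0
    for _ in range(n_tasks):
        t_arrive += rng.expovariate(lam)       # Poisson arrivals
        start = max(t_arrive, server_free)     # wait if server is busy
        server_free = start + rng.expovariate(mu)  # exponential service
        total += server_free - t_arrive        # sojourn time of this task
    return total / n_tasks

w_sim = mm1_mean_sojourn(lam=0.5, mu=1.0)
w_theory = 1.0 / (1.0 - 0.5)   # M/M/1: W = 1/(mu - lam) = 2
```

Extending this skeleton with a second server and an allocation rule is one way to explore the human-computer speed mismatch and feedback variables listed in the abstract.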
Quantum simulation of quantum field theory using continuous variables
Marshall, Kevin; Pooser, Raphael C.; Siopsis, George; ...
2015-12-14
Much progress has been made in the field of quantum computing using continuous variables over the last couple of years. This includes the generation of extremely large entangled cluster states (10,000 modes, in fact) as well as a fault-tolerant architecture. This has led to the point that continuous-variable quantum computing can indeed be thought of as a viable alternative for universal quantum computing. With that in mind, we present a new algorithm for continuous-variable quantum computers which gives an exponential speedup over the best known classical methods. Specifically, this relates to efficiently calculating the scattering amplitudes in scalar bosonic quantum field theory, a problem that is known to be hard using a classical computer. Thus, we give an experimental implementation based on cluster states that is feasible with today's technology.
Quantitative computer simulations of extraterrestrial processing operations
NASA Technical Reports Server (NTRS)
Vincent, T. L.; Nikravesh, P. E.
1989-01-01
The automation of a small, solid propellant mixer was studied. Temperature control is under investigation. A numerical simulation of the system is under development and will be tested using different control options. Control system hardware is currently being put into place. The construction of mathematical models and simulation techniques for understanding various engineering processes is also studied. Computer graphics packages were utilized for better visualization of the simulation results. The mechanical mixing of propellants is examined. Simulation of the mixing process is being done to study how one can control for chaotic behavior to meet specified mixing requirements. An experimental mixing chamber is also being built. It will allow visual tracking of particles under mixing. The experimental unit will be used to test ideas from chaos theory, as well as to verify simulation results. This project has applications to extraterrestrial propellant quality and reliability.
CYBER 200 Applications Seminar
NASA Technical Reports Server (NTRS)
Gary, J. P. (Compiler)
1984-01-01
Applications suited to the CYBER 200 digital computer are discussed, covering areas that include meteorology, algorithms, fluid dynamics, Monte Carlo methods, petroleum, electronic circuit simulation, biochemistry, lattice gauge theory, economics, and ray tracing.
Discovering the gas laws and understanding the kinetic theory of gases with an iPad app
NASA Astrophysics Data System (ADS)
Davies, Gary B.
2017-07-01
Carrying out classroom experiments that demonstrate Boyle’s law and Gay-Lussac’s law can be challenging. Even if we are able to conduct classroom experiments using pressure gauges and syringes, the results of these experiments do little to illuminate the kinetic theory of gases. However, molecular dynamics simulations that run on computers allow us to visualise the behaviour of individual particles and to link this behaviour to the bulk properties of the gas, such as its pressure and temperature. In this article, I describe how to carry out ‘computer experiments’ using a commercial molecular dynamics iPad app called Atoms in Motion [1]. Using the app, I show how to obtain data from simulations that demonstrate Boyle’s law and Gay-Lussac’s law, and hence also the combined gas law.
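The link between individual particle motion and bulk pressure can be demonstrated even without the app: in a 1-D box of length L, a particle of speed v strikes a given wall every 2L/v and delivers momentum 2mv per collision, so its time-averaged force on that wall is mv²/L. Halving the box at fixed speeds (fixed "temperature") doubles the pressure, a Boyle's-law analog. A sketch in illustrative units, not the app's simulation:

```python
import random

def pressure_1d(speeds, L, m=1.0):
    """Time-averaged force on one wall of a 1-D box of length L:
    each particle of speed v hits the wall every 2L/v and transfers
    momentum 2mv, so its average force contribution is m*v**2/L."""
    return sum(m * v * v / L for v in speeds)

random.seed(42)
# Fixed set of particle speeds, i.e. fixed kinetic energy ("temperature").
speeds = [abs(random.gauss(0.0, 1.0)) for _ in range(1000)]

P1, L1 = pressure_1d(speeds, 1.0), 1.0
P2, L2 = pressure_1d(speeds, 2.0), 2.0
# Boyle's-law analog in 1-D: P*L is constant when the speeds are unchanged.
```

The same bookkeeping, with v replaced by the velocity component normal to the wall and L by the box volume over wall area, gives PV = NkT in three dimensions.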
Solution of the one-dimensional consolidation theory equation with a pseudospectral method
Sepulveda, N.; ,
1991-01-01
The one-dimensional consolidation theory equation is solved for an aquifer system using a pseudospectral method. The spatial derivatives are computed using Fast Fourier Transforms and the time derivative is solved using a fourth-order Runge-Kutta scheme. The computer model calculates compaction based on the void ratio changes accumulated during the simulated periods of time. Compactions and expansions resulting from groundwater withdrawals and recharges are simulated for two observation wells in Santa Clara Valley and two in San Joaquin Valley, California. Field data previously published are used to obtain mean values for the soil grain density and the compression index and to generate depth-dependent profiles for hydraulic conductivity and initial void ratio. The water-level plots for the wells studied were digitized and used to obtain the time dependent profiles of effective stress.
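The numerical recipe described here, spectral spatial derivatives combined with fourth-order Runge-Kutta time stepping, can be sketched for a generic 1-D diffusion-type equation u_t = u_xx on a periodic domain. The code below is a generic illustration, not the consolidation model itself, and it uses a naive O(N²) DFT in place of an FFT for brevity; a single Fourier mode should decay as exp(-k²t).

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform (an FFT would be used in practice)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def d2dx2(u):
    """Spectral second derivative on [0, 2*pi) with periodic BCs."""
    N = len(u)
    U = dft(u)
    for k in range(N):
        kk = k if k <= N // 2 else k - N   # signed wavenumber
        U[k] *= -(kk * kk)
    return [v.real for v in idft(U)]

def rk4_step(u, dt):
    """One fourth-order Runge-Kutta step of u_t = u_xx."""
    k1 = d2dx2(u)
    k2 = d2dx2([ui + 0.5 * dt * ki for ui, ki in zip(u, k1)])
    k3 = d2dx2([ui + 0.5 * dt * ki for ui, ki in zip(u, k2)])
    k4 = d2dx2([ui + dt * ki for ui, ki in zip(u, k3)])
    return [ui + dt / 6.0 * (a + 2 * b + 2 * c + d)
            for ui, a, b, c, d in zip(u, k1, k2, k3, k4)]

N, dt, steps = 16, 0.01, 100
u = [math.sin(2 * math.pi * n / N) for n in range(N)]   # single Fourier mode
for _ in range(steps):
    u = rk4_step(u, dt)
# exact solution at t = steps*dt = 1: exp(-1) * sin(x)
```

The consolidation solver works the same way, with the diffusivity and non-periodic boundary handling of the aquifer problem replacing this toy setup.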
NASA Astrophysics Data System (ADS)
Jin, Yongmei
In recent years, theoretical modeling and computational simulation of microstructure evolution and materials properties have been attracting much attention. While significant advances have been made, two major challenges remain. One is the integration of multiple physical phenomena for simulation of complex materials behavior; the other is the bridging of multiple length and time scales in materials modeling and simulation. The research presented in this Thesis is focused mainly on tackling the first major challenge. In this Thesis, a unified Phase Field Microelasticity (PFM) approach is developed. This approach is an advanced version of the phase field method that takes into account the exact elasticity of arbitrarily anisotropic, elastically and structurally inhomogeneous systems. The proposed theory and models are applicable to infinite solids, elastic half-spaces, and finite bodies with arbitrarily shaped free surfaces, which may undergo various concomitant physical processes. The Phase Field Microelasticity approach is employed to formulate the theories and models of martensitic transformation, dislocation dynamics, and crack evolution in single crystal and polycrystalline solids. It is also used to study strain relaxation in heteroepitaxial thin films through misfit dislocations and surface roughening. Magnetic domain evolution in nanocrystalline thin films is also investigated. Numerous simulation studies are performed. Comparisons with analytical predictions and experimental observations are presented. The agreement verifies the theory and models as realistic simulation tools for computational materials science and engineering. The same Phase Field Microelasticity formalism of individual models of different physical phenomena makes it easy to integrate multiple physical processes into one unified simulation model, where multiple phenomena are treated as various relaxation modes that together act as one common cooperative phenomenon.
The model does not impose a priori constraints on possible microstructure evolution paths. This gives the model predictive power: the material system itself "chooses" the optimal path for multiple processes. The advances made in this Thesis represent a significant step toward overcoming the first challenge, mesoscale multi-physics modeling and simulation of materials. At the end of this Thesis, the way to tackle the second challenge, bridging multiple length and time scales in materials modeling and simulation, is discussed based on connections between mesoscale Phase Field Microelasticity modeling, microscopic atomistic calculation, and macroscopic continuum theory.
Grzetic, Douglas J; Delaney, Kris T; Fredrickson, Glenn H
2018-05-28
We derive the effective Flory-Huggins parameter in polarizable polymeric systems, within a recently introduced polarizable field theory framework. The incorporation of bead polarizabilities in the model self-consistently embeds dielectric response, as well as van der Waals interactions. The latter generate a χ parameter (denoted χ̃) between any two species with polarizability contrast. Using one-loop perturbation theory, we compute corrections to the structure factor S(k) and the dielectric function ε̂(k) for a polarizable binary homopolymer blend in the one-phase region of the phase diagram. The electrostatic corrections to S(k) can be entirely accounted for by a renormalization of the excluded volume parameter B into three van der Waals-corrected parameters BAA, BAB, and BBB, which then determine χ̃. The one-loop theory not only enables the quantitative prediction of χ̃ but also provides useful insight into the dependence of χ̃ on the electrostatic environment (for example, its sensitivity to electrostatic screening). The unapproximated polarizable field theory is amenable to direct simulation via complex Langevin sampling, which we employ here to test the validity of the one-loop results. From simulations of S(k) and ε̂(k) for a system of polarizable homopolymers, we find that the one-loop theory is best suited to high concentrations, where it performs very well. Finally, we measure χ̃N in simulations of a polarizable diblock copolymer melt and obtain excellent agreement with the one-loop theory. These constitute the first fully fluctuating simulations conducted within the polarizable field theory framework.
The Role of Computer Simulation in Nanoporous Metals—A Review
Xia, Re; Wu, Run Ni; Liu, Yi Lun; Sun, Xiao Yu
2015-01-01
Nanoporous metals (NPMs) have proven to be all-round candidates in versatile and diverse applications. In this decade, interest has grown in the fabrication, characterization and applications of these intriguing materials. Most existing reviews focus on the experimental and theoretical works rather than numerical simulation. In fact, alongside the many experiments and theoretical analyses, studies based on computer simulation, which can model complex microstructures in more realistic ways, play a key role in understanding and predicting the behaviors of NPMs. In this review, we present a comprehensive overview of the computer simulations of NPMs, which are prepared through chemical dealloying. Firstly, we summarize the various simulation approaches to the preparation, processing, and basic physical and chemical properties of NPMs, with emphasis on works involving dealloying, coarsening and mechanical properties. Then, we conclude with the latest progress as well as the future challenges in simulation studies. We believe that highlighting the importance of simulations will help to better understand the properties of novel materials and help with new scientific research on these materials. PMID:28793491
NASA Astrophysics Data System (ADS)
Frenkel, Daan
2007-03-01
During the past decade there has been a unique synergy between theory, experiment and simulation in Soft Matter Physics. In colloid science, computer simulations that started out as studies of highly simplified model systems, have acquired direct experimental relevance because experimental realizations of these simple models can now be synthesized. Whilst many numerical predictions concerning the phase behavior of colloidal systems have been vindicated by experiments, the jury is still out on others. In my talk I will discuss some of the recent technical developments, new findings and open questions in computational soft-matter science.
Weikl, Thomas R; Hu, Jinglei; Xu, Guang-Kui; Lipowsky, Reinhard
2016-09-02
The adhesion of cell membranes is mediated by the binding of membrane-anchored receptor and ligand proteins. In this article, we review recent results from simulations and theory that lead to novel insights on how the binding equilibrium and kinetics of these proteins is affected by the membranes and by the membrane anchoring and molecular properties of the proteins. Simulations and theory both indicate that the binding equilibrium constant K2D and the on- and off-rate constants of anchored receptors and ligands in their 2-dimensional (2D) membrane environment strongly depend on the membrane roughness from thermally excited shape fluctuations on nanoscales. Recent theory corroborated by simulations provides a general relation between K2D and the binding constant K3D of soluble variants of the receptors and ligands that lack the membrane anchors and are free to diffuse in 3 dimensions (3D).
NASA Technical Reports Server (NTRS)
Kikuchi, Hideaki; Kalia, Rajiv; Nakano, Aiichiro; Vashishta, Priya; Iyetomi, Hiroshi; Ogata, Shuji; Kouno, Takahisa; Shimojo, Fuyuki; Tsuruta, Kanji; Saini, Subhash;
2002-01-01
A multidisciplinary, collaborative simulation has been performed on a Grid of geographically distributed PC clusters. The multiscale simulation approach seamlessly combines i) atomistic simulation based on the molecular dynamics (MD) method and ii) quantum mechanical (QM) calculation based on the density functional theory (DFT), so that accurate but less scalable computations are performed only where they are needed. The multiscale MD/QM simulation code has been Grid-enabled using i) a modular, additive hybridization scheme, ii) multiple QM clustering, and iii) computation/communication overlapping. The Gridified MD/QM simulation code has been used to study environmental effects of water molecules on fracture in silicon. A preliminary run of the code has achieved a parallel efficiency of 94% on 25 PCs distributed over 3 PC clusters in the US and Japan, and a larger test involving 154 processors on 5 distributed PC clusters is in progress.
The fast algorithm of spark in compressive sensing
NASA Astrophysics Data System (ADS)
Xie, Meihua; Yan, Fengxia
2017-01-01
Compressed Sensing (CS) is an advanced theory of signal sampling and reconstruction. In CS theory, the reconstruction condition of a signal is an important theoretical problem, and the spark is a good index for studying it. But computing the spark is NP-hard. In this paper, we study the problem of computing the spark. For some special matrices, for example, Gaussian random matrices and 0-1 random matrices, we obtain some conclusions. Furthermore, for a Gaussian random matrix with fewer rows than columns, we prove that its spark equals the number of its rows plus one with probability 1. For a general matrix, two methods are given to compute its spark: direct searching and dual-tree searching. By simulating 24 Gaussian random matrices and 18 0-1 random matrices, we tested the computation time of these two methods. Numerical results showed that the dual-tree searching method had higher efficiency than direct searching, especially for matrices with roughly as many rows as columns.
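The direct-search method can be made concrete: the spark is the size of the smallest linearly dependent column subset, so one checks subsets in order of increasing size, testing each with a rank computation. This brute force is exponential in the worst case, consistent with the NP-hardness noted in the abstract. A hedged sketch (returning n+1 for a full-column-rank matrix is an implementation convention, standing in for "infinite spark"):

```python
from itertools import combinations

def rank(rows, tol=1e-9):
    """Matrix rank via Gaussian elimination with partial pivoting."""
    m = [row[:] for row in rows]
    r = 0
    for c in range(len(m[0])):
        piv = max(range(r, len(m)), key=lambda i: abs(m[i][c]), default=None)
        if piv is None or abs(m[piv][c]) < tol:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(r + 1, len(m)):
            f = m[i][c] / m[r][c]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def spark(A, tol=1e-9):
    """Smallest number of linearly dependent columns of A, found by
    directly searching all column subsets of increasing size."""
    n_cols = len(A[0])
    for k in range(1, n_cols + 1):
        for cols in combinations(range(n_cols), k):
            sub = [[row[c] for c in cols] for row in A]  # m x k submatrix
            if rank(sub, tol) < k:
                return k
    return n_cols + 1   # all columns independent (full column rank)

A = [[1.0, 0.0, 1.0],
     [0.0, 1.0, 1.0]]   # third column = first + second, so spark(A) = 3
```

For the Gaussian case discussed in the paper (m rows, n > m columns), any m columns are independent with probability 1 while any m+1 are dependent, so this search would return m+1, matching the stated theorem.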
Phase behavior and orientational ordering in block copolymers doped with anisotropic nanoparticles
NASA Astrophysics Data System (ADS)
Osipov, M. A.; Gorkunov, M. V.; Berezkin, A. V.; Kudryavtsev, Y. V.
2018-04-01
A molecular field theory and coarse-grained computer simulations with dissipative particle dynamics have been used to study the spontaneous orientational ordering of anisotropic nanoparticles in the lamellar and hexagonal phases of diblock copolymers and the effect of nanoparticles on the phase behavior of these systems. Both the molecular theory and computer simulations indicate that strongly anisotropic nanoparticles are ordered orientationally mainly in the boundary region between the domains and the nematic order parameter possesses opposite signs in adjacent domains. The orientational order is induced by the boundary and by the interaction between nanoparticles and the monomer units in different domains. In simulations, sufficiently long and strongly selective nanoparticles are ordered also inside the domains. The nematic order parameter and local concentration profiles of nanoparticles have been calculated numerically using the model of a nanoparticle with two interaction centers and also determined using the results of computer simulations. A number of phase diagrams have been obtained which illustrate the effect of nanoparticle selectivity and molar fraction on the stability ranges of various phases. Different morphologies have been identified by analyzing the static structure factor and a phase diagram has been constructed in the coordinates nanoparticle concentration versus copolymer composition. Orientational ordering of even a small fraction of nanoparticles may result in a significant increase of the dielectric anisotropy of a polymer nanocomposite, which is important for various applications.
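The nematic order parameter that takes opposite signs in adjacent domains is the standard S = <(3 cos^2(theta) - 1)/2>, measured against a reference axis. A minimal sketch: here the z axis stands in for a domain's preferred direction, and the orientation data are synthetic, not taken from the simulations of the paper.

```python
import math
import random

def nematic_order(vectors):
    """Nematic order parameter S = <(3*cos(theta)**2 - 1)/2> of a set of
    (not necessarily normalized) orientation vectors, relative to z."""
    s = 0.0
    for x, y, z in vectors:
        n = math.sqrt(x * x + y * y + z * z)
        c = z / n                      # cos(theta) w.r.t. the z axis
        s += (3.0 * c * c - 1.0) / 2.0
    return s / len(vectors)

random.seed(7)
# Isotropic directions: a Gaussian 3-vector points uniformly on the sphere.
iso = [(random.gauss(0, 1), random.gauss(0, 1), random.gauss(0, 1))
       for _ in range(100000)]
S_iso = nematic_order(iso)                 # ~0 for an isotropic sample
S_par = nematic_order([(0, 0, 1)] * 10)    # 1 for perfect alignment with z
S_perp = nematic_order([(1, 0, 0)] * 10)   # -1/2 for alignment perpendicular to z
```

The sign flip between adjacent lamellae reported above corresponds to S evaluated against a single fixed axis switching between positive (parallel) and negative (perpendicular) values.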
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huš, Matej; Urbic, Tomaz, E-mail: tomaz.urbic@fkkt.uni-lj.si; Munaò, Gianmarco
Thermodynamic and structural properties of a coarse-grained model of methanol are examined by Monte Carlo simulations and reference interaction site model (RISM) integral equation theory. Methanol particles are described as dimers formed from an apolar Lennard-Jones sphere, mimicking the methyl group, and a sphere with a core-softened potential as the hydroxyl group. Different closure approximations of the RISM theory are compared and discussed. The liquid structure of methanol is investigated by calculating site-site radial distribution functions and static structure factors for a wide range of temperatures and densities. The results obtained show good agreement between RISM and Monte Carlo simulations. The phase behavior of methanol is investigated by employing different thermodynamic routes for the calculation of the RISM free energy, drawing gas-liquid coexistence curves that match the simulation data. Preliminary indications of a putative second critical point between two different liquid phases of methanol are also discussed.
Automated Classification of Phonological Errors in Aphasic Language
Ahuja, Sanjeev B.; Reggia, James A.; Berndt, Rita S.
1984-01-01
Using heuristically guided state-space search, a prototype program has been developed to simulate and classify phonemic errors occurring in the speech of neurologically impaired patients. Simulations are based on an interchangeable rule/operator set of elementary errors which represents a theory of phonemic processing faults. This work introduces and evaluates a novel approach to error simulation and classification; it provides a prototype simulation tool for neurolinguistic research, and it forms the initial phase of a larger research effort involving computer modelling of neurolinguistic processes.
Multiple exciton generation in chiral carbon nanotubes: Density functional theory based computation
NASA Astrophysics Data System (ADS)
Kryjevski, Andrei; Mihaylov, Deyan; Kilina, Svetlana; Kilin, Dmitri
2017-10-01
We use a Boltzmann transport equation (BE) to study time evolution of a photo-excited state in a nanoparticle including phonon-mediated exciton relaxation and the multiple exciton generation (MEG) processes, such as exciton-to-biexciton multiplication and biexciton-to-exciton recombination. BE collision integrals are computed using Kadanoff-Baym-Keldysh many-body perturbation theory based on density functional theory simulations, including exciton effects. We compute internal quantum efficiency (QE), which is the number of excitons generated from an absorbed photon in the course of the relaxation. We apply this approach to chiral single-wall carbon nanotubes (SWCNTs), such as (6,2) and (6,5). We predict efficient MEG in the (6,2) and (6,5) SWCNTs within the solar spectrum range starting at the 2Eg energy threshold and with QE reaching ~1.6 at about 3Eg, where Eg is the electronic gap.
Traffic Flow Density Distribution Based on FEM
NASA Astrophysics Data System (ADS)
Ma, Jing; Cui, Jianming
In the analysis of normal traffic flow, static or dynamic models based on fluid mechanics are usually used for numerical analysis. However, such an approach involves massive modeling and data handling, and its accuracy is not high. The Finite Element Method (FEM) is a technique developed from the combination of modern mechanics, mathematics, and computer technology, and it has been widely applied in various domains such as engineering. Based on the existing theory of traffic flow, ITS, and the development of the FEM, a simulation theory of the FEM that solves the problems existing in traffic flow analysis is put forward. Based on this theory, using existing Finite Element Analysis (FEA) software, the traffic flow is simulated and analyzed with fluid mechanics and dynamics. The problem of massive data processing in manual modeling and numerical analysis is solved, and the authenticity of the simulation is enhanced.
NASA Astrophysics Data System (ADS)
Alvarez, Diego A.; Uribe, Felipe; Hurtado, Jorge E.
2018-02-01
Random set theory is a general framework which comprises uncertainty in the form of probability boxes, possibility distributions, cumulative distribution functions, Dempster-Shafer structures or intervals; in addition, the dependence between the input variables can be expressed using copulas. In this paper, the lower and upper bounds on the probability of failure are calculated by means of random set theory. In order to accelerate the calculation, a well-known and efficient probability-based reliability method known as subset simulation is employed. This method is especially useful for finding small failure probabilities in both low- and high-dimensional spaces, disjoint failure domains and nonlinear limit state functions. The proposed methodology represents a drastic reduction of the computational labor implied by plain Monte Carlo simulation for problems defined with a mixture of representations for the input variables, while delivering similar results. Numerical examples illustrate the efficiency of the proposed approach.
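The lower/upper bound idea can be illustrated with plain Monte Carlo on a toy problem (subset simulation would replace the brute-force sampling when the failure probability is small). Everything here (the limit-state function, the interval half-width, the sample size) is an assumed example, not taken from the paper:

```python
import numpy as np

# Toy random-set reliability bound: each sample of X is only known to lie
# in an interval (epistemic uncertainty). A sampled interval counts as a
# failure in the UPPER bound if *some* point of it fails, and in the
# LOWER bound only if *every* point of it fails.
rng = np.random.default_rng(1)
n = 100_000
centers = rng.normal(0.0, 1.0, n)   # sampled interval midpoints
half_width = 0.2                    # assumed epistemic half-width
g = lambda x: 3.0 - x               # limit state: failure when g(x) <= 0

# g is monotone decreasing, so its extremes sit at the interval endpoints
g_lo = g(centers + half_width)      # worst case  -> upper failure bound
g_hi = g(centers - half_width)      # best case   -> lower failure bound
p_upper = np.mean(g_lo <= 0.0)
p_lower = np.mean(g_hi <= 0.0)
print(p_lower, p_upper)             # bounds bracket the crisp P[g(X) <= 0]
```

For non-monotone limit states, the minimum and maximum of g over each focal set must be searched, which is where most of the computational labor in random set propagation goes.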
Large-N kinetic theory for highly occupied systems
NASA Astrophysics Data System (ADS)
Walz, R.; Boguslavski, K.; Berges, J.
2018-06-01
We consider an effective kinetic description for quantum many-body systems, which is not based on a weak-coupling or diluteness expansion. Instead, it employs an expansion in the number of field components N of the underlying scalar quantum field theory. Extending previous studies, we demonstrate that the large-N kinetic theory at next-to-leading order is able to describe important aspects of highly occupied systems, which are beyond standard perturbative kinetic approaches. We analyze the underlying quasiparticle dynamics by computing the effective scattering matrix elements analytically and solve numerically the large-N kinetic equation for a highly occupied system far from equilibrium. This allows us to compute the universal scaling form of the distribution function at an infrared nonthermal fixed point within a kinetic description, and we compare to existing lattice field theory simulation results.
Computer simulation of stochastic processes through model-sampling (Monte Carlo) techniques.
Sheppard, C W.
1969-03-01
A simple Monte Carlo simulation program is outlined which can be used for the investigation of random-walk problems, for example in diffusion, or the movement of tracers in the blood circulation. The results given by the simulation are compared with those predicted by well-established theory, and it is shown how the model can be expanded to deal with drift, and with reflexion from or adsorption at a boundary.
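A model-sampling program of the kind outlined fits in a few lines. The sketch below (our own minimal example, not the original program) simulates unbiased 1D random walkers and compares their mean-square displacement with the diffusion-theory prediction <x²> = n·dx², i.e. 2Dt with D = dx²/(2·dt):

```python
import numpy as np

# Monte Carlo random walk: each walker takes `steps` independent +/-dx
# steps; diffusion theory predicts <x^2> = steps * dx^2 at the end.
rng = np.random.default_rng(42)
walkers, steps, dx = 10_000, 400, 1.0
paths = rng.choice([-dx, dx], size=(walkers, steps)).cumsum(axis=1)
msd = np.mean(paths[:, -1] ** 2)
print(msd, steps * dx**2)  # simulation vs theory, within sampling error
```

Drift is added by biasing the step probabilities, and a reflecting or absorbing boundary by clipping or removing walkers that cross it, exactly as the abstract describes.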
Properties of Organic Liquids when Simulated with Long-Range Lennard-Jones Interactions.
Fischer, Nina M; van Maaren, Paul J; Ditz, Jonas C; Yildirim, Ahmet; van der Spoel, David
2015-07-14
In order to increase the accuracy of classical computer simulations, existing methodologies may need to be adapted. Hitherto, most force fields employ a truncated potential function to model van der Waals interactions, sometimes augmented with an analytical correction. Although such corrections are accurate for homogeneous systems with a long cutoff, they should not be used in inherently inhomogeneous systems such as biomolecular and interface systems. For such cases, a variant of the particle mesh Ewald algorithm (Lennard-Jones PME) was already proposed 20 years ago (Essmann et al. J. Chem. Phys. 1995, 103, 8577-8593), but it was implemented only recently (Wennberg et al. J. Chem. Theory Comput. 2013, 9, 3527-3537) in a major simulation code (GROMACS). The availability of this method allows surface tensions of liquids as well as bulk properties to be established, such as density and enthalpy of vaporization, without approximations due to truncation. Here, we report on simulations of ≈150 liquids (taken from a force field benchmark: Caleman et al. J. Chem. Theory Comput. 2012, 8, 61-74) using three different force fields and compare simulations with and without explicit long-range van der Waals interactions. We find that the density and enthalpy of vaporization increase for most liquids using the generalized Amber force field (GAFF, Wang et al. J. Comput. Chem. 2004, 25, 1157-1174) and the Charmm generalized force field (CGenFF, Vanommeslaeghe et al. J. Comput. Chem. 2010, 31, 671-690) but less so for OPLS/AA (Jorgensen and Tirado-Rives, Proc. Natl. Acad. Sci. U.S.A. 2005, 102, 6665-6670), which was parametrized with an analytical correction to the van der Waals potential. The surface tension increases by ≈10⁻² N/m for all force fields. These results suggest that van der Waals attractions in force fields are too strong, in particular for the GAFF and CGenFF.
In addition to the simulation results, we introduce a new version of a web server, http://virtualchemistry.org, aimed at facilitating sharing and reuse of input files for molecular simulations.
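For context, the analytical correction for a homogeneous system that the abstract mentions has a standard closed form for the truncated Lennard-Jones potential. The sketch below (our own, in reduced units with an assumed state point; the function name is ours) evaluates the per-particle energy tail correction:

```python
import numpy as np

# Standard homogeneous-system tail correction for a Lennard-Jones
# potential truncated at r_cut (the correction that LJ-PME makes
# unnecessary in inhomogeneous systems):
#   U_tail/N = (8/3) * pi * rho * eps * sigma^3 * [ (1/3)(sigma/rc)^9 - (sigma/rc)^3 ]
def lj_energy_tail_per_particle(rho, sigma, epsilon, r_cut):
    sr3 = (sigma / r_cut) ** 3
    return (8.0 / 3.0) * np.pi * rho * epsilon * sigma**3 * (sr3**3 / 3.0 - sr3)

# Assumed example in reduced units: rho* = 0.8, sigma = epsilon = 1, rc = 2.5
print(lj_energy_tail_per_particle(0.8, 1.0, 1.0, 2.5))  # ≈ -0.43 (attractive)
```

The correction assumes a uniform density beyond the cutoff, which is precisely what breaks down at interfaces and motivates LJ-PME.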
Cao, Siqin; Sheong, Fu Kit; Huang, Xuhui
2015-08-07
Reference interaction site model (RISM) theory has recently become a popular approach in the study of thermodynamic and structural properties of the solvent around macromolecules. On the other hand, it has been widely suggested that water density depletion exists around large hydrophobic solutes (>1 nm), and this may pose a great challenge to the RISM theory. In this paper, we develop a new analytical theory, the Reference Interaction Site Model with Hydrophobicity induced density Inhomogeneity (RISM-HI), to compute the solvent radial distribution function (RDF) around a large hydrophobic solute in water as well as in its mixtures with other polyatomic organic solvents. To achieve this, we explicitly consider the density inhomogeneity at the solute-solvent interface using the framework of the Yvon-Born-Green hierarchy, and the RISM theory is used to obtain the solute-solvent pair correlation. In order to efficiently solve the relevant equations while maintaining reasonable accuracy, we have also developed a new closure called the D2 closure. With this new theory, the solvent RDFs around a large hydrophobic particle in water and different water-acetonitrile mixtures can be computed, and they agree well with the results of molecular dynamics simulations. Furthermore, we show that our RISM-HI theory can also efficiently compute the solvation free energies of solutes with a wide range of hydrophobicity in various water-acetonitrile solvent mixtures with reasonable accuracy. We anticipate that our theory could be widely applied to compute the thermodynamic and structural properties of the solvation of hydrophobic solutes.
Numerical simulation of swept-wing flows
NASA Technical Reports Server (NTRS)
Reed, Helen L.
1991-01-01
Efforts of the last six months to computationally model the transition process characteristics of flow over swept wings are described. Specifically, the crossflow instability and crossflow/Tollmien-Schlichting wave interactions are analyzed through the numerical solution of the full 3D Navier-Stokes equations including unsteadiness, curvature, and sweep. This approach is chosen because of the complexity of the problem and because it appears that linear stability theory is insufficient to explain the discrepancies between different experiments and between theory and experiment. The leading edge region of a swept wing is considered in a 3D spatial simulation with random disturbances as the initial conditions.
An object oriented code for simulating supersymmetric Yang-Mills theories
NASA Astrophysics Data System (ADS)
Catterall, Simon; Joseph, Anosh
2012-06-01
We present SUSY_LATTICE - a C++ program that can be used to simulate certain classes of supersymmetric Yang-Mills (SYM) theories, including the well-known N=4 SYM in four dimensions, on a flat Euclidean space-time lattice. Discretization of SYM theories is an old problem in lattice field theory. It resisted solution until recently, when new ideas drawn from orbifold constructions and topological field theories were brought to bear on the question. The result has been the creation of a new class of lattice gauge theories in which the lattice action is invariant under one or more supersymmetries. The resultant theories are local, free of doublers, and also possess exact gauge invariance. In principle they form the basis for a truly non-perturbative definition of the continuum SYM theories. In the continuum limit they reproduce versions of the SYM theories formulated in terms of twisted fields, which on a flat space-time is just a change of the field variables. In this paper, we briefly review these ideas and then go on to provide the details of the C++ code. We sketch the design of the code, with particular emphasis placed on SYM theories with N=(2,2) in two dimensions and N=4 in three and four dimensions, making one-to-one comparisons between the essential components of the SYM theories and their corresponding counterparts appearing in the simulation code. The code may be used to compute several quantities associated with the SYM theories such as the Polyakov loop, mean energy, and the width of the scalar eigenvalue distributions. Program summary: Program title: SUSY_LATTICE. Catalogue identifier: AELS_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AELS_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 9315. No. of bytes in distributed program, including test data, etc.: 95 371. Distribution format: tar.gz. Programming language: C++. Computer: PCs and workstations. Operating system: any; tested on Linux machines. Classification: 11.6. Nature of problem: to compute some of the observables of supersymmetric Yang-Mills theories, such as the supersymmetric action, Polyakov/Wilson loops, scalar eigenvalues, and Pfaffian phases. Solution method: the Rational Hybrid Monte Carlo algorithm followed by a leapfrog evolution and a Metropolis test; the input parameters of the model are read in from a parameter file. Restrictions: this code applies only to supersymmetric gauge theories with extended supersymmetry which undergo the process of maximal twisting (see Section 2 of the manuscript for details). Running time: from a few minutes to several hours, depending on the amount of statistics needed.
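The "leapfrog evolution followed by a Metropolis test" in the program summary is the standard hybrid Monte Carlo building block. A generic sketch for a toy one-variable Gaussian action, NOT the SYM action (all names and parameters here are our own assumptions):

```python
import numpy as np

# Toy (R)HMC step: refresh a Gaussian momentum, integrate Hamilton's
# equations with leapfrog, then Metropolis-accept to correct the
# integrator's energy error exactly. Action: S(phi) = phi^2 / 2.
rng = np.random.default_rng(7)

def grad_S(phi):
    return phi

def hmc_step(phi0, n_steps=20, dt=0.1):
    phi, p = phi0, rng.standard_normal()      # momentum refresh
    H0 = 0.5 * p * p + 0.5 * phi0 * phi0      # initial Hamiltonian
    p -= 0.5 * dt * grad_S(phi)               # leapfrog half-kick
    for _ in range(n_steps - 1):
        phi += dt * p
        p -= dt * grad_S(phi)
    phi += dt * p
    p -= 0.5 * dt * grad_S(phi)               # final half-kick
    H1 = 0.5 * p * p + 0.5 * phi * phi
    # Metropolis test: accept with probability min(1, exp(H0 - H1))
    return phi if rng.random() < np.exp(min(0.0, H0 - H1)) else phi0

phi, samples = 0.0, []
for _ in range(5000):
    phi = hmc_step(phi)
    samples.append(phi)
print(np.var(samples))  # close to 1, the variance of exp(-phi^2/2)
```

In the real code the "rational" part of RHMC replaces the simple gradient with a rational approximation of the fermion-matrix power appearing in the pseudofermion force.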
Thickened boundary layer theory for air film drag reduction on a van body surface
NASA Astrophysics Data System (ADS)
Xie, Xiaopeng; Cao, Lifeng; Huang, Heng
2018-05-01
To elucidate drag reduction mechanism on a van body surface under air film condition, a thickened boundary layer theory was proposed and a frictional resistance calculation model of the van body surface was established. The frictional resistance on the van body surface was calculated with different parameters of air film thickness. In addition, the frictional resistance of the van body surface under the air film condition was analyzed by computational fluid dynamics (CFD) simulation and different air film states that influenced the friction resistance on the van body surface were discussed. As supported by the CFD simulation results, the thickened boundary layer theory may provide reference for practical application of air film drag reduction on a van body surface.
Fast computation algorithms for speckle pattern simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nascov, Victor; Samoilă, Cornel; Ursuţiu, Doru
2013-11-13
We present our development of a series of efficient computation algorithms, generally usable to calculate light diffraction and particularly for speckle pattern simulation. We mainly use scalar diffraction theory in the form of the Rayleigh-Sommerfeld diffraction formula and its Fresnel approximation. Our algorithms are based on a special form of the convolution theorem and the Fast Fourier Transform. They are able to evaluate the diffraction formula much faster than direct computation, and we have circumvented the restrictions on the relative sizes of the input and output domains met in commonly used procedures. Moreover, the input and output planes can be tilted with respect to each other, and the output domain can be shifted off-axis.
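The central trick, evaluating the diffraction formula as an FFT-based convolution, can be shown in one dimension. This is a hedged illustration with assumed parameters (constant amplitude and phase prefactors of the Fresnel integral are omitted, and all names are ours), not the authors' algorithm:

```python
import numpy as np

# Fresnel diffraction of a slit via the convolution theorem:
#   u(x, z) ∝ u0(x) ⊗ exp(i k x^2 / (2 z))
# The FFT turns the O(N^2) convolution into O(N log N); zero-padding to
# 2N avoids circular wrap-around of the linear convolution.
wavelength, z, dx, n = 633e-9, 0.1, 10e-6, 512
k = 2.0 * np.pi / wavelength
x = (np.arange(n) - n // 2) * dx

u0 = np.zeros(n, dtype=complex)
u0[n // 2 - 20 : n // 2 + 20] = 1.0    # slit aperture, 400 um wide

h = np.exp(1j * k * x**2 / (2.0 * z))  # sampled Fresnel kernel
m = 2 * n                              # padded FFT length
U = np.fft.ifft(np.fft.fft(u0, m) * np.fft.fft(h, m))
U = U[n // 2 : n // 2 + n]             # recenter onto the input grid
intensity = np.abs(U) ** 2
```

Speckle simulation follows the same pattern with a random-phase input field in place of the deterministic aperture.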
U(1) Wilson lattice gauge theories in digital quantum simulators
NASA Astrophysics Data System (ADS)
Muschik, Christine; Heyl, Markus; Martinez, Esteban; Monz, Thomas; Schindler, Philipp; Vogell, Berit; Dalmonte, Marcello; Hauke, Philipp; Blatt, Rainer; Zoller, Peter
2017-10-01
Lattice gauge theories describe fundamental phenomena in nature, but calculating their real-time dynamics on classical computers is notoriously difficult. In a recent publication (Martinez et al 2016 Nature 534 516), we proposed and experimentally demonstrated a digital quantum simulation of the paradigmatic Schwinger model, a U(1)-Wilson lattice gauge theory describing the interplay between fermionic matter and gauge bosons. Here, we provide a detailed theoretical analysis of the performance and the potential of this protocol. Our strategy is based on analytically integrating out the gauge bosons, which preserves exact gauge invariance but results in complicated long-range interactions between the matter fields. Trapped-ion platforms are naturally suited to implementing these interactions, allowing for an efficient quantum simulation of the model, with a number of gate operations that scales polynomially with system size. Employing numerical simulations, we illustrate that relevant phenomena can be observed in larger experimental systems, using as an example the production of particle-antiparticle pairs after a quantum quench. We investigate theoretically the robustness of the scheme towards generic error sources, and show that near-future experiments can reach regimes where finite-size effects are insignificant. We also discuss the challenges in quantum simulating the continuum limit of the theory. Using our scheme, fundamental phenomena of lattice gauge theories can be probed using a broad set of experimentally accessible observables, including the entanglement entropy and the vacuum persistence amplitude.
Computer simulations of electromagnetic cool ion beam instabilities. [in near earth space
NASA Technical Reports Server (NTRS)
Gary, S. P.; Madland, C. D.; Schriver, D.; Winske, D.
1986-01-01
Electromagnetic ion beam instabilities driven by cool ion beams propagating parallel or antiparallel to a uniform magnetic field are studied using computer simulations. The elements of linear theory applicable to electromagnetic ion beam instabilities and the simulations, derived from a one-dimensional hybrid computer code, are described. The quasi-linear regime of the right-hand resonant ion beam instability and the gyrophase bunching of the nonlinear regime of the right-hand resonant and nonresonant instabilities are examined. It is found that in the quasi-linear regime the instability saturation is due to a reduction in the beam-core relative drift speed and an increase in the perpendicular-to-parallel beam temperature ratio; in the nonlinear regime the instabilities saturate when half the initial beam drift kinetic energy density is converted to fluctuating magnetic field energy density.
Numerical experiments in homogeneous turbulence
NASA Technical Reports Server (NTRS)
Rogallo, R. S.
1981-01-01
The direct simulation methods developed by Orszag and Patterson (1972) for isotropic turbulence were extended to homogeneous turbulence in an incompressible fluid subjected to uniform deformation or rotation. The results of simulations for irrotational strain (plane and axisymmetric), shear, rotation, and relaxation toward isotropy following axisymmetric strain are compared with linear theory and experimental data. Emphasis is placed on the shear flow because of its importance and because of the availability of accurate and detailed experimental data. The computed results are used to assess the accuracy of two popular models used in the closure of the Reynolds-stress equations. Data from a variety of the computed fields and the details of the numerical methods used in the simulation are also presented.
Enhanced sampling techniques in biomolecular simulations.
Spiwok, Vojtech; Sucur, Zoran; Hosek, Petr
2015-11-01
Biomolecular simulations are routinely used in biochemistry and molecular biology research; however, they often fail to match expectations of their impact on pharmaceutical and biotech industry. This is caused by the fact that a vast amount of computer time is required to simulate short episodes from the life of biomolecules. Several approaches have been developed to overcome this obstacle, including application of massively parallel and special purpose computers or non-conventional hardware. Methodological approaches are represented by coarse-grained models and enhanced sampling techniques. These techniques can show how the studied system behaves in long time-scales on the basis of relatively short simulations. This review presents an overview of new simulation approaches, the theory behind enhanced sampling methods and success stories of their applications with a direct impact on biotechnology or drug design.
Nonperturbative finite-temperature Yang-Mills theory
NASA Astrophysics Data System (ADS)
Cyrol, Anton K.; Mitter, Mario; Pawlowski, Jan M.; Strodthoff, Nils
2018-03-01
We present nonperturbative correlation functions in Landau-gauge Yang-Mills theory at finite temperature. The results are obtained from the functional renormalisation group within a self-consistent approximation scheme. In particular, we compute the magnetic and electric components of the gluon propagator, and the three- and four-gluon vertices. We also show the ghost propagator and the ghost-gluon vertex at finite temperature. Our results for the propagators are confronted with lattice simulations and our Debye mass is compared to hard thermal loop perturbation theory.
NASA Astrophysics Data System (ADS)
Wissing, Dennis Robert
The purpose of this research was to explore undergraduates' conceptual development of oxygen transport and utilization, as a component of cardiopulmonary physiology and advanced respiratory care courses in an allied health program. This exploration focused on the students' development of knowledge and the presence of alternative conceptions prior to, during, and after completing the cardiopulmonary physiology and advanced respiratory care courses. Using the simulation program SimBioSys™ (Samsel, 1994), student-participants completed a series of laboratory exercises focusing on cardiopulmonary disease states. This study examined data gathered from: (1) a novice group receiving the simulation program prior to instruction, (2) a novice group that experienced the simulation program following course completion in cardiopulmonary physiology, and (3) an intermediate group that experienced the simulation program following completion of formal education in respiratory care. This research was based on the theory of Human Constructivism as described by Mintzes, Wandersee, and Novak (1997). Data-gathering techniques were based on theories supported by Novak (1984), Wandersee (1997), and Chi (1997). Data were generated by exams, interviews, verbal analysis (Chi, 1997), and concept mapping. Results suggest that simulation may be an effective instructional method for assessing conceptual development and diagnosing alternative conceptions in undergraduates enrolled in a cardiopulmonary science program. Use of simulation in conjunction with clinical interviews and concept mapping may assist in verifying gaps in learning and conceptual knowledge. This study found only limited evidence to support the use of computer simulation prior to lecture to augment learning. However, it was demonstrated that students' prelecture experience with the computer simulation helped the instructor assess what the learners knew so that they could be taught accordingly.
In addition, use of computer simulation after formal instruction was shown to be useful in aiding students identified by the instructor as needing remediation.
Non-Adiabatic Molecular Dynamics Methods for Materials Discovery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Furche, Filipp; Parker, Shane M.; Muuronen, Mikko J.
2017-04-04
The flow of radiative energy in light-driven materials such as photosensitizer dyes or photocatalysts is governed by non-adiabatic transitions between electronic states and cannot be described within the Born-Oppenheimer approximation commonly used in electronic structure theory. The non-adiabatic molecular dynamics (NAMD) methods based on Tully surface hopping and time-dependent density functional theory developed in this project have greatly extended the range of molecular materials that can be tackled by NAMD simulations. New algorithms to compute molecular excited state and response properties efficiently were developed. Fundamental limitations of common non-linear response methods were discovered and characterized. Methods for accurate computations of vibronic spectra of materials such as black absorbers were developed and applied. It was shown that open-shell TDDFT methods capture bond breaking in NAMD simulations, a longstanding challenge for single-reference molecular dynamics simulations. The methods developed in this project were applied to study the photodissociation of acetaldehyde and revealed that non-adiabatic effects are experimentally observable in fragment kinetic energy distributions. Finally, the project enabled the first detailed NAMD simulations of photocatalytic water oxidation by titania nanoclusters, uncovering the mechanism of this fundamentally important reaction for fuel generation and storage.
Tait, E. W.; Ratcliff, L. E.; Payne, M. C.; ...
2016-04-20
Experimental techniques for electron energy loss spectroscopy (EELS) combine high energy resolution with high spatial resolution. They are therefore powerful tools for investigating the local electronic structure of complex systems such as nanostructures, interfaces and even individual defects. Interpretation of experimental electron energy loss spectra is often challenging and can require theoretical modelling of candidate structures, which themselves may be large and complex, beyond the capabilities of traditional cubic-scaling density functional theory. In this work, we present functionality to compute electron energy loss spectra within the onetep linear-scaling density functional theory code. We first demonstrate that simulated spectra agree with those computed using conventional plane wave pseudopotential methods to a high degree of precision. The ability of onetep to tackle large problems is then exploited to investigate convergence of spectra with respect to supercell size. As a result, we apply the novel functionality to a study of the electron energy loss spectra of defects on the (1 0 1) surface of an anatase slab and determine concentrations of defects which might be experimentally detectable.
A computational fluid dynamics approach to nucleation in the water-sulfuric acid system.
Herrmann, E; Brus, D; Hyvärinen, A-P; Stratmann, F; Wilck, M; Lihavainen, H; Kulmala, M
2010-08-12
This study presents a computational fluid dynamics modeling approach to investigate nucleation in the water-sulfuric acid system in a flow tube. On the basis of an existing experimental setup (Brus, D.; Hyvärinen, A.-P.; Viisanen, Y.; Kulmala, M.; Lihavainen, H. Atmos. Chem. Phys. 2010, 10, 2631-2641), we first establish the effect of convection on the flow profile. We then proceed to simulate nucleation for relative humidities of 10, 30, and 50% and for sulfuric acid concentrations between 10^9 and 3 × 10^10 cm^-3. We describe the nucleation zone in detail and determine how flow rate and relative humidity affect its characteristics. Experimental nucleation rates are compared to rates obtained from classical binary and kinetic nucleation theory as well as cluster activation theory. At low RH values, kinetic theory yields the best agreement with experimental results, while binary nucleation theory best reproduces the experimental nucleation behavior at 50% relative humidity. Particle growth is modeled for an example case at 50% relative humidity; the final simulated diameter is very close to the experimental result.
NASA Astrophysics Data System (ADS)
Hernández Vera, Mario; Wester, Roland; Gianturco, Francesco Antonio
2018-01-01
We construct velocity map images of the proton transfer reaction between helium and the molecular hydrogen ion H2+. We perform simulations of imaging experiments at one representative total collision energy, taking into account the inherent aberrations of the velocity mapping, in order to explore the feasibility of direct comparisons between theory and future experiments planned in our laboratory. The asymptotic angular distributions of the fragments in 3D velocity space are determined from the quantum state-to-state differential reactive cross sections and reaction probabilities, which are computed using the time-independent coupled-channel hyperspherical coordinate method. The calculations employ an earlier ab initio potential energy surface computed at the FCI/cc-pVQZ level of theory. The present simulations indicate that the planned experiments would be selective enough to differentiate between product distributions resulting from different initial internal states of the reactants.
Challenges in Computational Social Modeling and Simulation for National Security Decision Making
2011-06-01
This study is grounded within a system-activity theory, a logico-philosophical model of interdisciplinary research [13, 14], the concepts of social...often a difficult challenge. Ironically, social science research methods, such as ethnography, may be tremendously helpful in designing these...social sciences. Moreover, CSS projects draw on knowledge and methods from other fields of study, including graph theory, information visualization
NASA Astrophysics Data System (ADS)
He, Yue-Jing; Hung, Wei-Chih; Syu, Cheng-Jyun
2017-12-01
The finite-element method (FEM) and eigenmode expansion method (EEM) were adopted to analyze the guided modes and spectra of phase-shift fiber Bragg gratings at five phase-shift values (zero, 1/4π, 1/2π, 3/4π, and π). Previous studies of optical fiber gratings relied on conventional coupled-mode theory, which involves abstruse physics and complex computational processes and is therefore challenging for users. Here, a numerical simulation method was coupled with a simple and rigorous design procedure to help beginners and users overcome this barrier to entry, and graphical simulation results are presented. To reduce the difference between the simulated and actual contexts, a perfectly matched layer and a perfectly reflecting boundary were added to the FEM and the EEM. For mesh generation in the FEM, the object meshing method and the boundary meshing method proposed in this study were used to effectively enhance computational accuracy and substantially reduce the time required for simulation. In summary, the simulation results of this study allow users to easily and rapidly design optical fiber communication systems and optical sensors with the desired spectral characteristics.
Complex systems and health behavior change: insights from cognitive science.
Orr, Mark G; Plaut, David C
2014-05-01
To provide proof-of-concept that health behavior can be instantiated as a computational model that is informed by cognitive science, the Theory of Reasoned Action, and health behavior theory. We conducted a synthetic review of the intersection of health behavior change and cognitive science. We conducted simulations using a computational model of health behavior (a constraint satisfaction artificial neural network) and tested whether the model exhibited the expected behavior. The model exhibited clear signs of such behavior. Health behavior can be conceptualized as constraint satisfaction: a negotiation between the current behavioral state and the social contexts in which it operates. We outlined implications for moving forward with computational models of health behavior.
MoCog1: A computer simulation of recognition-primed human decision making
NASA Technical Reports Server (NTRS)
Gevarter, William B.
1991-01-01
This report describes the successful results of the first stage of a research effort to develop a 'sophisticated' computer model of human cognitive behavior. Most human decision-making is the experience-based, relatively straightforward, largely automatic type of response to internal goals and drives, utilizing cues and opportunities perceived in the current environment. This report describes the development of the architecture and computer program associated with such 'recognition-primed' decision-making. The resultant computer program was successfully utilized as a vehicle to simulate findings on how an individual's implicit theories orient them toward particular goals, with resultant cognitions, affects, and behavior in response to their environment. The present work is an expanded version of, and is based on, research reported while the author was an employee of NASA ARC.
A new computer-aided simulation model for polycrystalline silicon film resistors
NASA Astrophysics Data System (ADS)
Ching-Yuan Wu; Weng-Dah Ken
1983-07-01
A general transport theory for the I-V characteristics of a polycrystalline film resistor has been derived by including the effects of carrier degeneracy, majority-carrier thermionic diffusion across the space-charge regions produced by carrier trapping in the grain boundaries, and quantum mechanical tunneling through the grain boundaries. Based on the derived transport theory, a new conduction model for the electrical resistivity of polycrystalline film resistors has been developed by incorporating the effects of carrier trapping and dopant segregation in the grain boundaries. Moreover, an empirical formula for the coefficient of the dopant-segregation effects has been proposed, which enables us to predict the dependence of the electrical resistivity of phosphorus- and arsenic-doped polycrystalline silicon films on thermal annealing temperature. Phosphorus-doped polycrystalline silicon resistors have been fabricated using ion implantation with doses ranging from 1.6 × 10^11 to 5 × 10^15 cm^-2. The dependence of the electrical resistivity on doping concentration and temperature has been measured and shown to be in good agreement with the results of computer simulations. In addition, computer simulations for boron- and arsenic-doped polycrystalline silicon resistors have also been performed and shown to be consistent with the experimental results published by previous authors.
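The qualitative behavior described here can be illustrated with a Seto-style thermionic-emission sketch. This is a textbook simplification, not the paper's full model (no degeneracy, tunneling, or segregation terms), and the parameter values below are generic assumptions:

```python
import numpy as np

Q = 1.602e-19               # elementary charge (C)
KB = 1.381e-23              # Boltzmann constant (J/K)
EPS_SI = 11.7 * 8.854e-12   # permittivity of silicon (F/m)

def barrier_height(doping, trap_density, grain_size):
    """Grain-boundary barrier (J) in the classic Seto picture.

    Partially depleted grains (N*L > Qt): Eb = q^2 Qt^2 / (8 eps N).
    Fully depleted grains   (N*L < Qt): Eb = q^2 N L^2 / (8 eps).
    doping N in m^-3, trap density Qt in m^-2, grain size L in m.
    """
    if doping * grain_size > trap_density:
        return Q**2 * trap_density**2 / (8 * EPS_SI * doping)
    return Q**2 * doping * grain_size**2 / (8 * EPS_SI)

def relative_resistivity(doping, trap_density, grain_size, T=300.0):
    """Resistivity up to a doping-independent prefactor: rho ~ exp(Eb/kT)/N.
    Captures the steep resistivity drop once traps are filled."""
    eb = barrier_height(doping, trap_density, grain_size)
    return np.exp(eb / (KB * T)) / doping
```

With a trap density of a few 10^16 m^-2 and 100 nm grains, this reproduces the well-known orders-of-magnitude resistivity drop over one decade of doping.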
Wagar, Brandon M; Thagard, Paul
2004-01-01
The authors present a neurological theory of how cognitive information and emotional information are integrated in the nucleus accumbens during effective decision making. They describe how the nucleus accumbens acts as a gateway to integrate cognitive information from the ventromedial prefrontal cortex and the hippocampus with emotional information from the amygdala. The authors have modeled this integration by a network of spiking artificial neurons organized into separate areas and used this computational model to simulate 2 kinds of cognitive-affective integration. The model simulates successful performance by people with normal cognitive-affective integration. The model also simulates the historical case of Phineas Gage as well as subsequent patients whose ability to make decisions became impeded by damage to the ventromedial prefrontal cortex.
Designing, programming, and optimizing a (small) quantum computer
NASA Astrophysics Data System (ADS)
Svore, Krysta
In 1982, Richard Feynman proposed to use a computer founded on the laws of quantum physics to simulate physical systems. In the more than thirty years since, quantum computers have shown promise to solve problems in number theory, chemistry, and materials science that would otherwise take longer than the lifetime of the universe to solve on an exascale classical machine. The practical realization of a quantum computer requires understanding and manipulating subtle quantum states while experimentally controlling quantum interference. It also requires an end-to-end software architecture for programming, optimizing, and implementing a quantum algorithm on the quantum device hardware. In this talk, we will introduce recent advances in connecting abstract theory to present-day real-world applications through software. We will highlight recent advancement of quantum algorithms and the challenges in ultimately performing a scalable solution on a quantum device.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roy, Santanu; Dang, Liem X.
In this paper, we present the first computer simulation of methanol exchange dynamics between the first and second solvation shells around different cations and anions. After water, methanol is the most frequently used solvent for ions. Methanol has different structural and dynamical properties than water, so its ion solvation process is different. To this end, we performed molecular dynamics simulations using polarizable potential models to describe methanol-methanol and ion-methanol interactions. In particular, we computed methanol exchange rates by employing transition state theory, the Impey-Madden-McDonald method, the reactive flux approach, and the Grote-Hynes theory. We observed that methanol exchange occurs on a nanosecond time scale for Na+ and on a picosecond time scale for the other ions. We also observed a trend in which, for like charges, the exchange rate is slower for smaller ions because they are more strongly bound to methanol. This work was supported by the US Department of Energy, Office of Science, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences, and Biosciences. The calculations were carried out using computer resources provided by the Office of Basic Energy Sciences.
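The Impey-Madden-McDonald-style residence-time analysis mentioned above can be sketched as follows. The occupancy array, the omission of their re-entry tolerance t*, and all names are our simplifications of the method, not the paper's implementation:

```python
import numpy as np

def residence_correlation(occupancy, max_lag):
    """Residence correlation C(t) = <h(0)h(t)> / <h(0)h(0)>, where the
    indicator h(t) = 1 when a solvent molecule is inside the ion's first shell.

    occupancy: (n_frames, n_solvent) boolean array from a trajectory. This is
    an Impey-Madden-McDonald-style estimate without the t* tolerance for brief
    excursions; the decay time of C(t) estimates the residence time, whose
    inverse is the shell exchange rate.
    """
    h = occupancy.astype(float)
    c0 = (h * h).mean()
    return np.array([(h[: h.shape[0] - lag] * h[lag:]).mean() / c0
                     for lag in range(max_lag)])
```

For a roughly exponential exchange process, fitting C(t) ≈ exp(-t/τ) over the early decay gives the residence time τ.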
Creating executable architectures using Visual Simulation Objects (VSO)
NASA Astrophysics Data System (ADS)
Woodring, John W.; Comiskey, John B.; Petrov, Orlin M.; Woodring, Brian L.
2005-05-01
Investigations have been performed to identify a methodology for creating executable models of architectures and simulations of architecture that lead to an understanding of their dynamic properties. Colored Petri Nets (CPNs) are used to describe architecture because of their strong mathematical foundations, the existence of techniques for their verification, and graph theory's well-established history of success in modern science. CPNs have been extended to interoperate with legacy simulations via a High Level Architecture (HLA) compliant interface. It has also been demonstrated that an architecture created as a CPN can be integrated with Department of Defense Architecture Framework products to ensure consistency between static and dynamic descriptions. A computer-aided tool, Visual Simulation Objects (VSO), which aids analysts in specifying, composing, and executing architectures, has been developed to verify the methodology and as a prototype commercial product.
Plasmonic resonances of nanoparticles from large-scale quantum mechanical simulations
NASA Astrophysics Data System (ADS)
Zhang, Xu; Xiang, Hongping; Zhang, Mingliang; Lu, Gang
2017-09-01
Plasmonic resonance of metallic nanoparticles results from the coherent motion of their conduction electrons, driven by incident light. For nanoparticles less than 10 nm in diameter, localized surface plasmonic resonances become sensitive to the quantum nature of the conduction electrons. Unfortunately, quantum mechanical simulations based on time-dependent Kohn-Sham density functional theory are computationally too expensive to tackle metal particles larger than 2 nm. Herein, we introduce the recently developed time-dependent orbital-free density functional theory (TD-OFDFT) approach, which enables large-scale quantum mechanical simulations of the plasmonic responses of metallic nanostructures. Using TD-OFDFT, we have performed quantum mechanical simulations to understand the size-dependent plasmonic response of Na nanoparticles and the plasmonic responses of Na nanoparticle dimers and trimers. An outlook on future development of the TD-OFDFT method is also presented.
Gas-liquid coexistence in a system of dipolar soft spheres.
Jia, Ran; Braun, Heiko; Hentschke, Reinhard
2010-12-01
The existence of gas-liquid coexistence in dipolar fluids with no contribution to the attractive interaction other than the dipole-dipole interaction is a basic and open question in the theory of fluids. Here we compute the gas-liquid critical point in a system of dipolar soft spheres subject to an external electric field using molecular dynamics computer simulation. Tracking the critical point as the field strength approaches zero, we find the following limiting values: T(c)=0.063 and ρ(c)=0.0033 (dipole moment μ=1). These values are confirmed by independent simulation at zero field strength.
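Critical parameters are commonly extracted from simulated coexistence densities by fitting scaling laws. A minimal sketch, assuming the 3D Ising order-parameter exponent and the law of rectilinear diameters (which may differ from the authors' actual fitting procedure):

```python
import numpy as np

BETA = 0.326  # 3D Ising order-parameter exponent (mean-field would be 0.5)

def fit_critical_point(T, rho_liq, rho_gas):
    """Estimate (Tc, rho_c) from gas-liquid coexistence densities.

    Uses the order-parameter scaling rho_l - rho_g = A (Tc - T)**BETA,
    linearized as (rho_l - rho_g)**(1/BETA) = A' (Tc - T), together with the
    law of rectilinear diameters (rho_l + rho_g)/2 = rho_c + B (Tc - T).
    """
    T = np.asarray(T, float)
    y = (np.asarray(rho_liq) - np.asarray(rho_gas)) ** (1.0 / BETA)
    slope, intercept = np.polyfit(T, y, 1)   # y = -A' T + A' Tc
    Tc = -intercept / slope
    diam = (np.asarray(rho_liq) + np.asarray(rho_gas)) / 2
    B, rho_c = np.polyfit(Tc - T, diam, 1)   # diam = B (Tc - T) + rho_c
    return Tc, rho_c
```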
Local deformation for soft tissue simulation
Omar, Nadzeri; Zhong, Yongmin; Smith, Julian; Gu, Chengfan
2016-01-01
This paper presents a new methodology to localize the deformation range to improve the computational efficiency of soft tissue simulation. The methodology identifies the local deformation range from the stress distribution in soft tissues due to an external force. A stress estimation method based on elastic theory is used to estimate the stress in soft tissues as a function of depth from the contact surface. The proposed methodology can be used with both mass-spring and finite element modeling approaches for soft tissue deformation. Experimental results show that the proposed methodology improves computational efficiency while maintaining modeling realism. PMID:27286482
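The depth-dependent stress screening that underlies the localization can be sketched with the classical Boussinesq point-load solution. This is a hypothetical stand-in for the paper's stress estimation method, used only to illustrate cutting off the simulated region where stress becomes negligible:

```python
import numpy as np

def axial_stress(force, depth):
    """Boussinesq solution for a point load on an elastic half-space,
    evaluated on the load axis: sigma_z = 3 F / (2 pi z^2)."""
    return 3.0 * force / (2.0 * np.pi * depth ** 2)

def local_range(force, threshold, z_max=1.0, n=10000):
    """Depth beyond which the axial stress falls below `threshold`; nodes
    deeper than this could be excluded from the deformation computation."""
    z = np.linspace(1e-6, z_max, n)
    below = axial_stress(force, z) < threshold
    return z[np.argmax(below)] if below.any() else z_max
```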
Enzymatic Kinetic Isotope Effects from Path-Integral Free Energy Perturbation Theory.
Gao, J
2016-01-01
Path-integral free energy perturbation (PI-FEP) theory is presented to directly determine the ratio of quantum mechanical partition functions of different isotopologs in a single simulation. Furthermore, a double-averaging strategy is used to carry out the practical simulation, separating the quantum mechanical path integral exactly into two separate calculations: one corresponding to a classical molecular dynamics simulation of the centroid coordinates, and another involving free-particle path-integral sampling over the classical centroid positions. An integrated centroid path-integral free energy perturbation and umbrella sampling (PI-FEP/UM, or simply PI-FEP) method along with bisection sampling is summarized, which provides an accurate and rapidly convergent method for computing kinetic isotope effects for chemical reactions in solution and in enzymes. The PI-FEP method is illustrated by a number of applications that highlight its computational precision and accuracy, the rule of the geometric mean in kinetic isotope effects, enhanced nuclear quantum effects in enzyme catalysis, and the influence of protein dynamics on the temperature dependence of kinetic isotope effects. © 2016 Elsevier Inc. All rights reserved.
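Schematically, the central quantities are the transition-state-theory form of the kinetic isotope effect and the free energy perturbation identity between isotopologs. This is a compressed textbook form; the paper's double averaging over free-particle path sampling is omitted here:

```latex
% TST form of the KIE for light (L) and heavy (H) isotopologs
\mathrm{KIE} \;=\; \frac{k_{\mathrm{L}}}{k_{\mathrm{H}}}
  \;=\; \exp\!\left[-\beta\left(\Delta A^{\ddagger}_{\mathrm{L}}
                               -\Delta A^{\ddagger}_{\mathrm{H}}\right)\right],
\qquad \beta = \frac{1}{k_B T}

% The isotopic free-energy difference is obtained in one simulation
% by free energy perturbation over the quantized potential U:
e^{-\beta\,\Delta A_{\mathrm{L}\to\mathrm{H}}}
  \;=\; \left\langle e^{-\beta\,(U_{\mathrm{H}} - U_{\mathrm{L}})}\right\rangle_{\mathrm{L}}
```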
Long-ranged contributions to solvation free energies from theory and short-ranged models
Remsing, Richard C.; Liu, Shule; Weeks, John D.
2016-01-01
Long-standing problems associated with long-ranged electrostatic interactions have plagued theory and simulation alike. Traditional lattice sum (Ewald-like) treatments of Coulomb interactions add significant overhead to computer simulations and can produce artifacts from spurious interactions between simulation cell images. These subtle issues become particularly apparent when estimating thermodynamic quantities, such as free energies of solvation in charged and polar systems, to which long-ranged Coulomb interactions typically make a large contribution. In this paper, we develop a framework for determining very accurate solvation free energies of systems with long-ranged interactions from models that interact with purely short-ranged potentials. Our approach is generally applicable and can be combined with existing computational and theoretical techniques for estimating solvation thermodynamics. We demonstrate the utility of our approach by examining the hydration thermodynamics of hydrophobic and ionic solutes and the solvation of a large, highly charged colloid that exhibits overcharging, a complex nonlinear electrostatic phenomenon whereby counterions from the solvent effectively overscreen and locally invert the integrated charge of the solvated object. PMID:26929375
Applications of complex systems theory in nursing education, research, and practice.
Clancy, Thomas R; Effken, Judith A; Pesut, Daniel
2008-01-01
The clinical and administrative processes in today's healthcare environment are becoming increasingly complex. Multiple providers, new technology, competition, and the growing ubiquity of information all contribute to the notion of health care as a complex system. A complex system (CS) is characterized by a highly connected network of entities (e.g., physical objects, people or groups of people) from which higher order behavior emerges. Research in the transdisciplinary field of CS has focused on the use of computational modeling and simulation as a methodology for analyzing CS behavior. The creation of virtual worlds through computer simulation allows researchers to analyze multiple variables simultaneously and begin to understand behaviors that are common regardless of the discipline. The application of CS principles, mediated through computer simulation, informs nursing practice of the benefits and drawbacks of new procedures, protocols and practices before having to actually implement them. The inclusion of new computational tools and their applications in nursing education is also gaining attention. For example, education in CSs and applied computational applications has been endorsed by The Institute of Medicine, the American Organization of Nurse Executives and the American Association of Colleges of Nursing as essential training of nurse leaders. The purpose of this article is to review current research literature regarding CS science within the context of expert practice and implications for the education of nurse leadership roles. The article focuses on 3 broad areas: CS defined, literature review and exemplars from CS research and applications of CS theory in nursing leadership education. The article also highlights the key role nursing informaticists play in integrating emerging computational tools in the analysis of complex nursing systems.
Simple and accurate theory for strong shock waves in a dense hard-sphere fluid.
Montanero, J M; López de Haro, M; Santos, A; Garzó, V
1999-12-01
Following an earlier work by Holian et al. [Phys. Rev. E 47, R24 (1993)] for a dilute gas, we present a theory for strong shock waves in a hard-sphere fluid described by the Enskog equation. The idea is to use the Navier-Stokes hydrodynamic equations but taking the temperature in the direction of shock propagation rather than the actual temperature in the computation of the transport coefficients. In general, for finite densities, this theory agrees much better with Monte Carlo simulations than the Navier-Stokes and (linear) Burnett theories, in contrast to the well-known superiority of the Burnett theory for dilute gases.
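In the dilute-gas limit, the jump conditions that anchor such shock-profile theories reduce to the textbook Rankine-Hugoniot relations for a monatomic ideal gas (γ = 5/3). The finite-density Enskog corrections of the paper are not included in this sketch:

```python
def rankine_hugoniot(mach, gamma=5.0 / 3.0):
    """Density, pressure, and temperature ratios across a normal shock in an
    ideal gas; gamma = 5/3 is the dilute hard-sphere (monatomic) limit."""
    m2 = mach ** 2
    rho_ratio = (gamma + 1) * m2 / ((gamma - 1) * m2 + 2)
    p_ratio = (2 * gamma * m2 - (gamma - 1)) / (gamma + 1)
    return rho_ratio, p_ratio, p_ratio / rho_ratio  # T2/T1 from ideal-gas law
```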
NASA Astrophysics Data System (ADS)
Escobar Gómez, J. D.; Torres-Verdín, C.
2018-03-01
Single-well pressure-diffusion simulators enable improved quantitative understanding of hydraulic-testing measurements in the presence of arbitrary spatial variations of rock properties. Simulators of this type implement robust numerical algorithms which are often computationally expensive, thereby making the solution of the forward modeling problem onerous and inefficient. We introduce a time-domain perturbation theory for anisotropic permeable media to efficiently and accurately approximate the transient pressure response of spatially complex aquifers. Although theoretically valid for any spatially dependent rock/fluid property, our single-phase flow study emphasizes arbitrary spatial variations of permeability and anisotropy, which constitute key objectives of hydraulic-testing operations. Contrary to time-honored techniques, the perturbation method invokes pressure-flow deconvolution to compute the background medium's permeability sensitivity function (PSF) with a single numerical simulation run. Subsequently, the first-order term of the perturbed solution is obtained by solving an integral equation that weighs the spatial variations of permeability with the spatial-dependent and time-dependent PSF. Finally, discrete convolution transforms the constant-flow approximation to arbitrary multirate conditions. Multidimensional numerical simulation studies for a wide range of single-well field conditions indicate that perturbed solutions can be computed in less than a few CPU seconds with relative errors in pressure of <5%, corresponding to perturbations in background permeability of up to two orders of magnitude. Our work confirms that the proposed joint perturbation-convolution (JPC) method is an efficient alternative to analytical and numerical solutions for accurate modeling of pressure-diffusion phenomena induced by Neumann or Dirichlet boundary conditions.
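The final convolution step, transforming a constant-flow response into an arbitrary multirate schedule, can be sketched by piecewise-constant superposition (Duhamel's principle). The unit response is supplied by the caller, and the function names are ours, not the JPC implementation:

```python
import numpy as np

def multirate_pressure(t, unit_response, rate_times, rates):
    """Superpose a constant-rate pressure response into a multirate schedule.

    unit_response: callable dp(t) for a unit-rate test starting at t = 0.
    rate_times/rates: piecewise-constant flow schedule; rate `rates[i]`
    begins at time `rate_times[i]`. Each rate *change* contributes a shifted,
    scaled copy of the unit response.
    """
    t = np.asarray(t, float)
    p = np.zeros_like(t)
    prev = 0.0
    for t0, q in zip(rate_times, rates):
        step = q - prev
        active = t > t0
        p[active] += step * unit_response(t[active] - t0)
        prev = q
    return p
```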
Spiking network simulation code for petascale computers.
Kunkel, Susanne; Schmidt, Maximilian; Eppler, Jochen M; Plesser, Hans E; Masumoto, Gen; Igarashi, Jun; Ishii, Shin; Fukai, Tomoki; Morrison, Abigail; Diesmann, Markus; Helias, Moritz
2014-01-01
Brain-scale networks exhibit a breathtaking heterogeneity in the dynamical properties and parameters of their constituents. At cellular resolution, the entities of theory are neurons and synapses and over the past decade researchers have learned to manage the heterogeneity of neurons and synapses with efficient data structures. Already early parallel simulation codes stored synapses in a distributed fashion such that a synapse solely consumes memory on the compute node harboring the target neuron. As petaflop computers with some 100,000 nodes become increasingly available for neuroscience, new challenges arise for neuronal network simulation software: Each neuron contacts on the order of 10,000 other neurons and thus has targets only on a fraction of all compute nodes; furthermore, for any given source neuron, at most a single synapse is typically created on any compute node. From the viewpoint of an individual compute node, the heterogeneity in the synaptic target lists thus collapses along two dimensions: the dimension of the types of synapses and the dimension of the number of synapses of a given type. Here we present a data structure taking advantage of this double collapse using metaprogramming techniques. After introducing the relevant scaling scenario for brain-scale simulations, we quantitatively discuss the performance on two supercomputers. We show that the novel architecture scales to the largest petascale supercomputers available today.
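The double collapse described above can be illustrated with a toy Python analogue of the C++ metaprogramming data structure: because a given source neuron usually has zero or one local target on a compute node, a lone synapse is stored inline and promoted to a list only in the rare multi-target case. Names and storage policy are our simplification, not the actual simulator code:

```python
class SynapseTable:
    """Per-compute-node synapse store exploiting the observation that, for a
    given source neuron, at most a single local synapse typically exists."""

    def __init__(self):
        self._by_source = {}

    def add(self, source, synapse):
        slot = self._by_source.get(source)
        if slot is None:
            self._by_source[source] = synapse          # common case: one entry
        elif isinstance(slot, list):
            slot.append(synapse)                       # already promoted
        else:
            self._by_source[source] = [slot, synapse]  # rare: promote to list

    def targets(self, source):
        slot = self._by_source.get(source)
        if slot is None:
            return []
        return slot if isinstance(slot, list) else [slot]
```

In the real code this specialization is resolved at compile time via template metaprogramming rather than by runtime type checks.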
NASA Astrophysics Data System (ADS)
Cao, Siqin; Zhu, Lizhe; Huang, Xuhui
2018-04-01
The 3D reference interaction site model (3DRISM) is a powerful tool for studying the thermodynamic and structural properties of liquids. However, for hydrophobic solutes, the inhomogeneity of the solvent density around them poses a great challenge to the 3DRISM theory. To address this issue, we previously introduced the hydrophobic-induced density inhomogeneity (HI) theory for purely hydrophobic solutes. To further treat complex hydrophobic solutes containing partial charges, here we propose the D2MSA closure, which incorporates the short-range and long-range interactions with the D2 closure and the mean spherical approximation, respectively. We demonstrate that our new theory can compute solvent distributions around real hydrophobic solutes in water and complex organic solvents that agree well with explicit solvent molecular dynamics simulations.
Comparison of the Melting Temperatures of Classical and Quantum Water Potential Models
NASA Astrophysics Data System (ADS)
Du, Sen; Yoo, Soohaeng; Li, Jinjin
2017-08-01
As theoretical approaches and technical methods have improved over time, the field of computer simulations of water has greatly progressed. Water potential models become much more complex when additional interactions and advanced theories are considered. Macroscopic properties of water predicted by computer simulations using water potential models are expected to be consistent with experimental outcomes. As such, discrepancies between computer simulations and experiments provide a criterion for assessing the performance of various water potential models. Notably, water occurs not only as a liquid but also in solid and vapor phases. Therefore, the melting temperature, which reflects the solid-liquid phase equilibrium, is an effective parameter for judging the performance of different water potential models. As a mini review, our purpose is to introduce some water models developed in recent years and the melting temperatures obtained through simulations with such models. Moreover, some explanations found in the literature are described for the additional evaluation of the water potential models.
Theory for the solvation of nonpolar solutes in water
NASA Astrophysics Data System (ADS)
Urbic, T.; Vlachy, V.; Kalyuzhnyi, Yu. V.; Dill, K. A.
2007-11-01
We recently developed an angle-dependent Wertheim integral equation theory (IET) of the Mercedes-Benz (MB) model of pure water [Silverstein et al., J. Am. Chem. Soc. 120, 3166 (1998)]. Our approach treats explicitly the coupled orientational constraints within water molecules. The analytical theory offers the advantage of being less computationally expensive than Monte Carlo simulations by two orders of magnitude. Here we apply the angle-dependent IET to studying the hydrophobic effect, the transfer of a nonpolar solute into MB water. We find that the theory reproduces the Monte Carlo results qualitatively for cold water and quantitatively for hot water.
Design of object-oriented distributed simulation classes
NASA Technical Reports Server (NTRS)
Schoeffler, James D. (Principal Investigator)
1995-01-01
Distributed simulation of aircraft engines as part of a computer aided design package is being developed by NASA Lewis Research Center for the aircraft industry. The project is called NPSS, an acronym for 'Numerical Propulsion Simulation System'. NPSS is a flexible object-oriented simulation of aircraft engines requiring high computing speed. It is desirable to run the simulation on a distributed computer system with multiple processors executing portions of the simulation in parallel. The purpose of this research was to investigate object-oriented structures such that individual objects could be distributed. The set of classes used in the simulation must be designed to facilitate parallel computation. Since the portions of the simulation carried out in parallel are not independent of one another, there is the need for communication among the parallel executing processors which in turn implies need for their synchronization. Communication and synchronization can lead to decreased throughput as parallel processors wait for data or synchronization signals from other processors. As a result of this research, the following have been accomplished. The design and implementation of a set of simulation classes which result in a distributed simulation control program have been completed. The design is based upon MIT 'Actor' model of a concurrent object and uses 'connectors' to structure dynamic connections between simulation components. Connectors may be dynamically created according to the distribution of objects among machines at execution time without any programming changes. Measurements of the basic performance have been carried out with the result that communication overhead of the distributed design is swamped by the computation time of modules unless modules have very short execution times per iteration or time step. An analytical performance model based upon queuing network theory has been designed and implemented. 
Its application to realistic configurations has not been carried out.
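The connector idea can be sketched as a toy actor-style message loop in Python. Names and structure are illustrative assumptions, not the NPSS simulation classes: components react to messages and emit results on named output ports, and connectors (here, the wiring set up by `connect`) can be created at run time without changing component code:

```python
from collections import deque

class Component:
    """Simulation component in an actor style: `step` maps an input message to
    a dict of {output_port: value}, and connectors route each port's value on."""

    def __init__(self, step):
        self._step = step
        self._outputs = {}

    def connect(self, port, target):
        """Wire an output port to a target (another Component or a sink label);
        wiring is dynamic, so distribution can be decided at execution time."""
        self._outputs.setdefault(port, []).append(target)

    def send(self, message, queue):
        for port, value in self._step(message).items():
            for target in self._outputs.get(port, []):
                queue.append((target, value))

def run(queue):
    """Drain the message queue; Component targets react and may emit further
    messages, while non-Component targets collect final results."""
    results = []
    while queue:
        target, value = queue.popleft()
        if isinstance(target, Component):
            target.send(value, queue)
        else:
            results.append((target, value))
    return results
```

In a distributed setting, the queue delivery would cross machine boundaries, which is where the communication and synchronization overheads discussed above arise.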
Cloud Quantum Computing of an Atomic Nucleus
NASA Astrophysics Data System (ADS)
Dumitrescu, E. F.; McCaskey, A. J.; Hagen, G.; Jansen, G. R.; Morris, T. D.; Papenbrock, T.; Pooser, R. C.; Dean, D. J.; Lougovski, P.
2018-05-01
We report a quantum simulation of the deuteron binding energy on quantum processors accessed via cloud servers. We use a Hamiltonian from pionless effective field theory at leading order. We design a low-depth version of the unitary coupled-cluster ansatz, use the variational quantum eigensolver algorithm, and compute the binding energy to within a few percent. Our work is the first step towards scalable nuclear structure computations on a quantum processor via the cloud, and it sheds light on how to map scientific computing applications onto nascent quantum devices.
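The variational step can be sketched classically: for a one-parameter ansatz, the VQE energy landscape of a small Hamiltonian matrix can be scanned and minimized. The matrix elements below are hypothetical placeholders, not the paper's pionless-EFT values, and the scan replaces both the quantum circuit and the classical optimizer:

```python
import numpy as np

# Hypothetical 2x2 Hamiltonian in a tiny oscillator basis (arbitrary units);
# in the paper the elements come from pionless EFT at leading order.
H = np.array([[-0.5, -4.3],
              [-4.3, 12.3]])

def energy(theta):
    """<psi(theta)|H|psi(theta)> for the one-parameter ansatz
    |psi> = cos(theta)|0> + sin(theta)|1>, a classical stand-in for a
    low-depth unitary coupled-cluster circuit."""
    psi = np.array([np.cos(theta), np.sin(theta)])
    return psi @ H @ psi

thetas = np.linspace(-np.pi / 2, np.pi / 2, 2001)
e_min = min(energy(t) for t in thetas)
```

On hardware, `energy(theta)` would be estimated from repeated measurements of the prepared state rather than computed exactly.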
ECON-KG: A Code for Computation of Electrical Conductivity Using Density Functional Theory
2017-10-01
is presented. Details of the implementation and instructions for execution are given, together with an example calculation of the frequency-dependent conductivity. Conductivity has been shown to depend on carbon content, and electrical conductivity models have become a required input for continuum-level simulations. The frequency-dependent electrical conductivity is computed as a weighted sum over k-points: σ(ω) = Σ_k W(k) σ_k(ω), (2) where W(k) is the weight of k-point k.
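The weighted k-point sum in Eq. (2) reduces to a dot product once the per-k-point spectra are tabulated on a common frequency grid. A minimal sketch with toy Drude-like spectra and assumed weights:

```python
import numpy as np

# Hypothetical per-k-point conductivities sigma_k(omega) on a shared
# frequency grid, combined as a weighted sum over k-points:
#   sigma(omega) = sum_k W(k) * sigma_k(omega)
omega = np.linspace(0.0, 10.0, 101)

# Toy Drude-like spectra with different relaxation times per k-point.
sigma_k = np.array([1.0 / (1.0 + (omega * tau) ** 2) for tau in (0.5, 1.0, 2.0)])

# k-point weights (e.g. from symmetry reduction), normalized to 1.
W = np.array([0.5, 0.3, 0.2])
assert abs(W.sum() - 1.0) < 1e-12

sigma = W @ sigma_k  # weighted sum over the k index
print(f"sigma(omega=0) = {sigma[0]:.3f}")
```

The weights here stand in for the symmetry-reduced Brillouin-zone weights a DFT code would supply.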
High-order hydrodynamic algorithms for exascale computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morgan, Nathaniel Ray
Hydrodynamic algorithms are at the core of many laboratory missions ranging from simulating ICF implosions to climate modeling. The hydrodynamic algorithms commonly employed at the laboratory and in industry (1) typically lack the requisite accuracy for complex multi-material vortical flows and (2) are not well suited for exascale computing due to poor data locality and poor FLOP/memory ratios. Exascale computing requires advances in both computer science and numerical algorithms. We propose to research the second requirement and create a new high-order hydrodynamic algorithm that has superior accuracy, excellent data locality, and excellent FLOP/memory ratios. This proposal will impact a broad range of research areas including numerical theory, discrete mathematics, vorticity evolution, gas dynamics, interface instability evolution, turbulent flows, fluid dynamics and shock driven flows. If successful, the proposed research has the potential to radically transform simulation capabilities and help position the laboratory for computing at the exascale.
LHC@Home: a BOINC-based volunteer computing infrastructure for physics studies at CERN
NASA Astrophysics Data System (ADS)
Barranco, Javier; Cai, Yunhai; Cameron, David; Crouch, Matthew; Maria, Riccardo De; Field, Laurence; Giovannozzi, Massimo; Hermes, Pascal; Høimyr, Nils; Kaltchev, Dobrin; Karastathis, Nikos; Luzzi, Cinzia; Maclean, Ewen; McIntosh, Eric; Mereghetti, Alessio; Molson, James; Nosochkov, Yuri; Pieloni, Tatiana; Reid, Ivan D.; Rivkin, Lenny; Segal, Ben; Sjobak, Kyrre; Skands, Peter; Tambasco, Claudia; Veken, Frederik Van der; Zacharov, Igor
2017-12-01
The LHC@Home BOINC project has provided computing capacity for numerical simulations to researchers at CERN since 2004, and has since 2011 been expanded with a wider range of applications. The traditional CERN accelerator physics simulation code SixTrack continues to enjoy volunteer support, and thanks to virtualisation a number of applications from the LHC experiment collaborations and particle theory groups have joined the consolidated LHC@Home BOINC project. This paper addresses the challenges related to traditional and virtualized applications in the BOINC environment, and how volunteer computing has been integrated into the overall computing strategy of the laboratory through the consolidated LHC@Home service. Thanks to the computing power provided by volunteers joining LHC@Home, numerous accelerator beam physics studies have been carried out, yielding an improved understanding of charged particle dynamics in the CERN Large Hadron Collider (LHC) and its future upgrades. The main results are highlighted in this paper.
Using Theory and Simulation to Design Self-Healing Surfaces
2007-11-16
Keywords: blends, microcapsules. Anna C. Balazs, University of Pittsburgh. ...novel computational approach (P5) to simulate the rolling motion of fluid-driven, particle-filled microcapsules along heterogeneous, adhesive substrates...established guidelines for designing particle-filled microcapsules that perform a "repair and go" function and could ultimately be used to restore...
Mkanya, Anele; Pellicane, Giuseppe; Pini, Davide; Caccamo, Carlo
2017-09-13
We report extensive calculations, based on the modified hypernetted chain (MHNC) theory, on the hierarchical reference theory (HRT), and on Monte Carlo simulations, of the thermodynamic, structural and phase coexistence properties of symmetric binary hard-core Yukawa mixtures (HCYM) with attractive interactions at equal species concentration. The results are compared throughout with those available in the literature for the same systems. It turns out that the MHNC predictions for thermodynamic and structural quantities are quite accurate in comparison with the MC data. The HRT is equally accurate for thermodynamics, and slightly less accurate for structure. Liquid-vapor (LV) and liquid-liquid (LL) consolute coexistence conditions, as emerging from simulations, are also reproduced highly satisfactorily by both the MHNC and the HRT for relatively long-ranged potentials. When the potential range is reduced, the MHNC faces problems in determining the LV binodal line; however, the LL consolute line and the critical end point (CEP) temperature and density are still satisfactorily predicted within this theory. The HRT also predicts the CEP position with good accuracy. The possibility of employing liquid state theories of HCYM for the purpose of reliably determining phase equilibria in multicomponent colloidal fluids of current technological interest is discussed.
ERIC Educational Resources Information Center
Clark, William M.; Jackson, Yaminah Z.; Morin, Michael T.; Ferraro, Giacomo P.
2011-01-01
Laboratory experiments and computer models for studying the mass transfer process of removing CO2 from air using water or dilute NaOH solution as absorbent are presented. Models tie experiment to theory and give a visual representation of concentration profiles and also illustrate the two-film theory and the relative importance of various…
NASA Technical Reports Server (NTRS)
Kuo, B. C.; Singh, G.
1974-01-01
The dynamics of the Large Space Telescope (LST) control system were studied in order to arrive at a simplified model for computer simulation without loss of accuracy. The frictional nonlinearity of the Control Moment Gyroscope (CMG) Control Loop was analyzed in a model to obtain data for the following: (1) a continuous describing function for the gimbal friction nonlinearity; (2) a describing function of the CMG nonlinearity using an analytical torque equation; and (3) the discrete describing function and function plots for the CMG frictional nonlinearity. Preliminary computer simulations are shown for the simplified LST system, first without, and then with analytical torque expressions. Transfer functions of the sampled-data LST system are also described. A final computer simulation is presented which uses elements of the simplified sampled-data LST system with analytical CMG frictional torque expressions.
A Review of Enhanced Sampling Approaches for Accelerated Molecular Dynamics
NASA Astrophysics Data System (ADS)
Tiwary, Pratyush; van de Walle, Axel
Molecular dynamics (MD) simulations have become a tool of immense use and popularity for simulating a variety of systems. With the advent of massively parallel computer resources, one now routinely sees applications of MD to systems as large as hundreds of thousands to even several million atoms, which is almost the size of most nanomaterials. However, it is not yet possible to reach laboratory timescales of milliseconds and beyond with MD simulations. Due to the essentially sequential nature of time, parallel computers have been of limited use in solving this so-called timescale problem. Instead, over the years a large range of statistical mechanics based enhanced sampling approaches have been proposed for accelerating molecular dynamics, and accessing timescales that are well beyond the reach of the fastest computers. In this review we provide an overview of these approaches, including the underlying theory, typical applications, and publicly available software resources to implement them.
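One of the simplest enhanced-sampling schemes covered by such reviews is metadynamics: Gaussian bias hills are deposited along a collective variable so the walker escapes deep basins far sooner than plain dynamics would allow. A minimal 1D sketch on a double-well potential (hill height, width, and deposition stride are assumed values):

```python
import numpy as np

rng = np.random.default_rng(0)

# Double-well potential U(x) = (x^2 - 1)^2: minima at x = -1 and x = +1
# separated by a barrier of height 1 (10 kT at the temperature below).
def dU(x):
    return 4.0 * x * (x * x - 1.0)

# Deposited Gaussian hills: height w, width s (assumed values).
centers = []
w, s = 0.05, 0.2

def bias_force(x):
    """Force -dV_bias/dx contributed by all deposited hills."""
    if not centers:
        return 0.0
    c = np.asarray(centers)
    return float(np.sum(w * (x - c) / s**2 * np.exp(-((x - c) ** 2) / (2 * s**2))))

x, dt, kT = -1.0, 1.0e-3, 0.1    # overdamped Langevin dynamics, left well
crossed = False
for step in range(200_000):
    noise = np.sqrt(2.0 * kT * dt) * rng.standard_normal()
    x += (-dU(x) + bias_force(x)) * dt + noise
    if step % 500 == 0:
        centers.append(x)        # deposit a new hill at the current position
    if x > 1.0:                  # reached the right-hand well
        crossed = True
        break

print(f"crossed barrier: {crossed}, hills deposited: {len(centers)}")
```

As the hills fill the starting basin, the effective barrier shrinks and the crossing happens orders of magnitude faster than the unbiased Kramers time at this temperature.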
Experimentally modeling stochastic processes with less memory by the use of a quantum processor
Palsson, Matthew S.; Gu, Mile; Ho, Joseph; Wiseman, Howard M.; Pryde, Geoff J.
2017-01-01
Computer simulation of observable phenomena is an indispensable tool for engineering new technology, understanding the natural world, and studying human society. However, the most interesting systems are often so complex that simulating their future behavior demands storing immense amounts of information regarding how they have behaved in the past. For increasingly complex systems, simulation becomes increasingly difficult and is ultimately constrained by resources such as computer memory. Recent theoretical work shows that quantum theory can reduce this memory requirement beyond ultimate classical limits, as measured by a process’ statistical complexity, C. We experimentally demonstrate this quantum advantage in simulating stochastic processes. Our quantum implementation observes a memory requirement of Cq = 0.05 ± 0.01, far below the ultimate classical limit of C = 1. Scaling up this technique would substantially reduce the memory required in simulations of more complex systems. PMID:28168218
Kinetic theory of coupled oscillators.
Hildebrand, Eric J; Buice, Michael A; Chow, Carson C
2007-02-02
We present an approach for the description of fluctuations that are due to finite system size induced correlations in the Kuramoto model of coupled oscillators. We construct a hierarchy for the moments of the density of oscillators that is analogous to the Bogoliubov-Born-Green-Kirkwood-Yvon hierarchy in the kinetic theory of plasmas and gases. To calculate the lowest order system size effect, we truncate this hierarchy at second order and solve the resulting closed equations for the two-oscillator correlation function around the incoherent state. We use this correlation function to compute the fluctuations of the order parameter, including the effect of transients, and compare this computation with numerical simulations.
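The order parameter whose finite-size fluctuations the hierarchy describes is directly measurable in a brute-force simulation. Below is a minimal mean-field Kuramoto integration (Euler scheme, assumed parameters) in the incoherent regime, where r sits at the O(1/sqrt(N)) finite-size level rather than at zero:

```python
import numpy as np

rng = np.random.default_rng(1)

# Mean-field Kuramoto model, Euler integration (assumed parameters).
# With Lorentzian (Cauchy) frequencies the critical coupling is K_c = 2,
# so K = 0.5 keeps the system in the incoherent state.
N, K, dt, steps = 2000, 0.5, 0.05, 2000
omega = rng.standard_cauchy(N)            # natural frequencies
theta = rng.uniform(0.0, 2.0 * np.pi, N)  # random initial phases

for _ in range(steps):
    z = np.mean(np.exp(1j * theta))       # complex order parameter r e^{i psi}
    # mean-field form of the coupling: (K/N) sum_j sin(theta_j - theta_i)
    theta += dt * (omega + K * np.abs(z) * np.sin(np.angle(z) - theta))

r = np.abs(np.mean(np.exp(1j * theta)))
print(f"r = {r:.3f}; finite-size scale 1/sqrt(N) = {1.0 / np.sqrt(N):.3f}")
```

In the N to infinity limit the incoherent state has r = 0 exactly; the residual r measured here is the finite-system-size fluctuation that the truncated moment hierarchy is built to capture.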
Stochastic simulation of spatially correlated geo-processes
Christakos, G.
1987-01-01
In this study, developments in the theory of stochastic simulation are discussed. The unifying element is the notion of Radon projection in Euclidean spaces. This notion provides a natural way of reconstructing the real process from a corresponding process observable on a reduced dimensionality space, where analysis is theoretically easier and computationally tractable. Within this framework, the concept of space transformation is defined and several of its properties, which are of significant importance within the context of spatially correlated processes, are explored. The turning bands operator is shown to follow from this. This strengthens considerably the theoretical background of the geostatistical method of simulation, and some new results are obtained in both the space and frequency domains. The inverse problem is solved generally and the applicability of the method is extended to anisotropic as well as integrated processes. Some ill-posed problems of the inverse operator are discussed. Effects of the measurement error and impulses at origin are examined. Important features of the simulated process as described by geomechanical laws, the morphology of the deposit, etc., may be incorporated in the analysis. The simulation may become a model-dependent procedure and this, in turn, may provide numerical solutions to spatial-temporal geologic models. Because the spatial simulation may be technically reduced to unidimensional simulations, various techniques of generating one-dimensional realizations are reviewed. To link theory and practice, an example is computed in detail. © 1987 International Association for Mathematical Geology.
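Since the spatial simulation reduces to unidimensional simulations, a standard 1D generator is worth sketching. The following uses circulant embedding of an exponential covariance, one of several ways to produce the one-dimensional Gaussian realizations mentioned above (grid size, spacing, and correlation length are assumed):

```python
import numpy as np

rng = np.random.default_rng(2)

# 1D stationary Gaussian process with covariance exp(-|h|/L), simulated
# exactly (up to floating point) by circulant embedding.
n, dx, L = 4096, 1.0, 10.0
m = 2 * n                                   # embedding size
lags = np.minimum(np.arange(m), m - np.arange(m)) * dx
first_row = np.exp(-lags / L)               # first row of the circulant

eig = np.fft.fft(first_row).real            # eigenvalues of the circulant
eig = np.clip(eig, 0.0, None)               # guard tiny negative round-off
xi = rng.standard_normal(m) + 1j * rng.standard_normal(m)
field = np.fft.fft(np.sqrt(eig / m) * xi).real[:n]

print(f"sample mean {field.mean():.3f}, sample variance {field.var():.3f}")
```

The real part of the transformed complex noise has exactly the target covariance on the grid, so the sample mean and variance should be close to 0 and 1.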
Advanced capabilities for materials modelling with Quantum ESPRESSO
NASA Astrophysics Data System (ADS)
Giannozzi, P.; Andreussi, O.; Brumme, T.; Bunau, O.; Buongiorno Nardelli, M.; Calandra, M.; Car, R.; Cavazzoni, C.; Ceresoli, D.; Cococcioni, M.; Colonna, N.; Carnimeo, I.; Dal Corso, A.; de Gironcoli, S.; Delugas, P.; DiStasio, R. A., Jr.; Ferretti, A.; Floris, A.; Fratesi, G.; Fugallo, G.; Gebauer, R.; Gerstmann, U.; Giustino, F.; Gorni, T.; Jia, J.; Kawamura, M.; Ko, H.-Y.; Kokalj, A.; Küçükbenli, E.; Lazzeri, M.; Marsili, M.; Marzari, N.; Mauri, F.; Nguyen, N. L.; Nguyen, H.-V.; Otero-de-la-Roza, A.; Paulatto, L.; Poncé, S.; Rocca, D.; Sabatini, R.; Santra, B.; Schlipf, M.; Seitsonen, A. P.; Smogunov, A.; Timrov, I.; Thonhauser, T.; Umari, P.; Vast, N.; Wu, X.; Baroni, S.
2017-11-01
Quantum ESPRESSO is an integrated suite of open-source computer codes for quantum simulations of materials using state-of-the-art electronic-structure techniques, based on density-functional theory, density-functional perturbation theory, and many-body perturbation theory, within the plane-wave pseudopotential and projector-augmented-wave approaches. Quantum ESPRESSO owes its popularity to the wide variety of properties and processes it allows one to simulate, to its performance on an increasingly broad array of hardware architectures, and to a community of researchers that rely on its capabilities as a core open-source development platform to implement their ideas. In this paper we describe recent extensions and improvements, covering new methodologies and property calculators, improved parallelization, code modularization, and extended interoperability both within the distribution and with external software.
The dynamics of team cognition: A process-oriented theory of knowledge emergence in teams.
Grand, James A; Braun, Michael T; Kuljanin, Goran; Kozlowski, Steve W J; Chao, Georgia T
2016-10-01
Team cognition has been identified as a critical component of team performance and decision-making. However, theory and research in this domain remain largely static; articulation and examination of the dynamic processes through which collectively held knowledge emerges from the individual to the team level is lacking. To address this gap, we advance and systematically evaluate a process-oriented theory of team knowledge emergence. First, we summarize the core concepts and dynamic mechanisms that underlie team knowledge-building and represent our theory of team knowledge emergence (Step 1). We then translate this narrative theory into a formal computational model that provides an explicit specification of how these core concepts and mechanisms interact to produce emergent team knowledge (Step 2). The computational model is next instantiated into an agent-based simulation to explore how the key generative process mechanisms described in our theory contribute to improved knowledge emergence in teams (Step 3). Results from the simulations demonstrate that agent teams generate collectively shared knowledge more effectively when members are capable of processing information more efficiently and when teams follow communication strategies that promote equal rates of information sharing across members. Lastly, we conduct an empirical experiment with real teams participating in a collective knowledge-building task to verify that promoting these processes in human teams also leads to improved team knowledge emergence (Step 4). Discussion focuses on implications of the theory for examining team cognition processes and dynamics as well as directions for future research. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
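The communication-rate mechanism can be illustrated with a toy agent-based sketch (an illustration, not the authors' model): agents hold private facts, one speaker per round broadcasts a fact it knows, and the speaker weights control how equal the sharing rates are.

```python
import random

random.seed(3)

# Toy knowledge-emergence sketch: each agent starts with private facts;
# each round one speaker, drawn with the given weights, broadcasts one
# fact it knows to the whole team.
def rounds_to_shared_knowledge(speaker_weights, n_agents=4, facts_each=5):
    total = n_agents * facts_each
    known = [{(i, f) for f in range(facts_each)} for i in range(n_agents)]
    for t in range(100_000):
        if all(len(k) == total for k in known):
            return t                          # everyone knows everything
        s = random.choices(range(n_agents), weights=speaker_weights)[0]
        fact = random.choice(sorted(known[s]))
        for k in known:
            k.add(fact)                       # broadcast to all members
    return None

equal = rounds_to_shared_knowledge([1, 1, 1, 1])
skewed = rounds_to_shared_knowledge([10, 1, 1, 1])
print(f"rounds to full shared knowledge: equal={equal}, skewed={skewed}")
```

Averaged over many seeds, the equal-rate schedule reaches full shared knowledge in fewer rounds, mirroring the communication-strategy result above; a single seeded run only illustrates the mechanics.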
Spectral Rate Theory for Two-State Kinetics
NASA Astrophysics Data System (ADS)
Prinz, Jan-Hendrik; Chodera, John D.; Noé, Frank
2014-02-01
Classical rate theories often fail in cases where the observable(s) or order parameter(s) used is a poor reaction coordinate or the observed signal is deteriorated by noise, such that no clear separation between reactants and products is possible. Here, we present a general spectral two-state rate theory for ergodic dynamical systems in thermal equilibrium that explicitly takes into account how the system is observed. The theory allows the systematic estimation errors made by standard rate theories to be understood and quantified. We also elucidate the connection of spectral rate theory with the popular Markov state modeling approach for molecular simulation studies. An optimal rate estimator is formulated that gives robust and unbiased results even for poor reaction coordinates and can be applied to both computer simulations and single-molecule experiments. No definition of a dividing surface is required. Another result of the theory is a model-free definition of the reaction coordinate quality. The reaction coordinate quality can be bounded from below by the directly computable observation quality, thus providing a measure allowing the reaction coordinate quality to be optimized by tuning the experimental setup. Additionally, the respective partial probability distributions can be obtained for the reactant and product states along the observed order parameter, even when these strongly overlap. The effects of both filtering (averaging) and uncorrelated noise are also examined. The approach is demonstrated on numerical examples and experimental single-molecule force-probe data of the p5ab RNA hairpin and the apo-myoglobin protein at low pH, focusing here on the case of two-state kinetics.
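The connection to Markov state models is concrete: at lag time tau, the slowest relaxation rate follows from the second eigenvalue of the transition matrix as k = -ln(lambda_2)/tau. A minimal two-state sketch with assumed transition probabilities:

```python
import numpy as np

# Row-stochastic transition matrix of a two-state Markov model at lag tau.
# a = P(1 -> 2), b = P(2 -> 1); the values are assumed for illustration.
a, b, tau = 0.02, 0.05, 1.0
T = np.array([[1.0 - a, a],
              [b, 1.0 - b]])

# Eigenvalues are 1 (stationarity) and 1 - a - b (the slow process).
lam = np.sort(np.linalg.eigvals(T).real)[::-1]
k = -np.log(lam[1]) / tau        # slowest relaxation rate
print(f"lambda_2 = {lam[1]:.3f}, relaxation rate k = {k:.4f} per unit time")
```

For this 2x2 chain lambda_2 = 1 - a - b exactly, and the relaxation rate is the sum of the forward and backward rate constants; the spectral theory in the paper generalizes this estimator to noisy, poorly resolved observables.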
A non-local mixing-length theory able to compute core overshooting
NASA Astrophysics Data System (ADS)
Gabriel, M.; Belkacem, K.
2018-04-01
Turbulent convection is certainly one of the most important and thorny issues in stellar physics. Our deficient knowledge of this crucial physical process introduces a fairly large uncertainty concerning the internal structure and evolution of stars. A striking example is overshoot at the edge of convective cores. Indeed, nearly all stellar evolutionary codes treat the overshooting zone in a very approximative way that considers both its extent and the profile of the temperature gradient as free parameters. There are only a few sophisticated theories of stellar convection, such as Reynolds stress approaches, but they also require the adjustment of a non-negligible number of free parameters. We present here a theory, based on the plume theory as well as on the mean-field equations, but without relying on the usual Taylor's closure hypothesis. It leads us to a set of eight differential equations plus a few algebraic ones. Our theory is essentially a non-local mixing-length theory. It enables us to compute the temperature gradient in a shrinking convective core and its overshooting zone. The case of an expanding convective core is also discussed, though more briefly. Numerical simulations have quickly improved in recent years, enabling us to foresee that they will probably soon provide a model of convection adapted to the computation of 1D stellar models.
Local rules simulation of the kinetics of virus capsid self-assembly.
Schwartz, R; Shor, P W; Prevelige, P E; Berger, B
1998-12-01
A computer model is described for studying the kinetics of the self-assembly of icosahedral viral capsids. Solution of this problem is crucial to an understanding of the viral life cycle, which currently cannot be adequately addressed through laboratory techniques. The abstract simulation model employed to address this is based on the local rules theory (Proc. Natl. Acad. Sci. USA 91:7732-7736). It is shown that the principle of local rules, generalized with a model of kinetics and other extensions, can be used to simulate complicated problems in self-assembly. This approach allows for a computationally tractable molecular dynamics-like simulation of coat protein interactions while retaining many relevant features of capsid self-assembly. Three simple simulation experiments are presented to illustrate the use of this model. These show the dependence of growth and malformation rates on the energetics of binding interactions, the tolerance of errors in binding positions, and the concentration of subunits in the examples. These experiments demonstrate a tradeoff within the model between growth rate and fidelity of assembly for the three parameters. A detailed discussion of the computational model is also provided.
Scalable nuclear density functional theory with Sky3D
NASA Astrophysics Data System (ADS)
Afibuzzaman, Md; Schuetrumpf, Bastian; Aktulga, Hasan Metin
2018-02-01
In nuclear astrophysics, quantum simulations of large inhomogeneous dense systems as they appear in the crusts of neutron stars present big challenges. The number of particles in a simulation with periodic boundary conditions is strongly limited due to the immense computational cost of the quantum methods. In this paper, we describe techniques for an efficient and scalable parallel implementation of Sky3D, a nuclear density functional theory solver that operates on an equidistant grid. Presented techniques allow Sky3D to achieve good scaling and high performance on a large number of cores, as demonstrated through detailed performance analysis on a Cray XC40 supercomputer.
NASA Astrophysics Data System (ADS)
Langenbach, K.; Heilig, M.; Horsch, M.; Hasse, H.
2018-03-01
A new method for predicting homogeneous bubble nucleation rates of pure compounds from vapor-liquid equilibrium (VLE) data is presented. It combines molecular dynamics simulation on the one side with density gradient theory using an equation of state (EOS) on the other. The new method is applied here to predict bubble nucleation rates in metastable liquid carbon dioxide (CO2). The molecular model of CO2 is taken from previous work of our group. PC-SAFT is used as an EOS. The consistency between the molecular model and the EOS is achieved by adjusting the PC-SAFT parameters to VLE data obtained from the molecular model. The influence parameter of density gradient theory is fitted to the surface tension of the molecular model. Massively parallel molecular dynamics simulations are performed close to the spinodal to compute bubble nucleation rates. From these simulations, the kinetic prefactor of the hybrid nucleation theory is estimated, whereas the nucleation barrier is calculated from density gradient theory. This enables the extrapolation of molecular simulation data to the whole metastable range including technically relevant densities. The results are tested against available experimental data and found to be in good agreement. The new method does not suffer from typical deficiencies of classical nucleation theory concerning the thermodynamic barrier at the spinodal and the bubble size dependence of surface tension, which is typically neglected in classical nucleation theory. In addition, the density in the center of critical bubbles and their surface tension are determined as functions of the bubble radius. The usual linear Tolman correction to the capillarity approximation is found to be invalid.
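For contrast with the hybrid approach, the classical-nucleation-theory capillarity estimate that the paper improves upon is easy to write down: the barrier is dG* = 16*pi*gamma^3 / (3*dp^2) and the rate is J = J0 * exp(-dG*/kT). A sketch with illustrative numbers (not the paper's CO2 values):

```python
import numpy as np

# Capillarity-approximation CNT for bubble nucleation; all numbers are
# illustrative placeholders, not the paper's CO2 results.
gamma = 5.0e-3             # surface tension, N/m (assumed)
dp = 2.0e6                 # pressure difference across the interface, Pa (assumed)
kT = 1.380649e-23 * 280.0  # thermal energy at 280 K, J
J0 = 1.0e35                # kinetic prefactor, 1/(m^3 s) (assumed)

r_crit = 2.0 * gamma / dp                      # critical bubble radius (Laplace)
dG = 16.0 * np.pi * gamma**3 / (3.0 * dp**2)   # nucleation barrier
J = J0 * np.exp(-dG / kT)                      # nucleation rate

print(f"r* = {r_crit:.1e} m, barrier = {dG / kT:.1f} kT, J = {J:.1e} m^-3 s^-1")
```

Note the deficiencies the abstract mentions: this barrier does not vanish at the spinodal, and gamma is treated as radius-independent, which is exactly what the density-gradient treatment corrects.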
NASA Astrophysics Data System (ADS)
Voelz, David; Wijerathna, Erandi; Xiao, Xifeng; Muschinski, Andreas
2017-09-01
The analysis of optical propagation through both deterministic and stochastic refractive-index fields may be substantially simplified if diffraction effects can be neglected. With regard to simplification, it is known that certain geometrical-optics predictions often agree well with field observations, but it is not always clear why this is so. Here, a new investigation of this issue is presented involving wave optics and geometrical (ray) optics computer simulations of a beam of visible light propagating through fully turbulent, homogeneous and isotropic refractive-index fields. We compare the computationally simulated, aperture-averaged angle-of-arrival variances (for aperture diameters ranging from 0.5 to 13 Fresnel lengths) with theoretical predictions based on the Rytov theory.
Numerical simulation of supersonic inlets using a three-dimensional viscous flow analysis
NASA Technical Reports Server (NTRS)
Anderson, B. H.; Towne, C. E.
1980-01-01
A three dimensional fully viscous computer analysis was evaluated to determine its usefulness in the design of supersonic inlets. This procedure takes advantage of physical approximations to limit the high computer time and storage associated with complete Navier-Stokes solutions. Computed results are presented for a Mach 3.0 supersonic inlet with bleed and a Mach 7.4 hypersonic inlet. Good agreement was obtained between theory and data for both inlets. Results of a mesh sensitivity study are also shown.
A Fast Algorithm for Massively Parallel, Long-Term, Simulation of Complex Molecular Dynamics Systems
NASA Technical Reports Server (NTRS)
Jaramillo-Botero, Andres; Goddard, William A., III; Fijany, Amir
1997-01-01
The advances in theory and computing technology over the last decade have led to enormous progress in applying atomistic molecular dynamics (MD) methods to the characterization, prediction, and design of chemical, biological, and material systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elber, Ron
Atomically detailed computer simulations of complex molecular events have attracted the imagination of many researchers in the field by providing comprehensive information on chemical, biological, and physical processes. However, one of the greatest limitations of these simulations is that of time scales. The physical time scales accessible to straightforward simulations are too short to address many interesting and important molecular events. In the last decade significant advances were made in different directions (theory, software, and hardware) that significantly expand the capabilities and accuracies of these techniques. This perspective describes and critically examines some of these advances.
NASA Technical Reports Server (NTRS)
Mikellides, Ioannis G.; Katz, Ira; Hofer, Richard R.; Goebel, Dan M.
2012-01-01
A proof-of-principle effort to demonstrate a technique by which erosion of the acceleration channel in Hall thrusters of the magnetic-layer type can be eliminated has been completed. The first principles of the technique, now known as "magnetic shielding," were derived based on the findings of numerical simulations in 2-D axisymmetric geometry. The simulations, in turn, guided the modification of an existing 6-kW laboratory Hall thruster. This magnetically shielded (MS) thruster was then built and tested. Because neither theory nor experiment alone can validate fully the first principles of the technique, the objective of the 2-yr effort was twofold: (1) to demonstrate in the laboratory that the erosion rates can be reduced by more than an order of magnitude, and (2) to demonstrate that the near-wall plasma properties can be altered according to the theoretical predictions. This paper concludes the demonstration of magnetic shielding by reporting on a wide range of comparisons between results from numerical simulations and laboratory diagnostics. Collectively, we find that the comparisons validate the theory. Near the walls of the MS thruster, theory and experiment agree: (1) the plasma potential has been sustained at values near the discharge voltage, and (2) the electron temperature has been lowered by at least 2.5-3 times compared to the unshielded (US) thruster. Also, based on carbon deposition measurements, the erosion rates at the inner and outer walls of the MS thruster are found to be lower by at least 2300 and 1875 times, respectively. Erosion was so low along these walls that the rates were below the resolution of the profilometer. Using a sputtering yield model with an energy threshold of 25 V, the simulations predict a reduction by a factor of 600 at the MS inner wall. At the outer wall, ion energies are computed to be below 25 V, in which case we set the erosion to zero in the simulations. When a 50-V threshold is used, the computed ion energies are below the threshold at both sides of the channel. Uncertainties, sensitivities, and differences between theory and experiment are also discussed.
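A threshold sputtering-yield model of the kind invoked above sets erosion to zero below a cutoff energy and lets the yield rise above it. The sketch below uses a Bohdansky-like functional form with an invented coefficient; only the 25 V threshold comes from the text:

```python
import math

def sputter_yield(ion_energy_eV, threshold_eV=25.0, y0=1e-3):
    """Threshold sputtering-yield sketch (atoms per incident ion).

    Below the threshold the yield is exactly zero, which is why the
    simulations report zero erosion where ion energies stay under 25 V.
    Functional form and coefficient y0 are illustrative assumptions:
    Y(E) = y0 * (sqrt(E) - sqrt(Eth))^2 for E > Eth, else 0.
    """
    if ion_energy_eV <= threshold_eV:
        return 0.0
    return y0 * (math.sqrt(ion_energy_eV) - math.sqrt(threshold_eV)) ** 2

y_shielded   = sputter_yield(20.0)   # below threshold: no erosion
y_unshielded = sputter_yield(100.0)  # above threshold: finite yield
```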
NASA Technical Reports Server (NTRS)
Willis, Jerry; Willis, Dee Anna; Walsh, Clare; Stephens, Elizabeth; Murphy, Timothy; Price, Jerry; Stevens, William; Jackson, Kevin; Villareal, James A.; Way, Bob
1994-01-01
An important part of NASA's mission involves the secondary application of its technologies in the public and private sectors. One current application under development is LiteraCity, a simulation-based instructional package for adults who do not have functional reading skills. Using fuzzy logic routines and other technologies developed by NASA's Information Systems Directorate, together with hypermedia sound, graphics, and animation technologies, the project attempts to overcome the limited impact of adult literacy assessment and instruction by involving the adult in an interactive simulation of real-life literacy activities. The project uses a recursive instructional development model and authentic instruction theory. This paper describes one component of a project to design, develop, and produce a series of computer-based, multimedia instructional packages. The packages are being developed for use in adult literacy programs, particularly in correctional education centers. They use the concepts of authentic instruction and authentic assessment to guide development. All the packages to be developed are instructional simulations. The first is a simulation of 'finding a friend a job.'
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cao, Siqin; Sheong, Fu Kit
Reference interaction site model (RISM) has recently become a popular approach in the study of thermodynamic and structural properties of the solvent around macromolecules. On the other hand, it was widely suggested that there exists water density depletion around large hydrophobic solutes (>1 nm), and this may pose a great challenge to the RISM theory. In this paper, we develop a new analytical theory, the Reference Interaction Site Model with Hydrophobicity induced density Inhomogeneity (RISM-HI), to compute the solvent radial distribution function (RDF) around a large hydrophobic solute in water as well as in its mixture with other polyatomic organic solvents. To achieve this, we have explicitly considered the density inhomogeneity at the solute-solvent interface using the framework of the Yvon-Born-Green hierarchy, and the RISM theory is used to obtain the solute-solvent pair correlation. In order to efficiently solve the relevant equations while maintaining reasonable accuracy, we have also developed a new closure called the D2 closure. With this new theory, the solvent RDFs around a large hydrophobic particle in water and different water-acetonitrile mixtures could be computed, which agree well with the results of the molecular dynamics simulations. Furthermore, we show that our RISM-HI theory can also efficiently compute the solvation free energy of solutes with a wide range of hydrophobicity in various water-acetonitrile solvent mixtures with reasonable accuracy. We anticipate that our theory could be widely applied to compute the thermodynamic and structural properties for the solvation of hydrophobic solutes.
NASA Technical Reports Server (NTRS)
Dlugach, Janna M.; Mishchenko, Michael I.; Liu, Li; Mackowski, Daniel W.
2011-01-01
Direct computer simulations of electromagnetic scattering by discrete random media have become an active area of research. In this progress review, we summarize and analyze our main results obtained by means of numerically exact computer solutions of the macroscopic Maxwell equations. We consider finite scattering volumes with size parameters in the range, composed of varying numbers of randomly distributed particles with different refractive indices. The main objective of our analysis is to examine whether all backscattering effects predicted by the low-density theory of coherent backscattering (CB) also take place in the case of densely packed media. Based on our extensive numerical data we arrive at the following conclusions: (i) all backscattering effects predicted by the asymptotic theory of CB can also take place in the case of densely packed media; (ii) in the case of very large particle packing density, scattering characteristics of discrete random media can exhibit behavior not predicted by the low-density theories of CB and radiative transfer; (iii) increasing the absorptivity of the constituent particles can either enhance or suppress typical manifestations of CB depending on the particle packing density and the real part of the refractive index. Our numerical data strongly suggest that spectacular backscattering effects identified in laboratory experiments and observed for a class of high-albedo Solar System objects are caused by CB.
Information Processing Capacity of Dynamical Systems
NASA Astrophysics Data System (ADS)
Dambre, Joni; Verstraeten, David; Schrauwen, Benjamin; Massar, Serge
2012-07-01
Many dynamical systems, both natural and artificial, are stimulated by time dependent external signals, somehow processing the information contained therein. We demonstrate how to quantify the different modes in which information can be processed by such systems and combine them to define the computational capacity of a dynamical system. This is bounded by the number of linearly independent state variables of the dynamical system, equaling it if the system obeys the fading memory condition. It can be interpreted as the total number of linearly independent functions of its stimuli the system can compute. Our theory combines concepts from machine learning (reservoir computing), system modeling, stochastic processes, and functional analysis. We illustrate our theory by numerical simulations for the logistic map, a recurrent neural network, and a two-dimensional reaction diffusion system, uncovering universal trade-offs between the non-linearity of the computation and the system's short-term memory.
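The capacity measure described above can be illustrated with a small echo state network: a linear readout is trained to reconstruct delayed copies of the input, and the squared correlations are summed over delays. This sketch covers only the linear memory part of the capacity (the full theory sums over a complete basis of nonlinear functions as well); the reservoir parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 20, 2000                       # reservoir size, time steps
u = rng.uniform(-1, 1, T)             # i.i.d. input signal
W = rng.normal(0, 1, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius 0.9 -> fading memory
w_in = rng.uniform(-0.5, 0.5, N)

x = np.zeros(N)
X = np.zeros((T, N))
for t in range(T):                    # drive the reservoir, record its states
    x = np.tanh(W @ x + w_in * u[t])
    X[t] = x

washout = 100                         # discard the initial transient
capacity = 0.0
for delay in range(1, N + 1):         # target function: reproduce u[t - delay]
    y = u[washout - delay : T - delay]
    Z = X[washout:]
    w = np.linalg.lstsq(Z, y, rcond=None)[0]    # train linear readout
    capacity += np.corrcoef(Z @ w, y)[0, 1] ** 2  # squared correlation per delay

# capacity is bounded above by N, the number of state variables
```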
Geoid Recovery using Geophysical Inverse Theory Applied to Satellite to Satellite Tracking Data
NASA Technical Reports Server (NTRS)
Gaposchkin, E. M.; Frey, H. (Technical Monitor)
2000-01-01
This report describes a new method for determination of the geopotential. The analysis is aimed at the GRACE mission. This Satellite-to-Satellite Tracking (SST) mission is viewed as a mapping mission; the result will be maps of the geoid. The elements of potential theory, celestial mechanics, and Geophysical Inverse Theory are integrated into a computation architecture, and the results of several simulations are presented. Centimeter-accuracy geoids with 50 to 100 km resolution can be recovered with a 30- to 60-day mission.
New equation of state models for hydrodynamic applications
NASA Astrophysics Data System (ADS)
Young, David A.; Barbee, Troy W.; Rogers, Forrest J.
1998-07-01
Two new theoretical methods for computing the equation of state of hot, dense matter are discussed. The ab initio phonon theory gives a first-principles calculation of lattice frequencies, which can be used to compare theory and experiment for isothermal and shock compression of solids. The ACTEX dense plasma theory has been improved to allow it to be compared directly with ultrahigh pressure shock data on low-Z materials. The comparisons with experiment are good, suggesting that these models will be useful in generating global EOS tables for hydrodynamic simulations.
Photonic-Doppler-Velocimetry, Paraxial-Scalar Diffraction Theory and Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ambrose, W. P.
2015-07-20
In this report I describe current progress on a paraxial, scalar-field theory suitable for simulating what is measured in Photonic Doppler Velocimetry (PDV) experiments in three dimensions. I have introduced a number of approximations in this work in order to bring the total computation time for one experiment down to around 20 hours. My goals were: to develop an approximate method of calculating the peak frequency in a spectral sideband at an instant of time based on an optical diffraction theory for a moving target, to compare the 'measured' velocity to the 'input' velocity to gain insights into how and to what precision PDV measures the component of the mass velocity along the optical axis, and to investigate the effects of small amounts of roughness on the measured velocity. This report illustrates the progress I have made in describing how to perform such calculations with a full three-dimensional picture including a tilted target, tilted mass velocity (not necessarily in the same direction), and small amounts of surface roughness. With the method established for a calculation at one instant of time, measured velocities can be simulated for a sequence of times, similar to the process of sampling velocities in experiments. Improvements in these methods are certainly possible at hugely increased computational cost. I am hopeful that readers appreciate the insights possible at the current level of approximation.
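The frequency-velocity relation underlying PDV is the first-order Doppler beat f = 2v/λ for motion along the optical axis. A minimal helper (the 1550 nm default is a wavelength commonly used in PDV systems, assumed here rather than taken from the report):

```python
def pdv_beat_frequency(velocity_m_s, wavelength_m=1550e-9):
    """First-order PDV sideband (beat) frequency in Hz for a target
    moving along the optical axis: f = 2 * v / lambda.

    Homodyne configuration and normal incidence assumed; the report's
    full diffraction theory refines this for tilt and roughness.
    """
    return 2.0 * velocity_m_s / wavelength_m

f_beat = pdv_beat_frequency(1000.0)   # 1 km/s target -> beat near 1.3 GHz
```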
Hydrodynamic theory of diffusion in two-temperature multicomponent plasmas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramshaw, J.D.; Chang, C.H.
Detailed numerical simulations of multicomponent plasmas require tractable expressions for species diffusion fluxes, which must be consistent with the given plasma current density J_q to preserve local charge neutrality. The common situation in which J_q = 0 is referred to as ambipolar diffusion. The use of formal kinetic theory in this context leads to results of formidable complexity. We derive simple tractable approximations for the diffusion fluxes in two-temperature multicomponent plasmas by means of a generalization of the hydrodynamical approach used by Maxwell, Stefan, Furry, and Williams. The resulting diffusion fluxes obey generalized Stefan-Maxwell equations that contain driving forces corresponding to ordinary, forced, pressure, and thermal diffusion. The ordinary diffusion fluxes are driven by gradients in pressure fractions rather than mole fractions. Simplifications due to the small electron mass are systematically exploited and lead to a general expression for the ambipolar electric field in the limit of infinite electrical conductivity. We present a self-consistent effective binary diffusion approximation for the diffusion fluxes. This approximation is well suited to numerical implementation and is currently in use in our LAVA computer code for simulating multicomponent thermal plasmas. Applications to date include a successful simulation of demixing effects in an argon-helium plasma jet, for which selected computational results are presented. Generalizations of the diffusion theory to finite electrical conductivity and nonzero magnetic field are currently in progress.
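A self-consistent effective binary diffusion approximation can be sketched numerically: each species gets a Fickian flux with its own effective diffusivity, and a weighted correction forces the fluxes to sum to zero, as mass conservation requires. The closure and all coefficients below are illustrative assumptions, not the paper's two-temperature derivation:

```python
import numpy as np

def effective_binary_fluxes(rho, D, grad_x, mass_frac):
    """Self-consistent effective binary diffusion sketch.

    rho:       mixture density
    D:         per-species effective binary diffusivities
    grad_x:    per-species driving-force gradients
    mass_frac: per-species mass fractions (sum to 1)

    A mass-fraction-weighted correction is subtracted so that
    sum_i J_i = 0 exactly (consistency with mass conservation).
    """
    J = -rho * D * grad_x          # uncorrected Fickian fluxes
    J = J - mass_frac * J.sum()    # enforce zero net diffusive mass flux
    return J

J = effective_binary_fluxes(
    rho=1.0,
    D=np.array([1e-4, 2e-4, 5e-5]),
    grad_x=np.array([0.3, -0.2, -0.1]),
    mass_frac=np.array([0.5, 0.3, 0.2]),
)
```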
NASA Astrophysics Data System (ADS)
Luckhurst, G. R.; Saielli, G.
2000-03-01
Molecular field theory predicts the induction of a smectic A phase by the application of a field, either magnetic or electric, to a nematic phase. This intriguing behavior results from an enhancement of the orientational order which is coupled to the translational order and so shifts the smectic A-nematic transition. To test this prediction we have investigated a system of Gay-Berne mesogenic molecules subject to an applied field of second rank using isothermal-isobaric Monte Carlo simulations. The results of our calculations are compared with the Kventsel-Luckhurst-Zewdie molecular field theory of smectogens, modified to include the effect of an external field. We have also used the simulations to explore the possibility of inducing more ordered smectic phases with stronger fields.
Haskins, Justin B; Bauschlicher, Charles W; Lawson, John W
2015-11-19
Density functional theory (DFT), density functional theory molecular dynamics (DFT-MD), and classical molecular dynamics using polarizable force fields (PFF-MD) are employed to evaluate the influence of Li(+) on the structure, transport, and electrochemical stability of three potential ionic liquid electrolytes: N-methyl-N-butylpyrrolidinium bis(trifluoromethanesulfonyl)imide ([pyr14][TFSI]), N-methyl-N-propylpyrrolidinium bis(fluorosulfonyl)imide ([pyr13][FSI]), and 1-ethyl-3-methylimidazolium tetrafluoroborate ([EMIM][BF4]). We characterize the Li(+) solvation shell through DFT computations of [Li(Anion)n]((n-1)-) clusters, DFT-MD simulations of isolated Li(+) in small ionic liquid systems, and PFF-MD simulations with high Li-doping levels in large ionic liquid systems. At low levels of Li-salt doping, highly stable solvation shells having two to three anions are seen in both [pyr14][TFSI] and [pyr13][FSI], whereas solvation shells with four anions dominate in [EMIM][BF4]. At higher levels of doping, we find the formation of complex Li-network structures that increase the frequency of four anion-coordinated solvation shells. A comparison of computational and experimental Raman spectra for a wide range of [Li(Anion)n]((n-1)-) clusters shows that our proposed structures are consistent with experiment. We then compute the ion diffusion coefficients and find measures from small-cell DFT-MD simulations to be the correct order of magnitude, but influenced by small system size and short simulation length. Correcting for these errors with complementary PFF-MD simulations, we find DFT-MD measures to be in close agreement with experiment. Finally, we compute electrochemical windows from DFT computations on isolated ions, interacting cation/anion pairs, and liquid-phase systems with Li-doping.
For the molecular-level computations, we generally find the difference between ionization energy and electron affinity from isolated ions and interacting cation/anion pairs to provide upper and lower bounds, respectively, to experiment. In the liquid phase, we find the difference between the lowest unoccupied and highest occupied electronic levels in pure and hybrid functionals to provide lower and upper bounds, respectively, to experiment. Li-doping in the liquid-phase systems results in electrochemical windows little changed from the neat systems.
Electrostatics of proteins in dielectric solvent continua. I. Newton's third law marries qE forces
NASA Astrophysics Data System (ADS)
Stork, Martina; Tavan, Paul
2007-04-01
The authors reformulate and revise an electrostatic theory treating proteins surrounded by dielectric solvent continua [B. Egwolf and P. Tavan, J. Chem. Phys. 118, 2039 (2003)] to make the resulting reaction field (RF) forces compatible with Newton's third law. Such a compatibility is required for their use in molecular dynamics (MD) simulations, in which the proteins are modeled by all-atom molecular mechanics force fields. According to the original theory the RF forces, which are due to the electric field generated by the solvent polarization and act on the partial charges of a protein, i.e., the so-called qE forces, can be quite accurately computed from Gaussian RF dipoles localized at the protein atoms. Using a slightly different approximation scheme also the RF energies of given protein configurations are obtained. However, because the qE forces do not account for the dielectric boundary pressure exerted by the solvent continuum on the protein, they do not obey the principle that actio equals reactio as required by Newton's third law. Therefore, their use in MD simulations is severely hampered. An analysis of the original theory has led the authors now to a reformulation removing the main difficulties. By considering the RF energy, which represents the dominant electrostatic contribution to the free energy of solvation for a given protein configuration, they show that its negative configurational gradient yields mean RF forces obeying the reactio principle. Because the evaluation of these mean forces is computationally much more demanding than that of the qE forces, they derive a suggestion how the qE forces can be modified to obey Newton's third law. Various properties of the thus established theory, particularly issues of accuracy and of computational efficiency, are discussed. A sample application to a MD simulation of a peptide in solution is described in the following paper [M. Stork and P. Tavan, J. Chem. Phys. 126, 165106 (2007)].
Phase transformations at interfaces: Observations from atomistic modeling
Frolov, T.; Asta, M.; Mishin, Y.
2016-10-01
Here, we review the recent progress in theoretical understanding and atomistic computer simulations of phase transformations in materials interfaces, focusing on grain boundaries (GBs) in metallic systems. Recently developed simulation approaches enable the search and structural characterization of GB phases in single-component metals and binary alloys, calculation of thermodynamic properties of individual GB phases, and modeling of the effect of the GB phase transformations on GB kinetics. Atomistic simulations demonstrate that the GB transformations can be induced by varying the temperature, loading the GB with point defects, or varying the amount of solute segregation. The atomic-level understanding obtained from such simulations can provide input for further development of thermodynamic theories and continuum models of interface phase transformations while simultaneously serving as a testing ground for validation of theories and models. They can also help interpret and guide experimental work in this field.
Weems, Scott A; Reggia, James A
2006-09-01
The Wernicke-Lichtheim-Geschwind (WLG) theory of the neurobiological basis of language is of great historical importance, and it continues to exert a substantial influence on most contemporary theories of language in spite of its widely recognized limitations. Here, we suggest that neurobiologically grounded computational models based on the WLG theory can provide a deeper understanding of which of its features are plausible and where the theory fails. As a first step in this direction, we created a model of the interconnected left and right neocortical areas that are most relevant to the WLG theory, and used it to study visual-confrontation naming, auditory repetition, and auditory comprehension performance. No specific functionality is assigned a priori to model cortical regions, other than that implicitly present due to their locations in the cortical network and a higher learning rate in left hemisphere regions. Following learning, the model successfully simulates confrontation naming and word repetition, and acquires a unique internal representation in parietal regions for each named object. Simulated lesions to the language-dominant cortical regions produce patterns of single word processing impairment reminiscent of those postulated historically in the classic aphasia syndromes. These results indicate that WLG theory, instantiated as a simple interconnected network of model neocortical regions familiar to any neuropsychologist/neurologist, captures several fundamental "low-level" aspects of neurobiological word processing and their impairment in aphasia.
Brønsted acidity of protic ionic liquids: a modern ab initio valence bond theory perspective.
Patil, Amol Baliram; Mahadeo Bhanage, Bhalchandra
2016-09-21
Room temperature ionic liquids (ILs), especially protic ionic liquids (PILs), are used in many areas of the chemical sciences. Ionicity, the extent of proton transfer, is a key parameter which determines many physicochemical properties and in turn the suitability of PILs for various applications. The spectrum of computational chemistry techniques applied to investigate ionic liquids includes classical molecular dynamics, Monte Carlo simulations, ab initio molecular dynamics, density functional theory (DFT), CCSD(T), etc. At the other end of the spectrum is another computational approach: modern ab initio valence bond theory (VBT). VBT differs from molecular orbital theory based methods in the expression of the molecular wave function. The molecular wave function in the valence bond ansatz is expressed as a linear combination of valence bond structures. These structures include covalent and ionic structures explicitly. Modern ab initio valence bond theory calculations of representative primary and tertiary ammonium protic ionic liquids indicate that modern ab initio valence bond theory can be employed to assess the acidity and ionicity of protic ionic liquids a priori.
Adaptive Core Simulation Employing Discrete Inverse Theory - Part II: Numerical Experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abdel-Khalik, Hany S.; Turinsky, Paul J.
2005-07-15
Use of adaptive simulation is intended to improve the fidelity and robustness of important core attribute predictions such as core power distribution, thermal margins, and core reactivity. Adaptive simulation utilizes a selected set of past and current reactor measurements of reactor observables, i.e., in-core instrumentation readings, to adapt the simulation in a meaningful way. The companion paper, ''Adaptive Core Simulation Employing Discrete Inverse Theory - Part I: Theory,'' describes in detail the theoretical background of the proposed adaptive techniques. This paper, Part II, demonstrates several computational experiments conducted to assess the fidelity and robustness of the proposed techniques. The intent is to check the ability of the adapted core simulator model to predict future core observables that are not included in the adaption or core observables that are recorded at core conditions that differ from those at which adaption is completed. Also, this paper demonstrates successful utilization of an efficient sensitivity analysis approach to calculate the sensitivity information required to perform the adaption for millions of input core parameters. Finally, this paper illustrates a useful application for adaptive simulation - reducing the inconsistencies between two different core simulator code systems, where the multitudes of input data to one code are adjusted to enhance the agreement between both codes for important core attributes, i.e., core reactivity and power distribution. Also demonstrated is the robustness of such an application.
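The adaption step can be illustrated as a regularized linear inverse problem: given a sensitivity matrix S that maps parameter adjustments to changes in computed observables, solve for the adjustment that best explains the measured-minus-computed residuals. This is a generic Tikhonov sketch with synthetic data, not the authors' specific discrete-inverse-theory formulation:

```python
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_par = 30, 8
S = rng.normal(size=(n_obs, n_par))     # sensitivity matrix d(observable)/d(parameter)
p_true = rng.normal(size=n_par)         # "true" parameter adjustment (unknown in practice)
y = S @ p_true + 0.01 * rng.normal(size=n_obs)   # measured-minus-computed residuals

lam = 1e-2                              # Tikhonov regularization strength
# Normal equations of min ||y - S dp||^2 + lam * ||dp||^2
dp = np.linalg.solve(S.T @ S + lam * np.eye(n_par), S.T @ y)

residual_before = np.linalg.norm(y)             # mismatch with no adaption
residual_after = np.linalg.norm(y - S @ dp)     # mismatch after adaption
```

The regularization keeps the adjustment well-posed when the number of input parameters far exceeds the number of observables, the regime the paper's efficient sensitivity analysis targets.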
Gleadall, Andrew; Pan, Jingzhe; Kruft, Marc-Anton
2015-11-01
Atomistic simulations were undertaken to analyse the effect of polymer chain scission on amorphous poly(lactide) during degradation. Many experimental studies have analysed the degradation of mechanical properties, but relatively few computational studies have been conducted. Such studies are valuable for supporting the design of bioresorbable medical devices. Hence in this paper, an Effective Cavity Theory for the degradation of Young's modulus was developed. Atomistic simulations indicated that a volume of reduced-stiffness polymer may exist around chain scissions. In the Effective Cavity Theory, each chain scission is considered to instantiate an effective cavity. Finite Element Analysis simulations were conducted to model the effect of the cavities on Young's modulus. Since polymer crystallinity affects mechanical properties, the effect of increases in crystallinity during degradation on Young's modulus is also considered. To demonstrate the ability of the Effective Cavity Theory, it was fitted to several sets of experimental data for Young's modulus in the literature.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sathasivam, Saratha
A new activation function is examined for its ability to accelerate the performance of doing logic programming in a Hopfield network. This method has a higher capacity and improves neuro-symbolic integration. Computer simulations are carried out to validate the effectiveness of the new activation function. The empirical results obtained support our theory.
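For context, here is the textbook Hopfield baseline against which such an activation function would be compared: Hebbian storage of bipolar patterns and synchronous recall with the classic sign activation. This is the standard model, not the paper's new activation function:

```python
import numpy as np

# Two orthogonal bipolar patterns to store
patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1,  1, 1,  1, -1, -1, -1, -1]])
N = patterns.shape[1]

W = (patterns.T @ patterns) / N   # Hebbian (outer-product) weight matrix
np.fill_diagonal(W, 0)            # no self-connections

def recall(state, steps=10):
    """Synchronous updates with the classic sign activation."""
    s = state.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)
    return s

noisy = patterns[0].copy()
noisy[0] *= -1                    # corrupt one bit
recovered = recall(noisy)         # network settles back to the stored pattern
```

A candidate activation function would replace the `sign` step here; capacity is then judged by how many patterns survive such noisy recall.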
On the Use of Linearized Euler Equations in the Prediction of Jet Noise
NASA Technical Reports Server (NTRS)
Mankbadi, Reda R.; Hixon, R.; Shih, S.-H.; Povinelli, L. A.
1995-01-01
Linearized Euler equations are used to simulate supersonic jet noise generation and propagation. Special attention is given to boundary treatment. The resulting solution is stable and nearly free from boundary reflections without the need for artificial dissipation, filtering, or a sponge layer. The computed solution is in good agreement with theory and observation and is much less CPU-intensive than large-eddy simulations.
Passive motion paradigm: an alternative to optimal control.
Mohan, Vishwanathan; Morasso, Pietro
2011-01-01
In recent years, optimal control theory (OCT) has emerged as the leading approach for investigating neural control of movement and motor cognition for two complementary research lines: behavioral neuroscience and humanoid robotics. In both cases, there are general problems that need to be addressed, such as the "degrees of freedom (DoFs) problem," the common core of production, observation, reasoning, and learning of "actions." OCT, directly derived from engineering design techniques of control systems, quantifies task goals as "cost functions" and uses the sophisticated formal tools of optimal control to obtain desired behavior (and predictions). We propose an alternative, "softer" approach, the passive motion paradigm (PMP), that we believe is closer to the biomechanics and cybernetics of action. The basic idea is that actions (overt as well as covert) are the consequences of an internal simulation process that "animates" the body schema with the attractor dynamics of force fields induced by the goal and task-specific constraints. This internal simulation offers the brain a way to dynamically link motor redundancy with task-oriented constraints "at runtime," hence solving the "DoFs problem" without explicit kinematic inversion and cost function computation. We argue that the function of such computational machinery is not only restricted to shaping motor output during action execution but also to provide the self with information on the feasibility, consequence, understanding and meaning of "potential actions." In this sense, taking into account recent developments in neuroscience (motor imagery, simulation theory of covert actions, mirror neuron system) and in embodied robotics, PMP offers a novel framework for understanding motor cognition that goes beyond the engineering control paradigm provided by OCT.
Therefore, the paper is at the same time a review of the PMP rationale, as a computational theory, and a perspective presentation of how to develop it for designing better cognitive architectures.
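The attractor dynamics central to PMP can be sketched as a virtual spring-damper field that "animates" the state toward the goal, with no cost function and no explicit kinematic inversion. The gains, time step, and integration scheme below are illustrative assumptions:

```python
import numpy as np

def pmp_reach(x0, goal, stiffness=4.0, damping=4.0, dt=0.01, steps=1000):
    """Passive-motion-paradigm sketch: the state is 'pulled' toward the
    goal by a virtual elastic force field with damping.  The trajectory
    emerges from the attractor dynamics rather than from optimization."""
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(steps):
        force = stiffness * (np.asarray(goal) - x) - damping * v
        v += dt * force          # semi-implicit Euler: update velocity...
        x += dt * v              # ...then position, for stability
    return x

goal = np.array([0.5, -0.3])
final = pmp_reach([0.0, 0.0], goal)   # relaxes onto the goal attractor
```

In a full PMP controller the same relaxation would run through the body schema (joint space coupled to end-effector space), so redundancy is resolved "at runtime" by the dynamics itself.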
Electron and ion heating by whistler turbulence: Three-dimensional particle-in-cell simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hughes, R. Scott; Gary, S. Peter; Wang, Joseph
2014-12-17
Three-dimensional particle-in-cell simulations of decaying whistler turbulence are carried out on a collisionless, homogeneous, magnetized, electron-ion plasma model. The simulations use an initial ensemble of relatively long wavelength whistler modes with a broad range of initial propagation directions and an initial electron beta β_e = 0.05. The computations follow the temporal evolution of the fluctuations as they cascade into broadband turbulent spectra at shorter wavelengths. Three simulations correspond to successively larger simulation boxes and successively longer wavelengths of the initial fluctuations. The computations confirm previous results showing electron heating is preferentially parallel to the background magnetic field B_0, and ion heating is preferentially perpendicular to B_0. The new results here are that larger simulation boxes and longer initial whistler wavelengths yield weaker overall dissipation, consistent with linear dispersion theory predictions of decreased damping; stronger ion heating, consistent with a stronger ion Landau resonance; and weaker electron heating.
The Kolmogorov-Obukhov Statistical Theory of Turbulence
NASA Astrophysics Data System (ADS)
Birnir, Björn
2013-08-01
In 1941 Kolmogorov and Obukhov postulated the existence of a statistical theory of turbulence, which allows the computation of statistical quantities that can be simulated and measured in a turbulent system. These are quantities such as the moments, the structure functions, and the probability density functions (PDFs) of the turbulent velocity field. In this paper we outline how to construct this statistical theory from the stochastic Navier-Stokes equation. The additive noise in the stochastic Navier-Stokes equation is generic noise given by the central limit theorem and the large deviation principle. The multiplicative noise consists of jumps multiplying the velocity, modeling jumps in the velocity gradient. We first estimate the structure functions of turbulence and establish the Kolmogorov-Obukhov 1962 scaling hypothesis with the She-Leveque intermittency corrections. Then we compute the invariant measure of turbulence, writing the stochastic Navier-Stokes equation as an infinite-dimensional Ito process, and solving the linear Kolmogorov-Hopf functional differential equation for the invariant measure. Finally we project the invariant measure onto the PDF. The PDFs turn out to be the normalized inverse Gaussian (NIG) distributions of Barndorff-Nielsen, and compare well with PDFs from simulations and experiments.
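The She-Leveque intermittency corrections mentioned in the abstract give structure-function scaling exponents in the standard closed form ζ_p = p/9 + 2(1 − (2/3)^(p/3)); a minimal sketch to evaluate them (this is the well-known She-Leveque formula, not code from the paper itself):

```python
def zeta(p: float) -> float:
    """She-Leveque scaling exponent of the p-th order structure function."""
    return p / 9.0 + 2.0 * (1.0 - (2.0 / 3.0) ** (p / 3.0))

if __name__ == "__main__":
    for p in (2, 3, 4, 6):
        print(f"zeta_{p} = {zeta(p):.4f}")
    # zeta_3 = 1 exactly, consistent with Kolmogorov's 4/5 law,
    # while zeta_2 lies slightly above the K41 value of 2/3.
```

Note that ζ_3 = 3/9 + 2(1 − 2/3) = 1 exactly, which is the anchor point shared with the non-intermittent Kolmogorov 1941 scaling.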
Bello-Rivas, Juan M.; Elber, Ron
2015-01-01
A new theory and an exact computer algorithm for calculating kinetics and thermodynamic properties of a particle system are described. The algorithm avoids trapping in metastable states, which are typical challenges for Molecular Dynamics (MD) simulations on rough energy landscapes. It is based on the division of the full space into Voronoi cells. Prior knowledge or coarse sampling of space points provides the centers of the Voronoi cells. Short time trajectories are computed between the boundaries of the cells that we call milestones and are used to determine fluxes at the milestones. The flux function, an essential component of the new theory, provides a complete description of the statistical mechanics of the system at the resolution of the milestones. We illustrate the accuracy and efficiency of the exact Milestoning approach by comparing numerical results obtained on a model system using exact Milestoning with the results of long trajectories and with a solution of the corresponding Fokker-Planck equation. The theory uses an equation that resembles the approximate Milestoning method that was introduced in 2004 [A. K. Faradjian and R. Elber, J. Chem. Phys. 120(23), 10880-10889 (2004)]. However, the current formulation is exact and is still significantly more efficient than straightforward MD simulations on the system studied. PMID:25747056
Theory for the three-dimensional Mercedes-Benz model of water.
Bizjak, Alan; Urbic, Tomaz; Vlachy, Vojko; Dill, Ken A
2009-11-21
The two-dimensional Mercedes-Benz (MB) model of water has been widely studied, both by Monte Carlo simulations and by integral equation methods. Here, we study the three-dimensional (3D) MB model. We treat water as spheres that interact through Lennard-Jones potentials and through a tetrahedral Gaussian hydrogen bonding function. As the "right answer," we perform isothermal-isobaric Monte Carlo simulations on the 3D MB model for different pressures and temperatures. The purpose of this work is to develop and test Wertheim's Ornstein-Zernike integral equation and thermodynamic perturbation theories. The two analytical approaches are orders of magnitude more efficient than the Monte Carlo simulations. The ultimate goal is to find statistical mechanical theories that can efficiently predict the properties of orientationally complex molecules, such as water. Also, here, the 3D MB model simply serves as a useful workbench for testing such analytical approaches. For hot water, the analytical theories give accurate agreement with the computer simulations. For cold water, the agreement is not as good. Nevertheless, these approaches are qualitatively consistent with energies, volumes, heat capacities, compressibilities, and thermal expansion coefficients versus temperature and pressure. Such analytical approaches offer a promising route to a better understanding of water and also the aqueous solvation.
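The MB pair interaction described above (a Lennard-Jones core plus a Gaussian hydrogen-bonding term) can be sketched as follows. This is an illustrative sketch only: the parameter values are made up, and the angular factors of the hydrogen-bond term (which depend on molecular orientation) are omitted, so this is not the paper's actual parameterization.

```python
import math

# Reduced-unit parameters -- illustrative assumptions, not from the paper.
EPS, SIGMA = 1.0, 1.0             # LJ well depth and size
EPS_HB, R_HB, W = 3.0, 1.2, 0.1   # H-bond strength, ideal length, width

def lennard_jones(r: float) -> float:
    """Standard 12-6 Lennard-Jones pair energy."""
    sr6 = (SIGMA / r) ** 6
    return 4.0 * EPS * (sr6 * sr6 - sr6)

def hb_radial(r: float) -> float:
    """Radial part of the Gaussian hydrogen-bond reward; the
    orientation-dependent tetrahedral factors are omitted here."""
    return -EPS_HB * math.exp(-((r - R_HB) ** 2) / (2.0 * W ** 2))

def pair_energy(r: float) -> float:
    """Total sketched pair energy: LJ core plus H-bond well."""
    return lennard_jones(r) + hb_radial(r)

if __name__ == "__main__":
    for r in (1.0, 1.1, 1.2, 1.4):
        print(f"r = {r:.2f}  E = {pair_energy(r):+.4f}")
```

The deep minimum near r = R_HB, much deeper than the plain LJ well, is what drives the open, hydrogen-bonded structures the MB model is designed to capture.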
Asakura, Nobuhiko; Inui, Toshio
2016-01-01
Two apparently contrasting theories have been proposed to account for the development of children's theory of mind (ToM): theory-theory and simulation theory. We present a Bayesian framework that rationally integrates both theories for false belief reasoning. This framework exploits two internal models for predicting the belief states of others: one of self and one of others. These internal models are responsible for simulation-based and theory-based reasoning, respectively. The framework further takes into account empirical studies of a developmental ToM scale (e.g., Wellman and Liu, 2004): developmental progressions of various mental state understandings leading up to false belief understanding. By representing the internal models and their interactions as a causal Bayesian network, we formalize the model of children's false belief reasoning as probabilistic computations on the Bayesian network. This model probabilistically weighs and combines the two internal models and predicts children's false belief ability as a multiplicative effect of their early-developed abilities to understand the mental concepts of diverse beliefs and knowledge access. Specifically, the model predicts that children's proportion of correct responses on a false belief task can be closely approximated as the product of their proportions correct on the diverse belief and knowledge access tasks. To validate this prediction, we illustrate that our model provides good fits to a variety of ToM scale data for preschool children. We discuss the implications and extensions of our model for a deeper understanding of developmental progressions of children's ToM abilities. PMID:28082941
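The model's headline prediction, that a child's proportion correct on the false belief task is approximately the product of the proportions correct on the diverse belief and knowledge access tasks, is simple enough to state in code. The numbers in the example are made-up illustrations, not data from the study.

```python
def predicted_false_belief(p_db: float, p_ka: float) -> float:
    """Multiplicative prediction: P(FB correct) ~= P(DB correct) * P(KA correct),
    where DB = diverse belief task and KA = knowledge access task."""
    if not (0.0 <= p_db <= 1.0 and 0.0 <= p_ka <= 1.0):
        raise ValueError("proportions must lie in [0, 1]")
    return p_db * p_ka

if __name__ == "__main__":
    # Hypothetical preschooler: 90% correct on DB, 70% on KA.
    print(predicted_false_belief(0.9, 0.7))  # prediction for FB task
```

The multiplicative form captures the claim that false belief understanding requires both earlier-developing abilities, so performance is bottlenecked by their joint success.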
Next Generation Extended Lagrangian Quantum-based Molecular Dynamics
NASA Astrophysics Data System (ADS)
Negre, Christian
2017-06-01
A new framework for extended Lagrangian first-principles molecular dynamics simulations is presented, which overcomes shortcomings of regular, direct Born-Oppenheimer molecular dynamics while maintaining important advantages of the unified extended Lagrangian formulation of density functional theory pioneered by Car and Parrinello three decades ago. The new framework allows, for the first time, energy-conserving, linear-scaling Born-Oppenheimer molecular dynamics simulations, which is necessary to study larger and more realistic systems over longer simulation times than previously possible. Expensive self-consistent-field optimizations are avoided, and the normal integration time steps of regular, direct Born-Oppenheimer molecular dynamics can be used. Linear-scaling electronic structure theory is presented using a graph-based approach that is ideal for parallel calculations on hybrid computer platforms. Quantum-based Born-Oppenheimer molecular dynamics is thus becoming a practically feasible approach for simulations of more than 100,000 atoms, representing a competitive alternative to classical polarizable force field methods. In collaboration with: Anders Niklasson, Los Alamos National Laboratory.
A System for Natural Language Sentence Generation.
ERIC Educational Resources Information Center
Levison, Michael; Lessard, Gregory
1992-01-01
Describes the natural language computer program, "Vinci." Explains that using an attribute grammar formalism, Vinci can simulate components of several current linguistic theories. Considers the design of the system and its applications in linguistic modelling and second language acquisition research. Notes Vinci's uses in linguistics…
ERIC Educational Resources Information Center
Spain, James D.; Soldan, Theodore
1983-01-01
Describes two computer simulations of the predator-prey interaction in which students explore theories and mathematical equations involved in this biological process. The programs (for Apple II), designed for college level ecology, may be used in lecture/demonstrations or as a basis for laboratory assignments. A list of student objectives is…
The World According to Malthus and Volterra: The Mathematical Theory of the Struggle for Existence.
ERIC Educational Resources Information Center
Bogdanov, Constantine
1992-01-01
Discusses the mathematical model presented by Vito Volterra to describe the dynamics of population density. Discusses the predator-prey relationship, presents a computer-simulated model from marine life involving sharks and mackerels, and discusses ecological chaos. (MDH)
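The Volterra predator-prey dynamics the article discusses (e.g., mackerel as prey, sharks as predators) follow the classical Lotka-Volterra equations dx/dt = ax − bxy and dy/dt = cbxy − dy; a minimal simulation sketch, with illustrative parameter values not taken from the article:

```python
def lotka_volterra(x, y, a=1.0, b=0.1, c=0.5, d=0.5, dt=0.001, steps=20000):
    """Integrate the Lotka-Volterra equations with explicit Euler.

    x: prey density, y: predator density. Returns the final (x, y) state.
    """
    for _ in range(steps):
        dx = a * x - b * x * y          # prey grow, are eaten
        dy = c * b * x * y - d * y      # predators feed, die off
        x += dx * dt
        y += dy * dt
    return x, y

if __name__ == "__main__":
    # Populations cycle around the equilibrium (x* = d/(c*b), y* = a/b)
    # rather than settling to fixed values.
    print(lotka_volterra(10.0, 5.0))
```

With these assumed parameters the equilibrium is at x* = y* = 10, and trajectories started away from it orbit around it, the oscillatory behavior at the heart of Volterra's "struggle for existence."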
Analysis of vibrational-translational energy transfer using the direct simulation Monte Carlo method
NASA Technical Reports Server (NTRS)
Boyd, Iain D.
1991-01-01
A new model is proposed for energy transfer between the vibrational and translational modes for use in the direct simulation Monte Carlo method (DSMC). The model modifies the Landau-Teller theory for a harmonic oscillator and the rate transition is related to an experimental correlation for the vibrational relaxation time. Assessment of the model is made with respect to three different computations: relaxation in a heat bath, a one-dimensional shock wave, and hypersonic flow over a two-dimensional wedge. These studies verify that the model achieves detailed balance, and excellent agreement with experimental data is obtained in the shock wave calculation. The wedge flow computation reveals that the usual phenomenological method for simulating vibrational nonequilibrium in the DSMC technique predicts much higher vibrational temperatures in the wake region.
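The heat-bath relaxation test above follows the Landau-Teller form dE_v/dt = (E_v^eq − E_v)/τ, where the vibrational energy approaches its equilibrium value with relaxation time τ. A minimal sketch with illustrative values (not the paper's DSMC computations):

```python
def relax(e_v: float, e_eq: float, tau: float, dt: float, steps: int) -> float:
    """Integrate the Landau-Teller relaxation equation with explicit Euler:
    dE_v/dt = (E_eq - E_v)/tau."""
    for _ in range(steps):
        e_v += dt * (e_eq - e_v) / tau
    return e_v

if __name__ == "__main__":
    # Starting from zero vibrational energy with tau = 1, the analytic
    # solution is E_v(t) = E_eq * (1 - exp(-t/tau)).
    print(relax(0.0, 1.0, tau=1.0, dt=1e-3, steps=5000))  # t = 5 relaxation times
```

The exponential approach to equilibrium is exactly the "relaxation in a heat bath" behavior the DSMC model is checked against for detailed balance.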
fissioncore: A desktop-computer simulation of a fission-bomb core
NASA Astrophysics Data System (ADS)
Cameron Reed, B.; Rohe, Klaus
2014-10-01
A computer program, fissioncore, has been developed to deterministically simulate the growth of the number of neutrons within an exploding fission-bomb core. The program allows users to explore the dependence of criticality conditions on parameters such as nuclear cross-sections, core radius, number of secondary neutrons liberated per fission, and the distance between nuclei. Simulations clearly illustrate the existence of a critical radius given a particular set of parameter values, as well as how the exponential growth of the neutron population (the condition that characterizes criticality) depends on these parameters. No understanding of neutron diffusion theory is necessary to appreciate the logic of the program or the results. The code is freely available in FORTRAN, C, and Java and is configured so that modifications to accommodate more refined physical conditions are possible.
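The criticality logic that fissioncore lets users explore can be caricatured generation by generation: each fission releases ν secondary neutrons, a fraction p of neutrons induce another fission before escaping the core, and the population multiplies by k = ν·p per generation, growing exponentially when k > 1. This toy sketch is an assumption-laden illustration, not the fissioncore program itself:

```python
def neutron_population(n0: float, nu: float, p_fission: float,
                       generations: int) -> float:
    """Toy generation-by-generation neutron count.

    nu: secondary neutrons per fission; p_fission: fraction of neutrons
    that cause another fission (increases with core radius and
    cross-section). Effective multiplication factor k = nu * p_fission.
    """
    k = nu * p_fission
    n = n0
    for _ in range(generations):
        n *= k
    return n

if __name__ == "__main__":
    # Supercritical: k = 2.5 * 0.5 = 1.25 -> exponential growth.
    print(neutron_population(100, 2.5, 0.5, 20))
    # Subcritical: k = 2.5 * 0.3 = 0.75 -> the chain dies out.
    print(neutron_population(100, 2.5, 0.3, 20))
```

The critical radius in the full simulation corresponds to the geometry at which the escape-adjusted k crosses 1, the same threshold behavior this sketch exhibits.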
An immersed boundary method for modeling dirty geometry data
NASA Astrophysics Data System (ADS)
Onishi, Keiji; Tsubokura, Makoto
2017-11-01
We present a robust, fast, and low-preparation-cost immersed boundary method (IBM) for simulating incompressible high-Reynolds-number flow around highly complex geometries. The method is achieved by dispersing the momentum via an axial linear projection together with an approximate-domain assumption that satisfies mass conservation around wall-including cells. The methodology has been verified against analytical theory and wind tunnel experiment data. Next, we simulate flow around a rotating object and demonstrate the applicability of the methodology to moving-geometry problems. This methodology shows promise as a route to quick solutions on next-generation large-scale supercomputers. This research was supported by MEXT as ``Priority Issue on Post-K computer'' (Development of innovative design and production processes) and used computational resources of the K computer provided by the RIKEN Advanced Institute for Computational Science.
Freed, Karl F
2014-10-14
A general theory of the long time, low temperature dynamics of glass-forming fluids remains elusive despite the almost 20 years since the famous pronouncement by the Nobel Laureate P. W. Anderson, "The deepest and most interesting unsolved problem in solid state theory is probably the theory of the nature of glass and the glass transition" [Science 267, 1615 (1995)]. While recent work indicates that Adam-Gibbs theory (AGT) provides a framework for computing the structural relaxation time of supercooled fluids and for analyzing the properties of the cooperatively rearranging dynamical strings observed in low temperature molecular dynamics simulations, the heuristic nature of AGT has impeded general acceptance due to the lack of a first principles derivation [G. Adam and J. H. Gibbs, J. Chem. Phys. 43, 139 (1965)]. This deficiency is rectified here by a statistical mechanical derivation of AGT that uses transition state theory and the assumption that the transition state is composed of elementary excitations of a string-like form. The strings are assumed to form in equilibrium with the mobile particles in the fluid. Hence, transition state theory requires the strings to be in mutual equilibrium and thus to have the size distribution of a self-assembling system, in accord with the simulations and analyses of Douglas and co-workers. The average relaxation rate is computed as a grand canonical ensemble average over all string sizes, and use of the previously determined relation between configurational entropy and the average cluster size in several model equilibrium self-associating systems produces the AGT expression in a manner enabling further extensions and more fundamental tests of the assumptions.
Density functional theory in the solid state
Hasnip, Philip J.; Refson, Keith; Probert, Matt I. J.; Yates, Jonathan R.; Clark, Stewart J.; Pickard, Chris J.
2014-01-01
Density functional theory (DFT) has been used in many fields of the physical sciences, but none so successfully as in the solid state. From its origins in condensed matter physics, it has expanded into materials science, high-pressure physics and mineralogy, solid-state chemistry and more, powering entire computational subdisciplines. Modern DFT simulation codes can calculate a vast range of structural, chemical, optical, spectroscopic, elastic, vibrational and thermodynamic phenomena. The ability to predict structure–property relationships has revolutionized experimental fields, such as vibrational and solid-state NMR spectroscopy, where it is the primary method to analyse and interpret experimental spectra. In semiconductor physics, great progress has been made in the electronic structure of bulk and defect states despite the severe challenges presented by the description of excited states. Studies are no longer restricted to known crystallographic structures. DFT is increasingly used as an exploratory tool for materials discovery and computational experiments, culminating in ex nihilo crystal structure prediction, which addresses the long-standing difficult problem of how to predict crystal structure polymorphs from nothing but a specified chemical composition. We present an overview of the capabilities of solid-state DFT simulations in all of these topics, illustrated with recent examples using the CASTEP computer program. PMID:24516184
Adaptive-Grid Methods for Phase Field Models of Microstructure Development
NASA Technical Reports Server (NTRS)
Provatas, Nikolas; Goldenfeld, Nigel; Dantzig, Jonathan A.
1999-01-01
In this work the authors show how the phase field model can be solved in a computationally efficient manner that opens a new large-scale simulational window on solidification physics. Our method uses a finite element, adaptive-grid formulation, and exploits the fact that the phase and temperature fields vary significantly only near the interface. We illustrate how our method allows efficient simulation of phase-field models in very large systems, and verify the predictions of solvability theory at intermediate undercooling. We then present new results at low undercoolings that suggest that solvability theory may not give the correct tip speed in that regime. We model solidification using the phase-field model used by Karma and Rappel.
Simulating the formation of cosmic structure.
Frenk, C S
2002-06-15
A timely combination of new theoretical ideas and observational discoveries has brought about significant advances in our understanding of cosmic evolution. Computer simulations have played a key role in these developments by providing the means to interpret astronomical data in the context of physical and cosmological theory. In the current paradigm, our Universe has a flat geometry, is undergoing accelerated expansion and is gravitationally dominated by elementary particles that make up cold dark matter. Within this framework, it is possible to simulate in a computer the emergence of galaxies and other structures from small quantum fluctuations imprinted during an epoch of inflationary expansion shortly after the Big Bang. The simulations must take into account the evolution of the dark matter as well as the gaseous processes involved in the formation of stars and other visible components. Although many unresolved questions remain, a coherent picture for the formation of cosmic structure is now beginning to emerge.
High temperature phonon dispersion in graphene using classical molecular dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anees, P., E-mail: anees@igcar.gov.in; Panigrahi, B. K.; Valsakumar, M. C., E-mail: anees@igcar.gov.in
2014-04-24
Phonon dispersion and phonon density of states of graphene are calculated using classical molecular dynamics simulations. In this method, the dynamical matrix is constructed based on linear response theory by computing the displacements of atoms during the simulations. The computed phonon dispersions show excellent agreement with experiments. The simulations are done in both NVT and NPT ensembles at 300 K, and it is found that the LO/TO modes harden at the Γ point. The NPT ensemble simulations capture the anharmonicity of the crystal accurately, and the hardening of the LO/TO modes is more pronounced. We also find that at 300 K the C-C bond length falls below the equilibrium value and the ZA bending-mode frequency becomes imaginary close to Γ along the K-Γ direction, which indicates instability of the flat 2D graphene sheets.
NASA Technical Reports Server (NTRS)
Gotsis, Pascal K.; Chamis, Christos C.
1992-01-01
The nonlinear behavior of a high-temperature metal-matrix composite (HT-MMC) was simulated by using the metal matrix composite analyzer (METCAN) computer code. The simulation started with the fabrication process, proceeded to thermomechanical cyclic loading, and ended with the application of a monotonic load. Classical laminate theory and composite micromechanics and macromechanics are used in METCAN, along with a multifactor interaction model for the constituents behavior. The simulation of the stress-strain behavior from the macromechanical and the micromechanical points of view, as well as the initiation and final failure of the constituents and the plies in the composite, were examined in detail. It was shown that, when the fibers and the matrix were perfectly bonded, the fracture started in the matrix and then propagated with increasing load to the fibers. After the fibers fractured, the composite lost its capacity to carry additional load and fractured.
NASA Astrophysics Data System (ADS)
Pazmino, John
2007-02-01
Many concepts of chaotic action in astrodynamics can be appreciated through simulations with home computers and software, and many astrodynamical cases are illustrated. Although chaos theory is now applied to spaceflight trajectories, this presentation employs only inert bodies with no onboard impulse, e.g., from rockets or outgassing. Other nongravitational effects, such as atmospheric drag, solar pressure, and radiation, are also ignored. The ability to simulate gravity behavior, even if not completely rigorously, on small mass-market computers allows a fuller understanding of the new approach to astrodynamics by home astronomers, scientists outside orbital mechanics, and students in middle and high school. The simulations can also help a lay audience visualize gravity behavior during press conferences, briefings, and public lectures. No review, evaluation, or critique of the programs shown in this presentation is intended. The results from these simulations are not valid for - and must not be used for - making Earth-collision predictions.
NASA Technical Reports Server (NTRS)
Gotsis, Pascal K.
1991-01-01
The nonlinear behavior of a high-temperature metal-matrix composite (HT-MMC) was simulated by using the metal matrix composite analyzer (METCAN) computer code. The simulation started with the fabrication process, proceeded to thermomechanical cyclic loading, and ended with the application of a monotonic load. Classical laminate theory and composite micromechanics and macromechanics are used in METCAN, along with a multifactor interaction model for the constituents behavior. The simulation of the stress-strain behavior from the macromechanical and the micromechanical points of view, as well as the initiation and final failure of the constituents and the plies in the composite, were examined in detail. It was shown that, when the fibers and the matrix were perfectly bonded, the fracture started in the matrix and then propagated with increasing load to the fibers. After the fibers fractured, the composite lost its capacity to carry additional load and fractured.
Monte Carlo Techniques for Nuclear Systems - Theory Lectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Forrest B.
These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear eng. review & MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I; eigenvalue calculations II; eigenvalue calculations III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state-of-the-art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in Keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience at running the code, plotting both geometry and results, and understanding the code output.
The class includes lectures & hands-on computer use for a variety of Monte Carlo calculations. Beginning MCNP users are encouraged to review LA-UR-09-00380, "Criticality Calculations with MCNP: A Primer (3rd Edition)" (available at http://mcnp.lanl.gov under "Reference Collection") prior to the class. No Monte Carlo class can be complete without having students write their own simple Monte Carlo routines for basic random sampling, use of the random number generator, and simplified particle transport simulation.
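The closing suggestion, that students write their own simple Monte Carlo routines, can be illustrated with a toy transport exercise. The sketch below is an assumption-laden simplification (a purely absorbing slab with no scattering; function name and parameters are ours), not MCNP code:

```python
import math
import random

def simulate_transmission(sigma_t, thickness, n_particles, seed=1):
    """Toy Monte Carlo: estimate the fraction of particles crossing a
    purely absorbing slab by sampling exponential free-flight distances."""
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_particles):
        # Invert the CDF of p(s) = sigma_t * exp(-sigma_t * s).
        s = -math.log(1.0 - rng.random()) / sigma_t
        if s > thickness:
            transmitted += 1
    return transmitted / n_particles

# For pure absorption the analytic answer is exp(-sigma_t * x) ~ 0.135.
estimate = simulate_transmission(sigma_t=1.0, thickness=2.0, n_particles=200_000)
print(abs(estimate - math.exp(-2.0)) < 0.01)  # agrees within statistics
```

Real transport codes add scattering, energy dependence, geometry tracking, and tallies on top of exactly this sampling kernel.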
Zobač, Vladimír; Lewis, James P; Abad, Enrique; Mendieta-Moreno, Jesús I; Hapala, Prokop; Jelínek, Pavel; Ortega, José
2015-05-08
The computational simulation of photo-induced processes in large molecular systems is a very challenging problem. Firstly, to properly simulate photo-induced reactions the potential energy surfaces corresponding to excited states must be appropriately accessed; secondly, understanding the mechanisms of these processes requires the exploration of complex configurational spaces and the localization of conical intersections; finally, photo-induced reactions are probability events, that require the simulation of hundreds of trajectories to obtain the statistical information for the analysis of the reaction profiles. Here, we present a detailed description of our implementation of a molecular dynamics with electronic transitions algorithm within the local-orbital density functional theory code FIREBALL, suitable for the computational study of these problems. As an example of the application of this approach, we also report results on the [2 + 2] cycloaddition of ethylene with maleic anhydride and on the [2 + 2] photo-induced polymerization reaction of two C60 molecules. We identify different deactivation channels of the initial electron excitation, depending on the time of the electronic transition from LUMO to HOMO, and the character of the HOMO after the transition.
Computational Insights into Materials and Interfaces for Capacitive Energy Storage
Zhan, Cheng; Lian, Cheng; Zhang, Yu; Thompson, Matthew W.; Xie, Yu; Wu, Jianzhong; Kent, Paul R. C.; Cummings, Peter T.; Wesolowski, David J.
2017-01-01
Supercapacitors such as electric double‐layer capacitors (EDLCs) and pseudocapacitors are becoming increasingly important in the field of electrical energy storage. Theoretical study of energy storage in EDLCs focuses on solving for the electric double‐layer structure in different electrode geometries and electrolyte components, which can be achieved by molecular simulations such as classical molecular dynamics (MD), classical density functional theory (classical DFT), and Monte‐Carlo (MC) methods. In recent years, combining first‐principles and classical simulations to investigate the carbon‐based EDLCs has shed light on the importance of quantum capacitance in graphene‐like 2D systems. More recently, the development of joint density functional theory (JDFT) enables self‐consistent electronic‐structure calculation for an electrode being solvated by an electrolyte. In contrast with the large amount of theoretical and computational effort on EDLCs, theoretical understanding of pseudocapacitance is very limited. In this review, we first introduce popular modeling methods and then focus on several important aspects of EDLCs including nanoconfinement, quantum capacitance, dielectric screening, and novel 2D electrode design; we also briefly touch upon the pseudocapacitive mechanism in RuO2. We summarize and conclude with an outlook for the future of materials simulation and design for capacitive energy storage. PMID:28725531
Ab initio molecular simulations with numeric atom-centered orbitals
NASA Astrophysics Data System (ADS)
Blum, Volker; Gehrke, Ralf; Hanke, Felix; Havu, Paula; Havu, Ville; Ren, Xinguo; Reuter, Karsten; Scheffler, Matthias
2009-11-01
We describe a complete set of algorithms for ab initio molecular simulations based on numerically tabulated atom-centered orbitals (NAOs) to capture a wide range of molecular and materials properties from quantum-mechanical first principles. The full algorithmic framework described here is embodied in the Fritz Haber Institute "ab initio molecular simulations" (FHI-aims) computer program package. Its comprehensive description should be relevant to any other first-principles implementation based on NAOs. The focus here is on density-functional theory (DFT) in the local and semilocal (generalized gradient) approximations, but an extension to hybrid functionals, Hartree-Fock theory, and MP2/GW electron self-energies for total energies and excited states is possible within the same underlying algorithms. An all-electron/full-potential treatment that is both computationally efficient and accurate is achieved for periodic and cluster geometries on equal footing, including relaxation and ab initio molecular dynamics. We demonstrate the construction of transferable, hierarchical basis sets, allowing the calculation to range from qualitative tight-binding like accuracy to meV-level total energy convergence with the basis set. Since all basis functions are strictly localized, the otherwise computationally dominant grid-based operations scale as O(N) with system size N. Together with a scalar-relativistic treatment, the basis sets provide access to all elements from light to heavy. Both low-communication parallelization of all real-space grid based algorithms and a ScaLapack-based, customized handling of the linear algebra for all matrix operations are possible, guaranteeing efficient scaling (CPU time and memory) up to massively parallel computer systems with thousands of CPUs.
Importance of Vibronic Effects in the UV-Vis Spectrum of the 7,7,8,8-Tetracyanoquinodimethane Anion.
Tapavicza, Enrico; Furche, Filipp; Sundholm, Dage
2016-10-11
We present a computational method for simulating vibronic absorption spectra in the ultraviolet-visible (UV-vis) range and apply it to the 7,7,8,8-tetracyanoquinodimethane anion (TCNQ−), which has been used as a ligand in black absorbers. Gaussian broadening of vertical electronic excitation energies of TCNQ− from linear-response time-dependent density functional theory produces only one band, which is qualitatively incorrect. Thus, the harmonic vibrational modes of the two lowest doublet states were computed, and the vibronic UV-vis spectrum was simulated using the displaced harmonic oscillator approximation, the frequency-shifted harmonic oscillator approximation, and the full Duschinsky formalism. An efficient real-time generating function method was implemented to avoid the exponential complexity of conventional Franck-Condon approaches to vibronic spectra. The obtained UV-vis spectra for TCNQ− agree well with experiment; the Duschinsky rotation is found to have only a minor effect on the spectrum. Born-Oppenheimer molecular dynamics simulations combined with calculations of the electronic excitation energies for a large number of molecular structures were also used for simulating the UV-vis spectrum. The Born-Oppenheimer molecular dynamics simulations yield a broadening of the energetically lowest peak in the absorption spectrum, but additional vibrational bands present in the experimental and simulated quantum harmonic oscillator spectra are not observed in the molecular dynamics simulations. Our results underline the importance of vibronic effects for the UV-vis spectrum of TCNQ−, and they establish an efficient method for obtaining vibronic spectra using a combination of linear-response time-dependent density functional theory and a real-time generating function approach.
Renormalization group analysis of anisotropic diffusion in turbulent shear flows
NASA Technical Reports Server (NTRS)
Rubinstein, Robert; Barton, J. Michael
1991-01-01
The renormalization group is applied to compute anisotropic corrections to the scalar eddy diffusivity representation of turbulent diffusion of a passive scalar. The corrections are linear in the mean velocity gradients. All model constants are computed theoretically. A form of the theory valid at arbitrary Reynolds number is derived. The theory applies only when convection of the velocity-scalar correlation can be neglected. A ratio of diffusivity components, found experimentally to have a nearly constant value in a variety of shear flows, is computed theoretically for flows in a certain state of equilibrium. The theoretical value is well within the fairly narrow range of experimentally observed values. Theoretical predictions of this diffusivity ratio are also compared with data from experiments and direct numerical simulations of homogeneous shear flows with constant velocity and scalar gradients.
Pirolli, Peter
2016-08-01
Computational models were developed in the ACT-R neurocognitive architecture to address some aspects of the dynamics of behavior change. The simulations aim to address the day-to-day goal achievement data available from mobile health systems. The models refine current psychological theories of self-efficacy, intended effort, and habit formation, and provide an account for the mechanisms by which goal personalization, implementation intentions, and remindings work.
Cosmological N-body Simulation
NASA Astrophysics Data System (ADS)
Lake, George
1994-05-01
The ``N'' in N-body calculations has doubled every year for the last two decades. To continue this trend, the UW N-body group is working on algorithms for the fast evaluation of gravitational forces on parallel computers and establishing rigorous standards for the computations. In these algorithms, the computational cost per time step is ~ 10^3 pairwise forces per particle. A new adaptive time integrator enables us to perform high quality integrations that are fully temporally and spatially adaptive. SPH--smoothed particle hydrodynamics--will be added to simulate the effects of dissipating gas and magnetic fields. The importance of these calculations is two-fold. First, they determine the nonlinear consequences of theories for the structure of the Universe. Second, they are essential for the interpretation of observations. Every galaxy has six coordinates of velocity and position. Observations determine two sky coordinates and a line of sight velocity that bundles universal expansion (distance) together with a random velocity created by the mass distribution. Simulations are needed to determine the underlying structure and masses. The importance of simulations has moved from ex post facto explanation to an integral part of planning large observational programs. I will show why high quality simulations with ``large N'' are essential to accomplish our scientific goals. This year, our simulations have N >~ 10^7. This is sufficient to tackle some niche problems, but well short of our 5 year goal--simulating The Sloan Digital Sky Survey using a few billion particles (a Teraflop-year simulation). Extrapolating past trends, we would have to ``wait'' 7 years for this hundred-fold improvement. Like past gains, significant changes in the computational methods are required for these advances. I will describe new algorithms, algorithmic hacks and a dedicated computer to perform billion-particle simulations.
Finally, I will describe research that can be enabled by Petaflop computers. This research is supported by the NASA HPCC/ESS program.
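The quoted cost of ~10^3 pairwise forces per particle refers to fast approximations of the exact O(N^2) sum. As an illustration only (G = 1 units, Plummer softening; the function and parameters are ours, not the UW group's code), the direct sum that tree codes approximate is:

```python
def pairwise_accelerations(positions, masses, eps=1e-3):
    """Direct O(N^2) summation of softened gravitational accelerations
    (G = 1). Tree and fast-multipole codes approximate exactly this sum."""
    n = len(positions)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = [positions[j][k] - positions[i][k] for k in range(3)]
            r2 = sum(d * d for d in dx) + eps * eps  # Plummer softening
            inv_r3 = r2 ** -1.5
            for k in range(3):
                acc[i][k] += masses[j] * dx[k] * inv_r3
    return acc

# Two unit masses separated by distance 2 attract with |a| = 1/2^2 = 0.25.
a = pairwise_accelerations([(0.0, 0.0, 0.0), (2.0, 0.0, 0.0)], [1.0, 1.0], eps=0.0)
print(round(a[0][0], 3))  # 0.25
```

The softening length `eps` regularizes close encounters, a standard device in collisionless N-body work.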
Navier-Stokes simulations of slender axisymmetric shapes in supersonic, turbulent flow
NASA Astrophysics Data System (ADS)
Moran, Kenneth J.; Beran, Philip S.
1994-07-01
Computational fluid dynamics is used to study flows about slender, axisymmetric bodies at very high speeds. Numerical experiments are conducted to simulate a broad range of flight conditions. Mach number is varied from 1.5 to 8 and Reynolds number is varied from 1 × 10^6/m to 10^8/m. The primary objective is to develop and validate a computational methodology for the accurate simulation of a wide variety of flow structures. Accurate results are obtained for detached bow shocks, recompression shocks, corner-point expansions, base-flow recirculations, and turbulent boundary layers. Accuracy is assessed through comparison with theory and experimental data; computed surface pressure, shock structure, base-flow structure, and velocity profiles are within measurement accuracy throughout the range of conditions tested. The methodology is both practical and general: general in its applicability, and practical in its performance. To achieve high accuracy, modifications to previously reported techniques are implemented in the scheme. These modifications improve computed results in the vicinity of symmetry lines and in the base flow region, including the turbulent wake.
1988-12-01
the effects of two formal training methods in the retail sales arena was reported by Ivancevich and Smith (1981). The methods involved (a) role...now established and that his work has extended the theory. Oligopoly theory also was the focus of the work reported by Lyons (1982). Oligopoly refers...benefit for the group while providing a fair share for each group member. Lyons described a computer-based game designed primarily for management
Zendehrouh, Sareh
2015-11-01
Recent work in the decision-making field offers an account of dual-system theory for the decision-making process. This theory holds that this process is conducted by two main controllers: a goal-directed system and a habitual system. In the reinforcement learning (RL) domain, habitual behaviors are connected with model-free methods, in which appropriate actions are learned through trial-and-error experiences. Goal-directed behaviors, however, are associated with model-based methods of RL, in which actions are selected using a model of the environment. Studies on cognitive control also suggest that during processes like decision-making, some cortical and subcortical structures work in concert to monitor the consequences of decisions and to adjust control according to current task demands. Here a computational model is presented based on dual-system theory and the cognitive control perspective of decision-making. The proposed model is used to simulate human performance on a variant of a probabilistic learning task. The basic proposal is that the brain implements a dual controller, while an accompanying monitoring system detects some kinds of conflict, including a hypothetical cost-conflict one. The simulation results address existing theories about two event-related potentials, namely the error-related negativity (ERN) and the feedback-related negativity (FRN), and explore the best account of them. Based on the results, some testable predictions are also presented. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Freniere, Cole; Pathak, Ashish; Raessi, Mehdi
2016-11-01
Ocean Wave Energy Converters (WECs) are devices that convert energy from ocean waves into electricity. To aid in the design of WECs, an advanced computational framework has been developed which has advantages over conventional methods. The computational framework simulates the performance of WECs in a virtual wave tank by solving the full Navier-Stokes equations in 3D, capturing the fluid-structure interaction, nonlinear and viscous effects. In this work, we present simulations of the performance of pitching cylinder-type WECs and compare against experimental data. WECs are simulated at both model and full scales. The results are used to determine the role of the Keulegan-Carpenter (KC) number. The KC number is representative of viscous drag behavior on a bluff body in an oscillating flow, and is considered an important indicator of the dynamics of a WEC. Studying the effects of the KC number is important for determining the validity of the Froude scaling and the inviscid potential flow theory, which are heavily relied on in the conventional approaches to modeling WECs. Support from the National Science Foundation is gratefully acknowledged.
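The Keulegan-Carpenter number discussed above has the standard definition KC = U_m T / D for velocity amplitude U_m, oscillation period T, and body scale D. A minimal sketch (the numbers are illustrative, not the paper's WEC geometry) shows why Froude scaling preserves KC:

```python
def keulegan_carpenter(u_max, period, diameter):
    """Keulegan-Carpenter number KC = U_m * T / D for an oscillating flow
    of velocity amplitude U_m and period T past a body of size D."""
    return u_max * period / diameter

# Froude scaling with length ratio 100: velocities and times scale by
# sqrt(100) = 10, so KC is unchanged between model and full scale.
kc_model = keulegan_carpenter(0.5, 2.0, 0.1)    # model scale
kc_full = keulegan_carpenter(5.0, 20.0, 10.0)   # full scale
print(abs(kc_model - 10.0) < 1e-9 and abs(kc_full - kc_model) < 1e-9)
```

Because KC survives Froude scaling while the Reynolds number does not, viscous effects are the quantity that must be checked separately, which is the point of the study.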
2015-05-01
CIRCUITSCAPE (McRae 2006). CIRCUITSCAPE uses circuit theory to simulate gene flow (i.e., “current”) through a resistance surface in which landscape ...2010. Utility of computer simulations in landscape genetics. Mol Ecol 19: 3549–64. Erös T, Schmera D, Schick RS. 2011. Network thinking in...FINAL REPORT Hydroecology of Intermittent and Ephemeral Streams: Will Landscape Connectivity Sustain Aquatic Organisms in a Changing Climate
Testing trivializing maps in the Hybrid Monte Carlo algorithm
Engel, Georg P.; Schaefer, Stefan
2011-01-01
We test a recent proposal to use approximate trivializing maps in a field theory to speed up Hybrid Monte Carlo simulations. Simulating the CP^(N-1) model, we find a small improvement with the leading order transformation, which is however compensated by the additional computational overhead. The scaling of the algorithm towards the continuum is not changed. In particular, the effect of the topological modes on the autocorrelation times is studied. PMID:21969733
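For readers unfamiliar with the algorithm being accelerated, a bare-bones Hybrid Monte Carlo update for a single degree of freedom is sketched below. A 1D Gaussian target stands in for the CP^(N-1) action; step sizes and counts are illustrative choices of ours:

```python
import math
import random

def hmc_sample(n_samples, step=0.2, n_leapfrog=10, seed=0):
    """Plain Hybrid Monte Carlo for a 1D standard Gaussian, U(x) = x^2/2.
    A trivializing map would precondition exactly this kind of update."""
    rng = random.Random(seed)
    grad_u = lambda z: z                      # dU/dx for U(x) = x^2/2
    x, samples = 0.0, []
    for _ in range(n_samples):
        p = rng.gauss(0.0, 1.0)               # refresh the momentum
        x_new, p_new = x, p
        p_new -= 0.5 * step * grad_u(x_new)   # leapfrog: initial half kick
        for _ in range(n_leapfrog - 1):
            x_new += step * p_new             # drift
            p_new -= step * grad_u(x_new)     # kick
        x_new += step * p_new
        p_new -= 0.5 * step * grad_u(x_new)   # final half kick
        dh = 0.5 * (x_new**2 + p_new**2 - x**2 - p**2)
        if dh <= 0.0 or rng.random() < math.exp(-dh):  # Metropolis test
            x = x_new
        samples.append(x)
    return samples

xs = hmc_sample(20_000)
mean = sum(xs) / len(xs)
var = sum(v * v for v in xs) / len(xs)
print(abs(mean) < 0.1 and abs(var - 1.0) < 0.15)  # samples look N(0, 1)
```

In lattice field theory the same structure applies with x replaced by the field configuration and grad_u by the force from the action.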
Robust flow stability: Theory, computations and experiments in near wall turbulence
NASA Astrophysics Data System (ADS)
Bobba, Kumar Manoj
Helmholtz established the field of hydrodynamic stability with his pioneering work in 1868. From then on, hydrodynamic stability became an important tool in understanding various fundamental fluid flow phenomena in engineering (mechanical, aeronautics, chemical, materials, civil, etc.) and science (astrophysics, geophysics, biophysics, etc.), and turbulence in particular. However, there are many discrepancies between classical hydrodynamic stability theory and experiments. In this thesis, the limitations of traditional hydrodynamic stability theory are shown and a framework for robust flow stability theory is formulated. A host of new techniques like gramians, singular values, operator norms, etc. are introduced to understand the role of various kinds of uncertainty. An interesting feature of this framework is the close interplay between theory and computations. It is shown that a subset of the Navier-Stokes equations is globally, nonlinearly stable for all Reynolds numbers. Yet, invoking this new theory, it is shown that these equations produce structures (vortices and streaks) as seen in the experiments. The experiments are done in a zero-pressure-gradient transitional boundary layer on a flat plate in a free-surface tunnel. Digital particle image velocimetry, and MEMS-based laser Doppler velocimeter and shear stress sensors have been used to make quantitative measurements of the flow. Various theoretical and computational predictions are in excellent agreement with the experimental data. A closely related topic of modeling, simulation and complexity reduction of large mechanics problems with multiple spatial and temporal scales is also studied. A method that rigorously quantifies the important scales and automatically gives models of the problem to various levels of accuracy is introduced. Computations done using spectral methods are presented.
Delivering Insight The History of the Accelerated Strategic Computing Initiative
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larzelere II, A R
2007-01-03
The history of the Accelerated Strategic Computing Initiative (ASCI) tells of the development of computational simulation into a third fundamental piece of the scientific method, on a par with theory and experiment. ASCI did not invent the idea, nor was it alone in bringing it to fruition. But ASCI provided the wherewithal - hardware, software, environment, funding, and, most of all, the urgency - that made it happen. On October 1, 2005, the Initiative completed its tenth year of funding. The advances made by ASCI over its first decade are truly incredible. Lawrence Livermore, Los Alamos, and Sandia National Laboratories, along with leadership provided by the Department of Energy's Defense Programs Headquarters, fundamentally changed computational simulation and how it is used to enable scientific insight. To do this, astounding advances were made in simulation applications, computing platforms, and user environments. ASCI dramatically changed existing - and forged new - relationships, both among the Laboratories and with outside partners. By its tenth anniversary, despite daunting challenges, ASCI had accomplished all of the major goals set at its beginning. The history of ASCI is about the vision, leadership, endurance, and partnerships that made these advances possible.
CulSim: A simulator of emergence and resilience of cultural diversity
NASA Astrophysics Data System (ADS)
Ulloa, Roberto
CulSim is an agent-based computer simulation software that allows further exploration of influential and recent models of emergence of cultural groups grounded in sociological theories. CulSim provides a collection of tools to analyze resilience of cultural diversity when events affect agents, institutions or global parameters of the simulations; upon combination, events can be used to approximate historical circumstances. The software provides a graphical and text-based user interface, and so makes this agent-based modeling methodology accessible to a variety of users from different research fields.
Kreula, J. M.; Clark, S. R.; Jaksch, D.
2016-01-01
We propose a non-linear, hybrid quantum-classical scheme for simulating non-equilibrium dynamics of strongly correlated fermions described by the Hubbard model in a Bethe lattice in the thermodynamic limit. Our scheme implements non-equilibrium dynamical mean field theory (DMFT) and uses a digital quantum simulator to solve a quantum impurity problem whose parameters are iterated to self-consistency via a classically computed feedback loop where quantum gate errors can be partly accounted for. We analyse the performance of the scheme in an example case. PMID:27609673
Cognitive Dissonance Reduction as Constraint Satisfaction.
ERIC Educational Resources Information Center
Shultz, Thomas R.; Lepper, Mark R.
1996-01-01
It is argued that the reduction of cognitive dissonance can be viewed as a constraint satisfaction problem, and a computational model of the process of consonance seeking is proposed. Simulations from this model matched psychological findings from the insufficient justification and free-choice paradigms of cognitive dissonance theory. (SLD)
ERIC Educational Resources Information Center
Hunt, Shelby D.; Madhavaram, Sreedhar
2006-01-01
Knowledge of marketing strategy is essential for marketing majors. To supplement and/or replace the traditional lecture-discussion approach, several pedagogical vehicles have been recommended to teach marketing strategy, including the analytic hierarchy process; career-planning cases; computer-assisted, simulated marketing cases; experiential…
A new eddy current model for magnetic bearing control system design
NASA Technical Reports Server (NTRS)
Feeley, Joseph J.; Ahlstrom, Daniel J.
1992-01-01
This paper describes a new VLSI-based controller for the implementation of a Linear-Quadratic-Gaussian (LQG) theory-based control system. Use of the controller is demonstrated by design of a controller for a magnetic bearing and its performance is evaluated by computer simulation.
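As a hedged sketch of the LQG design step, the scalar discrete-time Riccati recursion below computes an optimal state-feedback gain. A full LQG controller would pair this with a Kalman filter, and the plant numbers here are illustrative, not the magnetic bearing model:

```python
def dlqr_gain(a, b, q, r, iters=500):
    """Scalar discrete-time LQR: iterate the Riccati recursion
    P <- Q + A^2 P - (A B P)^2 / (R + B^2 P) to a fixed point, then
    return the feedback gain K = A B P / (R + B^2 P)."""
    p = q
    for _ in range(iters):
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    return a * b * p / (r + b * b * p)

# Unstable plant x+ = 1.1 x + u: the closed loop A - B K must be stable.
k = dlqr_gain(a=1.1, b=1.0, q=1.0, r=1.0)
print(abs(1.1 - k) < 1.0)  # True: |A - BK| < 1, closed loop is stable
```

Scalars keep the sketch dependency-free; the matrix case replaces the divisions with inverses and is what a real bearing controller would use.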
NASA Technical Reports Server (NTRS)
Joslin, R. D.; Streett, C. L.; Chang, C.-L.
1991-01-01
A study of instabilities in incompressible boundary-layer flow on a flat plate is conducted by spatial direct numerical simulation (DNS) of the Navier-Stokes equations. Here, the DNS results are used to critically evaluate the results obtained using parabolized stability equations (PSE) theory and to study mechanisms associated with breakdown from laminar to turbulent flow. Three test cases are considered: two-dimensional Tollmien-Schlichting wave propagation, subharmonic instability breakdown, and oblique-wave breakdown. The instability modes predicted by PSE theory are in good quantitative agreement with the DNS results, except a small discrepancy is evident in the mean-flow distortion component of the 2-D test problem. This discrepancy is attributed to far-field boundary-condition differences. Both DNS and PSE theory results show several modal discrepancies when compared with the experiments of subharmonic breakdown. Computations that allow for a small adverse pressure gradient in the basic flow and a variation of the disturbance frequency result in better agreement with the experiments.
Learning control system design based on 2-D theory - An application to parallel link manipulator
NASA Technical Reports Server (NTRS)
Geng, Z.; Carroll, R. L.; Lee, J. D.; Haynes, L. H.
1990-01-01
An approach to iterative learning control system design based on two-dimensional system theory is presented. A two-dimensional model for the iterative learning control system which reveals the connections between learning control systems and two-dimensional system theory is established. A learning control algorithm is proposed, and the convergence of learning using this algorithm is guaranteed by two-dimensional stability. The learning algorithm is applied successfully to the trajectory tracking control problem for a parallel link robot manipulator. The excellent performance of this learning algorithm is demonstrated by the computer simulation results.
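The learning update described above, whose convergence across trials is guaranteed by two-dimensional stability, can be sketched for a scalar plant. The plant, gain, and reference below are illustrative stand-ins of ours, not the parallel-link manipulator model:

```python
import math

def ilc_final_error(gain=0.5, trials=60):
    """P-type iterative learning control with a one-step-ahead error,
    u_{k+1}(t) = u_k(t) + L * e_k(t+1), applied to the toy plant
    x_{t+1} = 0.3 x_t + u_t over repeated identical trials."""
    n = 20
    ref = [math.sin(0.3 * t) for t in range(n + 1)]
    u = [0.0] * n
    e = ref[:]
    for _ in range(trials):
        x, y = 0.0, []
        for t in range(n + 1):               # one pass over the trajectory
            y.append(x)
            if t < n:
                x = 0.3 * x + u[t]
        e = [r - yi for r, yi in zip(ref, y)]
        u = [u[t] + gain * e[t + 1] for t in range(n)]  # learning update
    return max(abs(v) for v in e[1:])        # e[0] is set by the initial state

print(ilc_final_error() < 1e-6)  # True: tracking error vanishes over trials
```

Each trial reuses the stored input plus a correction proportional to the previous trial's error, so the error contracts from iteration to iteration, which is the essence of the 2-D (time × trial) stability argument.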
Direct simulations of chemically reacting turbulent mixing layers
NASA Technical Reports Server (NTRS)
Riley, J. J.; Metcalfe, R. W.
1984-01-01
The report presents the results of direct numerical simulations of chemically reacting turbulent mixing layers. The work consists of two parts: (1) the development and testing of a spectral numerical computer code that treats the diffusion reaction equations; and (2) the simulation of a series of cases of chemical reactions occurring on mixing layers. The reaction considered is a binary, irreversible reaction with no heat release. The reacting species are nonpremixed. The results of the numerical tests indicate that the high accuracy of the spectral methods observed for rigid body rotation is also obtained when diffusion, reaction, and more complex flows are considered. In the simulations, the effects of vortex rollup and smaller scale turbulence on the overall reaction rates are investigated. The simulation results are found to be in approximate agreement with similarity theory. Comparisons of simulation results with certain modeling hypotheses indicate limitations in these hypotheses. The nondimensional product thickness computed from the simulations is compared with laboratory values and is found to be in reasonable agreement, especially since there are no adjustable constants in the method.
Khakhaleva-Li, Zimu; Gnedin, Nickolay Y.
2016-03-30
In this study, we compare the properties of stellar populations of model galaxies from the Cosmic Reionization On Computers (CROC) project with the existing UV and IR data. Since CROC simulations do not follow cosmic dust directly, we adopt two variants of the dust-follows-metals ansatz to populate model galaxies with dust. Using the dust radiative transfer code Hyperion, we compute synthetic stellar spectra, UV continuum slopes, and IR fluxes for simulated galaxies. We find that the simulation results generally match observational measurements, but, perhaps, not in full detail. The differences seem to indicate that our adopted dust-follows-metals ansatzes are not fully sufficient. While the discrepancies with the existing data are marginal, the future JWST data will be of much higher precision, rendering highly significant any tentative difference between theory and observations. It is therefore likely that in order to fully utilize the precision of JWST observations, fully dynamical modeling of dust formation, evolution, and destruction may be required.
Costa, Paulo R; Caldas, Linda V E
2002-01-01
This work presents the development and evaluation of modern techniques for calculating radiation protection barriers in clinical radiographic facilities. Our methodology uses realistic primary and scattered spectra. The primary spectra were computer simulated using a waveform generalization and a semiempirical model (the Tucker-Barnes-Chakraborty model). The scattered spectra were obtained from published data. An analytical function was used to produce attenuation curves from polychromatic radiation for specified kVp, waveform, and filtration. The results of this analytical function are given in ambient dose equivalent units. The attenuation curves were obtained by application of Archer's model to computer simulation data. The parameters for the best fit to the model using primary and secondary radiation data from different radiographic procedures were determined. They resulted in an optimized model for shielding calculation for any radiographic room. The shielding costs were about 50% lower than those calculated using the traditional method based on Report No. 49 of the National Council on Radiation Protection and Measurements.
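Archer's model referenced above is the standard three-parameter broad-beam transmission fit, B(x) = [(1 + β/α) exp(αγx) − β/α]^(−1/γ). A minimal evaluation sketch follows; the α, β, γ values are placeholders of ours, not the fitted parameters from this work:

```python
import math

def archer_transmission(x, alpha, beta, gamma):
    """Archer et al. three-parameter barrier transmission model,
    B(x) = [(1 + b/a) * exp(a*g*x) - b/a] ** (-1/g), commonly used to
    fit broad-beam attenuation curves versus barrier thickness x."""
    r = beta / alpha
    return ((1.0 + r) * math.exp(alpha * gamma * x) - r) ** (-1.0 / gamma)

# B(0) = 1 by construction; transmission falls off with thickness.
b0 = archer_transmission(0.0, alpha=2.5, beta=15.0, gamma=0.9)
b2 = archer_transmission(2.0, alpha=2.5, beta=15.0, gamma=0.9)
print(b0 == 1.0 and 0.0 < b2 < b0)
```

In shielding practice the three parameters are fitted per beam quality (kVp, waveform, filtration), which is the fitting step the abstract describes.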
A two-dimensional model of water: Solvation of nonpolar solutes
NASA Astrophysics Data System (ADS)
Urbič, T.; Vlachy, V.; Kalyuzhnyi, Yu. V.; Southall, N. T.; Dill, K. A.
2002-01-01
We recently applied a Wertheim integral equation theory (IET) and a thermodynamic perturbation theory (TPT) to the Mercedes-Benz (MB) model of pure water. These analytical theories offer the advantage of being computationally less intensive than the Monte Carlo simulations by orders of magnitude. The long-term goal of this work is to develop analytical theories of water that can handle orientation-dependent interactions and the MB model serves as a simple workbench for this development. Here we apply the IET and TPT to the hydrophobic effect, the transfer of a nonpolar solute into MB water. As before, we find that the theories reproduce the Monte Carlo results quite accurately at higher temperatures, while they predict the qualitative trends in cold water.
Mökkönen, Harri; Ala-Nissila, Tapio; Jónsson, Hannes
2016-09-07
The recrossing correction to the transition state theory estimate of a thermal rate can be difficult to calculate when the energy barrier is flat. This problem arises, for example, in polymer escape if the polymer is long enough to stretch between the initial and final state energy wells while the polymer beads undergo diffusive motion back and forth over the barrier. We present an efficient method for evaluating the correction factor by constructing a sequence of hyperplanes starting at the transition state and calculating the probability that the system advances from one hyperplane to another towards the product. This is analogous to what is done in forward flux sampling except that there the hyperplane sequence starts at the initial state. The method is applied to the escape of polymers with up to 64 beads from a potential well. For high temperature, the results are compared with direct Langevin dynamics simulations as well as forward flux sampling and excellent agreement between the three rate estimates is found. The use of a sequence of hyperplanes in the evaluation of the recrossing correction speeds up the calculation by an order of magnitude as compared with the traditional approach. As the temperature is lowered, the direct Langevin dynamics simulations as well as the forward flux simulations become computationally too demanding, while the harmonic transition state theory estimate corrected for recrossings can be calculated without significant increase in the computational effort.
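The interface-by-interface probability product at the heart of both forward flux sampling and the hyperplane sequence above can be illustrated on a discrete random walk, where the gambler's-ruin result gives an exact check. The walk and interface placement are our toy stand-ins for the Langevin polymer dynamics:

```python
import random

def stage_probability(start, top, bottom, n_trials, rng):
    """Estimate P(reach `top` before `bottom`) for a symmetric random walk
    started at `start` -- one interface-to-interface stage."""
    hits = 0
    for _ in range(n_trials):
        x = start
        while bottom < x < top:
            x += rng.choice((-1, 1))
        hits += x == top
    return hits / n_trials

rng = random.Random(42)
# Chain the stages: the total escape probability is the product of the
# per-interface advance probabilities, here (1/2)*(2/3)*(3/4) = 1/4.
p = 1.0
for level in range(1, 4):
    p *= stage_probability(level, level + 1, 0, 20_000, rng)
print(abs(p - 0.25) < 0.02)  # matches the gambler's-ruin answer 1/4
```

Splitting a rare event into a product of likelier stage probabilities is exactly why the hyperplane construction beats brute-force simulation when the overall probability is tiny.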
NASA Astrophysics Data System (ADS)
Bouchet, F.; Laurie, J.; Zaboronski, O.
2012-12-01
We describe transitions between attractors with either one, two, or more zonal jets in models of turbulent atmosphere dynamics. Those transitions are extremely rare, occurring over time scales of centuries or millennia. They are extremely hard to observe in direct numerical simulations, because they require, on one hand, very good resolution in order to simulate the turbulence accurately and, on the other hand, simulations performed over an extremely long time; these conditions are usually not met together in realistic models. However, many examples of transitions between turbulent attractors in geophysical flows are known to exist (paths of the Kuroshio, Earth's magnetic field reversals, atmospheric flows, and so on), and their study through numerical computation is inaccessible by conventional means. We present an alternative approach based on instanton theory and large deviations. Instanton theory provides a way to compute, both numerically and theoretically, extremely rare transitions between turbulent attractors. This tool, developed in field theory and justified in some cases through large deviation theory in mathematics, can be applied to models of turbulent atmosphere dynamics. It provides both new theoretical insights and new types of numerical algorithms. Those algorithms can predict transition histories and transition rates using numerical simulations run over only hundreds of typical model dynamical times, which is several orders of magnitude shorter than the typical transition time. We illustrate the power of these tools in the framework of quasi-geostrophic models. We show regimes where two or more attractors coexist; those attractors correspond to turbulent flows dominated by one or more zonal jets similar to midlatitude atmospheric jets. Among the trajectories connecting two non-equilibrium attractors, we determine the most probable ones.
Moreover, we also determine the transition rates, which correspond to transition times several orders of magnitude longer than the typical time determined from the jet structure. We discuss the medium-term generalization of these results to models with more complexity, such as primitive equations or GCMs.
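A minimal sketch of the instanton computation, under invented assumptions (a one-dimensional overdamped double well rather than a quasi-geostrophic model): discretize a path pinned at two attractors and minimize the Freidlin-Wentzell action by gradient descent; the minimized action controls the exponential suppression of the transition rate.

```python
def instanton_action(n=60, T=8.0, iters=2000, lr=0.05):
    """Minimize the discretized Freidlin-Wentzell action
        S = (1/4) * sum_i ((x_{i+1}-x_i)/dt - f(x_i))**2 * dt
    for the overdamped double well f(x) = x - x**3, with the path pinned
    at the attractors x = -1 and x = +1. Returns (action, path)."""
    dt = T / n
    f  = lambda y: y - y**3
    fp = lambda y: 1.0 - 3.0 * y * y
    x = [-1.0 + 2.0 * i / n for i in range(n + 1)]   # straight-line initial path
    for _ in range(iters):
        g = [0.0] * (n + 1)
        for i in range(n):
            v = (x[i + 1] - x[i]) / dt - f(x[i])
            g[i]     += 0.5 * v * dt * (-1.0 / dt - fp(x[i]))   # dS/dx_i
            g[i + 1] += 0.5 * v * dt * (1.0 / dt)               # dS/dx_{i+1}
        for i in range(1, n):            # endpoints stay pinned at the attractors
            x[i] -= lr * g[i]
    action = sum(0.25 * ((x[i + 1] - x[i]) / dt - f(x[i]))**2 * dt
                 for i in range(n))
    return action, x

action, path = instanton_action()
print(action)   # the continuum minimum here is the quasipotential barrier, 0.25
```

The payoff mirrors the abstract's point: the minimizer is found from short deterministic computations, never by waiting for a rare fluctuation in a long stochastic run.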
Coupling-parameter expansion in thermodynamic perturbation theory.
Ramana, A Sai Venkata; Menon, S V G
2013-02-01
An approach to the coupling-parameter expansion in the liquid-state theory of simple fluids is presented by combining the ideas of thermodynamic perturbation theory and integral equation theories. This hybrid scheme avoids the problems of the latter in the two-phase region. A method to compute the perturbation series to arbitrary order is developed and applied to square-well fluids. Apart from the Helmholtz free energy, the method also gives the radial distribution function and the direct correlation function of the perturbed system. The theory is applied to square-well fluids of variable range and compared with simulation data. While the convergence of the perturbation series and the overall performance of the theory are good, improvements are needed for potentials with shorter ranges. Possible directions for further development of the coupling-parameter expansion are indicated.
ATR applications of minimax entropy models of texture and shape
NASA Astrophysics Data System (ADS)
Zhu, Song-Chun; Yuille, Alan L.; Lanterman, Aaron D.
2001-10-01
Concepts from information theory have recently found favor in both the mainstream computer vision community and the military automatic target recognition community. In the computer vision literature, the principles of minimax entropy learning theory have been used to generate rich probabilistic models of texture and shape. In addition, the method of types and large deviation theory have permitted the difficulty of various texture and shape recognition tasks to be characterized by 'order parameters' that determine how fundamentally vexing a task is, independent of the particular algorithm used. These information-theoretic techniques have been demonstrated using traditional visual imagery in applications such as simulating cheetah skin textures and finding roads in aerial imagery. We discuss their application to problems in the specific domain of automatic target recognition using infrared imagery. We also review recent theoretical and algorithmic developments that permit learning minimax entropy texture models for infrared textures in reasonable timeframes.
FASTSIM2: a second-order accurate frictional rolling contact algorithm
NASA Astrophysics Data System (ADS)
Vollebregt, E. A. H.; Wilders, P.
2011-01-01
In this paper we consider the frictional (tangential) steady rolling contact problem. We confine ourselves to the simplified theory, instead of using full elastostatic theory, in order to compute results fast, as needed for on-line application in vehicle system dynamics (VSD) simulation packages. The FASTSIM algorithm is the leading technology in this field and is employed in all dominant railway VSD packages in the world. The main contribution of this paper is a new version, "FASTSIM2", of the FASTSIM algorithm, which is second-order accurate. This is relevant for VSD because, with the new algorithm, 16 times fewer grid points are required for sufficiently accurate computation of the contact forces. The approach is based on new insights into the characteristics of the rolling contact problem under the simplified theory, and on taking precise care of the contact conditions in the numerical integration scheme employed.
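The quoted grid saving follows from error scaling: halving the mesh divides a first-order scheme's error by about 2, but a second-order scheme's by about 4. A generic quadrature illustration of that scaling (a stand-in smooth integrand, not the FASTSIM2 algorithm itself):

```python
import math

def left_rule(f, a, b, n):
    """First-order accurate integration rule: error ~ O(h)."""
    h = (b - a) / n
    return h * sum(f(a + i * h) for i in range(n))

def trapezoid(f, a, b, n):
    """Second-order accurate integration rule: error ~ O(h^2)."""
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

f = math.sin                      # stand-in for a smooth traction distribution
exact = 1.0 - math.cos(1.0)       # integral of sin over [0, 1]

r1 = abs(left_rule(f, 0.0, 1.0, 16) - exact) / abs(left_rule(f, 0.0, 1.0, 32) - exact)
r2 = abs(trapezoid(f, 0.0, 1.0, 16) - exact) / abs(trapezoid(f, 0.0, 1.0, 32) - exact)
print(round(r1, 1), round(r2, 1))   # 2.0 4.0: each halving of h buys twice as
                                    # much accuracy from the second-order rule
```

At a fixed accuracy target this compounds: a second-order discretization reaches the target on a much coarser grid, which is where the factor-of-16 saving over a 2D contact patch comes from.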
A unified account of gloss and lightness perception in terms of gamut relativity.
Vladusich, Tony
2013-08-01
A recently introduced computational theory of visual surface representation, termed gamut relativity, overturns the classical assumption that brightness, lightness, and transparency constitute perceptual dimensions corresponding to the physical dimensions of luminance, diffuse reflectance, and transmittance, respectively. Here I extend the theory to show how surface gloss and lightness can be understood in a unified manner in terms of the vector computation of "layered representations" of surface and illumination properties, rather than as perceptual dimensions corresponding to diffuse and specular reflectance, respectively. The theory simulates the effects of image histogram skewness on surface gloss/lightness and lightness constancy as a function of specular highlight intensity. More generally, gamut relativity clarifies, unifies, and generalizes a wide body of previous theoretical and experimental work aimed at understanding how the visual system parses the retinal image into layered representations of surface and illumination properties.
NASA Astrophysics Data System (ADS)
Dalichaouch, Thamine; Davidson, Asher; Xu, Xinlu; Yu, Peicheng; Tsung, Frank; Mori, Warren; Li, Fei; Zhang, Chaojie; Lu, Wei; Vieira, Jorge; Fonseca, Ricardo
2016-10-01
In the past few decades, there has been much progress in theory, simulation, and experiment towards using laser wakefield acceleration (LWFA) as the basis for designing and building compact x-ray free-electron lasers (XFELs) as well as a next-generation linear collider. Recently, ionization injection and density-downramp injection have been proposed and demonstrated as controllable injection schemes for creating higher-quality, ultra-bright relativistic electron beams with LWFA. However, full-3D simulations of plasma-based accelerators are computationally intensive, sometimes taking 100 million core-hours on today's computers. A more efficient quasi-3D algorithm was developed and implemented in OSIRIS using a particle-in-cell description with a charge-conserving current deposition scheme in r-z and a gridless Fourier expansion in ϕ. Due to the azimuthal symmetry of LWFA, quasi-3D simulations are computationally more efficient than 3D Cartesian simulations, since only the first few harmonics in ϕ are needed to capture the 3D physics. Using the quasi-3D approach, we present preliminary results of ionization- and downramp-triggered injection and compare the results against 3D LWFA simulations. This work was supported by DOE and NSF.
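The azimuthal-mode idea behind the quasi-3D algorithm can be sketched as a plain discrete Fourier projection on invented ring data (not the OSIRIS implementation): a nearly azimuthally symmetric field is captured almost entirely by its first few harmonics in ϕ.

```python
import cmath, math

def azimuthal_modes(field, m_max):
    """Discrete Fourier projection of ring samples f(phi_j) onto e^{i m phi}:
    c_m = (1/n) * sum_j f_j * exp(-i m phi_j), for m = 0..m_max."""
    n = len(field)
    return [sum(field[j] * cmath.exp(-1j * m * 2.0 * math.pi * j / n)
                for j in range(n)) / n
            for m in range(m_max + 1)]

# A nearly azimuthally symmetric field: strong m = 0 part, weak m = 1 asymmetry.
n = 64
field = [1.0 + 0.05 * math.cos(2.0 * math.pi * j / n) for j in range(n)]
c = azimuthal_modes(field, 3)
print(abs(c[0]))   # 1.0    : the dominant symmetric mode
print(abs(c[1]))   # 0.025  : cos(phi) splits into m = +/-1, half amplitude each
print(abs(c[2]))   # ~0     : higher harmonics vanish
```

Truncating at a small m_max is the source of the speedup: the solver evolves a handful of 2D (r-z) mode fields instead of a full 3D Cartesian grid.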
NASA Astrophysics Data System (ADS)
Zapp, Kai; Orús, Román
2017-06-01
The simulation of lattice gauge theories with tensor network (TN) methods is becoming increasingly fruitful. The vision is that such methods will, eventually, be used to simulate theories in (3+1) dimensions in regimes difficult for other methods. So far, however, TN methods have mostly simulated lattice gauge theories in (1+1) dimensions. The aim of this paper is to explore the simulation of quantum electrodynamics (QED) on infinite lattices with TNs, i.e., fermionic matter fields coupled to a U(1) gauge field, directly in the thermodynamic limit. With this idea in mind we first consider a gauge-invariant infinite density matrix renormalization group simulation of the Schwinger model, i.e., QED in (1+1)d. After giving a precise description of the numerical method, we benchmark our simulations by computing the subtracted chiral condensate in the continuum, in good agreement with other approaches. Our simulations of the Schwinger model allow us to build intuition about how a simulation should proceed in (2+1) dimensions. Based on this, we propose a variational ansatz using infinite projected entangled pair states (PEPS) to describe the ground state of (2+1)d QED. The ansatz includes U(1) gauge symmetry at the level of the tensors, as well as fermionic (matter) and bosonic (gauge) degrees of freedom at both the physical and virtual levels. We argue that all the necessary ingredients for the simulation of (2+1)d QED are, a priori, already in place, paving the way for future upcoming results.
Gutiérrez-Sevillano, Juan José; Caro-Pérez, Alejandro; Dubbeldam, David; Calero, Sofía
2011-12-07
We report a molecular simulation study of Cu-BTC metal-organic frameworks as carbon dioxide-methane separation devices. For this study we have computed the adsorption and diffusion of methane and carbon dioxide in the structure, both as pure components and as mixtures, over the full range of bulk gas compositions. From the single-component isotherms, mixture adsorption is predicted using ideal adsorbed solution theory. These predictions are in very good agreement with our computed mixture isotherms and with previously reported data. Adsorption and diffusion selectivities and preferential sitings are also discussed, with the aim of providing new molecular-level information for all studied systems.
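The ideal adsorbed solution theory (IAST) prediction step can be sketched for the special case of two Langmuir isotherms with equal saturation loading (illustrative parameters, not the Cu-BTC values): equate the spreading pressures of hypothetical pure-component states, then recover the mixture loadings.

```python
import math

def iast_binary_langmuir(p1, p2, qs, b1, b2):
    """IAST sketch for a binary mixture whose pure-component isotherms are
    Langmuir with a common saturation loading qs: q_i(P) = qs*b_i*P/(1+b_i*P).
    Bisect on the adsorbed-phase fraction x1 until the spreading pressures
    match: qs*ln(1 + b1*p1/x1) = qs*ln(1 + b2*p2/(1-x1))."""
    lo, hi = 1e-12, 1.0 - 1e-12
    for _ in range(200):
        x1 = 0.5 * (lo + hi)
        diff = math.log(1.0 + b1 * p1 / x1) - math.log(1.0 + b2 * p2 / (1.0 - x1))
        if diff > 0.0:      # component 1's spreading pressure too high: raise x1
            lo = x1
        else:
            hi = x1
    x1 = 0.5 * (lo + hi)
    q1_0 = qs * b1 * (p1 / x1) / (1.0 + b1 * p1 / x1)              # pure loading at P_1^0
    q2_0 = qs * b2 * (p2 / (1 - x1)) / (1.0 + b2 * p2 / (1 - x1))  # pure loading at P_2^0
    qt = 1.0 / (x1 / q1_0 + (1.0 - x1) / q2_0)                     # IAST total loading
    return qt * x1, qt * (1.0 - x1)

# CO2/CH4-like example with made-up Langmuir parameters:
q1, q2 = iast_binary_langmuir(p1=1.0, p2=1.0, qs=5.0, b1=2.0, b2=0.5)
print(q1, q2)   # equals extended Langmuir here: 10/3.5 and 2.5/3.5
```

With equal saturation capacities IAST reduces to the extended Langmuir model, which makes the sketch easy to check; with unequal capacities or non-Langmuir isotherms the same bisection works but the integrals must be evaluated numerically.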
Extended Lagrangian Density Functional Tight-Binding Molecular Dynamics for Molecules and Solids.
Aradi, Bálint; Niklasson, Anders M N; Frauenheim, Thomas
2015-07-14
A computationally fast quantum mechanical molecular dynamics scheme using an extended Lagrangian density functional tight-binding formulation has been developed and implemented in the DFTB+ electronic structure program package for simulations of solids and molecular systems. The scheme combines the computational speed of self-consistent density functional tight-binding theory with the efficiency and long-term accuracy of extended Lagrangian Born-Oppenheimer molecular dynamics. For systems without self-consistent charge instabilities, only a single diagonalization or construction of the single-particle density matrix is required in each time step. The molecular dynamics simulation scheme can be applied to a broad range of problems in materials science, chemistry, and biology.
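The extended-Lagrangian trick can be illustrated with a scalar toy model (not the DFTB+ implementation; the target signal q(t) and the parameter kappa are invented): an auxiliary variable propagated by a Verlet-like update shadows a slowly varying target without any self-consistency iterations.

```python
import math

def xl_track(steps=2000, dt=0.01, kappa=1.8):
    """Toy extended-Lagrangian propagation: an auxiliary variable n follows a
    slowly varying 'ground-state' signal q(t) through the Verlet-like update
        n_{k+1} = 2 n_k - n_{k-1} + kappa * (q_k - n_k),
    which is stable for 0 < kappa < 4. In XL-BOMD the analogous auxiliary
    density stays so close to the SCF solution that a single diagonalization
    per time step suffices. Returns the maximum tracking error."""
    q = lambda t: math.sin(0.1 * t)       # stand-in for the exact SCF charge
    n_prev, n_curr = q(0.0), q(dt)
    max_err = 0.0
    for k in range(1, steps):
        n_next = 2.0 * n_curr - n_prev + kappa * (q(k * dt) - n_curr)
        n_prev, n_curr = n_curr, n_next
        max_err = max(max_err, abs(n_curr - q((k + 1) * dt)))
    return max_err

err = xl_track()
print(err)   # stays tiny: the auxiliary variable shadows q(t) with no SCF loop
```

The key property, visible even in this toy, is that the auxiliary dynamics is time-reversible (no damping), so the tracking error stays bounded over long trajectories instead of drifting.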
Hadron Cancer Therapy: Role of Nuclear Reactions
DOE R&D Accomplishments Database
Chadwick, M. B.
2000-06-20
Recently it has become feasible to calculate energy deposition and particle transport in the body by proton and neutron radiotherapy beams, using Monte Carlo transport methods. A number of advances have made this possible, including dramatic increases in computer speeds, a better understanding of the microscopic nuclear reaction cross sections, and the development of methods to model the characteristics of the radiation emerging from the accelerator treatment unit. This paper describes the nuclear reaction mechanisms involved and how the cross sections have been evaluated from theory and experiment for use in computer simulations of radiation therapy. The simulations will allow the dose delivered to a tumor to be optimized while minimizing the dose given to nearby organs at risk.
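The Monte Carlo tallying idea can be sketched in one dimension (an invented toy transport model, not a therapy code; real simulations transport particles through 3D anatomy using evaluated nuclear cross sections):

```python
import random

def mc_depth_dose(n_particles=5000, e0=100.0, nbins=50, depth=10.0, seed=7):
    """Toy Monte Carlo dose tally in a 1D slab: each particle takes
    exponentially distributed free flights (mean free path 1) and loses a
    random fraction of its energy at each collision; the deposited energy
    is accumulated in depth bins."""
    rng = random.Random(seed)
    dose = [0.0] * nbins
    for _ in range(n_particles):
        x, e = 0.0, e0
        while e > 1.0:
            x += rng.expovariate(1.0)         # distance to the next collision
            if x >= depth:
                break                         # particle escapes the slab
            de = e * rng.uniform(0.1, 0.4)    # energy lost in this collision
            dose[int(x / depth * nbins)] += de
            e -= de
    return dose

dose = mc_depth_dose()
print(sum(dose))   # total energy deposited, bounded by n_particles * e0
```

Optimizing a treatment plan then amounts to running such tallies for candidate beam configurations and comparing the dose accumulated in the tumor bins against the dose in organ-at-risk bins.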
Toward Theory-Based Instruction in Scientific Problem Solving.
ERIC Educational Resources Information Center
Heller, Joan I.; And Others
Several empirical and theoretical analyses related to scientific problem-solving are reviewed, including: detailed studies of individuals at different levels of expertise, and computer models simulating some aspects of human information processing during problem solving. Analysis of these studies has revealed many facets about the nature of the…
The Computer in Educational Decision Making. An Introduction and Guide for School Administrators.
ERIC Educational Resources Information Center
Sanders, Susan; And Others
This text provides educational administrators with a working knowledge of the problem-solving techniques of PERT (planning, evaluation, and review technique), Linear Programming, Queueing Theory, and Simulation. The text includes an introduction to decision-making and operations research, four chapters consisting of indepth explanations of each…
Rethinking Technology-Enhanced Physics Teacher Education: From Theory to Practice
ERIC Educational Resources Information Center
Milner-Bolotin, Marina
2016-01-01
This article discusses how modern technology, such as electronic response systems, PeerWise system, data collection and analysis tools, computer simulations, and modeling software can be used in physics methods courses to promote teacher-candidates' professional competencies and their positive attitudes about mathematics and science education. We…
Utility of computer simulations in landscape genetics
Bryan K. Epperson; Brad H. McRae; Kim Scribner; Samuel A. Cushman; Michael S. Rosenberg; Marie-Josee Fortin; Patrick M. A. James; Melanie Murphy; Stephanie Manel; Pierre Legendre; Mark R. T. Dale
2010-01-01
Population genetics theory is primarily based on mathematical models in which spatial complexity and temporal variability are largely ignored. In contrast, the field of landscape genetics expressly focuses on how population genetic processes are affected by complex spatial and temporal environmental heterogeneity. It is spatially explicit and relates patterns to...
Understanding Computation of Impulse Response in Microwave Software Tools
ERIC Educational Resources Information Center
Potrebic, Milka M.; Tosic, Dejan V.; Pejovic, Predrag V.
2010-01-01
In modern microwave engineering curricula, the introduction of the many new topics in microwave industrial development, or of software tools for design and simulation, sometimes results in students having an inadequate understanding of the fundamental theory. The terminology for and the explanation of algorithms for calculating impulse response in…
Advanced computations in plasma physics
NASA Astrophysics Data System (ADS)
Tang, W. M.
2002-05-01
Scientific simulation in tandem with theory and experiment is an essential tool for understanding complex plasma behavior. In this paper we review recent progress and future directions for advanced simulations in magnetically confined plasmas with illustrative examples chosen from magnetic confinement research areas such as microturbulence, magnetohydrodynamics, magnetic reconnection, and others. Significant recent progress has been made in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics, giving increasingly good agreement between experimental observations and computational modeling. This was made possible by innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning widely disparate temporal and spatial scales together with access to powerful new computational resources. In particular, the fusion energy science community has made excellent progress in developing advanced codes for which computer run-time and problem size scale well with the number of processors on massively parallel machines (MPP's). A good example is the effective usage of the full power of multi-teraflop (multi-trillion floating point computations per second) MPP's to produce three-dimensional, general geometry, nonlinear particle simulations which have accelerated progress in understanding the nature of turbulence self-regulation by zonal flows. It should be emphasized that these calculations, which typically utilized billions of particles for thousands of time-steps, would not have been possible without access to powerful present generation MPP computers and the associated diagnostic and visualization capabilities. In general, results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. 
The associated scientific excitement should serve to stimulate improved cross-cutting collaborations with other fields and also to help attract bright young talent to plasma science.
Urbic, Tomaz
2016-01-01
In this paper we applied an analytical theory to the two-dimensional dimerising fluid. We applied Wertheim's thermodynamic perturbation theory (TPT) and integral equation theory (IET) for associative liquids to the dimerising model, with an arbitrary position of the dimerising points relative to the centers of the particles. The theory was used to study thermodynamic and structural properties. To check the accuracy of the theories, we compared the theoretical results with corresponding results obtained by Monte Carlo computer simulations. The theories are accurate for the different patch positions of the model at all values of the temperature and density studied. IET correctly predicts the pair correlation function of the model. Both TPT and IET are in good agreement with the Monte Carlo values of the energy, pressure, chemical potential, compressibility, and ratios of free and bonded particles. PMID:28529396
A multibody knee model with discrete cartilage prediction of tibio-femoral contact mechanics.
Guess, Trent M; Liu, Hongzeng; Bhashyam, Sampath; Thiagarajan, Ganesh
2013-01-01
Combining musculoskeletal simulations with anatomical joint models capable of predicting cartilage contact mechanics would provide a valuable tool for studying the relationships between muscle force and cartilage loading. As a step towards producing multibody musculoskeletal models that include a representation of cartilage tissue mechanics, this research developed a subject-specific multibody knee model that represented the tibial plateau cartilage as discrete rigid bodies interacting with the femur through deformable contacts. Parameters for the compliant contact law were derived using three methods: (1) simplified Hertzian contact theory, (2) simplified elastic foundation contact theory, and (3) parameter optimisation from a finite element (FE) solution. The contact parameters and contact friction were evaluated during a simulated walk in a virtual dynamic knee simulator, and the resulting kinematics were compared with measured in vitro kinematics. The effects on predicted contact pressures and cartilage-bone interface shear forces during the simulated walk were also evaluated. The compliant contact stiffness parameters had a statistically significant effect on predicted contact pressures as well as on all tibio-femoral motions except flexion-extension. Contact friction had no statistically significant effect on contact pressures, but did significantly affect medial-lateral translation and all rotations except flexion-extension. The magnitude of the kinematic differences between model formulations was relatively small, but the contact pressure predictions were sensitive to model formulation. The developed multibody knee model was computationally efficient, with a computation time 283 times faster than an FE simulation using the same geometries and boundary conditions.
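Method (1), simplified Hertzian contact theory, reduces to a closed-form stiffness for a compliant contact law of the form F = k·δ^1.5. A sketch with illustrative values (the radius and moduli below are generic cartilage-like numbers, not the subject-specific parameters of the study):

```python
import math

def hertz_contact_stiffness(R, E1, nu1, E2, nu2):
    """Hertzian sphere-on-half-space contact: F = (4/3) * E* * sqrt(R) * delta^1.5,
    with 1/E* = (1 - nu1^2)/E1 + (1 - nu2^2)/E2. Returns the stiffness k in the
    compliant contact law F = k * delta**1.5."""
    e_star = 1.0 / ((1.0 - nu1**2) / E1 + (1.0 - nu2**2) / E2)
    return (4.0 / 3.0) * e_star * math.sqrt(R)

# Illustrative inputs: condyle radius ~4 cm, cartilage modulus ~10 MPa, nu ~0.45.
k = hertz_contact_stiffness(R=0.04, E1=10e6, nu1=0.45, E2=10e6, nu2=0.45)
force = k * 0.001**1.5     # contact force (N) at 1 mm penetration
print(k, force)
```

Each discrete cartilage body in the multibody model then needs only this scalar k (or an elastic-foundation or FE-fitted analogue) to turn a computed penetration depth into a contact force at simulation speed.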
Wong, Kin-Yiu; Xu, Yuqing; Xu, Liang
2015-11-01
Enzymatic reactions are integral components of many biological functions and malfunctions. The iconic structure along each reaction path for elucidating the reaction mechanism in detail is the molecular structure of the rate-limiting transition state (RLTS). But the RLTS is very hard to capture or visualize experimentally. Despite the lack of an explicit molecular structure of the RLTS in experiment, we can still trace out its unique "fingerprints" by measuring the isotope effects on the reaction rate. This set of "fingerprints" is considered the most direct probe of the RLTS. By contrast, in computer simulations, the molecular structures of a number of transition states can often be precisely visualized on screen, but theoreticians are not sure which one is the actual rate-limiting transition state. As a result, this sets the stage for a perfect "marriage" between experiment and theory for determining the structure of the RLTS, along with the reaction mechanism: experimentalists are responsible for "fingerprinting", whereas theoreticians are responsible for providing candidates that match the "fingerprints". In this Review, the origin of isotope effects on a chemical reaction is discussed from the perspectives of the classical and quantum worlds, respectively (e.g., the origins of inverse kinetic isotope effects, and of all equilibrium isotope effects, are purely quantum). The conventional Bigeleisen equation for isotope effect calculations, as well as its refined version in the framework of Feynman's path integral and Kleinert's variational perturbation (KP) theory for systematically incorporating anharmonicity and (non-parabolic) quantum tunneling, are also presented.
In addition, the outstanding interplay between theory and experiment for successfully deducing RLTS structures and reaction mechanisms is demonstrated by applications to biochemical reactions, namely models of bacterial squalene-to-hopene polycyclization and RNA 2'-O-transphosphorylation. For all these applications, we used our recently developed path-integral method based on the KP theory, called the automated integration-free path-integral (AIF-PI) method, to perform ab initio path-integral calculations of isotope effects. As opposed to conventional path-integral molecular dynamics (PIMD) and Monte Carlo (PIMC) simulations, values calculated from our AIF-PI path-integral method can be as precise as (though not as accurate as) the numerical precision of the computing machine. Lastly, comments are made on the general challenges in theoretical modeling of candidates matching the experimental "fingerprints" of the RLTS. This article is part of a Special Issue entitled: Enzyme Transition States from Theory and Experiment. Copyright © 2015 Elsevier B.V. All rights reserved.
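The zero-point-energy piece of the Bigeleisen treatment can be sketched with a textbook estimate (an invented single-frequency example, with no tunneling and no full partition functions): assume the C-H stretch is entirely lost at the transition state and that deuteration scales its frequency by 1/sqrt(2).

```python
import math

def semiclassical_kie(nu_h_cm=2900.0, t_kelvin=298.0):
    """Back-of-envelope primary kinetic isotope effect: if a C-H stretch of
    frequency nu_h (cm^-1) is fully lost at the transition state, and the
    C-D frequency is nu_h/sqrt(2), then
        KIE ~ exp(h*c*(nu_H - nu_D) / (2*k_B*T)).
    Only the zero-point-energy term of the Bigeleisen equation is kept."""
    hc_over_kb = 1.4388                      # second radiation constant, cm*K
    nu_d = nu_h_cm / math.sqrt(2.0)
    return math.exp(hc_over_kb * (nu_h_cm - nu_d) / (2.0 * t_kelvin))

print(round(semiclassical_kie(), 1))   # 7.8 at room temperature
```

Values near 7 are the classic semiclassical ceiling for primary H/D effects; measured KIEs well above this are the "fingerprint" of the tunneling contributions that the KP-theory refinement is designed to capture.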
Optimal Discrete Event Supervisory Control of Aircraft Gas Turbine Engines
NASA Technical Reports Server (NTRS)
Litt, Jonathan (Technical Monitor); Ray, Asok
2004-01-01
This report presents an application of the recently developed theory of optimal Discrete Event Supervisory (DES) control that is based on a signed real measure of regular languages. The DES control techniques are validated on an aircraft gas turbine engine simulation test bed. The test bed is implemented on a networked computer system in which two computers operate in the client-server mode. Several DES controllers have been tested for engine performance and reliability.
A simplified computational memory model from information processing.
Zhang, Lanhua; Zhang, Dongsheng; Deng, Yuqin; Ding, Xiaoqian; Wang, Yan; Tang, Yiyuan; Sun, Baoliang
2016-11-23
This paper proposes a computational model of memory from the viewpoint of information processing. The model, called the simplified memory information retrieval network (SMIRN), is a bi-modular hierarchical functional memory network obtained by abstracting memory function and simulating memory information processing. First, meta-memory is defined to represent neurons or brain cortices on the basis of biology and graph theory; an intra-modular network is then developed with the modeling algorithm by mapping nodes and edges, and the bi-modular network is delineated with intra-modular and inter-modular connections. Finally, a polynomial retrieval algorithm is introduced. In this paper we simulate the memory phenomena and the functions of memorization and strengthening with information processing algorithms. The theoretical analysis and the simulation results show that the model is in accordance with memory phenomena from an information processing view.
NASA Technical Reports Server (NTRS)
Joslin, Ronald D.
1995-01-01
The spatial evolution of three-dimensional disturbances in an attachment-line boundary layer is computed by direct numerical simulation of the unsteady, incompressible Navier-Stokes equations. Disturbances are introduced into the boundary layer by harmonic sources that involve unsteady suction and blowing through the wall. Various harmonic-source generators are implemented on or near the attachment line, and the disturbance evolutions are compared. Previous two-dimensional simulation results and nonparallel theory are compared with the present results. The three-dimensional simulation results for disturbances with quasi-two-dimensional features indicate growth rates only a few percent larger than the pure two-dimensional results; the results are close enough to enable the use of the more computationally efficient two-dimensional approach. However, true three-dimensional disturbances are more likely in practice and are more stable than two-dimensional disturbances. Disturbances generated off (but near) the attachment line spread both away from and toward the attachment line as they evolve. The evolution pattern is comparable to wave packets in flat-plate boundary-layer flows. Suction stabilizes the quasi-two-dimensional attachment-line instabilities, and blowing destabilizes them; these results qualitatively agree with theory. Furthermore, suction stabilizes the disturbances that develop off the attachment line. Clearly, disturbances generated near the attachment line can supply energy to attachment-line instabilities, but suction can be used to stabilize these instabilities.
NASA Astrophysics Data System (ADS)
Khasare, S. B.
In the present work, an extension of the scaled particle theory (ESPT) for fluids using computer algebra is developed to obtain an equation of state (EOS) for the Lennard-Jones fluid. A suitable functional form for the surface tension S(r,d,ɛ) is assumed, with the intermolecular separation r as a variable: $$S(r,d,\epsilon)=S_{0}[1+2\delta(d/r)^{m}],\qquad r\geq d/2\,,$$ where m is an arbitrary real number, and d and ɛ are related to physical properties such as a suitable average molecular diameter and the binding energy of the molecule, respectively. It is found that for the hard-sphere fluid (ɛ = 0), introducing the above assumption into the scaled particle theory (SPT) framework and choosing m = 1/3 yields an EOS in good agreement with molecular dynamics (MD) computer simulation results. Furthermore, m = -1 gives the Percus-Yevick (pressure) EOS, and m = 1 corresponds to the Percus-Yevick (compressibility) EOS.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoban, Matty J.; Department of Computer Science, University of Oxford, Wolfson Building, Parks Road, Oxford OX1 3QD; Wallman, Joel J.
We consider general settings of Bell inequality experiments with many parties, where each party chooses from a finite number of measurement settings, each with a finite number of outcomes. We investigate the constraints that Bell inequalities place upon the correlations possible in local hidden variable theories using a geometrical picture of correlations. We show that local hidden variable theories can be characterized in terms of limited computational expressiveness, which allows us to characterize families of Bell inequalities. The limited computational expressiveness for many settings (each with many outcomes) generalizes previous results about the many-party situation, each party with a choice of two possible measurements (each with two outcomes). Using this computational picture we present generalizations of the Popescu-Rohrlich nonlocal box for many parties and nonbinary inputs and outputs at each site. Finally, we comment on the effect of preprocessing measurement data in our generalized setting and show that it becomes problematic outside of the binary setting, in that it allows local hidden variable theories to simulate maximally nonlocal correlations such as those of these generalized Popescu-Rohrlich nonlocal boxes.
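The gap between local hidden variable correlations and the Popescu-Rohrlich box can be made concrete in the standard two-party, binary CHSH setting (a textbook illustration, not the paper's many-party generalization; the PR box is represented by one deterministic strategy achieving its correlations):

```python
import itertools

def chsh_value(strategy):
    """CHSH = E(0,0) + E(0,1) + E(1,0) - E(1,1), where strategy(x, y)
    returns the outcome pair (a, b) with a, b in {+1, -1}."""
    def E(x, y):
        a, b = strategy(x, y)
        return a * b
    return E(0, 0) + E(0, 1) + E(1, 0) - E(1, 1)

# Deterministic local strategies: Alice outputs (a0, a1), Bob outputs (b0, b1),
# each depending only on the local input. These span the LHV polytope's vertices.
best_local = max(
    chsh_value(lambda x, y, f=(a0, a1, b0, b1): (f[x], f[2 + y]))
    for a0, a1, b0, b1 in itertools.product((-1, 1), repeat=4)
)

# PR box correlations: a*b = -1 exactly when x = y = 1 (a XOR b = x AND y).
def pr_box(x, y):
    return (1, -1) if (x and y) else (1, 1)

print(best_local)            # 2: the local hidden variable (Bell) bound
print(chsh_value(pr_box))    # 4: the no-signalling maximum
```

Quantum mechanics sits strictly between the two printed values (Tsirelson's bound, 2*sqrt(2)), which is exactly the sort of separation the paper's computational-expressiveness picture generalizes to many parties and nonbinary alphabets.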
NASA Astrophysics Data System (ADS)
Clementi, Enrico
2012-06-01
This is the introductory chapter to the AIP Proceedings volume "Theory and Applications of Computational Chemistry: The First Decade of the Second Millennium", where we discuss the evolution of computational chemistry. Very early variational computational chemistry developments are reported in Sections 1 to 7, 11, and 12 by recalling some of the computational chemistry contributions of the author and his collaborators (from the late 1950s to the mid 1990s); perturbation techniques are not considered in this already extended work. Present-day computational chemistry is partly considered in Sections 8 to 10, where more recent studies by the author and his collaborators are discussed, including the Hartree-Fock-Heitler-London method; a more general discussion of present-day computational chemistry is given in Section 14. The following chapters of this AIP volume provide a view of modern computational chemistry. Future computational chemistry developments can be extrapolated from the chapters of this AIP volume; further, Sections 13 and 15 present an overall analysis of computational chemistry, obtained from the Global Simulation approach, by considering the evolution of scientific knowledge confronted with the opportunities offered by modern computers.
Efficient Conservative Reformulation Schemes for Lithium Intercalation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Urisanga, PC; Rife, D; De, S
Porous electrode theory coupled with transport and reaction mechanisms is a widely used technique to model Li-ion batteries, employing an appropriate discretization or approximation for solid-phase diffusion within electrode particles. One of the major difficulties in simulating Li-ion battery models is the need to account for solid-phase diffusion in a second, radial dimension r, which greatly increases the computation time and cost. Various methods that reduce the computational cost have been introduced to treat this phenomenon, but most of them do not guarantee mass conservation. The aim of this paper is to introduce an inherently mass-conserving yet computationally efficient method for solid-phase diffusion based on Lobatto IIIA quadrature. This paper also presents the coupling of the new solid-phase reformulation scheme with a macro-homogeneous porous electrode theory based pseudo-2D model for the Li-ion battery. (C) The Author(s) 2015. Published by ECS. All rights reserved.
Sergiievskyi, Volodymyr P; Jeanmairet, Guillaume; Levesque, Maximilien; Borgis, Daniel
2014-06-05
Molecular density functional theory (MDFT) offers an efficient implicit-solvent method for estimating molecular solvation free energies while conserving a fully molecular representation of the solvent. Even within a second-order approximation for the free-energy functional, the so-called homogeneous reference fluid approximation, we show that the hydration free energies computed for a data set of 500 organic compounds are of similar quality to those obtained from molecular dynamics free-energy perturbation simulations, at a computational cost reduced by 2-3 orders of magnitude. This requires introducing the proper partial volume correction to transform the results from the grand canonical to the isobaric-isothermal ensemble that is pertinent to experiments. We show that this correction can be extended to 3D-RISM calculations, giving a sound theoretical justification to the empirical partial molar volume corrections that have been proposed recently.
Computational Relativistic Astrophysics Using the Flow Field-Dependent Variation Theory
NASA Technical Reports Server (NTRS)
Richardson, G. A.; Chung, T. J.
2002-01-01
We present our method for solving general relativistic nonideal hydrodynamics. Relativistic effects become pronounced in such cases as jet formation from black hole magnetized accretion disks which may lead to the study of gamma-ray bursts. Nonideal flows are present where radiation, magnetic forces, viscosities, and turbulence play an important role. Our concern in this paper is to reexamine existing numerical simulation tools as to the accuracy and efficiency of computations and introduce a new approach known as the flow field-dependent variation (FDV) method. The main feature of the FDV method consists of accommodating discontinuities of shock waves and high gradients of flow variables such as occur in turbulence and unstable motions. In this paper, the physics involved in the solution of relativistic hydrodynamics and solution strategies of the FDV theory are elaborated. The general relativistic astrophysical flow and shock solver (GRAFSS) is introduced, and some simple example problems for computational relativistic astrophysics (CRA) are demonstrated.
NASA Astrophysics Data System (ADS)
De Raedt, Hans; Michielsen, Kristel; Hess, Karl
2016-12-01
Using Einstein-Podolsky-Rosen-Bohm experiments as an example, we demonstrate that the combination of a digital computer and algorithms, as a metaphor for a perfect laboratory experiment, provides solutions to problems of the foundations of physics. Employing discrete-event simulation, we present a counterexample to John Bell's remarkable "proof" that any theory of physics, which is both Einstein-local and "realistic" (counterfactually definite), results in a strong upper bound to the correlations that are being measured in Einstein-Podolsky-Rosen-Bohm experiments. Our counterexample, which is free of the so-called detection-, coincidence-, memory-, and contextuality loophole, violates this upper bound and fully agrees with the predictions of quantum theory for Einstein-Podolsky-Rosen-Bohm experiments.
Electronic structure, chemical bonding, and geometry of pure and Sr-doped CaCO3.
Stashans, Arvids; Chamba, Gaston; Pinto, Henry
2008-02-01
The electronic structure, chemical bonding, geometry, and effects produced by Sr-doping in CaCO3 have been studied on the basis of density-functional theory using the VASP simulation package and molecular-orbital theory utilizing the CLUSTERD computer code. Two calcium carbonate structures which occur naturally in anhydrous crystalline forms, calcite and aragonite, were considered in the present investigation. The obtained density-of-states diagrams show similar patterns for both materials. The spatial structures are computed and analyzed in comparison to the available experimental data. The electronic properties and atomic displacements due to incorporation of the trace element Sr are discussed in a comparative manner for the two crystalline structures. (c) 2007 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Pathak, Ashish; Raessi, Mehdi
2016-11-01
Using an in-house computational framework, we have studied the interaction of water waves with pitching flap-type ocean wave energy converters (WECs). The computational framework solves the full 3D Navier-Stokes equations and captures important effects, including fluid-solid interaction and nonlinear and viscous effects. The results of the computational tool are first compared against experimental data on the response of a flap-type WEC in a wave tank, and excellent agreement is demonstrated. Further simulations at the model and prototype scales are presented to assess the validity of Froude scaling. The simulations are used to address some important questions, such as the validity range of common WEC modeling approaches that rely heavily on Froude scaling and the inviscid potential flow theory. Additionally, the simulations examine the role of the Keulegan-Carpenter (KC) number, which is often used as a measure of the relative importance of viscous drag on bodies exposed to oscillating flows. The performance of the flap-type WECs is investigated at various KC numbers to establish the relationship between viscous drag and the KC number for such a geometry. That is of significant importance because such a relationship only exists for simple geometries, e.g., a cylinder. Support from the National Science Foundation is gratefully acknowledged.
Water condensation: a multiscale phenomenon.
Jensen, Kasper Risgaard; Fojan, Peter; Jensen, Rasmus Lund; Gurevich, Leonid
2014-02-01
The condensation of water is a phenomenon occurring in multiple situations in everyday life, e.g., when fog is formed or when dew forms on the grass or on windows. This means that this phenomenon plays an important role within the different fields of science including meteorology, building physics, and chemistry. In this review we address condensation models and simulations with the main focus on heterogeneous condensation of water. The condensation process is, at first, described from a thermodynamic viewpoint where the nucleation step is described by the classical nucleation theory. Further, we address the shortcomings of the thermodynamic theory in describing the nucleation and emphasize the importance of nanoscale effects. This leads to the description of condensation from a molecular viewpoint. Also presented is how the nucleation can be simulated by use of molecular models, and how the condensation process is simulated on the macroscale using computational fluid dynamics. Finally, examples of hybrid models combining molecular and macroscale models for the simulation of condensation on a surface are presented.
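The classical-nucleation-theory step mentioned above reduces to two standard formulas for a spherical nucleus; a minimal sketch, with generic water-like parameter values assumed purely for illustration:

```python
import math

def cnt_critical_cluster(gamma, v_mol, T, S, k_B=1.380649e-23):
    """Classical nucleation theory for a spherical nucleus:
    critical radius  r* = 2*gamma*v / (k_B*T*ln S)
    barrier height   dG* = 16*pi*gamma^3*v^2 / (3*(k_B*T*ln S)^2)
    gamma: surface tension (N/m), v_mol: molecular volume (m^3),
    S: supersaturation ratio."""
    kt_lnS = k_B * T * math.log(S)
    r_crit = 2.0 * gamma * v_mol / kt_lnS
    dg_crit = 16.0 * math.pi * gamma**3 * v_mol**2 / (3.0 * kt_lnS**2)
    return r_crit, dg_crit

# Water-like parameters (assumed): surface tension 0.072 N/m,
# molecular volume 3.0e-29 m^3, T = 293 K, supersaturation S = 1.5.
r_c, dg = cnt_critical_cluster(0.072, 3.0e-29, 293.0, 1.5)
```

The nanometer-scale critical radius produced by such parameters is exactly why the review stresses nanoscale corrections to the purely thermodynamic picture: at that size the droplet is only a few molecules across, and the capillarity assumptions behind these formulas begin to break down.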
Nascimento, Daniel R; DePrince, A Eugene
2017-07-06
An explicitly time-dependent (TD) approach to equation-of-motion (EOM) coupled-cluster theory with single and double excitations (CCSD) is implemented for simulating near-edge X-ray absorption fine structure in molecular systems. The TD-EOM-CCSD absorption line shape function is given by the Fourier transform of the CCSD dipole autocorrelation function. We represent this transform by its Padé approximant, which provides converged spectra in much shorter simulation times than are required by the Fourier form. The result is a powerful framework for the blackbox simulation of broadband absorption spectra. K-edge X-ray absorption spectra for carbon, nitrogen, and oxygen in several small molecules are obtained from the real part of the absorption line shape function and are compared with experiment. The computed and experimentally obtained spectra are in good agreement; the mean unsigned error in the predicted peak positions is only 1.2 eV. We also explore the spectral signatures of protonation in these molecules.
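The underlying line-shape relation (absorption spectrum as the Fourier transform of a damped dipole autocorrelation function) can be sketched with a plain discrete Fourier transform; the Padé acceleration used in the paper is not reproduced here, and the test signal below is a synthetic one-mode autocorrelation, not a CCSD result.

```python
import numpy as np

def absorption_spectrum(corr, dt, eta):
    """Absorption line shape from a dipole autocorrelation function C(t):
    I(w) ~ Re  integral C(t) exp(i*w*t) exp(-eta*t) dt,
    evaluated on a uniform time grid by a discrete Fourier transform.
    eta is an artificial damping that broadens the lines."""
    n = len(corr)
    t = np.arange(n) * dt
    damped = corr * np.exp(-eta * t)
    # n * ifft implements sum_k x_k exp(+i w_m t_k); multiply by dt
    # to approximate the continuous integral.
    spec = np.real(np.fft.ifft(damped)) * n * dt
    w = 2.0 * np.pi * np.fft.fftfreq(n, d=dt)
    return np.fft.fftshift(w), np.fft.fftshift(spec)
```

With a model autocorrelation C(t) = exp(-i*w0*t), the resulting spectrum is a Lorentzian of half-width eta centered at w0, mirroring how each excited state contributes a peak at its excitation energy.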
Representing the work of medical protocols for organizational simulation.
Fridsma, D. B.
1998-01-01
Developing and implementing patient care protocols within a specific organizational setting requires knowledge of the protocol, the organization, and the way in which the organization does its work. Computer-based simulation tools have been used in many industries to provide managers with prospective insight into problems of work process and organization design mismatch. Many of these simulation tools are designed for well-understood routine work processes in which there are few contingent tasks. In this paper, we describe theoretical extensions that make it possible to simulate medical protocols using an information-processing theory framework. These simulations will allow medical administrators to test different protocol and organizational designs before actually using them within a particular clinical setting. PMID:9929231
NASA Technical Reports Server (NTRS)
Hopkins, Dale A.
1992-01-01
The presentation gives a partial overview of research and development underway in the Structures Division of LeRC, which collectively is referred to as the Computational Structures Technology Program. The activities in the program are diverse and encompass four major categories: (1) composite materials and structures; (2) probabilistic analysis and reliability; (3) design optimization and expert systems; and (4) computational methods and simulation. The approach of the program is comprehensive and entails: (1) exploration of fundamental theories of structural mechanics to accurately represent the complex physics governing engine structural performance; (2) formulation and implementation of computational techniques and integrated simulation strategies to provide accurate and efficient solutions of the governing theoretical models by exploiting the emerging advances in computer technology; and (3) validation and verification through numerical and experimental tests to establish confidence and define the qualities and limitations of the resulting theoretical models and computational solutions. The program comprises both in-house and sponsored research activities. The remainder of the presentation provides a sample of activities to illustrate the breadth and depth of the program and to demonstrate the accomplishments and benefits that have resulted.
Statistical mechanics of a cat's cradle
NASA Astrophysics Data System (ADS)
Shen, Tongye; Wolynes, Peter G.
2006-11-01
It is believed that, much like a cat's cradle, the cytoskeleton can be thought of as a network of strings under tension. We show that both regular and random bond-disordered networks having bonds that buckle upon compression exhibit a variety of phase transitions as a function of temperature and extension. The results of self-consistent phonon calculations for the regular networks agree very well with computer simulations at finite temperature. The analytic theory also yields a rigidity onset (mechanical percolation) and the fraction of extended bonds for random networks. There is very good agreement with the simulations by Delaney et al (2005 Europhys. Lett. 72 990). The mean field theory reveals a nontranslationally invariant phase with self-generated heterogeneity of tautness, representing 'antiferroelasticity'.
Computational fluid dynamics - The coming revolution
NASA Technical Reports Server (NTRS)
Graves, R. A., Jr.
1982-01-01
The development of aerodynamic theory is traced from the days of Aristotle to the present, with the next stage in computational fluid dynamics dependent on superspeed computers for flow calculations. Additional attention is given to the history of numerical methods inherent in writing computer codes applicable to viscous and inviscid analyses for complex configurations. The advent of the superconducting Josephson junction is noted to place configurational demands on computer design to avoid limitations imposed by the speed of light, and a Japanese projection of a computer capable of several hundred billion operations/sec is mentioned. The NASA Numerical Aerodynamic Simulator is described, showing capabilities of a billion operations/sec with a memory of 240 million words using existing technology. Near-term advances in fluid dynamics are discussed.
NASA Astrophysics Data System (ADS)
Asath, R. Mohamed; Rekha, T. N.; Premkumar, S.; Mathavan, T.; Benial, A. Milton Franklin
2016-12-01
Conformational analysis was carried out for the N-(5-aminopyridin-2-yl)acetamide (APA) molecule. The most stable, optimized structure was predicted by density functional theory calculations using the B3LYP functional with the cc-pVQZ basis set. The optimized structural parameters and vibrational frequencies were calculated. The experimental and theoretical vibrational frequencies were assigned and compared. The ultraviolet-visible spectrum was simulated and validated experimentally. The molecular electrostatic potential surface was simulated. Frontier molecular orbitals and related molecular properties were computed, which reveal the high molecular reactivity and stability of the APA molecule; in addition, the density of states spectrum was simulated. The natural bond orbital analysis was also performed to confirm the bioactivity of the APA molecule. Antidiabetic activity was studied based on molecular docking analysis, and the APA molecule was identified as a good inhibitor against diabetic nephropathy.
Molecular-dynamics simulations of urea nucleation from aqueous solution
Salvalaglio, Matteo; Perego, Claudio; Giberti, Federico; Mazzotti, Marco; Parrinello, Michele
2015-01-01
Despite its ubiquitous character and relevance in many branches of science and engineering, nucleation from solution remains elusive. In this framework, molecular simulations represent a powerful tool to provide insight into nucleation at the molecular scale. In this work, we combine theory and molecular simulations to describe urea nucleation from aqueous solution. Taking advantage of well-tempered metadynamics, we compute the free-energy change associated to the phase transition. We find that such a free-energy profile is characterized by significant finite-size effects that can, however, be accounted for. The description of the nucleation process emerging from our analysis differs from classical nucleation theory. Nucleation of crystal-like clusters is in fact preceded by large concentration fluctuations, indicating a predominant two-step process, whereby embryonic crystal nuclei emerge from dense, disordered urea clusters. Furthermore, in the early stages of nucleation, two different polymorphs are seen to compete. PMID:25492932
Testing simulation and structural models with applications to energy demand
NASA Astrophysics Data System (ADS)
Wolff, Hendrik
2007-12-01
This dissertation deals with energy demand and consists of two parts. Part one proposes a unified econometric framework for modeling energy demand, and examples illustrate the benefits of the technique by estimating the elasticity of substitution between energy and capital. Part two assesses the energy conservation policy of Daylight Saving Time and empirically tests the performance of electricity simulation models. In particular, the chapter "Imposing Monotonicity and Curvature on Flexible Functional Forms" proposes an estimator for inference using structural models derived from economic theory. This is motivated by the fact that in many areas of economic analysis theory restricts the shape as well as other characteristics of functions used to represent economic constructs. Specific contributions are (a) to increase the computational speed and tractability of imposing regularity conditions, (b) to provide regularity preserving point estimates, (c) to avoid biases existent in previous applications, and (d) to illustrate the benefits of our approach via numerical simulation results. The chapter "Can We Close the Gap between the Empirical Model and Economic Theory" discusses the more fundamental question of whether the imposition of a particular theory on a dataset is justified. I propose a hypothesis test to examine whether the estimated empirical model is consistent with the assumed economic theory. Although the proposed methodology could be applied to a wide set of economic models, it is particularly relevant for estimating policy parameters that affect energy markets. This is demonstrated by estimating the Slutsky matrix and the elasticity of substitution between energy and capital, which are crucial parameters used in computable general equilibrium models analyzing energy demand and the impacts of environmental regulations.
Using the Berndt and Wood dataset, I find that capital and energy are complements and that the data are significantly consistent with duality theory. Both results would not necessarily be achieved using standard econometric methods. The final chapter "Daylight Time and Energy" uses a quasi-experiment to evaluate a popular energy conservation policy: we challenge the conventional wisdom that extending Daylight Saving Time (DST) reduces energy demand. Using detailed panel data on half-hourly electricity consumption, prices, and weather conditions from four Australian states we employ a novel 'triple-difference' technique to test the electricity-saving hypothesis. We show that the extension failed to reduce electricity demand and instead increased electricity prices. We also apply the most sophisticated electricity simulation model available in the literature to the Australian data. We find that prior simulation models significantly overstate electricity savings. Our results suggest that extending DST will fail as an instrument to save energy resources.
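The "triple-difference" idea can be sketched as a difference-in-differences of differences over cell means. The numbers below are hypothetical, constructed only to show that a common trend cancels while an embedded treatment effect of 2.0 survives; they are not the Australian data.

```python
def triple_difference(y):
    """DDD estimate from cell means.
    y[state][hours][period]: mean electricity use, where state is
    'extended' (DST-extended) or 'control', hours is 'affected' or
    'unaffected' by the time shift, and period is 'pre' or 'post'."""
    def did(state):
        d_aff = y[state]['affected']['post'] - y[state]['affected']['pre']
        d_un = y[state]['unaffected']['post'] - y[state]['unaffected']['pre']
        return d_aff - d_un
    return did('extended') - did('control')

# Hypothetical cell means: every cell trends +1 from pre to post,
# and the treated/affected cell carries an extra effect of +2.
cells = {
    'extended': {'affected': {'pre': 10.0, 'post': 13.0},
                 'unaffected': {'pre': 20.0, 'post': 21.0}},
    'control':  {'affected': {'pre': 10.0, 'post': 11.0},
                 'unaffected': {'pre': 20.0, 'post': 21.0}},
}
effect = triple_difference(cells)  # → 2.0
```

The nested differencing is what lets the estimator net out both state-specific shocks and hour-of-day patterns before attributing any remaining change to the policy.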
NASA Astrophysics Data System (ADS)
Juntarapaso, Yada
Scanning acoustic microscopy (SAM) is one of the most powerful techniques for nondestructive evaluation, and it is a promising tool for characterizing the elastic properties of biological tissues and cells. Exploring a single cell is important since there is a connection between single-cell biomechanics and human cancer. SAM has been accepted and extensively utilized for acoustical cellular and tissue imaging, including measurements of the mechanical and elastic properties of biological specimens. SAM provides superb advantages in that it is non-invasive, can measure the mechanical properties of biological cells or tissues, and requires no fixation or chemical staining. The first objective of this research is to develop a program for simulating the images and contrast mechanism obtained by high-frequency SAM. Computer simulation algorithms based on MATLAB were built for simulating the images and contrast mechanisms. The mechanical properties of HeLa and MCF-7 cells were computed from measurements of the output signal amplitude as a function of distance from the focal plane of the acoustic lens, known as V(z). Algorithms for simulating V(z) responses involved the calculation of the reflectance function and were created based on ray theory and wave theory. The second objective is to design transducer arrays for SAM. Theoretical simulations of the high-frequency ultrasound array designs, based on the Field II program, were performed to enhance image resolution and volumetric imaging capabilities. Phased-array beam forming and dynamic apodization and focusing were employed in the simulations. The new transducer array design will be state-of-the-art in improving the performance of SAM by electronic scanning and potentially providing a 4-D image of the specimen.
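The V(z) response mentioned above is commonly modeled in wave theory as an angular-spectrum integral over the lens aperture. A minimal sketch follows, with a uniform pupil and a placeholder reflectance assumed for illustration (a real calculation would use the liquid-solid reflectance function of the specimen):

```python
import numpy as np

def vz(z, k, theta_max, reflectance, pupil=None, n=400):
    """Angular-spectrum model of the acoustic lens output voltage:
    V(z) = integral over theta of P(theta)^2 * R(theta)
           * exp(2j*k*z*cos(theta)) * sin(theta) * cos(theta),
    where k is the wavenumber in the coupling fluid, theta_max the
    lens half-aperture, R the reflectance function, P the pupil."""
    th = np.linspace(0.0, theta_max, n)
    p = np.ones_like(th) if pupil is None else pupil(th)
    integrand = (p**2 * reflectance(th)
                 * np.exp(2j * k * z * np.cos(th))
                 * np.sin(th) * np.cos(th))
    return np.trapz(integrand, th)
```

Defocusing the lens (moving z away from the focal plane) dephases the angular components, so |V(z)| falls off and, for a real solid, oscillates with a period set by the leaky surface-wave angle; that oscillation is what encodes the elastic properties.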
High-Performance First-Principles Molecular Dynamics for Predictive Theory and Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gygi, Francois; Galli, Giulia; Schwegler, Eric
This project focused on developing high-performance software tools for First-Principles Molecular Dynamics (FPMD) simulations, and applying them in investigations of materials relevant to energy conversion processes. FPMD is an atomistic simulation method that combines a quantum-mechanical description of electronic structure with the statistical description provided by molecular dynamics (MD) simulations. This reliance on fundamental principles allows FPMD simulations to provide a consistent description of structural, dynamical and electronic properties of a material. This is particularly useful in systems for which reliable empirical models are lacking. FPMD simulations are increasingly used as a predictive tool for applications such as batteries, solar energy conversion, light-emitting devices, electro-chemical energy conversion devices and other materials. During the course of the project, several new features were developed and added to the open-source Qbox FPMD code. The code was further optimized for scalable operation on large-scale, leadership-class DOE computers. When combined with Many-Body Perturbation Theory (MBPT) calculations, this infrastructure was used to investigate structural and electronic properties of liquid water, ice, aqueous solutions, nanoparticles and solid-liquid interfaces. Computing both ionic trajectories and electronic structure in a consistent manner enabled the simulation of several spectroscopic properties, such as Raman spectra, infrared spectra, and sum-frequency generation spectra. The accuracy of the approximations used allowed for direct comparisons of results with experimental data such as optical spectra, X-ray and neutron diffraction spectra.
The software infrastructure developed in this project, as applied to various investigations of solids, liquids and interfaces, demonstrates that FPMD simulations can provide a detailed, atomic-scale picture of structural, vibrational and electronic properties of complex systems relevant to energy conversion devices.
Probability for Weather and Climate
NASA Astrophysics Data System (ADS)
Smith, L. A.
2013-12-01
Over the last 60 years, the availability of large-scale electronic computers has stimulated rapid and significant advances both in meteorology and in our understanding of the Earth System as a whole. The speed of these advances was due, in large part, to the sudden ability to explore nonlinear systems of equations. The computer allows the meteorologist to carry a physical argument to its conclusion; the time scales of weather phenomena then allow the refinement of physical theory, numerical approximation or both in light of new observations. Prior to this extension, as Charney noted, the practicing meteorologist could ignore the results of theory with good conscience. Today, neither the practicing meteorologist nor the practicing climatologist can do so, but to what extent, and in what contexts, should they place the insights of theory above quantitative simulation? And in what circumstances can one confidently estimate the probability of events in the world from model-based simulations? Despite solid advances of theory and insight made possible by the computer, the fidelity of our models of climate differs in kind from the fidelity of models of weather. While all prediction is extrapolation in time, weather resembles interpolation in state space, while climate change is fundamentally an extrapolation. The trichotomy of simulation, observation and theory, which has proven essential in meteorology, will remain incomplete in climate science. Operationally, the roles of probability, indeed the kinds of probability one has access to, are different in operational weather forecasting and climate services. Significant barriers to forming probability forecasts (which can be used rationally as probabilities) are identified. Monte Carlo ensembles can explore sensitivity, diversity, and (sometimes) the likely impact of measurement uncertainty and structural model error.
The aims of different ensemble strategies, and fundamental differences in ensemble design to support decision making versus to advance science, are noted. It is argued that, just as no point forecast is complete without an estimate of its accuracy, no model-based probability forecast is complete without an estimate of its own irrelevance. The same nonlinearities that made the electronic computer so valuable link the selection and assimilation of observations, the formation of ensembles, the evolution of models, the casting of model simulations back into observables, and the presentation of this information to those who use it to take action or to advance science. Timescales of interest exceed the lifetime of a climate model and the career of a climate scientist, disarming the trichotomy that led to swift advances in weather forecasting. Providing credible, informative climate services is a more difficult task. In this context, the value of comparing the forecasts of simulation models not only with each other but also with the performance of simple empirical models, whenever possible, is stressed. The credibility of meteorology is based on its ability to forecast and explain the weather. The credibility of climatology will always be based on flimsier stuff. Solid insights of climate science may be obscured if the severe limits on our ability to see the details of the future, even probabilistically, are not communicated clearly.
Dynamical Approach Study of Spurious Numerics in Nonlinear Computations
NASA Technical Reports Server (NTRS)
Yee, H. C.; Mansour, Nagi (Technical Monitor)
2002-01-01
The last two decades have been an era when computation is ahead of analysis and when very large scale practical computations are increasingly used in poorly understood multiscale complex nonlinear physical problems and non-traditional fields. Ensuring a higher level of confidence in the predictability and reliability (PAR) of these numerical simulations could play a major role in furthering the design, understanding, affordability and safety of our next generation air and space transportation systems, and systems for planetary and atmospheric sciences, and in understanding the evolution and origin of life. The need to guarantee PAR becomes acute when computations offer the ONLY way of solving these types of data limited problems. Employing theory from nonlinear dynamical systems, some building blocks to ensure a higher level of confidence in PAR of numerical simulations have been revealed by the author and world expert collaborators in relevant fields. Five building blocks with supporting numerical examples were discussed. The next step is to utilize knowledge gained by including nonlinear dynamics, bifurcation and chaos theories as an integral part of the numerical process. The third step is to design integrated criteria for reliable and accurate algorithms that cater to the different multiscale nonlinear physics. This includes but is not limited to the construction of appropriate adaptive spatial and temporal discretizations that are suitable for the underlying governing equations. In addition, a multiresolution wavelets approach for adaptive numerical dissipation/filter controls for high speed turbulence, acoustics and combustion simulations will be sought. These steps are cornerstones for guarding against spurious numerical solutions that are solutions of the discretized counterparts but are not solutions of the underlying governing equations.
TOPICAL REVIEW: Advances and challenges in computational plasma science
NASA Astrophysics Data System (ADS)
Tang, W. M.; Chan, V. S.
2005-02-01
Scientific simulation, which provides a natural bridge between theory and experiment, is an essential tool for understanding complex plasma behaviour. Recent advances in simulations of magnetically confined plasmas are reviewed in this paper, with illustrative examples, chosen from associated research areas such as microturbulence, magnetohydrodynamics and other topics. Progress has been stimulated, in particular, by the exponential growth of computer speed along with significant improvements in computer technology. The advances in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics have produced increasingly good agreement between experimental observations and computational modelling. This was enabled by two key factors: (a) innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning widely disparate temporal and spatial scales and (b) access to powerful new computational resources. Excellent progress has been made in developing codes for which computer run-time and problem-size scale well with the number of processors on massively parallel processors (MPPs). Examples include the effective usage of the full power of multi-teraflop (multi-trillion floating point computations per second) MPPs to produce three-dimensional, general geometry, nonlinear particle simulations that have accelerated advances in understanding the nature of turbulence self-regulation by zonal flows. These calculations, which typically utilized billions of particles for thousands of time-steps, would not have been possible without access to powerful present generation MPP computers and the associated diagnostic and visualization capabilities. In looking towards the future, the current results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. 
This should produce the scientific excitement which will help to (a) stimulate enhanced cross-cutting collaborations with other fields and (b) attract the bright young talent needed for the future health of the field of plasma science.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gigley, H.M.
1982-01-01
An artificial intelligence approach to the simulation of neurolinguistically constrained processes in sentence comprehension is developed using control strategies for simulation of cooperative computation in associative networks. The desirability of this control strategy in contrast to ATN and production system strategies is explained. A first-pass implementation of HOPE, an artificial intelligence simulation model of sentence comprehension, constrained by studies of aphasic performance, psycholinguistics, neurolinguistics, and linguistic theory is described. Claims that the model could serve as a basis for sentence production simulation and for a model of language acquisition as associative learning are discussed. HOPE is a model that performs in a normal state and includes a lesion simulation facility. HOPE is also a research tool. Its modifiability and use as a tool to investigate hypothesized causes of degradation in comprehension performance by aphasic patients are described. Issues of using behavioral constraints in modelling and obtaining appropriate data for simulated process modelling are discussed. Finally, problems of validation of the simulation results are raised, and issues of how to interpret clinical results to define the evolution of the model are discussed. Conclusions with respect to the feasibility of artificial intelligence simulation process modelling are discussed based on the current state of research.
Ichikawa, Kazuhisa; Suzuki, Takashi; Murata, Noboru
2010-11-30
Molecular events in biological cells occur in local subregions, where the molecules tend to be small in number. The cytoskeleton, which is important for both the structural changes of cells and their functions, is also a countable entity because of its long fibrous shape. To simulate the local environment using a computer, stochastic simulations should be run. We herein report a new method of stochastic simulation based on random walk and reaction by the collision of all molecules. The microscopic reaction rate P(r) is calculated from the macroscopic rate constant k. The formula involves only local parameters embedded for each molecule. The results of the stochastic simulations of simple second-order, polymerization, Michaelis-Menten-type and other reactions agreed quite well with those of deterministic simulations when the number of molecules was sufficiently large. An analysis of the theory indicated a relationship between variance and the number of molecules in the system, and results of multiple stochastic simulation runs confirmed this relationship. We simulated Ca²⁺ dynamics in a cell by inward flow from a point on the cell surface and the polymerization of G-actin forming F-actin. Our results showed that this theory and method can be used to simulate spatially inhomogeneous events.
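The macroscopic-to-microscopic rate conversion discussed in this abstract can be illustrated with a minimal Gillespie-style stochastic simulation of a second-order reaction A + B → C. This is a simplified sketch, not the authors' collision-based random-walk algorithm; the function name and parameters are illustrative only.

```python
import random

def gillespie_bimolecular(na, nb, k, volume, t_end, seed=0):
    """Stochastic simulation of the second-order reaction A + B -> C.

    The macroscopic rate constant k is converted to a per-pair event
    propensity k / volume -- the same kind of macroscopic-to-microscopic
    conversion that the abstract's P(r) formula addresses.
    """
    rng = random.Random(seed)
    t, nc = 0.0, 0
    while na > 0 and nb > 0:
        a = (k / volume) * na * nb   # total propensity of the reaction
        t += rng.expovariate(a)      # exponential waiting time to next event
        if t > t_end:
            break
        na, nb, nc = na - 1, nb - 1, nc + 1
    return na, nb, nc
```

For large molecule numbers, the mean over many runs approaches the deterministic mass-action solution, mirroring the agreement the abstract reports; for example, with na = nb = 1000, k = 1, volume = 1000 and t_end = 1, roughly half of A is consumed on average.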
Nonperturbative study of dynamical SUSY breaking in N=(2,2) Yang-Mills theory
NASA Astrophysics Data System (ADS)
Catterall, Simon; Jha, Raghav G.; Joseph, Anosh
2018-03-01
We examine the possibility of dynamical supersymmetry breaking in two-dimensional N=(2,2) supersymmetric Yang-Mills theory. The theory is discretized on a Euclidean spacetime lattice using a supersymmetric lattice action. We compute the vacuum energy of the theory at finite temperature and take the zero-temperature limit. Supersymmetry will be spontaneously broken in this theory if the measured ground-state energy is nonzero. By performing simulations on a range of lattices up to 96×96 we are able to perform a careful extrapolation to the continuum limit for a wide range of temperatures. Subsequent extrapolations to the zero-temperature limit yield an upper bound on the ground-state energy density. We find the energy density to be statistically consistent with zero in agreement with the absence of dynamical supersymmetry breaking in this theory.
NASA Technical Reports Server (NTRS)
Worm, Jeffrey A.; Culas, Donald E.
1991-01-01
Computers are not designed to handle terms where uncertainty is present. To deal with uncertainty, techniques other than classical logic must be developed. This paper examines the concepts of statistical analysis, the Dempster-Shafer theory, rough set theory, and fuzzy set theory to solve this problem. The fundamentals of these theories are combined to provide a possible optimal solution. By incorporating principles from these theories, a decision-making process may be simulated by extracting two sets of fuzzy rules: certain rules and possible rules. From these rules, a corresponding measure of belief in each rule is constructed. From this, the degree to which a fuzzy diagnosis is definable in terms of its fuzzy attributes is studied.
Modeling the role of parallel processing in visual search.
Cave, K R; Wolfe, J M
1990-04-01
Treisman's Feature Integration Theory and Julesz's Texton Theory explain many aspects of visual search. However, these theories require that parallel processing mechanisms not be used in many visual searches for which they would be useful, and they imply that visual processing should be much slower than it is. Most importantly, they cannot account for recent data showing that some subjects can perform some conjunction searches very efficiently. Feature Integration Theory can be modified so that it accounts for these data and helps to answer these questions. In this new theory, which we call Guided Search, the parallel stage guides the serial stage as it chooses display elements to process. A computer simulation of Guided Search produces the same general patterns as human subjects in a number of different types of visual search.
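The two-stage architecture described in the Guided Search abstract, a noisy parallel activation map that orders a serial inspection stage, can be sketched as a toy model. This is a hypothetical simplification for illustration, not the authors' simulation; the feature names and noise parameter are assumptions.

```python
import random

def guided_search(items, target, noise=0.25, rng=None):
    """Parallel stage: every display item gets an activation equal to the
    number of target-matching features plus Gaussian noise, computed for
    all items at once.  Serial stage: items are inspected in decreasing
    activation order.  Returns how many inspections reach the target."""
    rng = rng or random.Random()
    scored = [(sum(item[f] == target[f] for f in target) + rng.gauss(0.0, noise),
               item)
              for item in items]
    scored.sort(key=lambda s: s[0], reverse=True)
    return 1 + [item for _, item in scored].index(target)
```

In a conjunction display (e.g. a red circle among red squares and green circles), the target matches on two features while every distractor matches on one, so guidance finds it in far fewer inspections than an unguided serial scan would, which is the efficiency the theory is built to explain.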
Non-linear interaction of a detonation/vorticity wave
NASA Technical Reports Server (NTRS)
Lasseigne, D. G.; Jackson, T. L.; Hussaini, M. Y.
1991-01-01
The interaction of an oblique, overdriven detonation wave with a vorticity disturbance is investigated by a direct two-dimensional numerical simulation using a multi-domain, finite-difference solution of the compressible Euler equations. The results are compared to those of linear theory, which predict that the effect of exothermicity on the interaction is relatively small except possibly near a critical angle where linear theory no longer holds. It is found that the steady-state computational results agree with the results of linear theory. However, for cases with incident angle near the critical angle, moderate disturbance amplitudes, and/or sudden transient encounter with a disturbance, the effects of exothermicity are more pronounced than predicted by linear theory. Finally, it is found that linear theory correctly determines the critical angle.
Understanding valence-shell electron-pair repulsion (VSEPR) theory using origami molecular models
NASA Astrophysics Data System (ADS)
Endah Saraswati, Teguh; Saputro, Sulistyo; Ramli, Murni; Praseptiangga, Danar; Khasanah, Nurul; Marwati, Sri
2017-01-01
Valence-shell electron-pair repulsion (VSEPR) theory is conventionally used to predict molecular geometry. However, it is difficult to explore the full implications of this theory by simply drawing chemical structures. Here, we introduce origami modelling as a more accessible approach for exploration of the VSEPR theory. Our technique is simple, readily accessible and inexpensive compared with other sophisticated methods such as computer simulation or commercial three-dimensional modelling kits. This method can be implemented in chemistry education at both the high school and university levels. We discuss the example of a simple molecular structure prediction for ammonia (NH3). Using the origami model, both molecular shape and the scientific justification can be visualized easily. This ‘hands-on’ approach to building molecules will help promote understanding of VSEPR theory.
Computational Nanotechnology Program
NASA Technical Reports Server (NTRS)
Scuseria, Gustavo E.
1997-01-01
The objectives are: (1) development of methodological and computational tools for the quantum chemistry study of carbon nanostructures and (2) development of a fundamental understanding of the bonding, reactivity, and electronic structure of carbon nanostructures. Our calculations have continued to play a central role in understanding the outcome of the carbon nanotube macroscopic production experiment. The calculations on buckyonions offer a resolution of a long-standing controversy between experiment and theory. Our new tight-binding method offers increased speed for realistic simulations of large carbon nanostructures.
2001 Flight Mechanics Symposium
NASA Technical Reports Server (NTRS)
Lynch, John P. (Editor)
2001-01-01
This conference publication includes papers and abstracts presented at the Flight Mechanics Symposium held on June 19-21, 2001. Sponsored by the Guidance, Navigation and Control Center of Goddard Space Flight Center, this symposium featured technical papers on a wide range of issues related to attitude/orbit determination, prediction and control; attitude simulation; attitude sensor calibration; theoretical foundation of attitude computation; dynamics model improvements; autonomous navigation; constellation design and formation flying; estimation theory and computational techniques; Earth environment mission analysis and design; and, spacecraft re-entry mission design and operations.
Inertial confinement fusion quarterly report, October--December 1992. Volume 3, No. 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dixit, S.N.
1992-12-31
This report contains papers on the following topics: The Beamlet Front End: Prototype of a new pulse generation system; imaging biological objects with x-ray lasers; coherent XUV generation via high-order harmonic generation in rare gases; theory of high-order harmonic generation; two-dimensional computer simulations of ultra-intense, short-pulse laser-plasma interactions; neutron detectors for measuring the fusion burn history of ICF targets; the recirculator; and LASNEX evolves to exploit computer industry advances.
Implications of a quadratic stream definition in radiative transfer theory.
NASA Technical Reports Server (NTRS)
Whitney, C.
1972-01-01
An explicit definition of the radiation-stream concept is stated and applied to approximate the integro-differential equation of radiative transfer with a set of twelve coupled differential equations. Computational efficiency is enhanced by distributing the corresponding streams in three-dimensional space in a totally symmetric way. Polarization is then incorporated in this model. A computer program based on the model is briefly compared with a Monte Carlo program for simulation of horizon scans of the earth's atmosphere. It is found to be considerably faster.
Computer studies of multiple-quantum spin dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murdoch, J.B.
The excitation and detection of multiple-quantum (MQ) transitions in Fourier transform NMR spectroscopy is an interesting problem in the quantum mechanical dynamics of spin systems as well as an important new technique for investigation of molecular structure. In particular, multiple-quantum spectroscopy can be used to simplify overly complex spectra or to separate the various interactions between a nucleus and its environment. The emphasis of this work is on computer simulation of spin-system evolution to better relate theory and experiment.
Tsiper, E V
2006-08-18
The concept of fractional charge is central to the theory of the fractional quantum Hall effect. Here I use exact diagonalization as well as configuration space renormalization to study finite clusters which are large enough to contain two independent edges. I analyze the conditions of resonant tunneling between the two edges. The "computer experiment" reveals a periodic sequence of resonant tunneling events consistent with the experimentally observed fractional quantization of electric charge in units of e/3 and e/5.
NASA Technical Reports Server (NTRS)
Fields, Chris
1989-01-01
Continuous dynamical systems intuitively seem capable of more complex behavior than discrete systems. If analyzed in the framework of the traditional theory of computation, a continuous dynamical system with countably many quasistable states has at least the computational power of a universal Turing machine. Such an analysis assumes, however, the classical notion of measurement. If measurement is viewed nonclassically, a continuous dynamical system cannot, even in principle, exhibit behavior that cannot be simulated by a universal Turing machine.
2008-03-01
computational version of the CASIE architecture serves to demonstrate the functionality of our primary theories. However, implementation of several other...following facts. First, based on Theorem 3 and Theorem 5, the objective function is non-increasing under updating rule (6); second, by the criteria for...reassignment in updating rule (7), it is trivial to show that the objective function is non-increasing under updating rule (7). A Unified View to Graph
NASA Astrophysics Data System (ADS)
Engelhardt, Larry
2015-12-01
We discuss how computers can be used to solve the ordinary differential equations that provide a quantum mechanical description of magnetic resonance. By varying the parameters in these equations and visually exploring how these parameters affect the results, students can quickly gain insights into the nature of magnetic resonance that go beyond the standard presentation found in quantum mechanics textbooks. The results were generated using an IPython notebook, which we provide as an online supplement with interactive plots and animations.
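A minimal sketch of the kind of ODE integration this abstract describes, assuming the standard Bloch equations in the rotating frame without relaxation, integrated with a hand-written RK4 stepper. This is an illustrative reconstruction, not the article's IPython notebook; the function names and default parameters are assumptions.

```python
import math

def bloch_rhs(m, gamma, b1, delta):
    """dM/dt = gamma * (M x B_eff) in the rotating frame (no relaxation).
    B_eff = (b1, 0, delta/gamma): b1 is the drive field, delta the detuning."""
    mx, my, mz = m
    bx, bz = b1, delta / gamma
    return (gamma * (my * bz),
            gamma * (mz * bx - mx * bz),
            gamma * (-my * bx))

def rk4_step(m, dt, gamma, b1, delta):
    """One classical fourth-order Runge-Kutta step for the Bloch equations."""
    def add(a, b, s):
        return tuple(ai + s * bi for ai, bi in zip(a, b))
    k1 = bloch_rhs(m, gamma, b1, delta)
    k2 = bloch_rhs(add(m, k1, dt / 2), gamma, b1, delta)
    k3 = bloch_rhs(add(m, k2, dt / 2), gamma, b1, delta)
    k4 = bloch_rhs(add(m, k3, dt), gamma, b1, delta)
    return tuple(mi + dt / 6 * (a + 2 * b + 2 * c + d)
                 for mi, a, b, c, d in zip(m, k1, k2, k3, k4))

def simulate_pulse(gamma=1.0, b1=1.0, delta=0.0, t_end=math.pi, steps=2000):
    """Integrate through a resonant pi pulse: M should flip from +z to -z."""
    m, dt = (0.0, 0.0, 1.0), t_end / steps
    for _ in range(steps):
        m = rk4_step(m, dt, gamma, b1, delta)
    return m
```

Varying delta and b1 and replotting, as the abstract suggests, shows how the nutation amplitude collapses off resonance, which is exactly the kind of parameter exploration the interactive notebook is meant to support.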
Huang, Chen; Muñoz-García, Ana Belén; Pavone, Michele
2016-12-28
Density-functional embedding theory provides a general way to perform multi-physics quantum mechanics simulations of large-scale materials by dividing the total system's electron density into a cluster's density and its environment's density. It is then possible to compute the accurate local electronic structures and energetics of the embedded cluster with high-level methods, while retaining a low-level description of the environment. The prerequisite step in density-functional embedding theory is the cluster definition. In covalent systems, cutting across the covalent bonds that connect the cluster and its environment leads to dangling bonds (unpaired electrons). These represent a major obstacle for the application of density-functional embedding theory to extended covalent systems. In this work, we developed a simple scheme to define the cluster in covalent systems. Instead of cutting covalent bonds, we directly split the boundary atoms to maintain the valency of the cluster. With this new covalent embedding scheme, we compute the dehydrogenation energies of several different molecules, as well as the binding energy of a cobalt atom on graphene. Well-localized cluster densities are observed, which can facilitate the use of localized basis sets in high-level calculations. The results are found to converge faster with the embedding method than with the other multi-physics approach, ONIOM. This work paves the way for density-functional embedding simulations of heterogeneous systems in which different types of chemical bonds are present.
An Evolutionary Comparison of the Handicap Principle and Hybrid Equilibrium Theories of Signaling.
Kane, Patrick; Zollman, Kevin J S
2015-01-01
The handicap principle has come under significant challenge both from empirical studies and from theoretical work. As a result, a number of alternative explanations for honest signaling have been proposed. This paper compares the evolutionary plausibility of one such alternative, the "hybrid equilibrium," to the handicap principle. We utilize computer simulations to compare these two theories as they are instantiated in Maynard Smith's Sir Philip Sidney game. We conclude that, when both types of communication are possible, evolution is unlikely to lead to handicap signaling and is far more likely to result in the partially honest signaling predicted by hybrid equilibrium theory.
Nonlinear mode coupling theory of the lower-hybrid-drift instability
NASA Technical Reports Server (NTRS)
Drake, J. F.; Guzdar, P. N.; Hassam, A. B.; Huba, J. D.
1984-01-01
A nonlinear mode coupling theory of the lower-hybrid-drift instability is presented. A two-dimensional nonlinear wave equation is derived which describes lower-hybrid drift wave turbulence in the plane transverse to B (k.B = 0), and which is valid for finite beta, collisional and collisionless plasmas. The instability saturates by transferring energy from growing, long wavelength modes to damped, short wavelength modes. Detailed numerical results are presented which compare favorably to both recent computer simulations and experimental observations. Applications of this theory to space plasmas, the earth's magnetotail and the equatorial F region ionosphere, are discussed. Previously announced in STAR as N84-17734
Communication: Simple liquids' high-density viscosity
NASA Astrophysics Data System (ADS)
Costigliola, Lorenzo; Pedersen, Ulf R.; Heyes, David M.; Schrøder, Thomas B.; Dyre, Jeppe C.
2018-02-01
This paper argues that the viscosity of simple fluids at densities above that of the triple point is a specific function of temperature relative to the freezing temperature at the density in question. The proposed viscosity expression, which is arrived at in part by reference to the isomorph theory of systems with hidden scale invariance, describes computer simulations of the Lennard-Jones system as well as argon and methane experimental data and simulation results for an effective-pair-potential model of liquid sodium.
Interleaved concatenated codes: new perspectives on approaching the Shannon limit.
Viterbi, A J; Viterbi, A M; Sindhushayana, N T
1997-09-02
The last few years have witnessed a significant decrease in the gap between the Shannon channel capacity limit and what is practically achievable. Progress has resulted from novel extensions of previously known coding techniques involving interleaved concatenated codes. A considerable body of simulation results is now available, supported by an important but limited theoretical basis. This paper presents a computational technique which further ties simulation results to the known theory and reveals a considerable reduction in the complexity required to approach the Shannon limit.
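The interleaving that underpins these concatenated-code constructions can be illustrated with a simple block interleaver. This is a generic textbook sketch, not the specific interleavers analyzed in the paper; the dimensions are illustrative.

```python
def block_interleave(symbols, rows, cols):
    """Write symbols row-wise into a rows x cols array, read column-wise.
    After deinterleaving, a channel burst error is spread across rows, so
    each component decoder sees only isolated errors."""
    assert len(symbols) == rows * cols
    return [symbols[r * cols + c] for c in range(cols) for r in range(rows)]

def block_deinterleave(symbols, rows, cols):
    """Inverse mapping: restore the original row-wise order."""
    assert len(symbols) == rows * cols
    out = [None] * (rows * cols)
    for i, s in enumerate(symbols):
        c, r = divmod(i, rows)
        out[r * cols + c] = s
    return out
```

For a 3 x 4 interleaver, a burst hitting the first three transmitted symbols corrupts original positions 0, 4 and 8, one per row, which is why interleaving lets short component codes cope with bursty channels.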
Designer: A Knowledge-Based Graphic Design Assistant.
1986-07-01
propulsion. The system consists of a color graphics interface to a mathematical simulation. One can view and manipulate this simulation at a number of... [Figure 4: Icon Sampler, showing valve and multi-plot graph icons.] ...in Computing Systems. New York: ACM, 1983. Paul Smolensky. Harmony Theory: A Mathematical Framework for Stochastic Parallel Processing
ERIC Educational Resources Information Center
Gilstrap, Donald L.
2013-01-01
In addition to qualitative methods presented in chaos and complexity theories in educational research, this article addresses quantitative methods that may show potential for future research studies. Although much in the social and behavioral sciences literature has focused on computer simulations, this article explores current chaos and…
The Use of Computer-Simulated Trajectories to Teach Real Particle Flight
ERIC Educational Resources Information Center
Gagnon, Michel
2011-01-01
The close relationship between charged particles and electromagnetic fields has been well known since the 19th century, thanks to James Clerk Maxwell's brilliant unified theory of electricity and magnetism. Today, electromagnetism is recognized as an essential aspect of human activity and has consequently become a major component of senior…
Wolff, Phillip; Barbey, Aron K.
2015-01-01
Causal composition allows people to generate new causal relations by combining existing causal knowledge. We introduce a new computational model of such reasoning, the force theory, which holds that people compose causal relations by simulating the processes that join forces in the world, and compare this theory with the mental model theory (Khemlani et al., 2014) and the causal model theory (Sloman et al., 2009), which explain causal composition on the basis of mental models and structural equations, respectively. In one experiment, the force theory was uniquely able to account for people's ability to compose causal relationships from complex animations of real-world events. In three additional experiments, the force theory did as well as or better than the other two theories in explaining the causal compositions people generated from linguistically presented causal relations. Implications for causal learning and the hierarchical structure of causal knowledge are discussed. PMID:25653611
Experimental evaluation of a flat wake theory for predicting rotor inflow-wake velocities
NASA Technical Reports Server (NTRS)
Wilson, John C.
1992-01-01
The theory for predicting helicopter inflow-wake velocities called flat wake theory was correlated with several sets of experimental data. The theory was developed by V. E. Baskin of the USSR, and a computer code known as DOWN was developed at Princeton University to implement the theory. The theory treats the wake geometry as rigid without interaction between induced velocities and wake structure. The wake structure is assumed to be a flat sheet of vorticity composed of trailing elements whose strength depends on the azimuthal and radial distributions of circulation on a rotor blade. The code predicts the three orthogonal components of flow velocity in the field surrounding the rotor. The predictions can be utilized in rotor performance and helicopter real-time flight-path simulation. The predictive capability of the coded version of flat wake theory provides vertical inflow patterns similar to experimental patterns.
Monolayer Adsorption of Ar and Kr on Graphite: Theoretical Isotherms and Spreading Pressures
Mulero; Cuadros
1997-02-01
The validity of analytical equations for two-dimensional fluids in the prediction of monolayer adsorption isotherms and spreading pressures of rare gases on graphite is analyzed. The statistical mechanical theory of Steele is used to relate the properties of the adsorbed and two-dimensional fluids. In this theory, graphite is modeled as a perfectly flat surface, which means that only the first-order contribution of the fluid-solid interactions is taken into account. Two analytical equations for two-dimensional Lennard-Jones fluids are used: one proposed by Reddy and O'Shea, based on a fit to computer-simulated pressure and potential-energy results, and the other proposed by Cuadros and Mulero, based on a fit of the Helmholtz free energy calculated from computer-simulated radial distribution functions. The theoretical results are compared with experimental results of Constabaris et al. (J. Chem. Phys. 37, 915 (1962)) for Ar and of Putnam and Fort (J. Phys. Chem. 79, 459 (1975)) for Kr. Good agreement is found using both equations in both cases.
Applications of Computer Simulation Methods in Plastic Forming Technologies for Magnesium Alloys
NASA Astrophysics Data System (ADS)
Zhang, S. H.; Zheng, W. T.; Shang, Y. L.; Wu, X.; Palumbo, G.; Tricarico, L.
2007-05-01
Applications of computer simulation methods in plastic forming of magnesium alloy parts are discussed. Because magnesium alloys possess very poor plastic formability at room temperature, various methods have been tried to improve it: suitable rolling and annealing procedures can produce qualified magnesium alloy sheets with reduced anisotropy and improved formability; the blank can be heated to a warm or hot temperature, with a suitable temperature field designed by heating the tools or cooling the punch; and the deformation speed can be chosen to keep the strain rate in a suitable range. A damage theory accounting for non-isothermal forming is established. Various modeling methods have been tried to cover these situations. The following situations for modeling the forming process of magnesium alloy sheets and tubes are dealt with: (1) modeling for predicting wrinkling and anisotropy in sheet warm forming; (2) damage theory used for predicting rupture in sheet warm forming; (3) modeling for optimizing blank shape and dimensions for sheet warm forming; (4) modeling of non-steady-state creep in hot metal gas forming of AZ31 tubes.
Advanced Computation in Plasma Physics
NASA Astrophysics Data System (ADS)
Tang, William
2001-10-01
Scientific simulation in tandem with theory and experiment is an essential tool for understanding complex plasma behavior. This talk will review recent progress and future directions for advanced simulations in magnetically-confined plasmas with illustrative examples chosen from areas such as microturbulence, magnetohydrodynamics, magnetic reconnection, and others. Significant recent progress has been made in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics, giving increasingly good agreement between experimental observations and computational modeling. This was made possible by innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning widely disparate temporal and spatial scales together with access to powerful new computational resources. In particular, the fusion energy science community has made excellent progress in developing advanced codes for which computer run-time and problem size scale well with the number of processors on massively parallel machines (MPP's). A good example is the effective usage of the full power of multi-teraflop MPP's to produce 3-dimensional, general geometry, nonlinear particle simulations which have accelerated progress in understanding the nature of turbulence self-regulation by zonal flows. It should be emphasized that these calculations, which typically utilized billions of particles for tens of thousands time-steps, would not have been possible without access to powerful present generation MPP computers and the associated diagnostic and visualization capabilities. In general, results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. 
The associated scientific excitement should serve to stimulate improved cross-cutting collaborations with other fields and also to help attract bright young talent to plasma science.
Souris, Kevin; Lee, John Aldo; Sterpin, Edmond
2016-04-01
Accuracy in proton therapy treatment planning can be improved using Monte Carlo (MC) simulations. However, the long computation time of such methods hinders their use in clinical routine. This work aims to develop a fast multipurpose Monte Carlo simulation tool for proton therapy using massively parallel central processing unit (CPU) architectures. A new Monte Carlo code, called MCsquare (many-core Monte Carlo), has been designed and optimized for the last generation of Intel Xeon processors and Intel Xeon Phi coprocessors. These massively parallel architectures offer the flexibility and the computational power suited to MC methods. The class-II condensed history algorithm of MCsquare provides a fast and yet accurate method of simulating heavy charged particles such as protons, deuterons, and alphas inside voxelized geometries. Hard ionizations, with energy losses above a user-specified threshold, are simulated individually, while soft events are regrouped in a multiple scattering theory. Elastic and inelastic nuclear interactions are sampled from ICRU 63 differential cross sections, thereby allowing for the computation of prompt gamma emission profiles. MCsquare has been benchmarked against the gate/geant4 Monte Carlo application for homogeneous and heterogeneous geometries. Comparisons with gate/geant4 for various geometries show deviations within 2%-1 mm. In spite of the limited memory bandwidth of the coprocessor, simulation time is below 25 s for 10^7 primary 200 MeV protons in average soft tissues using all Xeon Phi and CPU resources embedded in a single desktop unit. MCsquare exploits the flexibility of CPU architectures to provide a multipurpose MC simulation tool. Optimized code enables the use of accurate MC calculation within a reasonable computation time, adequate for clinical practice. MCsquare also simulates prompt gamma emission and can thus also be used for in vivo range verification.
SciDAC GSEP: Gyrokinetic Simulation of Energetic Particle Turbulence and Transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Zhihong
Energetic particle (EP) confinement is a key physics issue for burning plasma experiment ITER, the crucial next step in the quest for clean and abundant energy, since ignition relies on self-heating by energetic fusion products (α-particles). Due to the strong coupling of EP with burning thermal plasmas, plasma confinement property in the ignition regime is one of the most uncertain factors when extrapolating from existing fusion devices to the ITER tokamak. EP population in current tokamaks are mostly produced by auxiliary heating such as neutral beam injection (NBI) and radio frequency (RF) heating. Remarkable progress in developing comprehensive EP simulation codes and understanding basic EP physics has been made by two concurrent SciDAC EP projects GSEP funded by the Department of Energy (DOE) Office of Fusion Energy Science (OFES), which have successfully established gyrokinetic turbulence simulation as a necessary paradigm shift for studying the EP confinement in burning plasmas. Verification and validation have rapidly advanced through close collaborations between simulation, theory, and experiment. Furthermore, productive collaborations with computational scientists have enabled EP simulation codes to effectively utilize current petascale computers and emerging exascale computers. We review here key physics progress in the GSEP projects regarding verification and validation of gyrokinetic simulations, nonlinear EP physics, EP coupling with thermal plasmas, and reduced EP transport models. Advances in high performance computing through collaborations with computational scientists that enable these large scale electromagnetic simulations are also highlighted. These results have been widely disseminated in numerous peer-reviewed publications including many Phys. Rev. Lett.
papers and many invited presentations at prominent fusion conferences such as the biennial International Atomic Energy Agency (IAEA) Fusion Energy Conference and the annual meeting of the American Physical Society, Division of Plasma Physics (APS-DPP).
NASA Astrophysics Data System (ADS)
Wang, Tianmin; Gao, Fei; Hu, Wangyu; Lai, Wensheng; Lu, Guang-Hong; Zu, Xiaotao
2009-09-01
The Ninth International Conference on Computer Simulation of Radiation Effects in Solids (COSIRES 2008) was hosted by Beihang University in Beijing, China from 12 to 17 October 2008. Started in 1992 in Berlin, Germany, this conference series has been held biennially in Santa Barbara, CA, USA (1994); Guildford, UK (1996); Okayama, Japan (1998); State College, PA, USA (2000); Dresden, Germany (2002); Helsinki, Finland (2004); and Richland, WA, USA (2006). The COSIRES conferences are the foremost international forum on the theory, development and application of advanced computer simulation methods and algorithms to achieve fundamental understanding and predictive modeling of the interaction of energetic particles and clusters with solids. As can be seen in the proceedings of the COSIRES conferences, these computer simulation methods and algorithms have proven very useful for the study of fundamental radiation effect processes, which are not easily accessible by experimental methods owing to their small time and length scales. Moreover, with advances in computing power, these methods have developed remarkably across scales ranging from mesoscopic to atomistic, and even down to electronic levels, as well as in the coupling of the different scales. They are now becoming increasingly applicable to materials processing and performance prediction in advanced engineering and energy-production technologies.
Nishizawa, Hiroaki; Nishimura, Yoshifumi; Kobayashi, Masato; Irle, Stephan; Nakai, Hiromi
2016-08-05
The linear-scaling divide-and-conquer (DC) quantum chemical methodology is applied to density-functional tight-binding (DFTB) theory to develop a massively parallel program that achieves on-the-fly molecular reaction dynamics simulations of huge systems from scratch. Functions to perform large-scale geometry optimization and molecular dynamics on the DC-DFTB potential energy surface are implemented in the program, called DC-DFTB-K. A novel interpolation-based algorithm is developed for parallelizing the determination of the Fermi level in the DC method. The performance of the DC-DFTB-K program is assessed using a laboratory computer and the K computer. Numerical tests demonstrate the high efficiency of the DC-DFTB-K program: a single-point energy-gradient calculation of a one-million-atom system is completed within 60 s using 7290 nodes of the K computer. © 2016 Wiley Periodicals, Inc.
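The DC method requires one chemical potential shared by all subsystems so that fractional occupations sum to the total electron count. The sketch below shows the standard bisection baseline for that step (the paper's interpolation-based algorithm replaces this; all names and the spin-free convention are illustrative assumptions, not the DC-DFTB-K implementation).

```python
import numpy as np

def fermi_occupation(eigs, mu, beta):
    """Fermi-Dirac occupations for orbital energies eigs at chemical potential mu."""
    return 1.0 / (1.0 + np.exp(beta * (eigs - mu)))

def find_fermi_level(subsystem_eigs, n_electrons, beta=100.0, tol=1e-10):
    """Bisect for the common Fermi level mu such that the occupations summed
    over all subsystems equal the total electron count (spin-free convention)."""
    all_eigs = np.concatenate(subsystem_eigs)
    lo, hi = all_eigs.min() - 1.0, all_eigs.max() + 1.0
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        n = sum(fermi_occupation(e, mu, beta).sum() for e in subsystem_eigs)
        if n < n_electrons:
            lo = mu
        else:
            hi = mu
    return 0.5 * (lo + hi)
```

Bisection is robust but serial in the electron-count evaluation, which is why an interpolation scheme that reduces the number of such global evaluations parallelizes better.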
Helfer, Peter; Shultz, Thomas R
2014-12-01
The widespread availability of calorie-dense food is believed to be a contributing cause of an epidemic of obesity and associated diseases throughout the world. One possible countermeasure is to empower consumers to make healthier food choices with useful nutrition labeling. An important part of this endeavor is to determine the usability of existing and proposed labeling schemes. Here, we report an experiment on how four different labeling schemes affect the speed and nutritional value of food choices. We then apply decision field theory, a leading computational model of human decision making, to simulate the experimental results. The psychology experiment shows that quantitative, single-attribute labeling schemes have greater usability than multiattribute and binary ones, and that they remain effective under moderate time pressure. The computational model simulates these psychological results and provides explanatory insights into them. This work shows how experimental psychology and computational modeling can contribute to the evaluation and improvement of nutrition-labeling schemes. © 2014 New York Academy of Sciences.
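Decision field theory belongs to the family of sequential-sampling models, in which noisy preferences accumulate toward a decision threshold; time pressure corresponds to lowering the threshold or truncating deliberation. The toy sketch below is a generic caricature of that mechanism with made-up parameters, not the authors' calibrated model.

```python
import numpy as np

rng = np.random.default_rng(1)

def dft_choice(valences, decay=0.05, noise=0.1, threshold=1.0, max_steps=10000):
    """Toy sequential-sampling run: preferences accumulate noisy valences
    with decay until one option crosses the threshold.
    Returns (chosen option index, number of steps taken)."""
    v = np.asarray(valences, dtype=float)
    p = np.zeros(v.size)
    for t in range(1, max_steps + 1):
        p = (1.0 - decay) * p + v + noise * rng.standard_normal(v.size)
        if p.max() >= threshold:
            return int(p.argmax()), t
    return int(p.argmax()), max_steps
```

With a clearly superior option and low noise, the model chooses it quickly; raising noise or lowering the threshold trades accuracy for speed, the qualitative pattern probed by the labeling experiment.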
Theory and computer simulation of relaxor ferroelectrics doped by off-center impurities
NASA Astrophysics Data System (ADS)
Su, Chin-Cheng
A family of ferroelectric materials has relaxation-type dynamics. These materials, called relaxor ferroelectrics, show remarkable dielectric and electromechanical properties, different from those of normal ferroelectrics, that are important for many practical applications. Despite the engineering importance of relaxor ferroelectrics, the physical origin of the relaxor behavior is not fully understood. The purpose of this thesis is to advance the theory of relaxor ferroelectrics and to develop a model that can be used for computer simulation of the static dielectric and dynamic properties and of their relation to the concentration of dopant ions. In this thesis, a Ginzburg-Landau type theory of the interaction of randomly distributed local dipoles immersed in a paraelectric crystal is developed. The interaction is caused by the polarization of the host lattice generated by these dipoles. It is long-ranged and decays proportionally to the inverse distance between the local dipoles. The obtained effective Hamiltonian of the dipole-dipole interaction is employed for both Monte Carlo and Master Equation simulations of the dielectric and ferroelectric properties of a system with off-center dopant ions producing local dipoles. The computer simulations show that at low concentration of dopant ions the paraelectric state transforms into a macroscopically paraelectric state consisting of randomly oriented polar clusters. The behavior of the system is similar to that of a spin glass. The polar clusters amplify the effective dipole moment and significantly increase the dielectric constant. It is shown that the interaction between the clusters results in a spectrum of relaxation times and in the transition to the relaxor state. The real and imaginary parts of the susceptibility of this state are calculated.
The slim polarization hysteresis loop, which usually appears in high-temperature non-polarized relaxor ferroelectrics, is also obtained for our doped system under similar physical conditions. At intermediate dopant concentration, the material undergoes a diffuse phase transition, smeared within a temperature range, to a ferroelectric state. A further increase in the dopant concentration makes the transition sharper and closer to the conventional ferroelectric transition. The results obtained are compared with the behavior of the K1-xLixTaO3 relaxor ferroelectric.
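The Monte Carlo approach described above can be caricatured with Ising-like local dipoles coupled through an interaction that decays as the inverse inter-dipole distance. The sketch below is a minimal Metropolis illustration of that idea (units, names, and the two-state dipole simplification are assumptions, not the thesis model).

```python
import numpy as np

rng = np.random.default_rng(0)

def build_inverse_distance(positions):
    """Coupling matrix M_ij = 1/r_ij for randomly placed dipoles; zero diagonal."""
    d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return 1.0 / d

def total_energy(spins, coupling, J=1.0):
    """E = -(J/2) * sum_ij M_ij s_i s_j for dipole orientations s_i = +/-1."""
    return -0.5 * J * spins @ coupling @ spins

def metropolis_sweep(spins, coupling, T, J=1.0):
    """One Metropolis sweep; at low T the system relaxes toward aligned clusters."""
    for i in rng.permutation(spins.size):
        # energy change from flipping dipole i
        dE = 2.0 * J * spins[i] * (coupling[i] @ spins)
        if dE <= 0.0 or rng.random() < np.exp(-dE / T):
            spins[i] = -spins[i]
    return spins
```

Repeated sweeps at decreasing temperature visualize how randomly placed, long-range-coupled dipoles freeze into polar clusters rather than a single uniform domain.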
Simulator for neural networks and action potentials.
Baxter, Douglas A; Byrne, John H
2007-01-01
A key challenge for neuroinformatics is to devise methods for representing, accessing, and integrating vast amounts of diverse and complex data. A useful approach to represent and integrate complex data sets is to develop mathematical models [Arbib (The Handbook of Brain Theory and Neural Networks, pp. 741-745, 2003); Arbib and Grethe (Computing the Brain: A Guide to Neuroinformatics, 2001); Ascoli (Computational Neuroanatomy: Principles and Methods, 2002); Bower and Bolouri (Computational Modeling of Genetic and Biochemical Networks, 2001); Hines et al. (J. Comput. Neurosci. 17, 7-11, 2004); Shepherd et al. (Trends Neurosci. 21, 460-468, 1998); Sivakumaran et al. (Bioinformatics 19, 408-415, 2003); Smolen et al. (Neuron 26, 567-580, 2000); Vadigepalli et al. (OMICS 7, 235-252, 2003)]. Models of neural systems provide quantitative and modifiable frameworks for representing data and analyzing neural function. These models can be developed and solved using neurosimulators. One such neurosimulator is simulator for neural networks and action potentials (SNNAP) [Ziv (J. Neurophysiol. 71, 294-308, 1994)]. SNNAP is a versatile and user-friendly tool for developing and simulating models of neurons and neural networks. SNNAP simulates many features of neuronal function, including ionic currents and their modulation by intracellular ions and/or second messengers, and synaptic transmission and synaptic plasticity. SNNAP is written in Java and runs on most computers. Moreover, SNNAP provides a graphical user interface (GUI) and does not require programming skills. This chapter describes several capabilities of SNNAP and illustrates methods for simulating neurons and neural networks. SNNAP is available at http://snnap.uth.tmc.edu .
Refined Zigzag Theory for Laminated Composite and Sandwich Plates
NASA Technical Reports Server (NTRS)
Tessler, Alexander; DiSciuva, Marco; Gherlone, Marco
2009-01-01
A refined zigzag theory is presented for laminated-composite and sandwich plates that includes the kinematics of first-order shear deformation theory as its baseline. The theory is variationally consistent and is derived from the virtual work principle. Novel piecewise-linear zigzag functions that provide a more realistic representation of the deformation states of transverse-shear-flexible plates than other similar theories are used. The formulation does not enforce full continuity of the transverse shear stresses across the plate's thickness, yet is robust. Transverse-shear correction factors are not required to yield accurate results. The theory is devoid of the shortcomings inherent in previous zigzag theories, including shear-force inconsistency and difficulties in simulating clamped boundary conditions, which have greatly limited the accuracy of those theories. This new theory requires only C0-continuous kinematic approximations and is perfectly suited for developing computationally efficient finite elements. The theory should be useful for obtaining relatively efficient, accurate estimates of structural response needed to design high-performance load-bearing aerospace structures.
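For reference, the refined zigzag kinematic field has the following schematic form (layer index k; the symbols follow common refined-zigzag-theory notation and are given here as a sketch, not a reproduction of the paper's equations):

```latex
\begin{aligned}
u^{(k)}(x,y,z) &= u_0(x,y) + z\,\theta_x(x,y) + \phi_x^{(k)}(z)\,\psi_x(x,y)\\
v^{(k)}(x,y,z) &= v_0(x,y) + z\,\theta_y(x,y) + \phi_y^{(k)}(z)\,\psi_y(x,y)\\
w(x,y,z) &= w_0(x,y)
\end{aligned}
```

Here u0, v0, w0 are midplane displacements, θx and θy are the average bending rotations of the first-order shear deformation baseline, and φ^(k) are the piecewise-linear zigzag functions with amplitudes ψ; only C0 continuity of these kinematic variables is required, consistent with the finite-element suitability noted above.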
NASA Astrophysics Data System (ADS)
Feldt, Jonas; Miranda, Sebastião; Pratas, Frederico; Roma, Nuno; Tomás, Pedro; Mata, Ricardo A.
2017-12-01
In this work, we present an optimized perturbative quantum mechanics/molecular mechanics (QM/MM) method for use in Metropolis Monte Carlo simulations. The model adopted is particularly tailored for the simulation of molecular systems in solution but can be readily extended to other applications, such as catalysis in enzymatic environments. The electrostatic coupling between the QM and MM systems is simplified by applying perturbation theory to estimate the energy changes caused by a movement in the MM system. This approximation, together with the effective use of GPU acceleration, leads to a negligible added computational cost for the sampling of the environment. Benchmark calculations are carried out to evaluate the impact of the approximations applied and the overall computational performance.
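The essence of the perturbative coupling is that an MM move changes only classical electrostatic terms while the QM density is held frozen, so only the moved site's interactions need recomputing. The sketch below illustrates this with the QM system frozen as fixed point charges (a simplification; names are hypothetical, and the actual method perturbs the QM density itself on the GPU).

```python
import numpy as np

def pair_energy(qi, ri, charges, positions):
    """Coulomb energy of one site against a set of fixed charges (atomic units)."""
    d = np.linalg.norm(positions - ri, axis=1)
    return qi * np.sum(charges / d)

def delta_e_mm_move(k, new_pos, mm_q, mm_pos, qm_q, qm_pos):
    """Energy change when MM site k moves: only MM-MM and QM-MM terms
    involving site k change; the QM charge distribution stays frozen."""
    others = np.arange(len(mm_q)) != k
    e_old = (pair_energy(mm_q[k], mm_pos[k], mm_q[others], mm_pos[others])
             + pair_energy(mm_q[k], mm_pos[k], qm_q, qm_pos))
    e_new = (pair_energy(mm_q[k], new_pos, mm_q[others], mm_pos[others])
             + pair_energy(mm_q[k], new_pos, qm_q, qm_pos))
    return e_new - e_old

def metropolis_accept(delta_e, beta, rng):
    """Standard Metropolis criterion on the estimated energy change."""
    return delta_e <= 0.0 or rng.random() < np.exp(-beta * delta_e)
```

Because the expensive QM calculation is skipped for each MM trial move, the cost of sampling the environment becomes negligible, which is the source of the speedup reported above.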
Feldt, Jonas; Miranda, Sebastião; Pratas, Frederico; Roma, Nuno; Tomás, Pedro; Mata, Ricardo A
2017-12-28
In this work, we present an optimized perturbative quantum mechanics/molecular mechanics (QM/MM) method for use in Metropolis Monte Carlo simulations. The model adopted is particularly tailored for the simulation of molecular systems in solution but can be readily extended to other applications, such as catalysis in enzymatic environments. The electrostatic coupling between the QM and MM systems is simplified by applying perturbation theory to estimate the energy changes caused by a movement in the MM system. This approximation, together with the effective use of GPU acceleration, leads to a negligible added computational cost for the sampling of the environment. Benchmark calculations are carried out to evaluate the impact of the approximations applied and the overall computational performance.
Propulsive efficiency of the underwater dolphin kick in humans.
von Loebbecke, Alfred; Mittal, Rajat; Fish, Frank; Mark, Russell
2009-05-01
Three-dimensional, fully unsteady computational fluid dynamics simulations of five Olympic-level swimmers performing the underwater dolphin kick are used to estimate the swimmers' propulsive efficiencies. These estimates are compared with those of a cetacean performing the dolphin kick. The geometries of the swimmers and the cetacean are based on laser and CT scans, respectively, and the stroke kinematics are based on underwater video footage. The simulations indicate that the propulsive efficiency for human swimmers varies over a relatively wide range, from about 11% to 29%. The efficiency of the cetacean is found to be about 56%, significantly higher than that of the human swimmers. The computed efficiency is found to correlate neither with slender-body theory predictions nor with the Strouhal number.
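For context, the Strouhal number used in such swimming comparisons is commonly defined from the kick frequency f, the peak-to-peak amplitude A of the oscillating body part, and the swimming speed U (a standard definition, not the authors' code):

```python
def strouhal_number(frequency_hz, peak_to_peak_amplitude_m, speed_m_s):
    """St = f * A / U; efficient oscillatory propulsion in fish and cetaceans
    is often associated with St roughly in the 0.2-0.4 range."""
    return frequency_hz * peak_to_peak_amplitude_m / speed_m_s
```

The finding that computed efficiency does not track St suggests that the quasi-two-dimensional scaling arguments behind the optimal-St range do not transfer directly to the human body shape.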
Extended Lagrangian Density Functional Tight-Binding Molecular Dynamics for Molecules and Solids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aradi, Bálint; Niklasson, Anders M. N.; Frauenheim, Thomas
A computationally fast quantum mechanical molecular dynamics scheme using an extended Lagrangian density functional tight-binding formulation has been developed and implemented in the DFTB+ electronic structure program package for simulations of solids and molecular systems. The scheme combines the computational speed of self-consistent density functional tight-binding theory with the efficiency and long-term accuracy of extended Lagrangian Born–Oppenheimer molecular dynamics. Furthermore, for systems without self-consistent charge instabilities, only a single diagonalization or construction of the single-particle density matrix is required in each time step. The molecular dynamics simulation scheme can also be applied to a broad range of problems in materials science, chemistry, and biology.
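The core of extended Lagrangian Born–Oppenheimer MD is a time-reversible, Verlet-like propagation of auxiliary electronic degrees of freedom alongside the nuclei, which is what removes the need for a full SCF cycle each step. A schematic single step (dissipation terms omitted; variable names are illustrative, not the DFTB+ internals):

```python
import numpy as np

def xl_step(n_t, n_prev, computed_charges, dt, omega):
    """One Verlet step for the auxiliary charges n in extended Lagrangian
    Born-Oppenheimer MD (no dissipation):
        n(t+dt) = 2 n(t) - n(t-dt) + dt^2 omega^2 (q[n(t)] - n(t))
    where q[n(t)] are charges from a single density construction at n(t)."""
    return 2.0 * n_t - n_prev + (dt * omega) ** 2 * (computed_charges - n_t)
```

Because the auxiliary charges oscillate harmonically about the self-consistent solution, the propagation stays close to the Born–Oppenheimer surface over long trajectories without iterating to convergence.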
Extended Lagrangian Density Functional Tight-Binding Molecular Dynamics for Molecules and Solids
Aradi, Bálint; Niklasson, Anders M. N.; Frauenheim, Thomas
2015-06-26
A computationally fast quantum mechanical molecular dynamics scheme using an extended Lagrangian density functional tight-binding formulation has been developed and implemented in the DFTB+ electronic structure program package for simulations of solids and molecular systems. The scheme combines the computational speed of self-consistent density functional tight-binding theory with the efficiency and long-term accuracy of extended Lagrangian Born–Oppenheimer molecular dynamics. Furthermore, for systems without self-consistent charge instabilities, only a single diagonalization or construction of the single-particle density matrix is required in each time step. The molecular dynamics simulation scheme can also be applied to a broad range of problems in materials science, chemistry, and biology.
Computational design and experimental validation of new thermal barrier systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Shengmin
2015-03-31
The focus of this project is on the development of a reliable and efficient ab initio based computational high-temperature material design method that can be used to assist Thermal Barrier Coating (TBC) bond-coat and top-coat design. Experimental evaluations of the new TBCs are conducted to confirm their properties. Southern University is the subcontractor on this project, with a focus on the development of the computational simulation method. We have applied the ab initio density functional theory (DFT) method and molecular dynamics simulations to screen top coats and bond coats for gas turbine thermal barrier coating design and validation applications. For experimental validation, our focus is on the hot corrosion performance of different TBC systems. For example, for one of the top coatings studied, we examined the thermal stability of TaZr2.75O8 and confirmed its hot corrosion performance.
Modeling the fusion of cylindrical bioink particles in post bioprinting structure formation
NASA Astrophysics Data System (ADS)
McCune, Matt; Shafiee, Ashkan; Forgacs, Gabor; Kosztin, Ioan
2015-03-01
Cellular Particle Dynamics (CPD) is an effective computational method to describe the shape evolution and biomechanical relaxation processes in multicellular systems. Thus, CPD is a useful tool to predict the outcome of post-printing structure formation in bioprinting. The predictive power of CPD has been demonstrated for multicellular systems composed of spherical bioink units. Experiments and computer simulations were related through an independently developed theoretical formalism based on continuum mechanics. Here we generalize the CPD formalism to (i) include the cylindrical bioink particles often used in specific bioprinting applications, (ii) describe the more realistic experimental situation in which both the length and the volume of the cylindrical bioink units decrease during post-printing structure formation, and (iii) directly connect CPD simulations to the corresponding experiments without the need for the intermediate continuum theory, which is inherently based on simplifying assumptions. Work supported by NSF [PHY-0957914]. Computer time provided by the University of Missouri Bioinformatics Consortium.
Computational strategies in the dynamic simulation of constrained flexible MBS
NASA Technical Reports Server (NTRS)
Amirouche, F. M. L.; Xie, M.
1993-01-01
This research focuses on the computational dynamics of flexible constrained multibody systems. First, a recursive mapping formulation of the kinematical expressions in minimum dimension, together with the matrix representation of the equations of motion, is presented. The method employs Kane's equations, the finite element method (FEM), and concepts of continuum mechanics. The generalized active forces are extended to include the effects of high-temperature conditions, such as creep, thermal stress, and elastic-plastic deformation. The time-variant constraint relations for rolling/contact conditions between two flexible bodies are also studied. The constraints for validation of MBS simulation of gear meshing contact using a modified Timoshenko beam theory are also presented. The last part deals with minimization of vibration/deformation of the elastic beam in multibody systems making use of time-variant boundary conditions. The methodologies and computational procedures developed are being implemented in a program called DYAMUS.
Theoretical and experimental physical methods of neutron-capture therapy
NASA Astrophysics Data System (ADS)
Borisov, G. I.
2011-09-01
This review is based to a substantial degree on our priority developments and research at the IR-8 reactor of the Russian Research Centre Kurchatov Institute. New theoretical and experimental methods of neutron-capture therapy (NCT) have been developed and applied in practice, including a general analytical and semi-empirical theory of NCT based on classical neutron physics and its main sections (elementary theories of moderation, diffusion, reflection, and absorption of neutrons) rather than on methods of mathematical simulation. The theory is, first of all, intended for practical application by physicists, engineers, biologists, and physicians. It can be mastered by anyone with a higher education of almost any kind and minimal experience in operating a personal computer.
An Economical Semi-Analytical Orbit Theory for Retarded Satellite Motion About an Oblate Planet
NASA Technical Reports Server (NTRS)
Gordon, R. A.
1980-01-01
Brouwer's and Brouwer-Lyddane's use of the von Zeipel-Delaunay method is employed to develop an efficient analytical orbit theory suitable for microcomputers. A succinctly simple, pseudo-phenomenologically conceptualized algorithm is introduced which accurately and economically synthesizes the modeling of drag effects. The method lends itself to effortless and efficient computer mechanization. Simulated trajectory data are employed to illustrate the theory's ability to accurately accommodate oblateness and drag effects for microcomputer-based ground or onboard predicted orbital representation. Real tracking data are used to demonstrate that the theory's orbit determination and orbit prediction capabilities are favorably adaptable to, and comparable with, results obtained using complex definitive Cowell method solutions for satellites experiencing significant drag effects.
Alam, Mohammad Jane; Ahmad, Shabbir
2015-02-05
FTIR, FT-Raman, and electronic spectra of the allantoin molecule are recorded and investigated using DFT and MP2 methods with the 6-311++G(d,p) basis set. The molecular structure, anharmonic vibrational spectra, natural atomic charges, non-linear optical properties, etc., have been computed for the ground state of allantoin. The anharmonic vibrational frequencies are calculated using the PT2 algorithm (Barone method) as well as the VSCF and CC-VSCF methods. These methods yield results in remarkable agreement with experiment. The coupling strengths between pairs of modes are also calculated using coupling integrals based on the 2MR-QFF approximation. Simulations of allantoin dimers have also been performed at the B3LYP/6-311++G(d,p) level of theory to investigate the effect of intermolecular interactions on the molecular structure and vibrational frequencies of the monomer. Vibrational assignments are made with great accuracy using potential energy distribution (PED) calculations and animated modes. The combination and overtone bands have also been identified in the FTIR spectrum with the help of anharmonic computations. The electronic spectra are simulated in the gas phase and in solution at the TD-B3LYP/6-311++G(d,p) level of theory. Important global quantities such as electronegativity, electronic chemical potential, electrophilicity index, and chemical hardness and softness based on the HOMO and LUMO energy eigenvalues are also computed. NBO analysis has been performed for the monomer and dimers of allantoin at the B3LYP/6-311++G(d,p) level of theory. Copyright © 2014 Elsevier B.V. All rights reserved.
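For context, the PT2 (second-order vibrational perturbation theory) fundamentals referenced above follow the standard relation between harmonic frequencies ω and anharmonicity constants χ (textbook form, not specific to this paper):

```latex
\nu_i \;=\; \omega_i \;+\; 2\chi_{ii} \;+\; \tfrac{1}{2}\sum_{j\neq i}\chi_{ij}
```

The off-diagonal χ_ij are precisely the mode-pair couplings whose strengths are quantified above via the 2MR-QFF coupling integrals.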
Opalka, Daniel; Sprik, Michiel
2014-06-10
The electronic structure of simple hydrated ions represents one of the most challenging problems in electronic-structure theory. Spectroscopic experiments identified the lowest excited state of the solvated hydroxide as a charge-transfer-to-solvent (CTTS) state. In the present work we report computations of the absorption spectrum of the solvated hydroxide ion, treating both solvent and solute strictly at the same level of theory. The average absorption spectrum up to 25 eV has been computed for samples taken from periodic ab initio molecular dynamics simulations. The experimentally observed CTTS state near the onset of the absorption threshold has been analyzed at the generalized-gradient approximation (GGA) level and with a hybrid density functional. Based on results for the lowest excitation energies computed with the HSE hybrid functional and a Davidson diagonalization scheme, the CTTS transition is found to lie 0.6 eV below the first absorption band of liquid water. The transfer of an electron to the solvent can be assigned to an excitation from the solute 2pπ orbitals, which are subject to a small energetic splitting due to the asymmetric solvent environment, to the significantly delocalized lowest unoccupied orbital of the solvent. The distribution of the centers of the excited state shows that CTTS along the OH(-) axis of the hydroxide ion is avoided. Furthermore, our simulations indicate that the systematic error arising in the calculated spectrum at the GGA level originates from a poor description of the valence band energies in the solution.
Equivalent circuit simulation of HPEM-induced transient responses at nonlinear loads
NASA Astrophysics Data System (ADS)
Kotzev, Miroslav; Bi, Xiaotang; Kreitlow, Matthias; Gronwald, Frank
2017-09-01
In this paper, the equivalent circuit modeling of a nonlinearly loaded loop antenna and its transient responses to HPEM field excitations are investigated. For the circuit modeling, the general strategy of characterizing the nonlinearly loaded antenna by a linear and a nonlinear circuit part is pursued. The linear circuit part can be determined by standard methods of antenna theory and numerical field computation. The modeling of the nonlinear circuit part requires realistic circuit models of the nonlinear loads, which are given by Schottky diodes. Combining both parts, appropriate circuit models are obtained and analyzed by means of a standard SPICE circuit simulator. The main result is that full-wave simulation results can be reproduced in this way. Furthermore, it is clearly seen that equivalent circuit modeling offers considerable advantages with respect to computation speed and also leads to improved physical insight into the coupling between the HPEM field excitation and the nonlinearly loaded loop antenna.
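The nonlinear loads enter such SPICE models through the Shockley diode law, which is the core of the standard large-signal diode model. A minimal sketch (the parameter values are illustrative defaults, not those of the diodes characterized in the paper):

```python
import numpy as np

def schottky_current(v, i_s=1e-8, n=1.05, v_t=0.02585):
    """Shockley diode law I = I_s * (exp(V / (n * V_T)) - 1):
    i_s  saturation current (A), n  ideality factor,
    v_t  thermal voltage kT/q at room temperature (V)."""
    return i_s * (np.exp(v / (n * v_t)) - 1.0)
```

It is this exponential I-V characteristic that rectifies and distorts the induced HPEM transient, which is why a linear antenna model alone cannot reproduce the measured load response.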
Quantum decision-maker theory and simulation
NASA Astrophysics Data System (ADS)
Zak, Michail; Meyers, Ronald E.; Deacon, Keith S.
2000-07-01
A quantum device simulating the human decision-making process is introduced. It consists of quantum recurrent nets generating stochastic processes that represent the motor dynamics, and of classical neural nets describing the evolution of the probabilities of these processes, which represent the mental dynamics. The autonomy of the decision-making process is achieved by a feedback from the mental to the motor dynamics which changes the stochastic matrix based upon the probability distribution. This feedback replaces unavailable external information by an internal knowledge base stored in the mental model in the form of probability distributions. As a result, the coupled motor-mental dynamics is described by a nonlinear version of Markov chains which can decrease entropy without an external source of information. Applications to common-sense-based decisions as well as to evolutionary games are discussed. An example exhibiting self-organization is computed using quantum computer simulation. Force-on-force and mutual aircraft engagements using the quantum decision-maker dynamics are considered.
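A toy version of the described feedback is a map whose transition rule depends on the current distribution itself, sharpening probability onto already-likely states so that entropy falls without external information. This is only an illustrative caricature of a nonlinear Markov chain, not the authors' quantum recurrent net.

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a probability vector (natural log)."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def nonlinear_markov_step(p, sharpening=2.0):
    """One step of a toy nonlinear chain: the update rule is built from the
    current distribution itself, reinforcing already-likely states
    (a 'mental -> motor' feedback caricature). Returns the new distribution."""
    w = p ** sharpening
    return w / w.sum()
```

Iterating this map from any non-uniform start concentrates all probability on the initially most likely state, a simple picture of self-organization driven purely by internal feedback.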
A simplified computational memory model from information processing
Zhang, Lanhua; Zhang, Dongsheng; Deng, Yuqin; Ding, Xiaoqian; Wang, Yan; Tang, Yiyuan; Sun, Baoliang
2016-01-01
This paper proposes a computational model of memory from the viewpoint of information processing. The model, called the simplified memory information retrieval network (SMIRN), is a bi-modular hierarchical functional memory network obtained by abstracting memory function and simulating memory information processing. First, a meta-memory is defined to represent neurons or brain cortices on the basis of biological and graph theories, and an intra-modular network is developed with the modeling algorithm by mapping nodes and edges; the bi-modular network is then delineated with intra-modular and inter-modular connections. Finally, a polynomial retrieval algorithm is introduced. In this paper we simulate the memory phenomena and the functions of memorization and strengthening by information processing algorithms. The theoretical analysis and the simulation results show that the model is in accordance with memory phenomena from the information processing view. PMID:27876847
NASA Astrophysics Data System (ADS)
Huhn, William Paul; Lange, Björn; Yu, Victor; Blum, Volker; Lee, Seyong; Yoon, Mina
Density-functional theory has been well established as the dominant quantum-mechanical computational method in the materials community. Large, accurate simulations become very challenging on small to mid-scale computers and require high-performance compute platforms to succeed. GPU acceleration is one promising approach. In this talk, we present a first implementation of all-electron density-functional theory in the FHI-aims code for massively parallel GPU-based platforms. Special attention is paid to the update of the density and to the integration of the Hamiltonian and overlap matrices, realized in a domain decomposition scheme on non-uniform grids. The initial implementation scales well across nodes on ORNL's Titan Cray XK7 supercomputer (8 to 64 nodes, 16 MPI ranks/node) and shows an overall runtime speedup of 1.4x due to utilization of the K20X Tesla GPU on each Titan node, with the charge-density update showing a speedup of 2x. Further acceleration opportunities will be discussed. Work supported by the LDRD Program of ORNL managed by UT-Battelle, LLC, for the U.S. DOE and by the Oak Ridge Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725.
NASA Astrophysics Data System (ADS)
Hao, Ming-Hong; Scheraga, Harold A.
1995-01-01
A comparative study of protein folding with an analytical theory and computer simulations, respectively, is reported. The theory is based on an improved mean-field formalism which, in addition to the usual mean-field approximations, takes into account the distributions of energies in the subsets of conformational states. Sequence-specific properties of proteins are parametrized in the theory by two sets of variables, one for the energetics of mean-field interactions and one for the distribution of energies. Simulations are carried out on model polypeptides with different sequences, with different chain lengths, and with different interaction potentials, ranging from strong biases towards certain local chain states (bond angles and torsional angles) to complete absence of local conformational preferences. Theoretical analysis of the simulation results for the model polypeptides reveals three different types of behavior in the folding transition from the statistical coiled state to the compact globular state; these include a cooperative two-state transition, a continuous folding, and a glasslike transition. It is found that, with the fitted theoretical parameters which are specific for each polypeptide under a different potential, the mean-field theory can describe the thermodynamic properties and folding behavior of the different polypeptides accurately. By comparing the theoretical descriptions with simulation results, we verify the basic assumptions of the theory and, thereby, obtain new insights about the folding transitions of proteins. 
It is found that the cooperativity of the first-order folding transition of the model polypeptides is determined mainly by long-range interactions, in particular the dipolar orientation; the local interactions (e.g., bond-angle and torsion-angle potentials) have only marginal effect on the cooperative characteristic of the folding, but have a large impact on the difference in energy between the folded lowest-energy structure and the unfolded conformations of a protein.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dahms, Rainer N.
2014-12-31
The fidelity of Gradient Theory simulations depends on the accuracy of saturation properties and influence parameters, and requires equations of state (EoS) which exhibit fundamentally consistent behavior in the two-phase regime. Widely applied multi-parameter EoS, however, are generally invalid inside this region. Hence, they may not be fully suitable for application in concert with Gradient Theory despite their ability to accurately predict saturation properties. The commonly assumed temperature dependence of pure component influence parameters usually restricts their validity to subcritical temperature regimes. This may distort predictions for general multi-component interfaces, where temperatures often exceed the critical temperature of vapor phase components; there, the calculation of influence parameters is not well defined. In this paper, one of the first studies is presented in which Gradient Theory is combined with a next-generation Helmholtz energy EoS which facilitates fundamentally consistent calculations over the entire two-phase regime. Illustrated with pentafluoroethane as an example, reference simulations using this method are performed. They demonstrate the significance of such high-accuracy and fundamentally consistent calculations for the computation of interfacial properties. These reference simulations are compared to corresponding results from the cubic PR EoS, widely applied in combination with Gradient Theory, and the mBWR EoS. The analysis reveals that neither of these two methods succeeds in consistently capturing the qualitative distribution of the key thermodynamic properties obtained in Gradient Theory. Furthermore, a generalized expression for the pure component influence parameter is presented. This development is informed by its fundamental definition based on the direct correlation function of the homogeneous fluid and by the presented high-fidelity simulations of interfacial density profiles.
As a result, the new model preserves the accuracy of previous temperature-dependent expressions, remains well-defined at supercritical temperatures, and is fully suitable for calculations of general multi-component two-phase interfaces.
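For a planar single-component interface, Gradient Theory links the influence parameter κ and the bulk EoS through the standard surface-tension expression (schematic textbook form consistent with the workflow above; f is the Helmholtz energy density, μ_eq and p_eq the equilibrium chemical potential and pressure):

```latex
\sigma \;=\; \int_{\rho_v}^{\rho_l}\sqrt{2\,\kappa\,\Delta\omega(\rho)}\;\mathrm{d}\rho,
\qquad
\Delta\omega(\rho) \;=\; f(\rho) \;-\; \rho\,\mu_{\mathrm{eq}} \;+\; p_{\mathrm{eq}}
```

Because the integrand samples f(ρ) at densities between the coexisting phases, any inconsistency of the EoS inside the two-phase regime propagates directly into σ and the density profile, which is the central argument above.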
Passive Motion Paradigm: An Alternative to Optimal Control
Mohan, Vishwanathan; Morasso, Pietro
2011-01-01
In recent years, optimal control theory (OCT) has emerged as the leading approach for investigating neural control of movement and motor cognition in two complementary research lines: behavioral neuroscience and humanoid robotics. In both cases, there are general problems that need to be addressed, such as the “degrees of freedom (DoFs) problem,” the common core of production, observation, reasoning, and learning of “actions.” OCT, directly derived from engineering design techniques for control systems, quantifies task goals as “cost functions” and uses the sophisticated formal tools of optimal control to obtain desired behavior (and predictions). We propose an alternative, “softer” approach, the passive motion paradigm (PMP), that we believe is closer to the biomechanics and cybernetics of action. The basic idea is that actions (overt as well as covert) are the consequences of an internal simulation process that “animates” the body schema with the attractor dynamics of force fields induced by the goal and task-specific constraints. This internal simulation offers the brain a way to dynamically link motor redundancy with task-oriented constraints “at runtime,” hence solving the “DoFs problem” without explicit kinematic inversion and cost-function computation. We argue that the function of such computational machinery is not restricted to shaping motor output during action execution, but also provides the self with information on the feasibility, consequences, understanding, and meaning of “potential actions.” In this sense, taking into account recent developments in neuroscience (motor imagery, simulation theory of covert actions, the mirror neuron system) and in embodied robotics, PMP offers a novel framework for understanding motor cognition that goes beyond the engineering control paradigm provided by OCT.
Therefore, the paper is at the same time a review of the PMP rationale, as a computational theory, and a perspective presentation of how to develop it for designing better cognitive architectures. PMID:22207846
Geoid Recovery Using Geophysical Inverse Theory Applied to Satellite to Satellite Tracking Data
NASA Technical Reports Server (NTRS)
Gaposchkin, E. M.
2000-01-01
This report describes a new method for determination of the geopotential, or the equivalent geoid. It is based on Satellite-to-Satellite Tracking (SST) of two co-orbiting low earth satellites separated by a few hundred kilometers. The analysis is aimed at the GRACE Mission, though it is generally applicable to any SST data. It is proposed that the SST be viewed as a mapping mission. That is, the result will be maps of the geoid or gravity, as contrasted with determination of spherical harmonics or Fourier coefficients. A method has been developed, based on Geophysical Inverse Theory (GIT), that can provide maps at a prescribed (desired) resolution and the corresponding error map from the SST data. This computation can be done area by area, avoiding simultaneous recovery of all the geopotential information. The necessary elements of potential theory, celestial mechanics, and Geophysical Inverse Theory are described, a computation architecture is described, and the results of several simulations are presented. Centimeter-accuracy geoids with 50 to 100 km resolution can be recovered with a 30 to 60 day mission.
NASA Technical Reports Server (NTRS)
Pohorille, Andrew; New, Michael H.; Schweighofer, Karl; Wilson, Michael A.; DeVincenzi, Donald L. (Technical Monitor)
2000-01-01
Two of Ernest Overton's lasting contributions to biology are the Meyer-Overton relationship between the potency of an anesthetic and its solubility in oil, and the Overton rule which relates the permeability of a membrane to the oil-water partition coefficient of the permeating molecule. A growing body of experimental evidence, however, cannot be reconciled with these theories. In particular, the molecular nature of membranes, unknown to Overton, needs to be included in any description of these phenomena. Computer simulations are ideally suited for providing atomic-level information about the behavior of small molecules in membranes. The authors discuss simulation studies relevant to Overton's ideas. Through simulations it was found that anesthetics tend to concentrate at interfaces and their anesthetic potency correlates better with solubility at the water-membrane interface than with solubility in oil. Simulation studies of membrane permeation revealed the anisotropic nature of the membranes, as evidenced, for example, by the highly nonuniform distribution of free volume in the bilayer. This, in turn, influences the diffusion rates of solutes, which increase with the depth in the membrane. Small solutes tend to move by hopping between voids in the bilayer, and this hopping motion may be responsible for the deviation from the Overton rule of the permeation rates of these molecules.
Developing an Asteroid Rotational Theory
NASA Astrophysics Data System (ADS)
Geis, Gena; Williams, Miguel; Linder, Tyler; Pakey, Donald
2018-01-01
The goal of this project is to develop an asteroid rotational theory from first principles. Starting from first principles provides a firm foundation for computer simulations which can be used to analyze multiple variables at once, such as size, rotation period, tensile strength, and density. The initial theory will be presented along with early models of applying the theory to the asteroid population. Early results confirm previous work by Pravec et al. (2002) showing that the majority of asteroids larger than 200 m have negligible tensile strength and spin rates close to their critical breakup point. Additionally, results show that an object with zero tensile strength has a maximum rotational rate determined by the object's density, not its size. Therefore, an iron asteroid with a density of 8000 kg/m^3 would have a minimum spin period of 1.16 h if the only forces were gravitational and centrifugal. The short-term goal is to include material forces in the simulations to determine what tensile strength will allow the high spin rates of asteroids smaller than 150 m.
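The quoted minimum spin period follows from balancing surface gravity against centrifugal acceleration for a strengthless sphere, which gives a period that depends only on density. A minimal sketch of that check (variable names are ours, not from the paper):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def critical_spin_period_hours(density_kg_m3):
    """Minimum rotation period (hours) of a strengthless, self-gravitating
    sphere: equating surface gravity G*M/R^2 with the centrifugal
    acceleration omega^2 * R gives P = sqrt(3*pi/(G*rho)), which is
    independent of the object's size."""
    period_s = math.sqrt(3.0 * math.pi / (G * density_kg_m3))
    return period_s / 3600.0

# Iron density from the abstract, 8000 kg/m^3 -> about 1.17 h by this
# formula, consistent with the abstract's quoted 1.16 h
print(critical_spin_period_hours(8000.0))
```

The size independence is the abstract's second point: radius cancels between the gravitational and centrifugal terms.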
Ma, Qingyu; He, Bin
2007-08-21
A theoretical study on the magnetoacoustic signal generation with magnetic induction and its applications to electrical conductivity reconstruction is conducted. An object with a concentric cylindrical geometry is located in a static magnetic field and a pulsed magnetic field. Driven by Lorentz force generated by the static magnetic field, the magnetically induced eddy current produces acoustic vibration and the propagated sound wave is received by a transducer around the object to reconstruct the corresponding electrical conductivity distribution of the object. A theory on the magnetoacoustic waveform generation for a circular symmetric model is provided as a forward problem. The explicit formulae and quantitative algorithm for the electrical conductivity reconstruction are then presented as an inverse problem. Computer simulations were conducted to test the proposed theory and assess the performance of the inverse algorithms for a multi-layer cylindrical model. The present simulation results confirm the validity of the proposed theory and suggest the feasibility of reconstructing electrical conductivity distribution based on the proposed theory on the magnetoacoustic signal generation with magnetic induction.
NASA Astrophysics Data System (ADS)
Solana, J. R.; Akhouri, B. P.
2018-07-01
A perturbation theory for square-well chain fluids is developed within the scheme of the (generalised) Wertheim thermodynamic perturbation theory. The theory is based on the Pavlyukhin parametrisations [Y. T. Pavlyukhin, J. Struct. Chem. 53, 476 (2012)] of their simulation data for the first four perturbation terms in the high temperature expansion of the Helmholtz free energy of square-well monomer fluids, combined with a second-order perturbation theory for the contact value of the radial distribution function of the square-well monomer fluid that enters into the bonding contribution. To obtain the latter perturbation terms, we have performed computer simulations in the hard-sphere reference system. The importance of the perturbation terms beyond the second-order one for the monomer fluid, and of the approximations of different orders in the bonding contribution for the chain fluids, in the predicted equation of state, excess energy and liquid-vapour coexistence densities is analysed.
Reconstruction of Vectorial Acoustic Sources in Time-Domain Tomography
Xia, Rongmin; Li, Xu; He, Bin
2009-01-01
A new theory is proposed for the reconstruction of curl-free vector field, whose divergence serves as acoustic source. The theory is applied to reconstruct vector acoustic sources from the scalar acoustic signals measured on a surface enclosing the source area. It is shown that, under certain conditions, the scalar acoustic measurements can be vectorized according to the known measurement geometry and subsequently be used to reconstruct the original vector field. Theoretically, this method extends the application domain of the existing acoustic reciprocity principle from a scalar field to a vector field, indicating that the stimulating vectorial source and the transmitted acoustic pressure vector (acoustic pressure vectorized according to certain measurement geometry) are interchangeable. Computer simulation studies were conducted to evaluate the proposed theory, and the numerical results suggest that reconstruction of a vector field using the proposed theory is not sensitive to variation in the detecting distance. The present theory may be applied to magnetoacoustic tomography with magnetic induction (MAT-MI) for reconstructing current distribution from acoustic measurements. A simulation on MAT-MI shows that, compared to existing methods, the present method can give an accurate estimation on the source current distribution and a better conductivity reconstruction. PMID:19211344
Development of an aeroelastic methodology for surface morphing rotors
NASA Astrophysics Data System (ADS)
Cook, James R.
Helicopter performance capabilities are limited by maximum lift characteristics and vibratory loading. In high-speed forward flight, dynamic stall and transonic flow greatly increase the amplitude of vibratory loads. Experiments and computational simulations alike have indicated that a variety of active rotor control devices are capable of reducing vibratory loads. For example, periodic blade twist and flap excitation have been optimized to reduce vibratory loads in various rotors. Airfoil geometry can also be modified in order to increase the lift coefficient, delay stall, or weaken transonic effects. To explore the potential benefits of active controls, computational methods are being developed for aeroelastic rotor evaluation, including coupling between computational fluid dynamics (CFD) and computational structural dynamics (CSD) solvers. In many contemporary CFD/CSD coupling methods it is assumed that the airfoil is rigid to reduce the interface by a single dimension. Some methods retain the conventional one-dimensional beam model while prescribing an airfoil shape to simulate active chord deformation. However, to simulate the actual response of a compliant airfoil it is necessary to include deformations that originate not only from control devices (such as piezoelectric actuators), but also from inertial forces, elastic stresses, and aerodynamic pressures. An accurate representation of the physics requires an interaction with a more complete representation of loads and geometry. A CFD/CSD coupling methodology capable of communicating three-dimensional structural deformations and a distribution of aerodynamic forces over the wetted blade surface has not yet been developed. In this research an interface is created within the Fully Unstructured Navier-Stokes (FUN3D) solver that communicates aerodynamic forces on the blade surface to University of Michigan's Nonlinear Active Beam Solver (UM/NLABS -- referred to as NLABS in this thesis).
Interface routines are developed for transmission of force and deflection information to achieve an aeroelastic coupling updated at each time step. The method is validated first by comparing the integrated aerodynamic work at CFD and CSD nodes to verify work conservation across the interface. Second, the method is verified by comparing the sectional blade loads and deflections of a rotor in hover and in forward flight with experimental data. Finally, stability analyses for pitch/plunge flutter and camber flutter are performed with comprehensive CSD/low-order-aerodynamics and tightly coupled CFD/CSD simulations and compared to analytical solutions of Peters' thin airfoil theory to verify proper aeroelastic behavior. The effects of simple harmonic camber actuation are examined and compared to the response predicted by Peters' finite-state (F-S) theory. In anticipation of active rotor experiments inside enclosed facilities, computational simulations are performed to evaluate the capability of CFD for accurately simulating flow inside enclosed volumes. A computational methodology for accurately simulating a rotor inside a test chamber is developed to determine the influence of test facility components and turbulence modeling on performance predictions. A number of factors that influence the physical accuracy of the simulation, such as temporal resolution, grid resolution, and aeroelasticity, are also evaluated.
Modeling, Simulation and Analysis of Public Key Infrastructure
NASA Technical Reports Server (NTRS)
Liu, Yuan-Kwei; Tuey, Richard; Ma, Paul (Technical Monitor)
1998-01-01
Security is an essential part of network communication. Advances in cryptography have provided solutions to many network security requirements. Public Key Infrastructure (PKI) is the foundation of cryptography applications. The main objective of this research is to design a model to simulate a reliable, scalable, manageable, and high-performance public key infrastructure. We build a model to simulate the NASA public key infrastructure using SimProcess and MATLAB software. The simulation spans from the top level down to the computations needed for encryption, decryption, digital signatures, and a secure web server; the secure web server application could be utilized in wireless communications. The results of the simulation are analyzed and confirmed by using queueing theory.
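The final step, checking simulated performance against queueing theory, can be illustrated with the simplest analytically solvable case, an M/M/1 queue (one server, Poisson arrivals, exponential service). The rates below are illustrative stand-ins, not NASA's measured workload:

```python
import random

def simulate_mm1(arrival_rate, service_rate, n_customers, seed=1):
    """Event-driven single-server queue with exponential interarrival
    and service times; returns the mean time a request spends in the
    system (waiting plus service)."""
    rng = random.Random(seed)
    arrival = 0.0          # arrival time of the current request
    server_free_at = 0.0   # time the server next becomes idle
    total = 0.0
    for _ in range(n_customers):
        arrival += rng.expovariate(arrival_rate)
        start = max(arrival, server_free_at)
        server_free_at = start + rng.expovariate(service_rate)
        total += server_free_at - arrival
    return total / n_customers

lam, mu = 2.0, 5.0           # requests/s arriving vs. service capacity
theory = 1.0 / (mu - lam)    # M/M/1 mean time in system
print(theory, simulate_mm1(lam, mu, 200_000))
```

Agreement between the two numbers is exactly the kind of confirmation the abstract describes, with the crypto operations playing the role of the service times.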
A geostatistical extreme-value framework for fast simulation of natural hazard events
Stephenson, David B.
2016-01-01
We develop a statistical framework for simulating natural hazard events that combines extreme value theory and geostatistics. Robust generalized additive model forms represent generalized Pareto marginal distribution parameters while a Student’s t-process captures spatial dependence and gives a continuous-space framework for natural hazard event simulations. Efficiency of the simulation method allows many years of data (typically over 10 000) to be obtained at relatively little computational cost. This makes the model viable for forming the hazard module of a catastrophe model. We illustrate the framework by simulating maximum wind gusts for European windstorms, which are found to have realistic marginal and spatial properties, and validate well against wind gust measurements. PMID:27279768
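The generalized Pareto marginals used here can be sampled cheaply by inverse transform, which is one reason many simulated years are computationally affordable. A minimal sketch (parameter values are illustrative, not fitted windstorm values):

```python
import math
import random

def sample_gpd(xi, sigma, n, seed=0):
    """Inverse-transform sampling of the generalized Pareto
    distribution. The quantile function is (sigma/xi)*((1-u)^(-xi) - 1)
    for xi != 0, reducing to -sigma*log(1-u) (exponential) as xi -> 0."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        u = rng.random()
        if abs(xi) < 1e-12:
            samples.append(-sigma * math.log(1.0 - u))
        else:
            samples.append(sigma / xi * ((1.0 - u) ** (-xi) - 1.0))
    return samples

# For xi < 1 the GPD mean is sigma/(1 - xi); check by simulation
xs = sample_gpd(xi=0.2, sigma=1.0, n=100_000)
print(sum(xs) / len(xs))  # close to 1.25
```

In the paper's framework these marginal draws are coupled across sites by a Student's t-process to obtain spatially coherent events.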
Wormlike Chain Theory and Bending of Short DNA
NASA Astrophysics Data System (ADS)
Mazur, Alexey K.
2007-05-01
The probability distributions for bending angles in double helical DNA obtained in all-atom molecular dynamics simulations are compared with theoretical predictions. The computed distributions agree remarkably well with the wormlike chain theory and qualitatively differ from predictions of the subelastic chain model. The computed data exhibit only small anomalies in the apparent flexibility of short DNA and cannot account for the recently reported AFM data. It is possible that the current atomistic DNA models miss some essential mechanisms of DNA bending on intermediate length scales. Analysis of bent DNA structures reveals, however, that the bending motion is structurally heterogeneous and directionally anisotropic on the length scales where the experimental anomalies were detected. These effects are essential for interpretation of the experimental data, and they can also be responsible for the apparent discrepancy.
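For reference, the wormlike-chain benchmark such simulations are compared against is, in the harmonic approximation, a bending-angle density P(theta) proportional to sin(theta)*exp(-lp*theta^2/(2L)) for a segment of contour length L and persistence length lp; in the stiff limit its mean-square angle approaches 2L/lp. A numerical check of that limit (the numbers below are generic DNA-like values, not the paper's data):

```python
import math

def wlc_mean_square_angle(lp, L, n=20_000):
    """<theta^2> for the harmonic wormlike-chain bending density
    P(theta) ~ sin(theta) * exp(-lp*theta^2/(2*L)) on [0, pi],
    evaluated by midpoint-rule integration (the grid spacing cancels
    between numerator and denominator)."""
    d = math.pi / n
    norm = second = 0.0
    for i in range(n):
        t = (i + 0.5) * d
        w = math.sin(t) * math.exp(-lp * t * t / (2.0 * L))
        norm += w
        second += t * t * w
    return second / norm

# Stiff limit lp >> L: <theta^2> -> 2*L/lp (here 0.04 rad^2),
# e.g. persistence length ~50 nm and a 1 nm segment
print(wlc_mean_square_angle(lp=50.0, L=1.0))
```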
Application of a Resource Theory for Magic States to Fault-Tolerant Quantum Computing.
Howard, Mark; Campbell, Earl
2017-03-03
Motivated by their necessity for most fault-tolerant quantum computation schemes, we formulate a resource theory for magic states. First, we show that robustness of magic is a well-behaved magic monotone that operationally quantifies the classical simulation overhead for a Gottesman-Knill-type scheme using ancillary magic states. Our framework subsequently finds immediate application in the task of synthesizing non-Clifford gates using magic states. When magic states are interspersed with Clifford gates, Pauli measurements, and stabilizer ancillas (the most general synthesis scenario), the class of synthesizable unitaries is hard to characterize. Our techniques can place nontrivial lower bounds on the number of magic states required for implementing a given target unitary. Guided by these results, we have found new and optimal examples of such synthesis.
Thermokinetic Simulation of Precipitation in NiTi Shape Memory Alloys
NASA Astrophysics Data System (ADS)
Cirstea, C. D.; Karadeniz-Povoden, E.; Kozeschnik, E.; Lungu, M.; Lang, P.; Balagurov, A.; Cirstea, V.
2017-06-01
Considering classical nucleation theory and evolution equations for the growth and composition change of precipitates, we simulate the evolution of the precipitate structure through the classical stages of nucleation, growth, and coarsening using the MatCalc solid-state transformation software. The formation of Ni3Ti, Ni4Ti3, or Ni3Ti2 precipitates is the key to the hardening phenomenon of these alloys, which depends on the nickel solubility in the bulk alloy. The microstructural evolution of metastable Ni4Ti3 and Ni3Ti2 precipitates in Ni-rich TiNi alloys is simulated by computational thermokinetics, based on thermodynamic and diffusion databases. The simulated precipitate phase fractions are compared with experimental data.
Molecular beam mass spectrometer development
NASA Technical Reports Server (NTRS)
Brock, F. J.; Hueser, J. E.
1976-01-01
An analytical model, based on the kinetic theory of a drifting Maxwellian gas, is used to determine the nonequilibrium molecular density distribution within a hemispherical shell open aft with its axis parallel to its velocity. The concept of a molecular shield in terrestrial orbit above 200 km is also analyzed using the kinetic theory of a drifting Maxwellian gas. Data are presented for the components of the gas density within the shield due to the free-stream atmosphere, outgassing from the shield and enclosed experiments, and atmospheric gas scattered off a shield-orbiter system. A description is given of a FORTRAN program for computing the three-dimensional transition-flow regime past the space shuttle orbiter that employs the Monte Carlo simulation method to model real flow by some thousands of simulated molecules.
Computer simulation and high level virial theory of Saturn-ring or UFO colloids.
Bates, Martin A; Dennison, Matthew; Masters, Andrew
2008-08-21
Monte Carlo simulations are used to map out the complete phase diagram of hard body UFO systems, in which the particles are composed of a concentric sphere and thin disk. The equation of state and phase behavior are determined for a range of relative sizes of the sphere and disk. We show that for relatively large disks, nematic and solid phases are observed in addition to the isotropic fluid. For small disks, two different solid phases exist. For intermediate sizes, only a disordered fluid phase is observed. The positional and orientational structure of the various phases are examined. We also compare the equations of state and the nematic-isotropic coexistence densities with those predicted by an extended Onsager theory using virial coefficients up to B(8).
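The virial coefficients entering such an extended Onsager theory are overlap integrals that are themselves usually evaluated by Monte Carlo. The idea can be sketched for the simplest case, B2 of a single hard sphere (the UFO-body and higher-order integrals are analogous but higher dimensional); sigma and the sampling box are our illustrative choices:

```python
import math
import random

def b2_hard_sphere_mc(sigma=1.0, n_samples=200_000, half_box=2.0, seed=3):
    """Hit-or-miss Monte Carlo for B2 = -(1/2) * Integral f(r) d^3r,
    where the Mayer function f(r) is -1 inside the overlap region
    r < sigma and 0 outside. Points are sampled uniformly in a cube of
    side 2*half_box around a particle fixed at the origin."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        x = rng.uniform(-half_box, half_box)
        y = rng.uniform(-half_box, half_box)
        z = rng.uniform(-half_box, half_box)
        if x * x + y * y + z * z < sigma * sigma:
            hits += 1
    volume = (2.0 * half_box) ** 3
    return 0.5 * volume * hits / n_samples

# Exact hard-sphere value is 2*pi*sigma^3/3
print(2.0 * math.pi / 3.0, b2_hard_sphere_mc())
```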
NASA Technical Reports Server (NTRS)
Fischer, James R.; Grosch, Chester; Mcanulty, Michael; Odonnell, John; Storey, Owen
1987-01-01
NASA's Office of Space Science and Applications (OSSA) gave a select group of scientists the opportunity to test and implement their computational algorithms on the Massively Parallel Processor (MPP) located at Goddard Space Flight Center, beginning in late 1985. One year later, the Working Group presented its report, which addressed the following: algorithms, programming languages, architecture, programming environments, the relation of theory to practice, and measured performance. The findings point to a number of demonstrated computational techniques for which the MPP architecture is ideally suited. For example, besides executing much faster on the MPP than on conventional computers, systolic VLSI simulation (where distances are short), lattice simulation, neural network simulation, and image problems were found to be easier to program on the MPP's architecture than on a CYBER 205 or even a VAX. The report also makes technical recommendations covering all aspects of MPP use, and recommendations concerning the future of the MPP and machines based on similar architectures, expansion of the Working Group, and study of the role of future parallel processors for space station, EOS, and the Great Observatories era.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khakhaleva-Li, Zimu; Gnedin, Nickolay Y., E-mail: zimu@uchicago.edu, E-mail: gnedin@fnal.gov
We compare the properties of stellar populations of model galaxies from the Cosmic Reionization On Computers (CROC) project with the existing ultraviolet (UV) and IR data. Since CROC simulations do not follow cosmic dust directly, we adopt two variants of the dust-follows-metals ansatz to populate model galaxies with dust. Using the dust radiative transfer code Hyperion, we compute synthetic stellar spectra, UV continuum slopes, and IR fluxes for simulated galaxies. We find that the simulation results generally match observational measurements, but, perhaps, not in full detail. The differences seem to indicate that our adopted dust-follows-metals ansatzes are not fully sufficient. While the discrepancies with the existing data are marginal, the future James Webb Space Telescope (JWST) data will be of much higher precision, rendering highly significant any tentative difference between theory and observations. It is, therefore, likely that in order to fully utilize the precision of JWST observations, fully dynamical modeling of dust formation, evolution, and destruction may be required.
Stoudenmire, E M; Wagner, Lucas O; White, Steven R; Burke, Kieron
2012-08-03
We extend the density matrix renormalization group to compute exact ground states of continuum many-electron systems in one dimension with long-range interactions. We find the exact ground state of a chain of 100 strongly correlated artificial hydrogen atoms. The method can be used to simulate 1D cold atom systems and to study density-functional theory in an exact setting. To illustrate, we find an interacting, extended system which is an insulator but whose Kohn-Sham system is metallic.
Beyond the constraints underlying Kolmogorov-Johnson-Mehl-Avrami theory related to the growth laws.
Tomellini, M; Fanfoni, M
2012-02-01
The theory of Kolmogorov-Johnson-Mehl-Avrami for phase transition kinetics is subject to severe limitations concerning the functional form of the growth law. This paper is devoted to sidestepping this drawback through the use of the correlation function approach. Moreover, we put forward an easy-to-handle formula, written in terms of the experimentally accessible actual extended volume fraction, which is found to match several types of growth. Computer simulations have been performed to corroborate the theoretical approach. © 2012 American Physical Society
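For orientation, the classical KJMA result being generalized here relates the actual transformed fraction X to the extended (overlap-ignoring) volume fraction X_ext via X = 1 - exp(-X_ext), with X_ext = K*t^n for simple growth laws. A minimal sketch (K and n are illustrative, not from the paper):

```python
import math

def avrami_fraction(t, K, n):
    """Classical KJMA transformed fraction X(t) = 1 - exp(-K * t**n)."""
    return 1.0 - math.exp(-K * t ** n)

def actual_from_extended(x_ext):
    """KJMA relation between the extended volume fraction X_ext and
    the actual transformed fraction X."""
    return 1.0 - math.exp(-x_ext)

# Illustrative rate constant and Avrami exponent
K, n = 0.1, 3.0
for t in (1.0, 2.0, 3.0):
    print(t, avrami_fraction(t, K, n))
```

The paper's formula replaces the restricted X_ext = K*t^n form with the experimentally accessible actual extended volume fraction, which is what lifts the growth-law constraint.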
Communication: An exact bound on the bridge function in integral equation theories.
Kast, Stefan M; Tomazic, Daniel
2012-11-07
We show that the formal solution of the general closure relation occurring in Ornstein-Zernike-type integral equation theories in terms of the Lambert W function leads to an exact relation between the bridge function and correlation functions, most notably to an inequality that bounds possible bridge values. The analytical results are illustrated on the example of the Lennard-Jones fluid for which the exact bridge function is known from computer simulations under various conditions. The inequality has consequences for the development of bridge function models and rationalizes numerical convergence issues.
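The algebraic tool involved is the Lambert W function, defined implicitly by W(z)*exp(W(z)) = z. A minimal Newton-iteration evaluation of the principal branch (a generic numerical sketch, not the paper's derivation) shows how such implicit solutions are computed:

```python
import math

def lambert_w(z, tol=1e-14):
    """Principal branch of the Lambert W function, W(z)*exp(W(z)) = z,
    evaluated for z > 0 by Newton iteration on f(w) = w*exp(w) - z."""
    w = math.log(1.0 + z)  # reasonable starting guess for z > 0
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - z) / (ew * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

w1 = lambert_w(1.0)          # the omega constant, ~0.567143
print(w1, w1 * math.exp(w1))  # the product recovers z = 1
```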
Distributed collaborative response surface method for mechanical dynamic assembly reliability design
NASA Astrophysics Data System (ADS)
Bai, Guangchen; Fei, Chengwei
2013-11-01
Because of the randomness of many impact factors influencing the dynamic assembly relationship of complex machinery, the reliability analysis of the dynamic assembly relationship needs to be accomplished from a probabilistic perspective. To improve the accuracy and efficiency of dynamic assembly relationship reliability analysis, the mechanical dynamic assembly reliability (MDAR) theory and a distributed collaborative response surface method (DCRSM) are proposed. The mathematical model of the DCRSM is established based on the quadratic response surface function and verified by the assembly relationship reliability analysis of aeroengine high pressure turbine (HPT) blade-tip radial running clearance (BTRRC). Through comparison of the DCRSM, the traditional response surface method (RSM), and the Monte Carlo method (MCM), the results show that the DCRSM can accomplish the computational task, which is impossible for the other methods when the number of simulations exceeds 100 000; moreover, its computational precision is basically consistent with the MCM and improved by 0.40-4.63% relative to the RSM, while its computational efficiency is about 188 times that of the MCM and 55 times that of the RSM at 10 000 simulations. The DCRSM is demonstrated to be a feasible and effective approach for markedly improving the computational efficiency and accuracy of MDAR analysis. Thus, the proposed research provides a promising theory and method for MDAR design and optimization, and opens a novel research direction of probabilistic analysis for developing high-performance, high-reliability aeroengines.
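The generic response-surface idea behind such methods, fit a cheap quadratic surrogate to a modest number of expensive simulations, then run the Monte Carlo reliability estimate on the surrogate, can be sketched as follows. The limit-state function and input distribution are invented for illustration and are not taken from the BTRRC study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented quadratic limit state standing in for an expensive dynamic
# assembly simulation (a single input variable for clarity)
x = rng.uniform(-1.0, 1.0, 200)
y = 1.0 + 0.5 * x - 2.0 * x ** 2 + rng.normal(0.0, 0.05, x.size)

# Step 1: fit the quadratic response surface by least squares
coeffs = np.polyfit(x, y, deg=2)
surrogate = np.poly1d(coeffs)

# Step 2: Monte Carlo on the cheap surrogate, not on the simulation
inputs = rng.normal(0.0, 0.3, 100_000)
failure_prob = np.mean(surrogate(inputs) < 0.0)  # "failure" = y < 0
print(coeffs, failure_prob)
```

The speedups the abstract reports come from step 2: once the surrogate is fitted, each Monte Carlo sample is a polynomial evaluation rather than a dynamics run.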
Airbreathing Propulsion System Analysis Using Multithreaded Parallel Processing
NASA Technical Reports Server (NTRS)
Schunk, Richard Gregory; Chung, T. J.; Rodriguez, Pete (Technical Monitor)
2000-01-01
In this paper, parallel processing is used to analyze the mixing and combustion behavior of hypersonic flow. Preliminary work for a sonic transverse hydrogen jet injected from a slot into a Mach 4 airstream in a two-dimensional duct combustor has been completed [Moon and Chung, 1996]. Our aim is to extend this work to a three-dimensional domain using multithreaded domain decomposition parallel processing based on the flowfield-dependent variation theory. Numerical simulations of chemically reacting flows are difficult because of the strong interactions between the turbulent hydrodynamic and chemical processes. The algorithm must provide an accurate representation of the flowfield, since unphysical flowfield calculations will lead to the faulty loss or creation of species mass fraction, or even premature ignition, which in turn alters the flowfield information. Another difficulty arises from the disparity in time scales between the flowfield and chemical reactions, which may require the use of finite rate chemistry. The situation is more complex when there is also a disparity in the length scales involved in turbulence. In order to cope with these complicated physical phenomena, it is our plan to utilize the flowfield-dependent variation theory mentioned above, facilitated by large eddy simulation. Undoubtedly, the proposed computation requires the most sophisticated computational strategies. Multithreaded domain decomposition parallel processing will be necessary in order to reduce both computational time and storage. Without special treatments involved in computer engineering, our attempt to analyze the airbreathing combustion appears to be difficult, if not impossible.
Simulation of surface processes
Jónsson, Hannes
2011-01-01
Computer simulations of surface processes can reveal unexpected insight regarding atomic-scale structure and transitions. Here, the strengths and weaknesses of some commonly used approaches are reviewed as well as promising avenues for improvements. The electronic degrees of freedom are usually described by gradient-dependent functionals within Kohn–Sham density functional theory. Although this level of theory has been remarkably successful in numerous studies, several important problems require a more accurate theoretical description. It is important to develop new tools to make it possible to study, for example, localized defect states and band gaps in large and complex systems. Preliminary results presented here show that orbital density-dependent functionals provide a promising avenue, but they require the development of new numerical methods and substantial changes to codes designed for Kohn–Sham density functional theory. The nuclear degrees of freedom can, in most cases, be described by the classical equations of motion; however, they still pose a significant challenge, because the time scale of interesting transitions, which typically involve substantial free energy barriers, is much longer than the time scale of vibrations—often 10 orders of magnitude. Therefore, simulation of diffusion, structural annealing, and chemical reactions cannot be achieved with direct simulation of the classical dynamics. Alternative approaches are needed. One such approach is transition state theory as implemented in the adaptive kinetic Monte Carlo algorithm, which, thus far, has relied on the harmonic approximation but could be extended and made applicable to systems with rougher energy landscape and transitions through quantum mechanical tunneling. PMID:21199939
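The rate expression underlying the adaptive kinetic Monte Carlo approach mentioned here is, in the harmonic transition state theory approximation, the Arrhenius-like k = nu * exp(-dE/(kB*T)). A minimal sketch with generic surface-diffusion numbers (illustrative values, not from the paper):

```python
import math

KB_EV = 8.617333262e-5  # Boltzmann constant in eV/K

def htst_rate(prefactor_hz, barrier_ev, temperature_k):
    """Harmonic transition-state-theory escape rate,
    k = nu * exp(-dE / (kB * T)), the per-saddle-point rate that
    adaptive kinetic Monte Carlo uses to advance the system clock."""
    return prefactor_hz * math.exp(-barrier_ev / (KB_EV * temperature_k))

# Typical attempt frequency ~1e13 Hz with a 0.5 eV barrier at 300 K
# gives a hop rate of order 1e4 Hz, i.e. ~9-10 orders of magnitude
# slower than the vibration itself -- the time-scale gap the text cites
print(htst_rate(1e13, 0.5, 300.0))
```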
Nexus: A modular workflow management system for quantum simulation codes
NASA Astrophysics Data System (ADS)
Krogel, Jaron T.
2016-01-01
The management of simulation workflows represents a significant task for the individual computational researcher. Automation of the required tasks involved in simulation work can decrease the overall time to solution and reduce sources of human error. A new simulation workflow management system, Nexus, is presented to address these issues. Nexus is capable of automated job management on workstations and resources at several major supercomputing centers. Its modular design allows many quantum simulation codes to be supported within the same framework. Current support includes quantum Monte Carlo calculations with QMCPACK, density functional theory calculations with Quantum Espresso or VASP, and quantum chemical calculations with GAMESS. Users can compose workflows through a transparent, text-based interface, resembling the input file of a typical simulation code. A usage example is provided to illustrate the process.
NASA Astrophysics Data System (ADS)
Baroni, Stefano
Modern simulation methods based on electronic-structure theory have long been deemed unfit to compute heat transport coefficients within the Green-Kubo formalism. This is so because the quantum-mechanical energy density from which the heat flux is derived is inherently ill defined, thus allegedly hampering the use of the Green-Kubo formula. While this objection would actually apply to classical systems as well, I will demonstrate that the thermal conductivity is indeed independent of the specific microscopic expression for the energy density and current from which it is derived. This fact results from a kind of gauge invariance stemming from energy conservation and extensivity, which I will illustrate numerically for a classical Lennard-Jones fluid. I will then introduce an expression for the adiabatic energy flux, derived within density-functional theory, that allows simulating atomic heat transport using equilibrium ab initio molecular dynamics. The resulting methodology is demonstrated by comparing results from ab-initio and classical molecular-dynamics simulations of a model liquid-Argon system, for which accurate inter-atomic potentials are derived by the force-matching method, and applied to compute the thermal conductivity of heavy water at ambient conditions. The problem of evaluating transport coefficients along with their accuracy from relatively short trajectories is finally addressed and discussed with a few representative examples. Partially funded by the European Union through the MaX Centre of Excellence (Grant No. 676598).
NASA Astrophysics Data System (ADS)
Schwörer, Magnus; Breitenfeld, Benedikt; Tröster, Philipp; Bauer, Sebastian; Lorenzen, Konstantin; Tavan, Paul; Mathias, Gerald
2013-06-01
Hybrid molecular dynamics (MD) simulations, in which the forces acting on the atoms are calculated by grid-based density functional theory (DFT) for a solute molecule and by a polarizable molecular mechanics (PMM) force field for a large solvent environment composed of several 10³-10⁵ molecules, pose a challenge. A corresponding computational approach should guarantee energy conservation, exclude artificial distortions of the electron density at the interface between the DFT and PMM fragments, and should treat the long-range electrostatic interactions within the hybrid simulation system in a linearly scaling fashion. Here we describe a corresponding Hamiltonian DFT/(P)MM implementation, which accounts for inducible atomic dipoles of a PMM environment in a joint DFT/PMM self-consistency iteration. The long-range parts of the electrostatics are treated by hierarchically nested fast multipole expansions up to a maximum distance dictated by the minimum image convention of toroidal boundary conditions and, beyond that distance, by a reaction field approach such that the computation scales linearly with the number of PMM atoms. Short-range over-polarization artifacts are excluded by using Gaussian inducible dipoles throughout the system and Gaussian partial charges in the PMM region close to the DFT fragment. The Hamiltonian character, the stability, and efficiency of the implementation are investigated by hybrid DFT/PMM-MD simulations treating one molecule of the water dimer and of bulk water by DFT and the respective remainder by PMM.
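The "joint self-consistency" over inducible dipoles can be illustrated with a toy version of the mutual-induction problem: each dipole responds to the permanent field plus the field of all other dipoles, and one iterates to a fixed point. This sketch uses bare point dipoles, direct summation, and vacuum units; the implementation described above instead uses Gaussian dipoles, fast multipole expansions, and couples the loop to the DFT density. All names are illustrative:

```python
import numpy as np

def induced_dipoles(positions, alphas, e_perm, tol=1e-8, max_iter=200):
    """Solve mu_i = alpha_i * (E0_i + sum_{j != i} T_ij mu_j) by fixed-point
    iteration. Toy stand-in for a polarizable-MM self-consistency loop:
    point dipoles, no damping, O(N^2) direct summation."""
    n = len(positions)
    mu = alphas[:, None] * e_perm  # zeroth-order guess: no mutual induction
    for _ in range(max_iter):
        e_ind = np.zeros_like(e_perm)
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                r = positions[i] - positions[j]
                d = np.linalg.norm(r)
                # Dipole field: E = (3 r (r . mu_j) - d^2 mu_j) / d^5
                e_ind[i] += (3.0 * r * np.dot(r, mu[j]) - d**2 * mu[j]) / d**5
        mu_new = alphas[:, None] * (e_perm + e_ind)
        if np.max(np.abs(mu_new - mu)) < tol:
            return mu_new
        mu = mu_new
    return mu
```

Without short-range damping (the role played by the Gaussian dipoles and charges in the abstract), such an iteration can diverge when polarizable sites approach each other, which is precisely the "over-polarization artifact" the implementation is designed to exclude.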
Modeling Carbon Dioxide Vibrational Frequencies in Ionic Liquids: II. Spectroscopic Map.
Daly, Clyde A; Berquist, Eric J; Brinzer, Thomas; Garrett-Roe, Sean; Lambrecht, Daniel S; Corcelli, Steven A
2016-12-15
The primary challenge for connecting molecular dynamics (MD) simulations to linear and two-dimensional infrared measurements is the calculation of the vibrational frequency for the chromophore of interest. Computing the vibrational frequency at each time step of the simulation with a quantum mechanical method like density functional theory (DFT) is generally prohibitively expensive. One approach to circumvent this problem is the use of spectroscopic maps. Spectroscopic maps are empirical relationships that correlate the frequency of interest to properties of the surrounding solvent that are readily accessible in the MD simulation. Here, we develop a spectroscopic map for the asymmetric stretch of CO₂ in the 1-butyl-3-methylimidazolium hexafluorophosphate ([C₄C₁im][PF₆]) ionic liquid (IL). DFT is used to compute the vibrational frequency of 500 statistically independent CO₂-[C₄C₁im][PF₆] clusters extracted from an MD simulation, and these benchmark frequencies serve to parameterize the map. When the map was tested on 500 different CO₂-[C₄C₁im][PF₆] clusters, the correlation coefficient between the benchmark frequencies and the predicted frequencies was R = 0.94, and the root-mean-square error was 2.7 cm⁻¹. The calculated distribution of frequencies also agrees well with experiment. The spectroscopic map required information about the CO₂ angle, the electrostatics of the surrounding solvent, and the Lennard-Jones interaction between the CO₂ and the IL. The contribution of each term in the map was investigated using symmetry-adapted perturbation theory calculations.
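At its core, building a spectroscopic map of this kind is a regression problem: fit a cheap functional form relating benchmark frequencies to solvent descriptors, then report the correlation coefficient and RMSE on held-out clusters. A generic least-squares sketch on synthetic data; the linear form and descriptor names are illustrative and not the published map, which also includes angle and Lennard-Jones terms:

```python
import numpy as np

def fit_spectroscopic_map(descriptors, frequencies):
    """Fit an empirical linear map  omega = c0 + sum_k c_k x_k  relating
    benchmark (e.g. DFT) frequencies to per-cluster solvent descriptors.
    Returns the coefficients plus R and RMSE on the training set."""
    X = np.column_stack([np.ones(len(frequencies)), descriptors])
    coeffs, *_ = np.linalg.lstsq(X, frequencies, rcond=None)
    pred = X @ coeffs
    rmse = np.sqrt(np.mean((pred - frequencies) ** 2))
    r = np.corrcoef(pred, frequencies)[0, 1]
    return coeffs, r, rmse
```

In actual use the quality numbers quoted above (R = 0.94, 2.7 cm⁻¹) come from evaluating the fitted map on an independent test set of clusters, not on the training clusters as this sketch does.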
Franz, Delbert D.; Melching, Charles S.
1997-01-01
The Full EQuations UTiLities (FEQUTL) model is a computer program for computation of tables that list the hydraulic characteristics of open channels and control structures as a function of upstream and downstream depths; these tables facilitate the simulation of unsteady flow in a stream system with the Full Equations (FEQ) model. Simulation of unsteady flow requires many iterations for each time period computed. Thus, computation of hydraulic characteristics during the simulations is impractical, and preparation of function tables and application of table look-up procedures facilitate simulation of unsteady flow. Three general types of function tables are computed: one-dimensional tables that relate hydraulic characteristics to upstream flow depth, two-dimensional tables that relate flow through control structures to upstream and downstream flow depth, and three-dimensional tables that relate flow through gated structures to upstream and downstream flow depth and gate setting. For open-channel reaches, six types of one-dimensional function tables contain different combinations of the top width of flow, area, first moment of area with respect to the water surface, conveyance, flux coefficients, and correction coefficients for channel curvilinearity. For hydraulic control structures, one type of one-dimensional function table contains relations between flow and upstream depth, and two types of two-dimensional function tables contain relations among flow and upstream and downstream flow depths. For hydraulic control structures with gates, a three-dimensional function table lists the system of two-dimensional tables that contain the relations among flow and upstream and downstream flow depths that correspond to different gate openings.
Hydraulic control structures for which function tables containing flow relations are prepared in FEQUTL include expansions, contractions, bridges, culverts, embankments, weirs, closed conduits (circular, rectangular, and pipe-arch shapes), dam failures, floodways, and underflow gates (sluice and tainter gates). The theory for computation of the hydraulic characteristics is presented for open channels and for each hydraulic control structure. For the hydraulic control structures, the theory is developed from the results of experimental tests of flow through the structure for different upstream and downstream flow depths. These tests were done to describe flow hydraulics for a single, steady-flow design condition and, thus, do not provide complete information on flow transitions (for example, between free- and submerged-weir flow) that may arise in simulation of unsteady flow. Therefore, new procedures are developed to approximate the hydraulics of flow transitions for culverts, embankments, weirs, and underflow gates.
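The function-table idea is easy to make concrete: tabulate a hydraulic characteristic once against depth, then replace per-iteration geometry computations with a cheap interpolated look-up. A sketch for a hypothetical rectangular channel; FEQUTL's actual table layouts and interpolation rules differ, and all numbers here are made up:

```python
import numpy as np

# One-dimensional function table for a hypothetical rectangular channel:
# hydraulic characteristics tabulated once against upstream flow depth.
WIDTH = 10.0                              # channel width in meters (assumed)
depths = np.linspace(0.0, 5.0, 51)        # tabulated flow depths y
areas = WIDTH * depths                    # flow area A(y) = b * y
first_moments = WIDTH * depths**2 / 2.0   # first moment of area, b * y^2 / 2

def lookup(table_x, table_y, y):
    """Linear table look-up: the cheap per-iteration evaluation that stands
    in for recomputing channel geometry during unsteady-flow simulation."""
    return float(np.interp(y, table_x, table_y))
```

Each iteration of the unsteady-flow solver then costs only an interpolation per characteristic, which is what makes the many-iterations-per-time-step FEQ computation practical.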
Coherent states field theory in supramolecular polymer physics
NASA Astrophysics Data System (ADS)
Fredrickson, Glenn H.; Delaney, Kris T.
2018-05-01
In 1970, Edwards and Freed presented an elegant representation of interacting branched polymers that resembles the coherent states (CS) formulation of second-quantized field theory. This CS polymer field theory has been largely overlooked during the intervening period in favor of more conventional "auxiliary field" (AF) interacting polymer representations that form the basis of modern self-consistent field theory (SCFT) and field-theoretic simulation approaches. Here we argue that the CS representation provides a simpler and computationally more efficient framework than the AF approach for broad classes of reversibly bonding polymers encountered in supramolecular polymer science. The CS formalism is reviewed, initially for a simple homopolymer solution, and then extended to supramolecular polymers capable of forming reversible linkages and networks. In the context of the Edwards model of a non-reacting homopolymer solution and one- and two-component models of telechelic reacting polymers, we discuss the structure of CS mean-field theory, including the equivalence to SCFT, and show how weak-amplitude expansions (random phase approximations) can be readily developed without explicit enumeration of all reaction products in a mixture. We further illustrate how to analyze CS field theories beyond SCFT at the level of Gaussian field fluctuations and provide a perspective on direct numerical simulations using a recently developed complex Langevin technique.
Teaching emergency medical services management skills using a computer simulation exercise.
Hubble, Michael W; Richards, Michael E; Wilfong, Denise
2011-02-01
Simulation exercises have long been used to teach management skills in business schools. However, this pedagogical approach has not been reported in emergency medical services (EMS) management education. We sought to develop, deploy, and evaluate a computerized simulation exercise for teaching EMS management skills. Using historical data, a computer simulation model of a regional EMS system was developed. After validation, the simulation was used in an EMS management course. Using historical operational and financial data of the EMS system under study, students designed an EMS system and prepared a budget based on their design. The design of each group was entered into the model that simulated the performance of the EMS system. Students were evaluated on operational and financial performance of their system design and budget accuracy and then surveyed about their experiences with the exercise. The model accurately simulated the performance of the real-world EMS system on which it was based. The exercise helped students identify operational inefficiencies in their system designs and highlighted budget inaccuracies. Most students rated the exercise as moderately or very realistic in ambulance deployment scheduling, budgeting, personnel cost calculations, demand forecasting, system design, and revenue projections. All students indicated the exercise was helpful in gaining a top management perspective, and 89% stated the exercise was helpful in bridging the gap between theory and reality. Preliminary experience with a computer simulator to teach EMS management skills was well received by students in a baccalaureate paramedic program and seems to be a valuable teaching tool. Copyright © 2011 Society for Simulation in Healthcare
Computation of turbulence and dispersion of cork in the NETL riser
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiradilok, Veeraya; Gidaspow, Dimitri; Breault, R.W.
The knowledge of dispersion coefficients is essential for reliable design of gasifiers. However, a literature review had shown that dispersion coefficients in fluidized beds differ by more than five orders of magnitude. This study presents a comparison of the computed axial solids dispersion coefficients for cork particles to the NETL riser cork data. The turbulence properties, the Reynolds stresses, the granular temperature spectra and the radial and axial gas and solids dispersion coefficients are computed. The standard kinetic theory model described in Gidaspow's 1994 book, Multiphase Flow and Fluidization, Academic Press and the IIT and Fluent codes were used to compute the measured axial solids volume fraction profiles for flow of cork particles in the NETL riser. The Johnson-Jackson boundary conditions were used. Standard drag correlations were used. This study shows that the computed solids volume fractions for the low flux flow are within the experimental error of those measured, using a two-dimensional model. At higher solids fluxes the simulated solids volume fractions are close to the experimental measurements, but deviate significantly at the top of the riser. This disagreement is due to use of simplified geometry in the two-dimensional simulation. There is a good agreement between the experiment and the three-dimensional simulation for a high flux condition. This study concludes that the axial and radial gas and solids dispersion coefficients in risers operating in the turbulent flow regime can be computed using a multiphase computational fluid dynamics model.
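Dispersion coefficients of the kind compared here are commonly extracted from mean-square displacements, D = MSD(t) / (2t) in the long-time diffusive limit. A generic Lagrangian-particle sketch, illustrative only (the study above obtains its coefficients from multiphase CFD fields, and all names here are assumed):

```python
import numpy as np

def axial_dispersion(trajectories, dt):
    """Estimate an axial dispersion coefficient from particle trajectories
    via the mean-square displacement, MSD(t) = 2 D t in the diffusive limit.
    trajectories : (n_particles, n_steps) array of axial positions.
    dt : time between stored steps."""
    disp = trajectories - trajectories[:, :1]   # displacement from t = 0
    msd = np.mean(disp**2, axis=0)              # average over particles
    times = dt * np.arange(len(msd))
    # Fit MSD = 2 D t over the later, (hopefully) diffusive half of the curve
    half = len(msd) // 2
    slope = np.polyfit(times[half:], msd[half:], 1)[0]
    return slope / 2.0
```

The five-orders-of-magnitude spread mentioned above is one reason the fitting window matters: at short times transport is ballistic or convective rather than diffusive, and a slope fit over that regime can badly overestimate D.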
NASA Astrophysics Data System (ADS)
Miao, Linling; Young, Charles D.; Sing, Charles E.
2017-07-01
Brownian Dynamics (BD) simulations are a standard tool for understanding the dynamics of polymers in and out of equilibrium. Quantitative comparison can be made to rheological measurements of dilute polymer solutions, as well as direct visual observations of fluorescently labeled DNA. The primary computational challenge with BD is the expensive calculation of hydrodynamic interactions (HI), which are necessary to capture physically realistic dynamics. The full HI calculation, performed via a Cholesky decomposition every time step, scales with the length of the polymer as O(N^3). This limits the calculation to a few hundred simulated particles. A number of approximations in the literature can lower this scaling to O(N^2-N^2.25), and explicit solvent methods scale as O(N); however both incur a significant constant per-time step computational cost. Despite this progress, there remains a need for new or alternative methods of calculating hydrodynamic interactions; large polymer chains or semidilute polymer solutions remain computationally expensive. In this paper, we introduce an alternative method for calculating approximate hydrodynamic interactions. Our method relies on an iterative scheme to establish self-consistency between a hydrodynamic matrix that is averaged over simulation and the hydrodynamic matrix used to run the simulation. Comparison to standard BD simulation and polymer theory results demonstrates that this method quantitatively captures both equilibrium and steady-state dynamics after only a few iterations. The use of an averaged hydrodynamic matrix allows the computationally expensive Brownian noise calculation to be performed infrequently, so that it is no longer the bottleneck of the simulation calculations. We also investigate limitations of this conformational averaging approach in ring polymers.
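The cost trade-off described above can be made concrete. Correlated Brownian noise needs a matrix B with B Bᵀ = M (M the mobility matrix), and factoring M by Cholesky decomposition every step is the O(N^3) bottleneck; if M is replaced by a conformationally averaged matrix, its factor can be reused across many steps. A toy sketch of one BD step under that scheme, not the authors' implementation (their averaging is established self-consistently, and the scalar-coordinate setup here is simplified):

```python
import numpy as np

def refresh_cholesky(avg_mobility):
    """O(N^3) factorization, done only when the averaged matrix is updated."""
    return np.linalg.cholesky(avg_mobility)

def brownian_step(x, forces, mobility, chol, kT, dt, rng):
    """One BD step with hydrodynamic interactions:
        dx = M F dt + sqrt(2 kT dt) B w,   with  B B^T = M.
    `chol` is the (reused) Cholesky factor of an averaged mobility matrix,
    so no decomposition is performed inside the time-stepping loop."""
    w = rng.standard_normal(len(x))   # uncorrelated Gaussian noise
    return x + (mobility @ forces) * dt + np.sqrt(2.0 * kT * dt) * (chol @ w)
```

Per step this costs only matrix-vector products, O(N^2), which is why moving the factorization out of the loop removes the noise calculation as the bottleneck.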