A manifold learning approach to data-driven computational materials and processes
NASA Astrophysics Data System (ADS)
Ibañez, Ruben; Abisset-Chavanne, Emmanuelle; Aguado, Jose Vicente; Gonzalez, David; Cueto, Elias; Duval, Jean Louis; Chinesta, Francisco
2017-10-01
Standard simulation in classical mechanics is based on the use of two very different types of equations. The first, of axiomatic character, comprises the balance laws (momentum, mass, energy, …), whereas the second consists of models that scientists have extracted from collected natural or synthetic data. In this work we propose a new method able to link data directly to computers in order to perform numerical simulations. These simulations employ universal laws while minimizing the need for explicit, often phenomenological, models. They are based on manifold learning methodologies.
Simulation of the Physics of Flight
ERIC Educational Resources Information Center
Lane, W. Brian
2013-01-01
Computer simulations continue to prove to be a valuable tool in physics education. Based on the needs of an Aviation Physics course, we developed the PHYSics of FLIght Simulator (PhysFliS), which numerically solves Newton's second law for an airplane in flight based on standard aerodynamics relationships. The simulation can be used to pique…
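The PhysFliS source is not shown here, but the core idea it describes, numerically integrating Newton's second law with standard lift and drag relationships, can be sketched as follows. All parameter values and function names are illustrative assumptions, not the actual PhysFliS code.

```python
import math

# Hypothetical 2D flight integrator (not the actual PhysFliS code):
# explicit-Euler integration of Newton's second law with quadratic
# drag and lift from standard aerodynamic relationships.
RHO = 1.225      # air density at sea level, kg/m^3
G = 9.81         # gravitational acceleration, m/s^2

def step(state, thrust, dt, mass=1200.0, area=16.0, cl=0.6, cd=0.05):
    """Advance (x, y, vx, vy) by one explicit-Euler step."""
    x, y, vx, vy = state
    v = math.hypot(vx, vy)
    q = 0.5 * RHO * v * v                       # dynamic pressure
    drag, lift = q * area * cd, q * area * cl
    # Unit vectors along and perpendicular to the velocity.
    tx, ty = (vx / v, vy / v) if v > 0 else (1.0, 0.0)
    ax = (thrust * tx - drag * tx - lift * ty) / mass
    ay = (thrust * ty - drag * ty + lift * tx - mass * G) / mass
    return (x + vx * dt, y + vy * dt, vx + ax * dt, vy + ay * dt)

# Level start at 1000 m altitude and 60 m/s; integrate 5 seconds.
state = (0.0, 1000.0, 60.0, 0.0)
for _ in range(50):
    state = step(state, thrust=3000.0, dt=0.1)
```

With these (assumed) numbers the lift initially exceeds the weight, so the aircraft climbs while moving forward, the kind of qualitative behavior such a classroom simulator is meant to expose.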
Advances in free-energy-based simulations of protein folding and ligand binding.
Perez, Alberto; Morrone, Joseph A; Simmerling, Carlos; Dill, Ken A
2016-02-01
Free-energy-based simulations are increasingly providing the narratives about the structures, dynamics and biological mechanisms that constitute the fabric of protein science. Here, we review two recent successes. It is becoming practical: first, to fold small proteins with free-energy methods without knowing substructures and second, to compute ligand-protein binding affinities, not just their binding poses. Over the past 40 years, the timescales that can be simulated by atomistic MD are doubling every 1.3 years, which is faster than Moore's law. Thus, these advances are not simply due to the availability of faster computers. Force fields, solvation models and simulation methodology have kept pace with computing advancements, and are now quite good. At the tip of the spear recently are GPU-based computing, improved fast-solvation methods, continued advances in force fields, and conformational sampling methods that harness external information. Copyright © 2015 Elsevier Ltd. All rights reserved.
LAWS simulation: Sampling strategies and wind computation algorithms
NASA Technical Reports Server (NTRS)
Emmitt, G. D. A.; Wood, S. A.; Houston, S. H.
1989-01-01
In general, work has continued on developing and evaluating algorithms designed to manage the Laser Atmospheric Wind Sounder (LAWS) lidar pulses and to compute the horizontal wind vectors from the line-of-sight (LOS) measurements. These efforts fall into three categories: Improvements to the shot management and multi-pair algorithms (SMA/MPA); observing system simulation experiments; and ground-based simulations of LAWS.
Multi-dimensional computer simulation of MHD combustor hydrodynamics
NASA Astrophysics Data System (ADS)
Berry, G. F.; Chang, S. L.; Lottes, S. A.; Rimkus, W. A.
1991-04-01
Argonne National Laboratory is investigating the nonreacting jet gas mixing patterns in an MHD second stage combustor by using a 2-D multiphase hydrodynamics computer program and a 3-D single phase hydrodynamics computer program. The computer simulations are intended to enhance the understanding of flow and mixing patterns in the combustor, which in turn may lead to improvement of the downstream MHD channel performance. A 2-D steady state computer model, based on mass and momentum conservation laws for multiple gas species, is used to simulate the hydrodynamics of the combustor in which a jet of oxidizer is injected into an unconfined cross stream gas flow. A 3-D code is used to examine the effects of the side walls and the distributed jet flows on the non-reacting jet gas mixing patterns. The code solves the conservation equations of mass, momentum, and energy, and a transport equation of a turbulence parameter and allows permeable surfaces to be specified for any computational cell.
NASA Astrophysics Data System (ADS)
Kirstetter, G.; Popinet, S.; Fullana, J. M.; Lagrée, P. Y.; Josserand, C.
2015-12-01
Fully resolving the shallow-water equations for modeling flash floods can be computationally expensive, so most flood-forecasting software relies on simplifications of this model: 1D approximations, diffusive or kinematic wave approximations, or exotic models using non-physical free parameters. These approximations save a great deal of computational time, but at an unquantified cost in precision. To drastically reduce the cost of full 2D simulations while quantifying the loss of precision, we propose a 2D shallow-water flow solver built with the open-source code Basilisk [1], which uses adaptive refinement on a quadtree grid. This solver uses a well-balanced central-upwind scheme, second-order accurate in time and space, and treats the friction and rain terms implicitly in a finite-volume approach. We demonstrate the validity of our simulation on the flood of Tewkesbury (UK) that occurred in July 2007, as shown in Fig. 1. For this case, a systematic study of the impact of the chosen criterion for adaptive refinement is performed, and the criterion with the best computational-time/precision ratio is proposed. Finally, we present the power law giving the computational time with respect to the maximum resolution, and we show that this law for our 2D simulation is close to that of a 1D simulation, thanks to the fractal dimension of the topography. [1] http://basilisk.fr/
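Basilisk itself is a C framework, but the finite-volume shallow-water update the abstract refers to can be illustrated in a few lines. This is a first-order Rusanov scheme for a 1D dam break, a deliberately simplified stand-in for the second-order well-balanced central-upwind scheme the authors use; all values are illustrative.

```python
import math

# Minimal 1D shallow-water dam-break sketch (illustrative, not Basilisk):
# conservative finite-volume update with a Rusanov (local Lax-Friedrichs) flux.
G = 9.81
N, dx, dt = 200, 1.0, 0.02

h = [2.0 if i < N // 2 else 1.0 for i in range(N)]   # water depth
q = [0.0] * N                                        # discharge h*u

def flux(h, q):
    u = q / h
    return (q, q * u + 0.5 * G * h * h)              # mass and momentum flux

for _ in range(50):
    fh, fq = [0.0] * (N + 1), [0.0] * (N + 1)
    for i in range(1, N):
        hl, ql, hr, qr = h[i - 1], q[i - 1], h[i], q[i]
        fl, fr = flux(hl, ql), flux(hr, qr)
        # Local wave-speed bound |u| + sqrt(g h) for the dissipation term.
        a = max(abs(ql / hl) + math.sqrt(G * hl),
                abs(qr / hr) + math.sqrt(G * hr))
        fh[i] = 0.5 * (fl[0] + fr[0]) - 0.5 * a * (hr - hl)
        fq[i] = 0.5 * (fl[1] + fr[1]) - 0.5 * a * (qr - ql)
    for i in range(1, N - 1):                        # interior cells only
        h[i] -= dt / dx * (fh[i + 1] - fh[i])
        q[i] -= dt / dx * (fq[i + 1] - fq[i])
```

The conservative flux-difference form is what makes mass bookkeeping exact, which is the property the adaptive quadtree refinement in Basilisk must also preserve across resolution jumps.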
Comparison Between 2D and 3D Simulations of Rate Dependent Friction Using DEM
NASA Astrophysics Data System (ADS)
Wang, C.; Elsworth, D.
2017-12-01
Rate-state dependent constitutive laws of frictional evolution have been successful in representing many of the first- and second- order components of earthquake rupture. Although this constitutive law has been successfully applied in numerical models, difficulty remains in efficient implementation of this constitutive law in computationally-expensive granular mechanics simulations using discrete element methods (DEM). This study introduces a novel approach in implementing a rate-dependent constitutive relation of contact friction into DEM. This is essentially an implementation of a slip-weakening constitutive law onto local particle contacts without sacrificing computational efficiency. This implementation allows the analysis of slip stability of simulated fault gouge materials. Velocity-stepping experiments are reported on both uniform and textured distributions of quartz and talc as 3D analogs of gouge mixtures. Distinct local slip stability parameters (a-b) are assigned to the quartz and talc, respectively. We separately vary talc content from 0 to 100% in the uniform mixtures and talc layer thickness from 1 to 20 particles in the textured mixtures. Applied shear displacements are cycled through velocities of 1μm/s and 10μm/s. Frictional evolution data are collected and compared to 2D simulation results. We show that dimensionality significantly impacts the evolution of friction. 3D simulation results are more representative of laboratory observed behavior and numerical noise is shown at a magnitude of 0.01 in terms of friction coefficient. Stability parameters (a-b) can be straightforwardly obtained from analyzing velocity steps, and are different from locally assigned (a-b) values. Sensitivity studies on normal stress, shear velocity, particle size, local (a-b) values, and characteristic slip distance (Dc) show that the implementation is sensitive to local (a-b) values and relations between (Dc) and particle size.
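The (a-b) estimation from velocity steps described above follows directly from the steady-state form of the rate-and-state friction law. The sketch below uses the Dieterich ageing form with illustrative parameter values, not the values assigned in the DEM simulations.

```python
import math

# Steady-state rate-and-state friction (Dieterich ageing law):
# at steady state theta = Dc / V, so mu = mu0 + (a - b) * ln(V / V0).
# Parameter values are illustrative, not the paper's.
def steady_state_mu(v, mu0=0.6, v0=1e-6, a=0.010, b=0.006):
    return mu0 + (a - b) * math.log(v / v0)

# A velocity step from 1 um/s to 10 um/s shifts steady-state friction
# by (a - b) * ln(10); (a - b) is recovered from the two friction levels.
mu_slow = steady_state_mu(1e-6)
mu_fast = steady_state_mu(1e-5)
a_minus_b = (mu_fast - mu_slow) / math.log(10.0)
```

Positive (a-b) indicates velocity strengthening (stable sliding), negative indicates velocity weakening, which is why the effective (a-b) of a simulated gouge mixture is the key stability diagnostic.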
Simulation of the Two Stages Stretch-Blow Molding Process: Infrared Heating and Blowing Modeling
NASA Astrophysics Data System (ADS)
Bordival, M.; Schmidt, F. M.; Le Maoult, Y.; Velay, V.
2007-05-01
In the Stretch-Blow Molding (SBM) process, the temperature distribution of the reheated preform drastically affects the blowing kinematics, the bottle thickness distribution, as well as the orientation induced by stretching. Consequently, the mechanical and optical properties of the final bottle are closely related to heating conditions. In order to predict the 3D temperature distribution of a rotating preform, numerical software using the control-volume method has been developed. Since PET behaves like a semi-transparent medium, the radiative flux absorption was computed using the Beer-Lambert law. In a second step, 2D axisymmetric simulations of the SBM have been developed using the finite element package ABAQUS®. Temperature profiles through the preform wall thickness and along its length were computed and applied as the initial condition. Air pressure inside the preform was not considered as an input variable, but was automatically computed using a thermodynamic model. The heat transfer coefficient applied between the mold and the polymer was also measured. Finally, the G'sell law was used for modeling PET behavior. For both heating and blowing stage simulations, good agreement has been observed with experimental measurements. This work is part of the European project "APT_PACK" (Advanced knowledge of Polymer deformation for Tomorrow's PACKaging).
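The Beer-Lambert absorption underlying the radiative source term is simple to state. The sketch below is a generic illustration; the absorption coefficient is an assumed placeholder, not the spectral PET data used by the authors.

```python
import math

# Beer-Lambert law: intensity decays exponentially with optical depth.
# alpha is an illustrative absorption coefficient (1/m), not PET data.
def transmitted_intensity(i0, alpha, depth):
    """Intensity after traversing `depth` metres of absorbing medium."""
    return i0 * math.exp(-alpha * depth)

def absorbed_fraction(alpha, thickness):
    """Fraction of incident radiation deposited in a wall of given thickness."""
    return 1.0 - math.exp(-alpha * thickness)
```

In a control-volume heating model, the absorbed fraction per cell becomes the radiative source term driving the through-thickness temperature profile.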
Palmer, T. N.
2014-06-28
This paper sets out a new methodological approach to solving the equations for simulating and predicting weather and climate. In this approach, the conventionally hard boundary between the dynamical core and the sub-grid parametrizations is blurred. This approach is motivated by the relatively shallow power-law spectrum for atmospheric energy on scales of hundreds of kilometres and less. It is first argued that, because of this, the closure schemes for weather and climate simulators should be based on stochastic–dynamic systems rather than deterministic formulae. Second, as high-wavenumber elements of the dynamical core will necessarily inherit this stochasticity during time integration, it is argued that the dynamical core will be significantly over-engineered if all computations, regardless of scale, are performed completely deterministically and if all variables are represented with maximum numerical precision (in practice using double-precision floating-point numbers). As the era of exascale computing is approached, an energy- and computationally efficient approach to cloud-resolved weather and climate simulation is described where determinism and numerical precision are focused on the largest scales only.
PMID: 24842038
Data-driven non-linear elasticity: constitutive manifold construction and problem discretization
NASA Astrophysics Data System (ADS)
Ibañez, Ruben; Borzacchiello, Domenico; Aguado, Jose Vicente; Abisset-Chavanne, Emmanuelle; Cueto, Elias; Ladeveze, Pierre; Chinesta, Francisco
2017-11-01
The use of constitutive equations calibrated from data has been implemented into standard numerical solvers, successfully addressing a variety of problems encountered in simulation-based engineering sciences (SBES). However, complexity continues to increase due to the need for ever more detailed models as well as the use of engineered materials. Data-driven simulation constitutes a potential change of paradigm in SBES. Standard simulation in computational mechanics is based on the use of two very different types of equations. The first, of axiomatic character, comprises the balance laws (momentum, mass, energy, …), whereas the second consists of models that scientists have extracted from collected, either natural or synthetic, data. Data-driven (or data-intensive) simulation consists of directly linking experimental data to computers in order to perform numerical simulations. These simulations employ laws universally recognized as epistemic, while minimizing the need for explicit, often phenomenological, models. The main drawback of such an approach is the large amount of data required, some of which are inaccessible with today's testing facilities. This difficulty can in many cases be circumvented, and in any case alleviated, by considering complex tests, collecting as many data as possible, and then using a data-driven inverse approach to generate the whole constitutive manifold from a few complex experimental tests, as discussed in the present work.
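The data-driven idea of replacing an explicit constitutive model with sampled material data can be illustrated in a toy 1D setting: pick the sampled (strain, stress) pair closest to a trial state under an energy-like metric. The setup, names, and the linear "material" below are our illustration, not the paper's formulation.

```python
# Toy data-driven material point (illustrative, not the paper's method):
# sampled (strain, stress) pairs from a linear "material" with modulus 2.
data = [(e / 10.0, 2.0 * (e / 10.0)) for e in range(11)]

def closest_state(strain_trial, stress_trial, c=1.0):
    """Return the sampled pair nearest to the trial state under an
    energy-like distance weighting strain and stress mismatch by c."""
    return min(data, key=lambda p: c * (p[0] - strain_trial) ** 2
                                   + (p[1] - stress_trial) ** 2 / c)

# A trial state off the sampled manifold snaps to the nearest data point.
state = closest_state(0.42, 0.9)
```

In a full solver this projection alternates with enforcing the balance laws, so equilibrium and compatibility remain exact while the constitutive model is replaced by the data set itself.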
In-Orbit Collision Analysis for VEGA Second Flight
NASA Astrophysics Data System (ADS)
Volpi, M.; Fossati, T.; Battie, F.
2013-08-01
ELV, as prime contractor of the VEGA launcher, which operates in the protected LEO zone (up to 2000 km altitude), has to demonstrate that it abides by ESA debris mitigation rules, as well as by those imposed by the French Law on Space Operations (LOS). After the full success of the VEGA qualification flight, the second flight (VV02) will extend the qualification domain of the launcher to multi-payload missions, with the release of two satellites (Proba-V and VNRedSat-1) and one Cubesat (ESTCube-1) on different SSO orbits. The multi-payload adapter, VESPA, also separates its upper part before the second payload release. This paper presents the results of the long-term analyses of in-orbit collision between these different bodies. The typical propagation duration requested by the ELV customer is around 50 orbits, requiring a state-of-the-art simulator able to efficiently compute orbital perturbations that are usually neglected in launcher trajectory optimization itself. To address the issue of in-orbit collision, ELV has therefore developed its own simulator, POLPO [1], a FORTRAN code which performs the long-term propagation of the released objects' trajectories and computes the mutual distance between them. The first part of the paper introduces the simulator itself, explaining the computation method chosen and briefly discussing the perturbing effects and the models taken into account in the tool, namely: gravity field modeling (zonal and tesseral harmonics), atmospheric model, solar pressure, and third-body interaction. A second part describes the application of the in-orbit collision analysis to the second-flight mission. The main characteristics of the second flight are introduced, as well as the dispersions considered for the Monte-Carlo analysis performed. The results of the long-term collision analysis between all the separated bodies are then presented and discussed.
Discrete particle noise in a nonlinearly saturated plasma
NASA Astrophysics Data System (ADS)
Jenkins, Thomas; Lee, W. W.
2006-04-01
Understanding discrete particle noise in an equilibrium plasma has been an important topic since the early days of particle-in-cell (PIC) simulation [1]. In this paper, we investigate particle noise in a nonlinearly saturated system, assessing the usefulness of the fluctuation-dissipation theorem (FDT) in a regime where drift instabilities are nonlinearly saturated. We obtain excellent agreement between the simulation results and our theoretical predictions of the noise properties. It is found that discrete particle noise always enhances the particle and thermal transport in the plasma, in agreement with the second law of thermodynamics. [1] C.K. Birdsall and A.B. Langdon, Plasma Physics via Computer Simulation, McGraw-Hill, New York (1985).
Analysis and simulation of the I C engine Otto cycle using the second law of thermodynamics
NASA Astrophysics Data System (ADS)
Abdel-Rahim, Y. M.
The present investigation is an application of the second law of thermodynamics to the spark ignition engine cycle. A comprehensive thermodynamic analysis of the air standard cycle is conducted using the first and second laws of thermodynamics, the ideal gas equation of state and the perfect gas properties for air. The study investigates the effect of the cycle parameters on the cycle performance reflected by the first and second law efficiencies, the heat added, the work done, the available energy added as well as the history of the internal, available and unavailable energies along the cycle. The study shows that the second law efficiency is a function of the compression ratio, the initial temperature, the maximum temperature as well as the dead state temperature. A non-dimensional comprehensive thermodynamic simulation model for the actual Otto cycle is developed to study the effects of the design and operating parameters of the cycle on the cycle performance. The analysis takes into account engine geometry, mixture strength, heat transfer, piston motion, engine speed, mechanical friction, spark advance and combustion duration.
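The air-standard cycle analysis described above can be sketched with cold-air-standard properties. This is a textbook-level simplification of the paper's model; the parameter values are illustrative.

```python
# Cold-air-standard Otto cycle sketch (illustrative simplification of the
# analysis above). States: 1 intake, 2 after isentropic compression,
# 3 after constant-volume heat addition, 4 after isentropic expansion.
GAMMA = 1.4      # ratio of specific heats for air
CV = 0.718       # specific heat at constant volume, kJ/(kg K)

def otto_cycle(r, t1, t3):
    """r: compression ratio; t1, t3: intake and peak temperatures in K."""
    t2 = t1 * r ** (GAMMA - 1)          # isentropic compression
    t4 = t3 / r ** (GAMMA - 1)          # isentropic expansion
    q_in = CV * (t3 - t2)               # heat added at constant volume
    q_out = CV * (t4 - t1)              # heat rejected at constant volume
    w_net = q_in - q_out
    eta_first = w_net / q_in            # first-law (thermal) efficiency
    return t2, t4, w_net, eta_first

t2, t4, w_net, eta = otto_cycle(8.0, 300.0, 2000.0)
```

The first-law efficiency reduces to the familiar 1 - r^(1-γ); the paper's second-law analysis goes further by tracking available energy against a dead-state temperature, which this sketch omits.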
NASA Technical Reports Server (NTRS)
Fisher, Travis C.; Carpenter, Mark H.; Nordstroem, Jan; Yamaleev, Nail K.; Swanson, R. Charles
2011-01-01
Simulations of nonlinear conservation laws that admit discontinuous solutions are typically restricted to discretizations of equations that are explicitly written in divergence form. This restriction is, however, unnecessary. Herein, linear combinations of divergence and product rule forms that have been discretized using diagonal-norm skew-symmetric summation-by-parts (SBP) operators, are shown to satisfy the sufficient conditions of the Lax-Wendroff theorem and thus are appropriate for simulations of discontinuous physical phenomena. Furthermore, special treatments are not required at the points that are near physical boundaries (i.e., discrete conservation is achieved throughout the entire computational domain, including the boundaries). Examples are presented of a fourth-order, SBP finite-difference operator with second-order boundary closures. Sixth- and eighth-order constructions are derived, and included in E. Narrow-stencil difference operators for linear viscous terms are also derived; these guarantee the conservative form of the combined operator.
ERIC Educational Resources Information Center
Dolinko, A. E.
2009-01-01
By simulating the dynamics of a bidimensional array of springs and masses, the propagation of conveniently generated waves is visualized. The simulation is exclusively based on Newton's second law and was made to provide insight into the physics of wave propagation. By controlling parameters such as the magnitude of the mass and the elastic…
Summary: Special Session SpS15: Data Intensive Astronomy
NASA Astrophysics Data System (ADS)
Montmerle, Thierry
2015-03-01
A new paradigm in astronomical research has been emerging: "Data Intensive Astronomy", which utilizes large amounts of data combined with statistical data analyses. The first research method in astronomy was observation by eye. It is well known that the invention of the telescope changed the human view of our Universe (although it was almost limited to the solar system) and led to Kepler's laws, which Newton later used to derive his mechanics. Newtonian mechanics then enabled astronomers to provide a theoretical explanation for the motion of the planets. Thus astronomers obtained the second paradigm, theoretical astronomy. Astronomers succeeded in applying various laws of physics to explain phenomena in the Universe; e.g., nuclear fusion was found to be the energy source of stars. Theoretical astronomy has been paired with observational astronomy to better understand the physics behind observed phenomena in the Universe. Although theoretical astronomy provided good qualitative physical explanations, achieving quantitative agreement with observations was not easy. Since the advent of high-performance computers, however, astronomers have gained a third research method, simulation, which achieves better agreement with observations. Simulation astronomy developed rapidly along with the development of computer hardware (CPUs, GPUs, memories, storage systems, networks, and others) and simulation codes.
Space Life Support Engineering Program
NASA Technical Reports Server (NTRS)
Seagrave, Richard C.
1993-01-01
This report covers the second year of research relating to the development of closed-loop long-term life support systems. Emphasis was directed toward concentrating on the development of dynamic simulation techniques and software and on performing a thermodynamic systems analysis in an effort to begin optimizing the system needed for water purification. Four appendices are attached. The first covers the ASPEN modeling of the closed loop Environmental Control Life Support System (ECLSS) and its thermodynamic analysis. The second is a report on the dynamic model development for water regulation in humans. The third regards the development of an interactive computer-based model for determining exercise limitations. The fourth attachment is an estimate of the second law thermodynamic efficiency of the various units comprising an ECLSS.
ERIC Educational Resources Information Center
Holko, David A.
1982-01-01
Presents a complete computer program demonstrating the relationship between volume/pressure for Boyle's Law, volume/temperature for Charles' Law, and volume/moles of gas for Avogadro's Law. The programming reinforces students' application of gas laws and equates a simulated moving piston to theoretical values derived using the ideal gas law.…
Second-order accurate nonoscillatory schemes for scalar conservation laws
NASA Technical Reports Server (NTRS)
Huynh, Hung T.
1989-01-01
Explicit finite difference schemes for the computation of weak solutions of nonlinear scalar conservation laws are presented and analyzed. These schemes are uniformly second-order accurate and nonoscillatory in the sense that the number of extrema of the discrete solution does not increase in time.
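The nonoscillatory property can be illustrated with a minmod-limited second-order upwind scheme for linear advection; a limited reconstruction prevents the update from creating new extrema. This is a generic illustration of the idea, not Huynh's scheme itself.

```python
# Minmod-limited second-order upwind advection (illustrative, not Huynh's
# scheme): for u_t + u_x = 0 on a periodic grid, the limited slope keeps
# the update TVD, so no new extrema appear.
def minmod(a, b):
    if a * b <= 0:
        return 0.0
    return a if abs(a) < abs(b) else b

def advect(u, cfl=0.5):
    n = len(u)
    # Limited slope from backward and forward differences.
    s = [minmod(u[i] - u[i - 1], u[(i + 1) % n] - u[i]) for i in range(n)]
    # Upwind numerical flux with limited linear reconstruction (speed +1).
    f = [u[i] + 0.5 * (1.0 - cfl) * s[i] for i in range(n)]
    return [u[i] - cfl * (f[i] - f[i - 1]) for i in range(n)]

# Advect a square pulse: discontinuities, the hard case for oscillations.
u = [1.0 if 5 <= i < 15 else 0.0 for i in range(40)]
for _ in range(20):
    u = advect(u)
```

With the limiter switched off (pure Lax-Wendroff slopes) the same test overshoots at the jumps; with minmod the solution stays within its initial bounds.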
Discovering the gas laws and understanding the kinetic theory of gases with an iPad app
NASA Astrophysics Data System (ADS)
Davies, Gary B.
2017-07-01
Carrying out classroom experiments that demonstrate Boyle’s law and Gay-Lussac’s law can be challenging. Even if we are able to conduct classroom experiments using pressure gauges and syringes, the results of these experiments do little to illuminate the kinetic theory of gases. However, molecular dynamics simulations that run on computers allow us to visualise the behaviour of individual particles and to link this behaviour to the bulk properties of the gas e.g. its pressure and temperature. In this article, I describe how to carry out ‘computer experiments’ using a commercial molecular dynamics iPad app called Atoms in Motion [1]. Using the app, I show how to obtain data from simulations that demonstrate Boyle’s law and Gay-Lussac’s law, and hence also the combined gas law.
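The kinetic-theory link between particle motion and pressure that the app visualizes can also be shown with a toy "computer experiment" of our own (not the Atoms in Motion app): ideal particles bouncing in a 1D box, with pressure estimated from momentum transfer to the walls and compared against the ideal-gas prediction. Units and values are illustrative.

```python
import random

# Toy 1D kinetic-theory experiment (ours, not the Atoms in Motion app):
# free particles in a box of length L; pressure on a wall equals the
# time-averaged momentum transfer, which should match N*k*T/L.
random.seed(1)
N, L, M, KB, T = 2000, 1.0, 1.0, 1.0, 2.0

# 1D Maxwell-Boltzmann: velocities are Gaussian with variance kT/m.
v = [random.gauss(0.0, (KB * T / M) ** 0.5) for _ in range(N)]

t_total, impulse = 50.0, 0.0
for vi in v:
    hits = abs(vi) * t_total / (2.0 * L)   # round trips -> hits on one wall
    impulse += hits * 2.0 * M * abs(vi)    # 2 m |v| momentum per bounce

pressure = impulse / t_total               # time-averaged force on one wall
expected = N * KB * T / L                  # 1D ideal-gas law
```

The agreement emerges purely from particle mechanics, exactly the conceptual bridge between molecular motion and the bulk gas laws that the article advocates.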
Mathematical and computational model for the analysis of micro hybrid rocket motor
NASA Astrophysics Data System (ADS)
Stoia-Djeska, Marius; Mingireanu, Florin
2012-11-01
Hybrid rockets use a two-phase propellant system. In the present work we first develop a simplified model of the coupling of the hybrid combustion process with the complete unsteady flow, starting from the combustion port and ending with the nozzle. The physical and mathematical models are adapted to simulations of micro hybrid rocket motors. The flow model is based on the one-dimensional Euler equations with source terms. The flow equations and the fuel regression rate law are solved in a coupled manner. The numerical simulations use an implicit fourth-order Runge-Kutta time integration combined with a second-order cell-centred finite-volume method. The numerical results obtained with this model show good agreement with published experimental and numerical results. The computational model developed in this work is simple, computationally efficient, and offers the advantage of taking into account a large number of functional and constructive parameters used by engineers.
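The fuel regression-rate law coupled to the flow is commonly of the form r = a·Gⁿ, with G the oxidizer mass flux through the port. The sketch below marches the port radius in time under that law; the coefficients and the simple explicit integration are illustrative assumptions, not the paper's values or solver.

```python
import math

# Sketch of a hybrid-rocket regression-rate law r = a * G**n (coefficients
# illustrative, not the paper's): as the port burns open, the oxidizer
# flux G drops, so the regression rate decreases over the burn.
def burn(port_radius, mdot_ox, dt, steps, a=2e-5, n=0.62):
    """March port radius: G = mdot / (pi R^2), dR/dt = a * G**n."""
    history = [port_radius]
    for _ in range(steps):
        area = math.pi * port_radius ** 2
        g = mdot_ox / area               # oxidizer mass flux, kg/(m^2 s)
        port_radius += a * g ** n * dt   # explicit-Euler radius update
        history.append(port_radius)
    return history

# 10 s burn: 1 cm initial port radius, 50 g/s oxidizer flow.
hist = burn(0.01, 0.05, dt=0.01, steps=1000)
```

In the full model this update is solved in a coupled manner with the 1D Euler equations, since the regression rate feeds mass into the flow while the flow sets the flux seen by the grain.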
Computational efficiency and Amdahl’s law for the adaptive resolution simulation technique
DOE Office of Scientific and Technical Information (OSTI.GOV)
Junghans, Christoph; Agarwal, Animesh; Delle Site, Luigi
2017-06-01
Here, we discuss the computational performance of the adaptive resolution technique in molecular simulation when it is compared with equivalent full coarse-grained and full atomistic simulations. We show that an estimate of its efficiency, within 10%–15% accuracy, is given by Amdahl’s Law adapted to the specific quantities involved in the problem. The derivation of the predictive formula is general enough that it may be applied to the general case of molecular dynamics approaches where a reduction of degrees of freedom in a multiscale fashion occurs.
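Amdahl's law in its classic form captures the bound the abstract adapts: if only a fraction p of the work is accelerated (e.g. the coarse-grained part of a multiscale domain), the serial remainder limits the overall speedup. The framing below is the generic law, not the authors' adapted formula.

```python
# Classic Amdahl's law (the generic form, not the paper's adapted version):
# accelerate a fraction p of the runtime by a factor s; the remaining
# (1 - p) fraction bounds the overall speedup at 1 / (1 - p).
def amdahl_speedup(p, s):
    """p: fraction of runtime that benefits; s: speedup of that part."""
    return 1.0 / ((1.0 - p) + p / s)
```

For instance, coarse-graining 90% of the particles with a tenfold per-particle speedup yields only about a 5.3x overall gain, and no per-particle speedup can ever push it past 10x.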
ERIC Educational Resources Information Center
Abdullah, Sopiah; Shariff, Adilah
2008-01-01
The purpose of the study was to investigate the effects of inquiry-based computer simulation with heterogeneous-ability cooperative learning (HACL) and inquiry-based computer simulation with friendship cooperative learning (FCL) on (a) scientific reasoning (SR) and (b) conceptual understanding (CU) among Form Four students in Malaysian Smart…
Optimizing Cognitive Load for Learning from Computer-Based Science Simulations
ERIC Educational Resources Information Center
Lee, Hyunjeong; Plass, Jan L.; Homer, Bruce D.
2006-01-01
How can cognitive load in visual displays of computer simulations be optimized? Middle-school chemistry students (N = 257) learned with a simulation of the ideal gas law. Visual complexity was manipulated by separating the display of the simulations in two screens (low complexity) or presenting all information on one screen (high complexity). The…
Numerical tool for SMA material simulation: application to composite structure design
NASA Astrophysics Data System (ADS)
Chemisky, Yves; Duval, Arnaud; Piotrowski, Boris; Ben Zineb, Tarak; Tahiri, Vanessa; Patoor, Etienne
2009-10-01
Composite materials based on shape memory alloys (SMA) have received growing attention over these last few years. In this paper, two particular morphologies of composites are studied. The first one is an SMA/elastomer composite in which a snake-like wire NiTi SMA is embedded into an elastomer ribbon. The second one is a commercial Ni47Ti44Nb9 which presents elastic-plastic inclusions in an NiTi SMA matrix. In both cases, the design of such composites required the development of an SMA design tool, based on a macroscopic 3D constitutive law for NiTi alloys. Two different strategies are then applied to compute these composite behaviors. For the SMA/elastomer composite, the macroscopic behavior law is implemented in commercial FEM software, and for the Ni47Ti44Nb9 a scale transition approach based on the Mori-Tanaka scheme is developed. In both cases, simulations are compared to experimental data.
Demonstrating Newton's Third Law: Changing Aristotelian Viewpoints.
ERIC Educational Resources Information Center
Roach, Linda E.
1992-01-01
Suggests techniques to help eliminate students' misconceptions involving Newton's Third Law. Approaches suggested include teaching physics from a historical perspective, using computer programs with simulations, rewording the law, drawing free-body diagrams, and using demonstrations and examples. (PR)
ERIC Educational Resources Information Center
Rubin, Michael Rogers
1988-01-01
The second of three articles on abusive data collection and usage practices and their effect on personal privacy, discusses the evolution of data protection laws worldwide, and compares the scope, major provisions, and enforcement components of the laws. A chronology of key events in the regulation of computer databanks in included. (1 reference)…
ERIC Educational Resources Information Center
Rieber, Lloyd P.; Tzeng, Shyh-Chii; Tribble, Kelly
2004-01-01
The purpose of this research was to explore how adult users interact and learn during an interactive computer-based simulation supplemented with brief multimedia explanations of the content. A total of 52 college students interacted with a computer-based simulation of Newton's laws of motion in which they had control over the motion of a simple…
Feedback and Elaboration within a Computer-Based Simulation: A Dual Coding Perspective.
ERIC Educational Resources Information Center
Rieber, Lloyd P.; And Others
The purpose of this study was to explore how adult users interact and learn during a computer-based simulation given visual and verbal forms of feedback coupled with embedded elaborations of the content. A total of 52 college students interacted with a computer-based simulation of Newton's laws of motion in which they had control over the motion…
Thermodynamic Modeling and Analysis of Human Stress Response
NASA Technical Reports Server (NTRS)
Boregowda, S. C.; Tiwari, S. N.
1999-01-01
A novel approach based on the second law of thermodynamics is developed to investigate the psychophysiology and quantify human stress level. Two types of stress (thermal and mental) are examined. A Unified Stress Response Theory (USRT) is developed under the newly proposed field of study called Engineering Psychophysiology. The USRT is used to investigate both thermal and mental stresses from a holistic (human body as a whole) and thermodynamic viewpoint. The original concepts and definitions are established as postulates which form the basis for the thermodynamic approach to quantifying human stress level. An Objective Thermal Stress Index (OTSI) is developed by applying the second law of thermodynamics to the human thermal system to quantify thermal stress or discomfort in the human body. A human thermal model based on the finite element method is implemented. It is utilized as a "Computational Environmental Chamber" to conduct a series of simulations examining human thermal stress responses under different environmental conditions. An innovative hybrid technique is developed to analyze human thermal behavior based on a series of human-environment interaction simulations. Continuous monitoring of thermal stress is demonstrated with the help of the OTSI. It is well established that the human thermal system obeys the second law of thermodynamics. Further, the OTSI is validated against experimental data. Regarding mental stress, an Objective Mental Stress Index (OMSI) is developed by applying the Maxwell relations of thermodynamics to the combined thermal and cardiovascular system in the human body. The OMSI is utilized to demonstrate the technique of monitoring mental stress continuously and is validated with the help of a series of experimental studies. Beyond indicating the level of mental stress, the OMSI provides a strong thermodynamic and mathematical relationship between activities of the thermal and cardiovascular systems of the human body.
Unsteady flow simulations around complex geometries using stationary or rotating unstructured grids
NASA Astrophysics Data System (ADS)
Sezer-Uzol, Nilay
In this research, the computational analysis of three-dimensional, unsteady, separated, vortical flows around complex geometries is performed using stationary or moving unstructured grids. Two main engineering problems are investigated. The first problem is the unsteady simulation of a ship airwake, where helicopter operations become even more challenging, by using stationary unstructured grids. The second problem is the unsteady simulation of wind turbine rotor flow fields by using moving unstructured grids which rotate with the whole three-dimensional rigid rotor geometry. The three-dimensional, unsteady, parallel, unstructured, finite volume flow solver, PUMA2, is used for the computational fluid dynamics (CFD) simulations considered in this research. The code is modified to have a moving grid capability to perform three-dimensional, time-dependent rotor simulations. An instantaneous log-law wall model for Large Eddy Simulations is also implemented in PUMA2 to investigate the very large Reynolds number flow fields of rotating blades. To verify the code modifications, several sample test cases are also considered. In addition, interdisciplinary studies, which aim to provide new tools and insights to the aerospace and wind energy scientific communities, are carried out during this research by focusing on the coupling of ship airwake CFD simulations with helicopter flight dynamics and control analysis, the coupling of wind turbine rotor CFD simulations with aeroacoustic analysis, and the analysis of these time-dependent and large-scale CFD simulations with the help of a computational monitoring, steering and visualization tool, POSSE.
Efficient Control Law Simulation for Multiple Mobile Robots
DOE Office of Scientific and Technical Information (OSTI.GOV)
Driessen, B.J.; Feddema, J.T.; Kotulski, J.D.
1998-10-06
In this paper we consider the problem of simulating simple control laws involving large numbers of mobile robots. Such simulation can be computationally prohibitive if the number of robots is large enough, say 1 million, due to the O(N^2) cost of each time step. This work therefore uses hierarchical tree-based methods for calculating the control law. These tree-based approaches have O(N log N) cost per time step, thus allowing for efficient simulation involving a large number of robots. For concreteness, a decentralized control law which involves only the distance and bearing to the closest neighbor robot will be considered. The time to calculate the control law for each robot at each time step is demonstrated to be O(log N).
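The closest-neighbor control law described above can be sketched compactly. The following is a minimal, generic illustration (not the authors' implementation, and all helper names are hypothetical): a pure-Python KD-tree makes each closest-neighbor query O(log N) on average, giving the O(N log N) per-step total instead of the naive O(N^2) pairwise scan.

```python
import math

def build_kdtree(points, depth=0):
    """Recursively build a 2-D KD-tree node: (point, left, right, split_axis)."""
    if not points:
        return None
    axis = depth % 2
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return (points[mid],
            build_kdtree(points[:mid], depth + 1),
            build_kdtree(points[mid + 1:], depth + 1),
            axis)

def nearest(node, query, exclude, best=None):
    """Closest point to `query` (skipping `exclude`) in O(log N) expected time."""
    if node is None:
        return best
    point, left, right, axis = node
    if point != exclude:
        d = math.dist(point, query)
        if best is None or d < best[1]:
            best = (point, d)
    near, far = (left, right) if query[axis] < point[axis] else (right, left)
    best = nearest(near, query, exclude, best)
    # Descend the far branch only if the splitting plane is closer than the best hit.
    if best is None or abs(query[axis] - point[axis]) < best[1]:
        best = nearest(far, query, exclude, best)
    return best

def control_inputs(robots):
    """Distance and bearing to each robot's closest neighbor -- the only two
    quantities the decentralized control law in the abstract depends on."""
    tree = build_kdtree(robots)
    out = []
    for r in robots:
        (nx, ny), d = nearest(tree, r, exclude=r)
        out.append((d, math.atan2(ny - r[1], nx - r[0])))
    return out
```

Building the tree is O(N log N) per step, and the N queries add N x O(log N), which is the complexity structure the paper exploits.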
Treeby, Bradley E; Jaros, Jiri; Rendell, Alistair P; Cox, B T
2012-06-01
The simulation of nonlinear ultrasound propagation through tissue realistic media has a wide range of practical applications. However, this is a computationally difficult problem due to the large size of the computational domain compared to the acoustic wavelength. Here, the k-space pseudospectral method is used to reduce the number of grid points required per wavelength for accurate simulations. The model is based on coupled first-order acoustic equations valid for nonlinear wave propagation in heterogeneous media with power law absorption. These are derived from the equations of fluid mechanics and include a pressure-density relation that incorporates the effects of nonlinearity, power law absorption, and medium heterogeneities. The additional terms accounting for convective nonlinearity and power law absorption are expressed as spatial gradients making them efficient to numerically encode. The governing equations are then discretized using a k-space pseudospectral technique in which the spatial gradients are computed using the Fourier-collocation method. This increases the accuracy of the gradient calculation and thus relaxes the requirement for dense computational grids compared to conventional finite difference methods. The accuracy and utility of the developed model is demonstrated via several numerical experiments, including the 3D simulation of the beam pattern from a clinical ultrasound probe.
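The gradient step at the heart of the method, computing spatial derivatives by Fourier collocation, can be illustrated in one dimension. This is only a toy sketch of the idea (a naive O(N^2) DFT rather than an FFT, and 1-D rather than the paper's 3-D k-space scheme):

```python
import cmath, math

def spectral_derivative(f_vals):
    """Differentiate a periodic signal sampled uniformly on [0, 2*pi) by the
    Fourier-collocation method: transform, multiply mode k by i*k, invert.
    A naive O(N^2) DFT is used here for clarity; real solvers use an FFT."""
    N = len(f_vals)
    F = [sum(f_vals[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N)) / N
         for k in range(N)]
    deriv = []
    for n in range(N):
        s = 0j
        for k in range(N):
            kk = k if k <= N // 2 else k - N   # signed wavenumber
            s += 1j * kk * F[k] * cmath.exp(2j * math.pi * k * n / N)
        deriv.append(s.real)
    return deriv

# The derivative of sin(x) is recovered as cos(x) to near machine precision on
# only 16 points -- the relaxed grid-density requirement the abstract mentions.
xs = [2.0 * math.pi * n / 16 for n in range(16)]
d = spectral_derivative([math.sin(x) for x in xs])
```

For smooth fields the error decays faster than any power of the grid spacing, which is why far fewer points per wavelength are needed than with finite differences.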
Investigating the Effectiveness of Computer Simulations for Chemistry Learning
ERIC Educational Resources Information Center
Plass, Jan L.; Milne, Catherine; Homer, Bruce D.; Schwartz, Ruth N.; Hayward, Elizabeth O.; Jordan, Trace; Verkuilen, Jay; Ng, Florrie; Wang, Yan; Barrientos, Juan
2012-01-01
Are well-designed computer simulations an effective tool to support student understanding of complex concepts in chemistry when integrated into high school science classrooms? We investigated scaling up the use of a sequence of simulations of kinetic molecular theory and associated topics of diffusion, gas laws, and phase change, which we designed…
ERIC Educational Resources Information Center
Dewdney, A. K.
1988-01-01
Describes the creation of the computer program "BOUNCE," designed to simulate a weighted piston coming into equilibrium with a cloud of bouncing balls. The model follows the ideal gas law. Utilizes the critical event technique to create the model. Discusses another program, "BOOM," which simulates a chain reaction. (CW)
Passive scalars: Mixing, diffusion, and intermittency in helical and nonhelical rotating turbulence
NASA Astrophysics Data System (ADS)
Imazio, P. Rodriguez; Mininni, P. D.
2017-03-01
We use direct numerical simulations to compute structure functions, scaling exponents, probability density functions, and effective transport coefficients of passive scalars in turbulent rotating helical and nonhelical flows. We show that helicity affects the inertial range scaling of the velocity and of the passive scalar when rotation is present, with a spectral law consistent with k⊥^(-1.4) for the passive scalar variance spectrum. This scaling law is consistent with a phenomenological argument [P. Rodriguez Imazio and P. D. Mininni, Phys. Rev. E 83, 066309 (2011)] for rotating nonhelical flows, which follows directly from Kolmogorov-Obukhov scaling and states that if the energy follows a law E(k) ~ k^(-n), then the passive scalar variance follows a law V(k) ~ k^(-n_theta) with n_theta = (5 - n)/2. With the second-order scaling exponent obtained from this law, and using the Kraichnan model, we obtain anomalous scaling exponents for the passive scalar that are in good agreement with the numerical results. Multifractal intermittency models are also considered. Intermittency of the passive scalar is stronger than in the nonhelical rotating case, a result that is also confirmed by stronger non-Gaussian tails in the probability density functions of field increments. Finally, Fick's law is used to compute the effective diffusion coefficients in the directions parallel and perpendicular to rotation. Calculations indicate that horizontal diffusion decreases in the presence of helicity in rotating flows, while vertical diffusion increases. A simple mean field argument explains this behavior in terms of the amplitude of velocity fluctuations.
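The quoted phenomenological relation is simple enough to check directly; a one-liner (with a hypothetical helper name) reproduces both the rotating-flow value and the classical Kolmogorov-Obukhov limit:

```python
def scalar_exponent(n):
    """Passive-scalar variance exponent n_theta = (5 - n) / 2, assuming the
    energy spectrum follows E(k) ~ k**(-n), as quoted in the abstract."""
    return (5.0 - n) / 2.0

# An energy spectrum with n = 2.2 gives the observed k**(-1.4) scalar scaling,
# while the Kolmogorov value n = 5/3 returns the self-similar 5/3 exponent.
```

Note the fixed point: n = 5/3 maps to n_theta = 5/3, so the relation reduces to Obukhov-Corrsin scaling without rotation.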
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guerrier, C.; Holcman, D., E-mail: david.holcman@ens.fr; Mathematical Institute, Oxford OX2 6GG, Newton Institute
The main difficulty in simulating diffusion processes at a molecular level in cell microdomains is due to the multiple scales involved, from nano- to micrometers. Few to many particles have to be simulated and simultaneously tracked while they are exploring a large portion of the space for binding small targets, such as buffers or active sites. Bridging the small and large spatial scales is achieved by rare events representing Brownian particles finding small targets and characterized by a long-time distribution. These rare events are the bottleneck of numerical simulations. A naive stochastic simulation requires running many Brownian particles together, which is computationally greedy and inefficient. Solving the associated partial differential equations is also difficult due to the time dependent boundary conditions, narrow passages and mixed boundary conditions at small windows. We present here two reduced modeling approaches for a fast computation of diffusing fluxes in microdomains. The first approach is based on Markov mass-action law equations coupled to a Markov chain. The second is a Gillespie method based on the narrow escape theory for coarse-graining the geometry of the domain into Poissonian rates. The main application concerns diffusion in cellular biology, where we compute as an example the distribution of arrival times of calcium ions to small hidden targets to trigger vesicular release.
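The second (Gillespie-type) approach can be sketched generically. In the illustration below (not the authors' code), the narrow-escape coarse-graining is assumed to have already reduced the geometry to a single Poissonian arrival rate per free particle; the Gillespie loop then draws exponential waiting times from the total propensity:

```python
import math, random

def gillespie_arrivals(n_particles, rate_per_particle, rng):
    """Ordered arrival times of independent Brownian particles at a small
    target, once the narrow-escape geometry has been coarse-grained into a
    Poissonian rate per particle. With n free particles the total propensity
    is n * rate, so each waiting time is Exp(n * rate) distributed."""
    t, n = 0.0, n_particles
    times = []
    while n > 0:
        # 1 - random() lies in (0, 1], so the logarithm is always defined.
        t += -math.log(1.0 - rng.random()) / (n * rate_per_particle)
        times.append(t)
        n -= 1
    return times

rng = random.Random(1)
times = gillespie_arrivals(100, 0.5, rng)   # rate in arbitrary 1/time units
```

This replaces millions of Brownian steps per particle with one random draw per rare event, which is exactly where the speed-up over naive stochastic simulation comes from.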
Experimental verification and simulation of negative index of refraction using Snell's law.
Parazzoli, C G; Greegor, R B; Li, K; Koltenbah, B E C; Tanielian, M
2003-03-14
We report the results of a Snell's law experiment on a negative index of refraction material in free space from 12.6 to 13.2 GHz. Numerical simulations using Maxwell's equations solvers show good agreement with the experimental results, confirming the existence of negative index of refraction materials. The index of refraction is a function of frequency. At 12.6 GHz we measure and compute the real part of the index of refraction to be -1.05. The measurements and simulations of the electromagnetic field profiles were performed at distances of 14lambda and 28lambda from the sample; the fields were also computed at 100lambda.
NASA Technical Reports Server (NTRS)
Beacom, John Francis; Dominik, Kurt G.; Melott, Adrian L.; Perkins, Sam P.; Shandarin, Sergei F.
1991-01-01
Results are presented from a series of gravitational clustering simulations in two dimensions. These simulations are a significant departure from previous work, since in two dimensions one can have large dynamic range in both length scale and mass using present computer technology. Controlled experiments were conducted by varying the slope of power-law initial density fluctuation spectra and varying cutoffs at large k, while holding constant the phases of individual Fourier components and the scale of nonlinearity. Filaments are found in many different simulations, even with pure power-law initial conditions. By direct comparison, filaments, called 'second-generation pancakes' are shown to arise as a consequence of mild nonlinearity on scales much larger than the correlation length and are not relics of an initial lattice or due to sparse sampling of the Fourier components. Bumps of low amplitude in the two-point correlation are found to be generic but usually only statistical fluctuations. Power spectra are much easier to relate to initial conditions, and seem to follow a simple triangular shape (on log-log plot) in the nonlinear regime. The rms density fluctuation with Gaussian smoothing is the most stable indicator of nonlinearity.
Strength computation of forged parts taking into account strain hardening and damage
NASA Astrophysics Data System (ADS)
Cristescu, Michel L.
2004-06-01
Modern non-linear simulation software, such as FORGE 3 (registered trademark of TRANSVALOR), is able to compute the residual stresses, the strain hardening and the damage during the forging process. A thermally dependent elasto-visco-plastic law is used to simulate the behavior of the material of the hot forged piece. A modified Lemaitre law coupled with elasticity, plasticity and thermal effects is used to simulate the damage. After the simulation of the different steps of the forging process, the part is cooled and then virtually machined, in order to obtain the finished part. An elastic computation is then performed to equilibrate the residual stresses, so that we obtain the true geometry of the finished part after machining. The response of the part to the loadings it will sustain during its life is then computed, taking into account the residual stresses, the strain hardening and the damage that occur during forging. This process is illustrated by the forging, virtual machining and stress analysis of an aluminium wheel hub.
Quasi-static earthquake cycle simulation based on nonlinear viscoelastic finite element analyses
NASA Astrophysics Data System (ADS)
Agata, R.; Ichimura, T.; Hyodo, M.; Barbot, S.; Hori, T.
2017-12-01
To explain earthquake generation processes, simulation methods of earthquake cycles have been studied. For such simulations, the combination of the rate- and state-dependent friction law at the fault plane and the boundary integral method based on Green's function in an elastic half space is widely used (e.g. Hori 2009; Barbot et al. 2012). In this approach, stress change around the fault plane due to crustal deformation can be computed analytically, while the effects of complex physics such as mantle rheology and gravity are generally not taken into account. To consider such effects, we seek to develop an earthquake cycle simulation combining crustal deformation computation based on the finite element (FE) method with the rate- and state-dependent friction law. Since the drawback of this approach is the computational cost associated with obtaining numerical solutions, we adopt a recently developed fast and scalable FE solver (Ichimura et al. 2016), which assumes use of supercomputers, to solve the problem in a realistic time. As in the previous approach, we solve the governing equations consisting of the rate- and state-dependent friction law. In solving the equations, we compute stress changes along the fault plane due to crustal deformation using FE simulation, instead of computing them by superimposing slip response function as in the previous approach. In stress change computation, we take into account nonlinear viscoelastic deformation in the asthenosphere. In the presentation, we will show simulation results in a normative three-dimensional problem, where a circular-shaped velocity-weakening area is set in a square-shaped fault plane. The results with and without nonlinear viscosity in the asthenosphere will be compared. We also plan to apply the developed code to simulate the post-earthquake deformation of a megathrust earthquake, such as the 2011 Tohoku earthquake. 
Acknowledgment: The results were obtained using the K computer at the RIKEN (Proposal number hp160221).
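As a minimal illustration of the friction law at the core of such earthquake cycle simulations, the sketch below implements the rate- and state-dependent formulation with the aging form of state evolution. The parameter values are illustrative only, not those of the study:

```python
import math

MU0, A, B, V0, DC = 0.6, 0.010, 0.015, 1e-6, 0.01   # illustrative parameters

def friction(v, theta):
    """Rate- and state-dependent friction:
    mu = mu0 + a*ln(v/v0) + b*ln(v0*theta/dc)."""
    return MU0 + A * math.log(v / V0) + B * math.log(V0 * theta / DC)

def evolve_state(theta, v, dt):
    """Aging law d(theta)/dt = 1 - v*theta/dc, advanced by explicit Euler."""
    return theta + dt * (1.0 - v * theta / DC)

# At constant slip rate the state relaxes toward theta_ss = dc/v, so friction
# approaches mu_ss = mu0 + (a - b)*ln(v/v0); with a < b the interface is
# velocity-weakening, as in the circular patch described in the abstract.
v = 1e-5                          # slip rate, m/s
theta = 10.0 * DC / v             # start far from steady state
for _ in range(20000):
    theta = evolve_state(theta, v, dt=0.5)
```

In the paper's setting the stress driving this law comes from the finite element crustal deformation solve rather than a Green's function superposition; the friction law itself is unchanged.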
Toward Petascale Biologically Plausible Neural Networks
NASA Astrophysics Data System (ADS)
Long, Lyle
This talk will describe an approach to achieving petascale neural networks. Artificial intelligence has been oversold for many decades. Computers in the beginning could only do about 16,000 operations per second. Computer processing power, however, has been doubling every two years thanks to Moore's law, and growing even faster due to massively parallel architectures. Finally, 60 years after the first AI conference we have computers on the order of the performance of the human brain (10^16 operations per second). The main issues now are algorithms, software, and learning. We have excellent models of neurons, such as the Hodgkin-Huxley model, but we do not know how the human neurons are wired together. With careful attention to efficient parallel computing, event-driven programming, table lookups, and memory minimization, massive scale simulations can be performed. The code that will be described was written in C++ and uses the Message Passing Interface (MPI). It uses the full Hodgkin-Huxley neuron model, not a simplified model. It also allows arbitrary network structures (deep, recurrent, convolutional, all-to-all, etc.). The code is scalable, and has, so far, been tested on up to 2,048 processor cores using 10^7 neurons and 10^9 synapses.
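As a rough illustration of the neuron model the talk refers to, here is a compact explicit-Euler integration of the standard Hodgkin-Huxley equations for a single point neuron, using textbook squid-axon parameters. This is a generic sketch, not the speaker's MPI code:

```python
import math

# Standard squid-axon parameters (uF/cm^2, mS/cm^2, mV).
C_M, G_NA, G_K, G_L = 1.0, 120.0, 36.0, 0.3
E_NA, E_K, E_L = 50.0, -77.0, -54.387

def _vtrap(x, y):
    """x / (1 - exp(-x/y)) with its removable singularity at x = 0 handled."""
    return y if abs(x / y) < 1e-6 else x / (1.0 - math.exp(-x / y))

def hh_step(v, m, h, n, i_ext, dt):
    """One explicit-Euler step of the Hodgkin-Huxley equations (t in ms)."""
    a_m, b_m = 0.1 * _vtrap(v + 40.0, 10.0), 4.0 * math.exp(-(v + 65.0) / 18.0)
    a_h = 0.07 * math.exp(-(v + 65.0) / 20.0)
    b_h = 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
    a_n, b_n = 0.01 * _vtrap(v + 55.0, 10.0), 0.125 * math.exp(-(v + 65.0) / 80.0)
    i_ion = (G_NA * m**3 * h * (v - E_NA)
             + G_K * n**4 * (v - E_K)
             + G_L * (v - E_L))
    v += dt * (i_ext - i_ion) / C_M
    m += dt * (a_m * (1.0 - m) - b_m * m)
    h += dt * (a_h * (1.0 - h) - b_h * h)
    n += dt * (a_n * (1.0 - n) - b_n * n)
    return v, m, h, n

# Drive the neuron with a constant 10 uA/cm^2 current and record the voltage.
v, m, h, n = -65.0, 0.053, 0.596, 0.317   # approximate resting state
trace = []
for _ in range(40000):                    # 40 ms at dt = 0.001 ms
    v, m, h, n = hh_step(v, m, h, n, i_ext=10.0, dt=0.001)
    trace.append(v)
```

Scaling this to 10^7 neurons is then a question of distributing such state updates and the synaptic events across MPI ranks, which is the subject of the talk.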
Deadbeat Predictive Controllers
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan; Phan, Minh
1997-01-01
Several new computational algorithms are presented to compute the deadbeat predictive control law. The first algorithm makes use of a multi-step-ahead output prediction to compute the control law without explicitly calculating the controllability matrix. The system identification must be performed first and then the predictive control law is designed. The second algorithm uses the input and output data directly to compute the feedback law. It combines the system identification and the predictive control law into one formulation. The third algorithm uses an observable-canonical form realization to design the predictive controller. The relationship between all three algorithms is established through the use of the state-space representation. All algorithms are applicable to multi-input, multi-output systems with disturbance inputs. In addition to the feedback terms, feedforward terms may also be added for disturbance inputs if they are measurable. Although the feedforward terms do not influence the stability of the closed-loop feedback law, they enhance the performance of the controlled system.
The Erector Set Computer: Building a Virtual Workstation over a Large Multi-Vendor Network.
ERIC Educational Resources Information Center
Farago, John M.
1989-01-01
Describes a computer network developed at the City University of New York Law School that uses device sharing and local area networking to create a simulated law office. Topics discussed include working within a multi-vendor environment, and the communication, information, and database access services available through the network. (CLB)
NASA Astrophysics Data System (ADS)
Maire, Pierre-Henri; Abgrall, Rémi; Breil, Jérôme; Loubère, Raphaël; Rebourcet, Bernard
2013-02-01
In this paper, we describe a cell-centered Lagrangian scheme devoted to the numerical simulation of solid dynamics on two-dimensional unstructured grids in planar geometry. This numerical method utilizes the classical elastic-perfectly plastic material model initially proposed by Wilkins [M.L. Wilkins, Calculation of elastic-plastic flow, Meth. Comput. Phys. (1964)]. In this model, the Cauchy stress tensor is decomposed into the sum of its deviatoric part and the thermodynamic pressure which is defined by means of an equation of state. Regarding the deviatoric stress, its time evolution is governed by a classical constitutive law for isotropic material. The plasticity model employs the von Mises yield criterion and is implemented by means of the radial return algorithm. The numerical scheme relies on a finite volume cell-centered method wherein numerical fluxes are expressed in terms of sub-cell force. The generic form of the sub-cell force is obtained by requiring the scheme to satisfy a semi-discrete dissipation inequality. Sub-cell force and nodal velocity to move the grid are computed consistently with cell volume variation by means of a node-centered solver, which results from total energy conservation. The nominally second-order extension is achieved by developing a two-dimensional extension in the Lagrangian framework of the Generalized Riemann Problem methodology, introduced by Ben-Artzi and Falcovitz [M. Ben-Artzi, J. Falcovitz, Generalized Riemann Problems in Computational Fluid Dynamics, Cambridge Monogr. Appl. Comput. Math. (2003)]. Finally, the robustness and the accuracy of the numerical scheme are assessed through the computation of several test cases.
Studying Scientific Discovery by Computer Simulation.
1983-03-30
Mendel's laws of inheritance, the law of Gay-Lussac for gaseous reactions, the law of Dulong and Petit, the derivation of atomic weights by Avogadro…
Keywords: scientific discovery, intrinsic properties, physical laws, extensive terms, data-driven heuristics, intensive terms, theory-driven heuristics, conservation laws.
Abstract: Scientific discovery…
High-resolution schemes for hyperbolic conservation laws
NASA Technical Reports Server (NTRS)
Harten, A.
1982-01-01
A class of new explicit second order accurate finite difference schemes for the computation of weak solutions of hyperbolic conservation laws is presented. These highly nonlinear schemes are obtained by applying a nonoscillatory first order accurate scheme to an appropriately modified flux function. The resulting second order accurate schemes achieve high resolution while preserving the robustness of the original nonoscillatory first order accurate scheme.
Entropy-Based Approach To Nonlinear Stability
NASA Technical Reports Server (NTRS)
Merriam, Marshal L.
1991-01-01
This NASA technical memorandum suggests making schemes for the numerical solution of flow equations more accurate and robust by invoking the second law of thermodynamics. Instead of using artificial viscosity to suppress unphysical solutions such as spurious numerical oscillations and nonlinear instabilities, it proposes formulating the equations so that the rate of entropy production within each cell of the computational grid is nonnegative, as the second law requires.
Development and validation of real-time simulation of X-ray imaging with respiratory motion.
Vidal, Franck P; Villard, Pierre-Frédéric
2016-04-01
We present a framework that combines evolutionary optimisation, soft tissue modelling and ray tracing on GPU to simultaneously compute the respiratory motion and X-ray imaging in real-time. Our aim is to provide validated building blocks with high fidelity to closely match both the human physiology and the physics of X-rays. A CPU-based set of algorithms is presented to model organ behaviours during respiration. Soft tissue deformation is computed with an extension of the Chain Mail method. Rigid elements move according to kinematic laws. A GPU-based surface rendering method is proposed to compute the X-ray image using the Beer-Lambert law. It is provided as an open-source library. A quantitative validation study is provided to objectively assess the accuracy of both components: (i) the respiration against anatomical data, and (ii) the X-ray against the Beer-Lambert law and the results of Monte Carlo simulations. Our implementation can be used in various applications, such as interactive medical virtual environment to train percutaneous transhepatic cholangiography in interventional radiology, 2D/3D registration, computation of digitally reconstructed radiograph, simulation of 4D sinograms to test tomography reconstruction tools. Copyright © 2015 Elsevier Ltd. All rights reserved.
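The X-ray stage reduces, per ray, to an evaluation of the Beer-Lambert law over the material segments the ray crosses. A minimal sketch (the attenuation coefficients below are illustrative, not clinical values):

```python
import math

def beer_lambert(i0, segments):
    """Transmitted X-ray intensity along one ray through piecewise-homogeneous
    matter: I = I0 * exp(-sum(mu_i * d_i)), with mu in 1/cm and d in cm.
    In a renderer, the per-segment (mu, d) pairs come from ray/mesh
    intersections computed on the GPU."""
    return i0 * math.exp(-sum(mu * d for mu, d in segments))

# Illustrative values only: 2 cm of one material, then 1 cm of a denser one.
ray = [(0.2, 2.0), (0.5, 1.0)]
transmitted = beer_lambert(1.0, ray)   # = exp(-0.9) of the incident intensity
```

Because the exponent is a simple sum along the ray, the computation parallelizes naturally over pixels, which is what makes real-time GPU evaluation feasible.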
Computer simulation of space charge
NASA Astrophysics Data System (ADS)
Yu, K. W.; Chung, W. K.; Mak, S. S.
1991-05-01
Using the particle-mesh (PM) method, a one-dimensional simulation of the well-known Langmuir-Child's law is performed on an INTEL 80386-based personal computer system. The program is coded in Turbo Basic (trademark of Borland International, Inc.). The numerical results obtained were in excellent agreement with theoretical predictions, and the computational time required is quite modest. This simulation exercise demonstrates that simple computer simulations using particles may be implemented successfully on the PCs available today, and hopefully this will provide the necessary incentive for newcomers to the field who wish to acquire a flavor of the elementary aspects of the practice.
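The heart of a PM code is the charge-deposition step that couples particles to the grid. A minimal cloud-in-cell deposition in one dimension might look like this (a generic sketch of the method, not the Turbo Basic program):

```python
def deposit_cic(positions, charges, n_cells, length):
    """Cloud-in-cell charge deposition, the first stage of a 1-D particle-mesh
    step: each particle's charge is shared linearly between its two
    neighbouring grid points (periodic boundary assumed). Returns the charge
    density on the grid; a Poisson solve and field interpolation follow."""
    dx = length / n_cells
    rho = [0.0] * n_cells
    for x, q in zip(positions, charges):
        s = x / dx
        j = int(s) % n_cells
        w = s - int(s)                    # fractional distance past node j
        rho[j] += q * (1.0 - w) / dx
        rho[(j + 1) % n_cells] += q * w / dx
    return rho
```

The linear weights make the deposited density vary smoothly as particles cross cell boundaries, which keeps the space-charge field free of the noise a nearest-grid-point scheme would produce.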
Control Law Design in a Computational Aeroelasticity Environment
NASA Technical Reports Server (NTRS)
Newsom, Jerry R.; Robertshaw, Harry H.; Kapania, Rakesh K.
2003-01-01
A methodology for designing active control laws in a computational aeroelasticity environment is given. The methodology involves employing a systems identification technique to develop an explicit state-space model for control law design from the output of a computational aeroelasticity code. The particular computational aeroelasticity code employed in this paper solves the transonic small disturbance aerodynamic equation using a time-accurate, finite-difference scheme. Linear structural dynamics equations are integrated simultaneously with the computational fluid dynamics equations to determine the time responses of the structure. These structural responses are employed as the input to a modern systems identification technique that determines the Markov parameters of an "equivalent linear system". The Eigensystem Realization Algorithm is then employed to develop an explicit state-space model of the equivalent linear system. The Linear Quadratic Gaussian control law design technique is employed to design a control law. The computational aeroelasticity code is modified to accept control laws and perform closed-loop simulations. Flutter control of a rectangular wing model is chosen to demonstrate the methodology. Various cases are used to illustrate the usefulness of the methodology as the nonlinearity of the aeroelastic system is increased through increased angle-of-attack changes.
On improving the algorithm efficiency in the particle-particle force calculations
NASA Astrophysics Data System (ADS)
Kozynchenko, Alexander I.; Kozynchenko, Sergey A.
2016-09-01
The problem of calculating inter-particle forces in particle-particle (PP) simulation models takes an important place in scientific computing. Such simulation models are used in diverse scientific applications arising in astrophysics, plasma physics, particle accelerators, etc., where long-range forces are considered. Inverse-square laws such as Coulomb's law of electrostatic force and Newton's law of universal gravitation are examples of laws pertaining to long-range forces. The standard naïve PP method outlined, for example, by Hockney and Eastwood [1] is straightforward, processing all pairs of particles in a double nested loop. The PP algorithm provides the best accuracy of all possible methods, but its computational complexity is O(Np^2), where Np is the total number of particles involved. The low efficiency of the PP algorithm becomes a challenging issue in cases where high accuracy is required. An example can be taken from charged particle beam dynamics where, in computing the beam's own space charge, so-called macro-particles are used (see e.g., Humphries Jr. [2], Kozynchenko and Svistunov [3]).
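The naïve PP scheme referred to above is a double nested loop over all pairs; a minimal Coulomb-like sketch (with the force constant normalized to 1) makes the O(Np^2) structure explicit:

```python
def pp_forces(positions, charges, k=1.0):
    """Naive particle-particle force computation: every pair is visited in a
    double nested loop, so the cost is O(Np^2). `k` plays the role of the
    Coulomb constant; positions are 2-D tuples."""
    n = len(positions)
    forces = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            dx = positions[j][0] - positions[i][0]
            dy = positions[j][1] - positions[i][1]
            r2 = dx * dx + dy * dy
            r = r2 ** 0.5
            f = k * charges[i] * charges[j] / r2   # inverse-square magnitude
            fx, fy = f * dx / r, f * dy / r
            # Each pair is evaluated once; Newton's third law supplies the
            # equal-and-opposite contribution for free.
            forces[i][0] -= fx
            forces[i][1] -= fy
            forces[j][0] += fx
            forces[j][1] += fy
    return forces
```

Exploiting the pair symmetry halves the constant factor but leaves the quadratic scaling intact, which is why algorithmic improvements (trees, particle-mesh hybrids) are the subject of the paper.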
Propagation of uncertainty by Monte Carlo simulations in case of basic geodetic computations
NASA Astrophysics Data System (ADS)
Wyszkowska, Patrycja
2017-12-01
The determination of the accuracy of functions of measured or adjusted values may be a problem in geodetic computations. The general law of covariance propagation or, in the case of uncorrelated observations, the propagation of variance (the Gaussian formula) is commonly used for that purpose. That approach is theoretically justified for linear functions. In the case of non-linear functions, the first-order Taylor series expansion is usually used, but that solution is affected by the expansion error. The aim of the study is to determine the applicability of the general variance propagation law to the non-linear functions used in basic geodetic computations. The paper presents the errors which result from neglecting the higher-order terms and determines the range of validity of such a simplification. The basis of the analysis is a comparison of the results obtained by the law of propagation of variance and by the probabilistic approach, namely Monte Carlo simulations. Both methods are used to determine the accuracy of the following geodetic computations: the Cartesian coordinates of an unknown point in the three-point resection problem, azimuths and distances from Cartesian coordinates, and height differences in trigonometric and geometric levelling. The simulations and the analysis of the results confirm the possibility of applying the general law of variance propagation in basic geodetic computations even if the functions are non-linear. The only condition is the accuracy of the observations, which cannot be too low.
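The comparison at the heart of the study can be sketched for the simplest case: a planar distance between two points with equal, uncorrelated coordinate uncertainties. This is a generic illustration of the two approaches, not the paper's computations:

```python
import math, random

def distance(x1, y1, x2, y2):
    return math.hypot(x2 - x1, y2 - y1)

def propagated_sigma(x1, y1, x2, y2, sigma):
    """First-order variance propagation (the Gaussian formula) for the
    distance: sigma_d^2 = sum_i (df/dx_i)^2 * sigma^2, with the four
    coordinates uncorrelated and sharing the standard deviation `sigma`."""
    d = distance(x1, y1, x2, y2)
    partials = [(x1 - x2) / d, (y1 - y2) / d, (x2 - x1) / d, (y2 - y1) / d]
    return sigma * math.sqrt(sum(p * p for p in partials))

def monte_carlo_sigma(x1, y1, x2, y2, sigma, n, rng):
    """Sample standard deviation of the distance under N(0, sigma) noise
    added independently to every coordinate."""
    ds = [distance(x1 + rng.gauss(0, sigma), y1 + rng.gauss(0, sigma),
                   x2 + rng.gauss(0, sigma), y2 + rng.gauss(0, sigma))
          for _ in range(n)]
    mean = sum(ds) / n
    return math.sqrt(sum((d - mean) ** 2 for d in ds) / (n - 1))

rng = random.Random(42)
analytic = propagated_sigma(0.0, 0.0, 100.0, 40.0, 0.05)   # = 0.05*sqrt(2)
sampled = monte_carlo_sigma(0.0, 0.0, 100.0, 40.0, 0.05, 20000, rng)
```

When sigma is small relative to the distance the two estimates agree closely, matching the paper's conclusion; as sigma grows, the neglected higher-order terms make them diverge, which is the expansion error the paper quantifies.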
Systems-on-chip approach for real-time simulation of wheel-rail contact laws
NASA Astrophysics Data System (ADS)
Mei, T. X.; Zhou, Y. J.
2013-04-01
This paper presents the development of a systems-on-chip approach to speed up the simulation of wheel-rail contact laws, which can be used to reduce the requirement for high-performance computers and enable simulation in real time for the use of hardware-in-loop for experimental studies of the latest vehicle dynamic and control technologies. The wheel-rail contact laws are implemented using a field programmable gate array (FPGA) device with a design that substantially outperforms modern general-purpose PC platforms or fixed architecture digital signal processor devices in terms of processing time, configuration flexibility and cost. In order to utilise the FPGA's parallel-processing capability, the operations in the contact laws algorithms are arranged in a parallel manner and multi-contact patches are tackled simultaneously in the design. The interface between the FPGA device and the host PC is achieved by using a high-throughput and low-latency Ethernet link. The development is based on FASTSIM algorithms, although the design can be adapted and expanded for even more computationally demanding tasks.
NASA Technical Reports Server (NTRS)
Davidson, John B.; Murphy, Patrick C.; Lallman, Frederick J.; Hoffler, Keith D.; Bacon, Barton J.
1998-01-01
This report contains a description of a lateral-directional control law designed for the NASA High-Alpha Research Vehicle (HARV). The HARV is a F/A-18 aircraft modified to include a research flight computer, spin chute, and thrust-vectoring in the pitch and yaw axes. Two separate design tools, CRAFT and Pseudo Controls, were integrated to synthesize the lateral-directional control law. This report contains a description of the lateral-directional control law, analyses, and nonlinear simulation (batch and piloted) results. Linear analysis results include closed-loop eigenvalues, stability margins, robustness to changes in various plant parameters, and servo-elastic frequency responses. Step time responses from nonlinear batch simulation are presented and compared to design guidelines. Piloted simulation task scenarios, task guidelines, and pilot subjective ratings for the various maneuvers are discussed. Linear analysis shows that the control law meets the stability margin guidelines and is robust to stability and control parameter changes. Nonlinear batch simulation analysis shows the control law exhibits good performance and meets most of the design guidelines over the entire range of angle-of-attack. This control law (designated NASA-1A) was flight tested during the Summer of 1994 at NASA Dryden Flight Research Center.
Exploring Focal and Aberration Properties of Electrostatic Lenses through Computer Simulation
ERIC Educational Resources Information Center
Sise, Omer; Manura, David J.; Dogan, Mevlut
2008-01-01
The interactive nature of computer simulation allows students to develop a deeper understanding of the laws of charged particle optics. Here, the use of commercially available optical design programs is described as a tool to aid in solving charged particle optics problems. We describe simple and practical demonstrations of basic electrostatic…
Scaling laws for first and second generation electrospray droplets
NASA Astrophysics Data System (ADS)
Basaran, Osman; Sambath, Krishnaraj; Anthony, Christopher; Collins, Robert; Wagoner, Brayden; Harris, Michael
2017-11-01
When uncharged liquid interfaces of pendant and free drops (hereafter referred to as parent drops) or liquid films are subject to a sufficiently strong electric field, they can emit thin fluid jets from conical tip structures that form at their surfaces. The disintegration of such jets into a spray consisting of charged droplets (hereafter referred to as daughter droplets) is common to electrospray ionization mass spectrometry, printing and coating processes, and raindrops in thunderclouds. We use simulation to determine the sizes and charges of these first-generation daughter droplets which are shown to be Coulombically stable and charged below the Rayleigh limit of stability. Once these daughter droplets shrink in size due to evaporation, they in turn reach their respective Rayleigh limits and explode by emitting yet even smaller second-generation daughter droplets from their conical tips. Once again, we use simulation and theory to deduce scaling laws for the sizes and charges of these second-generation droplets. A comparison is also provided for scaling laws pertaining to different generations of daughter droplets.
NASA Technical Reports Server (NTRS)
Sturdza, Peter (Inventor); Martins-Rivas, Herve (Inventor); Suzuki, Yoshifumi (Inventor)
2014-01-01
A fluid-flow simulation over a computer-generated surface is generated using a quasi-simultaneous technique. The simulation includes a fluid-flow mesh of inviscid and boundary-layer fluid cells. An initial fluid property for an inviscid fluid cell is determined using an inviscid fluid simulation that does not simulate fluid viscous effects. An initial boundary-layer fluid property for a boundary-layer fluid cell is determined using the initial fluid property and a viscous fluid simulation that simulates fluid viscous effects. An updated boundary-layer fluid property is determined for the boundary-layer fluid cell using the initial fluid property, initial boundary-layer fluid property, and an interaction law. The interaction law approximates the inviscid fluid simulation using a matrix of aerodynamic influence coefficients computed using a two-dimensional surface panel technique and a fluid-property vector. An updated fluid property is determined for the inviscid fluid cell using the updated boundary-layer fluid property.
Metriplectic simulated annealing for quasigeostrophic flow
NASA Astrophysics Data System (ADS)
Morrison, P. J.; Flierl, G. R.
2016-11-01
Metriplectic dynamics is a general form for dynamical systems that embodies the first and second laws of thermodynamics: energy conservation and entropy production. The formalism provides an H-theorem for relaxation to nontrivial equilibrium states. Upon choosing enstrophy as entropy and potential vorticity of the form q = ∇²Ψ + T(x), recent computational results will be described for various topography functions T(x), including a ridge (T = exp(−x²/2)) and random functions. Interpretation of the results, in particular their sensitivity to the chosen entropy function, will be discussed. PJM supported by U.S. Dept. of Energy Contract # DE-FG05-80ET-53088.
A Genetic Algorithm for UAV Routing Integrated with a Parallel Swarm Simulation
2005-03-01
Metrics. 2.3.5.1 Amdahl's, Gustafson-Barsis's, and Sun-Ni's Laws. At the heart of parallel computing is the ratio of communication time to … parallel execution. Three 'laws' in particular are of interest with regard to this ratio: Amdahl's Law, Gustafson-Barsis's Law, and Sun-Ni's Law. … Amdahl's Law makes the case for fixed-size speedup. This conjecture states that speedup saturates and efficiency drops as a consequence of holding the problem size fixed.
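The contrast between the first two scaling laws in this excerpt can be written down directly. A minimal sketch (function names and parameter values are illustrative, not from the report):

```python
def amdahl_speedup(p, n):
    """Fixed-size speedup: the serial fraction (1 - p) caps scaling."""
    return 1.0 / ((1.0 - p) + p / n)

def gustafson_speedup(p, n):
    """Scaled speedup: the parallel part of the workload grows with the machine."""
    return (1.0 - p) + p * n

# With a 95% parallelizable workload on 8 processors:
s_amdahl = amdahl_speedup(0.95, 8)       # saturating speedup
s_gustafson = gustafson_speedup(0.95, 8) # near-linear scaled speedup
s_limit = amdahl_speedup(0.95, 10**9)    # asymptotic cap: 1 / (1 - p) = 20
```

Even with a billion processors, Amdahl's fixed-size speedup cannot exceed 1/(1 − p); Gustafson-Barsis's scaled speedup grows without such a cap, which is the "case for fixed size speedup" distinction the excerpt refers to.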
NASA Technical Reports Server (NTRS)
Denning, Peter J.
1990-01-01
Strong artificial intelligence claims that conscious thought can arise in computers containing the right algorithms even though none of the programs or components of those computers understands what is going on. As proof, it asserts that brains are finite webs of neurons, each with a definite function governed by the laws of physics; this web has a set of equations that can be solved (or simulated) by a sufficiently powerful computer. Strong AI claims the Turing test as a criterion of success. A recent debate in Scientific American concludes that the Turing test is not sufficient, but leaves intact the underlying premise that thought is a computable process. The recent book by Roger Penrose, however, offers a sharp challenge, arguing that the laws of quantum physics may govern mental processes and that these laws may not be computable. In every area of mathematics and physics, Penrose finds evidence of nonalgorithmic human activity and concludes that mental processes are inherently more powerful than computational processes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maire, Pierre-Henri, E-mail: maire@celia.u-bordeaux1.fr; Abgrall, Rémi, E-mail: remi.abgrall@math.u-bordeau1.fr; Breil, Jérôme, E-mail: breil@celia.u-bordeaux1.fr
2013-02-15
In this paper, we describe a cell-centered Lagrangian scheme devoted to the numerical simulation of solid dynamics on two-dimensional unstructured grids in planar geometry. This numerical method utilizes the classical elastic-perfectly plastic material model initially proposed by Wilkins [M.L. Wilkins, Calculation of elastic–plastic flow, Meth. Comput. Phys. (1964)]. In this model, the Cauchy stress tensor is decomposed into the sum of its deviatoric part and the thermodynamic pressure, which is defined by means of an equation of state. Regarding the deviatoric stress, its time evolution is governed by a classical constitutive law for isotropic material. The plasticity model employs the von Mises yield criterion and is implemented by means of the radial return algorithm. The numerical scheme relies on a finite volume cell-centered method wherein numerical fluxes are expressed in terms of sub-cell force. The generic form of the sub-cell force is obtained by requiring the scheme to satisfy a semi-discrete dissipation inequality. Sub-cell force and nodal velocity to move the grid are computed consistently with cell volume variation by means of a node-centered solver, which results from total energy conservation. The nominally second-order extension is achieved by developing a two-dimensional extension in the Lagrangian framework of the Generalized Riemann Problem methodology, introduced by Ben-Artzi and Falcovitz [M. Ben-Artzi, J. Falcovitz, Generalized Riemann Problems in Computational Fluid Dynamics, Cambridge Monogr. Appl. Comput. Math. (2003)]. Finally, the robustness and the accuracy of the numerical scheme are assessed through the computation of several test cases.
Population patterns in World’s administrative units
Miramontes, Pedro; Cocho, Germinal
2017-01-01
Whereas there has been an extended discussion concerning city population distribution, little has been said about that of administrative divisions. In this work, we investigate the population distribution of second-level administrative units of 150 countries and territories and propose the discrete generalized beta distribution (DGBD) rank-size function to describe the data. After testing the balance between the goodness of fit and number of parameters of this function compared with a power law, which is the most common model for city population, the DGBD is a good statistical model for 96% of our datasets and preferred over a power law in almost every case. Moreover, the DGBD is preferred over a power law for fitting country population data, which can be seen as the zeroth-level administrative unit. We present a computational toy model to simulate the formation of administrative divisions in one dimension and give numerical evidence that the DGBD arises from a particular case of this model. This model, along with the fitting of the DGBD, proves adequate in reproducing and describing local unit evolution and its effect on the population distribution. PMID:28791153
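The DGBD rank-size function compared against the power law above has, in the related literature, the two-exponent form f(r) = A(N + 1 − r)^b / r^a, where r is the rank and N the number of units; the parameter values below are illustrative, and this is a sketch of the functional form only, not the paper's fitting procedure:

```python
def dgbd(r, N, A, a, b):
    """Discrete generalized beta distribution rank-size function."""
    return A * (N + 1 - r) ** b / r ** a

N = 100

# With b = 0 the second factor is identically 1 and the DGBD collapses
# to the power law f(r) = A / r^a it is tested against in the paper:
for r in (1, 5, 50):
    assert dgbd(r, N, A=2.0, a=1.3, b=0.0) == 2.0 / r ** 1.3

# With b > 0 the tail falls off faster than the pure power law:
# f(N)/f(1) = N^-(a+b) < N^-a.
ratio = dgbd(N, N, 2.0, 1.3, 0.5) / dgbd(1, N, 2.0, 1.3, 0.5)
assert ratio < 1.0 / N ** 1.3
```

The extra exponent b is what lets the DGBD bend the last ranks downward, which is why it can outperform a plain power law on administrative-unit data while containing the power law as a special case.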
Transmit beamforming for optimal second-harmonic generation.
Hoilund-Kaupang, Halvard; Masoy, Svein-Erik
2011-08-01
A simulation study of transmit ultrasound beams from several transducer configurations is conducted to compare second-harmonic imaging at 3.5 MHz and 11 MHz. Second-harmonic generation and the ability to suppress near-field echoes are compared. Each transducer configuration is defined by a chosen f-number and focal depth, and the transmit pressure is estimated so as not to exceed a mechanical index of 1.2. The medium resembles homogeneous muscle tissue with nonlinear elasticity and power-law attenuation. To improve computational efficiency, the KZK equation is utilized, and all transducers are circular-symmetric. Previous literature shows that second-harmonic generation is proportional to the square of the transmit pressure, and that transducer configurations with different transmit frequencies, but equal aperture and focal depth in terms of wavelengths, generate identical second-harmonic fields in terms of shape. Results verify this for a medium with attenuation proportional to f^1. For attenuation proportional to f^1.1, deviations are found, and the high frequency subsequently performs worse than the low frequency. The results suggest that high frequencies are less able than low frequencies to suppress near-field echoes in the presence of a heterogeneous body wall.
Turbulence modeling: Near-wall turbulence and effects of rotation on turbulence
NASA Technical Reports Server (NTRS)
Shih, T.-H.
1990-01-01
Many Reynolds-averaged Navier-Stokes solvers use closure models in conjunction with 'the law of the wall', rather than deal with a thin, viscous sublayer near the wall. This work is motivated by the need for better models to compute near-wall turbulent flow. The authors use direct numerical simulations of fully developed channel flow and of a three-dimensional turbulent boundary layer flow to develop new models. These direct numerical simulations provide detailed data that experimentalists have not been able to measure directly. Another objective of the work is to examine analytically the effects of rotation on turbulence, using Rapid Distortion Theory (RDT). This work was motivated by the observation that the pressure-strain models in all current second-order closure models are unable to predict the effects of rotation on turbulence.
A New Model that Generates Lotka's Law.
ERIC Educational Resources Information Center
Huber, John C.
2002-01-01
Develops a new model for a process that generates Lotka's Law. Topics include measuring scientific productivity through the number of publications; rate of production; career duration; randomness; Poisson distribution; computer simulations; goodness-of-fit; theoretical support for the model; and future research. (Author/LRW)
NASA Technical Reports Server (NTRS)
Hague, D. S.
1977-01-01
Computer simulations of the one-on-one aerial combat encounter are generated under the control of specified guidance laws. Given an initial state, the vehicle and atmospheric characteristics, and the guidance laws, the aerial combat encounter is simulated by forward integration of the two vehicles' motions. The development of a combat guidance law which converts positional advantage into an improved firing opportunity is reported. A combination of lag, line-of-sight, and lead pursuit steering paths is followed in the guidance law. The law is based on steering error, target angle-off, and the relative velocities. It is readily automated either as an onboard aid to manned aircraft pilots or as a combat guidance law for unmanned vehicles.
Sukop, Michael C.; Huang, Haibo; Alvarez, Pedro F.; Variano, Evan A.; Cunningham, Kevin J.
2013-01-01
Lattice Boltzmann flow simulations provide a physics-based means of estimating intrinsic permeability from pore structure and accounting for inertial flow that leads to departures from Darcy's law. Simulations were used to compute intrinsic permeability where standard measurement methods may fail and to provide better understanding of departures from Darcy's law under field conditions. Simulations also investigated resolution issues. Computed tomography (CT) images were acquired at 0.8 mm interscan spacing for seven samples characterized by centimeter-scale biogenic vuggy macroporosity from the extremely transmissive sole-source carbonate karst Biscayne aquifer in southeastern Florida. Samples were as large as 0.3 m in length; 7–9 cm-scale-length subsamples were used for lattice Boltzmann computations. Macroporosity of the subsamples was as high as 81%. Matrix porosity was ignored in the simulations. Non-Darcy behavior led to a twofold reduction in apparent hydraulic conductivity as an applied hydraulic gradient increased to levels observed at regional scale within the Biscayne aquifer; larger reductions are expected under higher gradients near wells and canals. Thus, inertial flows and departures from Darcy's law may occur under field conditions. Changes in apparent hydraulic conductivity with changes in head gradient computed with the lattice Boltzmann model closely fit the Darcy-Forchheimer equation allowing estimation of the Forchheimer parameter. CT-scan resolution appeared adequate to capture intrinsic permeability; however, departures from Darcy behavior were less detectable as resolution coarsened.
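Estimating the Forchheimer parameter as described amounts to fitting the Darcy-Forchheimer relation −dP/dx = (μ/k)q + βρq² to gradient-flux pairs. A sketch with synthetic, noise-free data (all values are illustrative, not the Biscayne aquifer data; the fit is plain least squares via the 2×2 normal equations):

```python
mu, rho = 1.0e-3, 1000.0           # water viscosity (Pa s) and density (kg/m^3)
k_true, beta_true = 1.0e-9, 5.0e4  # "true" permeability (m^2), Forchheimer coeff (1/m)

qs = [i * 1.0e-4 for i in range(1, 21)]                         # Darcy fluxes (m/s)
grads = [mu / k_true * q + beta_true * rho * q**2 for q in qs]  # -dP/dx (Pa/m)

# Least squares for g = a*q + b*q^2 (no intercept) via normal equations.
S1 = sum(q**2 for q in qs); S2 = sum(q**3 for q in qs); S3 = sum(q**4 for q in qs)
T1 = sum(q * g for q, g in zip(qs, grads))
T2 = sum(q**2 * g for q, g in zip(qs, grads))
det = S1 * S3 - S2 * S2
a = (T1 * S3 - T2 * S2) / det   # a = mu / k
b = (S1 * T2 - S2 * T1) / det   # b = beta * rho

k_est, beta_est = mu / a, b / rho
```

The quadratic term is what produces the apparent drop in hydraulic conductivity at higher gradients: as q grows, the βρq² contribution makes −dP/dx rise faster than Darcy's law alone predicts.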
Solid H2 in the interstellar medium
NASA Astrophysics Data System (ADS)
Füglistaler, A.; Pfenniger, D.
2018-06-01
Context. Condensation of H2 in the interstellar medium (ISM) has long been seen as a possibility, either by deposition on dust grains or thanks to a phase transition combined with self-gravity. H2 condensation might explain the observed low efficiency of star formation and might help to hide baryons in spiral galaxies. Aims: Our aim is to quantify the solid fraction of H2 in the ISM due to a phase transition including self-gravity for different densities and temperatures, in order to use the results in more complex simulations of the ISM as subgrid physics. Methods: We used molecular dynamics simulations of fluids at different temperatures and densities to study the formation of solids. Once the simulations reached a steady state, we calculated the solid mass fraction, energy increase, and timescales. By determining the power laws measured over several orders of magnitude, we extrapolated to lower densities the higher-density fluids that can be simulated with current computers. Results: The solid fraction and energy increase of fluids in a phase transition are above 0.1 and do not follow a power law. Fluids out of a phase transition still form a small amount of solids due to chance encounters of molecules. The solid mass fraction and energy increase of these fluids are linearly dependent on density and can easily be extrapolated. The timescale is below one second, so the condensation can be considered instantaneous. Conclusions: The presence of solid H2 grains has important dynamic implications on the ISM as they may be the building blocks for larger solid bodies when gravity is included. We provide the solid mass fraction, energy increase, and timescales for high-density fluids and extrapolation laws for lower densities.
Discussion on ``Frontiers of the Second Law''
NASA Astrophysics Data System (ADS)
Lloyd, Seth; Bejan, Adrian; Bennett, Charles; Beretta, Gian Paolo; Butler, Howard; Gordon, Lyndsay; Grmela, Miroslav; Gyftopoulos, Elias P.; Hatsopoulos, George N.; Jou, David; Kjelstrup, Signe; Lior, Noam; Miller, Sam; Rubi, Miguel; Schneider, Eric D.; Sekulic, Dusan P.; Zhang, Zhuomin
2008-08-01
This article reports an open discussion that took place during the Keenan Symposium "Meeting the Entropy Challenge" (held in Cambridge, Massachusetts, on October 4, 2007) following the short presentations—each reported as a separate article in the present volume—by Adrian Bejan, Bjarne Andresen, Miguel Rubi, Signe Kjelstrup, David Jou, Miroslav Grmela, Lyndsay Gordon, and Eric Schneider. All panelists and the audience were asked to address the following questions:
• Is the second law relevant when we trap single ions, prepare, manipulate and measure single photons, excite single atoms, induce spin echoes, measure quantum entanglement? Is it possible or impossible to build Maxwell demons that beat the second law by exploiting fluctuations?
• Is the maximum entropy generation principle capable of unifying nonequilibrium molecular dynamics, chemical kinetics, nonlocal and nonequilibrium rheology, biological systems, natural structures, and cosmological evolution?
• Research in quantum computation and quantum information has raised many fundamental questions about the foundations of quantum theory. Are any of these questions related to the second law?
Discovering the Gas Laws and Understanding the Kinetic Theory of Gases with an iPad App
ERIC Educational Resources Information Center
Davies, Gary B.
2017-01-01
Carrying out classroom experiments that demonstrate Boyle's law and Gay-Lussac's law can be challenging. Even if we are able to conduct classroom experiments using pressure gauges and syringes, the results of these experiments do little to illuminate the kinetic theory of gases. However, molecular dynamics simulations that run on computers allow…
2010-01-01
Background The finite volume solver Fluent (Lebanon, NH, USA) is a computational fluid dynamics software employed to analyse biological mass-transport in the vasculature. A principal consideration for computational modelling of blood-side mass-transport is convection-diffusion discretisation scheme selection. Due to numerous discretisation schemes available when developing a mass-transport numerical model, the results obtained should either be validated against benchmark theoretical solutions or experimentally obtained results. Methods An idealised aneurysm model was selected for the experimental and computational mass-transport analysis of species concentration due to its well-defined recirculation region within the aneurysmal sac, allowing species concentration to vary slowly with time. The experimental results were obtained from fluid samples extracted from a glass aneurysm model, using the direct spectrophotometric concentration measurement technique. The computational analysis was conducted using the four convection-diffusion discretisation schemes available to the Fluent user, including the First-Order Upwind, the Power Law, the Second-Order Upwind and the Quadratic Upstream Interpolation for Convective Kinetics (QUICK) schemes. The fluid has a diffusivity of 3.125 × 10-10 m2/s in water, resulting in a Peclet number of 2,560,000, indicating strongly convection-dominated flow. Results The discretisation scheme applied to the solution of the convection-diffusion equation, for blood-side mass-transport within the vasculature, has a significant influence on the resultant species concentration field. The First-Order Upwind and the Power Law schemes produce similar results. The Second-Order Upwind and QUICK schemes also correlate well but differ considerably from the concentration contour plots of the First-Order Upwind and Power Law schemes. The computational results were then compared to the experimental findings.
Average errors of 140% and 116% were demonstrated between the experimental results and those obtained from the First-Order Upwind and Power Law schemes, respectively. However, both the Second-Order Upwind and QUICK schemes accurately predict species concentration under high-Peclet-number, convection-dominated flow conditions. Conclusion Convection-diffusion discretisation scheme selection has a strong influence on resultant species concentration fields, as determined by CFD. Furthermore, either the Second-Order Upwind or QUICK discretisation scheme should be implemented when numerically modelling convection-dominated mass-transport conditions. Finally, care should be taken not to utilize computationally inexpensive discretisation schemes at the cost of accuracy in the resultant species concentration. PMID:20642816
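The Peclet number quoted above follows directly from its definition Pe = uL/D. A quick consistency check, assuming a characteristic velocity and length scale chosen purely for illustration (these two values are not stated in the abstract, only the diffusivity and the resulting Pe are):

```python
D = 3.125e-10   # diffusivity in water (m^2/s), as given in the study
u = 0.08        # assumed characteristic velocity (m/s) -- illustrative
L = 0.01        # assumed characteristic length (m) -- illustrative

Pe = u * L / D  # Peclet number: ratio of convective to diffusive transport
```

With these values Pe = 2,560,000, matching the abstract; any (u, L) pair with uL = 8 × 10⁻⁴ m²/s gives the same number. Pe ≫ 1 is what makes the problem convection-dominated and the discretisation scheme choice so consequential.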
A guidance law for hypersonic descent to a point
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eisler, G.R.; Hull, D.G.
1992-05-01
A neighboring extremal control problem is formulated for a hypersonic glider to execute a maximum-terminal-velocity descent to a stationary target. The resulting two-part feedback control scheme first solves a nonlinear algebraic problem to generate a nominal trajectory to the target altitude. Second, a neighboring optimal path computation about the nominal provides the lift and side-force perturbations necessary to achieve the target downrange and crossrange. On-line feedback simulations of the proposed scheme and a form of proportional navigation are compared with an off-line parameter optimization method. The neighboring optimal terminal velocity compares very well with the parameter optimization solution and is far superior to proportional navigation. 8 refs.
NASA Astrophysics Data System (ADS)
Garrett, T. J.; Alva, S.; Glenn, I. B.; Krueger, S. K.
2015-12-01
There are two possible approaches for parameterizing sub-grid cloud dynamics in a coarser-grid model. The most common is to use a fine-scale model to explicitly resolve the mechanistic details of clouds to the best extent possible, and then to parameterize the resulting cloud state for the coarser grid. A second is to invoke physical intuition and some very general theoretical principles from equilibrium statistical mechanics. This approach avoids any requirement to resolve time-dependent processes in order to arrive at a suitable solution. The second approach is widely used elsewhere in the atmospheric sciences: for example, the Planck function for blackbody radiation is derived this way, where no mention is made of the complexities of modeling a large ensemble of time-dependent radiation-dipole interactions in order to obtain the "grid-scale" spectrum of thermal emission by the blackbody as a whole. We find that this statistical approach may be equally suitable for modeling convective clouds. Specifically, we make the physical argument that the dissipation of buoyant energy in convective clouds is done through mixing across a cloud perimeter. From thermodynamic reasoning, one might then anticipate that vertically stacked isentropic surfaces are characterized by a power law d ln N/d ln P = −1, where N(P) is the number of clouds of perimeter P. In a Giga-LES simulation of convective clouds within a 100 km square domain we find that such a power law does appear to characterize simulated cloud perimeters along isentropes, provided a sufficient cloudy sample. The suggestion is that it may be possible to parameterize certain important aspects of cloud state without appealing to computationally expensive dynamic simulations.
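One way to test a predicted slope like d ln N/d ln P = −1 against a sample of perimeters is a maximum-likelihood exponent fit to a power-law tail. The sketch below uses synthetic draws in place of simulated cloud perimeters, and reads the slope as counts per logarithmic bin scaling as P^−1, i.e. a Pareto tail exponent α = 1; this is a generic estimator, not the authors' procedure:

```python
import math
import random

random.seed(42)
alpha_true = 1.0   # assumed tail exponent: counts per log bin then scale as P^-1
n = 100_000

# Inverse-transform sampling from the Pareto pdf f(P) = alpha * P^-(alpha+1), P >= 1
perims = [random.random() ** (-1.0 / alpha_true) for _ in range(n)]

# Maximum-likelihood (Hill) estimator with P_min = 1
alpha_hat = n / sum(math.log(p) for p in perims)
```

With 10^5 samples the estimator recovers the exponent to within a few tenths of a percent; applied to binned cloud-perimeter counts, a fitted exponent near 1 would be consistent with the −1 slope the statistical-mechanics argument predicts.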
Modeling Mendel's Laws on Inheritance in Computational Biology and Medical Sciences
ERIC Educational Resources Information Center
Singh, Gurmukh; Siddiqui, Khalid; Singh, Mankiran; Singh, Satpal
2011-01-01
The current research article is based on a simple and practical way of employing the computational power of widely available, versatile software MS Excel 2007 to perform interactive computer simulations for undergraduate/graduate students in biology, biochemistry, biophysics, microbiology, medicine in college and university classroom setting. To…
Reversible simulation of irreversible computation
NASA Astrophysics Data System (ADS)
Li, Ming; Tromp, John; Vitányi, Paul
1998-09-01
Computer computations are generally irreversible while the laws of physics are reversible. This mismatch is penalized by, among other things, the generation of excess thermal entropy in the computation. Computing performance has improved to the extent that efficiency degrades unless all algorithms are executed reversibly, for example by a universal reversible simulation of irreversible computations. All known reversible simulations are either space hungry or time hungry. The leanest method was proposed by Bennett and can be analyzed using a simple ‘reversible’ pebble game. The reachable reversible-simulation instantaneous descriptions (pebble configurations) of such pebble games are characterized completely. As a corollary we obtain the reversible simulation by Bennett and, moreover, show that it is a space-optimal pebble game. We also introduce irreversible steps and give a theorem on the tradeoff between the number of allowed irreversible steps and the memory gain in the pebble game. In this resource-bounded setting the limited erasing needs to be performed at precise instants during the simulation. The reversible simulation can be modified so that it is applicable also when the simulated computation time is unknown.
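Bennett's strategy on the reversible pebble game mentioned above can be sketched as a small simulator. The rule is that the pebble at position i + 1 may be placed or removed only while position i is pebbled (position 0, the input, stays pebbled); the recursive "toggle" trades a logarithmic number of pebbles (space) for 3^log₂(n) moves (time). This is an illustrative reconstruction under those assumptions, not the paper's own analysis:

```python
pebbles = {0}      # position 0 (the input) is permanently pebbled
moves = 0
max_pebbles = 1

def toggle(i, n):
    """Place-or-remove the pebble at i + n, given a pebble held at i."""
    global moves, max_pebbles
    if n == 1:
        assert i in pebbles                    # legality: predecessor pebbled
        pebbles.symmetric_difference_update({i + 1})
        moves += 1
        max_pebbles = max(max_pebbles, len(pebbles))
    else:
        m = n // 2
        toggle(i, m)          # reach the midpoint checkpoint
        toggle(i + m, n - m)  # reach the end from the midpoint
        toggle(i, m)          # reversibly erase the midpoint checkpoint

toggle(0, 8)   # simulate a computation of 8 segments
# Final state: only the input (0) and the result (8) remain pebbled.
```

For n = 2^k segments the simulator uses k + 2 simultaneous pebbles (including the input) and 3^k moves, which is exactly the space-lean, time-hungry tradeoff the abstract attributes to Bennett's method.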
Development of a Robust and Efficient Parallel Solver for Unsteady Turbomachinery Flows
NASA Technical Reports Server (NTRS)
West, Jeff; Wright, Jeffrey; Thakur, Siddharth; Luke, Ed; Grinstead, Nathan
2012-01-01
The traditional design and analysis practice for advanced propulsion systems relies heavily on expensive full-scale prototype development and testing. Over the past decade, use of high-fidelity analysis and design tools such as CFD early in the product development cycle has been identified as one way to alleviate testing costs and to develop these devices better, faster and cheaper. In the design of advanced propulsion systems, CFD plays a major role in defining the required performance over the entire flight regime, as well as in testing the sensitivity of the design to the different modes of operation. Increased emphasis is being placed on developing and applying CFD models to simulate the flow field environments and performance of advanced propulsion systems. This necessitates the development of next generation computational tools which can be used effectively and reliably in a design environment. The turbomachinery simulation capability presented here is being developed in a computational tool called Loci-STREAM [1]. It integrates proven numerical methods for generalized grids and state-of-the-art physical models in a novel rule-based programming framework called Loci [2] which allows: (a) seamless integration of multidisciplinary physics in a unified manner, and (b) automatic handling of massively parallel computing. The objective is to be able to routinely simulate problems involving complex geometries requiring large unstructured grids and complex multidisciplinary physics. An immediate application of interest is simulation of unsteady flows in rocket turbopumps, particularly in cryogenic liquid rocket engines. 
The key components of the overall methodology presented in this paper are the following: (a) high fidelity unsteady simulation capability based on Detached Eddy Simulation (DES) in conjunction with second-order temporal discretization, (b) compliance with Geometric Conservation Law (GCL) in order to maintain conservative property on moving meshes for second-order time-stepping scheme, (c) a novel cloud-of-points interpolation method (based on a fast parallel kd-tree search algorithm) for interfaces between turbomachinery components in relative motion which is demonstrated to be highly scalable, and (d) demonstrated accuracy and parallel scalability on large grids (approx 250 million cells) in full turbomachinery geometries.
Fast-response free-running dc-to-dc converter employing a state-trajectory control law
NASA Technical Reports Server (NTRS)
Huffman, S. D.; Burns, W. W., III; Wilson, T. G.; Owen, H. A., Jr.
1977-01-01
A recently proposed state-trajectory control law for a family of energy-storage dc-to-dc converters has been implemented for the voltage step-up configuration. Two methods of realization are discussed; one employs a digital processor and the other uses analog computational circuits. Performance characteristics of experimental voltage step-up converters operating under the control of each of these implementations are reported and compared to theoretical predictions and computer simulations.
A Computing based Simulation Model for Missile Guidance in Planar Domain
NASA Astrophysics Data System (ADS)
Chauhan, Deepak Singh; Sharma, Rajiv
2017-10-01
This paper presents the design, development and implementation of a computing-based simulation model for interceptor missile guidance for countering an anti-ship missile through a navigation law. It investigates the possibility of deriving, testing and implementing an efficient variation of the PN and RPN laws. A new guidance law [the true combined proportional navigation (TCPN) guidance law] that combines the strengths of both PN and RPN and has a superior capturability in a specified zone of interest is presented. The proportional navigation (PN) guidance law is modeled in a two-dimensional planar engagement model and its performance is studied with respect to a varying navigation ratio (N) that depends on the heading error (HE) and the missile lead angle. The advantage of a varying navigation ratio is: if N' > 2, Vc > 0 and Vm > 0, then the sign of the navigation ratio is determined by cos(ε + HE); for cos(ε + HE) ≥ 0 and N > 0 the formulation reduces to that of PN, and for cos(ε + HE) < 0 and N < 0 it reduces to that of RPN. Hence, depending on the value of cos(ε + HE), the presented guidance strategy switches between the PN navigation ratio and the RPN navigation ratio. The theoretical framework of the TCPN guidance law is implemented in a two-dimensional setting of parameters. An important feature of TCPN is the HE, and the aim is to achieve lower values of the heading error in simulation. The results presented in this paper show the efficiency of the simulation model and also establish that TCPN can be an accurate guidance strategy that has its own range of application and suitability.
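A planar pure-PN engagement of the kind described can be sketched in a few lines: the commanded lateral acceleration is a = N·Vc·(dλ/dt), applied perpendicular to the missile velocity. All initial conditions and gains below are illustrative, and this implements only the plain PN branch, not the authors' TCPN switching logic:

```python
import math

dt, N = 0.005, 4.0
mx, my, vm = 0.0, 0.0, 600.0    # missile position (m) and speed (m/s)
tx, ty = 10_000.0, 5_000.0      # target position (m)
tvx, tvy = -200.0, 0.0          # target velocity (m/s), non-maneuvering
hdg = math.atan2(ty, tx)        # launch roughly along the initial line of sight

min_range = float("inf")
for _ in range(60_000):
    rx, ry = tx - mx, ty - my
    R = math.hypot(rx, ry)
    min_range = min(min_range, R)
    if R < 1.0:
        break
    mvx, mvy = vm * math.cos(hdg), vm * math.sin(hdg)
    vrx, vry = tvx - mvx, tvy - mvy
    lam_dot = (rx * vry - ry * vrx) / (R * R)  # line-of-sight rate (rad/s)
    Vc = -(rx * vrx + ry * vry) / R            # closing velocity (m/s)
    a_cmd = N * Vc * lam_dot                   # PN acceleration command
    hdg += (a_cmd / vm) * dt                   # turn rate = lateral accel / speed
    mx += mvx * dt; my += mvy * dt
    tx += tvx * dt; ty += tvy * dt
```

Against a non-maneuvering target the commanded turn drives the LOS rate toward zero and the trajectory onto a collision course, so the recorded miss distance shrinks to the order of one integration step.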
Thermodynamic cost of computation, algorithmic complexity and the information metric
NASA Technical Reports Server (NTRS)
Zurek, W. H.
1989-01-01
Algorithmic complexity is discussed as a computational counterpart to the second law of thermodynamics. It is shown that algorithmic complexity, which is a measure of randomness, sets limits on the thermodynamic cost of computations and casts a new light on the limitations of Maxwell's demon. Algorithmic complexity can also be used to define distance between binary strings.
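The thermodynamic cost of computation that algorithmic complexity bounds is usually anchored to Landauer's principle: erasing one bit dissipates at least kT ln 2 of energy. A back-of-envelope check (the temperature and data size are chosen for illustration; the principle itself, not this arithmetic, is what the abstract invokes):

```python
import math

k_B = 1.380649e-23   # Boltzmann constant (J/K), exact SI value
T = 300.0            # room temperature (K) -- illustrative

e_bit = k_B * T * math.log(2)   # minimum dissipation per erased bit (J)
e_gigabyte = e_bit * 8e9        # erasing 1 GB (8e9 bits) at the Landauer limit
```

At 300 K the bound is about 2.87 × 10⁻²¹ J per bit, so even erasing a gigabyte costs only ~2 × 10⁻¹¹ J at the limit; real hardware dissipates many orders of magnitude more, which is why these limits matter as principle rather than engineering constraint.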
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, Juan, E-mail: cheng_juan@iapcm.ac.cn; Shu, Chi-Wang, E-mail: shu@dam.brown.edu
In applications such as astrophysics and inertial confinement fusion, there are many three-dimensional cylindrical-symmetric multi-material problems which are usually simulated by Lagrangian schemes in the two-dimensional cylindrical coordinates. For this type of simulation, a critical issue for the schemes is to keep spherical symmetry in the cylindrical coordinate system if the original physical problem has this symmetry. In the past decades, several Lagrangian schemes with such symmetry property have been developed, but all of them are only first order accurate. In this paper, we develop a second order cell-centered Lagrangian scheme for solving compressible Euler equations in cylindrical coordinates, based on the control volume discretizations, which is designed to have uniformly second order accuracy and capability to preserve one-dimensional spherical symmetry in a two-dimensional cylindrical geometry when computed on an equal-angle-zoned initial grid. The scheme maintains several good properties such as conservation for mass, momentum and total energy, and the geometric conservation law. Several two-dimensional numerical examples in cylindrical coordinates are presented to demonstrate the good performance of the scheme in terms of accuracy, symmetry, non-oscillation and robustness. The advantage of higher order accuracy is demonstrated in these examples.
NASA Astrophysics Data System (ADS)
Johnson, Kristina Mary
In 1973 the computerized tomography (CT) scanner revolutionized medical imaging. This machine can isolate and display, in two-dimensional cross-sections, internal lesions and organs previously impossible to visualize. The possibility of three-dimensional imaging, however, is not yet exploited by present tomographic systems. Using multiple-exposure holography, three-dimensional displays can be synthesized from two-dimensional CT cross-sections. A multiple-exposure hologram is an incoherent superposition of many individual holograms. Intuitively, it is expected that holograms recorded with equal energy will reconstruct images with equal brightness. It is found, however, that holograms recorded first are brighter than holograms recorded later in the superposition. This phenomenon is called Holographic Reciprocity Law Failure (HRLF). Computer simulations of latent image formation in multiple-exposure holography are one of the methods used to investigate HRLF. These simulations indicate that it is the time between individual exposures in the multiple-exposure hologram that is responsible for HRLF. This physical parameter introduces an asymmetry into the latent image formation process that favors the signal of previously recorded holograms over holograms recorded later in the superposition. The origin of this asymmetry lies in the dynamics of latent image formation, in particular in the decay of single-atom latent image specks, which have lifetimes that are short compared to typical times between exposures. An analytical model is developed for a double-exposure hologram that predicts a decrease in the brightness of the second exposure relative to the first as the time between exposures increases. These results are consistent with the computer simulations.
Experiments investigating the influence of this parameter on the diffraction efficiency of reconstructed images in a double exposure hologram are also found to be consistent with the computer simulations and analytical results. From this information, two techniques are presented that correct for HRLF, and succeed in reconstructing multiple holographic images of CT cross-sections with equal brightness. The multiple multiple-exposure hologram is a new hologram that increases the number of equally bright images that can be superimposed on one photographic plate.
Performance of low-rank QR approximation of the finite element Biot-Savart law
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, D A; Fasenfest, B J
2006-01-12
We are concerned with the computation of magnetic fields from known electric currents in the finite element setting. In finite element eddy current simulations it is necessary to prescribe the magnetic field (or potential, depending upon the formulation) on the conductor boundary. In situations where the magnetic field is due to a distributed current density, the Biot-Savart law can be used, eliminating the need to mesh the nonconducting regions. Computation of the Biot-Savart law can be significantly accelerated using a low-rank QR approximation. We review the low-rank QR method and report performance on selected problems.
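The low-rank compression idea can be illustrated on a toy interaction matrix. The paper uses a pivoted QR factorization; the sketch below uses a truncated SVD instead, which demonstrates the same principle (a 1/r²-type kernel between well-separated source and field point clouds has low numerical rank). All sizes and tolerances here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
src = rng.random((200, 3))                             # current-carrying points
tgt = rng.random((100, 3)) + np.array([10.0, 0.0, 0.0])  # well-separated field points

# Distance factor of a Biot-Savart-type interaction: |K_ij| ~ 1 / r_ij^2
r = np.linalg.norm(tgt[:, None, :] - src[None, :, :], axis=2)
K = 1.0 / r**2

# Truncate the spectrum at a relative tolerance to get a low-rank factorization
U, s, Vt = np.linalg.svd(K, full_matrices=False)
k = int(np.sum(s > 1e-6 * s[0]))
K_lr = (U[:, :k] * s[:k]) @ Vt[:k]

rel_err = np.linalg.norm(K - K_lr) / np.linalg.norm(K)
# Separation makes the kernel numerically low-rank, so storage and
# matrix-vector products shrink accordingly
assert k < 80 and rel_err < 1e-4
```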
NASA Technical Reports Server (NTRS)
Claus, Steven J.; Loos, Alfred C.
1989-01-01
RTM is a FORTRAN 77 computer code which simulates the infiltration of textile reinforcements and the kinetics of thermosetting polymer resin systems. The computer code is based on the process simulation model developed by the author. The compaction of dry, woven textile composites is simulated to describe the increase in fiber volume fraction with increasing compaction pressure. Infiltration is assumed to follow Darcy's law for Newtonian viscous fluids. The chemical changes which occur in the resin during processing are simulated with a thermo-kinetics model. The computer code is discussed on the basis of the required input data, output files and some comments on how to interpret the results. An example problem is solved and a complete listing is included.
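For the infiltration step, Darcy's law gives a closed-form estimate of the flow-front position under constant injection pressure. A sketch under textbook assumptions (1-D flow, constant permeability and viscosity); the symbols and values are illustrative, not the RTM code's exact model:

```python
import math

def infiltration_front(K, dP, phi, mu, t):
    """1-D resin front position under constant injection pressure.

    From Darcy's law u = -(K / mu) dP/dx integrated for a moving front:
        x(t) = sqrt(2 * K * dP * t / (phi * mu))
    A textbook RTM estimate; not the RTM code's exact model.
    """
    return math.sqrt(2.0 * K * dP * t / (phi * mu))

# Illustrative values: permeability 1e-10 m^2, 1 bar driving pressure,
# fiber-bed porosity 0.5, resin viscosity 0.1 Pa*s
x60 = infiltration_front(1e-10, 1e5, 0.5, 0.1, 60.0)
x240 = infiltration_front(1e-10, 1e5, 0.5, 0.1, 240.0)
# The front advances like sqrt(t): 4x the time gives 2x the distance
assert abs(x240 / x60 - 2.0) < 1e-12
```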
ERIC Educational Resources Information Center
Ticcioni, Daniel A.
1981-01-01
A "Civil Litigation Exercise" (a litigation simulation) conducted during the second semester of a first year procedure course at the New England School of Law is described. The purpose of the exercise is to simulate the real world of adversary pleading and practice. The Civil Procedure Litigation exercises are appended. (MLW)
2010-06-01
…speed doubles approximately every 18 months; Nick Bostrom published a study in 1998 that equated computer processing power to that of the human… bits, this equates to 10^17 operations per second, or 10^11 million instructions per second (MIPS), for human brain performance (Bostrom, 1998). …estimates based on Moore's Law put realistic, affordable computer processing power equal to that of humans somewhere in the 2020–2025 timeframe (Bostrom…
A Novel Approach for Modeling Chemical Reaction in Generalized Fluid System Simulation Program
NASA Technical Reports Server (NTRS)
Sozen, Mehmet; Majumdar, Alok
2002-01-01
The Generalized Fluid System Simulation Program (GFSSP) is a computer code developed at NASA Marshall Space Flight Center for analyzing steady-state and transient flow rates, pressures, temperatures, and concentrations in a complex flow network. The code, which performs system-level simulation, can handle compressible and incompressible flows as well as phase change and mixture thermodynamics. The thermodynamic and thermophysical property programs GASP, WASP and GASPAK provide the necessary data for fluids such as helium, methane, neon, nitrogen, carbon monoxide, oxygen, argon, carbon dioxide, fluorine, hydrogen, water, a hydrogen, isobutane, butane, deuterium, ethane, ethylene, hydrogen sulfide, krypton, propane, xenon, several refrigerants, nitrogen trifluoride and ammonia. The program, which was developed out of the need for an easy-to-use system-level simulation tool for complex flow networks, has been used for the following purposes, to name a few: Space Shuttle Main Engine (SSME) High Pressure Oxidizer Turbopump Secondary Flow Circuits, Axial Thrust Balance of the Fastrac Engine Turbopump, Pressurized Propellant Feed System for the Propulsion Test Article at Stennis Space Center, X-34 Main Propulsion System, X-33 Reaction Control System and Thermal Protection System, and International Space Station Environmental Control and Life Support System design. There has been increasing demand for implementing a combustion simulation capability in GFSSP in order to extend its system-level simulation of a liquid rocket propulsion system from the propellant tanks up to the thruster nozzle, for spacecraft as well as launch vehicles. The present work was undertaken to address this need. The chemical equilibrium equations derived from the second law of thermodynamics and the energy conservation equation derived from the first law of thermodynamics are solved simultaneously by a Newton-Raphson method.
The numerical scheme was implemented as a User Subroutine in GFSSP.
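The Newton-Raphson iteration the abstract mentions can be illustrated on a single scalar equilibrium relation (GFSSP solves the coupled equilibrium and energy equations simultaneously; the dissociation example and values below are hypothetical):

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Scalar Newton-Raphson iteration (the same root-finding idea,
    applied in GFSSP to a coupled system of equations)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton iteration did not converge")

# Hypothetical dissociation A2 <-> 2A at fixed equilibrium constant K:
# the reaction extent x in (0, 1) solves x**2 / (1 - x) = K
K = 0.8
f = lambda x: x * x / (1.0 - x) - K
df = lambda x: x * (2.0 - x) / (1.0 - x) ** 2   # derivative of the residual
x = newton(f, df, 0.5)
assert abs(f(x)) < 1e-10
```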
CPU SIM: A Computer Simulator for Use in an Introductory Computer Organization-Architecture Class.
ERIC Educational Resources Information Center
Skrein, Dale
1994-01-01
CPU SIM, an interactive low-level computer simulation package that runs on the Macintosh computer, is described. The program is designed for instructional use in the first or second year of undergraduate computer science, to teach various features of typical computer organization through hands-on exercises. (MSE)
NASA Astrophysics Data System (ADS)
Altschuler, Bruce R.; Monson, Keith L.
1998-03-01
Representation of crime scenes as virtual reality 3D computer displays promises to become a useful and important tool for law enforcement evaluation and analysis, forensic identification and pathological study and archival presentation during court proceedings. Use of these methods for assessment of evidentiary materials demands complete accuracy of reproduction of the original scene, both in data collection and in its eventual virtual reality representation. The recording of spatially accurate information as soon as possible after first arrival of law enforcement personnel is advantageous for unstable or hazardous crime scenes and reduces the possibility that either inadvertent measurement error or deliberate falsification may occur or be alleged concerning processing of a scene. Detailed measurements and multimedia archiving of critical surface topographical details in a calibrated, uniform, consistent and standardized quantitative 3D coordinate method are needed. These methods would afford professional personnel in initial contact with a crime scene the means for remote, non-contacting, immediate, thorough and unequivocal documentation of the contents of the scene. Measurements of the relative and absolute global positions of objects and victims, and of their dispositions within the scene before relocation and detailed examination, could be made. Resolution must be sufficient to map both small and large objects. Equipment must be able to map regions at varied resolution as collected from different perspectives. Progress is presented in devising methods for collecting and archiving 3D spatial numerical data from crime scenes, sufficient for law enforcement needs, by remote laser structured light and video imagery. Two types of simulation studies were done. One study evaluated the potential of 3D topographic mapping and 3D telepresence using a robotic platform for explosive ordnance disassembly.
The second study involved using the laser mapping system on a fixed optical bench with simulated crime scene models of the people and furniture to assess feasibility, requirements and utility of such a system for crime scene documentation and analysis.
Lu, Liqiang; Gopalan, Balaji; Benyahia, Sofiane
2017-06-21
Several discrete particle methods exist in the open literature to simulate fluidized bed systems, such as the discrete element method (DEM), time driven hard sphere (TDHS), coarse-grained particle method (CGPM), coarse-grained hard sphere (CGHS), and multi-phase particle-in-cell (MP-PIC). These approaches usually solve the fluid phase in a Eulerian fixed frame of reference and the particle phase using the Lagrangian method. The first difference between these models lies in tracking either real particles or lumped parcels. The second difference is in the treatment of particle-particle interactions: by calculating collision forces (DEM and CGPM), using momentum conservation laws (TDHS and CGHS), or based on a particle stress model (MP-PIC). These major model differences lead to a wide range of accuracy and computation speed. However, these models have never been compared directly using the same experimental dataset. In this research, a small-scale fluidized bed is simulated with these methods using the same open-source code MFIX. The results indicate that modeling the particle-particle collision by TDHS increases the computation speed while maintaining good accuracy. Also, lumping a few particles into a parcel increases the computation speed with little loss in accuracy. However, modeling particle-particle interactions with solids stress leads to a large loss in accuracy with only a small increase in computation speed. The MP-PIC method predicts an unphysical particle-particle overlap, which results in incorrect voidage distribution and incorrect overall bed hydrodynamics. Based on this study, we recommend using the CGHS method for fluidized bed simulations due to its computational speed that rivals that of MP-PIC while maintaining a much better accuracy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rana, A.; Ravichandran, R.; Park, J. H.
The second-order non-Navier-Fourier constitutive laws, expressed in a compact algebraic mathematical form, were validated for the force-driven Poiseuille gas flow by the deterministic atomic-level microscopic molecular dynamics (MD). Emphasis is placed on how completely different methods (a second-order continuum macroscopic theory based on the kinetic Boltzmann equation, the probabilistic mesoscopic direct simulation Monte Carlo, and, in particular, the deterministic microscopic MD) describe the non-classical physics, and whether the second-order non-Navier-Fourier constitutive laws derived from the continuum theory can be validated using MD solutions for the viscous stress and heat flux calculated directly from the molecular data using the statistical method. Peculiar behaviors (non-uniform tangent pressure profile and exotic instantaneous heat conduction from cold to hot [R. S. Myong, "A full analytical solution for the force-driven compressible Poiseuille gas flow based on a nonlinear coupled constitutive relation," Phys. Fluids 23(1), 012002 (2011)]) were re-examined using atomic-level MD results. It was shown that all three results were in strong qualitative agreement with each other, implying that the second-order non-Navier-Fourier laws are indeed physically legitimate in the transition regime. Furthermore, it was shown that the non-Navier-Fourier constitutive laws are essential for describing non-zero normal stress and tangential heat flux, while the classical and non-classical laws remain similar for shear stress and normal heat flux.
The investigation of tethered satellite system dynamics
NASA Technical Reports Server (NTRS)
Lorenzini, E.
1985-01-01
The tether control law to retrieve the satellite was modified in order to have a smooth retrieval trajectory of the satellite that minimizes the thruster activation. The satellite thrusters were added to the rotational dynamics computer code and a preliminary control logic was implemented to simulate them during the retrieval maneuver. The high resolution computer code for modelling the three dimensional dynamics of untensioned tether, SLACK3, was made fully operative and a set of computer simulations of possible tether breakages was run. The distribution of the electric field around an electrodynamic tether in vacuo severed at some length from the shuttle was computed with a three dimensional electrodynamic computer code.
Cournane, S; Sheehy, N; Cooke, J
2014-06-01
Benford's law is an empirical observation which predicts the expected frequency of digits in naturally occurring datasets spanning multiple orders of magnitude; the law has been most successfully applied as an audit tool in accountancy. This study investigated the sensitivity of the technique in identifying system output changes, using simulated changes in interventional radiology Dose-Area-Product (DAP) data, with any deviations from Benford's distribution identified using z-statistics. The radiation output of interventional radiology X-ray equipment is monitored annually during quality control testing; however, for a considerable portion of the year an increased output of the system, potentially caused by engineering adjustments or spontaneous system faults, may go unnoticed, leading to a potential increase in the radiation dose to patients. In normal operation, recorded examination radiation outputs vary over multiple orders of magnitude, rendering normal statistics ineffective for detecting systematic changes in the output. In this work, the annual DAP datasets complied with Benford's first-order law for first, second and combinations of the first and second digits. Further, a continuous 'rolling' second-order technique was devised for trending simulated changes over shorter timescales. This distribution analysis, the first employment of the method for radiation output trending, detected significant changes simulated on the original data, proving the technique useful in this case. The potential is demonstrated for implementation of this novel analysis for monitoring and identifying change in suitable datasets for the purpose of system process control. Copyright © 2013 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
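A minimal sketch of a first-digit Benford test with per-digit z-statistics (a standard formulation; the study's exact statistic and its second-order and rolling variants may differ). The synthetic samples below are illustrative, not DAP data:

```python
import math
import random
from collections import Counter

def benford_z(data):
    """Per-digit z-statistics for a first-digit Benford test.

    Compares observed first-digit proportions against Benford's
    expected log10(1 + 1/d), scaled by the binomial standard error.
    Assumes positive values.
    """
    first = [int(x / 10 ** math.floor(math.log10(x))) for x in data]
    n = len(first)
    counts = Counter(first)
    zs = {}
    for d in range(1, 10):
        p = math.log10(1 + 1 / d)             # Benford expected proportion
        obs = counts.get(d, 0) / n
        se = math.sqrt(p * (1 - p) / n)
        zs[d] = (obs - p) / se
    return zs

rng = random.Random(42)
# 10**U with U uniform follows Benford's law; uniform magnitudes do not
benford_sample = [10 ** (5 * rng.random()) for _ in range(5000)]
uniform_sample = [1 + 9 * rng.random() for _ in range(5000)]

z_conforming = benford_z(benford_sample)
z_uniform = benford_z(uniform_sample)
assert max(abs(z) for z in z_conforming.values()) < 5
assert abs(z_uniform[1]) > 5    # digit 1 is strongly under-represented
```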
T.Z. Ye; K.J.S. Jayawickrama; G.R. Johnson
2006-01-01
Using computer simulation, we evaluated the impact of using first-generation information to increase selection efficiency in a second-generation breeding program. Selection efficiency was compared in terms of increase in rank correlation between estimated and true breeding values (i.e., ranking accuracy), reduction in coefficient of variation of correlation...
Application of real-time engine simulations to the development of propulsion system controls
NASA Technical Reports Server (NTRS)
Szuch, J. R.
1975-01-01
The use of real-time computer simulations of engines in the development of digital controls for turbojet and turbofan engines is presented. The engine simulation provides a test-bed for evaluating new control laws and for checking and debugging control software and hardware prior to engine testing. The development and use of real-time, hybrid computer simulations of the Pratt and Whitney TF30-P-3 and F100-PW-100 augmented turbofans are described in support of a number of controls research programs at the Lewis Research Center. The role of engine simulations in solving the propulsion system integration problem is also discussed.
NASA Astrophysics Data System (ADS)
Mudunuru, M. K.; Karra, S.; Vesselinov, V. V.
2017-12-01
The efficiency of many hydrogeological applications such as reactive transport and contaminant remediation depends largely on the macroscopic mixing occurring in the aquifer. In remediation activities, it is fundamental to enhance and control this mixing through the structure of the flow field, which is shaped by groundwater pumping/extraction and by the heterogeneity and anisotropy of the flow medium. However, the relative importance of these hydrogeological parameters for the mixing process is not well studied, partly because understanding and quantifying mixing requires multiple runs of high-fidelity numerical simulations over a range of subsurface model inputs. Typically, high-fidelity simulations of existing subsurface models take hours to complete on several thousands of processors, so they may not be feasible for studying the importance and impact of model inputs on mixing. Hence, there is a pressing need for computationally efficient models that accurately predict the desired quantities of interest (QoIs) for remediation and reactive-transport applications. An attractive way to construct computationally efficient models is reduced-order modeling using machine learning; such approaches can substantially improve our capability to model and predict the remediation process. Reduced-Order Models (ROMs) are similar to analytical solutions or lookup tables, but are constructed differently. Here, we present a physics-informed machine-learning framework to construct ROMs based on high-fidelity numerical simulations. First, random forests, the F-test, and mutual information are used to evaluate the importance of model inputs. Second, support vector machines (SVMs) are used to construct ROMs based on these inputs. These ROMs are then used to understand mixing under perturbed vortex flows. Finally, we construct scaling laws for important QoIs such as degree of mixing and product yield.
Scaling law parameters dependence on model inputs are evaluated using cluster analysis. We demonstrate application of the developed method for model analyses of reactive-transport and contaminant remediation at the Los Alamos National Laboratory (LANL) chromium contamination sites. The developed method is directly applicable for analyses of alternative site remediation scenarios.
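Fitting a scaling law for a quantity of interest typically amounts to a least-squares fit in log-log space. A sketch with synthetic data; the exponent, prefactor, and noise level are made-up values for illustration, not results from the study:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical QoI assumed to follow a power law QoI = a * x**b,
# observed with 1% multiplicative noise
x = np.logspace(0, 3, 50)                 # model-input magnitude
qoi = 2.5 * x**0.75 * np.exp(0.01 * rng.standard_normal(x.size))

# Least-squares fit in log-log space recovers the scaling-law parameters
b, log_a = np.polyfit(np.log(x), np.log(qoi), 1)
a = np.exp(log_a)
assert abs(b - 0.75) < 0.05 and abs(a - 2.5) < 0.25
```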
Impact Angle and Time Control Guidance Under Field-of-View Constraints and Maneuver Limits
NASA Astrophysics Data System (ADS)
Shim, Sang-Wook; Hong, Seong-Min; Moon, Gun-Hee; Tahk, Min-Jea
2018-04-01
This paper proposes a guidance law which considers the constraints of seeker field-of-view (FOV) as well as the requirements on impact angle and time. The proposed guidance law is designed for a constant speed missile against a stationary target. The guidance law consists of two terms of acceleration commands. The first one is to achieve zero-miss distance and the desired impact angle, while the second is to meet the desired impact time. To consider the limits of FOV and lateral maneuver capability, a varying-gain approach is applied on the second term. Reduction of realizable impact times due to these limits is then analyzed by finding the longest course among the feasible ones. The performance of the proposed guidance law is demonstrated by numerical simulation for various engagement conditions.
NASA Technical Reports Server (NTRS)
Yanosy, James L.
1988-01-01
Emulation/Simulation Computer Model (ESCM) computes the transient performance of a Space Station air revitalization subsystem with carbon dioxide removal provided by a solid amine water desorbed subsystem called SAWD. This manual describes the mathematical modeling and equations used in the ESCM. For the system as a whole and for each individual component, the fundamental physical and chemical laws which govern their operations are presented. Assumptions are stated, and when necessary, data is presented to support empirically developed relationships.
MARC calculations for the second WIPP structural benchmark problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morgan, H.S.
1981-05-01
This report describes calculations made with the MARC structural finite element code for the second WIPP structural benchmark problem. Specific aspects of problem implementation such as element choice, slip line modeling, creep law implementation, and thermal-mechanical coupling are discussed in detail. Also included are the computational results specified in the benchmark problem formulation.
Space-Plane Spreadsheet Program
NASA Technical Reports Server (NTRS)
Mackall, Dale
1993-01-01
Basic Hypersonic Data and Equations (HYPERDATA) spreadsheet computer program provides data gained from three analyses of performance of space plane. Equations used to perform analyses derived from Newton's second law of physics, derivation included. First analysis is parametric study of some basic factors affecting ability of space plane to reach orbit. Second includes calculation of thickness of spherical fuel tank. Third produces ratio between volume of fuel and total mass for each of various aircraft. HYPERDATA intended for use on Macintosh(R) series computers running Microsoft Excel 3.0.
Reconsidering Simulations in Science Education at a Distance: Features of Effective Use
ERIC Educational Resources Information Center
Blake, C.; Scanlon, E.
2007-01-01
This paper proposes a reconsideration of use of computer simulations in science education. We discuss three studies of the use of science simulations for undergraduate distance learning students. The first one, "The Driven Pendulum" simulation is a computer-based experiment on the behaviour of a pendulum. The second simulation, "Evolve" is…
NASA Astrophysics Data System (ADS)
Shenker, Orly R.
2004-09-01
In 1867, James Clerk Maxwell proposed a perpetuum mobile of the second kind, that is, a counter example for the Second Law of thermodynamics, which came to be known as "Maxwell's Demon." Unlike any other perpetual motion machine, this one escaped attempts by the best scientists and philosophers to show that the Second Law or its statistical mechanical counterparts are universal after all. "Maxwell's demon lives on. After more than 130 years of uncertain life and at least two pronouncements of death, this fanciful character seems more vibrant than ever." These words of Harvey Leff and Andrew Rex (1990), which open their introduction to Maxwell's Demon 2: Entropy, Classical and Quantum Information, Computing (hereafter MD2) are very true: the Demon is as challenging and as intriguing as ever, and forces us to think and rethink about the foundations of thermodynamics and of statistical mechanics.
High-Performance High-Order Simulation of Wave and Plasma Phenomena
NASA Astrophysics Data System (ADS)
Klockner, Andreas
This thesis presents results aiming to enhance and broaden the applicability of the discontinuous Galerkin ("DG") method in a variety of ways. DG was chosen as a foundation for this work because it yields high-order finite element discretizations with very favorable numerical properties for the treatment of hyperbolic conservation laws. In a first part, I examine progress that can be made on implementation aspects of DG. In adapting the method to mass-market massively parallel computation hardware in the form of graphics processors ("GPUs"), I obtain an increase in computation performance per unit of cost by more than an order of magnitude over conventional processor architectures. Key to this advance is a recipe that adapts DG to a variety of hardware through automated self-tuning. I discuss new parallel programming tools supporting GPU run-time code generation which are instrumental in the DG self-tuning process and contribute to its reaching application floating point throughput greater than 200 GFlops/s on a single GPU and greater than 3 TFlops/s on a 16-GPU cluster in simulations of electromagnetics problems in three dimensions. I further briefly discuss the solver infrastructure that makes this possible. In the second part of the thesis, I introduce a number of new numerical methods whose motivation is partly rooted in the opportunity created by GPU-DG: First, I construct and examine a novel GPU-capable shock detector, which, when used to control an artificial viscosity, helps stabilize DG computations in gas dynamics and a number of other fields. Second, I describe my pursuit of a method that allows the simulation of rarefied plasmas using a DG discretization of the electromagnetic field. Finally, I introduce new explicit multi-rate time integrators for ordinary differential equations with multiple time scales, with a focus on applicability to DG discretizations of time-dependent problems.
Supercritical entanglement in local systems: Counterexample to the area law for quantum matter.
Movassagh, Ramis; Shor, Peter W
2016-11-22
Quantum entanglement is the most surprising feature of quantum mechanics. Entanglement is simultaneously responsible for the difficulty of simulating quantum matter on a classical computer and the exponential speedups afforded by quantum computers. Ground states of quantum many-body systems typically satisfy an "area law": The amount of entanglement between a subsystem and the rest of the system is proportional to the area of the boundary. A system that obeys an area law has less entanglement and can be simulated more efficiently than a generic quantum state whose entanglement could be proportional to the total system's size. Moreover, an area law provides useful information about the low-energy physics of the system. It is widely believed that for physically reasonable quantum systems, the area law cannot be violated by more than a logarithmic factor in the system's size. We introduce a class of exactly solvable one-dimensional physical models which we can prove have exponentially more entanglement than suggested by the area law, and violate the area law by a square-root factor. This work suggests that simple quantum matter is richer and can provide much more quantum resources (i.e., entanglement) than expected. In addition to using recent advances in quantum information and condensed matter theory, we have drawn upon various branches of mathematics such as combinatorics of random walks, Brownian excursions, and fractional matching theory. We hope that the techniques developed herein may be useful for other problems in physics as well.
NASA Astrophysics Data System (ADS)
Juhui, Chen; Yanjia, Tang; Dan, Li; Pengfei, Xu; Huilin, Lu
2013-07-01
Flow behavior of gas and particles is predicted by the large eddy simulation of gas-second order moment of solid model (LES-SOM model) in the simulation of flow behavior in CFB. This study shows that the simulated solid volume fractions along height using a two-dimensional model are in agreement with experiments. The velocity, volume fraction and second-order moments of particles are computed. The second-order moments of clusters are calculated. The solid volume fraction, velocity and second order moments are compared at the three different model constants.
NASA Technical Reports Server (NTRS)
Bedrossian, Nazareth Sarkis
1987-01-01
The correspondence between robotic manipulators and single gimbal Control Moment Gyro (CMG) systems was exploited to aid in the understanding and design of single gimbal CMG steering laws. A test for null motion near a singular CMG configuration was derived which is able to distinguish between escapable and inescapable singular states. Detailed analysis of the Jacobian matrix null-space was performed and the results were used to develop and test a variety of single gimbal CMG steering laws. Computer simulations showed that all existing singularity avoidance methods are unable to avoid elliptic internal singularities. A new null motion algorithm using the Moore-Penrose pseudoinverse, however, was shown by simulation to avoid elliptic-type singularities under certain conditions. The SR-inverse, with appropriate null motion, was proposed as a general approach to singularity avoidance because of its ability to avoid singularities through limited introduction of torque error. Simulation results confirmed the superior performance of this method compared to the other available and proposed pseudoinverse-based steering laws.
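The two steering laws compared above can be sketched in a few lines: the Moore-Penrose pseudoinverse gives exact torque away from singularities, while the SR-inverse trades a small torque error for bounded gimbal rates near them. The 3x4 Jacobian and the SR gain k below are illustrative placeholders, not an actual CMG pyramid geometry.

```python
import numpy as np

def pseudoinverse_rates(J, tau):
    """Moore-Penrose steering: gimbal rates that produce torque tau exactly."""
    return np.linalg.pinv(J) @ tau

def sr_inverse_rates(J, tau, k=0.01):
    """Singularity-robust (SR) inverse: a small regularization keeps the
    rates bounded near singular configurations, at the cost of torque error."""
    return J.T @ np.linalg.solve(J @ J.T + k * np.eye(J.shape[0]), tau)

# Hypothetical Jacobian for a 4-CMG array (illustrative numbers only).
J = np.array([[1.0, 0.0, -1.0, 0.0],
              [0.0, 1.0, 0.0, -1.0],
              [0.5, 0.5, 0.5, 0.5]])
tau = np.array([0.1, -0.2, 0.05])  # commanded torque

rates = pseudoinverse_rates(J, tau)
print(np.allclose(J @ rates, tau))  # True: torque reproduced exactly
```

The SR-inverse produces a slightly different rate vector whose torque error shrinks as k goes to zero; the null-motion term discussed in the abstract would be added on top of either solution.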
Chem Lab Simulation #3 and #4.
ERIC Educational Resources Information Center
Pipeline, 1983
1983-01-01
Two copy-protected chemistry simulations (for Apple II) are described. The first demonstrates Hess' law of heat reaction. The second illustrates how heat of vaporization can be used to determine an unknown liquid and shows how to find thermodynamic parameters in an equilibrium reaction. Both are self-instructing and use high-resolution graphics.…
2009-05-14
Next Generation Internet Research Act of 1998...performance computing R&D and called for increased interagency planning and coordination. The second, the Next Generation Internet Research Act of...law is available at http://www.nitrd.gov/congressional/laws/pl_102-194.html. Next Generation Internet Research Act of 1998, P.L. 105-305, 15 U.S.C
Second law analysis of a conventional steam power plant
NASA Technical Reports Server (NTRS)
Liu, Geng; Turner, Robert H.; Cengel, Yunus A.
1993-01-01
The exergy destroyed by the operation of a conventional steam power plant is computed numerically via an exergy cascade. An order-of-magnitude analysis shows that exergy destruction is dominated by combustion and heat transfer across temperature differences inside the boiler, and by conversion of the energy entering the turbine/generator sets from thermal to electrical. Combustion and heat transfer inside the boiler account for 53.83 percent of the total exergy destruction. Converting thermal energy into electrical energy is responsible for 41.34 percent; heat transfer across the condenser for 2.89 percent; fluid flow with friction for 0.50 percent; the boiler feed pump turbine for 0.25 percent; and fluid flow mixing for 0.23 percent. Other equipment, including the gland steam condenser, drain cooler, deaerator and heat exchangers, is in the aggregate responsible for less than one percent of the total exergy destruction. An energy analysis is also given to compare the exergy cascade with the energy cascade. Efficiencies based on both the first and second laws of thermodynamics are calculated for a number of components and for the plant. The results show that high first-law efficiency does not imply high second-law efficiency; the second-law analysis thus proves the more powerful tool for pinpointing real losses. The procedure used to determine total exergy destruction and second-law efficiency can be used in conceptual design and parametric studies to evaluate the performance of other steam power plants and other thermal systems.
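The breakdown quoted in the abstract can be checked with a few lines; the percentages below are the ones reported, and the remainder should come out under one percent, as stated.

```python
# Exergy-destruction breakdown reported in the abstract (percent of total).
destruction = {
    "boiler combustion and heat transfer": 53.83,
    "thermal-to-electrical conversion": 41.34,
    "condenser heat transfer": 2.89,
    "fluid friction": 0.50,
    "boiler feed pump turbine": 0.25,
    "fluid mixing": 0.23,
}
accounted = sum(destruction.values())
other = 100.0 - accounted  # gland steam condenser, drain cooler, deaerator, ...
print(f"accounted: {accounted:.2f}%, other equipment: {other:.2f}%")
# prints: accounted: 99.04%, other equipment: 0.96%
```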
Fixed gain and adaptive techniques for rotorcraft vibration control
NASA Technical Reports Server (NTRS)
Roy, R. H.; Saberi, H. A.; Walker, R. A.
1985-01-01
The results of an analysis effort performed to demonstrate the feasibility of employing approximate dynamical models and frequency-shaped cost functional control law design techniques for helicopter vibration suppression are presented. Both fixed-gain and adaptive control designs based on linear second-order dynamical models were implemented in a detailed Rotor Systems Research Aircraft (RSRA) simulation to validate these active vibration suppression control laws. Approximate models of fuselage flexibility were included in the RSRA simulation in order to more accurately characterize the structural dynamics. The results for both the fixed-gain and adaptive approaches are promising and provide a foundation for further validation in more extensive simulation studies and in wind tunnel and/or flight tests.
NASA Astrophysics Data System (ADS)
Kettle, L. M.; Mora, P.; Weatherley, D.; Gross, L.; Xing, H.
2006-12-01
Simulations using the Finite Element method are widely used in many engineering applications and for the solution of partial differential equations (PDEs). Computational models based on the solution of PDEs play a key role in earth systems simulations. We present numerical modelling of crustal fault systems where the dynamic elastic wave equation is solved using the Finite Element method. This is achieved using a high level computational modelling language, escript, available as open source software from ACcESS (Australian Computational Earth Systems Simulator), the University of Queensland. Escript is an advanced geophysical simulation software package developed at ACcESS which includes parallel equation solvers, data visualisation and data analysis software. The escript library was implemented to develop a flexible Finite Element model which reliably simulates the mechanism of faulting and the physics of earthquakes. Both 2D and 3D elastodynamic models are being developed to study the dynamics of crustal fault systems. Our final goal is to build a flexible model which can be applied to any fault system with user-defined geometry and input parameters. To study the physics of earthquake processes, two different time scales must be modelled, firstly the quasi-static loading phase which gradually increases stress in the system (~100years), and secondly the dynamic rupture process which rapidly redistributes stress in the system (~100secs). We will discuss the solution of the time-dependent elastic wave equation for an arbitrary fault system using escript. This involves prescribing the correct initial stress distribution in the system to simulate the quasi-static loading of faults to failure; determining a suitable frictional constitutive law which accurately reproduces the dynamics of the stick/slip instability at the faults; and using a robust time integration scheme. These dynamic models generate data and information that can be used for earthquake forecasting.
Quasiperiodic oscillation and possible Second Law violation in a nanosystem
NASA Astrophysics Data System (ADS)
Quick, R.; Singharoy, A.; Ortoleva, P.
2013-05-01
Simulation of a virus-like particle reveals persistent oscillation about a free-energy minimizing structure. For an icosahedral structure of 12 human papillomavirus (HPV) L1 protein pentamers, the period is about 70 picoseconds and has amplitude of about 4 Å at 300 K and pH 7. The pentamers move radially and out-of-phase with their neighbors. As temperature increases the amplitude and period decrease. Since the dynamics are shown to be friction-dominated and free-energy driven, the oscillations are noninertial. These anomalous oscillations are an apparent violation of the Second Law mediated by fluctuations accompanying nanosystem behavior.
Optimized Materials From First Principles Simulations: Are We There Yet?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Galli, G; Gygi, F
2005-07-26
In the past thirty years, the use of scientific computing has become pervasive in all disciplines: collection and interpretation of most experimental data is carried out using computers, and physical models in computable form, with various degrees of complexity and sophistication, are utilized in all fields of science. However, full prediction of physical and chemical phenomena based on the basic laws of Nature, using computer simulations, is a revolution still in the making, and it involves some formidable theoretical and computational challenges. We illustrate the progress and successes obtained in recent years in predicting fundamental properties of materials in condensed phases and at the nanoscale, using ab-initio, quantum simulations. We also discuss open issues related to the validation of the approximate, first principles theories used in large scale simulations, and the resulting complex interplay between computation and experiment. Finally, we describe some applications, with focus on nanostructures and liquids, both at ambient and under extreme conditions.
ERIC Educational Resources Information Center
School Science Review, 1984
1984-01-01
Presents (1) suggestions on teaching volume and density in the elementary school; (2) ideas for teaching about floating and sinking; (3) a simple computer program on color addition; and (4) an illustration of Newton's second law of motion. (JN)
Refining Pragmatically-Appropriate Oral Communication via Computer-Simulated Conversations
ERIC Educational Resources Information Center
Sydorenko, Tetyana; Daurio, Phoebe; Thorne, Steven L.
2018-01-01
To address the problem of limited opportunities for practicing second language speaking in interaction, especially delicate interactions requiring pragmatic competence, we describe computer simulations designed for the oral practice of extended pragmatic routines and report on the affordances of such simulations for learning pragmatically…
Laboratory study of sonic booms and their scaling laws. [ballistic range simulation
NASA Technical Reports Server (NTRS)
Toong, T. Y.
1974-01-01
This program sought a basic understanding of the non-linear effects associated with caustics, through laboratory simulation experiments of sonic booms in a ballistic range and a coordinated theoretical study of scaling laws. Two cases of superbooms, or enhanced sonic booms at caustics, have been studied. The first case, referred to as acceleration superbooms, is related to the enhanced sonic booms generated during the acceleration maneuvers of supersonic aircraft. The second case, referred to as refraction superbooms, involves the superbooms that are generated as a result of atmospheric refraction. Important theoretical and experimental results are briefly reported.
Computer Solution of the Two-Dimensional Tether Ball: Problem to Illustrate Newton's Second Law.
ERIC Educational Resources Information Center
Zimmerman, W. Bruce
Force diagrams involving angular velocity, linear velocity, centripetal force, work, and kinetic energy are given with related equations of motion expressed in polar coordinates. The computer is used to solve differential equations, thus reducing the mathematical requirements of the students. An experiment is conducted using an air table to check…
Evaluation of the entropy consistent Euler flux on 1D and 2D test problems
NASA Astrophysics Data System (ADS)
Roslan, Nur Khairunnisa Hanisah; Ismail, Farzad
2012-06-01
Most CFD simulations yield good predictions of pressure and velocity when compared to experimental data. Unfortunately, these results will most likely not adhere to the second law of thermodynamics, thus compromising the authenticity of the predicted data. Currently, the test of a good CFD code is to check how much entropy is generated in a smooth flow and to hope that the numerical entropy produced is of the correct sign when a shock is encountered. Herein, a shock-capturing code written in C++ based on a recent entropy consistent Euler flux is developed to simulate 1D and 2D flows. Unlike other finite volume schemes in commercial CFD codes, this entropy consistent (EC) flux function precisely satisfies the discrete second law of thermodynamics. The EC flux has an entropy-conserved part, preserving entropy for smooth flows, and a numerical diffusion part that produces the proper amount of entropy, consistent with the second law. Several numerical simulations using the entropy consistent flux have been run on two-dimensional test cases. The first case is a Mach 3 flow over a forward-facing step; the second is a flow over a NACA 0012 airfoil; and the third is a hypersonic flow passing over a 2D cylinder. Local flow quantities such as velocity and pressure are analyzed and then compared mainly with the Roe flux. The results herein show that the EC flux does not capture the unphysical rarefaction shock, unlike the Roe flux, and does not easily succumb to the carbuncle phenomenon. In addition, the EC flux maintains good performance in cases where the Roe flux is known to be superior.
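The requirement that numerical entropy have the correct sign at a shock can be illustrated with the exact Rankine-Hugoniot relations: a compression shock produces entropy, while the formal "rarefaction shock" branch would destroy it. This is a minimal sketch of the second-law constraint, not of the EC flux itself.

```python
import math

def shock_entropy_jump(M1, gamma=1.4):
    """Entropy jump (in units of cv) across a normal shock at upstream
    Mach number M1, from the Rankine-Hugoniot relations:
    ds/cv = ln(p2/p1) - gamma * ln(rho2/rho1)."""
    p_ratio = 1.0 + 2.0 * gamma / (gamma + 1.0) * (M1**2 - 1.0)
    rho_ratio = (gamma + 1.0) * M1**2 / ((gamma - 1.0) * M1**2 + 2.0)
    return math.log(p_ratio) - gamma * math.log(rho_ratio)

# A compression shock (M1 > 1) generates entropy, as the second law requires;
# the formal M1 < 1 branch would destroy it and must be excluded by the scheme.
print(shock_entropy_jump(3.0) > 0)   # True: physical compression shock
print(shock_entropy_jump(0.5) < 0)   # True: unphysical rarefaction shock
```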
Thermodynamic analysis of shark skin texture surfaces for microchannel flow
NASA Astrophysics Data System (ADS)
Yu, Hai-Yan; Zhang, Hao-Chun; Guo, Yang-Yu; Tan, He-Ping; Li, Yao; Xie, Gong-Nan
2016-09-01
Studies of shark-skin textured surfaces for flow drag reduction provide inspiration for researchers to overcome technical challenges in actual production applications. In this paper, three kinds of infinite parallel-plate flow models with microstructure inspired by shark skin were established according to the cross-sectional shape of the microstructure: a blade model, a wedge model and a smooth model. Simulation was carried out using FLUENT, which simplified the computation relative to direct numerical simulation. To get the best performance from the simulation results, the shear-stress transport k-omega turbulence model was chosen. Since the drag-reduction mechanism is generally discussed from a kinetics point of view, which cannot directly interpret the cause of the losses, a drag reduction rate was instead established based on the second law of thermodynamics. Considering abrasion and fabrication precision in practical applications, three kinds of abraded geometry models were constructed and tested, and the ideal microstructure achieving the best performance suited to manufacturing was identified on the basis of the drag reduction rate. It is also believed that bionic shark-skin surfaces subject to mechanical abrasion may draw more attention from industrial designers and gain wide application for their drag-reducing characteristics.
Modeling the pharyngeal pressure during adult nasal high flow therapy.
Kumar, Haribalan; Spence, Callum J T; Tawhai, Merryn H
2015-12-01
Subjects receiving nasal high flow (NHF) via wide-bore nasal cannula may experience different levels of positive pressure depending on the individual response to NHF. In this study, airflow in the nasal airway during NHF-assisted breathing is simulated and nasopharyngeal airway pressure numerically computed, to determine whether the relationship between NHF and pressure can be described by a simple equation. Two geometric models are used for analysis. In the first, 3D airway geometry is reconstructed from computed tomography images of an adult nasal airway. For the second, a simplified geometric model is derived that has the same cross-sectional area as the complex model, but is more readily amenable to analysis. Peak airway pressure is correlated as a function of nasal valve area, nostril area and cannula flow rate, for NHF rates of 20, 40 and 60 L/min. Results show that airway pressure is related by a power law to NHF rate, valve area, and nostril area. Copyright © 2015 Elsevier B.V. All rights reserved.
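A power-law relationship like the one reported between airway pressure and cannula flow can be recovered by linear regression in log-log coordinates. The sketch below uses synthetic data with made-up coefficients, not the values fitted in the study.

```python
import math

def fit_power_law(x, y):
    """Least-squares fit of y = c * x**n via linear regression on
    (log x, log y); returns (c, n)."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    m = len(x)
    mx, my = sum(lx) / m, sum(ly) / m
    n = sum((a - mx) * (b - my) for a, b in zip(lx, ly)) / \
        sum((a - mx) ** 2 for a in lx)
    return math.exp(my - n * mx), n

# Synthetic pressure data, P = 0.005 * Q**1.8 (illustrative coefficients
# only), at the NHF rates used in the study.
flows = [20.0, 40.0, 60.0]
pressures = [0.005 * q**1.8 for q in flows]
c, n = fit_power_law(flows, pressures)
print(round(c, 4), round(n, 2))  # recovers 0.005 and 1.8
```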
The investigation of tethered satellite system dynamics
NASA Technical Reports Server (NTRS)
Lorenzini, E.
1985-01-01
Progress in tethered satellite system dynamics research is reported. A retrieval rate control law with no angular feedback was studied to investigate the system's dynamic response. The initial conditions for the computer code which simulates the satellite's rotational dynamics were extended to a generic orbit. The model of the satellite thrusters was modified to simulate a pulsed thrust, by making the SKYHOOK integrator suitable for dealing with delta functions without losing computational efficiency. Tether breaks were simulated with the high-resolution computer code SLACK3. Shuttle maneuvers were tested. The electric potential around a severed conductive tether with insulator, in the case of a tether breakage at 20 km from the Shuttle, was computed. The electrodynamic hazards due to the breakage of the TSS electrodynamic tether in a plasma are evaluated.
NASA Astrophysics Data System (ADS)
Liu, Jianming; Grant, Steven L.; Benesty, Jacob
2015-12-01
A new reweighted proportionate affine projection algorithm (RPAPA) with memory and row action projection (MRAP) is proposed in this paper. The reweighted PAPA is derived from a family of sparseness measures, which demonstrate performance similar to mu-law and l0-norm PAPA but with lower computational complexity. The sparseness of the channel is taken into account to improve the performance for dispersive system identification. Meanwhile, the memory of the filter's coefficients is combined with row action projections (RAP) to significantly reduce computational complexity. Simulation results demonstrate that the proposed RPAPA MRAP algorithm outperforms both the affine projection algorithm (APA) and PAPA, and has performance similar to l0 PAPA and mu-law PAPA in terms of convergence speed and tracking ability. Meanwhile, the proposed RPAPA MRAP has much lower computational complexity than PAPA, mu-law PAPA, l0 PAPA, etc., which makes it very appealing for real-time implementation.
Teaching physiology and the World Wide Web: electrochemistry and electrophysiology on the Internet.
Dwyer, T M; Fleming, J; Randall, J E; Coleman, T G
1997-12-01
Students seek active learning experiences that can rapidly impart relevant information in the most convenient way possible. Computer-assisted education can now use the resources of the World Wide Web to convey the important characteristics of events as elemental as the physical properties of osmotically active particles in the cell and as complex as the nerve action potential or the integrative behavior of the intact organism. We have designed laboratory exercises that introduce first-year medical students to membrane and action potentials, as well as the more complex example of integrative physiology, using the dynamic properties of computer simulations. Two specific examples are presented. The first presents the physical laws that apply to osmotic, chemical, and electrical gradients, leading to the development of the concept of membrane potentials; this module concludes with the simulation of the ability of the sodium-potassium pump to establish chemical gradients and maintain cell volume. The second module simulates the action potential according to the Hodgkin-Huxley model, illustrating the concepts of threshold, inactivation, refractory period, and accommodation. Students can access these resources during the scheduled laboratories or on their own time via our Web site on the Internet (http://phys-main.umsmed.edu) by using the World Wide Web protocol. Accurate version control is possible because one valid, but easily edited, copy of the labs exists at the Web site. A common graphical interface is possible through the use of the Hypertext Markup Language. Platform independence is possible through the logical and arithmetic calculations inherent to graphical browsers and the JavaScript computer language. The initial success of this program indicates that medical education can be very effective both by the use of accurate simulations and by the existence of a universally accessible Internet resource.
NASA Astrophysics Data System (ADS)
Schoups, G.; Vrugt, J. A.; Fenicia, F.; van de Giesen, N. C.
2010-10-01
Conceptual rainfall-runoff models have traditionally been applied without paying much attention to numerical errors induced by temporal integration of water balance dynamics. Reliance on first-order, explicit, fixed-step integration methods leads to computationally cheap simulation models that are easy to implement. Computational speed is especially desirable for estimating parameter and predictive uncertainty using Markov chain Monte Carlo (MCMC) methods. Confirming earlier work of Kavetski et al. (2003), we show here that the computational speed of first-order, explicit, fixed-step integration methods comes at a cost: for a case study with a spatially lumped conceptual rainfall-runoff model, it introduces artificial bimodality in the marginal posterior parameter distributions, which is not present in numerically accurate implementations of the same model. The resulting effects on MCMC simulation include (1) inconsistent estimates of posterior parameter and predictive distributions, (2) poor performance and slow convergence of the MCMC algorithm, and (3) unreliable convergence diagnosis using the Gelman-Rubin statistic. We studied several alternative numerical implementations to remedy these problems, including various adaptive-step finite difference schemes and an operator splitting method. Our results show that adaptive-step, second-order methods, based on either explicit finite differencing or operator splitting with analytical integration, provide the best alternative for accurate and efficient MCMC simulation. Fixed-step or adaptive-step implicit methods may also be used for increased accuracy, but they cannot match the efficiency of adaptive-step explicit finite differencing or operator splitting. Of the latter two, explicit finite differencing is more generally applicable and is preferred if the individual hydrologic flux laws cannot be integrated analytically, as the splitting method then loses its advantage.
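The cost of first-order, explicit, fixed-step integration can be seen on even a single linear reservoir with an exact solution. The sketch below, with an illustrative recession rate, compares a coarse fixed step against a much finer one (standing in for an adaptive scheme) on dS/dt = -k*S.

```python
import math

def euler_fixed(S0, k, dt, n_steps):
    """First-order, explicit, fixed-step (forward Euler) integration
    of the linear reservoir dS/dt = -k * S."""
    S = S0
    for _ in range(n_steps):
        S += dt * (-k * S)
    return S

# Linear-reservoir recession with an illustrative rate k = 0.5 per day.
S0, k, T = 100.0, 0.5, 10.0
exact = S0 * math.exp(-k * T)                       # analytic solution
coarse = euler_fixed(S0, k, dt=1.0, n_steps=10)     # daily fixed step
fine = euler_fixed(S0, k, dt=0.01, n_steps=1000)    # much smaller step
print(abs(coarse - exact) > abs(fine - exact))  # True: coarse step is worse
```

The numerical error of the coarse scheme is deterministic, which is exactly why it can masquerade as structure (e.g., artificial bimodality) in posterior distributions rather than washing out as noise.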
Computer simulation study of the nematic-vapour interface in the Gay-Berne model
NASA Astrophysics Data System (ADS)
Rull, Luis F.; Romero-Enrique, José Manuel
2017-06-01
We present computer simulations of the vapour-nematic interface of the Gay-Berne model. We considered situations which correspond to either prolate or oblate molecules. We determine the anchoring of the nematic phase and correlate it with the intermolecular potential parameters. On the other hand, we evaluate the surface tension associated to this interface. We find a corresponding states law for the surface tension dependence on the temperature, valid for both prolate and oblate molecules.
Viscoelastic Earthquake Cycle Simulation with Memory Variable Method
NASA Astrophysics Data System (ADS)
Hirahara, K.; Ohtani, M.
2017-12-01
There have so far been no earthquake (EQ) cycle simulations based on RSF (rate and state friction) laws in viscoelastic media, except for Kato (2002), who simulated cycles on a 2-D vertical strike-slip fault and showed nearly the same cycles as those in elastic cases. The viscoelasticity could, however, have a larger effect on large dip-slip EQ cycles. In a boundary element approach, stress is calculated using a hereditary integral of the stress relaxation function and the slip deficit rate, which requires the past slip rates and leads to huge computational costs. This cost is one reason why almost no simulations have been performed in viscoelastic media. We have investigated the memory variable method used in numerical computation of wave propagation in dissipative media (e.g., Moczo and Kristek, 2005). In this method, by introducing memory variables that satisfy first-order differential equations, no hereditary integrals are needed in the stress calculation, and the computational costs are of the same order as those in elastic cases. Further, Hirahara et al. (2012) developed the iterative memory variable method, following Taylor et al. (1970), for EQ cycle simulations in linear viscoelastic media. In this presentation, we first introduce our method for EQ cycle simulations and show the effect of linear viscoelasticity on stick-slip cycles in a 1-DOF block-SLS (standard linear solid) model, where the elastic spring of the traditional block-spring model is replaced by an SLS element and we pull the block, which obeys an RSF law, at a constant rate. In this model, the memory variable stands for the displacement of the dash-pot in the SLS element. Using a smaller viscosity reduces the recurrence time to a minimum value: a smaller viscosity means a smaller relaxation time, which makes the stress recovery quicker, leading to a smaller recurrence time.
Second, we show EQ cycles on a 2-D dip-slip fault with a dip angle of 20 degrees in an elastic layer 40 km thick overriding a Maxwell viscoelastic half layer with a relaxation time of 5 yrs. In a test model where we set the fault at 30-40 km depths, the recurrence time of the EQ cycle is reduced by about 1 yr, from 27.92 yrs in the elastic case to 26.85 yrs. This reduced recurrence time is consistent with Kato (2002), but the effect of the viscoelasticity on the cycles is larger for the dip-slip fault than for the strike-slip one.
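The memory-variable idea can be sketched in the 1-DOF block-SLS setting: the dash-pot displacement obeys a first-order ODE, so the stress update needs no hereditary integral over past slip. The constants below are illustrative, and the rate-and-state friction part of the model is omitted; this is only the viscoelastic-stress bookkeeping.

```python
def sls_stress_history(u_of_t, k1, k2, eta, dt, n_steps):
    """Stress in a standard linear solid via a memory variable.

    x is the dash-pot displacement (the memory variable); it obeys the
    first-order ODE  eta * dx/dt = k2 * (u - x),  so no hereditary
    (convolution) integral over the past loading history is needed.
    """
    x = 0.0
    stresses = []
    for i in range(n_steps):
        u = u_of_t(i * dt)
        x += dt * k2 * (u - x) / eta          # memory-variable update
        stresses.append(k1 * u + k2 * (u - x))
    return stresses

# Step-strain relaxation test with illustrative constants: stress relaxes
# from about (k1 + k2) toward k1 with time constant eta / k2.
k1, k2, eta = 1.0, 1.0, 5.0
s = sls_stress_history(lambda t: 1.0, k1, k2, eta, dt=0.01, n_steps=5000)
print(s[0] > 1.9 and abs(s[-1] - k1) < 0.01)  # True
```

A smaller eta shortens the relaxation time eta/k2 and makes the stress recover faster, which is the mechanism behind the shorter recurrence times described above.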
Digital autopilots: Design considerations and simulator evaluations
NASA Technical Reports Server (NTRS)
Osder, S.; Neuman, F.; Foster, J.
1971-01-01
The development of a digital autopilot program for a transport aircraft and the evaluation of that system's performance on a transport aircraft simulator is discussed. The digital autopilot includes three axis attitude stabilization, automatic throttle control and flight path guidance functions with emphasis on the mode progression from descent into the terminal area through automatic landing. The study effort involved a sequence of tasks starting with the definition of detailed system block diagrams of control laws followed by a flow charting and programming phase and concluding with performance verification using the transport aircraft simulation. The autopilot control laws were programmed in FORTRAN 4 in order to isolate the design process from requirements peculiar to an individual computer.
NASA Astrophysics Data System (ADS)
Simbanefayi, Innocent; Khalique, Chaudry Masood
2018-03-01
In this work we study the Korteweg-de Vries-Benjamin-Bona-Mahony (KdV-BBM) equation, which describes the two-way propagation of waves. Using Lie symmetry method together with Jacobi elliptic function expansion and Kudryashov methods we construct its travelling wave solutions. Also, we derive conservation laws of the KdV-BBM equation using the variational derivative approach. In this method, we begin by computing second-order multipliers for the KdV-BBM equation followed by a derivation of the respective conservation laws for each multiplier.
Protection Relaying Scheme Based on Fault Reactance Operation Type
NASA Astrophysics Data System (ADS)
Tsuji, Kouichi
The operating principles of existing relays are roughly divided into two types: current differential types based on Kirchhoff's first law, and impedance types based on the second law. Because Kirchhoff's laws allow fault phenomena to be formulated strictly, the circuit equations become nonlinear simultaneous equations in the fault-point location k and the fault resistance Rf. This approach has two defects: (1) a heavy computational burden from the iterative Newton-Raphson calculation, and (2) relay operators cannot easily understand the principle behind the numerical matrix operations. The new protection relay principle proposed in this paper focuses on the fact that the reactance component at the fault point is almost zero. Two reactances, Xf(S) and Xf(R), at the two ends of a branch are calculated by solving linear equations. If the signs of Xf(S) and Xf(R) differ, the fault point can be judged to lie within that branch. The reactance Xf corresponds to the difference in branch reactance between the actual fault point and an imaginary fault point, so the relay engineer can understand the fault location through the concept of "distance". Simulation results using this new method indicate highly precise estimation of fault locations compared with the inspected fault locations on operating transmission lines.
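The sign test at the heart of the proposed relay can be sketched on a deliberately simplified lossless line; this illustrates only the sign logic, not the paper's solution of the network equations, and the branch reactance value is arbitrary.

```python
def fault_inside_branch(k, X_branch=10.0):
    """Sign-test sketch on a simplified lossless line.

    k is the fault position as a fraction of the branch measured from
    end S (k may fall outside [0, 1] if the fault is on another branch).
    Xf_S and Xf_R are the reactances from each end to the fault, both
    measured in the S -> R direction; opposite signs mean the fault
    lies between the two ends.
    """
    Xf_S = k * X_branch            # from end S toward the fault
    Xf_R = (k - 1.0) * X_branch    # from end R, same orientation
    return Xf_S * Xf_R < 0

print(fault_inside_branch(0.4))   # True: fault at 40% of the branch
print(fault_inside_branch(1.3))   # False: fault beyond end R
```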
TEACHING PHYSICS: A computer-based revitalization of Atwood's machine
NASA Astrophysics Data System (ADS)
Trumper, Ricardo; Gelbman, Moshe
2000-09-01
Atwood's machine is used in a microcomputer-based experiment to demonstrate Newton's second law with considerable precision. The friction force on the masses and the moment of inertia of the pulley can also be estimated.
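Newton's second law for Atwood's machine, including the pulley moment of inertia and friction force that the experiment estimates, gives the acceleration in closed form. The masses and pulley values below are illustrative, not the apparatus actually used.

```python
def atwood_acceleration(m1, m2, I=0.0, r=1.0, g=9.81, f=0.0):
    """Acceleration of an Atwood machine from Newton's second law:
    a = ((m1 - m2) * g - f) / (m1 + m2 + I / r**2),
    where I is the pulley's moment of inertia, r its radius, and f a
    constant friction force opposing the motion. Assumes m1 > m2."""
    return ((m1 - m2) * g - f) / (m1 + m2 + I / r**2)

# Illustrative values: 0.55 kg vs 0.50 kg masses, ideal massless pulley.
a_ideal = atwood_acceleration(0.55, 0.50)
# Same masses with a pulley of I = 5e-4 kg m^2, r = 0.05 m: a decreases,
# which is how the experiment can estimate I from the measured acceleration.
a_real = atwood_acceleration(0.55, 0.50, I=5e-4, r=0.05)
print(a_ideal > a_real > 0)  # True
```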
Infection Threshold for an Epidemic Model in Site and Bond Percolation Worlds
NASA Astrophysics Data System (ADS)
Sakisaka, Yukio; Yoshimura, Jin; Takeuchi, Yasuhiro; Sugiura, Koji; Tainaka, Kei-ichi
2010-02-01
We investigate an epidemic model on a square lattice with two protection treatments: prevention and quarantine. To explore the effects of both treatments, we apply the site and bond percolations. Computer simulations reveal that the threshold between endemic and disease-free phases can be represented by a single scaling law. The mean-field theory qualitatively predicts such infection dynamics and the scaling law.
Constraint methods that accelerate free-energy simulations of biomolecules.
Perez, Alberto; MacCallum, Justin L; Coutsias, Evangelos A; Dill, Ken A
2015-12-28
Atomistic molecular dynamics simulations of biomolecules are critical for generating narratives about biological mechanisms. The power of atomistic simulations is that these are physics-based methods that satisfy Boltzmann's law, so they can be used to compute populations, dynamics, and mechanisms. But physical simulations are computationally intensive and do not scale well to the sizes of many important biomolecules. One way to speed up physical simulations is by coarse-graining the potential function. Another way is to harness structural knowledge, often by imposing spring-like restraints. But harnessing external knowledge in physical simulations is problematic because knowledge, data, or hunches have errors, noise, and combinatoric uncertainties. Here, we review recent principled methods for imposing restraints to speed up physics-based molecular simulations that promise to scale to larger biomolecules and motions.
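One common form of the spring-like restraint mentioned above is a flat-bottomed harmonic well, which applies no bias while the coordinate stays within a tolerance of the target; the target, width, and force constant below are arbitrary illustrative values, not a specific package's defaults.

```python
def flat_bottom_restraint(d, d0=10.0, width=2.0, k=100.0):
    """Energy of a flat-bottomed harmonic restraint: zero while the
    coordinate d stays within `width` of the target d0, and quadratic
    (spring constant k) once it leaves that tolerance band. The flat
    bottom is what lets noisy or uncertain knowledge guide a simulation
    without over-constraining it."""
    excess = max(0.0, abs(d - d0) - width)
    return 0.5 * k * excess**2

print(flat_bottom_restraint(11.0))  # 0.0: inside the flat bottom, no bias
print(flat_bottom_restraint(13.0))  # 50.0: one unit past the band edge
```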
Supercritical entanglement in local systems: Counterexample to the area law for quantum matter
Movassagh, Ramis; Shor, Peter W.
2016-01-01
Quantum entanglement is the most surprising feature of quantum mechanics. Entanglement is simultaneously responsible for the difficulty of simulating quantum matter on a classical computer and the exponential speedups afforded by quantum computers. Ground states of quantum many-body systems typically satisfy an “area law”: The amount of entanglement between a subsystem and the rest of the system is proportional to the area of the boundary. A system that obeys an area law has less entanglement and can be simulated more efficiently than a generic quantum state whose entanglement could be proportional to the total system’s size. Moreover, an area law provides useful information about the low-energy physics of the system. It is widely believed that for physically reasonable quantum systems, the area law cannot be violated by more than a logarithmic factor in the system’s size. We introduce a class of exactly solvable one-dimensional physical models which we can prove have exponentially more entanglement than suggested by the area law, and violate the area law by a square-root factor. This work suggests that simple quantum matter is richer and can provide much more quantum resources (i.e., entanglement) than expected. In addition to using recent advances in quantum information and condensed matter theory, we have drawn upon various branches of mathematics such as combinatorics of random walks, Brownian excursions, and fractional matching theory. We hope that the techniques developed herein may be useful for other problems in physics as well. PMID:27821725
Critical branching neural networks.
Kello, Christopher T
2013-01-01
It is now well-established that intrinsic variations in human neural and behavioral activity tend to exhibit scaling laws in their fluctuations and distributions. The meaning of these scaling laws is an ongoing matter of debate between isolable causes versus pervasive causes. A spiking neural network model is presented that self-tunes to critical branching and, in doing so, simulates observed scaling laws as pervasive to neural and behavioral activity. These scaling laws are related to neural and cognitive functions, in that critical branching is shown to yield spiking activity with maximal memory and encoding capacities when analyzed using reservoir computing techniques. The model is also shown to account for findings of pervasive 1/f scaling in speech and cued response behaviors that are difficult to explain by isolable causes. Issues and questions raised by the model and its results are discussed from the perspectives of physics, neuroscience, computer and information sciences, and psychological and cognitive sciences.
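The critical-branching idea above can be illustrated with a toy branching process (a hedged sketch; the paper's model is a self-tuning spiking network, not this process). At branching ratio sigma = 1, avalanche sizes become heavy-tailed, the signature of the scaling laws discussed:

```python
import random

def avalanche_size(sigma, rng, max_size=10_000):
    """Size of one avalanche in a simple branching process."""
    active, size = 1, 0
    while active and size < max_size:
        size += active
        # each active unit spawns up to 2 descendants, each with
        # probability sigma/2, so the mean branching ratio is sigma
        active = sum(rng.random() < sigma / 2.0 for _ in range(2 * active))
    return size

rng = random.Random(0)
sizes = [avalanche_size(1.0, rng) for _ in range(2000)]   # critical regime
frac_tiny = sum(s == 1 for s in sizes) / len(sizes)
print(frac_tiny)    # P(size == 1) = (1 - 1/2)**2 = 0.25 in expectation
print(max(sizes))   # heavy tail: occasional very large avalanches
```

Away from criticality (sigma < 1) the same process produces only short, exponentially distributed avalanches, which is why the branching ratio is the natural tuning parameter.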
Prediction of water loss and viscoelastic deformation of apple tissue using a multiscale model.
Aregawi, Wondwosen A; Abera, Metadel K; Fanta, Solomon W; Verboven, Pieter; Nicolai, Bart
2014-11-19
A two-dimensional multiscale water transport and mechanical model was developed to predict the water loss and deformation of apple tissue (Malus × domestica Borkh. cv. 'Jonagold') during dehydration. At the macroscopic level, a continuum approach was used to construct a coupled water transport and mechanical model. Water transport in the tissue was simulated with a phenomenological approach based on Fick's second law of diffusion. Mechanical deformation due to shrinkage was based on a structural mechanics model consisting of two parts: Yeoh strain energy functions to account for non-linearity and Maxwell's rheological model of viscoelasticity. Apparent parameters of the macroscale model were computed from a microscale model. The latter accounted for water exchange between different microscopic structures of the tissue (intercellular space, the cell wall network and cytoplasm) using transport laws with the water potential as the driving force for water exchange between different compartments of tissue. The microscale deformation mechanics were computed using a model where the cells were represented as a closed thin-walled structure. The predicted apparent water transport properties of apple cortex tissue from the microscale model showed good agreement with the experimentally measured values. Deviations between calculated and measured mechanical properties of apple tissue were observed at strains larger than 3%, and were attributed to differences in water transport behavior between the experimental compression tests and the simulated dehydration-deformation behavior. Tissue dehydration and deformation in the high relative humidity range (>97% RH) could, however, be accurately predicted by the multiscale model. The multiscale model helped to understand the dynamics of the dehydration process and the importance of the different microstructural compartments (intercellular space, cell wall, membrane and cytoplasm) for water transport and mechanical deformation.
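The macroscale transport step rests on Fick's second law; a minimal 1D explicit finite-difference sketch (grid size, diffusivity and boundary treatment are illustrative assumptions, not the paper's coupled 2D tissue model) shows how moisture is depleted from the surfaces inward:

```python
import numpy as np

# Fick's second law, dc/dt = D * d2c/dx2, for a slab drying at both faces.
D, L, nx = 1e-9, 1e-3, 51          # diffusivity (m^2/s), thickness (m), points
dx = L / (nx - 1)
dt = 0.4 * dx**2 / D               # explicit scheme is stable for dt <= dx^2/(2D)
c = np.ones(nx)                    # normalized initial moisture content
c[0] = c[-1] = 0.0                 # dry air at both surfaces
for _ in range(500):
    c[1:-1] += D * dt / dx**2 * (c[2:] - 2.0*c[1:-1] + c[:-2])
print(c[nx // 2], c.mean())        # centre stays wettest; mean moisture falls
```

Coupling such a transport solve to a shrinkage model, as the paper does, would additionally update the geometry as the mean moisture content drops.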
NASA Astrophysics Data System (ADS)
Yang, Xuguang; Wang, Lei
In this paper, the magnetic field effects on natural convection of power-law non-Newtonian fluids in rectangular enclosures are numerically studied by the multiple-relaxation-time (MRT) lattice Boltzmann method (LBM). To maintain the locality of the LBM, a local computing scheme for the shear rate is used. Thus, all simulations can be easily performed on the Graphics Processing Unit (GPU) using NVIDIA’s CUDA, and high computational efficiency can be achieved. The numerical simulations presented here span a wide range of thermal Rayleigh number (10^4 ≤ Ra ≤ 10^6), Hartmann number (0 ≤ Ha ≤ 20), power-law index (0.5 ≤ n ≤ 1.5) and aspect ratio (0.25 ≤ AR ≤ 4.0) to identify the different flow patterns and temperature distributions. The results show that the heat transfer rate increases with thermal Rayleigh number, decreases with Hartmann number, and the average Nusselt number decreases with increasing power-law index. Moreover, the effects of aspect ratio have also been investigated in detail.
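The power-law index n above enters through the Ostwald-de Waele constitutive law, whose apparent viscosity is mu_app = K * gamma_dot**(n - 1). A one-line sketch of that rheology (K is an illustrative consistency index; this is not the MRT lattice Boltzmann solver itself):

```python
def apparent_viscosity(K, n, gamma_dot):
    """Apparent viscosity of a power-law fluid at shear rate gamma_dot."""
    return K * gamma_dot ** (n - 1.0)

K = 1.0  # consistency index, Pa*s^n (illustrative value)
for n in (0.5, 1.0, 1.5):   # shear-thinning, Newtonian, shear-thickening
    print(n, apparent_viscosity(K, n, 0.1), apparent_viscosity(K, n, 10.0))
```

For n < 1 the viscosity falls as the shear rate grows (shear-thinning), for n = 1 it is constant, and for n > 1 it rises, which is why the average Nusselt number depends so strongly on n.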
ERIC Educational Resources Information Center
Lindstrom, Peter A.; And Others
This document consists of four units. The first of these views calculus applications to work, area, and distance problems. It is designed to help students gain experience in: 1) computing limits of Riemann sums; 2) computing definite integrals; and 3) solving elementary area, distance, and work problems by integration. The second module views…
Methods for simulation-based analysis of fluid-structure interaction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barone, Matthew Franklin; Payne, Jeffrey L.
2005-10-01
Methods for analysis of fluid-structure interaction using high fidelity simulations are critically reviewed. First, a literature review of modern numerical techniques for simulation of aeroelastic phenomena is presented. The review focuses on methods contained within the arbitrary Lagrangian-Eulerian (ALE) framework for coupling computational fluid dynamics codes to computational structural mechanics codes. The review treats mesh movement algorithms, the role of the geometric conservation law, time advancement schemes, wetted surface interface strategies, and some representative applications. The complexity and computational expense of coupled Navier-Stokes/structural dynamics simulations point to the need for reduced order modeling to facilitate parametric analysis. The proper orthogonal decomposition (POD)/Galerkin projection approach for building a reduced order model (ROM) is presented, along with ideas for extension of the methodology to allow construction of ROMs based on data generated from ALE simulations.
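A minimal sketch of the POD step described above: snapshots are stacked into a matrix, the SVD supplies an energy-ranked basis, and the dynamics can then be Galerkin-projected onto a few modes. The two-mode "flow field" here is synthetic, not data from an ALE simulation:

```python
import numpy as np

t = np.linspace(0.0, 2.0*np.pi, 40)
x = np.linspace(0.0, 1.0, 100)
# each column is one snapshot in time of a synthetic two-mode field
snapshots = (np.outer(np.sin(np.pi*x), np.cos(t))
             + 0.1 * np.outer(np.sin(2.0*np.pi*x), np.sin(2.0*t)))
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = s**2 / np.sum(s**2)           # fraction of energy per POD mode
basis = U[:, :2]                       # truncated basis for Galerkin projection
reduced = basis.T @ snapshots          # 2-dof reduced coordinates per snapshot
print(energy[:3])                      # first two modes carry ~all the energy
```

In a real ROM the governing equations are then projected onto `basis`, turning a large PDE discretization into a small ODE system in the reduced coordinates.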
Thermodynamic Analysis of Dual-Mode Scramjet Engine Operation and Performance
NASA Technical Reports Server (NTRS)
Riggins, David; Tacket, Regan; Taylor, Trent; Auslender, Aaron
2006-01-01
Recent analytical advances in understanding the performance continuum (the thermodynamic spectrum) for air-breathing engines based on fundamental second-law considerations have clarified scramjet and ramjet operation, performance, and characteristics. Second-law based analysis is extended specifically in this work to clarify and describe the performance characteristics for dual-mode scramjet operation in the mid-speed range of flight Mach 4 to 7. This is done by a fundamental investigation of the complex but predictable interplay between heat release and irreversibilities in such an engine; results demonstrate the flow and performance character of the dual mode regime and of dual mode transition behavior. Both analytical and computational (multi-dimensional CFD) studies of sample dual-mode flow-fields are performed in order to demonstrate the second-law capability and performance and operability issues. The impact of the dual-mode regime is found to be characterized by decreasing overall irreversibility with increasing heat release, within the operability limits of the system.
NASA Technical Reports Server (NTRS)
Alter, Stephen J.; Brauckmann, Gregory J.; Kleb, Bil; Streett, Craig L; Glass, Christopher E.; Schuster, David M.
2015-01-01
Using the Fully Unstructured Three-Dimensional (FUN3D) computational fluid dynamics code, an unsteady, time-accurate flow field about a Space Launch System configuration was simulated at a transonic wind tunnel condition (Mach = 0.9). Delayed detached eddy simulation combined with Reynolds-Averaged Navier-Stokes and a Spalart-Allmaras turbulence model was employed for the simulation. A second-order accurate time evolution scheme was used to simulate the flow field, with a minimum of 0.2 seconds of simulated time to as much as 1.4 seconds. Data were collected at 480 pressure tap locations, 139 of which matched those on a 3% wind tunnel model tested in the Transonic Dynamics Tunnel (TDT) facility at NASA Langley Research Center. Comparisons between computation and experiment showed agreement within 5% in terms of location for peak RMS levels, and 20% for frequency and magnitude of power spectral densities. Grid resolution and time step sensitivity studies were performed to identify methods for improved accuracy comparisons to wind tunnel data. With limited computational resources, accurate trends for reduced vibratory loads on the vehicle were observed. Exploratory methods such as determining minimized computed errors based on CFL number and sub-iterations, as well as evaluating frequency content of the unsteady pressures and evaluation of oscillatory shock structures, were used in this study to enhance computational efficiency and solution accuracy. These techniques enabled development of a set of best practices for the evaluation of future flight vehicle designs in terms of vibratory loads.
2010-02-02
Act of 1991 ... Next Generation Internet Research Act of 1998 ... Computing Act of 1991 (P.L. 102-194) and the Next Generation Internet Research Act of 1998 (P.L. 105-305). The laws call for a President's Information ... planning and coordination. The second, the Next Generation Internet Research Act of 1998 (P.L. 105-305), amended the original law to expand the mission of
Metrics for comparing dynamic earthquake rupture simulations
Barall, Michael; Harris, Ruth A.
2014-01-01
Earthquakes are complex events that involve a myriad of interactions among multiple geologic features and processes. One of the tools that is available to assist with their study is computer simulation, particularly dynamic rupture simulation. A dynamic rupture simulation is a numerical model of the physical processes that occur during an earthquake. Starting with the fault geometry, friction constitutive law, initial stress conditions, and assumptions about the condition and response of the near‐fault rocks, a dynamic earthquake rupture simulation calculates the evolution of fault slip and stress over time as part of the elastodynamic numerical solution (Ⓔ see the simulation description in the electronic supplement to this article). The complexity of the computations in a dynamic rupture simulation makes it challenging to verify that the computer code is operating as intended, because there are no exact analytic solutions against which these codes’ results can be directly compared. One approach for checking if dynamic rupture computer codes are working satisfactorily is to compare each code’s results with the results of other dynamic rupture codes running the same earthquake simulation benchmark. To perform such a comparison consistently, it is necessary to have quantitative metrics. In this paper, we present a new method for quantitatively comparing the results of dynamic earthquake rupture computer simulation codes.
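One metric of the kind such code-to-code comparisons call for is a normalized RMS misfit between two codes' slip-rate time series, interpolated onto a shared time grid. The pulse shape and sampling grids below are synthetic stand-ins, not output from actual rupture codes:

```python
import numpy as np

def rms_misfit(t1, v1, t2, v2, n=500):
    """Normalized RMS difference of two time series on a common grid."""
    t = np.linspace(max(t1[0], t2[0]), min(t1[-1], t2[-1]), n)
    a, b = np.interp(t, t1, v1), np.interp(t, t2, v2)
    return np.sqrt(np.mean((a - b)**2)) / np.sqrt(np.mean(a**2))

def pulse(t):                       # synthetic slip-rate pulse
    return np.exp(-(t - 4.0)**2)

t1 = np.linspace(0, 10, 200)        # "code A" output times
t2 = np.linspace(0, 10, 150)        # "code B" samples differently
m_same = rms_misfit(t1, pulse(t1), t2, pulse(t2))
m_biased = rms_misfit(t1, pulse(t1), t2, 1.1 * pulse(t2))
print(m_same, m_biased)             # near zero vs. roughly 0.1 (10% bias)
```

Because the two codes need not share a time step, interpolating onto a common grid before differencing is what makes the comparison consistent across solvers.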
ERIC Educational Resources Information Center
Mitchell, Eugene E., Ed.
The simulation of a sampled-data system is described that uses a full parallel hybrid computer. The sampled data system simulated illustrates the proportional-integral-derivative (PID) discrete control of a continuous second-order process representing a stirred-tank. The stirred-tank is simulated using continuous analog components, while PID…
Statistical thermodynamics and the size distributions of tropical convective clouds.
NASA Astrophysics Data System (ADS)
Garrett, T. J.; Glenn, I. B.; Krueger, S. K.; Ferlay, N.
2017-12-01
Parameterizations for sub-grid cloud dynamics are commonly developed by using fine scale modeling or measurements to explicitly resolve the mechanistic details of clouds to the best extent possible, and then formulating these behaviors in terms of a coarser-grid cloud state. A second approach is to invoke physical intuition and some very general theoretical principles from equilibrium statistical thermodynamics. This second approach is quite widely used elsewhere in the atmospheric sciences: for example, to explain the heat capacity of air, blackbody radiation, or even the density profile of air in the atmosphere. Here we describe how entrainment and detrainment across cloud perimeters is limited by the amount of available air and the range of moist static energy in the atmosphere; this constrains cloud perimeter distributions to a power law with a -1 exponent along isentropes and to a Boltzmann distribution across isentropes. Further, the total cloud perimeter density in a cloud field is directly tied to the buoyancy frequency of the column. These simple results are shown to be reproduced within a complex dynamic simulation of a tropical convective cloud field and in passive satellite observations of cloud 3D structures. The implication is that equilibrium tropical cloud structures can be inferred from the bulk thermodynamic structure of the atmosphere without having to analyze computationally expensive dynamic simulations.
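A toy check of the claimed -1 exponent: samples with p(x) ∝ 1/x on [1, 1000] (log-uniform, standing in for cloud perimeters along an isentrope) give a log-log histogram slope near -1. This is a statistics sketch, not the cloud-field analysis itself:

```python
import numpy as np

rng = np.random.default_rng(1)
# p(x) ∝ 1/x on [1, 1000] is log-uniform, so sample uniformly in log space
x = np.exp(rng.uniform(np.log(1.0), np.log(1000.0), 100_000))
bins = np.logspace(0.0, 3.0, 20)
hist, edges = np.histogram(x, bins=bins, density=True)
centers = np.sqrt(edges[:-1] * edges[1:])          # geometric bin centres
slope = np.polyfit(np.log(centers), np.log(hist), 1)[0]
print(slope)   # close to -1
```

The same histogram machinery applied across isentropes would instead reveal the exponential fall-off of a Boltzmann distribution.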
NASA Astrophysics Data System (ADS)
van Poppel, Bret; Owkes, Mark; Nelson, Thomas; Lee, Zachary; Sowell, Tyler; Benson, Michael; Vasquez Guzman, Pablo; Fahrig, Rebecca; Eaton, John; Kurman, Matthew; Kweon, Chol-Bum; Bravo, Luis
2014-11-01
In this work, we present high-fidelity Computational Fluid Dynamics (CFD) results of liquid fuel injection from a pressure-swirl atomizer and compare the simulations to experimental results obtained using both shadowgraphy and phase-averaged X-ray computed tomography (CT) scans. The CFD and experimental results focus on the dense near-nozzle region to identify the dominant mechanisms of breakup during primary atomization. Simulations are performed using the NGA code of Desjardins et al (JCP 227 (2008)) and employ the volume of fluid (VOF) method proposed by Owkes and Desjardins (JCP 270 (2013)), a second order accurate, un-split, conservative, three-dimensional VOF scheme providing second order density fluxes and capable of robust and accurate high density ratio simulations. Qualitative features and quantitative statistics are assessed and compared for the simulation and experimental results, including the onset of atomization, spray cone angle, and drop size and distribution.
Yanamadala, Janakinadh; Noetscher, Gregory M; Rathi, Vishal K; Maliye, Saili; Win, Htay A; Tran, Anh L; Jackson, Xavier J; Htet, Aung T; Kozlov, Mikhail; Nazarian, Ara; Louie, Sara; Makarov, Sergey N
2015-01-01
Simulation of the electromagnetic response of the human body relies heavily upon efficient computational models or phantoms. The first objective of this paper is to present a new platform-independent full-body electromagnetic computational model (computational phantom), the Visible Human Project(®) (VHP)-Female v. 2.0 and to describe its distinct features. The second objective is to report phantom simulation performance metrics using the commercial FEM electromagnetic solver ANSYS HFSS.
Design and Testing of Flight Control Laws on the RASCAL Research Helicopter
NASA Technical Reports Server (NTRS)
Frost, Chad R.; Hindson, William S.; Moralez, Ernesto, III; Tucker, George E.; Dryfoos, James B.
2001-01-01
Two unique sets of flight control laws were designed, tested and flown on the Army/NASA Rotorcraft Aircrew Systems Concepts Airborne Laboratory (RASCAL) JUH-60A Black Hawk helicopter. The first set of control laws used a simple rate feedback scheme, intended to facilitate the first flight and subsequent flight qualification of the RASCAL research flight control system. The second set of control laws comprised a more sophisticated model-following architecture. Both sets of flight control laws were developed and tested extensively using desktop-to-flight modeling, analysis, and simulation tools. Flight test data matched the model predicted responses well, providing both evidence and confidence that future flight control development for RASCAL will be efficient and accurate.
NASA Technical Reports Server (NTRS)
Afjeh, Abdollah A.; Reed, John A.
2003-01-01
The following reports are presented on this project: A first year progress report on: Development of a Dynamically Configurable, Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation; A second year progress report on: Development of a Dynamically Configurable, Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation; An Extensible, Interchangeable and Sharable Database Model for Improving Multidisciplinary Aircraft Design; Interactive, Secure Web-enabled Aircraft Engine Simulation Using XML Databinding Integration; and Improving the Aircraft Design Process Using Web-based Modeling and Simulation.
Coupled circuit numerical analysis of eddy currents in an open MRI system.
Akram, Md Shahadat Hossain; Terada, Yasuhiko; Keiichiro, Ishi; Kose, Katsumi
2014-08-01
We performed a new coupled circuit numerical simulation of eddy currents in an open compact magnetic resonance imaging (MRI) system. Following the coupled circuit approach, the conducting structures were divided into subdomains along the length (or width) and the thickness, and by implementing coupled circuit concepts we have simulated transient responses of eddy currents for subdomains in different locations. We implemented the Eigen matrix technique to solve the network of coupled differential equations to speed up our simulation program. On the other hand, to compute the coupling relations between the biplanar gradient coil and any other conducting structure, we implemented the solid angle form of Ampere's law. We have also calculated the solid angle for three dimensions to compute inductive couplings in any subdomain of the conducting structures. Details of the temporal and spatial distribution of the eddy currents were then implemented in the secondary magnetic field calculation by the Biot-Savart law. In a desktop computer (programming platform: Wolfram Mathematica 8.0®; processor: Intel(R) Core(TM)2 Duo E7500 @ 2.93 GHz; OS: Windows 7 Professional; memory (RAM): 4.00 GB), it took less than 3 min to simulate the entire calculation of eddy currents and fields, and approximately 6 min for the X-gradient coil. The results are given in the time-space domain for both the direct and the cross-terms of the eddy current magnetic fields generated by the Z-gradient coil. We have also conducted free induction decay (FID) experiments of eddy fields using a nuclear magnetic resonance (NMR) probe to verify our simulation results. The simulation results were found to be in good agreement with the experimental results. In this study we have also conducted simulations for transient and spatial responses of the secondary magnetic field induced by the X-gradient coil.
Our approach is fast and has much less computational complexity than the conventional electromagnetic numerical simulation methods. Copyright © 2014 Elsevier Inc. All rights reserved.
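The Biot-Savart field-calculation step can be sketched for a current loop discretized into straight segments and evaluated at the loop centre, where the analytic field is B = mu0*I/(2R). The loop geometry is an illustrative stand-in for the coil and eddy-current subdomains in the paper:

```python
import numpy as np

MU0 = 4e-7 * np.pi   # vacuum permeability (T*m/A)

def biot_savart(segments, currents, r):
    """Sum mu0/(4*pi) * I * dl x r_vec / |r_vec|^3 over straight segments."""
    B = np.zeros(3)
    for (a, b), cur in zip(segments, currents):
        dl, mid = b - a, 0.5 * (a + b)
        rv = r - mid
        B += MU0 / (4.0*np.pi) * cur * np.cross(dl, rv) / np.linalg.norm(rv)**3
    return B

R, I, n = 0.1, 100.0, 400                 # radius (m), current (A), segments
theta = np.linspace(0.0, 2.0*np.pi, n + 1)
pts = np.c_[R*np.cos(theta), R*np.sin(theta), np.zeros(n + 1)]
segments = list(zip(pts[:-1], pts[1:]))
Bz = biot_savart(segments, [I] * n, np.zeros(3))[2]
print(Bz, MU0 * I / (2.0*R))              # numerical vs analytic
```

The same per-segment summation extends directly to eddy-current subdomains, with each subdomain's time-dependent current feeding the secondary-field calculation.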
NASA Technical Reports Server (NTRS)
Phillips, D. T.; Manseur, B.; Foster, J. W.
1982-01-01
Alternate definitions of system failure create complex analysis for which analytic solutions are available only for simple, special cases. The GRASP methodology is a computer simulation approach for solving all classes of problems in which both failure and repair events are modeled according to the probability laws of the individual components of the system.
NASA Astrophysics Data System (ADS)
Shang, J. S.; Andrienko, D. A.; Huang, P. G.; Surzhikov, S. T.
2014-06-01
An efficient computational capability for nonequilibrium radiation simulation via the ray tracing technique has been accomplished. The radiative rate equation is iteratively coupled with the aerodynamic conservation laws including nonequilibrium chemical and chemical-physical kinetic models. The spectral properties along tracing rays are determined by a space partition algorithm of the nearest neighbor search process, and the numerical accuracy is further enhanced by a local resolution refinement using the Gauss-Lobatto polynomial. The interdisciplinary governing equations are solved by an implicit delta formulation through the diminishing residual approach. The axisymmetric radiating flow fields over the reentry RAM-CII probe have been simulated and verified with flight data and previous solutions by traditional methods. A computational efficiency gain of nearly forty times is realized over existing simulation procedures.
Dimensional analysis, similarity, analogy, and the simulation theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, A.A.
1978-01-01
Dimensional analysis, similarity, analogy, and cybernetics are shown to be four consecutive steps in application of the simulation theory. This paper introduces the classes of phenomena which follow the same formal mathematical equations as models of the natural laws and the interior sphere of restraints groups of phenomena in which one can introduce simplified nondimensional mathematical equations. The simulation by similarity in a specific field of physics, by analogy in two or more different fields of physics, and by cybernetics in nature in two or more fields of mathematics, physics, biology, economics, politics, sociology, etc., appears as a unique theory which permits one to transport the results of experiments from the models, conveniently selected to meet the conditions of researches, constructions, and measurements in the laboratories, to the originals which are the primary objectives of the researches. Some interesting conclusions which cannot be avoided in the use of simplified nondimensional mathematical equations as models of natural laws are presented. Interesting limitations on the use of simulation theory based on assumed simplifications are recognized. This paper shows as necessary, in scientific research, that one write mathematical models of general laws which will be applied to nature in its entirety. The paper proposes the extension of the second law of thermodynamics as the generalized law of entropy to model life and its activities. This paper shows that the physical studies and philosophical interpretations of phenomena and natural laws cannot be separated in scientific work; they are interconnected and one cannot be put above the others.
Real-time dynamics of lattice gauge theories with a few-qubit quantum computer
NASA Astrophysics Data System (ADS)
Martinez, Esteban A.; Muschik, Christine A.; Schindler, Philipp; Nigg, Daniel; Erhard, Alexander; Heyl, Markus; Hauke, Philipp; Dalmonte, Marcello; Monz, Thomas; Zoller, Peter; Blatt, Rainer
2016-06-01
Gauge theories are fundamental to our understanding of interactions between the elementary constituents of matter as mediated by gauge bosons. However, computing the real-time dynamics in gauge theories is a notorious challenge for classical computational methods. This has recently stimulated theoretical effort, using Feynman’s idea of a quantum simulator, to devise schemes for simulating such theories on engineered quantum-mechanical devices, with the difficulty that gauge invariance and the associated local conservation laws (Gauss laws) need to be implemented. Here we report the experimental demonstration of a digital quantum simulation of a lattice gauge theory, by realizing (1 + 1)-dimensional quantum electrodynamics (the Schwinger model) on a few-qubit trapped-ion quantum computer. We are interested in the real-time evolution of the Schwinger mechanism, describing the instability of the bare vacuum due to quantum fluctuations, which manifests itself in the spontaneous creation of electron-positron pairs. To make efficient use of our quantum resources, we map the original problem to a spin model by eliminating the gauge fields in favour of exotic long-range interactions, which can be directly and efficiently implemented on an ion trap architecture. We explore the Schwinger mechanism of particle-antiparticle generation by monitoring the mass production and the vacuum persistence amplitude. Moreover, we track the real-time evolution of entanglement in the system, which illustrates how particle creation and entanglement generation are directly related. Our work represents a first step towards quantum simulation of high-energy theories using atomic physics experiments—the long-term intention is to extend this approach to real-time quantum simulations of non-Abelian lattice gauge theories.
NASA Astrophysics Data System (ADS)
Watanabe, Koji; Matsuno, Kenichi
This paper presents a new method for simulating flows driven by a body traveling with neither restriction on motion nor a limit of region size. In the present method, named the 'Moving Computational Domain Method', the whole of the computational domain including the bodies inside moves in the physical space without the limit of region size. Since the whole of the grid of the computational domain moves according to the movement of the body, the flow solver of the method has to be constructed on the moving grid system, and it is important for the flow solver to satisfy physical and geometric conservation laws simultaneously on the moving grid. For this issue, the Moving-Grid Finite-Volume Method is employed as the flow solver. The present Moving Computational Domain Method makes it possible to simulate flow driven by any kind of motion of the body in any size of region while satisfying physical and geometric conservation laws simultaneously. In this paper, the method is applied to the flow around a high-speed car passing through a hairpin curve. The distinctive flow field driven by the car at the hairpin curve has been demonstrated in detail. The results show the promising feature of the method.
Multicomponent model of deformation and detachment of a biofilm under fluid flow
Tierra, Giordano; Pavissich, Juan P.; Nerenberg, Robert; Xu, Zhiliang; Alber, Mark S.
2015-01-01
A novel biofilm model is described which systemically couples bacteria, extracellular polymeric substances (EPS) and solvent phases in biofilm. This enables the study of contributions of rheology of individual phases to deformation of biofilm in response to fluid flow as well as interactions between different phases. The model, which is based on first and second laws of thermodynamics, is derived using an energetic variational approach and phase-field method. Phase-field coupling is used to model structural changes of a biofilm. A newly developed unconditionally energy-stable numerical splitting scheme is implemented for computing the numerical solution of the model efficiently. Model simulations predict biofilm cohesive failure for the flow velocity between and m s−1 which is consistent with experiments. Simulations predict biofilm deformation resulting in the formation of streamers for EPS exhibiting a viscous-dominated mechanical response and the viscosity of EPS being less than . Higher EPS viscosity provides biofilm with greater resistance to deformation and to removal by the flow. Moreover, simulations show that higher EPS elasticity yields the formation of streamers with complex geometries that are more prone to detachment. These model predictions are shown to be in qualitative agreement with experimental observations.
Geometry of Conservation Laws for a Class of Parabolic Partial Differential Equations
NASA Astrophysics Data System (ADS)
Clelland, Jeanne Nielsen
1996-08-01
I consider the problem of computing the space of conservation laws for a second-order, parabolic partial differential equation for one function of three independent variables. The PDE is formulated as an exterior differential system $\mathcal{I}$ on a 12-manifold $M$, and its conservation laws are identified with the vector space of closed 3-forms in the infinite prolongation of $\mathcal{I}$ modulo the so-called "trivial" conservation laws. I use the tools of exterior differential systems and Cartan's method of equivalence to study the structure of the space of conservation laws. My main result is: Theorem. Any conservation law for a second-order, parabolic PDE for one function of three independent variables can be represented by a closed 3-form in the differential ideal $\mathcal{I}$ on the original 12-manifold $M$. I show that if a nontrivial conservation law exists, then $\mathcal{I}$ has a deprolongation to an equivalent system $\mathcal{J}$ on a 7-manifold $N$, and any conservation law for $\mathcal{I}$ can be expressed as a closed 3-form on $N$ which lies in $\mathcal{J}$. Furthermore, any such system in the real analytic category is locally equivalent to a system generated by a (parabolic) equation of the form $$A(u_{xx}u_{yy} - u_{xy}^2) + B_1 u_{xx} + 2B_2 u_{xy} + B_3 u_{yy} + C = 0$$ where $A$, $B_i$, $C$ are functions of $x$, $y$, $t$, $u$, $u_x$, $u_y$, $u_t$. I compute the space of conservation laws for several examples, and I begin the process of analyzing the general case using Cartan's method of equivalence. I show that the non-linearizable equation $u_t = \tfrac{1}{2} e^{-u}(u_{xx} + u_{yy})$ has an infinite-dimensional space of conservation laws. This stands in contrast to the two-variable case, for which Bryant and Griffiths showed that any equation whose space of conservation laws has dimension 4 or more is locally equivalent to a linear equation, i.e., is linearizable.
Evaluation of a breast software model for 2D and 3D X-ray imaging studies of the breast.
Baneva, Yanka; Bliznakova, Kristina; Cockmartin, Lesley; Marinov, Stoyko; Buliev, Ivan; Mettivier, Giovanni; Bosmans, Hilde; Russo, Paolo; Marshall, Nicholas; Bliznakov, Zhivko
2017-09-01
In X-ray imaging, test objects reproducing breast anatomy characteristics are developed to optimize aspects such as image processing or reconstruction, lesion detection performance, image quality and radiation-induced detriment. Recently, a physical phantom with a structured background has been introduced for both 2D mammography and breast tomosynthesis. A software version of this phantom and a few related versions are now available, and a comparison between these 3D software phantoms and the physical phantom is presented here. The software breast phantom simulates a semi-cylindrical container filled with spherical beads of different diameters. Four computational breast phantoms were generated with a dedicated software application and for two of these, physical phantoms are also available and they are used for the side by side comparison. Planar projections in mammography and tomosynthesis were simulated under identical incident air kerma conditions. Tomosynthesis slices were reconstructed with an in-house developed reconstruction software. In addition to a visual comparison, parameters like fractal dimension, power law exponent β and second order statistics (skewness, kurtosis) of planar projections and tomosynthesis reconstructed images were compared. Visually, an excellent agreement between simulated and real planar and tomosynthesis images is observed. The comparison also shows an overall very good agreement between parameters evaluated from simulated and experimental images. The computational breast phantoms showed a close match with their physical versions. The detailed mathematical analysis of the images confirms the agreement between real and simulated 2D mammography and tomosynthesis images. The software phantom is ready for optimization purposes and extrapolation to other breast imaging techniques. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Kosmidis, Kosmas; Argyrakis, Panos; Macheras, Panos
2003-07-01
To verify the Higuchi law and study the drug release from cylindrical and spherical matrices by means of Monte Carlo computer simulation. A one-dimensional matrix, based on the theoretical assumptions of the derivation of the Higuchi law, was simulated and its time evolution was monitored. Cylindrical and spherical three-dimensional lattices were simulated with sites at the boundary of the lattice having been denoted as leak sites. Particles were allowed to move inside the lattice using the random walk model. Excluded volume interactions between the particles were assumed. We have monitored the system time evolution for different lattice sizes and different initial particle concentrations. The Higuchi law was verified using the Monte Carlo technique in a one-dimensional lattice. It was found that Fickian drug release from cylindrical matrices can be approximated nicely with the Weibull function. A simple linear relation between the Weibull function parameters and the specific surface of the system was found. Drug release from a matrix, as a result of a diffusion process assuming excluded volume interactions between the drug molecules, can be described using a Weibull function. This model, although approximate and semiempirical, has the benefit of providing a simple physical connection between the model parameters and the system geometry, which was something missing from other semiempirical models.
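A Monte Carlo sketch in the spirit of the paper's 1D verification: point particles random-walk on a lattice and leave through a leak boundary, so early-time release should follow the Higuchi sqrt(t) law. Unlike the paper's model, this toy version omits the excluded-volume interactions:

```python
import random

def simulate(L=100, n_particles=5000, n_steps=450, seed=3):
    """Fraction of particles released through the leak site after each step."""
    rng = random.Random(seed)
    particles = [rng.randrange(L) for _ in range(n_particles)]
    released, count = [], 0
    for _ in range(n_steps):
        survivors = []
        for pos in particles:
            pos += rng.choice((-1, 1))
            if pos < 0:
                count += 1                        # escaped via the leak site
            else:
                survivors.append(min(pos, L - 1)) # far wall reflects
        particles = survivors
        released.append(count / n_particles)
    return released

q = simulate()
print(q[100], q[400])   # released fractions; ratio should be near sqrt(4) = 2
```

Quadrupling the elapsed time roughly doubles the released fraction at early times, which is the sqrt(t) signature the Higuchi law predicts; fitting the full curve with a Weibull function, as the paper does, captures the later saturation as well.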
Computer Simulation and ESL Reading.
ERIC Educational Resources Information Center
Wu, Mary A.
It is noted that although two approaches to second language instruction--the communicative approach emphasizing genuine language use and computer assisted instruction--have come together in the form of some lower level reading instruction materials for English as a second language (ESL), advanced level ESL reading materials using computer…
Electrostatics of proteins in dielectric solvent continua. I. Newton's third law marries qE forces
NASA Astrophysics Data System (ADS)
Stork, Martina; Tavan, Paul
2007-04-01
The authors reformulate and revise an electrostatic theory treating proteins surrounded by dielectric solvent continua [B. Egwolf and P. Tavan, J. Chem. Phys. 118, 2039 (2003)] to make the resulting reaction field (RF) forces compatible with Newton's third law. Such a compatibility is required for their use in molecular dynamics (MD) simulations, in which the proteins are modeled by all-atom molecular mechanics force fields. According to the original theory the RF forces, which are due to the electric field generated by the solvent polarization and act on the partial charges of a protein, i.e., the so-called qE forces, can be quite accurately computed from Gaussian RF dipoles localized at the protein atoms. Using a slightly different approximation scheme, the RF energies of given protein configurations are also obtained. However, because the qE forces do not account for the dielectric boundary pressure exerted by the solvent continuum on the protein, they do not obey the principle that actio equals reactio as required by Newton's third law. Therefore, their use in MD simulations is severely hampered. An analysis of the original theory has now led the authors to a reformulation removing the main difficulties. By considering the RF energy, which represents the dominant electrostatic contribution to the free energy of solvation for a given protein configuration, they show that its negative configurational gradient yields mean RF forces obeying the reactio principle. Because the evaluation of these mean forces is computationally much more demanding than that of the qE forces, they suggest how the qE forces can be modified to obey Newton's third law. Various properties of the theory thus established, particularly issues of accuracy and computational efficiency, are discussed. A sample application to an MD simulation of a peptide in solution is described in the following paper [M. Stork and P. Tavan, J. Chem. Phys. 126, 165106 (2007)].
Locomotive crashworthiness research : volume 2 : design concept generation and evaluation
DOT National Transportation Integrated Search
1995-07-01
This is the second volume in a series of four that reports on a study in which computer models were developed and applied to evaluate whether various crashworthiness features, as defined in Public Law 102-365, can provide practical benefit to the occ...
Two inviscid computational simulations of separated flow about airfoils
NASA Technical Reports Server (NTRS)
Barnwell, R. W.
1976-01-01
Two inviscid computational simulations of separated flow about airfoils are described. The basic computational method is the line relaxation finite-difference method. Viscous separation is approximated with inviscid free-streamline separation. The point of separation is specified, and the pressure in the separation region is calculated. In the first simulation, the empiricism of constant pressure in the separation region is employed. This empiricism is easier to implement with the present method than with singularity methods. In the second simulation, acoustic theory is used to determine the pressure in the separation region. The results of both simulations are compared with experiment.
NASA Technical Reports Server (NTRS)
Boland, J. S., III
1975-01-01
A general simulation program (GSP) involving nonlinear state estimation for space-vehicle flight navigation systems is presented. A complete explanation of the iterative guidance mode guidance law, a derivation of the dynamics, the coordinate frames, and the state estimation routines are given to fully clarify the assumptions and approximations involved, so that simulation results can be placed in their proper perspective. A complete set of computer acronyms and their definitions, as well as explanations of the subroutines used in the GSP simulator, are included. To facilitate input/output, a complete set of compatible numbers, with units, is included to aid in data development. Format specifications, the meanings and purposes of output data phrases, and computer card data input are clearly spelled out. A large number of simulation and analytical studies were used to determine the validity of the simulator itself as well as of various data runs.
'The Monkey and the Hunter' and Other Projectile Motion Experiments with Logo.
ERIC Educational Resources Information Center
Kolodiy, George Oleh
1988-01-01
Presents the LOGO computer language as a source to experience and investigate scientific laws. Discusses aspects and uses of LOGO. Lists two LOGO programs, one to simulate a gravitational field and the other projectile motion. (MVL)
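The Logo listings themselves are not reproduced in the abstract. As an illustration of the same "monkey and hunter" experiment, here is a sketch in Python with invented geometry and launch speed: the dart is aimed straight at the monkey, which begins free fall at the instant of firing, and because both bodies accumulate identical gravitational drops the dart passes through the monkey's position.

```python
import math

def monkey_and_hunter(d=30.0, h=20.0, v0=40.0, g=9.8, dt=1e-4):
    """Return the minimum dart-monkey separation over the flight.

    The dart is fired from the origin, aimed at the monkey's initial
    position (d, h); the monkey drops at the instant of firing.
    """
    theta = math.atan2(h, d)                  # aim straight at the monkey
    vx, vy = v0 * math.cos(theta), v0 * math.sin(theta)
    x = y = 0.0                               # dart state
    my, vmy = h, 0.0                          # monkey state (free fall)
    min_sep = float("inf")
    for _ in range(int(1.2 * d / vx / dt)):
        vy -= g * dt                          # identical gravity for both
        vmy -= g * dt
        x += vx * dt
        y += vy * dt
        my += vmy * dt
        min_sep = min(min_sep, math.hypot(x - d, y - my))
    return min_sep
```

Since both bodies fall by the same amount each step, the miss distance is limited only by the time step; the "hit" occurs for any muzzle speed fast enough to reach the tree.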
The Use of Computer-Based Simulation to Aid Comprehension and Incidental Vocabulary Learning
ERIC Educational Resources Information Center
Mohsen, Mohammed Ali
2016-01-01
One of the main issues in language learning is to find ways to enable learners to interact with the language input in an involved task. Given that computer-based simulation allows learners to interact with visual modes, this article examines how the interaction of students with an online video simulation affects their second language video…
Flight test experience and controlled impact of a large, four-engine, remotely piloted airplane
NASA Technical Reports Server (NTRS)
Kempel, R. W.; Horton, T. W.
1985-01-01
A controlled impact demonstration (CID) program using a large, four engine, remotely piloted transport airplane was conducted. Closed loop primary flight control was performed from a ground based cockpit and digital computer in conjunction with an up/down telemetry link. Uplink commands were received aboard the airplane and transferred through uplink interface systems to a highly modified Bendix PB-20D autopilot. Both proportional and discrete commands were generated by the ground pilot. Prior to flight tests, extensive simulation was conducted during the development of ground based digital control laws. The control laws included primary control, secondary control, and racetrack and final approach guidance. Extensive ground checks were performed on all remotely piloted systems. However, manned flight tests were the primary method of verification and validation of control law concepts developed from simulation. The design, development, and flight testing of control laws and the systems required to accomplish the remotely piloted mission are discussed.
Closed-Loop HIRF Experiments Performed on a Fault Tolerant Flight Control Computer
NASA Technical Reports Server (NTRS)
Belcastro, Celeste M.
1997-01-01
Closed-loop HIRF experiments were performed on a fault tolerant flight control computer (FCC) at the NASA Langley Research Center. The FCC used in the experiments was a quad-redundant flight control computer executing B737 Autoland control laws. The FCC was placed in one of the mode-stirred reverberation chambers in the HIRF Laboratory and interfaced to a computer simulation of the B737 flight dynamics, engines, sensors, actuators, and atmosphere in the Closed-Loop Systems Laboratory. Disturbances to the aircraft associated with wind gusts and turbulence were simulated during tests. Electrical isolation between the FCC under test and the simulation computer was achieved via a fiber optic interface for the analog and discrete signals. Closed-loop operation of the FCC enabled flight dynamics and atmospheric disturbances affecting the aircraft to be represented during tests. Upset was induced in the FCC as a result of exposure to HIRF, and the effect of upset on the simulated flight of the aircraft was observed and recorded. This paper presents a description of these closed-loop HIRF experiments, upset data obtained from the FCC during these experiments, and closed-loop effects on the simulated flight of the aircraft.
Rise time of proton cut-off energy in 2D and 3D PIC simulations
NASA Astrophysics Data System (ADS)
Babaei, J.; Gizzi, L. A.; Londrillo, P.; Mirzanejad, S.; Rovelli, T.; Sinigardi, S.; Turchetti, G.
2017-04-01
The Target Normal Sheath Acceleration regime for proton acceleration by laser pulses is experimentally consolidated and fairly well understood. However, uncertainties remain in the analysis of particle-in-cell (PIC) simulation results. The energy spectrum is exponential with a cut-off, but the maximum energy depends on the simulation time, following different laws in two- and three-dimensional (2D, 3D) PIC simulations, so that the determination of an asymptotic value involves some arbitrariness. We propose two empirical laws for the rise time of the cut-off energy in 2D and 3D PIC simulations, suggested by a model in which the proton acceleration is due to a surface charge distribution on the target rear side. The kinetic energy of the protons that we obtain follows two distinct laws, which appear to be nicely satisfied by PIC simulations, for a model target given by a uniform foil plus a hydrogen-rich contaminant layer. The laws depend on two parameters: the scaling time, at which the energy starts to rise, and the asymptotic cut-off energy. The values of the cut-off energy obtained by fitting 2D and 3D simulations for the same target and laser pulse configuration are comparable. This suggests that parametric scans can be performed with 2D simulations, since 3D simulations are computationally very expensive, relegating them to the role of a correspondence check. In this paper, the simulations are carried out with the PIC code ALaDyn by changing the target thickness L and the incidence angle α, at fixed a0 = 3. A monotonic dependence, on L for normal incidence and on α for fixed L, is found, as in the experimental results for high-temporal-contrast pulses.
Analytical investigation of the dynamics of tethered constellations in Earth orbit, phase 2
NASA Technical Reports Server (NTRS)
Lorenzini, E.
1985-01-01
This Quarterly Report deals with the deployment maneuver of a single-axis, vertical constellation with three masses. A new, easy-to-handle computer code that simulates the two-dimensional dynamics of the constellation has been implemented. This computer code is used for designing control laws for the deployment maneuver that minimize the acceleration level of the low-g platform during the maneuver.
Scalable and fast heterogeneous molecular simulation with predictive parallelization schemes
NASA Astrophysics Data System (ADS)
Guzman, Horacio V.; Junghans, Christoph; Kremer, Kurt; Stuehn, Torsten
2017-11-01
Multiscale and inhomogeneous molecular systems are challenging topics in the field of molecular simulation. In particular, modeling biological systems in the context of multiscale simulations and exploring material properties are driving a permanent development of new simulation methods and optimization algorithms. In computational terms, those methods require parallelization schemes that make productive use of computational resources for each simulation from its genesis. Here, we introduce the heterogeneous domain decomposition approach, which combines a heterogeneity-sensitive spatial domain decomposition with an a priori rearrangement of subdomain walls. Within this approach, theoretical models and scaling laws for the force computation time are proposed and studied as functions of the number of particles and the spatial resolution ratio. We also demonstrate the capabilities of the new approach by comparing it to both static domain decomposition algorithms and dynamic load-balancing schemes. Specifically, two representative molecular systems have been simulated and compared to the heterogeneous domain decomposition proposed in this work: an adaptive resolution simulation of a biomolecule solvated in water and a phase-separated binary Lennard-Jones fluid.
Dynamic response of a collidant impacting a low pressure airbag
NASA Astrophysics Data System (ADS)
Dreher, Peter A.
There are many uses of low-pressure airbags, both military and commercial. Many of these applications have been hampered by inadequate and inaccurate modeling tools. This dissertation contains the derivation of a four-degree-of-freedom system of differential equations from the physical laws of mass and energy conservation, force equilibrium, and the ideal gas law. Kinematic equations were derived to model a cylindrical airbag as a single control volume impacted by a parallelepiped collidant. An efficient numerical procedure was devised to solve the simplified system of equations in a manner amenable to discovering design trends. The largest public airbag experiment, both in scale and scope, was designed and built to collect data on low-pressure airbag responses otherwise unavailable in the literature. The experimental results were compared to computational simulations to validate the simplified numerical model. Experimental response trends are presented that will aid airbag designers. Both feasibility objectives were met: using a low-pressure airbag (1) to accelerate a munition to a velocity of 15 feet per second from a bomb bay, and (2) to decelerate humans hitting trucks while keeping loads below the human tolerance level of 50 G's.
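The dissertation's four-degree-of-freedom model is not reproduced in the abstract. The one-degree-of-freedom sketch below, with entirely illustrative parameters, shows the kind of balance such a model builds on: a collidant compressing an airbag treated as an adiabatically compressed ideal-gas column.

```python
def airbag_impact(m=75.0, v0=10.0, p_atm=101325.0, area=1.0,
                  x0=0.5, gamma=1.4, g=9.81, dt=1e-5):
    """Peak deceleration (in g's) of a mass striking a gas column.

    The bag is a single control volume of depth x0, initially at
    ambient pressure; compression is adiabatic, p * V**gamma = const.
    All parameter values are illustrative, not from the dissertation.
    """
    x, v = x0, -v0        # x: remaining gas depth; v < 0 while compressing
    peak = 0.0
    while v < 0.0:
        p = p_atm * (x0 / x) ** gamma        # adiabatic compression law
        a = (p - p_atm) * area / m - g       # net upward acceleration
        peak = max(peak, a / g)
        v += a * dt
        x += v * dt
        if x <= 0.01 * x0:                   # bag has bottomed out
            break
    return peak
```

Sweeping the bag depth and contact area in such a sketch reproduces the qualitative design trend the dissertation targets: deeper, softer bags trade peak deceleration against stroke.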
Designing and Introducing Ethical Dilemmas into Computer-Based Business Simulations
ERIC Educational Resources Information Center
Schumann, Paul L.; Scott, Timothy W.; Anderson, Philip H.
2006-01-01
This article makes two contributions to the teaching of business ethics literature. First, it describes the steps involved in developing effective ethical dilemmas to incorporate into a computer-based business simulation. Second, it illustrates these steps by presenting two ethical dilemmas that an instructor can incorporate into any business…
Computational methods and software systems for dynamics and control of large space structures
NASA Technical Reports Server (NTRS)
Park, K. C.; Felippa, C. A.; Farhat, C.; Pramono, E.
1990-01-01
Two key areas of crucial importance to the computer-based simulation of large space structures are discussed. The first area involves multibody dynamics (MBD) of flexible space structures, with applications directed to deployment, construction, and maneuvering. The second area deals with advanced software systems, with emphasis on parallel processing. The latest research thrust in the second area involves massively parallel computers.
Modeling and controlling a robotic convoy using guidance laws strategies.
Belkhouche, Fethi; Belkhouche, Boumediene
2005-08-01
This paper deals with the problem of modeling and controlling a robotic convoy. Guidance laws techniques are used to provide a mathematical formulation of the problem. The guidance laws used for this purpose are the velocity pursuit, the deviated pursuit, and the proportional navigation. The velocity pursuit equations model the robot's path under various sensor-based control laws. A systematic study of the tracking problem based on this technique is undertaken. These guidance laws are applied to derive decentralized control laws for the angular and linear velocities. For the angular velocity, the control law is directly derived from the guidance laws after considering the relative kinematics equations between successive robots. The second control law maintains the distance between successive robots constant by controlling the linear velocity. This control law is derived by considering the kinematics equations between successive robots under the considered guidance law. Properties of the method are discussed and proven. Simulation results confirm the validity of our approach, as well as the validity of the properties of the method. Index Terms: guidance laws, relative kinematics equations, robotic convoy, tracking.
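As a rough illustration of the velocity-pursuit idea described above (the gains, geometry, and discretization here are invented for the sketch, not taken from the paper), a follower robot can steer toward the line of sight to its leader while a second control law regulates the spacing:

```python
import math

def simulate_convoy(T=30.0, dt=0.01, d_des=2.0, k_w=2.0, k_v=1.0, v_lead=1.0):
    """Leader on a straight path; one follower under velocity pursuit.

    The angular rate steers the follower toward the line of sight to
    the leader; the linear speed regulates the spacing toward d_des.
    Returns the final leader-follower distance.
    """
    lx, ly = 0.0, 0.0                     # leader position
    fx, fy, th = -5.0, 3.0, 0.0           # follower pose (x, y, heading)
    dist = math.hypot(lx - fx, ly - fy)
    for _ in range(int(T / dt)):
        lx += v_lead * dt                 # leader drives along +x
        dx, dy = lx - fx, ly - fy
        dist = math.hypot(dx, dy)
        los = math.atan2(dy, dx)          # line-of-sight angle
        err = math.atan2(math.sin(los - th), math.cos(los - th))
        w = k_w * err                     # pursuit steering law
        v = v_lead + k_v * (dist - d_des) # spacing regulation
        th += w * dt
        fx += v * math.cos(th) * dt
        fy += v * math.sin(th) * dt
    return dist
```

Starting from an offset pose, the follower settles into a tail chase at roughly the desired spacing, which is the convoy behavior the two decentralized control laws are meant to produce.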
NASA Astrophysics Data System (ADS)
Chang, S. L.; Lottes, S. A.; Berry, G. F.
Argonne National Laboratory is investigating the non-reacting jet-gas mixing patterns in a magnetohydrodynamics (MHD) second stage combustor by using a three-dimensional single-phase hydrodynamics computer program. The computer simulation is intended to enhance the understanding of flow and mixing patterns in the combustor, which in turn may improve downstream MHD channel performance. The code is used to examine the three-dimensional effects of the side walls and the distributed jet flows on the non-reacting jet-gas mixing patterns. The code solves the conservation equations of mass, momentum, and energy, and a transport equation of a turbulence parameter and allows permeable surfaces to be specified for any computational cell.
NUMERICAL ANALYSES FOR TREATING DIFFUSION IN SINGLE-, TWO-, AND THREE-PHASE BINARY ALLOY SYSTEMS
NASA Technical Reports Server (NTRS)
Tenney, D. R.
1994-01-01
This package consists of a series of three computer programs for treating one-dimensional transient diffusion problems in single and multiple phase binary alloy systems. An accurate understanding of the diffusion process is important in the development and production of binary alloys. Previous solutions of the diffusion equations were highly restricted in their scope and application. The finite-difference solutions developed for this package are applicable for planar, cylindrical, and spherical geometries with any diffusion-zone size and any continuous variation of the diffusion coefficient with concentration. Special techniques were included to account for differences in molar volumes, initiation and growth of an intermediate phase, disappearance of a phase, and the presence of an initial composition profile in the specimen. In each analysis, an effort was made to achieve good accuracy while minimizing computation time. The solutions to the diffusion equations for single-, two-, and three-phase binary alloy systems are numerically calculated by the three programs NAD1, NAD2, and NAD3. NAD1 treats the diffusion between pure metals which belong to a single-phase system. Diffusion in this system is described by a one-dimensional Fick's second law and will result in a continuous composition variation. For computational purposes, Fick's second law is expressed as an explicit second-order finite difference equation. Finite difference calculations are made by choosing the grid spacing small enough to give convergent solutions of acceptable accuracy. NAD2 treats diffusion between pure metals which form a two-phase system. Diffusion in the two-phase system is described by two partial differential equations (a Fick's second law for each phase) and an interface-flux-balance equation which describes the location of the interface. Actual interface motion is obtained by a mass conservation procedure.
To account for changes in the thicknesses of the two phases as diffusion progresses, a variable grid technique developed by Murray and Landis is employed. These equations are expressed in finite difference form and solved numerically. Program NAD3 treats diffusion between pure metals which form a two-phase system with an intermediate third phase. Diffusion in the three-phase system is described by three partial differential expressions of Fick's second law and two interface-flux-balance equations. As with the two-phase case, a variable-grid finite-difference scheme is used to numerically solve the diffusion equations. Computation time is minimized without sacrificing solution accuracy by treating the three-phase problem as a two-phase problem when the thickness of the intermediate phase is less than a preset value. Comparisons between these programs and other solutions have shown excellent agreement. The programs are written in FORTRAN IV for batch execution on the CDC 6600 with a central memory requirement of approximately 51K (octal) 60 bit words.
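The NAD1-style single-phase case reduces to an explicit finite-difference march of Fick's second law. The sketch below uses a constant diffusion coefficient and illustrative grid parameters (the actual programs handle concentration-dependent D and multiphase interfaces); the dimensionless ratio r = D·dt/dx² must stay at or below 0.5 for the explicit scheme to remain stable.

```python
def diffuse(n=101, t_steps=400, r=0.4):
    """Explicit FTCS march of Fick's second law, dc/dt = D * d2c/dx2.

    Diffusion couple: c = 1 on the left half, c = 0 on the right, with
    zero-flux ends. r = D*dt/dx**2 must not exceed 0.5 for stability.
    """
    c = [1.0 if i < n // 2 else 0.0 for i in range(n)]
    for _ in range(t_steps):
        new = c[:]
        for i in range(1, n - 1):
            new[i] = c[i] + r * (c[i + 1] - 2.0 * c[i] + c[i - 1])
        new[0], new[-1] = new[1], new[-2]   # zero-flux boundaries
        c = new
    return c

profile = diffuse()
```

Because r ≤ 0.5 makes every updated value a convex combination of its neighbors, the step-function initial profile relaxes into a smooth, monotone composition variation, exactly the continuous variation the abstract describes for the single-phase system.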
Algorithms for adaptive stochastic control for a class of linear systems
NASA Technical Reports Server (NTRS)
Toda, M.; Patel, R. V.
1977-01-01
Control of linear, discrete time, stochastic systems with unknown control gain parameters is discussed. Two suboptimal adaptive control schemes are derived: one is based on underestimating future control and the other is based on overestimating future control. Both schemes require little on-line computation and incorporate in their control laws some information on estimation errors. The performance of these laws is studied by Monte Carlo simulations on a computer. Two single input, third order systems are considered, one stable and the other unstable, and the performance of the two adaptive control schemes is compared with that of the scheme based on enforced certainty equivalence and the scheme where the control gain parameters are known.
Design Of Combined Stochastic Feedforward/Feedback Control
NASA Technical Reports Server (NTRS)
Halyo, Nesim
1989-01-01
The methodology accommodates a variety of control structures and design techniques. In this methodology for combined stochastic feedforward/feedback control, the main objectives of the feedforward and feedback control laws are seen clearly. The inclusion of error-integral feedback, dynamic compensation, a rate-command control structure, and the like is an integral element of the methodology. Another advantage of the methodology is the flexibility to develop a variety of techniques for the design of feedback control with arbitrary structures: these include stochastic output feedback, multiconfiguration control, decentralized control, and frequency-domain and classical control methods. The control modes of the system include capture and tracking of the localizer and glideslope, crab, decrab, and flare. Using the recommended incremental implementation, the control laws were simulated on a digital computer and connected with a nonlinear digital simulation of the aircraft and its systems.
NASA Technical Reports Server (NTRS)
Hanson, Curt; Schaefer, Jacob; Burken, John J.; Johnson, Marcus; Nguyen, Nhan
2011-01-01
National Aeronautics and Space Administration (NASA) researchers have conducted a series of flight experiments designed to study the effects of varying levels of adaptive controller complexity on the performance and handling qualities of an aircraft under various simulated failure or damage conditions. A baseline, nonlinear dynamic inversion controller was augmented with three variations of a model reference adaptive control design. The simplest design consisted of a single adaptive parameter in each of the pitch and roll axes computed using a basic gradient-based update law. A second design was built upon the first by increasing the complexity of the update law. The third and most complex design added an additional adaptive parameter to each axis. Flight tests were conducted using NASA's Full-scale Advanced Systems Testbed, a highly modified F-18 aircraft that contains a research flight control system capable of housing advanced flight controls experiments. Each controller was evaluated against a suite of simulated failures and damage, ranging from destabilization of the pitch and roll axes to significant coupling between the axes. Two pilots evaluated the three adaptive controllers as well as the non-adaptive baseline controller in a variety of dynamic maneuvers and precision flying tasks designed to uncover potential deficiencies in the handling qualities of the aircraft, and adverse interactions between the pilot and the adaptive controllers. The work was completed as part of the Integrated Resilient Aircraft Control Project under NASA's Aviation Safety Program.
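The flight control laws themselves are not given in the abstract. As a minimal illustration of the simplest ingredient described, a single adaptive parameter with a basic gradient (MIT-rule-style) update can be sketched on a scalar plant; the plant gain, adaptation rate, and reference signal below are invented for the sketch.

```python
def adapt_gain(kp=2.0, km=1.0, gamma=0.5, dt=0.01, T=40.0):
    """Scalar MRAC sketch: plant y = kp*u, model ym = km*r, u = theta*r.

    Gradient (MIT-rule-style) update: d(theta)/dt = -gamma * e * ym,
    with tracking error e = y - ym; theta should converge to km/kp.
    """
    theta, t = 0.0, 0.0
    while t < T:
        r = 1.0 if int(t) % 2 == 0 else -1.0   # square-wave reference
        u = theta * r                          # adaptive feedforward gain
        e = kp * u - km * r                    # tracking error y - ym
        theta -= gamma * e * (km * r) * dt     # gradient descent on e**2
        t += dt
    return theta
```

With kp = 2 and km = 1, the adapted gain settles at km/kp = 0.5, after which the plant output tracks the reference model; the flight designs add dynamics and further parameters on top of this basic mechanism.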
Comparison of Non-Parabolic Hydrodynamic Simulations for Semiconductor Devices
NASA Technical Reports Server (NTRS)
Smith, A. W.; Brennan, K. F.
1996-01-01
Parabolic drift-diffusion simulators are common engineering-level design tools for semiconductor devices. Hydrodynamic simulators based on the parabolic band approximation are becoming more prevalent as device dimensions shrink and energy transport effects begin to dominate device characteristics. However, band structure effects present in state-of-the-art devices necessitate relaxing the parabolic band approximation. This paper presents simulations of Si and GaAs ballistic diodes, a benchmark device, using two different non-parabolic hydrodynamic formulations. The first formulation uses the Kane dispersion relationship in the derivation of the conservation equations. The second model uses a power-law dispersion relation, (ħk)²/2m = xW^y. Current-voltage relations show that, for the ballistic diodes considered, the non-parabolic formulations predict less current than the parabolic case. Explanations are provided by examination of velocity and energy profiles. At low bias, the simulations based on the Kane formulation predict greater current flow than the power-law formulation. As the bias is increased this trend reverses and the power-law model predicts greater current than the Kane formulation. It is shown that the non-parabolicity and energy range of the hydrodynamic model based on the Kane dispersion relation are limited by the binomial approximation used in its derivation.
Lexical Frequency Profiles and Zipf's Law
ERIC Educational Resources Information Center
Edwards, Roderick; Collins, Laura
2011-01-01
Laufer and Nation (1995) proposed that the Lexical Frequency Profile (LFP) can estimate the size of a second-language writer's productive vocabulary. Meara (2005) questioned the sensitivity and the reliability of LFPs for estimating vocabulary sizes, based on the results obtained from probabilistic simulations of LFPs. However, the underlying…
Real Gas Computation Using an Energy Relaxation Method and High-Order WENO Schemes
NASA Technical Reports Server (NTRS)
Montarnal, Philippe; Shu, Chi-Wang
1998-01-01
In this paper, we use a recently developed energy relaxation theory by Coquel and Perthame and high-order weighted essentially non-oscillatory (WENO) schemes to simulate the Euler equations for a real gas. The main idea is a decomposition of the energy into two parts: one part is associated with a simpler pressure law, and the other part (the nonlinear deviation) is convected with the flow. A relaxation process is performed at each time step to ensure that the original pressure law is satisfied. The necessary characteristic decomposition for the high-order WENO schemes is performed on the characteristic fields based on the first part. The algorithm only calls the original pressure law once per grid point per time step, without the need to compute its derivatives or any Riemann solvers. One- and two-dimensional numerical examples are shown to illustrate the effectiveness of this approach.
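The relaxation step can be illustrated with a toy equation of state; here a stiffened gas stands in for the "real gas", and all numbers are invented. The energy is split so that a simpler γ₁-law applied to one part reproduces the true pressure, with the remainder convected as the nonlinear deviation (the theory requires γ₁ to exceed the effective γ of the true law).

```python
def true_pressure(rho, eps, gamma=2.2, p_inf=1.0e4):
    """Stand-in 'real gas' law: stiffened-gas equation of state."""
    return (gamma - 1.0) * rho * eps - gamma * p_inf

def relax(rho, eps, gamma1=3.0):
    """One relaxation step of the energy decomposition eps = eps1 + eps2.

    eps1 is chosen so that the simple gamma1-law pressure,
    p = (gamma1 - 1) * rho * eps1, matches the true pressure; the
    remainder eps2 is the nonlinear deviation convected with the flow.
    """
    p = true_pressure(rho, eps)
    eps1 = p / ((gamma1 - 1.0) * rho)
    return eps1, eps - eps1

rho, eps = 1.2, 2.5e5
e1, e2 = relax(rho, eps)
```

Note that the true pressure law is evaluated exactly once per state, and no derivative of it is needed, which is the computational appeal the abstract points out.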
Spacecraft flight control with the new phase space control law and optimal linear jet select
NASA Technical Reports Server (NTRS)
Bergmann, E. V.; Croopnick, S. R.; Turkovich, J. J.; Work, C. C.
1977-01-01
An autopilot designed for rotation and translation control of a rigid spacecraft is described. The autopilot uses reaction control jets as control effectors and incorporates a six-dimensional phase space control law as well as a linear programming algorithm for jet selection. The interaction of the control law and jet selection was investigated and a recommended configuration proposed. By means of a simulation procedure the new autopilot was compared with an existing system and was found to be superior in terms of core memory, central-processing-unit time, firings, and propellant consumption. However, the cycle time required to perform the jet-selection computations might render the new autopilot unsuitable for existing flight computer applications without modifications. The new autopilot is capable of maintaining attitude control in the presence of a large number of jet failures.
Effect of Ionic Diffusion on Extracellular Potentials in Neural Tissue
Halnes, Geir; Mäki-Marttunen, Tuomo; Keller, Daniel; Pettersen, Klas H.; Andreassen, Ole A.
2016-01-01
Recorded potentials in the extracellular space (ECS) of the brain are a standard measure of population activity in neural tissue. Computational models that simulate the relationship between the ECS potential and its underlying neurophysiological processes are commonly used in the interpretation of such measurements. Standard methods, such as volume-conductor theory and current-source density theory, assume that diffusion has a negligible effect on the ECS potential, at least in the range of frequencies picked up by most recording systems. This assumption remains to be verified. We here present a hybrid simulation framework that accounts for diffusive effects on the ECS potential. The framework uses (1) the NEURON simulator to compute the activity and ionic output currents from multicompartmental neuron models, and (2) the electrodiffusive Kirchhoff-Nernst-Planck framework to simulate the resulting dynamics of the potential and ion concentrations in the ECS, accounting for the effect of electrical migration as well as diffusion. Using this framework, we explore the effect that ECS diffusion has on the electrical potential surrounding a small population of 10 pyramidal neurons. The neural model was tuned so that simulations over ∼100 seconds of biological time led to shifts in ECS concentrations by a few millimolars, similar to what has been seen in experiments. By comparing simulations where ECS diffusion was absent with simulations where ECS diffusion was included, we made the following key findings: (i) ECS diffusion shifted the local potential by up to ∼0.2 mV. (ii) The power spectral density (PSD) of the diffusion-evoked potential shifts followed a 1/f² power law. (iii) Diffusion effects dominated the PSD of the ECS potential for frequencies up to several hertz.
In scenarios with large but physiologically realistic ECS concentration gradients, diffusion was thus found to affect the ECS potential well within the frequency range picked up in experimental recordings. PMID:27820827
Effect of Ionic Diffusion on Extracellular Potentials in Neural Tissue.
Halnes, Geir; Mäki-Marttunen, Tuomo; Keller, Daniel; Pettersen, Klas H; Andreassen, Ole A; Einevoll, Gaute T
2016-11-01
Recorded potentials in the extracellular space (ECS) of the brain is a standard measure of population activity in neural tissue. Computational models that simulate the relationship between the ECS potential and its underlying neurophysiological processes are commonly used in the interpretation of such measurements. Standard methods, such as volume-conductor theory and current-source density theory, assume that diffusion has a negligible effect on the ECS potential, at least in the range of frequencies picked up by most recording systems. This assumption remains to be verified. We here present a hybrid simulation framework that accounts for diffusive effects on the ECS potential. The framework uses (1) the NEURON simulator to compute the activity and ionic output currents from multicompartmental neuron models, and (2) the electrodiffusive Kirchhoff-Nernst-Planck framework to simulate the resulting dynamics of the potential and ion concentrations in the ECS, accounting for the effect of electrical migration as well as diffusion. Using this framework, we explore the effect that ECS diffusion has on the electrical potential surrounding a small population of 10 pyramidal neurons. The neural model was tuned so that simulations over ∼100 seconds of biological time led to shifts in ECS concentrations by a few millimolars, similar to what has been seen in experiments. By comparing simulations where ECS diffusion was absent with simulations where ECS diffusion was included, we made the following key findings: (i) ECS diffusion shifted the local potential by up to ∼0.2 mV. (ii) The power spectral density (PSD) of the diffusion-evoked potential shifts followed a 1/f2 power law. (iii) Diffusion effects dominated the PSD of the ECS potential for frequencies up to several hertz. 
In scenarios with large, but physiologically realistic ECS concentration gradients, diffusion was thus found to affect the ECS potential well within the frequency range picked up in experimental recordings. PMID:27820827
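As a minimal sketch of the electrodiffusive picture described above (our own illustration with a simple 1D finite-difference discretization and made-up parameter values; this is not the authors' Kirchhoff-Nernst-Planck implementation), the Nernst-Planck flux for one ion species combines a diffusive term and an electric-migration term:

```python
import numpy as np

# Nernst-Planck flux for one ion species:
#   j = -D * (dc/dx + z * c * (F / (R * T)) * dphi/dx)
F = 96485.33   # Faraday constant, C/mol
R = 8.314      # gas constant, J/(mol K)
T = 310.0      # body temperature, K

def nernst_planck_flux(c, phi, D, z, dx):
    """Flux at cell interfaces, given concentration c and potential phi (1D arrays)."""
    dcdx = np.diff(c) / dx
    dphidx = np.diff(phi) / dx
    c_mid = 0.5 * (c[:-1] + c[1:])   # concentration at interfaces
    return -D * (dcdx + z * c_mid * (F / (R * T)) * dphidx)

# A millimolar-scale gradient with zero electric field gives a purely diffusive flux
# (illustrative numbers: a small extracellular Na+ concentration shift).
c = np.linspace(150.0, 147.0, 11)   # mol/m^3 (= mM)
phi = np.zeros(11)                  # flat potential
j = nernst_planck_flux(c, phi, D=1.33e-9, z=1, dx=1e-4)
```

The hybrid framework in the abstract evaluates both terms self-consistently; here the migration term is simply switched off by the flat potential.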
An Entropy-Based Approach to Nonlinear Stability
NASA Technical Reports Server (NTRS)
Merriam, Marshal L.
1989-01-01
Many numerical methods used in computational fluid dynamics (CFD) incorporate an artificial dissipation term to suppress spurious oscillations and control nonlinear instabilities. The same effect can be accomplished by using upwind techniques, sometimes augmented with limiters to form Total Variation Diminishing (TVD) schemes. An analysis based on numerical satisfaction of the second law of thermodynamics allows many such methods to be compared and improved upon. A nonlinear stability proof is given for discrete scalar equations arising from a conservation law. Solutions to such equations are bounded in the L₂ norm if the second law of thermodynamics is satisfied in a global sense over a periodic domain. It is conjectured that an analogous statement is true for discrete equations arising from systems of conservation laws. Analysis and numerical experiments suggest that a more restrictive condition, a positive entropy production rate in each cell, is sufficient to exclude unphysical phenomena such as oscillations and expansion shocks. Construction of schemes which satisfy this condition is demonstrated for linear and nonlinear wave equations and for the one-dimensional Euler equations.
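The global L₂ bound can be illustrated with a toy computation (our own sketch, not Merriam's scheme): the Godunov upwind flux for Burgers' equation dissipates the entropy u²/2, so the discrete L₂ norm of a periodic solution never grows.

```python
import numpy as np

def godunov_flux(ul, ur):
    # Exact Riemann flux for the convex flux f(u) = u**2 / 2.
    if ul <= ur:
        # rarefaction side: minimum of f over [ul, ur]; zero if the interval straddles 0
        return min(ul**2, ur**2) / 2 if ul * ur > 0 else 0.0
    return max(ul**2, ur**2) / 2   # shock side: maximum of f(ul), f(ur)

def step(u, dt, dx):
    up = np.roll(u, -1)            # periodic right neighbour
    fr = np.array([godunov_flux(a, b) for a, b in zip(u, up)])
    fl = np.roll(fr, 1)
    return u - dt / dx * (fr - fl)

n = 200
dx = 1.0 / n
x = np.arange(n) * dx
u = np.sin(2 * np.pi * x)
norms = [np.sqrt(dx * np.sum(u**2))]
for _ in range(200):               # CFL = 0.5 since max|u| <= 1
    u = step(u, dt=0.5 * dx, dx=dx)
    norms.append(np.sqrt(dx * np.sum(u**2)))
```

Even after the sine wave steepens into a shock, the recorded L₂ norms form a non-increasing sequence, which is the discrete analogue of the global entropy statement above.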
Hsieh, Chih-Chen; Jain, Semant; Larson, Ronald G
2006-01-28
A very stiff finitely extensible nonlinear elastic (FENE)-Fraenkel spring is proposed to replace the rigid rod in the bead-rod model. This allows the adoption of a fast predictor-corrector method so that large time steps can be taken in Brownian dynamics (BD) simulations without over- or understretching the stiff springs. In contrast to the simple bead-rod model, BD simulations with beads and FENE-Fraenkel (FF) springs yield a random-walk configuration at equilibrium. We compare the simulation results of the free-draining bead-FF-spring model with those for the bead-rod model in relaxation, start-up of uniaxial extensional, and simple shear flows, and find that both methods generate nearly identical results. The computational cost per time step for a free-draining BD simulation with the proposed bead-FF-spring model is about twice as high as the traditional bead-rod model with the midpoint algorithm of Liu [J. Chem. Phys. 90, 5826 (1989)]. Nevertheless, computations with the bead-FF-spring model are as efficient as those with the bead-rod model in extensional flow because the former allows larger time steps. Moreover, the Brownian contribution to the stress for the bead-FF-spring model is isotropic and therefore simplifies the calculation of the polymer stresses. In addition, hydrodynamic interaction can more easily be incorporated into the bead-FF-spring model than into the bead-rod model since the metric force arising from the non-Cartesian coordinates used in bead-rod simulations is absent from bead-spring simulations. Finally, with our newly developed bead-FF-spring model, existing computer codes for the bead-spring models can trivially be converted to ones for effective bead-rod simulations merely by replacing the usual FENE or Cohen spring law with a FENE-Fraenkel law, and this convertibility provides a very convenient way to perform multiscale BD simulations.
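The two spring laws compared in the abstract can be sketched as follows (the functional forms are the standard FENE and FENE-Fraenkel expressions; the parameter values are our own illustrative choices, not the paper's). A FENE spring diverges as the length Q approaches its maximum extension, whereas a FENE-Fraenkel spring diverges as Q leaves a narrow window of width δQ around the natural length Q0, which is what lets it mimic a nearly rigid rod:

```python
def fene_force(Q, H, Qmax):
    # FENE: finitely extensible around zero rest length.
    return H * Q / (1.0 - (Q / Qmax) ** 2)

def fene_fraenkel_force(Q, H, Q0, dQ):
    # FENE-Fraenkel: finitely extensible around the natural rod length Q0.
    return H * (Q - Q0) / (1.0 - ((Q - Q0) / dQ) ** 2)

# Near the natural length the FF spring responds almost linearly (Fraenkel-like)...
f_small = fene_fraenkel_force(1.001, H=1.0, Q0=1.0, dQ=0.01)
# ...and stiffens sharply as |Q - Q0| approaches the extensibility window dQ.
f_large = fene_fraenkel_force(1.009, H=1.0, Q0=1.0, dQ=0.01)
```

The abstract's point about convertibility is visible here: a bead-spring code calls one scalar force law per connector, so swapping `fene_force` for `fene_fraenkel_force` turns it into an effective bead-rod simulation.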
Tsushima, Yoko; Manabe, Syukuro
2013-05-07
In the climate system, two types of radiative feedback are in operation. The feedback of the first kind involves the radiative damping of the vertically uniform temperature perturbation of the troposphere and Earth's surface that approximately follows the Stefan-Boltzmann law of blackbody radiation. The second kind involves the change in the vertical lapse rate of temperature, water vapor, and clouds in the troposphere and albedo of the Earth's surface. Using satellite observations of the annual variation of the outgoing flux of longwave radiation and that of reflected solar radiation at the top of the atmosphere, this study estimates the so-called "gain factor," which characterizes the strength of radiative feedback of the second kind that operates on the annually varying, global-scale perturbation of temperature at the Earth's surface. The gain factor is computed not only for all sky but also for clear sky. The gain factor of so-called "cloud radiative forcing" is then computed as the difference between the two. The gain factors thus obtained are compared with those obtained from 35 models that were used for the fourth and fifth Intergovernmental Panel on Climate Change assessment. Here, we show that the gain factors obtained from satellite observations of cloud radiative forcing are effective for identifying systematic biases of the feedback processes that control the sensitivity of simulated climate, providing useful information for validating and improving a climate model.
2006 - 2016: Ten Years Of Tsunami In French Polynesia
NASA Astrophysics Data System (ADS)
Reymond, D.; Jamelot, A.; Hyvernaud, O.
2016-12-01
Located in the South central Pacific and despite its far-field situation, French Polynesia is strongly affected by tsunamis generated along the major subduction zones around the Pacific. At the time of writing, 10 tsunamis have been generated in the Pacific Ocean since 2006; all of these events, recorded in French Polynesia, produced different levels of warning, ranging from a simple seismic warning with an information bulletin up to an effective tsunami warning with evacuation of the coastal zone. These tsunamigenic events represent an invaluable opportunity to evolve and test the tsunami warning system developed in French Polynesia: during the last ten years, the warning rules have evolved from a simple magnitude criterion to the computation of the main seismic source parameters (location, slowness determinant (Newman & Okal, 1998) and focal geometry) using two independent methods: the first uses an inversion of W-phases (Kanamori & Rivera, 2012) and the second performs an inversion of long-period surface waves (Clément & Reymond, 2014). The source parameters thus estimated allow the expected distributions of tsunami heights to be computed in near real time (with the help of a supercomputer and parallelized numerical simulation codes). Furthermore, two kinds of numerical modeling are used: the first, very rapid (about 5 minutes of computation time), is based on Green's law (Jamelot & Reymond, 2015), while a more detailed and precise one uses classical numerical simulations through nested grids (about 45 minutes of computation time). Consequently, the criteria for tsunami warning are presently based on the expected tsunami heights in the different archipelagos and islands of French Polynesia. This major evolution allows different levels of warning to be issued for the different archipelagos, working in tandem with the Civil Defense.
We present a comparison of the historically observed tsunami heights (instrumental records, including deep-ocean measurements provided by DART buoys, and measured tsunami run-ups) with the computed ones. In addition, the sites known for their amplification and resonance effects are well reproduced by the numerical simulations.
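The rapid forecast mentioned above rests on Green's law for shallow-water wave shoaling. As a sketch (the textbook relation, with illustrative numbers rather than Polynesian bathymetry): conserving the energy flux of a shallow-water wave gives an amplitude scaling with the fourth root of the depth ratio.

```python
def greens_law(h_offshore, depth_offshore, depth_coastal):
    # Green's law: h2 = h1 * (d1 / d2) ** (1/4)
    return h_offshore * (depth_offshore / depth_coastal) ** 0.25

# A 0.2 m deep-ocean signal (e.g. a DART reading at 4000 m depth) amplifies
# by roughly (4000/25)**0.25 ≈ 3.6x when carried to 25 m coastal depth.
h_coast = greens_law(0.2, 4000.0, 25.0)
```

Because this is a closed-form scaling rather than a wave-propagation solve, it is cheap enough for the ~5-minute warning product, with the nested-grid simulations providing the slower, more precise estimate.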
SPAMCART: a code for smoothed particle Monte Carlo radiative transfer
NASA Astrophysics Data System (ADS)
Lomax, O.; Whitworth, A. P.
2016-10-01
We present a code for generating synthetic spectral energy distributions and intensity maps from smoothed particle hydrodynamics simulation snapshots. The code is based on the Lucy Monte Carlo radiative transfer method, i.e. it follows discrete luminosity packets as they propagate through a density field, and then uses their trajectories to compute the radiative equilibrium temperature of the ambient dust. The sources can be extended and/or embedded, and discrete and/or diffuse. The density is not mapped on to a grid, and therefore the calculation is performed at exactly the same resolution as the hydrodynamics. We present two example calculations using this method. First, we demonstrate that the code strictly adheres to Kirchhoff's law of radiation. Secondly, we present synthetic intensity maps and spectra of an embedded protostellar multiple system. The algorithm uses data structures that are already constructed for other purposes in modern particle codes. It is therefore relatively simple to implement.
Pendulums, Pedagogy, and Matter: Lessons from the Editing of Newton's Principia
NASA Astrophysics Data System (ADS)
Biener, Zvi; Smeenk, Chris
Teaching Newtonian physics involves the replacement of students' ideas about physical situations with precise concepts appropriate for mathematical applications. This paper focuses on the concepts of 'matter' and 'mass'. We suggest that students, like some pre-Newtonian scientists we examine, use these terms in a way that conflicts with their Newtonian meaning. Specifically, 'matter' and 'mass' indicate to them the sorts of things that are tangible, bulky, and take up space. In Newtonian mechanics, however, the terms are defined by Newton's Second Law: 'mass' is simply a measure of the acceleration generated by an impressed force. We examine the relationship between these conceptions as it was discussed by Newton and his editor, Roger Cotes, when analyzing a series of pendulum experiments. We suggest that these experiments, as well as more sophisticated computer simulations, can be used in the classroom to sufficiently differentiate the colloquial and precise meanings of these terms.
ERIC Educational Resources Information Center
Risley, John S.
1983-01-01
Reviews "Laws of Motion" computer program produced by Educational Materials and Equipment Company. The program (language unknown), for Apple II/II+, is a simulation of an inclined plane, free fall, and Atwood machine in Newtonian/Aristotelian worlds. Suggests use as supplement to discussion of motion by teacher who fully understands the…
NASA Astrophysics Data System (ADS)
Sato, Aki-Hiro
2004-04-01
Autoregressive conditional duration (ACD) processes, which have the potential to be applied to power law distributions of complex systems found in natural science, life science, and social science, are analyzed both numerically and theoretically. An ACD(1) process exhibits the singular second order moment, which suggests that its probability density function (PDF) has a power law tail. It is verified that the PDF of the ACD(1) has a power law tail with an arbitrary exponent depending on a model parameter. On the basis of theory of the random multiplicative process a relation between the model parameter and the power law exponent is theoretically derived. It is confirmed that the relation is valid from numerical simulations. An application of the ACD(1) to intervals between two successive transactions in a foreign currency market is shown.
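The ACD(1) process analysed above can be sketched in a few lines (one common parameterization with unit-mean exponential innovations; the parameter values are our own, not Sato's): durations are x_t = ψ_t·ε_t with conditional mean ψ_t = ω + α·x_{t-1}.

```python
import numpy as np

def simulate_acd1(omega, alpha, n, seed=0):
    """Simulate n durations from an ACD(1) process with i.i.d. Exp(1) innovations."""
    rng = np.random.default_rng(seed)
    eps = rng.exponential(1.0, size=n)
    x = np.empty(n)
    psi = omega / (1.0 - alpha)        # start at the unconditional mean
    for t in range(n):
        x[t] = psi * eps[t]            # observed duration
        psi = omega + alpha * x[t]     # conditional mean for the next duration
    return x

x = simulate_acd1(omega=0.5, alpha=0.5, n=100_000)
# Unconditional mean is omega / (1 - alpha) = 1 for these parameters; larger
# alpha fattens the tail of the stationary distribution, consistent with the
# power-law behaviour analysed in the paper.
```

The multiplicative update ψ → ω + αψε is exactly the random multiplicative process the paper invokes to relate α to the power-law tail exponent.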
NASA Technical Reports Server (NTRS)
Rising, J. J.; Kairys, A. A.; Maass, C. A.; Siegart, C. D.; Rakness, W. L.; Mijares, R. D.; King, R. W.; Peterson, R. S.; Hurley, S. R.; Wickson, D.
1982-01-01
A limited authority pitch active control system (PACS) was developed for a wide body jet transport (L-1011) with a flying horizontal stabilizer. Two dual channel digital computers and the associated software provide command signals to a dual channel series servo which controls the stabilizer power actuators. Input sensor signals to the computer are pitch rate, column-trim position, and dynamic pressure. Control laws are given for the PACS and the system architecture is defined. The piloted flight simulation and vehicle system simulation tests performed to verify control laws and system operation prior to installation on the aircraft are discussed. Modifications to the basic aircraft are described. Flying qualities of the aircraft with the PACS on and off were evaluated. Handling qualities for cruise and high speed flight conditions with the c.g. at 39% mac (+1% stability margin) and PACS operating were judged to be as good as the handling qualities with the c.g. at 25% (+15% stability margin) and PACS off.
Extrapolating Single Organic Ion Solvation Thermochemistry from Simulated Water Nanodroplets.
Coles, Jonathan P; Houriez, Céline; Meot-Ner Mautner, Michael; Masella, Michel
2016-09-08
We compute the ion/water interaction energies of methylated ammonium cations and alkylated carboxylate anions solvated in large nanodroplets of 10 000 water molecules using 10 ns molecular dynamics simulations and an all-atom polarizable force-field approach. Together with our earlier results concerning the solvation of these organic ions in nanodroplets whose molecular sizes range from 50 to 1000, these new data allow us to discuss the reliability of extrapolating absolute single-ion bulk solvation energies from small ion/water droplets using common power-law functions of cluster size. We show that reliable estimates of these energies can be extrapolated from a small data set comprising the results of three droplets whose sizes are between 100 and 1000 using a basic power-law function of droplet size. This agrees with an earlier conclusion drawn from a model built within the mean spherical framework and paves the road toward a theoretical protocol to systematically compute the solvation energies of complex organic ions.
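The cluster-size extrapolation described above can be sketched as follows (synthetic data with made-up energy values, not the paper's simulation results): if droplet solvation energies follow a basic power law E(n) = E_bulk + a·n^(-1/3), then a linear fit in x = n^(-1/3) recovers the bulk limit as the intercept.

```python
import numpy as np

def extrapolate_bulk(sizes, energies):
    """Fit E(n) = E_bulk + a * n**(-1/3); return the E_bulk intercept."""
    x = np.asarray(sizes, dtype=float) ** (-1.0 / 3.0)
    slope, intercept = np.polyfit(x, np.asarray(energies, dtype=float), 1)
    return intercept

# Three droplet sizes in the range the authors found sufficient (100-1000),
# with energies generated from an assumed bulk value of -75 kcal/mol:
sizes = [100, 300, 1000]
E_bulk_true, a = -75.0, 40.0
energies = [E_bulk_true + a * n ** (-1.0 / 3.0) for n in sizes]
E_bulk_est = extrapolate_bulk(sizes, energies)
```

On data that obey the power law exactly, the three-droplet fit recovers the bulk value; the paper's contribution is showing that real nanodroplet simulations are close enough to this form for the extrapolation to be reliable.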
NASA Astrophysics Data System (ADS)
Cassiani, G.; dalla, E.; Brovelli, A.; Pitea, D.; Binley, A. M.
2003-04-01
The development of reliable constitutive laws to translate geophysical properties into hydrological ones is the fundamental step for successful applications of hydrogeophysical techniques. Many such laws have been proposed and applied, particularly with regard to two types of relationships: (a) between moisture content and dielectric properties, and (b) between electrical resistivity, rock structure and water saturation. The classical Archie's law belongs to this latter category. Archie's relationship has been widely used, starting from borehole log applications, to translate geoelectrical measurements into estimates of saturation. However, in spite of its popularity, it remains an empirical relationship, the parameters of which must be calibrated case by case, e.g. on laboratory data. Pore-scale models have recently been recognized and used as powerful tools to investigate the constitutive relations of multiphase soils from a pore-scale point of view, because they bridge the microscopic and macroscopic scales. In this project, we develop and validate a three-dimensional pore-scale method to compute electrical properties of unsaturated and saturated porous media. First we simulate a random packing of spheres [1] that obeys the grain-size distribution and porosity of an experimental porous medium system; then we simulate primary drainage with a morphological approach [2]; finally, for each state of saturation during the drainage process, we solve the electrical conduction equation within the grain structure with a new numerical model and compute the apparent electrical resistivity of the porous medium. We apply the new method to a semi-consolidated Permo-Triassic Sandstone from the UK (Sherwood Sandstone) for which both pressure-saturation (Van Genuchten) and Archie's law parameters have been measured on laboratory samples. A comparison between simulated and measured relationships has been performed.
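For reference, the empirical relationship the pore-scale model is compared against has the textbook form ρ = ρ_w·φ^(-m)·S_w^(-n); the cementation and saturation exponents below are common illustrative values, not the calibrated Sherwood Sandstone parameters.

```python
def archie_resistivity(rho_w, phi, Sw, m=2.0, n=2.0):
    """Archie's law: bulk resistivity from pore-water resistivity rho_w,
    porosity phi, water saturation Sw, and empirical exponents m, n."""
    return rho_w * phi ** (-m) * Sw ** (-n)

# With m = n = 2, a 25%-porosity rock saturated with 20 ohm-m brine:
r_sat = archie_resistivity(rho_w=20.0, phi=0.25, Sw=1.0)    # fully saturated
r_half = archie_resistivity(rho_w=20.0, phi=0.25, Sw=0.5)   # drained to Sw = 0.5
# Halving the saturation quadruples the resistivity for n = 2 - the kind of
# saturation-resistivity curve the simulated drainage sequence reproduces.
```

The project's pore-scale drainage simulations effectively generate this curve numerically, so the calibrated exponents can be checked rather than assumed.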
Sub-discretized surface model with application to contact mechanics in multi-body simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, S; Williams, J
2008-02-28
The mechanics of contact between rough and imperfectly spherical adhesive powder grains are often complicated by a variety of factors, including several which vary over sub-grain length scales. These include several traction factors that vary spatially over the surface of the individual grains, including high energy electron and acceptor sites (electrostatic), hydrophobic and hydrophilic sites (electrostatic and capillary), surface energy (general adhesion), geometry (van der Waals and mechanical), and elasto-plastic deformation (mechanical). For mechanical deformation and reaction, coupled motions, such as twisting with bending and sliding, as well as surface roughness add an asymmetry to the contact force which invalidates assumptions for popular models of contact, such as the Hertzian and its derivatives, for the non-adhesive case, and the JKR and DMT models for adhesive contacts. Though several contact laws have been offered to ameliorate these drawbacks, they are often constrained to particular loading paths (most often normal loading) and are relatively complicated for computational implementation. This paper offers a simple and general computational method for augmenting contact law predictions in multi-body simulations through characterization of the contact surfaces using a hierarchically-defined surface sub-discretization. For the case of adhesive contact between powder grains in low stress regimes, this technique can allow a variety of existing contact laws to be resolved across scales, allowing for moments and torques about the contact area as well as normal and tangential tractions to be resolved. This is especially useful for multi-body simulation applications where the modeler desires statistical distributions and calibration for parameters in contact laws commonly used for resolving near-surface contact mechanics. The approach is verified against analytical results for the case of rough, elastic spheres.
CloudMC: a cloud computing application for Monte Carlo simulation.
Miras, H; Jiménez, R; Miras, C; Gomà, C
2013-04-21
This work presents CloudMC, a cloud computing application, developed in Windows Azure®, the platform of the Microsoft® cloud, for the parallelization of Monte Carlo simulations in a dynamic virtual cluster. CloudMC is a web application designed to be independent of the Monte Carlo code on which the simulations are based: the simulations just need to be of the form input files → executable → output files. To study the performance of CloudMC in Windows Azure®, Monte Carlo simulations with penelope were performed on different instance (virtual machine) sizes and for different numbers of instances. The instance size was found to have no effect on the simulation runtime. It was also found that the decrease in time with the number of instances followed Amdahl's law, with a slight deviation due to the increase in the fraction of non-parallelizable time with increasing number of instances. A simulation that would have required 30 h of CPU on a single instance was completed in 48.6 min when executed on 64 instances in parallel (speedup of 37 ×). Furthermore, the use of cloud computing for parallel computing offers some advantages over conventional clusters: high accessibility, scalability and pay per usage. Therefore, it is strongly believed that cloud computing will play an important role in making Monte Carlo dose calculation a reality in future clinical practice.
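The reported numbers can be checked against Amdahl's law directly (our own arithmetic on the figures quoted in the abstract, not a calculation from the paper): a speedup S on N instances implies a parallelizable fraction p solving S = 1/((1 − p) + p/N).

```python
def amdahl_speedup(p, n):
    # Amdahl's law: speedup on n workers for parallel fraction p.
    return 1.0 / ((1.0 - p) + p / n)

def parallel_fraction(speedup, n):
    # Invert Amdahl's law for p.
    return (1.0 - 1.0 / speedup) / (1.0 - 1.0 / n)

serial_minutes = 30 * 60        # 30 h of CPU on a single instance
parallel_minutes = 48.6         # wall time on 64 instances
speedup = serial_minutes / parallel_minutes   # ~37x, matching the abstract
p = parallel_fraction(speedup, 64)            # implied parallel fraction, ~0.99
```

The implied non-parallelizable fraction of roughly 1% is consistent with the abstract's remark that this fraction grows slightly with the number of instances, bending the scaling away from ideal.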
Simulations of relativistic quantum plasmas using real-time lattice scalar QED
NASA Astrophysics Data System (ADS)
Shi, Yuan; Xiao, Jianyuan; Qin, Hong; Fisch, Nathaniel J.
2018-05-01
Real-time lattice quantum electrodynamics (QED) provides a unique tool for simulating plasmas in the strong-field regime, where collective plasma scales are not well separated from relativistic-quantum scales. As a toy model, we study scalar QED, which describes self-consistent interactions between charged bosons and electromagnetic fields. To solve this model on a computer, we first discretize the scalar-QED action on a lattice, in a way that respects geometric structures of exterior calculus and U(1)-gauge symmetry. The lattice scalar QED can then be solved, in the classical-statistics regime, by advancing an ensemble of statistically equivalent initial conditions in time, using classical field equations obtained by extremizing the discrete action. To demonstrate the capability of our numerical scheme, we apply it to two example problems. The first example is the propagation of linear waves, where we recover analytic wave dispersion relations using numerical spectra. The second example is an intense laser interacting with a one-dimensional plasma slab, where we demonstrate natural transition from wakefield acceleration to pair production when the wave amplitude exceeds the Schwinger threshold. Our real-time lattice scheme is fully explicit and respects local conservation laws, making it reliable for long-time dynamics. The algorithm is readily parallelized using domain decomposition, and the ensemble may be computed using quantum parallelism in the future.
Simulation and Experimentation in an Astronomy Laboratory, Part II
NASA Astrophysics Data System (ADS)
Maloney, F. P.; Maurone, P. A.; Hones, M.
1995-12-01
The availability of low-cost, high-performance computing hardware and software has transformed the manner by which astronomical concepts can be re-discovered and explored in a laboratory that accompanies an astronomy course for non-scientist students. We report on a strategy for allowing each student to understand fundamental scientific principles by interactively confronting astronomical and physical phenomena, through direct observation and by computer simulation. Direct observation of physical phenomena, such as Hooke's Law, begins by using a computer and hardware interface as a data-collection and presentation tool. In this way, the student is encouraged to explore the physical conditions of the experiment and re-discover the fundamentals involved. The hardware frees the student from the tedium of manual data collection and presentation, and permits experimental design which utilizes data that would otherwise be too fleeting, too imprecise, or too voluminous. Computer simulation of astronomical phenomena allows the student to travel in time and space, freed from the vagaries of weather, to re-discover such phenomena as the daily and yearly cycles, the reason for the seasons, the saros, and Kepler's Laws. By integrating the knowledge gained by experimentation and simulation, the student can understand both the scientific concepts and the methods by which they are discovered and explored. Further, students are encouraged to place these discoveries in an historical context, by discovering, for example, the night sky as seen by the survivors of the sinking Titanic, or Halley's comet as depicted on the Bayeux tapestry. We report on the continuing development of these laboratory experiments. Further details and the text for the experiments are available at the following site: http://astro4.ast.vill.edu/ This work is supported by a grant from The Pew Charitable Trusts.
NASA Astrophysics Data System (ADS)
Bilyeu, David
This dissertation presents an extension of the Conservation Element Solution Element (CESE) method from second- to higher-order accuracy. The new method retains the favorable characteristics of the original second-order CESE scheme, including (i) the use of the space-time integral equation for conservation laws, (ii) a compact mesh stencil, (iii) stability up to a CFL number of unity, (iv) a fully explicit, time-marching integration scheme, (v) true multidimensionality without using directional splitting, and (vi) the ability to handle two- and three-dimensional geometries by using unstructured meshes. This algorithm has been thoroughly tested in one, two and three spatial dimensions and has been shown to obtain the desired order of accuracy for solving both linear and non-linear hyperbolic partial differential equations. The scheme has also shown its ability to accurately resolve discontinuities in the solutions. Higher order unstructured methods such as the Discontinuous Galerkin (DG) method and the Spectral Volume (SV) methods have been developed for one-, two- and three-dimensional applications. Although these schemes have seen extensive development and use, certain drawbacks of these methods have been well documented. For example, the explicit versions of these two methods have very stringent stability criteria. This stability criterion requires that the time step be reduced as the order of the solver increases, for a given simulation on a given mesh. The research presented in this dissertation builds upon the work of Chang, who developed a fourth-order CESE scheme to solve a scalar one-dimensional hyperbolic partial differential equation. The completed research has resulted in two key deliverables. The first is a detailed derivation of high-order CESE methods on unstructured meshes for solving the conservation laws in two- and three-dimensional spaces. The second is the code implementation of these numerical methods in a computer code.
For code development, a one-dimensional solver for the Euler equations was developed. This work is an extension of Chang's work on the fourth-order CESE method for solving a one-dimensional scalar convection equation. A generic formulation for the nth-order CESE method, where n ≥ 4, was derived. Indeed, numerical implementation of the scheme confirmed that the order of convergence was consistent with the order of the scheme. For the two- and three-dimensional solvers, SOLVCON was used as the basic framework for code implementation. A new solver kernel for the fourth-order CESE method has been developed and integrated into the framework provided by SOLVCON. The main part of SOLVCON, which deals with unstructured meshes and parallel computing, remains intact. The SOLVCON code for data transmission between computer nodes for High Performance Computing (HPC) also remains intact. To validate and verify the newly developed high-order CESE algorithms, several one-, two- and three-dimensional simulations were conducted. For the arbitrary order, one-dimensional, CESE solver, three sets of governing equations were selected for simulation: (i) the linear convection equation, (ii) the linear acoustic equations, (iii) the nonlinear Euler equations. All three systems of equations were used to verify the order of convergence through mesh refinement. In addition the Euler equations were used to solve the Shu-Osher and Blastwave problems. These two simulations demonstrated that the new high-order CESE methods can accurately resolve discontinuities in the flow field. For the two-dimensional, fourth-order CESE solver, the Euler equation was employed in four different test cases. The first case was used to verify the order of convergence through mesh refinement. The next three cases demonstrated the ability of the new solver to accurately resolve discontinuities in the flows.
This was demonstrated through: (i) the interaction between acoustic waves and an entropy pulse, (ii) supersonic flow over a circular blunt body, (iii) supersonic flow over a guttered wedge. To validate and verify the three-dimensional, fourth-order CESE solver, two different simulations were selected. The first used the linear convection equations to demonstrate fourth-order convergence. The second used the Euler equations to simulate supersonic flow over a spherical body to demonstrate the scheme's ability to accurately resolve shocks. All test cases used are well known benchmark problems and as such, there are multiple sources available to validate the numerical results. Furthermore, the simulations showed that the high-order CESE solver was stable at a CFL number near unity.
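The mesh-refinement convergence checks used throughout can be sketched as follows (synthetic error values, not the dissertation's data): halving the mesh spacing should divide the error by 2^p for a pth-order scheme, so the observed order is recovered from consecutive error ratios.

```python
import math

def observed_order(e_coarse, e_fine, refinement=2.0):
    """Observed order of accuracy from errors on two meshes related by a
    uniform refinement factor: p = log(e_coarse / e_fine) / log(refinement)."""
    return math.log(e_coarse / e_fine) / math.log(refinement)

# Errors from a hypothetical fourth-order solver on successively halved meshes
# (each refinement divides the error by 16 = 2**4):
errors = [1.6e-3, 1.0e-4, 6.25e-6]
orders = [observed_order(errors[i], errors[i + 1]) for i in range(len(errors) - 1)]
```

A table of such observed orders approaching 4 under refinement is the standard evidence that a fourth-order scheme is implemented correctly, which is how the dissertation verifies its one-, two- and three-dimensional solvers.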
Some issues related to simulation of the tracking and communications computer network
NASA Technical Reports Server (NTRS)
Lacovara, Robert C.
1989-01-01
The Communications Performance and Integration branch of the Tracking and Communications Division has an ongoing involvement in the simulation of its flight hardware for Space Station Freedom. Specifically, the communication process between central processor(s) and orbital replaceable units (ORU's) is simulated with varying degrees of fidelity. The results of investigations into three aspects of this simulation effort are given. The most general area involves the use of computer assisted software engineering (CASE) tools for this particular simulation. The second area of interest is simulation methods for systems of mixed hardware and software. The final area investigated is the application of simulation methods to one of the proposed computer network protocols for space station, specifically IEEE 802.4.
Tsunami hazard assessments with consideration of uncertain earthquake characteristics
NASA Astrophysics Data System (ADS)
Sepulveda, I.; Liu, P. L. F.; Grigoriu, M. D.; Pritchard, M. E.
2017-12-01
The uncertainty quantification of tsunami assessments due to uncertain earthquake characteristics faces important challenges. First, the generated earthquake samples must be consistent with the properties observed in past events. Second, an uncertainty propagation method must be adopted to determine tsunami uncertainties at a feasible computational cost. In this study we propose a new methodology, which improves on existing tsunami uncertainty assessment methods. The methodology considers two uncertain earthquake characteristics: the slip distribution and the location. First, the methodology generates consistent earthquake slip samples by means of a Karhunen-Loève (K-L) expansion and a translation process (Grigoriu, 2012), applicable to any non-rectangular rupture area and marginal probability distribution. The K-L expansion was recently applied by LeVeque et al. (2016). We have extended the methodology by analyzing accuracy criteria in terms of the tsunami initial conditions. Furthermore, and unlike this reference, we preserve the original probability properties of the slip distribution by avoiding post-sampling treatments such as earthquake slip scaling. Our approach is analyzed and justified in the framework of the present study. Second, the methodology uses a Stochastic Reduced Order Model (SROM) (Grigoriu, 2009) instead of a classic Monte Carlo simulation, which reduces the computational cost of the uncertainty propagation. The methodology is applied to a real case: we study tsunamis generated at the site of the 2014 Chilean earthquake, generating earthquake samples with expected magnitude Mw 8. We first demonstrate that the stochastic approach of our study generates earthquake samples consistent with the target probability laws. We also show that the results obtained from SROM are more accurate than those of classic Monte Carlo simulation.
We finally validate the methodology by comparing the simulated tsunamis and the tsunami records for the 2014 Chilean earthquake. Results show that leading wave measurements fall within the tsunami sample space. At later times, however, there are mismatches between measured data and the simulated results, suggesting that other sources of uncertainty are as relevant as the uncertainty of the studied earthquake characteristics.
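The K-L-based slip sampling described above can be sketched in a few lines: draw random fields as a truncated eigen-expansion of a covariance kernel, with independent standard-normal weights. This is only a sketch under stated assumptions — the exponential kernel, correlation length, 1D grid, and mode count below are illustrative choices, not the paper's, and the translation step that maps the Gaussian field to the target marginal distribution is omitted.

```python
import numpy as np

def kl_slip_samples(x, cov, n_modes, n_samples, rng):
    """Draw random slip fields via a truncated Karhunen-Loeve expansion."""
    C = cov(x[:, None], x[None, :])                  # covariance matrix on the grid
    lam, phi = np.linalg.eigh(C)                     # eigenpairs, ascending order
    lam, phi = lam[::-1][:n_modes], phi[:, ::-1][:, :n_modes]  # keep largest modes
    xi = rng.standard_normal((n_samples, n_modes))   # independent N(0,1) weights
    # Each sample is sum_k xi_k * sqrt(lam_k) * phi_k(x)
    return xi @ (np.sqrt(np.maximum(lam, 0.0)) * phi).T

# Illustrative exponential covariance with unit variance and correlation length L_corr.
L_corr = 20.0
cov = lambda a, b: np.exp(-np.abs(a - b) / L_corr)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 100.0, 101)                     # assumed 1D rupture coordinate (km)
samples = kl_slip_samples(x, cov, n_modes=10, n_samples=500, rng=rng)
```

In the paper a translation process then transforms these Gaussian fields so the slip has the desired marginal distribution; only the Gaussian K-L step is shown here.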
Digital simulation of hybrid loop operation in RFI backgrounds.
NASA Technical Reports Server (NTRS)
Ziemer, R. E.; Nelson, D. R.
1972-01-01
A digital computer model for Monte Carlo simulation of an imperfect second-order hybrid phase-locked loop (PLL) operating in radio-frequency interference (RFI) and Gaussian noise backgrounds has been developed. Characterization of hybrid loop performance in terms of cycle-slipping statistics and phase error variance, through computer simulation, indicates that the hybrid loop has performance advantages in RFI backgrounds over the conventional PLL or the Costas loop.
Computer simulation of random variables and vectors with arbitrary probability distribution laws
NASA Technical Reports Server (NTRS)
Bogdan, V. M.
1981-01-01
Assume that there is given an arbitrary n-dimensional probability distribution F. A recursive construction is found for a sequence of functions x_1 = f_1(U_1, ..., U_n), ..., x_n = f_n(U_1, ..., U_n) such that if U_1, ..., U_n are independent random variables having uniform distribution over the open interval (0,1), then the joint distribution of the variables x_1, ..., x_n coincides with the distribution F. Since uniform independent random variables can be well simulated by means of a computer, this result allows one to simulate arbitrary n-dimensional random variables if their joint probability distribution is known.
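The construction above is the sequential inverse-transform method: in n dimensions each f_k inverts the conditional CDF of x_k given the earlier coordinates, and in one dimension it reduces to x = F^(-1)(U). A minimal one-dimensional sketch, with the exponential distribution chosen purely as an illustrative example:

```python
import math
import random

def sample_exponential(lam, u):
    """Inverse-CDF map f(U) for the exponential law F(x) = 1 - exp(-lam * x)."""
    return -math.log(1.0 - u) / lam

random.seed(1)
lam = 2.0
# Feed uniform (0,1) variates through the inverse CDF to get exponential samples.
xs = [sample_exponential(lam, random.random()) for _ in range(100_000)]
mean = sum(xs) / len(xs)   # should approach 1/lam = 0.5
```

For a joint distribution, the same idea is applied coordinate by coordinate, inverting conditional CDFs instead of the marginal one.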
NASA Astrophysics Data System (ADS)
Gailler, A.; Loevenbruck, A.; Hebert, H.
2013-12-01
Numerical tsunami propagation and inundation models are well developed and have now reached an impressive level of accuracy, especially in locations such as harbors where the tsunami waves are mostly amplified. In the framework of tsunami warning under real-time operational conditions, the main obstacle to the routine use of such numerical simulations remains the slowness of the computation, which is exacerbated when detailed grids are required for precise modeling of the coastline response of an individual harbor. Thus only tsunami offshore propagation modeling tools using a single sparse bathymetric computation grid are presently included within the French Tsunami Warning Center (CENALT), providing rapid tsunami warning estimates at the scale of the western Mediterranean and NE Atlantic basins. We present here a preliminary work that performs quick estimates of the inundation at individual harbors from these high-sea tsunami forecasting simulations. The method involves an empirical correction based on theoretical amplification laws (either Green's or Synolakis' law). The main limitation is that its application to a given coastal area would require a large database of previous observations in order to define the empirical parameters of the correction equation. As no such data (i.e., historical tide gage records of significant tsunamis) are available for the western Mediterranean and NE Atlantic basins, we use a set of synthetic mareograms, calculated both for hypothetical and for well-known historical tsunamigenic earthquakes in the area. This synthetic dataset is obtained through accurate numerical tsunami propagation and inundation modeling using several nested bathymetric grids of increasingly fine resolution close to the shores (down to a grid cell size of 3 m in some Mediterranean harbors).
Nonlinear shallow-water tsunami simulations performed on a single coarse 2' bathymetric grid are compared to the values given by time-consuming nested-grid simulations (and to observations when available), in order to check to what extent the simple approach based on the amplification laws can explain the data. The idea is to fit tsunami data with numerical modeling carried out without any refined coastal bathymetry/topography. To this end several parameters are discussed, namely the bathymetric depth to which model results must be extrapolated (using Green's law), and the mean bathymetric slope to consider near the studied coast (when using Synolakis' law).
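Green's law, used above to extrapolate offshore model amplitudes toward the coast, states that over slowly varying depth the wave amplitude scales as h^(-1/4). A minimal sketch — the depths and offshore amplitude below are illustrative values, not taken from the study:

```python
def greens_law_amplitude(a_offshore, h_offshore, h_coastal):
    """Shoaling amplification per Green's law: a2 = a1 * (h1 / h2) ** 0.25."""
    return a_offshore * (h_offshore / h_coastal) ** 0.25

# e.g. a 0.2 m wave over 1000 m depth, extrapolated to 1 m depth
a_coast = greens_law_amplitude(0.2, 1000.0, 1.0)   # 0.2 * 1000**0.25, about 1.12 m
```

The empirical correction in the paper adjusts this idealized law against detailed nested-grid results, since real harbor response departs from pure shoaling.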
Quantum adiabatic computation with a constant gap is not useful in one dimension.
Hastings, M B
2009-07-31
We show that it is possible to use a classical computer to efficiently simulate the adiabatic evolution of a quantum system in one dimension with a constant spectral gap, starting the adiabatic evolution from a known initial product state. The proof relies on a recently proven area law for such systems, implying the existence of a good matrix product representation of the ground state, combined with an appropriate algorithm to update the matrix product state as the Hamiltonian is changed. This implies that adiabatic evolution with such Hamiltonians is not useful for universal quantum computation. Therefore, adiabatic algorithms which are useful for universal quantum computation either require a spectral gap tending to zero or need to be implemented in more than one dimension (we leave open the question of the computational power of adiabatic simulation with a constant gap in more than one dimension).
Wong, William W L; Feng, Zeny Z; Thein, Hla-Hla
2016-11-01
Agent-based models (ABMs) are computer simulation models that define interactions among agents and simulate emergent behaviors that arise from the ensemble of local decisions. ABMs have been increasingly used to examine trends in infectious disease epidemiology. However, the main limitation of ABMs is the high computational cost of large-scale simulation. To improve the computational efficiency of large-scale ABM simulations, we built a parallelizable sliding region algorithm (SRA) for ABMs and compared it to a nonparallelizable ABM. We developed a complex agent network and performed two simulations to model hepatitis C epidemics based on real demographic data from Saskatchewan, Canada. The first simulation used the SRA, which processed each postal-code subregion in turn. The second simulation processed the entire population simultaneously. We conclude that the parallelizable SRA achieved computational time savings with comparable results in a province-wide simulation. Using the same method, the SRA can be generalized to a country-wide simulation. This parallel algorithm thus makes it possible to use ABMs for large-scale simulation with limited computational resources.
Computational Nanoelectronics and Nanotechnology at NASA ARC
NASA Technical Reports Server (NTRS)
Saini, Subhash; Kutler, Paul (Technical Monitor)
1998-01-01
Both physical and economic considerations indicate that the scaling era of CMOS will run out of steam around the year 2010. However, physical laws also indicate that it is possible to compute at a rate a billion times present speeds with the expenditure of only one watt of electrical power. NASA has long-term needs where ultra-small semiconductor devices are needed for critical applications: high-performance, low-power, compact computers for intelligent autonomous vehicles and Petaflop computing technology are some key examples. To advance the design, development, and production of future-generation micro- and nano-devices, the IT Modeling and Simulation Group has been started at NASA Ames with the goal of developing an integrated simulation environment that addresses problems related to nanoelectronics and molecular nanotechnology. An overview of the nanoelectronics and nanotechnology research activities being carried out at Ames Research Center will be presented, together with the vision and research objectives of the IT Modeling and Simulation Group, including applications of nanoelectronics-based devices relevant to NASA missions.
Structured Overlapping Grid Simulations of Contra-rotating Open Rotor Noise
NASA Technical Reports Server (NTRS)
Housman, Jeffrey A.; Kiris, Cetin C.
2015-01-01
Computational simulations using structured overlapping grids with the Launch Ascent and Vehicle Aerodynamics (LAVA) solver framework are presented for predicting tonal noise generated by a contra-rotating open rotor (CROR) propulsion system. A coupled Computational Fluid Dynamics (CFD) and Computational AeroAcoustics (CAA) numerical approach is applied. Three-dimensional time-accurate hybrid Reynolds Averaged Navier-Stokes/Large Eddy Simulation (RANS/LES) CFD simulations are performed in the inertial frame, including dynamic moving grids, using a higher-order accurate finite difference discretization on structured overlapping grids. A higher-order accurate free-stream preserving metric discretization with discrete enforcement of the Geometric Conservation Law (GCL) on moving curvilinear grids is used to create an accurate, efficient, and stable numerical scheme. The aeroacoustic analysis is based on a permeable surface Ffowcs Williams-Hawkings (FW-H) approach, evaluated in the frequency domain. A time-step sensitivity study was performed using only the forward row of blades to determine an adequate time-step. The numerical approach is validated against existing wind tunnel measurements.
ERIC Educational Resources Information Center
Facao, M.; Lopes, A.; Silva, A. L.; Silva, P.
2011-01-01
We propose an undergraduate numerical project for simulating the results of the second-order correlation function as obtained by an intensity interference experiment for two kinds of light, namely bunched light with Gaussian or Lorentzian power density spectrum and antibunched light obtained from single-photon sources. While the algorithm for…
Computer simulation of the classical entanglement of U-shaped particles in three dimensions
NASA Astrophysics Data System (ADS)
Maddock, Brian; Lindner, John
2014-03-01
Classical entanglement is important in a wide range of phenomena, such as Velcro hook-and-loop fasteners, seed dispersal by animal fur, and bent liquid crystal molecules. We present a computer simulation of the entanglement of U-shaped particles in three dimensions. We represent the particles by phenomenological potentials and evolve them by integrating Newton's laws of motion. We drop them into a virtual cylinder, shake them, and ultimately release the cylinder. As the particle piles relax, we quantify their entanglement by the exponential decay times of their heights, which we correlate with the particles' height-to-length ratios.
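Integrating Newton's laws for particle simulations like this is typically done with a symplectic scheme. The sketch below uses velocity Verlet on a single particle in a harmonic well; the harmonic force is an illustrative stand-in for the paper's phenomenological inter-particle potentials, not a reconstruction of them.

```python
import math

def velocity_verlet(x, v, accel, dt, steps):
    """Advance (x, v) under the acceleration field accel(x) with velocity Verlet."""
    a = accel(x)
    for _ in range(steps):
        x = x + v * dt + 0.5 * a * dt * dt   # position update
        a_new = accel(x)
        v = v + 0.5 * (a + a_new) * dt       # velocity update with averaged force
        a = a_new
    return x, v

# Harmonic oscillator a = -x has period 2*pi: after one period we return to the start.
dt = 1e-3
x, v = velocity_verlet(1.0, 0.0, lambda x: -x, dt=dt, steps=round(2 * math.pi / dt))
```

Velocity Verlet is the usual choice here because it conserves energy well over long runs, which matters when measuring slow relaxation such as the pile-height decay.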
Palmer, Jeremy C; Car, Roberto; Debenedetti, Pablo G
2013-01-01
We investigate the metastable phase behaviour of the ST2 water model under deeply supercooled conditions. The phase behaviour is examined using umbrella sampling (US) and well-tempered metadynamics (WT-MetaD) simulations to compute the reversible free energy surface parameterized by density and bond-orientation order. We find that free energy surfaces computed with both techniques clearly show two liquid phases in coexistence, in agreement with our earlier US and grand canonical Monte Carlo calculations [Y. Liu, J. C. Palmer, A. Z. Panagiotopoulos and P. G. Debenedetti, J Chem Phys, 2012, 137, 214505; Y. Liu, A. Z. Panagiotopoulos and P. G. Debenedetti, J Chem Phys, 2009, 131, 104508]. While we demonstrate that US and WT-MetaD produce consistent results, the latter technique is estimated to be more computationally efficient by an order of magnitude. As a result, we show that WT-MetaD can be used to study the finite-size scaling behaviour of the free energy barrier separating the two liquids for systems containing 192, 300 and 400 ST2 molecules. Although our results are consistent with the expected N(2/3) scaling law, we conclude that larger systems must be examined to provide conclusive evidence of a first-order phase transition and associated second critical point.
Scalable and fast heterogeneous molecular simulation with predictive parallelization schemes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guzman, Horacio V.; Junghans, Christoph; Kremer, Kurt
Multiscale and inhomogeneous molecular systems are challenging topics in the field of molecular simulation. In particular, modeling biological systems in the context of multiscale simulations and exploring material properties are driving a permanent development of new simulation methods and optimization algorithms. In computational terms, those methods require parallelization schemes that make productive use of computational resources for each simulation from its genesis. Here, we introduce the heterogeneous domain decomposition approach, which combines a heterogeneity-sensitive spatial domain decomposition with an a priori rearrangement of subdomain walls. Within this approach, theoretical modeling and scaling laws for the force computation time are proposed and studied as a function of the number of particles and the spatial resolution ratio. We also show the capabilities of the new approach by comparing it to both static domain decomposition algorithms and dynamic load-balancing schemes. Specifically, two representative molecular systems have been simulated and compared to the heterogeneous domain decomposition proposed in this work: an adaptive resolution simulation of a biomolecule solvated in water and a phase-separated binary Lennard-Jones fluid.
Scalable and fast heterogeneous molecular simulation with predictive parallelization schemes
Guzman, Horacio V.; Junghans, Christoph; Kremer, Kurt; ...
2017-11-27
Flight test experience and controlled impact of a remotely piloted jet transport aircraft
NASA Technical Reports Server (NTRS)
Horton, Timothy W.; Kempel, Robert W.
1988-01-01
The Dryden Flight Research Facility of NASA Ames Research Center (Ames-Dryden) and the FAA conducted the controlled impact demonstration (CID) program using a large, four-engine, remotely piloted jet transport airplane. Closed-loop primary flight was controlled through the existing onboard PB-20D autopilot, which had been modified for the CID program. Uplink commands were sent from a ground-based cockpit and digital computer in conjunction with an up-down telemetry link; these commands were received aboard the airplane and transferred through uplink interface systems to the modified PB-20D autopilot. Both proportional and discrete commands were produced by the ground system. Prior to the flight tests, extensive simulation was conducted during the development of the ground-based digital control laws, which included primary control, secondary control, and racetrack and final approach guidance. Extensive ground checks were performed on all remotely piloted systems; however, piloted flight tests were the primary means of validating the control law concepts developed in simulation. The design, development, and flight testing of the control laws and systems required to accomplish the remotely piloted mission are discussed.
ELSI Bibliography: Ethical legal and social implications of the Human Genome Project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yesley, M.S.
This second edition of the ELSI Bibliography provides a current and comprehensive resource for identifying publications on the major topics related to the ethical, legal and social issues (ELSI) of the Human Genome Project. Since the first edition of the ELSI Bibliography was printed last year, new publications and earlier ones identified by additional searching have doubled our computer database of ELSI publications to over 5600 entries. The second edition of the ELSI Bibliography reflects this growth of the underlying computer database. Researchers should note that an extensive collection of publications in the database is available for public use at the General Law Library of Los Alamos National Laboratory (LANL).
ERIC Educational Resources Information Center
Cavanagh, Sean
2009-01-01
This article reports that young inventors at a Maryland high school are not only learning scientific principles, but also teamwork and the tenets of patent law. Twice a week, 10 members of the Clarksburg High School's Coyote Inventors Club gather in a second-floor computer lab to peck away at building a deceptively simple device: a cable that…
Static and moving solid/gas interface modeling in a hybrid rocket engine
NASA Astrophysics Data System (ADS)
Mangeot, Alexandre; William-Louis, Mame; Gillard, Philippe
2018-07-01
A numerical model was developed with CFD-ACE software to study the working conditions of an oxygen-nitrogen/polyethylene hybrid rocket combustor. As a first approach, a simplified numerical model is presented. It includes a compressible transient gas phase in which a two-step combustion mechanism is implemented, coupled to a radiative model. The solid phase from the fuel grain is a semi-opaque material whose degradation process is modeled by an Arrhenius-type law. Two versions of the model were tested: the first considers the solid/gas interface on a static grid, while the second uses grid deformation during the computation to follow the asymmetric regression. The numerical results are obtained with two different regression kinetics, originating from thermogravimetric analysis and from test bench results. In each case, the fuel surface temperature is retrieved within 5% error; however, good results are only found using the kinetics from the test bench. The regression rate is predicted within 0.03 mm s-1, and the average combustor pressure and its variation over time have the same magnitude as the measurements conducted on the test bench. The simulation that uses grid deformation to follow the regression shows good stability over a simulated time of 10 s.
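The Arrhenius-type degradation law mentioned above has the generic form r = A·exp(-Ea/(R·T)). A minimal sketch — the pre-exponential factor and activation energy below are illustrative assumptions, not the paper's fitted kinetics:

```python
import math

R_GAS = 8.314  # J / (mol K), universal gas constant

def arrhenius_rate(A, Ea, T):
    """Generic Arrhenius rate law: r = A * exp(-Ea / (R * T))."""
    return A * math.exp(-Ea / (R_GAS * T))

# Illustrative parameters: with Ea = 120 kJ/mol the rate rises steeply with T,
# increasing by roughly 75% over a 20 K rise near 700 K.
A, Ea = 1.0e4, 1.2e5   # 1/s and J/mol, assumed values
r_700 = arrhenius_rate(A, Ea, 700.0)
r_720 = arrhenius_rate(A, Ea, 720.0)
```

This strong temperature sensitivity is why the two kinetics sources (thermogravimetric analysis versus test bench) yield such different regression predictions.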
Multi-component fluid flow through porous media by interacting lattice gas computer simulation
NASA Astrophysics Data System (ADS)
Cueva-Parra, Luis Alberto
In this work we study structural and transport properties, such as the power-law behavior of the trajectory of each constituent and of their center of mass, density profiles, mass flux, permeability, velocity profiles, phase separation, segregation, and mixing of miscible and immiscible multicomponent fluid flow through rigid and non-consolidated porous media. The parameters considered are the mass ratio of the components, temperature, external pressure, and porosity. Owing to its solid theoretical foundation and computational simplicity, the selected approach is the interacting lattice gas with the Monte Carlo method (Metropolis algorithm) and direct sampling, combined with particular collision rules. The percolation mechanism is used for modeling the initial random porous media. The introduced collision rules make it possible to model non-consolidated porous media, because part of the kinetic energy of the fluid particles is transferred to the barrier particles that compose the porous medium; having gained kinetic energy, the barrier particles can move. A number of interesting results are observed: (i) phase separation in immiscible fluid flow through a medium with no barrier particles (porosity p = 1); (ii) for the flow of miscible fluids through a rigid porous medium with porosity close to the percolation threshold (p_c), the flux density (a measure of permeability) shows a power-law increase ∝ (p - p_c)^μ with μ = 2.0, and the density profile is found to decay with height ∝ exp(-mgh/k_BT), consistent with the barometric height law; (iii) sedimentation and driving of barrier particles in fluid flow through a non-consolidated porous medium. This study involves developing computer simulation models with efficient serial and parallel codes, extensive data analysis via graphical utilities, and computer visualization techniques.
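The barometric decay reported in (ii) is easy to reproduce with the same Metropolis machinery: sampling a single particle's height with Boltzmann weight exp(-mgh/k_BT) yields an exponential density profile. The sketch below works in reduced units with βmg = 1 (an illustrative choice, not the paper's setup), for which the mean height is 1.

```python
import math
import random

def metropolis_heights(beta_mg, steps, rng):
    """Metropolis sampling of a particle height with P(h) proportional to exp(-beta_mg * h)."""
    h = 1.0
    samples = []
    for _ in range(steps):
        h_new = h + rng.uniform(-0.5, 0.5)           # propose a small height change
        # Accept with min(1, exp(-beta_mg * dh)); reject moves below the floor h = 0.
        if h_new >= 0.0 and rng.random() < math.exp(-beta_mg * (h_new - h)):
            h = h_new
        samples.append(h)
    return samples

rng = random.Random(42)
heights = metropolis_heights(beta_mg=1.0, steps=200_000, rng=rng)
mean_h = sum(heights) / len(heights)   # exponential with rate 1, so mean near 1
```

Binning `heights` and plotting log-density against h would recover the straight line expected from the barometric law.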
A defect stream function, law of the wall/wake method for compressible turbulent boundary layers
NASA Technical Reports Server (NTRS)
Barnwell, Richard W.; Dejarnette, Fred R.; Wahls, Richard A.
1989-01-01
The application of the defect stream function to the solution of the two-dimensional, compressible boundary layer is examined. A law of the wall/law of the wake formulation for the inner part of the boundary layer is presented which greatly simplifies the computational task near the wall and eliminates the need for an eddy viscosity model in this region. The eddy viscosity model in the outer region is arbitrary. The modified Crocco temperature-velocity relationship is used as a simplification of the differential energy equation. Formulations for both equilibrium and nonequilibrium boundary layers are presented including a constrained zero-order form which significantly reduces the computational workload while retaining the significant physics of the flow. A formulation for primitive variables is also presented. Results are given for the constrained zero-order and second-order equilibrium formulations and are compared with experimental data. A compressible wake function valid near the wall has been developed from the present results.
ERIC Educational Resources Information Center
Fogarty, Ian; Geelan, David
2013-01-01
Students in 4 Canadian high school physics classes completed instructional sequences in two key physics topics related to motion--Straight Line Motion and Newton's First Law. Different sequences of laboratory investigation, teacher explanation (lecture) and the use of computer-based scientific visualizations (animations and simulations) were…
A novel guidance law using fast terminal sliding mode control with impact angle constraints.
Sun, Lianghua; Wang, Weihong; Yi, Ran; Xiong, Shaofeng
2016-09-01
This paper is concerned with how to design a guidance law for missile interception with impact angle constraints. First, missile interception with impact angle constraints is modeled; second, a novel guidance law using fast terminal sliding mode control based on an extended state observer is proposed to optimize the trajectory and time of interception; finally, the guidance law and the stability of the closed-loop system are analyzed for stationary, constant-velocity, and maneuvering targets, respectively. Simulation results show that when missile and target are on a collision course, the novel guidance law using fast terminal sliding mode control with an extended state observer yields a more optimized trajectory and effectively reduces the time of interception, which is of great significance in modern warfare. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Quick realization of a ship steering training simulation system by virtual reality
NASA Astrophysics Data System (ADS)
Sun, Jifeng; Zhi, Pinghua; Nie, Weiguo
2003-09-01
This paper addresses two problems of a ship handling simulator. First, 360° scene generation, especially 3D dynamic sea-wave modeling, is described. Second, a multi-computer implementation of the ship handling simulator is presented. The paper also gives experimental results for the proposed simulator.
NASA Astrophysics Data System (ADS)
Ruzhitskaya, Lanika; French, R. S.; Speck, A.
2009-05-01
We report first results from a multi-faceted study employing the lab "Revolution of the Moons of Jupiter" from the CLEA group (Contemporary Laboratory Experiences in Astronomy) in an introductory astronomy laboratory course for nonscience majors. Four laboratory sections participated in the study: two at a traditional four-year public institution in Missouri and two at a two-year community college in California. Students in all sections took identical pre- and post-tests and used the same simulation software. In all sections, students were assigned randomly to work either in pairs or individually. One section at both schools was given a brief mini-lecture on Kepler's laws and introduction to the exercise while the other section at both schools was given no instructions whatsoever. The data allow comparisons between the impact of the simulation with and without instructions and on the influences of peer interactions on learning outcomes.
Computer Simulations Improve University Instructional Laboratories
2004-01-01
Laboratory classes are commonplace and essential in biology departments but can sometimes be cumbersome, unreliable, and a drain on time and resources. As university intakes increase, pressure on budgets and staff time can often lead to reduction in practical class provision. Frequently, the ability to use laboratory equipment, mix solutions, and manipulate test animals are essential learning outcomes, and “wet” laboratory classes are thus appropriate. In others, however, interpretation and manipulation of the data are the primary learning outcomes, and here, computer-based simulations can provide a cheaper, easier, and less time- and labor-intensive alternative. We report the evaluation of two computer-based simulations of practical exercises: the first in chromosome analysis, the second in bioinformatics. Simulations can provide significant time savings to students (by a factor of four in our first case study) without affecting learning, as measured by performance in assessment. Moreover, under certain circumstances, performance can be improved by the use of simulations (by 7% in our second case study). We concluded that the introduction of these simulations can significantly enhance student learning where consideration of the learning outcomes indicates that it might be appropriate. In addition, they can offer significant benefits to teaching staff. PMID:15592599
Order and chaos in the one-dimensional ϕ4 model: N-dependence and the Second Law of Thermodynamics
NASA Astrophysics Data System (ADS)
Hoover, William Graham; Aoki, Kenichiro
2017-08-01
We revisit the equilibrium one-dimensional ϕ4 model from the dynamical systems point of view. We find an infinite number of periodic orbits which are computationally stable. At the same time some of the orbits are found to exhibit positive Lyapunov exponents! The periodic orbits confine every particle in a periodic chain to trace out either the same or a mirror-image trajectory in its two-dimensional phase space. These "computationally stable" sets of pairs of single-particle orbits are either symmetric or antisymmetric to the very last computational bit. In such a periodic chain the odd-numbered and even-numbered particles' coordinates and momenta are either identical or differ only in sign. "Positive Lyapunov exponents" can and do result if an infinitesimal perturbation breaking a perfect two-dimensional antisymmetry is introduced so that the motion expands into a four-dimensional phase space. In that extended space a positive exponent results. We formulate a standard initial condition for the investigation of the microcanonical chaotic number dependence of the model. We speculate on the uniqueness of the model's chaotic sea and on the connection of such collections of deterministic and time-reversible states to the Second Law of Thermodynamics.
Computer considerations for real time simulation of a generalized rotor model
NASA Technical Reports Server (NTRS)
Howe, R. M.; Fogarty, L. E.
1977-01-01
Scaled equations were developed to meet requirements for real time computer simulation of the rotor system research aircraft. These equations form the basis for consideration of both digital and hybrid mechanization for real time simulation. For all-digital simulation, estimates of the required speed in terms of equivalent operations per second are developed based on the complexity of the equations and the required integration frame rates. For both conventional hybrid simulation and hybrid simulation using time-shared analog elements, the amount of required equipment is estimated along with a consideration of the dynamic errors. Conventional hybrid mechanization using analog simulation of those rotor equations which involve rotor-spin frequencies (these constitute the bulk of the equations) requires too much analog equipment. Hybrid simulation using time-sharing techniques for the analog elements appears possible with a reasonable amount of analog equipment. All-digital simulation with affordable general-purpose computers is not possible because of speed limitations, but specially configured digital computers do have the required speed and constitute the recommended approach.
Chnafa, C; Brina, O; Pereira, V M; Steinman, D A
2018-02-01
Computational fluid dynamics simulations of neurovascular diseases are impacted by various modeling assumptions and uncertainties, including outlet boundary conditions. Many studies of intracranial aneurysms, for example, assume zero pressure at all outlets, often the default ("do-nothing") strategy, with no physiological basis. Others divide outflow according to the outlet diameters cubed, nominally based on the more physiological Murray's law but still susceptible to subjective choices about the segmented model extent. We demonstrate the limitations and impact of these outflow strategies against a novel "splitting" method introduced here. With our method, the segmented lumen is split into its constituent bifurcations, where flow divisions are estimated locally using a power law. Together these provide the global outflow rate boundary conditions. The impact of outflow strategy on flow rates was tested for 70 cases of MCA aneurysm with 0D simulations. The impact on hemodynamic indices used for rupture status assessment was tested for 10 cases with 3D simulations. Differences in flow rates among the various strategies were up to 70%, with a non-negligible impact on average and oscillatory wall shear stresses in some cases. The Murray-law and splitting methods gave flow rates closest to physiological values reported in the literature; however, only the splitting method was insensitive to arbitrary truncation of the model extent. Cerebrovascular simulations can depend strongly on the outflow strategy. The default zero-pressure method should be avoided in favor of the Murray-law or splitting methods, the latter being released as an open-source tool to encourage the standardization of outflow strategies. © 2018 by American Journal of Neuroradiology.
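The diameter-cubed division described in this abstract reduces to a few lines; a minimal sketch follows, assuming only Murray's-law proportionality (the function name and the generalized exponent `n` are illustrative, and the authors' released splitting tool additionally walks the bifurcation tree, which is not reproduced here).

```python
def outflow_fractions(diameters, n=3.0):
    """Fraction of total outflow assigned to each outlet, proportional to
    diameter**n.  n=3 corresponds to Murray's law; n=2 would split the
    flow by cross-sectional area instead."""
    powered = [d ** n for d in diameters]
    total = sum(powered)
    return [p / total for p in powered]
```

For two outlets of 2 mm and 3 mm, Murray's law assigns 8/35 and 27/35 of the total outflow, so the larger branch carries roughly three quarters of the flow.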
Second Forest Vegetation Simulator Conference; February 12-14, 2002; Fort Collins, CO.
Nicholas L. Crookston; Robert N. Havis
2002-01-01
The Forest Vegetation Simulator (FVS) is a computer program that projects the development of forest stands in the United States and British Columbia, Canada. The proceedings of the second FVS conference, held in Fort Collins, CO, include 34 papers dealing with applications of FVS that range from the stand level through full-scale landscape analyses. Forecasts ranging...
A steering law for a roof-type configuration for a single-gimbal control moment gyro system
NASA Technical Reports Server (NTRS)
Yoshikawa, T.
1974-01-01
Single-Gimbal Control Moment Gyro (SGCMG) systems have been investigated for attitude control of the Large Space Telescope (LST) and the High Energy Astronomy Observatory (HEAO). However, the various steering laws proposed thus far for SGCMG systems have defects caused by singular states of the system. In this report, a steering law for a roof-type SGCMG system is proposed, based on a new momentum distribution scheme that makes all the singular states unstable. This momentum distribution scheme is formulated by treating the system as a sampled-data system. Analytical considerations show that this steering law gives control performance satisfactory for practical applications. Results of a preliminary computer simulation fully support this conclusion.
Corresponding-states laws for protein solutions.
Katsonis, Panagiotis; Brandon, Simon; Vekilov, Peter G
2006-09-07
The solvent around protein molecules in solutions is structured, and this structuring introduces a repulsion in the intermolecular interaction potential at intermediate separations. We use Monte Carlo simulations of isotropic, pair-additive systems interacting through such potentials. We test whether the liquid-liquid and liquid-solid phase lines in model protein solutions can be predicted from universal curves and a pair of experimentally determined parameters, as done for atomic and colloid materials using several laws of corresponding states. As predictors, we test three properties at the critical point for liquid-liquid separation: the temperature, as in the original van der Waals law; the second virial coefficient; and a modified second virial coefficient, each paired with the critical volume fraction. We find that the van der Waals law is best obeyed and appears more general than its original formulation: a single universal curve describes all tested nonconformal isotropic pair-additive systems. Published experimental data for the liquid-liquid equilibrium for several proteins at various conditions follow a single van der Waals curve. For the solid-liquid equilibrium, we find that no single system property serves as its predictor. We go beyond corresponding-states correlations and put forth semiempirical laws, which allow prediction of the critical temperature and volume fraction solely based on the range of attraction of the intermolecular interaction potential.
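One of the predictors above, the second virial coefficient, can be computed for any isotropic pair potential by direct quadrature. The sketch below uses a square-well potential with arbitrary parameters, not one fitted to any protein, and a simple rectangle-rule integral:

```python
import math

def second_virial(u, beta, r_max=10.0, dr=1e-3):
    """B2 = 2*pi * integral of (1 - exp(-beta*u(r))) * r**2 dr."""
    total, r = 0.0, dr
    while r < r_max:
        total += (1.0 - math.exp(-beta * u(r))) * r * r * dr
        r += dr
    return 2.0 * math.pi * total

def square_well(r, sigma=1.0, lam=1.5, eps=1.0):
    """Hard core of diameter sigma with an attractive well of depth eps
    extending out to lam*sigma."""
    if r < sigma:
        return math.inf
    if r < lam * sigma:
        return -eps
    return 0.0
```

A negative B2 signals net attraction; for this well at beta*eps = 1 the analytic value is (2*pi/3)*[1 - (lam**3 - 1)*(e - 1)], roughly -6.45, and the quadrature should land close to it.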
NASA Technical Reports Server (NTRS)
Alter, Stephen J.; Brauckmann, Gregory J.; Kleb, William L.; Glass, Christopher E.; Streett, Craig L.; Schuster, David M.
2015-01-01
A transonic flow field about a Space Launch System (SLS) configuration was simulated with the Fully Unstructured Three-Dimensional (FUN3D) computational fluid dynamics (CFD) code at wind tunnel conditions. Unsteady, time-accurate computations were performed using second-order Delayed Detached Eddy Simulation (DDES) for up to 1.5 physical seconds. The surface pressure time history was collected at 619 locations, 169 of which matched locations on a 2.5 percent wind tunnel model that was tested in the 11 ft. x 11 ft. test section of the NASA Ames Research Center's Unitary Plan Wind Tunnel. Comparisons between computation and experiment showed that the peak surface pressure RMS level occurs behind the forward attach hardware, and good agreement for frequency and power was obtained in this region. Computational domain, grid resolution, and time step sensitivity studies were performed, including an investigation of pseudo-time sub-iteration convergence. Using these sensitivity studies and experimental data comparisons, a set of best practices to date has been established for FUN3D simulations of SLS launch vehicle analyses. To the authors' knowledge, this is the first time DDES has been used in a systematic approach to establish the simulation time needed to analyze unsteady pressure loads on a space launch vehicle such as the NASA SLS.
ERIC Educational Resources Information Center
Peterson, Julie Ellen
2009-01-01
The first purpose of this experimental study was to determine if there were effects on achievement between traditional pencil-and-paper instructional strategies and computer simulated instructional strategies used to teach interior design business ethics. The second purpose was to determine the level of engagement of interior design students using…
Parallel-Processing Test Bed For Simulation Software
NASA Technical Reports Server (NTRS)
Blech, Richard; Cole, Gary; Townsend, Scott
1996-01-01
Second-generation Hypercluster computing system is multiprocessor test bed for research on parallel algorithms for simulation in fluid dynamics, electromagnetics, chemistry, and other fields with large computational requirements but relatively low input/output requirements. Built from standard, off-the-shelf hardware readily upgraded as improved technology becomes available. System used for experiments with such parallel-processing concepts as message-passing algorithms, debugging software tools, and computational steering. First-generation Hypercluster system described in "Hypercluster Parallel Processor" (LEW-15283).
Beyond the constraints underlying Kolmogorov-Johnson-Mehl-Avrami theory related to the growth laws.
Tomellini, M; Fanfoni, M
2012-02-01
The theory of Kolmogorov-Johnson-Mehl-Avrami for phase transition kinetics is subject to severe limitations concerning the functional form of the growth law. This paper is devoted to sidestepping this drawback through the use of the correlation function approach. Moreover, we put forward an easy-to-handle formula, written in terms of the experimentally accessible actual extended volume fraction, which is found to match several types of growth. Computer simulations have been performed to corroborate the theoretical approach. © 2012 American Physical Society
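For orientation, the classical KJMA relation that this paper generalizes maps the extended (unimpinged) volume fraction to the actual transformed fraction. A minimal sketch is below; the power-law form for the extended fraction is the textbook isothermal case, not the generalized growth laws treated in the paper, and the rate constant and Avrami exponent are arbitrary.

```python
import math

def actual_fraction(x_extended):
    """Classical KJMA relation: X = 1 - exp(-X_ext)."""
    return 1.0 - math.exp(-x_extended)

def extended_fraction(t, k=1.0, n=3.0):
    """Textbook power-law extended volume fraction, X_ext = (k*t)**n."""
    return (k * t) ** n
```

At k*t = 1 the actual fraction is 1 - 1/e, about 0.632, and the curve saturates toward complete transformation as impingement dominates.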
Simulation/Gaming and the Acquisition of Communicative Competence in Another Language.
ERIC Educational Resources Information Center
Garcia-Carbonell, Amparo; Rising, Beverly; Montero, Begona; Watts, Frances
2001-01-01
Discussion of communicative competence in second language acquisition focuses on a theoretical and practical meshing of simulation and gaming methodology with theories of foreign language acquisition, including task-based learning, interaction, and comprehensible input. Describes experiments conducted with computer-assisted simulations in…
Ahn, Hyo-Sung; Kim, Byeong-Yeon; Lim, Young-Hun; Lee, Byung-Hun; Oh, Kwang-Kyo
2018-03-01
This paper proposes three coordination laws for optimal energy generation and distribution in an energy network composed of a physical flow layer and a cyber communication layer. Energy flows through the physical layer, but its generation and distribution are coordinated by distributed algorithms acting on communicated information. First, distributed energy generation and energy distribution laws are proposed in a decoupled manner, without considering the interactive characteristics between energy generation and energy distribution. Second, a joint coordination law is designed that treats energy generation and energy distribution in a coupled manner, taking the interactive characteristics into account. Third, to handle over- or under-generation, an energy distribution law for networks with batteries is designed. The coordination laws proposed in this paper are fully distributed in the sense that optimal decisions are made using only relative information among neighboring nodes. Numerical simulations illustrate the validity of the proposed distributed coordination laws.
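The coordination laws themselves are not given in the abstract; as a minimal sketch of the "relative information only" ingredient, a synchronous average-consensus iteration over a communication graph looks like the following (the graph, gain, and variable names are illustrative, and the paper's dispatch laws are considerably more elaborate):

```python
def consensus_step(x, neighbors, alpha=0.2):
    """One synchronous round: node i moves by the sum of the relative
    differences x[j] - x[i] over its neighbours j; no global quantity
    (such as the network-wide average) is ever communicated."""
    return [x[i] + alpha * sum(x[j] - x[i] for j in neighbors[i])
            for i in range(len(x))]

def run_consensus(x, neighbors, rounds=200, alpha=0.2):
    """Iterate until the node values agree (for small enough alpha)."""
    for _ in range(rounds):
        x = consensus_step(x, neighbors, alpha)
    return x
```

Because the update is symmetric across each edge, the network total is conserved at every round, so all nodes converge to the average of the initial values, which is the basic mechanism behind distributed optimal dispatch.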
A Comparison of Military and Law Enforcement Body Armour.
Orr, Robin; Schram, Ben; Pope, Rodney
2018-02-14
Law-enforcement officers increasingly wear body armour for protection; wearing body armour is common practice in military populations. Law-enforcement and military occupational demands are vastly different and military-styled body armour may not be suitable for law-enforcement. This study investigated differences between selected military body armour (MBA: 6.4 kg) and law-enforcement body armour (LEBA: 2.1 kg) in impacts on postural sway, vertical jump, agility, a functional movement screen (FMS), task simulations (vehicle exit; victim recovery), and subjective measures. Ten volunteer police officers (six females, four males) were randomly allocated to one of the designs on each of two days. Body armour type did not significantly affect postural sway, vertical jump, vehicle exit and 5 m sprint times, or victim recovery times. Both armour types increased sway velocity and sway-path length in the final five seconds compared to the first 5 s of a balance task. The MBA was associated with significantly slower times to complete the agility task, poorer FMS total scores, and poorer subjective ratings of performance and comfort. The LEBA was perceived as more comfortable and received more positive performance ratings during the agility test and task simulations. The impacts of MBA and LEBA differed significantly and they should not be considered interchangeable.
Simulation model of a twin-tail, high performance airplane
NASA Technical Reports Server (NTRS)
Buttrill, Carey S.; Arbuckle, P. Douglas; Hoffler, Keith D.
1992-01-01
The mathematical model and associated computer program to simulate a twin-tailed high performance fighter airplane (McDonnell Douglas F/A-18) are described. The simulation program is written in the Advanced Continuous Simulation Language. The simulation math model includes the nonlinear six degree-of-freedom rigid-body equations, an engine model, sensors, and first order actuators with rate and position limiting. A simplified form of the F/A-18 digital control laws (version 8.3.3) is implemented. The simulated control law includes only inner loop augmentation in the up-and-away flight mode. The aerodynamic forces and moments are calculated from a wind-tunnel-derived database using table look-ups with linear interpolation. The aerodynamic database has an angle-of-attack range of -10 to +90 degrees and a sideslip range of -20 to +20 degrees. The effects of elastic deformation are incorporated in a quasi-static-elastic manner. Elastic degrees of freedom are not actively simulated. In the engine model, the throttle-commanded steady-state thrust level and the dynamic response characteristics of the engine are based on airflow rate as determined from a table look-up. Afterburner dynamics are switched in at a threshold based on the engine airflow and commanded thrust.
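The database interpolation described above amounts to clamped linear table look-up. A one-dimensional sketch follows (the actual simulation uses multi-dimensional tables, and the sample lift-coefficient numbers below are made up for illustration, not taken from the F/A-18 database):

```python
import bisect

def table_lookup(xs, ys, x):
    """Linear interpolation in a monotonically increasing table,
    clamped at the table edges (no extrapolation)."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    i = bisect.bisect_right(xs, x) - 1
    t = (x - xs[i]) / (xs[i + 1] - xs[i])
    return ys[i] + t * (ys[i + 1] - ys[i])

# Hypothetical lift-coefficient table versus angle of attack (degrees).
ALPHA = [-10.0, 0.0, 10.0, 20.0]
CL = [-0.5, 0.2, 1.0, 1.4]
```

For example, `table_lookup(ALPHA, CL, 5.0)` interpolates halfway between 0.2 and 1.0, and queries outside the tabulated angle-of-attack range return the edge values, mirroring the database's finite coverage.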
An efficient formulation of robot arm dynamics for control and computer simulation
NASA Astrophysics Data System (ADS)
Lee, C. S. G.; Nigam, R.
This paper describes an efficient formulation of the dynamic equations of motion of industrial robots based on the Lagrange formulation of d'Alembert's principle. This formulation, as applied to a PUMA robot arm, results in a set of closed form second order differential equations with cross product terms. They are not as efficient in computation as those formulated by the Newton-Euler method, but provide a better analytical model for control analysis and computer simulation. Computational complexities of this dynamic model together with other models are tabulated for discussion.
On the Asymmetric Zero-Range in the Rarefaction Fan
NASA Astrophysics Data System (ADS)
Gonçalves, Patrícia
2014-02-01
We consider one-dimensional asymmetric zero-range processes starting from a step decreasing profile leading, in the hydrodynamic limit, to the rarefaction fan of the associated hydrodynamic equation. Under that initial condition, and for totally asymmetric jumps, we show that the weighted sum of joint probabilities for second class particles sharing the same site is convergent and we compute its limit. For partially asymmetric jumps, we derive the Law of Large Numbers for a second class particle, under the initial configuration in which all positive sites are empty, all negative sites are occupied with infinitely many first class particles and there is a single second class particle at the origin. Moreover, we prove that among the infinite characteristics emanating from the position of the second class particle it picks randomly one of them. The randomness is given in terms of the weak solution of the hydrodynamic equation, through some sort of renormalization function. By coupling the constant-rate totally asymmetric zero-range with the totally asymmetric simple exclusion, we derive limiting laws for more general initial conditions.
Effect of Finite Computational Domain on Turbulence Scaling Law in Both Physical and Spectral Spaces
NASA Technical Reports Server (NTRS)
Hou, Thomas Y.; Wu, Xiao-Hui; Chen, Shiyi; Zhou, Ye
1998-01-01
The well-known translation between the power law of the energy spectrum and those of the correlation function or the second-order structure function has been widely used in analyzing random data. Here, we show that the translation is valid only in proper scaling regimes. The regimes of valid translation are different for the correlation function and the structure function; indeed, they do not overlap. Furthermore, in practice, the power laws exist only for a finite range of scales. We show that this finite range makes the translation inexact even in the proper scaling regime. The error depends on the scaling exponent. These findings are applicable to data analysis in fluid turbulence and other stochastic systems.
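The translation in question says that a spectrum E(k) proportional to k**(-p), with 1 < p < 3, implies a second-order structure function S2(r) proportional to r**(p-1). The sketch below checks this by quadrature for the Kolmogorov value p = 5/3; the wavenumber cutoffs and step size are arbitrary choices, and the small residual bias away from 2/3 illustrates exactly the finite-range effect the paper analyzes.

```python
import math

def s2(r, p=5.0 / 3.0, k_max=500.0, dk=0.002):
    """Second-order structure function implied by a spectrum E(k) = k**(-p)
    over a finite wavenumber range: S2(r) = 2 * Int E(k)*(1 - cos(k*r)) dk,
    evaluated with the trapezoid rule."""
    total = 0.0
    k = dk
    prev = k ** (-p) * (1.0 - math.cos(k * r))
    while k < k_max:
        k_next = k + dk
        cur = k_next ** (-p) * (1.0 - math.cos(k_next * r))
        total += 0.5 * (prev + cur) * dk
        prev, k = cur, k_next
    return 2.0 * total

def measured_exponent(p=5.0 / 3.0):
    """Log-slope of S2 between r = 1 and r = 2; ideally p - 1."""
    return math.log(s2(2.0, p) / s2(1.0, p)) / math.log(2.0)
```

With these cutoffs the measured slope comes out close to, but not exactly at, p - 1 = 2/3, which is the paper's point: a power law over a finite scale range makes the spectrum-to-structure-function translation inexact.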
NASA Astrophysics Data System (ADS)
Aydiner, Ekrem; Cherstvy, Andrey G.; Metzler, Ralf
2018-01-01
We study by Monte Carlo simulations a kinetic exchange trading model, for both fixed and distributed saving propensities of the agents, and rationalize the person and wealth distributions. We show that the newly introduced wealth distribution (which may be more amenable in certain situations) features a different power-law exponent, particularly for distributed saving propensities of the agents. For open agent-based systems, we analyze the person and wealth distributions and find that the presence of trap agents alters their amplitude, leaving the scaling exponents nearly unaffected. For an open system, we show that the total wealth, for different trap agent densities and saving propensities of the agents, decreases in time according to the classical Kohlrausch-Williams-Watts stretched exponential law. Interestingly, this decay does not depend on the trap agent density, but rather on the saving propensities. The system relaxation is found to differ between fixed and distributed saving schemes.
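A minimal closed-system version of such a kinetic exchange model can be sketched as follows, assuming the standard saving-propensity trading rule (Chakraborti-Chakrabarti); the open-system trap agents studied in the paper are omitted, and parameter values are illustrative:

```python
import random

def trade(w_i, w_j, lam, rng):
    """One exchange: each agent keeps a saved fraction lam of its wealth
    and the pooled remainder is split at a random ratio."""
    eps = rng.random()
    pool = (1.0 - lam) * (w_i + w_j)
    return lam * w_i + eps * pool, lam * w_j + (1.0 - eps) * pool

def simulate(n_agents=500, n_trades=200_000, lam=0.5, seed=1):
    """Closed system: randomly chosen pairs trade repeatedly.  Total
    wealth is conserved and individual wealth stays non-negative."""
    rng = random.Random(seed)
    w = [1.0] * n_agents
    for _ in range(n_trades):
        i, j = rng.sample(range(n_agents), 2)
        w[i], w[j] = trade(w[i], w[j], lam, rng)
    return w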
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nash, C.; Williams, M.; Restivo, M.
All prior testing with SuperLig® 639 has been done with the aqueous concentration of LAW at ~5 M [Na+], where the resin sinks and can be used in a conventional down-flow column orientation. However, the aqueous LAW stream from the Waste Treatment Plant is expected to be ~8 M [Na+]. The resin would float in this higher-density liquid, potentially disrupting the ability to achieve good decontamination because poor packing of the resin leads to channeling. Testing was completed with a higher salt concentration in the feed simulant (7.8 M [Na+]) in an engineering-scale apparatus with two columns, each containing ~0.9 L of resin. Testing of this system used a simulant of the LAW solution, substituting ReO4- as a surrogate for TcO4-. Results were then compared using computer modeling. Bench-scale testing was also performed and examined an unconstrained resin bed, while engineering-scale tests used both constrained and unconstrained beds in a two-column, lead-and-lag sequential arrangement.
Brain injury tolerance limit based on computation of axonal strain.
Sahoo, Debasis; Deck, Caroline; Willinger, Rémy
2016-07-01
Traumatic brain injury (TBI) has been a leading cause of death and permanent impairment over the last decades. In both severe and mild TBI, diffuse axonal injury (DAI) is the most common pathology and leads to axonal degeneration. Computation of axonal strain with a finite element head model in numerical simulation can illuminate the DAI mechanism and help establish advanced head injury criteria. The main objective of this study is to develop a brain injury criterion based on computation of axonal strain. To achieve this objective, a state-of-the-art finite element head model with enhanced brain and skull material laws was used for numerical computation of real-world head trauma. New medical imaging data, such as fractional anisotropy and axonal fiber orientation from Diffusion Tensor Imaging (DTI) of 12 healthy patients, were implemented into the finite element brain model to upgrade the brain constitutive law to a more efficient heterogeneous, anisotropic visco-hyperelastic material law. The brain behavior has been validated in terms of brain deformation against Hardy et al. (2001) and Hardy et al. (2007), and in terms of brain pressure against the Nahum et al. (1977) and Trosseille et al. (1992) experiments. Verification of model stability has been conducted as well. Further, 109 well-documented TBI cases were simulated and axonal strain computed to derive a brain injury tolerance curve. Based on an in-depth statistical analysis of different intra-cerebral parameters (brain axonal strain rate, axonal strain, first principal strain, von Mises strain, first principal stress, von Mises stress, CSDM (0.10), CSDM (0.15) and CSDM (0.25)), it was shown that axonal strain was the most appropriate candidate parameter to predict DAI. The proposed brain injury tolerance limit for a 50% risk of DAI has been established at 14.65% axonal strain. This study provides a key step toward a realistic novel injury metric for DAI. Copyright © 2016 Elsevier Ltd. All rights reserved.
Optimal guidance law development for an advanced launch system
NASA Technical Reports Server (NTRS)
Calise, Anthony J.; Hodges, Dewey H.
1990-01-01
A regular perturbation analysis is presented. Closed-loop simulations were performed with a first order correction including all of the atmospheric terms. In addition, a method was developed for independently checking the accuracy of the analysis and the rather extensive programming required to implement the complete first order correction with all of the aerodynamic effects included. This amounted to developing an equivalent Hamiltonian computed from the first order analysis. A second order correction was also completed for the neglected spherical Earth and back-pressure effects. Finally, an analysis was begun on a method for dealing with control inequality constraints. The results on including higher order corrections do show some improvement for this application; however, it is not known at this stage if significant improvement will result when the aerodynamic forces are included. The weak formulation for solving optimal problems was extended in order to account for state inequality constraints. The formulation was tested on three example problems and numerical results were compared to the exact solutions. Development of a general purpose computational environment for the solution of a large class of optimal control problems is under way. An example, along with the necessary input and the output, is given.
Monte Carlo simulation of Ising models by multispin coding on a vector computer
NASA Astrophysics Data System (ADS)
Wansleben, Stephan; Zabolitzky, John G.; Kalle, Claus
1984-11-01
Rebbi's efficient multispin coding algorithm for Ising models is combined with the use of the vector computer CDC Cyber 205. A speed of 21.2 million updates per second is reached. This is comparable to that obtained by special-purpose computers.
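The idea behind multispin coding is to pack one spin per bit so that whole groups of spins are processed by single logical word operations. A minimal illustration follows, counting the antiparallel bonds of a periodic Ising chain with one XOR; this conveys the flavor of the trick, not Rebbi's full vectorized update algorithm:

```python
def pack(spins):
    """Pack a list of +/-1 spins into one integer, one bit per spin
    (bit set means spin up)."""
    word = 0
    for i, s in enumerate(spins):
        if s == 1:
            word |= 1 << i
    return word

def antiparallel_bonds(word, n):
    """Count antiparallel nearest-neighbour pairs of a periodic n-spin
    chain with a single XOR against the cyclically shifted word."""
    mask = (1 << n) - 1
    shifted = ((word >> 1) | (word << (n - 1))) & mask
    return bin((word ^ shifted) & mask).count("1")
```

The chain energy then follows as E = -J * (n - 2 * antiparallel_bonds(word, n)), since every bond is either parallel or antiparallel; the same XOR-and-popcount pattern is what makes word-parallel Monte Carlo updates fast.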
NASA Astrophysics Data System (ADS)
Cioaca, Alexandru
A deep scientific understanding of complex physical systems, such as the atmosphere, can be achieved neither by direct measurements nor by numerical simulations alone. Data assimilation is a rigorous procedure to fuse information from a priori knowledge of the system state, the physical laws governing the evolution of the system, and real measurements, all with associated error statistics. Data assimilation produces best (a posteriori) estimates of model states and parameter values, and results in considerably improved computer simulations. The acquisition and use of observations in data assimilation raises several important scientific questions related to optimal sensor network design, quantification of data impact, pruning redundant data, and identifying the most beneficial additional observations. These questions originate in operational data assimilation practice, and have started to attract considerable interest in the recent past. This dissertation advances the state of knowledge in four dimensional variational (4D-Var) data assimilation by developing, implementing, and validating a novel computational framework for estimating observation impact and for optimizing sensor networks. The framework builds on the powerful methodologies of second-order adjoint modeling and the 4D-Var sensitivity equations. Efficient computational approaches for quantifying the observation impact include matrix free linear algebra algorithms and low-rank approximations of the sensitivities to observations. The sensor network configuration problem is formulated as a meta-optimization problem. Best values for parameters such as sensor location are obtained by optimizing a performance criterion, subject to the constraint posed by the 4D-Var optimization. Tractable computational solutions to this "optimization-constrained" optimization problem are provided.
The results of this work can be directly applied to the deployment of intelligent sensors and adaptive observations, as well as to reducing the operating costs of measuring networks, while preserving their ability to capture the essential features of the system under consideration.
Formation Flying Control of Multiple Spacecraft
NASA Technical Reports Server (NTRS)
Hadaegh, F. Y.; Lau, Kenneth; Wang, P. K. C.
1997-01-01
The problem of coordination and control of multiple spacecraft (MS) moving in formation is considered. Here, each MS is modeled by a rigid body with fixed center of mass. First, various schemes for generating the desired formation patterns are discussed. Then, explicit control laws for formation-keeping and relative attitude alignment based on nearest-neighbor tracking are derived. The necessary data which must be communicated between the MS to achieve effective control are examined. The time-domain behavior of the feedback-controlled MS formation for typical low-Earth orbits is studied both analytically and via computer simulation. The paper concludes with a discussion of the implementation of the derived control laws, and the integration of the MS formation coordination and control system with a proposed inter-spacecraft communication/computing network.
NASA Astrophysics Data System (ADS)
Ignatova, V. A.; Möller, W.; Conard, T.; Vandervorst, W.; Gijbels, R.
2005-06-01
The TRIDYN collisional computer simulation has been modified to account for emission of ionic species and molecules during sputter depth profiling, by introducing a power law dependence of the ion yield as a function of the oxygen surface concentration and by modelling the sputtering of monoxide molecules. The results are compared to experimental data obtained with dual beam TOF SIMS depth profiling of ZrO2/SiO2/Si high-k dielectric stacks with thicknesses of the SiO2 interlayer of 0.5, 1, and 1.5 nm. Reasonable agreement between the experiment and the computer simulation is obtained for most of the experimental features, demonstrating the effects of ion-induced atomic relocation, i.e., atomic mixing and recoil implantation, and preferential sputtering. The depth scale of the obtained profiles is significantly distorted by recoil implantation and the depth-dependent ionization factor. A pronounced double-peak structure in the experimental profiles related to Zr is not explained by the computer simulation, and is attributed to ion-induced bond breaking and diffusion, followed by a decoration of the interfaces by either mobile Zr or O.
Stabilized linear semi-implicit schemes for the nonlocal Cahn-Hilliard equation
NASA Astrophysics Data System (ADS)
Du, Qiang; Ju, Lili; Li, Xiao; Qiao, Zhonghua
2018-06-01
Comparing with the well-known classic Cahn-Hilliard equation, the nonlocal Cahn-Hilliard equation is equipped with a nonlocal diffusion operator and can describe more practical phenomena for modeling phase transitions of microstructures in materials. On the other hand, it evidently brings more computational costs in numerical simulations, thus efficient and accurate time integration schemes are highly desired. In this paper, we propose two energy-stable linear semi-implicit methods with first and second order temporal accuracies respectively for solving the nonlocal Cahn-Hilliard equation. The temporal discretization is done by using the stabilization technique with the nonlocal diffusion term treated implicitly, while the spatial discretization is carried out by the Fourier collocation method with FFT-based fast implementations. The energy stabilities are rigorously established for both methods in the fully discrete sense. Numerical experiments are conducted for a typical case involving Gaussian kernels. We test the temporal convergence rates of the proposed schemes and make a comparison of the nonlocal phase transition process with the corresponding local one. In addition, long-time simulations of the coarsening dynamics are also performed to predict the power law of the energy decay.
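A first-order stabilized linear semi-implicit step of the kind described can be sketched as follows, written here for the *local* 1-D Cahn-Hilliard equation with Fourier collocation as a simplified stand-in (the nonlocal case replaces the eps**2 * k**4 biharmonic symbol by the symbol of the nonlocal diffusion operator; the values of eps, the stabilization constant, and the initial data are arbitrary choices):

```python
import numpy as np

def ch_stabilized(n=128, steps=200, dt=1e-2, eps=0.1, stab=2.0):
    """First-order stabilized linear semi-implicit scheme for the local 1D
    Cahn-Hilliard equation u_t = Lap(u**3 - u) - eps**2 * Lap^2 u on
    [0, 2*pi), discretized by Fourier collocation.  The nonlinear term is
    explicit; the stiff linear part and the stabilization term stab*u are
    implicit.  Returns initial energy, final energy, and final state."""
    x = 2.0 * np.pi * np.arange(n) / n
    u = 0.1 * np.cos(x) + 0.02 * np.cos(3.0 * x)   # small zero-mean seed
    k = np.fft.fftfreq(n, d=1.0 / n)               # integer wavenumbers
    k2 = k * k
    denom = 1.0 + dt * stab * k2 + dt * eps ** 2 * k2 * k2

    def energy(v):
        vx = np.real(np.fft.ifft(1j * k * np.fft.fft(v)))
        dens = 0.5 * eps ** 2 * vx ** 2 + 0.25 * (v ** 2 - 1.0) ** 2
        return float(np.sum(dens) * 2.0 * np.pi / n)

    e0 = energy(u)
    for _ in range(steps):
        u_hat = np.fft.fft(u)
        n_hat = np.fft.fft(u ** 3 - u)
        u_hat = (u_hat - dt * k2 * (n_hat - stab * u_hat)) / denom
        u = np.real(np.fft.ifft(u_hat))
    return e0, energy(u), u
```

Because the k = 0 mode is untouched by the update, mass is conserved exactly, and with a sufficiently large stabilization constant the free energy decays as the flat initial state decomposes into phases near +1 and -1, which is the discrete analogue of the energy stability established in the paper.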
A Second Law Based Unstructured Finite Volume Procedure for Generalized Flow Simulation
NASA Technical Reports Server (NTRS)
Majumdar, Alok
1998-01-01
An unstructured finite volume procedure has been developed for steady and transient thermo-fluid dynamic analysis of fluid systems and components. The procedure is applicable for a flow network consisting of pipes and various fittings where flow is assumed to be one dimensional. It can also be used to simulate flow in a component by modeling a multi-dimensional flow using the same numerical scheme. The flow domain is discretized into a number of interconnected control volumes located arbitrarily in space. The conservation equations for each control volume account for the transport of mass, momentum and entropy from the neighboring control volumes. In addition, they also include the sources of each conserved variable and time dependent terms. The source term of entropy equation contains entropy generation due to heat transfer and fluid friction. Thermodynamic properties are computed from the equation of state of a real fluid. The system of equations is solved by a hybrid numerical method which is a combination of simultaneous Newton-Raphson and successive substitution schemes. The paper also describes the application and verification of the procedure by comparing its predictions with the analytical and numerical solution of several benchmark problems.
A high-quality high-fidelity visualization of the September 11 attack on the World Trade Center.
Rosen, Paul; Popescu, Voicu; Hoffmann, Christoph; Irfanoglu, Ayhan
2008-01-01
In this application paper, we describe the efforts of a multidisciplinary team towards producing a visualization of the September 11 Attack on the North Tower of New York's World Trade Center. The visualization was designed to meet two requirements. First, the visualization had to depict the impact with high fidelity, by closely following the laws of physics. Second, the visualization had to be eloquent to a nonexpert user. This was achieved by first designing and computing a finite-element analysis (FEA) simulation of the impact between the aircraft and the top 20 stories of the building, and then by visualizing the FEA results with a state-of-the-art commercial animation system. The visualization was enabled by an automatic translator that converts the simulation data into an animation system 3D scene. We built upon a previously developed translator. The translator was substantially extended to enable and control visualization of fire and of disintegrating elements, to better scale with the number of nodes and number of states, to handle beam elements with complex profiles, and to handle smoothed particle hydrodynamics liquid representation. The resulting translator is a powerful automatic and scalable tool for high-quality visualization of FEA results.
Attitude guidance and simulation with animation of a land-survey satellite motion
NASA Astrophysics Data System (ADS)
Somova, Tatyana
2017-01-01
We consider problems of synthesizing vector spline attitude guidance laws for a land-survey satellite and of providing in-flight support to the satellite attitude control system using computer animation of its motion. We present results on the efficiency of the developed algorithms.
ERIC Educational Resources Information Center
Fisher, Diane
2005-01-01
In the case of cars and other engineered objects, humans go about the design process in a very intentional way. They pretty much know what they are aiming for. The activity described in this article demonstrates how a computer can simulate biological evolution and the laws of natural selection. The article is divided into the following sections:…
Blackbody Radiation from an Incandescent Lamp
ERIC Educational Resources Information Center
Ribeiro, C. I.
2014-01-01
In this article we propose an activity aimed at introductory students to help them understand the Stefan-Boltzmann and Wien's displacement laws. It only requires simple materials that are available at any school: an incandescent lamp, a variable dc energy supply, and a computer to run an interactive simulation of the blackbody spectrum.…
PyFly: A fast, portable aerodynamics simulator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garcia, Daniel; Ghommem, M.; Collier, Nathaniel O.
2018-03-14
Here, we present a fast, user-friendly implementation of a potential flow solver based on the unsteady vortex lattice method (UVLM), namely PyFly. UVLM computes the aerodynamic loads applied on lifting surfaces while capturing the unsteady effects such as the added mass forces, the growth of bound circulation, and the wake while assuming that the flow separation location is known a priori. This method is based on discretizing the body surface into a lattice of vortex rings and relies on the Biot–Savart law to construct the velocity field at every point in the simulated domain. We introduce the pointwise approximation approach to simulate the interactions of the far-field vortices to overcome the computational burden associated with the classical implementation of UVLM. The computational framework uses the Python programming language to provide an easy to handle user interface while the computational kernels are written in Fortran. The mixed language approach enables high performance regarding solution time and great flexibility concerning easiness of code adaptation to different system configurations and applications. The computational tool predicts the unsteady aerodynamic behavior of multiple moving bodies (e.g., flapping wings, rotating blades, suspension bridges) subject to incoming air. The aerodynamic simulator can also deal with enclosure effects, multi-body interactions, and B-spline representation of body shapes. Finally, we simulate different aerodynamic problems to illustrate the usefulness and effectiveness of PyFly.
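The Biot–Savart evaluation at the core of any vortex-lattice implementation can be sketched for a single straight filament segment; this is the textbook formula, not PyFly's actual code.

```python
import numpy as np

# Velocity induced at point p by a straight vortex filament from a to b with
# circulation gamma, via the Biot-Savart law (building block of a vortex ring).
def segment_velocity(p, a, b, gamma):
    r1, r2 = p - a, p - b
    cross = np.cross(r1, r2)
    n1, n2, nc = np.linalg.norm(r1), np.linalg.norm(r2), np.linalg.norm(cross)
    if nc < 1e-12:               # point on the filament axis: no induced velocity
        return np.zeros(3)
    r0 = b - a
    return gamma / (4 * np.pi) * cross / nc**2 * np.dot(r0, r1 / n1 - r2 / n2)

# Sanity check against the infinite-line limit u = gamma / (2 pi d):
gamma, d = 1.0, 0.1
v = segment_velocity(np.array([0.0, d, 0.0]),
                     np.array([-1e4, 0.0, 0.0]),
                     np.array([1e4, 0.0, 0.0]), gamma)
print(v, gamma / (2 * np.pi * d))
```

For a very long segment the induced speed at perpendicular distance d approaches the 2-D point-vortex value Γ/(2πd), which the check reproduces.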
NASA Astrophysics Data System (ADS)
Larsen, J. D.; Schaap, M. G.
2013-12-01
Recent advances in computing technology and experimental techniques have made it possible to observe and characterize fluid dynamics at the micro-scale. Many computational methods exist that can adequately simulate fluid flow in porous media. Lattice Boltzmann methods provide the distinct advantage of tracking particles at the microscopic level and returning macroscopic observations. While experimental methods can accurately measure macroscopic fluid dynamics, computational efforts can be used to predict and gain insight into fluid dynamics by utilizing thin sections or computed micro-tomography (CMT) images of core sections. Although substantial efforts have been made to advance non-invasive imaging methods such as CMT, fluid dynamics simulations, and microscale analysis, a true three-dimensional image segmentation technique has not been developed until recently. Many competing segmentation techniques are utilized in industry and research settings with varying results. In this study, the lattice Boltzmann method is used to simulate Stokes flow in a macroporous soil column. Two-dimensional CMT images were used to reconstruct a three-dimensional representation of the original sample. Six competing segmentation standards were used to binarize the CMT volumes, providing the distinction between solid phase and pore space. The permeability of the reconstructed samples was calculated, with Darcy's Law, from lattice Boltzmann simulations of fluid flow in the samples. We compare simulated permeability from differing segmentation algorithms to experimental findings.
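A minimal sketch of the permeability-extraction step: a D2Q9 lattice Boltzmann solve of force-driven flow in an idealized slit channel, with Darcy's law applied to the resulting mean velocity. All lattice parameters are illustrative; a real CMT-derived geometry would replace the flat walls.

```python
import numpy as np

# D2Q9 lattice Boltzmann, body-force-driven Stokes flow in a 2-D slit channel,
# permeability extracted via Darcy's law k = nu * ubar / g (lattice units).
nx, ny, tau, g = 4, 23, 1.0, 1e-6
nu = (tau - 0.5) / 3.0                      # lattice kinematic viscosity
cx = np.array([0, 1, 0, -1, 0, 1, -1, -1, 1])
cy = np.array([0, 0, 1, 0, -1, 1, 1, -1, -1])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
opp = np.array([0, 3, 4, 1, 2, 7, 8, 5, 6])
solid = np.zeros((nx, ny), dtype=bool)
solid[:, 0] = solid[:, -1] = True           # no-slip walls via bounce-back

f = w[:, None, None] * np.ones((9, nx, ny))
for _ in range(8000):
    rho = f.sum(axis=0)
    ux = (cx[:, None, None] * f).sum(axis=0) / rho + g * tau  # forcing shift
    uy = (cy[:, None, None] * f).sum(axis=0) / rho
    cu = cx[:, None, None] * ux + cy[:, None, None] * uy
    feq = w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*(ux**2 + uy**2))
    fpost = f + (feq - f) / tau             # BGK collision
    fpost[:, solid] = f[opp][:, solid]      # full-way bounce-back at solids
    for i in range(9):
        f[i] = np.roll(np.roll(fpost[i], cx[i], axis=0), cy[i], axis=1)

ubar = ((cx[:, None, None] * f).sum(axis=0) / f.sum(axis=0))[~solid].mean()
k_sim = nu * ubar / g                       # Darcy's law
k_ref = (ny - 2)**2 / 12.0                  # analytic slit permeability h^2/12
print(k_sim, k_ref)
```

For a slit of width h the analytic permeability is h²/12, which the simulation recovers to within a few percent; in the study above the same Darcy step is applied to segmented CMT geometries instead of a slit.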
Computational Fluid Dynamics Demonstration of Rigid Bodies in Motion
NASA Technical Reports Server (NTRS)
Camarena, Ernesto; Vu, Bruce T.
2011-01-01
The Design Analysis Branch (NE-Ml) at the Kennedy Space Center has not had the ability to accurately couple Rigid Body Dynamics (RBD) and Computational Fluid Dynamics (CFD). OVERFLOW-D is a flow solver that has been developed by NASA to have the capability to analyze and simulate dynamic motions with up to six Degrees of Freedom (6-DOF). Two simulations were prepared over the course of the internship to demonstrate 6-DOF motion of rigid bodies under aerodynamic loading. The geometries in the simulations were based on a conceptual Space Launch System (SLS). The first simulation that was prepared and computed was the motion of a Solid Rocket Booster (SRB) as it separates from its core stage. To reduce computational time during the development of the simulation, only half of the physical domain with respect to the symmetry plane was simulated. Then a full solution was prepared and computed. The second simulation was a model of the SLS as it departs from a launch pad under a 20-knot crosswind. This simulation was reduced to Two Dimensions (2D) to reduce both preparation and computation time. By allowing 2-DOF for translations and 1-DOF for rotation, the simulation predicted unrealistic rotation. The simulation was then constrained to only allow translations.
Teaching Physiology and the World Wide Web: Electrochemistry and Electrophysiology on the Internet.
ERIC Educational Resources Information Center
Dwyer, Terry M.; Fleming, John; Randall, James E.; Coleman, Thomas G.
1997-01-01
Presents two examples of laboratory exercises using the World Wide Web for first-year medical students. The first example introduces the physical laws that apply to osmotic, chemical, and electrical gradients and a simulation of the ability of the sodium-potassium pump to establish chemical gradients and maintain cell volume. The second module…
Computing nonhydrostatic shallow-water flow over steep terrain
Denlinger, R.P.; O'Connell, D. R. H.
2008-01-01
Flood and dambreak hazards are not limited to moderate terrain, yet most shallow-water models assume that flow occurs over gentle slopes. Shallow-water flow over rugged or steep terrain often generates significant nonhydrostatic pressures, violating the assumption of hydrostatic pressure made in most shallow-water codes. In this paper, we adapt a previously published nonhydrostatic granular flow model to simulate shallow-water flow, and we solve the conservation equations using a finite volume approach and a Harten, Lax, van Leer, and Einfeldt approximate Riemann solver that is modified for a sloping bed and transient wetting and drying conditions. To simulate bed friction, we use the law of the wall. We test the model by comparison with an analytical solution and with results of experiments in flumes that have steep (31°) or shallow (0.3°) slopes. The law of the wall provides an accurate prediction of the effect of bed roughness on mean flow velocity over two orders of magnitude of bed roughness. Our nonhydrostatic, law-of-the-wall flow simulation accurately reproduces flume measurements of front propagation speed, flow depth, and bed-shear stress for conditions of large bed roughness. © 2008 ASCE.
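The law of the wall enters such a model through the depth-averaged velocity it implies. A quick consistency check of the analytic depth average against numerical quadrature; the friction velocity, depth and roughness length below are illustrative.

```python
import numpy as np

# Law of the wall: u(z) = (u*/kappa) * ln(z / z0). Integrating from z0 to the
# flow depth h (u = 0 below z0) gives the depth-averaged velocity
#   ubar = (u*/kappa) * (ln(h/z0) - 1 + z0/h).
kappa = 0.41                     # von Karman constant

def ubar_log(ustar, h, z0):
    return (ustar / kappa) * (np.log(h / z0) - 1 + z0 / h)

ustar, h, z0 = 0.05, 0.5, 1e-3   # illustrative friction velocity, depth, roughness
z = np.linspace(z0, h, 100001)
u = (ustar / kappa) * np.log(z / z0)
ubar_num = np.sum(0.5 * (u[1:] + u[:-1]) * np.diff(z)) / h  # trapezoid / depth
print(ubar_log(ustar, h, z0), ubar_num)
```

The closed form is what a depth-averaged shallow-water code uses to convert bed roughness z0 into an effective friction on the mean flow.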
Quantum Gauss-Jordan Elimination and Simulation of Accounting Principles on Quantum Computers
NASA Astrophysics Data System (ADS)
Diep, Do Ngoc; Giang, Do Hoang; Van Minh, Nguyen
2017-06-01
The paper is devoted to a version of Quantum Gauss-Jordan Elimination and its applications. In the first part, we construct the Quantum Gauss-Jordan Elimination (QGJE) Algorithm and estimate the complexity of computation of the Reduced Row Echelon Form (RREF) of N × N matrices. The main result asserts that QGJE has a computation time of order 2^(N/2). The second part is devoted to a new idea of simulation of accounting by quantum computing. We first expose the actual accounting principles in a pure mathematics language. Then, we simulate the accounting principles on quantum computers. We show that all accounting actions are exhausted by the described basic actions. The main problems of accounting are reduced to some system of linear equations in the economic model of Leontief. In this simulation, we use our constructed Quantum Gauss-Jordan Elimination to solve the problems, and the complexity of quantum computing is a square-root order faster than the complexity in classical computing.
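For contrast with the quantum algorithm, the classical Gauss-Jordan reduction to RREF, the O(N³) baseline, is short to state:

```python
import numpy as np

# Classical Gauss-Jordan elimination to reduced row echelon form (RREF),
# with partial pivoting for numerical stability.
def rref(A, tol=1e-12):
    A = A.astype(float).copy()
    rows, cols = A.shape
    r = 0
    for c in range(cols):
        if r == rows:
            break
        piv = r + np.argmax(np.abs(A[r:, c]))   # partial pivoting
        if abs(A[piv, c]) < tol:
            continue                            # no pivot in this column
        A[[r, piv]] = A[[piv, r]]
        A[r] /= A[r, c]
        for i in range(rows):
            if i != r:
                A[i] -= A[i, c] * A[r]
        r += 1
    return A

M = np.array([[2.0, 1.0, -1.0, 8.0],
              [-3.0, -1.0, 2.0, -11.0],
              [-2.0, 1.0, 2.0, -3.0]])
R = rref(M)
print(R)        # last column holds the solution of the augmented system
```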
Predictive IP controller for robust position control of linear servo system.
Lu, Shaowu; Zhou, Fengxing; Ma, Yajie; Tang, Xiaoqi
2016-07-01
Position control is a typical application of a linear servo system. In this paper, to reduce the system overshoot, an integral plus proportional (IP) controller is used in the position control implementation. To further improve the control performance, a gain-tuning IP controller based on a generalized predictive control (GPC) law is proposed. Firstly, to represent the dynamics of the position loop, a second-order linear model is used and its model parameters are estimated on-line by using a recursive least squares method. Secondly, based on the GPC law, an optimal control sequence is obtained by using a receding horizon, which directly supplies the IP controller with the corresponding control parameters during real operation. Finally, simulation and experimental results are presented to show the efficiency of the proposed scheme. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
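The IP structure, with the proportional action on the feedback path only, can be sketched on a toy first-order velocity loop; the plant and gain values are invented for illustration, and the GPC gain tuning is omitted.

```python
# Toy simulation of an IP (integral-plus-proportional) position controller on a
# first-order velocity loop: T v' = K u - v, x' = v. In the IP structure the
# proportional term acts on the feedback only, u = Ki*integral(r - x) - Kp*x,
# which typically reduces setpoint overshoot relative to a standard PI.
T, K = 0.1, 1.0              # illustrative plant time constant and gain
Kp, Ki = 2.0, 5.0            # stable: Routh condition requires Kp > T*Ki
dt, r = 1e-4, 1.0            # Euler step and unit position setpoint
x = v = z = 0.0
xs = []
for _ in range(int(10.0 / dt)):   # simulate 10 s
    z += (r - x) * dt             # integral of the tracking error
    u = Ki * z - Kp * x           # IP law: no proportional kick from r
    v += dt * (K * u - v) / T
    x += dt * v
    xs.append(x)
print(x, max(xs))
```

The closed-loop characteristic polynomial is T s³ + s² + K Kp s + K Ki, so these gains are stable and the position settles at the setpoint with moderate overshoot.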
Study of nonequilibrium work distributions from a fluctuating lattice Boltzmann model.
Nasarayya Chari, S Siva; Murthy, K P N; Inguva, Ramarao
2012-04-01
A system of ideal gas is switched from an initial equilibrium state to a final state not necessarily in equilibrium, by varying a macroscopic control variable according to a well-defined protocol. The distribution of work performed during the switching process is obtained. The equilibrium free energy difference, ΔF, is determined from the work fluctuation relation. Some of the work values in the ensemble will be less than ΔF. We term these as ones that "violate" the second law of thermodynamics. A fluctuating lattice Boltzmann model has been employed to carry out the simulation of the switching experiment. Our results show that the probability of violation of the second law increases with the increase of switching time (τ) and tends to one-half in the reversible limit of τ→∞.
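The work-fluctuation bookkeeping can be illustrated with a Gaussian toy work distribution in place of the lattice Boltzmann switching simulation (units with kT = 1; μ and σ are illustrative). As σ → 0, the "violation" fraction Φ(−σ/2) → 1/2, matching the reversible limit noted above.

```python
import numpy as np

# For Gaussian work W ~ N(mu, sigma^2), the Jarzynski relation
# <exp(-W)> = exp(-dF) gives dF = mu - sigma^2/2 exactly, so a finite
# fraction of trajectories has W < dF ("second-law-violating" work values).
rng = np.random.default_rng(1)
mu, sigma, n = 2.0, 1.0, 400000
W = rng.normal(mu, sigma, n)

dF_exact = mu - sigma**2 / 2
dF_jar = -np.log(np.mean(np.exp(-W)))    # Jarzynski estimator
frac_violate = np.mean(W < dF_exact)     # exact value: Phi(-sigma/2) ~ 0.3085
print(dF_jar, dF_exact, frac_violate)
```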
Control Laws for a Dual-Spin Stabilized Platform
NASA Technical Reports Server (NTRS)
Lim, K. B.; Moerder, D. D.
2008-01-01
This paper describes two attitude control laws suitable for atmospheric flight vehicles with a steady angular momentum bias in the vehicle yaw axis. This bias is assumed to be provided by an internal flywheel, and is introduced to enhance roll and pitch stiffness. The first control law is based on Lyapunov stability theory, and stability proofs are given. The second control law, which assumes that the angular momentum bias is large, is based on a classical PID control. It is shown that the large yaw-axis bias requires that the PI feedback component on the roll and pitch angle errors be cross-fed. Both control laws are applied to a vehicle simulation in the presence of disturbances for several values of yaw-axis angular momentum bias. It is seen that both control laws provide a significant improvement in attitude performance when the bias is sufficiently large, but the nonlinear control law is also able to provide improved performance for a small value of bias. This is important because the smaller bias corresponds to a smaller requirement for mass to be dedicated to the flywheel.
3D simulations of early blood vessel formation
NASA Astrophysics Data System (ADS)
Cavalli, F.; Gamba, A.; Naldi, G.; Semplice, M.; Valdembri, D.; Serini, G.
2007-08-01
Blood vessel networks form by spontaneous aggregation of individual cells migrating toward vascularization sites (vasculogenesis). A successful theoretical model of two-dimensional experimental vasculogenesis has been recently proposed, showing the relevance of percolation concepts and of cell cross-talk (chemotactic autocrine loop) to the understanding of this self-aggregation process. Here we study the natural 3D extension of the computational model proposed earlier, which is relevant for the investigation of the genuinely three-dimensional process of vasculogenesis in vertebrate embryos. The computational model is based on a multidimensional Burgers equation coupled with a reaction-diffusion equation for a chemotactic factor and a mass conservation law. The numerical approximation of the computational model is obtained by high-order relaxed schemes. Space and time discretizations are performed using TVD and IMEX schemes, respectively. Due to the computational costs of realistic simulations, we have implemented the numerical algorithm on a cluster for parallel computation. Starting from initial conditions mimicking the experimentally observed ones, numerical simulations produce network-like structures qualitatively similar to those observed in the early stages of in vivo vasculogenesis. We develop the computation of critical percolative indices as a robust measure of the network geometry as a first step towards the comparison of computational and experimental data.
Effectiveness of Simulation in a Hybrid and Online Networking Course.
ERIC Educational Resources Information Center
Cameron, Brian H.
2003-01-01
Reports on a study that compares the performance of students enrolled in two sections of a Web-based computer networking course: one utilizing a simulation package and the second utilizing a static, graphical software package. Analysis shows statistically significant improvements in performance in the simulation group compared to the…
Analysis, preliminary design and simulation systems for control-structure interaction problems
NASA Technical Reports Server (NTRS)
Park, K. C.; Alvin, Kenneth F.
1991-01-01
Software aspects of control-structure interaction (CSI) analysis are discussed. The following subject areas are covered: (1) implementation of a partitioned algorithm for simulation of large CSI problems; (2) second-order discrete Kalman filtering equations for CSI simulations; and (3) parallel computations and control of adaptive structures.
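Item (2) can be sketched as a minimal discrete Kalman filter for a single structural mode with noisy position measurements; all matrices below are illustrative, not the paper's CSI equations.

```python
import numpy as np

# Discrete Kalman filter for one undamped structural mode,
# state x = [position, velocity], noisy position measurements.
dt, wn = 0.01, 5.0                               # time step, modal freq (rad/s)
th = wn * dt
F = np.array([[np.cos(th), np.sin(th) / wn],
              [-wn * np.sin(th), np.cos(th)]])   # exact mode propagator
H = np.array([[1.0, 0.0]])                       # measure position only
Q = 1e-6 * np.eye(2)                             # process noise covariance
R = np.array([[0.01]])                           # measurement noise covariance

rng = np.random.default_rng(0)
x_true = np.array([1.0, 0.0])
x_est, P = np.zeros(2), np.eye(2)
err_meas, err_est = [], []
for _ in range(2000):
    x_true = F @ x_true
    z = H @ x_true + rng.normal(0.0, np.sqrt(R[0, 0]))
    x_est = F @ x_est                            # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                          # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    x_est = x_est + K @ (z - H @ x_est)          # update
    P = (np.eye(2) - K @ H) @ P
    err_meas.append(abs(z[0] - x_true[0]))
    err_est.append(abs(x_est[0] - x_true[0]))
print(np.mean(err_meas), np.mean(err_est))
```

The filtered position error is substantially smaller than the raw measurement error, which is the point of using a filter in CSI simulation loops.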
NASA Astrophysics Data System (ADS)
Zhang, Shuying; Zhou, Xiaoqing; Qin, Zhuanping; Zhao, Huijuan
2011-02-01
This article aims at the development of a fast inverse Monte Carlo (MC) simulation for the reconstruction of the optical properties (absorption coefficient μa and scattering coefficient μs) of cylindrical tissue, such as a cervix, from frequency-domain measurements of near-infrared diffuse light. Frequency-domain information (amplitude and phase) is extracted from time-domain MC with a modified method. To shorten the computation time in reconstruction of the optical properties, an efficient and fast forward MC has to be achieved. To do this, firstly, databases of the frequency-domain information over a range of μa and μs were pre-built by combining MC simulation with the Lambert-Beer law. Then, a double polynomial model was adopted to quickly obtain the frequency-domain information for any optical properties. Based on the fast forward MC, the optical properties can be quickly obtained in a nonlinear optimization scheme. Reconstruction from simulated data showed that the developed inverse MC method has advantages in both reconstruction accuracy and computation time. The relative errors in reconstruction of μs and μa are less than ±6% and ±12% respectively, while the other coefficient (μa or μs) is held at a fixed value. When both μa and μs are unknown, the relative errors in reconstruction of the reduced scattering coefficient and the absorption coefficient are mainly less than ±10% in the ranges 45 < μs < 80 cm⁻¹ and 0.25 < μa < 0.55 cm⁻¹. With the rapid reconstruction strategy developed in this article, the computation time for reconstructing one set of optical properties is less than 0.5 s. Endoscopic measurements on two tubular solid phantoms were also carried out to evaluate the system and the inversion scheme. The results demonstrated that less than 20% relative error can be achieved.
The second laws of quantum thermodynamics.
Brandão, Fernando; Horodecki, Michał; Ng, Nelly; Oppenheim, Jonathan; Wehner, Stephanie
2015-03-17
The second law of thermodynamics places constraints on state transformations. It applies to systems composed of many particles, however, we are seeing that one can formulate laws of thermodynamics when only a small number of particles are interacting with a heat bath. Is there a second law of thermodynamics in this regime? Here, we find that for processes which are approximately cyclic, the second law for microscopic systems takes on a different form compared to the macroscopic scale, imposing not just one constraint on state transformations, but an entire family of constraints. We find a family of free energies which generalize the traditional one, and show that they can never increase. The ordinary second law relates to one of these, with the remainder imposing additional constraints on thermodynamic transitions. We find three regimes which determine which family of second laws govern state transitions, depending on how cyclic the process is. In one regime one can cause an apparent violation of the usual second law, through a process of embezzling work from a large system which remains arbitrarily close to its original state. These second laws are relevant for small systems, and also apply to individual macroscopic systems interacting via long-range interactions. By making precise the definition of thermal operations, the laws of thermodynamics are unified in this framework, with the first law defining the class of operations, the zeroth law emerging as an equivalence relation between thermal states, and the remaining laws being monotonicity of our generalized free energies.
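The generalized free energies mentioned above are built from Rényi divergences relative to the thermal state, F_α(p) = kT·D_α(p‖γ) + F(γ). A sketch for a classical two-level system (kT = 1 units, α > 0 branch only; the distribution p is illustrative):

```python
import numpy as np

# Renyi divergence D_alpha(p||g) = log(sum_i p_i^alpha g_i^(1-alpha))/(alpha-1),
# with the alpha -> 1 limit recovering the Kullback-Leibler divergence (and
# hence the ordinary free energy).
E = np.array([0.0, 1.0])                  # two-level energies, kT = 1
gamma = np.exp(-E) / np.exp(-E).sum()     # thermal (Gibbs) state

def D_alpha(p, g, alpha):
    if np.isclose(alpha, 1.0):            # KL limit
        return float(np.sum(p * np.log(p / g)))
    return float(np.log(np.sum(p**alpha * g**(1 - alpha))) / (alpha - 1))

p = np.array([0.9, 0.1])                  # a non-equilibrium distribution
alphas = [0.5, 0.999, 1.0, 2.0]
vals = [D_alpha(p, gamma, a) for a in alphas]
print(vals)
```

Each D_α is non-negative and non-decreasing in α, and D_1 reproduces the ordinary (KL-based) free energy difference; the second laws state that every member of the family must be non-increasing under thermal operations.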
Numerical simulation of steady supersonic flow. [spatial marching
NASA Technical Reports Server (NTRS)
Schiff, L. B.; Steger, J. L.
1981-01-01
A noniterative, implicit, space-marching, finite-difference algorithm was developed for the steady thin-layer Navier-Stokes equations in conservation-law form. The numerical algorithm is applicable to steady supersonic viscous flow over bodies of arbitrary shape. In addition, the same code can be used to compute supersonic inviscid flow or three-dimensional boundary layers. Computed results from two-dimensional and three-dimensional versions of the numerical algorithm are in good agreement with those obtained from more costly time-marching techniques.
Hall-Effect Thruster Simulations with 2-D Electron Transport and Hydrodynamic Ions
NASA Technical Reports Server (NTRS)
Mikellides, Ioannis G.; Katz, Ira; Hofer, Richard H.; Goebel, Dan M.
2009-01-01
A computational approach that has been used extensively in the last two decades for Hall thruster simulations is to solve a diffusion equation and energy conservation law for the electrons in a direction that is perpendicular to the magnetic field, and use discrete-particle methods for the heavy species. This "hybrid" approach has allowed for the capture of bulk plasma phenomena inside these thrusters within reasonable computational times. Regions of the thruster with complex magnetic field arrangements (such as those near eroded walls and magnets) and/or reduced Hall parameter (such as those near the anode and the cathode plume) challenge the validity of the quasi-one-dimensional assumption for the electrons. This paper reports on the development of a computer code that solves numerically the 2-D axisymmetric vector form of Ohm's law, with no assumptions regarding the rate of electron transport in the parallel and perpendicular directions. The numerical challenges related to the large disparity of the transport coefficients in the two directions are met by solving the equations in a computational mesh that is aligned with the magnetic field. The fully-2D approach allows for a large physical domain that extends more than five times the thruster channel length in the axial direction, and encompasses the cathode boundary. Ions are treated as an isothermal, cold (relative to the electrons) fluid, accounting for charge-exchange and multiple-ionization collisions in the momentum equations. A first series of simulations of two Hall thrusters, namely the BPT-4000 and a 6-kW laboratory thruster, quantifies the significance of ion diffusion in the anode region and the importance of the extended physical domain on studies related to the impact of the transport coefficients on the electron flow field.
Non-Newtonian Aspects of Artificial Intelligence
NASA Astrophysics Data System (ADS)
Zak, Michail
2016-05-01
The challenge of this work is to connect physics with the concept of intelligence. By intelligence we understand a capability to move from disorder to order without external resources, i.e., in violation of the second law of thermodynamics. The objective is to find a mathematical object, described by ODEs, that possesses such a capability. The proposed approach is based upon modification of the Madelung version of the Schrodinger equation by replacing the force following from the quantum potential with non-conservative forces that link to the concept of information. The mathematical formalism suggests that a hypothetical intelligent particle, besides the capability to move against the second law of thermodynamics, acquires properties like self-image, self-awareness and self-supervision that are typical of living systems. However, since this particle, being a quantum-classical hybrid, acquires non-Newtonian and non-quantum properties, it does not belong to physical matter as we know it: modern physics should be complemented with the concept of an information force that represents a bridge to the intelligent particle. As a follow-up to the proposed concept, the following question is addressed: can an artificial intelligence (AI) system composed only of physical components compete with a human? The answer is proven to be negative if the AI system is based only on simulations, and positive if digital devices are included. It has been demonstrated that there exists a quantum neural net that performs simulations combined with digital punctuations. The universality of this quantum-classical hybrid lies in its capability to violate the second law of thermodynamics by moving from disorder to order without external resources. This advanced capability is illustrated by examples. In conclusion, a mathematical machinery of perception, which is a fundamental part of the cognition process as well as intelligence, is introduced and discussed.
NASA Astrophysics Data System (ADS)
Immanuel, Y.; Pullepu, Bapuji; Sambath, P.
2018-04-01
A two-dimensional mathematical model is formulated for transient, laminar, free-convective, incompressible viscous fluid flow over a vertical cone with variable surface heat flux, combined with the effects of heat generation and absorption. Using a powerful computational method based on a thermoelectric analogy, called the Network Simulation Method (NSM), solutions of the governing nondimensional, coupled, unsteady and nonlinear partial differential conservation equations of the flow are obtained. The numerical technique is always stable and convergent, establishing high efficiency and accuracy, and employs the network simulator computer code Pspice. The effects on the velocity and temperature profiles of various factors, namely the Prandtl number Pr, the heat-flux power-law exponent n and the heat generation/absorption parameter Δ, are analyzed graphically.
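The thermoelectric analogy underlying NSM can be sketched without a circuit simulator: discretizing 1-D transient conduction yields an RC-ladder network, here integrated by explicit Euler; all values are illustrative.

```python
import numpy as np

# Thermoelectric analogy: discretizing 1-D conduction T_t = alpha * T_xx turns
# each cell into a resistor-capacitor pair (R ~ dx/k, C ~ rho*c*dx), i.e. an
# RC ladder that a circuit simulator such as Pspice can integrate directly.
n, alpha = 21, 1.0
dx = 1.0 / (n - 1)
dt = 0.2 * dx**2 / alpha                 # stable explicit step
T = np.zeros(n)
T[0], T[-1] = 0.0, 1.0                   # fixed "voltage" boundary nodes
for _ in range(5000):                    # march to steady state
    T[1:-1] += dt * alpha * (T[:-2] - 2 * T[1:-1] + T[2:]) / dx**2
print(T)                                 # approaches the linear steady profile
```

The steady state is the linear profile between the two boundary "voltages", exactly as a resistor ladder would settle.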
Realistic natural atmospheric phenomena and weather effects for interactive virtual environments
NASA Astrophysics Data System (ADS)
McLoughlin, Leigh
Clouds and the weather are important aspects of any natural outdoor scene, but existing dynamic techniques within computer graphics only offer the simplest of cloud representations. The problem that this work looks to address is how to provide a means of simulating clouds and weather features such as precipitation, that are suitable for virtual environments. Techniques for cloud simulation are available within the area of meteorology, but numerical weather prediction systems are computationally expensive, give more numerical accuracy than we require for graphics and are restricted to the laws of physics. Within computer graphics, we often need to direct and adjust physical features or to bend reality to meet artistic goals, which is a key difference between the subjects of computer graphics and physical science. Pure physically-based simulations, however, evolve their solutions according to pre-set rules and are notoriously difficult to control. The challenge then is for the solution to be computationally lightweight and able to be directed in some measure while at the same time producing believable results. This work presents a lightweight physically-based cloud simulation scheme that simulates the dynamic properties of cloud formation and weather effects. The system simulates water vapour, cloud water, cloud ice, rain, snow and hail. The water model incorporates control parameters and the cloud model uses an arbitrary vertical temperature profile, with a tool described to allow the user to define this. The result of this work is that clouds can now be simulated in near real-time complete with precipitation. The temperature profile and tool then provide a means of directing the resulting formation.
Stark, Austin C.; Andrews, Casey T.; Elcock, Adrian H.
2013-01-01
Coarse-grained (CG) simulation methods are now widely used to model the structure and dynamics of large biomolecular systems. One important issue for using such methods – especially with regard to using them to model, for example, intracellular environments – is to demonstrate that they can reproduce experimental data on the thermodynamics of protein-protein interactions in aqueous solutions. To examine this issue, we describe here simulations performed using the popular coarse-grained MARTINI force field, aimed at computing the thermodynamics of lysozyme and chymotrypsinogen self-interactions in aqueous solution. Using molecular dynamics simulations to compute potentials of mean force between a pair of protein molecules, we show that the original parameterization of the MARTINI force field is likely to significantly overestimate the strength of protein-protein interactions to the extent that the computed osmotic second virial coefficients are orders of magnitude more negative than experimental estimates. We then show that a simple down-scaling of the van der Waals parameters that describe the interactions between protein pseudo-atoms can bring the simulated thermodynamics into much closer agreement with experiment. Overall, the work shows that it is feasible to test explicit-solvent CG force fields directly against thermodynamic data for proteins in aqueous solutions, and highlights the potential usefulness of osmotic second virial coefficient measurements for fully parameterizing such force fields. PMID:24223529
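The osmotic second virial coefficient referred to above is obtained from a potential of mean force by a single radial integral. A sketch using a hard-sphere PMF, for which B22 has the closed form 2πs³/3 (units with kT = 1; the diameter s is illustrative):

```python
import numpy as np

# Osmotic second virial coefficient from a potential of mean force W(r):
#   B22 = -2 pi * integral_0^inf (exp(-W(r)) - 1) r^2 dr   (kT = 1)
# A negative B22 signals net protein-protein attraction, which is what the
# unscaled MARTINI PMFs above overestimate.
s = 3.0                                   # hard-sphere diameter (illustrative)
r = np.linspace(1e-4, 20.0, 400000)
W = np.where(r < s, np.inf, 0.0)          # hard-sphere PMF
mayer = np.exp(-W) - 1.0                  # Mayer f-function
integrand = mayer * r**2
B22 = -2 * np.pi * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))
print(B22, 2 * np.pi * s**3 / 3)
```

Replacing the hard-sphere W(r) with a simulated PMF (e.g. from the MARTINI runs above) gives the quantity compared against the experimental virial coefficients.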
Stark, Austin C; Andrews, Casey T; Elcock, Adrian H
2013-09-10
Coarse-grained (CG) simulation methods are now widely used to model the structure and dynamics of large biomolecular systems. One important issue for using such methods - especially with regard to using them to model, for example, intracellular environments - is to demonstrate that they can reproduce experimental data on the thermodynamics of protein-protein interactions in aqueous solutions. To examine this issue, we describe here simulations performed using the popular coarse-grained MARTINI force field, aimed at computing the thermodynamics of lysozyme and chymotrypsinogen self-interactions in aqueous solution. Using molecular dynamics simulations to compute potentials of mean force between a pair of protein molecules, we show that the original parameterization of the MARTINI force field is likely to significantly overestimate the strength of protein-protein interactions to the extent that the computed osmotic second virial coefficients are orders of magnitude more negative than experimental estimates. We then show that a simple down-scaling of the van der Waals parameters that describe the interactions between protein pseudo-atoms can bring the simulated thermodynamics into much closer agreement with experiment. Overall, the work shows that it is feasible to test explicit-solvent CG force fields directly against thermodynamic data for proteins in aqueous solutions, and highlights the potential usefulness of osmotic second virial coefficient measurements for fully parameterizing such force fields.
Generalized Arcsine Laws for Fractional Brownian Motion
NASA Astrophysics Data System (ADS)
Sadhu, Tridib; Delorme, Mathieu; Wiese, Kay Jörg
2018-01-01
The three arcsine laws for Brownian motion are a cornerstone of extreme-value statistics. For a Brownian motion B_t starting from the origin and evolving during time T, one considers the following three observables: (i) the duration t+ the process is positive, (ii) the time t_last the process last visits the origin, and (iii) the time t_max when it achieves its maximum (or minimum). All three observables have the same cumulative probability distribution expressed as an arcsine function, thus the name arcsine laws. We show how these laws change for fractional Brownian motion X_t, a non-Markovian Gaussian process indexed by the Hurst exponent H. It generalizes standard Brownian motion (i.e., H = 1/2). We obtain the three probabilities using a perturbative expansion in ε = H - 1/2. While all three probabilities are different, this distinction can only be made at second order in ε. Our results are confirmed to high precision by extensive numerical simulations.
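The first arcsine law for standard Brownian motion (H = 1/2) can be checked with a short Monte Carlo sketch. The path count, step count, and evaluation point below are illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps = 4000, 1000

# Discretized Brownian paths starting from the origin
paths = np.cumsum(rng.normal(size=(n_paths, n_steps)), axis=1)

# Observable (i): fraction of time each path spends positive, t+/T
frac_positive = (paths > 0).mean(axis=1)

# Arcsine-law CDF: P(t+/T <= x) = (2/pi) * arcsin(sqrt(x))
x = 0.25
empirical = (frac_positive <= x).mean()
theoretical = (2.0 / np.pi) * np.arcsin(np.sqrt(x))
```

At x = 0.25 the arcsine CDF evaluates to exactly 1/3, and the empirical estimate should approach it as the number of paths and steps grows.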
NASA Astrophysics Data System (ADS)
Bini, Donato; Damour, Thibault; Geralico, Andrea
2016-03-01
We analytically compute, through the six-and-a-half post-Newtonian order, the second-order-in-eccentricity piece of the Detweiler-Barack-Sago gauge-invariant redshift function for a small mass in eccentric orbit around a Schwarzschild black hole. Using the first law of mechanics for eccentric orbits [A. Le Tiec, First law of mechanics for compact binaries on eccentric orbits, Phys. Rev. D 92, 084021 (2015).] we transcribe our result into a correspondingly accurate knowledge of the second radial potential of the effective-one-body formalism [A. Buonanno and T. Damour, Effective one-body approach to general relativistic two-body dynamics, Phys. Rev. D 59, 084006 (1999).]. We compare our newly acquired analytical information to several different numerical self-force data and find good agreement, within estimated error bars. We also obtain, for the first time, independent analytical checks of the recently derived, comparable-mass fourth-post-Newtonian order dynamics [T. Damour, P. Jaranowski, and G. Schaefer, Nonlocal-in-time action for the fourth post-Newtonian conservative dynamics of two-body systems, Phys. Rev. D 89, 064058 (2014).].
Inverse free steering law for small satellite attitude control and power tracking with VSCMGs
NASA Astrophysics Data System (ADS)
Malik, M. S. I.; Asghar, Sajjad
2014-01-01
Recent developments in integrated power and attitude control systems (IPACSs) for small satellites have opened a new dimension to more complex and demanding space missions. This paper presents a new inverse free steering approach for integrated power and attitude control systems using variable-speed single gimbal control moment gyroscopes. The proposed inverse free steering law computes the VSCMG steering commands (gimbal rates and wheel accelerations) such that the error signal (difference between command and output) in the feedback loop is driven to zero. An H∞ norm optimization approach is employed to synthesize the static matrix elements of the steering law for a static state of the VSCMG. Later these matrix elements are suitably made dynamic in order to allow adaptation. To improve the performance of the proposed steering law while passing through a singular state of the CMG cluster (no torque output), the matrix elements of the steering law are suitably modified. Therefore, this steering law is capable of escaping internal singularities and using the full momentum capacity of the CMG cluster. Finally, two numerical examples for a satellite in a low earth orbit are simulated to test the proposed steering law.
Rapid inundation estimates using coastal amplification laws in the western Mediterranean basin
NASA Astrophysics Data System (ADS)
Gailler, Audrey; Loevenbruck, Anne; Hébert, Hélène
2014-05-01
Numerical tsunami propagation and inundation models are well developed and have now reached an impressive level of accuracy, especially in locations such as harbors where the tsunami waves are mostly amplified. In the framework of tsunami warning under real-time operational conditions, the main obstacle to the routine use of such numerical simulations remains the slowness of the numerical computation, which is compounded when detailed grids are required for the precise modeling of the coastline response of an individual harbor. Thus only tsunami offshore propagation modeling tools using a single sparse bathymetric computation grid are presently included within the French Tsunami Warning Center (CENALT), providing rapid estimation of tsunami warning at the scale of the western Mediterranean and NE Atlantic basins. We present here a preliminary work that performs quick estimates of the inundation at individual harbors from these high-sea forecasting tsunami simulations. The method involves an empirical correction based on theoretical amplification laws (either Green's or Synolakis laws). The main limitation is that its application to a given coastal area would require a large database of previous observations in order to define the empirical parameters of the correction equation. As no such data (i.e., historical tide gage records of significant tsunamis) are available for the western Mediterranean and NE Atlantic basins, we use a set of synthetic mareograms, calculated for both hypothetical events and well-known historical tsunamigenic earthquakes in the area. This synthetic dataset is obtained through accurate numerical tsunami propagation and inundation modeling by using several nested bathymetric grids of increasingly fine resolution close to the shores (down to a grid cell size of 3 m in some Mediterranean harbors).
Nonlinear shallow water tsunami modeling performed on a single 2' coarse bathymetric grid is compared to the values given by time-consuming nested-grid simulations (and observations when available), in order to check to what extent the simple approach based on the amplification laws can explain the data. The idea is to fit tsunami data with numerical modeling carried out without any refined coastal bathymetry/topography. To this end several parameters are discussed, namely the bathymetric depth to which model results must be extrapolated (using Green's law), or the mean bathymetric slope to consider near the studied coast (when using the Synolakis law).
NASA Technical Reports Server (NTRS)
Ha Minh, H.; Viegas, J. R.; Rubesin, M. W.; Spalart, P.; Vandromme, D. D.
1989-01-01
The turbulent boundary layer under a freestream whose velocity varies sinusoidally in time around a zero mean is computed using two second-order turbulence closure models. The time- or phase-dependent behavior of the Reynolds stresses is analyzed and results are compared to those of a previous Spalart-Baldwin direct simulation. Comparisons show that the second-order modeling is quite satisfactory for almost all phase angles, except in the relaminarization period, where the computations lead to a relatively high wall shear stress.
BEARCLAW: Boundary Embedded Adaptive Refinement Conservation LAW package
NASA Astrophysics Data System (ADS)
Mitran, Sorin
2011-04-01
The BEARCLAW package is a multidimensional, Eulerian AMR-capable computational code written in Fortran to solve hyperbolic systems for astrophysical applications. It is part of AstroBEAR, a hydrodynamic & magnetohydrodynamic code environment designed for a variety of astrophysical applications which allows simulations in 2, 2.5 (i.e., cylindrical), and 3 dimensions, in either cartesian or curvilinear coordinates.
Essentially Entropic Lattice Boltzmann Model
NASA Astrophysics Data System (ADS)
Atif, Mohammad; Kolluru, Praveen Kumar; Thantanapally, Chakradhar; Ansumali, Santosh
2017-12-01
The entropic lattice Boltzmann model (ELBM), a discrete space-time kinetic theory for hydrodynamics, ensures nonlinear stability via the discrete time version of the second law of thermodynamics (the H theorem). Compliance with the H theorem is numerically enforced in this methodology and involves a search for the maximal discrete path length corresponding to the zero dissipation state by iteratively solving a nonlinear equation. We demonstrate that an exact solution for the path length can be obtained by assuming a natural criterion of negative entropy change, thereby reducing the problem to solving an inequality. This inequality is solved by creating a new framework for construction of Padé approximants via quadrature on an appropriate convex function. This exact solution also resolves the issue of indeterminacy in case of nonexistence of the entropic involution step. Since our formulation is devoid of complex mathematical library functions, the computational cost is drastically reduced. To illustrate this, we have simulated a model setup of flow over the NACA-0012 airfoil at a Reynolds number of 2.88 × 10^6.
A state-trajectory control law for dc-to-dc converters
NASA Technical Reports Server (NTRS)
Burns, W. W., III; Wilson, T. G.
1978-01-01
Mathematical representations of a state-plane switching boundary employed in a state-trajectory control law for dc-to-dc converters are derived. Several levels of approximation to the switching boundary equations are presented, together with an evaluation of the effects of nonideal operating characteristics of converter power stage components on the shape and location of the boundary and the behavior of a system controlled by it. Digital computer simulations of dc-to-dc converters operating in conjunction with each of these levels of control are presented and evaluated with respect to changes in transient and steady-state performance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sen, Oishik, E-mail: oishik-sen@uiowa.edu; Gaul, Nicholas J., E-mail: nicholas-gaul@ramdosolutions.com; Choi, K.K., E-mail: kyung-choi@uiowa.edu
Macro-scale computations of shocked particulate flows require closure laws that model the exchange of momentum/energy between the fluid and particle phases. Closure laws are constructed in this work in the form of surrogate models derived from highly resolved mesoscale computations of shock-particle interactions. The mesoscale computations are performed to calculate the drag force on a cluster of particles for different values of Mach number and particle volume fraction. Two Kriging-based methods, viz. the Dynamic Kriging Method (DKG) and the Modified Bayesian Kriging Method (MBKG), are evaluated for their ability to construct surrogate models with sparse data, i.e. using the least number of mesoscale simulations. It is shown that if the input data is noise-free, the DKG method converges monotonically; convergence is less robust in the presence of noise. The MBKG method converges monotonically even with noisy input data and is therefore more suitable for surrogate model construction from numerical experiments. This work is the first step towards a full multiscale modeling of interaction of shocked particle-laden flows.
X-38 Experimental Control Laws
NASA Technical Reports Server (NTRS)
Munday, Steve; Estes, Jay; Bordano, Aldo J.
2000-01-01
X-38 is a NASA JSC/DFRC experimental flight test program developing a series of prototypes for an International Space Station (ISS) Crew Return Vehicle, often called an ISS "lifeboat." X-38 Vehicle 132 Free Flight 3, currently scheduled for the end of this month, will be the first flight test of a modern FCS architecture called Multi-Application Control-Honeywell (MACH), originally developed by the Honeywell Technology Center. MACH wraps classical P&I outer attitude loops around a modern dynamic inversion attitude rate loop. The dynamic inversion process requires that the flight computer have an onboard aircraft model of expected vehicle dynamics based upon the aerodynamic database. Dynamic inversion is computationally intensive, so some timing modifications were made to implement MACH on the slower flight computers of the subsonic test vehicles. In addition to linear stability margin analyses and high-fidelity 6-DOF simulation, hardware-in-the-loop testing is used to verify the implementation of MACH and its robustness to aerodynamic and environmental uncertainties and disturbances.
Entropy generation method to quantify thermal comfort.
Boregowda, S C; Tiwari, S N; Chaturvedi, S K
2001-12-01
The present paper presents a thermodynamic approach to assess the quality of human-thermal environment interaction and quantify thermal comfort. The approach involves the development of an entropy generation term by applying the second law of thermodynamics to the combined human-environment system. The entropy generation term combines both human thermal physiological responses and thermal environmental variables to provide an objective measure of thermal comfort. The original concepts and definitions form the basis for establishing the mathematical relationship between thermal comfort and the entropy generation term. Through a logical and deterministic approach, an Objective Thermal Comfort Index (OTCI) is defined and established as a function of entropy generation. In order to verify the entropy-based thermal comfort model, human thermal physiological responses due to changes in ambient conditions are simulated using a well-established and validated human thermal model developed at the Institute of Environmental Research of Kansas State University (KSU). The finite-element-based KSU human thermal computer model is utilized as a "Computational Environmental Chamber" to conduct a series of simulations examining the human thermal responses to different environmental conditions. The output from the simulation, which includes human thermal responses, and the input data, consisting of environmental conditions, are fed into the thermal comfort model. Continuous monitoring of thermal comfort in comfortable and extreme environmental conditions is demonstrated. The Objective Thermal Comfort Index values obtained from the entropy-based model are validated against regression-based Predicted Mean Vote (PMV) values. The PMV values are generated by using, in the regression equation, the same air temperatures and vapor pressures that were used in the computer simulation. The preliminary results indicate that the OTCI and PMV values correlate well under ideal conditions.
However, an experimental study is needed in the future to fully establish the validity of the OTCI formula and the model. One practical application of this index is that it could be integrated into thermal control systems to develop human-centered environmental control systems for potential use in aircraft, mass transit vehicles, intelligent building systems, and space vehicles.
Entropy generation method to quantify thermal comfort
NASA Technical Reports Server (NTRS)
Boregowda, S. C.; Tiwari, S. N.; Chaturvedi, S. K.
2001-01-01
The present paper presents a thermodynamic approach to assess the quality of human-thermal environment interaction and quantify thermal comfort. The approach involves the development of an entropy generation term by applying the second law of thermodynamics to the combined human-environment system. The entropy generation term combines both human thermal physiological responses and thermal environmental variables to provide an objective measure of thermal comfort. The original concepts and definitions form the basis for establishing the mathematical relationship between thermal comfort and the entropy generation term. Through a logical and deterministic approach, an Objective Thermal Comfort Index (OTCI) is defined and established as a function of entropy generation. In order to verify the entropy-based thermal comfort model, human thermal physiological responses due to changes in ambient conditions are simulated using a well-established and validated human thermal model developed at the Institute of Environmental Research of Kansas State University (KSU). The finite-element-based KSU human thermal computer model is utilized as a "Computational Environmental Chamber" to conduct a series of simulations examining the human thermal responses to different environmental conditions. The output from the simulation, which includes human thermal responses, and the input data, consisting of environmental conditions, are fed into the thermal comfort model. Continuous monitoring of thermal comfort in comfortable and extreme environmental conditions is demonstrated. The Objective Thermal Comfort Index values obtained from the entropy-based model are validated against regression-based Predicted Mean Vote (PMV) values. The PMV values are generated by using, in the regression equation, the same air temperatures and vapor pressures that were used in the computer simulation. The preliminary results indicate that the OTCI and PMV values correlate well under ideal conditions.
However, an experimental study is needed in the future to fully establish the validity of the OTCI formula and the model. One practical application of this index is that it could be integrated into thermal control systems to develop human-centered environmental control systems for potential use in aircraft, mass transit vehicles, intelligent building systems, and space vehicles.
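The second-law bookkeeping behind such an entropy-generation term can be illustrated with a toy calculation. The two-temperature form, function name, and numbers below are assumptions for illustration only; the paper's OTCI model combines many physiological and environmental variables not shown here:

```python
def entropy_generation(q_loss, t_body, t_env):
    """Entropy generated when heat q_loss (W) flows from a body at
    t_body (K) to an environment at t_env (K):
        S_gen = Q/T_env - Q/T_body
    By the second law this is positive whenever t_body > t_env."""
    return q_loss / t_env - q_loss / t_body

# Illustrative case: skin at 307 K losing 100 W to a 293 K room
s_gen = entropy_generation(100.0, 307.0, 293.0)
```

A larger temperature mismatch between body and environment yields a larger entropy generation rate, which is the intuition the index builds on.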
Teaching Scientific Methodology through Microcomputer Simulations in Genetics. Final Project Report.
ERIC Educational Resources Information Center
Kellogg, Ted; Latson, Jon
There are two major concerns about the teaching of high school biology. One is the degree to which students memorize laws, facts, and principles, and the second involves the role of the classroom teacher. These aspects result in a discrepancy between the theory and practice of science education. The purpose of this report is to provide: (1) a…
NASA Astrophysics Data System (ADS)
Chandrakanth, Balaji; Venkatesan, G; Prakash Kumar, L. S. S; Jalihal, Purnima; Iniyan, S
2018-03-01
The present work discusses the design and selection of a shell and tube condenser used in Low Temperature Thermal Desalination (LTTD). To optimize the key geometrical and process parameters of the condenser, with multiple parameters and levels, a design-of-experiments approach using the Taguchi method was chosen. An orthogonal array (OA) of 25 designs was selected for this study. The condenser was designed and analysed using HTRI software, which was also used to compute the heat transfer area and the corresponding tube-side pressure drop, as these two objective functions determine the capital and running cost of the condenser. There was a complex trade-off between the heat transfer area and the pressure drop in the analysis, so a second-law analysis was carried out to determine the optimal heat transfer area versus pressure drop for condensing the required heat load.
On probability-possibility transformations
NASA Technical Reports Server (NTRS)
Klir, George J.; Parviz, Behzad
1992-01-01
Several probability-possibility transformations are compared in terms of the closeness of preserving second-order properties. The comparison is based on experimental results obtained by computer simulation. Two second-order properties are involved in this study: noninteraction of two distributions and projections of a joint distribution.
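One widely used transformation of this kind, the ratio-scale transformation, can be sketched briefly; it is offered as a representative example, not necessarily one of the specific transformations compared by the authors:

```python
def prob_to_poss_ratio(probs):
    """Ratio-scale probability-to-possibility transformation:
    pi_i = p_i / max(p). The most probable outcome maps to
    possibility 1, and the ordering of the distribution is preserved."""
    peak = max(probs)
    return [p / peak for p in probs]

# A simple probability distribution and its induced possibilities
poss = prob_to_poss_ratio([0.5, 0.3, 0.2])
```

Comparisons like the one in the abstract then ask how well second-order properties (e.g., noninteraction, projections of joint distributions) survive such a mapping.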
Example of Second-Law efficiency of solar-thermal cavity receivers
NASA Technical Reports Server (NTRS)
Moynihan, P. I.
1986-01-01
Properly quantified performance of a solar-thermal cavity receiver must not only account for the energy gains and losses as dictated by the First Law of thermodynamics, but it must also account for the quality of the energy. Energy quality can only be determined from the Second Law. In this paper, an equation for the Second-Law efficiency of a cavity receiver is developed from the definition of available energy, or availability (occasionally called exergy). The variables required are all either known or readily determined. The importance of considering the Second Law is emphasized by comparing the First- and Second-Law efficiencies for example data collected from two receivers that were designed for different purposes. The comparison demonstrates that a Second-Law approach to quantifying the performance of a solar-thermal cavity receiver lends more complete insight than the conventional, solely applied First-Law approach.
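The availability-based comparison can be sketched numerically. The formula below is the generic textbook exergy ratio for heat delivered at a single receiver temperature, with all input numbers illustrative; it is not the paper's specific receiver equation:

```python
def second_law_efficiency(q_useful, t_receiver, q_incident, t_source, t_ambient):
    """Second-law (exergy) efficiency of a thermal receiver:
    availability of the useful heat delivered at t_receiver divided by
    the availability of the incident energy, treated as heat from a
    source at t_source. All temperatures in kelvin."""
    exergy_out = q_useful * (1.0 - t_ambient / t_receiver)
    exergy_in = q_incident * (1.0 - t_ambient / t_source)
    return exergy_out / exergy_in

# 70 kW delivered at 1000 K out of 100 kW incident; 300 K ambient;
# effective solar source temperature taken as ~5800 K (an assumption)
eta_2 = second_law_efficiency(70.0, 1000.0, 100.0, 5800.0, 300.0)
```

Note that a First-Law efficiency of 0.70 here corresponds to a noticeably lower Second-Law efficiency, which is exactly the kind of distinction the abstract emphasizes.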
Fluctuation Theorem for Many-Body Pure Quantum States.
Iyoda, Eiki; Kaneko, Kazuya; Sagawa, Takahiro
2017-09-08
We prove the second law of thermodynamics and the nonequilibrium fluctuation theorem for pure quantum states. The entire system obeys reversible unitary dynamics, where the initial state of the heat bath is not the canonical distribution but is a single energy eigenstate that satisfies the eigenstate-thermalization hypothesis. Our result is mathematically rigorous and based on the Lieb-Robinson bound, which gives the upper bound of the velocity of information propagation in many-body quantum systems. The entanglement entropy of a subsystem is shown to be connected to thermodynamic heat, highlighting the foundation of the information-thermodynamics link. We confirmed our theory by numerical simulation of hard-core bosons, and observed a dynamical crossover from thermal fluctuations to bare quantum fluctuations. Our result reveals a universal scenario in which the second law emerges from quantum mechanics, and can be experimentally tested by artificial isolated quantum systems such as ultracold atoms.
Fluctuation Theorem for Many-Body Pure Quantum States
NASA Astrophysics Data System (ADS)
Iyoda, Eiki; Kaneko, Kazuya; Sagawa, Takahiro
2017-09-01
We prove the second law of thermodynamics and the nonequilibrium fluctuation theorem for pure quantum states. The entire system obeys reversible unitary dynamics, where the initial state of the heat bath is not the canonical distribution but is a single energy eigenstate that satisfies the eigenstate-thermalization hypothesis. Our result is mathematically rigorous and based on the Lieb-Robinson bound, which gives the upper bound of the velocity of information propagation in many-body quantum systems. The entanglement entropy of a subsystem is shown to be connected to thermodynamic heat, highlighting the foundation of the information-thermodynamics link. We confirmed our theory by numerical simulation of hard-core bosons, and observed a dynamical crossover from thermal fluctuations to bare quantum fluctuations. Our result reveals a universal scenario in which the second law emerges from quantum mechanics, and can be experimentally tested by artificial isolated quantum systems such as ultracold atoms.
Estimates of olivine-basaltic melt electrical conductivity using a digital rock physics approach
NASA Astrophysics Data System (ADS)
Miller, Kevin J.; Montési, Laurent G. J.; Zhu, Wen-lu
2015-12-01
Estimates of melt content beneath fast-spreading mid-ocean ridges inferred from magnetotelluric tomography (MT) vary between 0.01 and 0.10. Much of this variation may stem from a lack of understanding of how the grain-scale melt geometry influences the bulk electrical conductivity of a partially molten rock, especially at low melt fraction. We compute the bulk electrical conductivity of olivine-basalt aggregates over melt fractions of 0.02 to 0.20 by simulating electric current in experimentally obtained partially molten geometries. Olivine-basalt aggregates were synthesized by hot-pressing San Carlos olivine and high-alumina basalt in a solid-medium piston-cylinder apparatus. Run conditions for the experimental charges were 1.5 GPa and 1350 °C. Upon completion, charges were quenched and cored. Samples were imaged using synchrotron X-ray micro-computed tomography (μ-CT). The resulting high-resolution, 3-dimensional (3-D) image of the melt distribution constitutes a digital rock sample, on which numerical simulations were conducted to estimate material properties. To compute the bulk electrical conductivity, we simulated a direct-current measurement by solving the current continuity equation, assuming electrical conductivities for olivine and melt. An application of Ohm's law yields the bulk electrical conductivity of the partially molten region. The bulk electrical conductivity values for nominally dry materials follow a power-law relationship σ_bulk = C σ_melt φ^m, with fit parameters m = 1.3 ± 0.3 and C = 0.66 ± 0.06. Laminar fluid flow simulations were conducted on the same partially molten geometries to obtain permeability, and the respective pathways for electrical current and fluid flow over the same melt geometry were compared. Our results indicate that the pathways for fluid flow are different from those for electric current. Electrical tortuosity is lower than fluid flow tortuosity. 
The simulation results are compared to existing experimental data, and the potential influence of volatiles and melt films on electrical conductivity of partially molten rocks is discussed.
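The fitted power law can be applied directly. C and m below are the fit values quoted in the abstract; the melt conductivity and melt fraction in the example are purely illustrative:

```python
def bulk_conductivity(sigma_melt, melt_fraction, c=0.66, m=1.3):
    """Power-law fit from the study for nominally dry materials:
    sigma_bulk = C * sigma_melt * phi**m,
    with C = 0.66 +/- 0.06 and m = 1.3 +/- 0.3."""
    return c * sigma_melt * melt_fraction ** m

# Illustrative melt conductivity of 1 S/m at a 10% melt fraction
sigma_bulk = bulk_conductivity(1.0, 0.10)
```

Because m > 1, bulk conductivity falls off faster than linearly as melt fraction decreases, which matters most in exactly the low-melt-fraction regime the abstract highlights.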
Physically-Based Modelling and Real-Time Simulation of Fluids.
NASA Astrophysics Data System (ADS)
Chen, Jim Xiong
1995-01-01
Simulating physically realistic complex fluid behaviors presents an extremely challenging problem for computer graphics researchers. Such behaviors include the effects of driving boats through water, blending differently colored fluids, rain falling and flowing on a terrain, fluids interacting in a Distributed Interactive Simulation (DIS), etc. Such capabilities are useful in computer art, advertising, education, entertainment, and training. We present a new method for physically-based modeling and real-time simulation of fluids in computer graphics and dynamic virtual environments. By solving the 2D Navier-Stokes equations using a CFD method, we map the surface into 3D using the corresponding pressures in the fluid flow field. This achieves realistic real-time fluid surface behaviors by employing the physical governing laws of fluids but avoiding extensive 3D fluid dynamics computations. To complement the surface behaviors, we calculate fluid volume and external boundary changes separately to achieve full 3D general fluid flow. To simulate physical activities in a DIS, we introduce a mechanism which uses a uniform time scale proportional to the clock-time and variable time-slicing to synchronize physical models such as fluids in the networked environment. Our approach can simulate many different fluid behaviors by changing the internal or external boundary conditions. It can model different kinds of fluids by varying the Reynolds number. It can simulate objects moving or floating in fluids. It can also produce synchronized general fluid flows in a DIS. Our model can serve as a testbed to simulate many other fluid phenomena which have never been successfully modeled previously.
Dynamics of proteins aggregation. II. Dynamic scaling in confined media
NASA Astrophysics Data System (ADS)
Zheng, Size; Shing, Katherine S.; Sahimi, Muhammad
2018-03-01
In this paper, the second in a series devoted to molecular modeling of protein aggregation, a mesoscale model of proteins together with extensive discontinuous molecular dynamics simulation is used to study the phenomenon in a confined medium. The medium, as a model of a crowded cellular environment, is represented by a spherical cavity, as well as cylindrical tubes with two aspect ratios. The aggregation process leads to the formation of β sheets and eventually fibrils, whose deposition on biological tissues is believed to be a major factor contributing to many neurodegenerative diseases, such as Alzheimer's disease, Parkinson's disease, and amyotrophic lateral sclerosis. Several important properties of the aggregation process, including dynamic evolution of the total number of the aggregates, the mean aggregate size, and the number of peptides that contribute to the formation of the β sheets, have been computed. We show, similar to the unconfined media studied in Paper I [S. Zheng et al., J. Chem. Phys. 145, 134306 (2016)], that the computed properties follow dynamic scaling, characterized by power laws. The existence of such dynamic scaling in unconfined media was recently confirmed by experiments. The exponents that characterize the power-law dependence on time of the properties of the aggregation process in spherical cavities are shown to agree with those in unbounded fluids at the same protein density, while the exponents for aggregation in the cylindrical tubes exhibit sensitivity to the geometry of the system. The effects of the number of amino acids in the protein, as well as the size of the confined media, have also been studied. Similarities and differences between aggregation in confined and unconfined media are described, including the possibility of no fibril formation, if confinement is severe.
Parameter-Space Survey of Linear G-mode and Interchange in Extended Magnetohydrodynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Howell, E. C.; Sovinec, C. R.
The extended magnetohydrodynamic stability of interchange modes is studied in two configurations. In slab geometry, a local dispersion relation for the gravitational interchange mode (g-mode) with three different extensions of the MHD model [P. Zhu, et al., Phys. Rev. Lett. 101, 085005 (2008)] is analyzed. Our results delineate where drifts stabilize the g-mode with gyroviscosity alone and with a two-fluid Ohm's law alone. Including the two-fluid Ohm's law produces an ion drift wave that interacts with the g-mode. This interaction then gives rise to a second instability at finite k_y. A second instability is also observed in numerical extended MHD computations of linear interchange in cylindrical screw-pinch equilibria, the second configuration. Particularly with incomplete models, this mode limits the regions of stability for physically realistic conditions. But applying a consistent two-temperature extended MHD model that includes the diamagnetic heat flux density (q*) makes the onset of the second mode occur at larger Hall parameter. For conditions relevant to the SSPX experiment [E.B. Hooper, Plasma Phys. Controlled Fusion 54, 113001 (2012)], significant stabilization is observed for Suydam parameters as large as unity (D_s ≲ 1).
Parameter-Space Survey of Linear G-mode and Interchange in Extended Magnetohydrodynamics
Howell, E. C.; Sovinec, C. R.
2017-09-11
The extended magnetohydrodynamic stability of interchange modes is studied in two configurations. In slab geometry, a local dispersion relation for the gravitational interchange mode (g-mode) with three different extensions of the MHD model [P. Zhu, et al., Phys. Rev. Lett. 101, 085005 (2008)] is analyzed. Our results delineate where drifts stabilize the g-mode with gyroviscosity alone and with a two-fluid Ohm's law alone. Including the two-fluid Ohm's law produces an ion drift wave that interacts with the g-mode. This interaction then gives rise to a second instability at finite k_y. A second instability is also observed in numerical extended MHD computations of linear interchange in cylindrical screw-pinch equilibria, the second configuration. Particularly with incomplete models, this mode limits the regions of stability for physically realistic conditions. But applying a consistent two-temperature extended MHD model that includes the diamagnetic heat flux density (q*) makes the onset of the second mode occur at larger Hall parameter. For conditions relevant to the SSPX experiment [E.B. Hooper, Plasma Phys. Controlled Fusion 54, 113001 (2012)], significant stabilization is observed for Suydam parameters as large as unity (D_s ≲ 1).
Study of boundary-layer transition using transonic-cone preston tube data
NASA Technical Reports Server (NTRS)
Reed, T. D.; Moretti, P. M.
1980-01-01
The laminar boundary layer on a 10 degree cone in a transonic wind tunnel was studied. The inviscid flow and boundary layer development were simulated by computer programs. The effects of pitch and yaw angles on the boundary layer were examined. Preston-tube data, taken on the boundary-layer-transition cone in the NASA Ames 11 ft transonic wind tunnel, were used to develop a correlation which relates the measurements to theoretical values of laminar skin friction. The recommended correlation is based on a compressible form of the classical law-of-the-wall. The computer codes successfully simulate the laminar boundary layer for near-zero pitch and yaw angles. However, in cases of significant pitch and/or yaw angles, the flow is three dimensional and the boundary layer computer code used here cannot provide a satisfactory model. The skin-friction correlation is thought to be valid for body geometries other than cones.
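The correlation idea above rests on the law of the wall. As a hedged numerical sketch, the block below uses the classical incompressible log-law with the usual constants κ ≈ 0.41 and B ≈ 5.0 (illustrative values; the report's compressible form and calibrated correlation are not reproduced here):

```python
import math

def u_plus(y_plus, kappa=0.41, B=5.0):
    """Classical incompressible log-law of the wall: u+ = (1/kappa)*ln(y+) + B."""
    return math.log(y_plus) / kappa + B

def y_plus(y, u_tau, nu):
    """Dimensionless wall distance given friction velocity u_tau and viscosity nu."""
    return y * u_tau / nu

# Example: a point 1 mm from the wall with u_tau = 0.5 m/s in air (nu ~ 1.5e-5 m^2/s)
yp = y_plus(1e-3, 0.5, 1.5e-5)   # about 33, inside the log layer
print(u_plus(yp))
```

A Preston-tube correlation works in the opposite direction: it infers u_tau (and hence skin friction) from a measured pressure, using a relation of this log-law family.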
Simulating and assessing boson sampling experiments with phase-space representations
NASA Astrophysics Data System (ADS)
Opanchuk, Bogdan; Rosales-Zárate, Laura; Reid, Margaret D.; Drummond, Peter D.
2018-04-01
The search for new, application-specific quantum computers designed to outperform any classical computer is driven by the ending of Moore's law and the quantum advantages potentially obtainable. Photonic networks are promising examples, with experimental demonstrations and potential for obtaining a quantum computer to solve problems believed classically impossible. This introduces a challenge: how does one design or understand such photonic networks? One must be able to calculate observables using general methods capable of treating arbitrary inputs, dissipation, and noise. We develop complex phase-space software for simulating these photonic networks, and apply this to boson sampling experiments. Our techniques give sampling errors orders of magnitude lower than experimental correlation measurements for the same number of samples. We show that these techniques remove systematic errors in previous algorithms for estimating correlations, with large improvements in errors in some cases. In addition, we obtain a scalable channel-combination strategy for assessment of boson sampling devices.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Westbrook, C K; Mizobuchi, Y; Poinsot, T J
2004-08-26
Progress in the field of computational combustion over the past 50 years is reviewed. Particular attention is given to those classes of models that are common to most system modeling efforts, including fluid dynamics, chemical kinetics, liquid sprays, and turbulent flame models. The developments in combustion modeling are placed into the time-dependent context of the accompanying exponential growth in computer capabilities and Moore's Law. Superimposed on this steady growth, the occasional sudden advances in modeling capabilities are identified and their impacts are discussed. Integration of submodels into system models for spark-ignition, diesel, and homogeneous-charge compression-ignition engines, surface and catalytic combustion, pulse combustion, and detonations is described. Finally, the current state of combustion modeling is illustrated by descriptions of a very large jet lifted 3D turbulent hydrogen flame with direct numerical simulation and 3D large eddy simulations of practical gas burner combustion devices.
Monte Carlo simulation of photon migration in a cloud computing environment with MapReduce
Pratx, Guillem; Xing, Lei
2011-01-01
Monte Carlo simulation is considered the most reliable method for modeling photon migration in heterogeneous media. However, its widespread use is hindered by the high computational cost. The purpose of this work is to report on our implementation of a simple MapReduce method for performing fault-tolerant Monte Carlo computations in a massively parallel cloud computing environment. We ported the MC321 Monte Carlo package to Hadoop, an open-source MapReduce framework. In this implementation, Map tasks compute photon histories in parallel while a Reduce task scores photon absorption. The distributed implementation was evaluated on a commercial compute cloud. The simulation time was found to be linearly dependent on the number of photons and inversely proportional to the number of nodes. For a cluster size of 240 nodes, the simulation of 100 billion photon histories took 22 min, a 1258× speed-up compared to the single-threaded Monte Carlo program. The overall computational throughput was 85,178 photon histories per node per second, with a latency of 100 s. The distributed simulation produced the same output as the original implementation and was resilient to hardware failure: the correctness of the simulation was unaffected by the shutdown of 50% of the nodes. PMID:22191916
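The Map/Reduce split described in the abstract can be sketched in miniature. This is not the MC321 physics: a toy one-dimensional absorption walk stands in for a photon history, and `map_task`/`reduce_task` are hypothetical names for the two Hadoop roles.

```python
import random
from functools import reduce

def map_task(seed, n_photons, mu_a=0.1):
    """Map role: simulate n_photons toy photon histories independently.
    Each photon takes unit steps and is absorbed with probability mu_a per
    step (a stand-in for real photon-migration physics). Returns a tally
    of absorption events keyed by depth."""
    rng = random.Random(seed)
    absorbed_depth = {}
    for _ in range(n_photons):
        depth = 0
        while rng.random() >= mu_a:   # photon survives this step
            depth += 1
        absorbed_depth[depth] = absorbed_depth.get(depth, 0) + 1
    return absorbed_depth

def reduce_task(a, b):
    """Reduce role: merge absorption tallies from independent map tasks."""
    out = dict(a)
    for k, v in b.items():
        out[k] = out.get(k, 0) + v
    return out

# Four "nodes", 1000 photons each; in Hadoop these maps would run in parallel
# on separate machines, each with its own random seed.
tallies = [map_task(seed, 1000) for seed in range(4)]
total = reduce(reduce_task, tallies)
assert sum(total.values()) == 4000   # every photon is eventually absorbed
```

Fault tolerance falls out of this structure: a failed map task affects only its own chunk of photons and can be rerun with the same seed.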
The Finer Details: Climate Modeling
NASA Technical Reports Server (NTRS)
2000-01-01
If you want to know whether you will need sunscreen or an umbrella for tomorrow's picnic, you can simply read the local weather report. However, if you are calculating the impact of gas combustion on global temperatures, or anticipating next year's rainfall levels to set water conservation policy, you must conduct a more comprehensive investigation. Such complex matters require long-range modeling techniques that predict broad trends in climate development rather than day-to-day details. Climate models are built from equations that calculate the progression of weather-related conditions over time. Based on the laws of physics, climate model equations have been developed to predict a number of environmental factors, for example: 1. Amount of solar radiation that hits the Earth. 2. Varying proportions of gases that make up the air. 3. Temperature at the Earth's surface. 4. Circulation of ocean and wind currents. 5. Development of cloud cover. Numerical modeling of the climate can improve our understanding of both the past and the future. A model can confirm the accuracy of environmental measurements taken in the past and can even fill in gaps in those records. In addition, by quantifying the relationship between different aspects of climate, scientists can estimate how a future change in one aspect may alter the rest of the world. For example, could an increase in the temperature of the Pacific Ocean somehow set off a drought on the other side of the world? A computer simulation could lead to an answer for this and other questions. Quantifying the chaotic, nonlinear activities that shape our climate is no easy matter. You cannot run these simulations on your desktop computer and expect results by the time you have finished checking your morning e-mail. Efficient and accurate climate modeling requires powerful computers that can process billions of mathematical calculations in a single second. The NCCS exists to provide this degree of vast computing capability.
A Web-based Distributed Voluntary Computing Platform for Large Scale Hydrological Computations
NASA Astrophysics Data System (ADS)
Demir, I.; Agliamzanov, R.
2014-12-01
Distributed volunteer computing can enable researchers and scientists to form large parallel computing environments that harness the computing power of the millions of computers on the Internet, and use them to run large-scale environmental simulations and models that serve the common good of local communities and the world. Recent developments in web technologies and standards allow client-side scripting languages to run at speeds close to those of native applications, and to utilize the power of Graphics Processing Units (GPUs). Using a client-side scripting language like JavaScript, we have developed an open distributed computing framework that makes it easy for researchers to write their own hydrologic models and run them on volunteer computers. Users can easily enable their websites so that visitors may volunteer their computing resources to help run advanced hydrological models and simulations. Using a web-based system allows users to start volunteering their computational resources within seconds, without installing any software. The framework distributes the model simulation to thousands of nodes in small spatial and computational sizes. A relational database system is utilized for managing data connections and queue management for the distributed computing nodes. In this paper, we present a web-based distributed volunteer computing platform to enable large-scale hydrological simulations and model runs in an open and integrated environment.
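The tile-and-queue distribution strategy can be sketched in a few lines. The real platform uses browser-side JavaScript and a relational database; here Python's `queue` module stands in for that machinery, and the per-tile "simulation" is a placeholder:

```python
import queue

def make_tasks(nx, ny, tile=4):
    """Split an nx-by-ny model domain into small tiles and queue them,
    mimicking the small spatial/computational task sizes in the paper."""
    q = queue.Queue()
    for i in range(0, nx, tile):
        for j in range(0, ny, tile):
            q.put((i, j, min(tile, nx - i), min(tile, ny - j)))
    return q

def worker(q, results):
    """A volunteer node: pull tasks until the shared queue is empty."""
    while True:
        try:
            i, j, w, h = q.get_nowait()
        except queue.Empty:
            return
        results[(i, j)] = w * h   # placeholder "simulation" of the tile

tasks = make_tasks(10, 10, tile=4)
results = {}
for _ in range(3):   # sequential stand-ins for volunteer workers
    worker(tasks, results)
assert sum(results.values()) == 100   # full domain covered exactly once
```

In the actual system the queue lives in the database and workers are browsers, but the invariant is the same: each tile is handed out once and the results are merged server-side.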
The collaboration of grouping laws in vision.
Grompone von Gioi, Rafael; Delon, Julie; Morel, Jean-Michel
2012-01-01
Gestalt theory gives a list of geometric grouping laws that could in principle give a complete account of human image perception. Based on an extensive thesaurus of clever graphical images, this theory discusses how grouping laws collaborate and conflict toward a global image understanding. Unfortunately, as shown in the bibliographical analysis herewith, the attempts to formalize the grouping laws in computer vision and psychophysics have at best succeeded in computing individual partial structures (or partial gestalts), such as alignments or symmetries. Nevertheless, we show here that a never formalized clever Gestalt experimental procedure, the Nachzeichnung, suggests a numerical setup to implement and test the collaboration of partial gestalts. The new computational procedure proposed here analyzes a digital image and performs a numerical simulation that we call Nachtanz, or Gestaltic dance. In this dance, the analyzed digital image is gradually deformed in a random way, but maintaining the detected partial gestalts. The resulting dancing images should be perceptually indistinguishable if and only if the grouping process was complete. Like the Nachzeichnung, the Nachtanz permits a visual exploration of the degrees of freedom still available to a figure after all partial groups (or gestalts) have been detected. In the new proposed procedure, instead of drawing themselves, subjects will be shown samples of the automatic Gestalt dances and required to evaluate whether the figures are similar. Several preliminary numerical results with this new Gestaltic experimental setup are thoroughly discussed. Copyright © 2012 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Geisler, J. E.; Fowlis, W. W.
1980-01-01
The effect of a power law gravity field on baroclinic instability is examined, with a focus on the case of inverse fifth power gravity, since this is the power law produced when terrestrial gravity is simulated in spherical geometry by a dielectric force. Growth rates of unstable normal modes are obtained as a function of the parameters of the problem by solving a second order differential equation numerically. It is concluded that over the range of parameter space explored, there is no significant change in the character of theoretical regime diagrams if the vertically averaged gravity is used as a parameter.
A Partially-Stirred Batch Reactor Model for Under-Ventilated Fire Dynamics
NASA Astrophysics Data System (ADS)
McDermott, Randall; Weinschenk, Craig
2013-11-01
A simple discrete quadrature method is developed for closure of the mean chemical source term in large-eddy simulations (LES) and implemented in the publicly available fire model, Fire Dynamics Simulator (FDS). The method is cast as a partially-stirred batch reactor model for each computational cell. The model has three distinct components: (1) a subgrid mixing environment, (2) a mixing model, and (3) a set of chemical rate laws. The subgrid probability density function (PDF) is described by a linear combination of Dirac delta functions with quadrature weights set to satisfy simple integral constraints for the computational cell. It is shown that under certain limiting assumptions, the present method reduces to the eddy dissipation concept (EDC). The model is used to predict carbon monoxide concentrations in direct numerical simulation (DNS) of a methane slot burner and in LES of an under-ventilated compartment fire.
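The delta-function quadrature idea can be illustrated in its simplest form: a two-delta subgrid PDF with equal weights at mean ± std reproduces a cell's first two moments, and the mean source term becomes a weighted sum of the rate law at the delta locations. The quadratic rate law below is a hypothetical stand-in for the chemistry, not FDS's model:

```python
def two_delta_pdf(mean, var):
    """Subgrid PDF as two equal-weight Dirac deltas at mean +/- std.
    The quadrature weights and abscissas satisfy the integral constraints
    sum(w) = 1, sum(w*phi) = mean, sum(w*phi^2) = mean^2 + var."""
    s = var ** 0.5
    return [(0.5, mean - s), (0.5, mean + s)]

def mean_source(pdf, rate):
    """Mean chemical source term: weighted sum of the rate law at the deltas."""
    return sum(w * rate(phi) for w, phi in pdf)

rate = lambda phi: phi * phi       # hypothetical quadratic rate law
pdf = two_delta_pdf(mean=2.0, var=0.25)

# Moment checks for the cell:
assert abs(sum(w for w, _ in pdf) - 1.0) < 1e-12
assert abs(sum(w * p for w, p in pdf) - 2.0) < 1e-12
print(mean_source(pdf, rate))      # 4.25, not rate(mean) = 4.0
```

The printed value makes the closure point concrete: for a nonlinear rate law, the PDF-weighted mean source differs from the rate evaluated at the cell mean, which is exactly what the subgrid treatment is there to capture.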
Analytical solutions for coagulation and condensation kinetics of composite particles
NASA Astrophysics Data System (ADS)
Piskunov, Vladimir N.
2013-04-01
The formation processes of composite particles consisting of a mixture of different materials are essential to many practical problems: analysis of the consequences of accidental releases into the atmosphere; simulation of precipitation formation in clouds; and description of multi-phase processes in chemical reactors and industrial facilities. Computer codes developed for numerical simulation of these processes require optimization of computational methods and verification of numerical programs. Kinetic equations of composite particle formation are given in this work in a concise form (impurity integrated). Coagulation, condensation and external sources associated with nucleation are taken into account. Analytical solutions were obtained in a number of model cases. The general laws for the fraction redistribution of impurities were defined. The results can be applied to develop numerical algorithms that considerably reduce the simulation effort, as well as to verify numerical programs for calculation of the formation kinetics of composite particles in problems of practical importance.
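One classical model case of the kind used for verifying coagulation codes is the constant-kernel Smoluchowski equation, where the total number density has a closed form. A minimal sketch with illustrative parameters, comparing the analytic solution against a naive Euler integration:

```python
def n_analytic(t, n0=1.0, K=1.0):
    """Constant-kernel Smoluchowski: dN/dt = -(K/2) N^2 has the
    closed-form solution N(t) = N0 / (1 + K*N0*t/2)."""
    return n0 / (1.0 + 0.5 * K * n0 * t)

def n_euler(t_end, n0=1.0, K=1.0, dt=1e-4):
    """Explicit Euler integration of the same ODE, as a code under test."""
    n, t = n0, 0.0
    while t < t_end:
        n += dt * (-0.5 * K * n * n)
        t += dt
    return n

# Verification in the spirit of the paper: the numeric result should
# approach the analytic one as dt is refined.
print(abs(n_euler(2.0) - n_analytic(2.0)))   # small discretization error
```

This is the single-species total-number check only; the paper's impurity-integrated solutions for composite particles extend this kind of benchmark to fraction redistribution between materials.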
Material point method modeling in oil and gas reservoirs
Vanderheyden, William Brian; Zhang, Duan
2016-06-28
A computer system and method of simulating the behavior of an oil and gas reservoir, including changes in the margins of frangible solids. A system of equations, including state equations such as momentum and conservation laws such as mass conservation and volume fraction continuity, is defined and discretized for at least two phases in a modeled volume, one of which corresponds to frangible material. A material point method technique numerically solves the system of discretized equations to derive the fluid flow at each of a plurality of mesh nodes in the modeled volume, and the velocity at each of a plurality of particles representing the frangible material. A time-splitting technique improves the computational efficiency of the simulation while maintaining accuracy on the deformation scale. The method can be applied to derive accurate upscaled model equations for larger volume scale simulations.
Inai, Takuma; Takabayashi, Tomoya; Edama, Mutsuaki; Kubo, Masayoshi
2018-04-27
The association between repetitive hip moment impulse and the progression of hip osteoarthritis is a recently recognized area of study. A sit-to-stand movement is essential for daily life and requires hip extension moment. Although a change in the sit-to-stand movement time may influence the hip moment impulse in the sagittal plane, this effect has not been examined. The purpose of this study was to clarify the relationship between sit-to-stand movement time and hip moment impulse in the sagittal plane. Twenty subjects performed the sit-to-stand movement at a self-selected natural speed. The hip, knee, and ankle joint angles obtained from experimental trials were used to perform two computer simulations. In the first simulation, the actual sit-to-stand movement time obtained from the experiment was entered. In the second simulation, sit-to-stand movement times ranging from 0.5 to 4.0 s at intervals of 0.25 s were entered. Hip joint moments and hip moment impulses in the sagittal plane during sit-to-stand movements were calculated for both computer simulations. The reliability of the simulation model was confirmed, as indicated by the similarities in the hip joint moment waveforms (r = 0.99) and the hip moment impulses in the sagittal plane between the first computer simulation and the experiment. In the second computer simulation, the hip moment impulse in the sagittal plane decreased with a decrease in the sit-to-stand movement time, although the peak hip extension moment increased with a decrease in the movement time. These findings clarify the association between the sit-to-stand movement time and hip moment impulse in the sagittal plane and may contribute to the prevention of the progression of hip osteoarthritis.
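The quantity at issue is the hip moment impulse, the time integral of the joint moment over the movement. A hedged sketch, assuming a hypothetical half-sine moment waveform (not the study's simulated waveforms) and trapezoidal integration:

```python
import math

def impulse(m_peak, T, n=2000):
    """Moment impulse = integral of M(t) dt over movement time T, using a
    hypothetical half-sine waveform M(t) = M_peak * sin(pi*t/T) and the
    trapezoid rule."""
    dt = T / n
    ts = [i * dt for i in range(n + 1)]
    ms = [m_peak * math.sin(math.pi * t / T) for t in ts]
    return sum(0.5 * (ms[i] + ms[i + 1]) * dt for i in range(n))

# Analytically the impulse of this waveform is 2*M_peak*T/pi, so halving
# the movement time halves the impulse unless the peak moment doubles;
# this is the trade-off pattern the study reports (impulse falls with
# faster movements even as the peak moment rises).
print(impulse(100.0, 2.0))   # approx 2*100*2/pi = 127.32
```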
Kron, Frederick W; Fetters, Michael D; Scerbo, Mark W; White, Casey B; Lypson, Monica L; Padilla, Miguel A; Gliva-McConvey, Gayle A; Belfore, Lee A; West, Temple; Wallace, Amelia M; Guetterman, Timothy C; Schleicher, Lauren S; Kennedy, Rebecca A; Mangrulkar, Rajesh S; Cleary, James F; Marsella, Stacy C; Becker, Daniel M
2017-04-01
To assess advanced communication skills among second-year medical students exposed either to a computer simulation (MPathic-VR) featuring virtual humans, or to a multimedia computer-based learning module, and to understand each group's experiences and learning preferences. A single-blinded, mixed methods, randomized, multisite trial compared MPathic-VR (N=210) to computer-based learning (N=211). Primary outcomes: communication scores during repeat interactions with MPathic-VR's intercultural and interprofessional communication scenarios and scores on a subsequent advanced communication skills objective structured clinical examination (OSCE). Multivariate analysis of variance was used to compare outcomes. Secondary outcomes: student attitude surveys and qualitative assessments of their experiences with MPathic-VR or computer-based learning. MPathic-VR-trained students improved their intercultural and interprofessional communication performance between their first and second interactions with each scenario. They also achieved significantly higher composite scores on the OSCE than computer-based learning-trained students. Attitudes and experiences were more positive among students trained with MPathic-VR, who valued its providing immediate feedback, teaching nonverbal communication skills, and preparing them for emotion-charged patient encounters. MPathic-VR was effective in training advanced communication skills and in enabling knowledge transfer into a more realistic clinical situation. MPathic-VR's virtual human simulation offers an effective and engaging means of advanced communication training. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Kron, Frederick W.; Fetters, Michael D.; Scerbo, Mark W.; White, Casey B.; Lypson, Monica L.; Padilla, Miguel A.; Gliva-McConvey, Gayle A.; Belfore, Lee A.; West, Temple; Wallace, Amelia M.; Guetterman, Timothy C.; Schleicher, Lauren S.; Kennedy, Rebecca A.; Mangrulkar, Rajesh S.; Cleary, James F.; Marsella, Stacy C.; Becker, Daniel M.
2016-01-01
Objectives To assess advanced communication skills among second-year medical students exposed either to a computer simulation (MPathic-VR) featuring virtual humans, or to a multimedia computer-based learning module, and to understand each group’s experiences and learning preferences. Methods A single-blinded, mixed methods, randomized, multisite trial compared MPathic-VR (N=210) to computer-based learning (N=211). Primary outcomes: communication scores during repeat interactions with MPathic-VR’s intercultural and interprofessional communication scenarios and scores on a subsequent advanced communication skills objective structured clinical examination (OSCE). Multivariate analysis of variance was used to compare outcomes. Secondary outcomes: student attitude surveys and qualitative assessments of their experiences with MPathic-VR or computer-based learning. Results MPathic-VR-trained students improved their intercultural and interprofessional communication performance between their first and second interactions with each scenario. They also achieved significantly higher composite scores on the OSCE than computer-based learning-trained students. Attitudes and experiences were more positive among students trained with MPathic-VR, who valued its providing immediate feedback, teaching nonverbal communication skills, and preparing them for emotion-charged patient encounters. Conclusions MPathic-VR was effective in training advanced communication skills and in enabling knowledge transfer into a more realistic clinical situation. Practice Implications MPathic-VR’s virtual human simulation offers an effective and engaging means of advanced communication training. PMID:27939846
Determining the Requisite Components of Visual Threat Detection to Improve Operational Performance
2014-04-01
cognitive processes, and may be enhanced by focusing training development on the principal components such as causal reasoning. The second report will...discuss the development and evaluation of a research-based training exemplar. Visual threat detection pervades many military contexts, but is also... developing computer-controlled exercises to study the primary components of visual threat detection. Similarly, civilian law enforcement officers were
ERIC Educational Resources Information Center
Miller, Teresa N.; Shoop, Robert J.
2004-01-01
Gloria, a first-year principal at Sunflower High School, sighed as she stared at her computer screen. She had been asked to write letters of reference for three teachers who were leaving her school. The first resigned among rumors of misconduct with a student--but before an investigation began. The second was asked to resign after a school…
Scaling laws for impact fragmentation of spherical solids.
Timár, G; Kun, F; Carmona, H A; Herrmann, H J
2012-07-01
We investigate the impact fragmentation of spherical solid bodies made of heterogeneous brittle materials by means of a discrete element model. Computer simulations are carried out for four different system sizes varying the impact velocity in a broad range. We perform a finite size scaling analysis to determine the critical exponents of the damage-fragmentation phase transition and deduce scaling relations in terms of radius R and impact velocity v(0). The scaling analysis demonstrates that the exponent of the power law distributed fragment mass does not depend on the impact velocity; the apparent change of the exponent predicted by recent simulations can be attributed to the shifting cutoff and to the existence of unbreakable discrete units. Our calculations reveal that the characteristic time scale of the breakup process has a power law dependence on the impact speed and on the distance from the critical speed in the damaged and fragmented states, respectively. The total amount of damage is found to have a similar behavior, which is substantially different from the logarithmic dependence on the impact velocity observed in two dimensions.
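The exponent estimation at the heart of such a scaling analysis can be sketched with synthetic data: inverse-transform sampling of a continuous power law, plus the standard maximum-likelihood exponent estimator. Parameters below are illustrative, not the paper's fitted values:

```python
import random, math

def sample_power_law(n, alpha, m_min=1.0, seed=0):
    """Inverse-transform sampling of a continuous power law p(m) ~ m^-alpha
    for m >= m_min: m = m_min * (1 - u)^(-1/(alpha - 1)) with u uniform."""
    rng = random.Random(seed)
    return [m_min * (1.0 - rng.random()) ** (-1.0 / (alpha - 1.0))
            for _ in range(n)]

def mle_exponent(ms, m_min=1.0):
    """Maximum-likelihood estimate of the continuous power-law exponent:
    alpha_hat = 1 + n / sum(ln(m / m_min))."""
    n = len(ms)
    return 1.0 + n / sum(math.log(m / m_min) for m in ms)

ms = sample_power_law(20000, alpha=2.5)
print(mle_exponent(ms))   # close to 2.5
```

The paper's point about the "apparent change" of the exponent is precisely that a shifting cutoff distorts naive fits; the MLE above assumes a clean lower cutoff and no upper one, so in practice the cutoff dependence must be checked, as the finite-size scaling analysis does.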
Turning around Newton's Second Law
ERIC Educational Resources Information Center
Goff, John Eric
2004-01-01
Conceptual and quantitative difficulties surrounding Newton's second law often arise among introductory physics students. Simply turning around how one expresses Newton's second law may assist students in their understanding of a deceptively simple-looking equation.
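The suggested turnaround is easy to make concrete in code: treat acceleration as the computed response to the net force, a = ΣF/m, rather than starting from F = ma. A tiny illustration with made-up numbers:

```python
def acceleration(forces, mass):
    """Newton's second law read "turned around": acceleration is the
    response of a mass to the net (summed) force acting on it."""
    return sum(forces) / mass

# A 2 kg block pushed with 10 N against 4 N of friction:
print(acceleration([10.0, -4.0], 2.0))   # 3.0 m/s^2
```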
Control of the constrained planar simple inverted pendulum
NASA Technical Reports Server (NTRS)
Bavarian, B.; Wyman, B. F.; Hemami, H.
1983-01-01
Control of a constrained planar inverted pendulum by eigenstructure assignment is considered. Linear feedback is used to stabilize and decouple the system in such a way that specified subspaces of the state space are invariant for the closed-loop system. The effectiveness of the feedback law is tested by digital computer simulation. Pre-compensation by an inverse plant is used to improve performance.
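Pole placement by linear feedback can be sketched on the simplest version of this plant: a linearized, unit-length planar inverted pendulum x' = Ax + Bu with A = [[0, 1], [g, 0]] and B = [0, 1]^T. The desired poles -2 and -3 are illustrative, not the paper's design values:

```python
def place_gains(g, desired=(5.0, 6.0)):
    """With u = -Kx, the closed loop is A - BK = [[0, 1], [g - k1, -k2]],
    whose characteristic polynomial is s^2 + k2*s + (k1 - g). Matching
    a desired polynomial s^2 + c1*s + c0 (here (s+2)(s+3) = s^2+5s+6)
    gives the gains directly."""
    c1, c0 = desired
    return (g + c0, c1)       # (k1, k2)

g = 9.81
k1, k2 = place_gains(g)

# Closed-loop eigenvalues from the trace and determinant of the 2x2
# closed-loop matrix [[0, 1], [g - k1, -k2]]:
tr, det = -k2, k1 - g
disc = (tr * tr - 4 * det) ** 0.5
print(((tr + disc) / 2, (tr - disc) / 2))   # approximately (-2.0, -3.0)
```

For the constrained pendulum of the paper, eigenstructure assignment goes further than this scalar sketch: it also shapes the closed-loop eigenvectors so that the constraint subspaces stay invariant.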
The Lagrangian Ensemble metamodel for simulating plankton ecosystems
NASA Astrophysics Data System (ADS)
Woods, J. D.
2005-10-01
This paper presents a detailed account of the Lagrangian Ensemble (LE) metamodel for simulating plankton ecosystems. It uses agent-based modelling to describe the life histories of many thousands of individual plankters. The demography of each plankton population is computed from those life histories. So too is bio-optical and biochemical feedback to the environment. The resulting “virtual ecosystem” is a comprehensive simulation of the plankton ecosystem. It is based on phenotypic equations for individual micro-organisms. LE modelling differs significantly from population-based modelling. The latter uses prognostic equations to compute demography and biofeedback directly. LE modelling diagnoses them from the properties of individual micro-organisms, whose behaviour is computed from prognostic equations. That indirect approach permits the ecosystem to adjust gracefully to changes in exogenous forcing. The paper starts with theory: it defines the Lagrangian Ensemble metamodel and explains how LE code performs a number of computations “behind the curtain”. They include budgeting chemicals, and deriving biofeedback and demography from individuals. The next section describes the practice of LE modelling. It starts with designing a model that complies with the LE metamodel. Then it describes the scenario for exogenous properties that provide the computation with initial and boundary conditions. These procedures differ significantly from those used in population-based modelling. The next section shows how LE modelling is used in research, teaching and planning. The practice depends largely on hindcasting to overcome the limits to predictability of weather forecasting. The scientific method explains observable ecosystem phenomena in terms of finer-grained processes that cannot be observed, but which are controlled by the basic laws of physics, chemistry and biology. What-If? Prediction (WIP), used for planning, extends hindcasting by adding events that describe natural or man-made hazards and remedial actions. Verification is based on the Ecological Turing Test, which takes account of uncertainties in the observed and simulated versions of a target ecological phenomenon. The rest of the paper is devoted to a case study designed to show what LE modelling offers the biological oceanographer. The case study is presented in two parts. The first documents the WB model (Woods & Barkmann, 1994) and scenario used to simulate the ecosystem in a mesocosm moored in deep water off the Azores. The second part illustrates the emergent properties of that virtual ecosystem. The behaviour and development of an individual plankton lineage are revealed by an audit trail of the agent used in the computation. The fields of environmental properties reveal the impact of biofeedback. The fields of demographic properties show how changes in individuals cumulatively affect the birth and death rates of their population. This case study documents the virtual ecosystem used by Woods, Perilli and Barkmann (2005; hereafter WPB) to investigate the stability of simulations created by the Lagrangian Ensemble metamodel. The Azores virtual ecosystem was created and analysed on the Virtual Ecology Workbench (VEW), which is described briefly in the Appendix.
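A minimal sketch of the agent-based idea, assuming a made-up growth/division/mortality rule rather than the WB model's phenotypic equations; the point is only that population-level demography is diagnosed from individual life histories instead of being integrated as a prognostic population variable:

```python
import random

# Each agent carries its own state (here, just a size in (0, 1)).
rng = random.Random(1)
agents = [0.5 + 0.5 * rng.random() for _ in range(100)]   # initial sizes

for _ in range(50):
    grown = [s * 1.05 for s in agents]        # individual growth (made-up rate)
    agents = []
    for s in grown:
        if s >= 1.0:
            agents.extend([s / 2, s / 2])     # division: a birth event
        elif rng.random() < 0.02:
            pass                              # mortality: a death event
        else:
            agents.append(s)

# Population-level quantities emerge from the individuals rather than
# being state variables of the model:
print(len(agents), sum(agents) / len(agents))
```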
Numerical Simulations of Thermobaric Explosions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuhl, A L; Bell, J B; Beckner, V E
2007-05-04
A Model of the energy evolution in thermobaric explosions is presented. It is based on the two-phase formulation: conservation laws for the gas and particle phases along with inter-phase interaction terms. It incorporates a Combustion Model based on the mass conservation laws for fuel, air and products; source/sink terms are treated in the fast-chemistry limit appropriate for such gas dynamic fields. The Model takes into account both the afterburning of the detonation products of the booster with air, and the combustion of the fuel (Al or TNT detonation products) with air. Numerical simulations were performed for 1.5-g thermobaric explosions in five different chambers (volumes ranging from 6.6 to 40 liters and length-to-diameter ratios from 1 to 12.5). Computed pressure waveforms were very similar to measured waveforms in all cases, thereby proving that the Model correctly predicts the energy evolution in such explosions. The computed global fuel consumption $\mu(t)$ behaved as an exponential life function. Its derivative $\dot{\mu}(t)$ represents the global rate of fuel consumption. It depends on the rate of turbulent mixing, which controls the rate of energy release in thermobaric explosions.
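A sketch of the reported functional form, assuming a simple exponential life function mu(t) = mu_inf * (1 - exp(-t/tau)); the parameter values are illustrative, not the paper's fitted ones:

```python
import math

def mu(t, mu_inf=1.5e-3, tau=0.5e-3):
    """Exponential life function for global fuel consumption (kg, s;
    hypothetical values): mu rises from 0 toward mu_inf."""
    return mu_inf * (1.0 - math.exp(-t / tau))

def mu_dot(t, mu_inf=1.5e-3, tau=0.5e-3):
    """Its derivative, the global rate of fuel consumption."""
    return (mu_inf / tau) * math.exp(-t / tau)

# Finite-difference check that mu_dot is indeed the rate of mu:
t, h = 1e-3, 1e-9
fd = (mu(t + h) - mu(t - h)) / (2 * h)
print(abs(fd - mu_dot(t)))   # near zero: analytic and numeric rates agree
```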
Effective Control of Computationally Simulated Wing Rock in Subsonic Flow
NASA Technical Reports Server (NTRS)
Kandil, Osama A.; Menzies, Margaret A.
1997-01-01
The unsteady compressible, full Navier-Stokes (NS) equations and the Euler equations of rigid-body dynamics are sequentially solved to simulate the delta wing rock phenomenon. The NS equations are solved time accurately, using the implicit, upwind, Roe flux-difference splitting, finite-volume scheme. The rigid-body dynamics equations are solved using a four-stage Runge-Kutta scheme. Once the wing reaches the limit-cycle response, an active control model using a mass injection system is applied from the wing surface to suppress the limit-cycle oscillation. The active control model is based on state feedback, and the control law is established using pole placement techniques. The control law is based on the feedback of two states: the roll angle and the roll velocity. The primary model of the computational applications consists of an 80 deg swept, sharp-edged delta wing at 30 deg angle of attack in a freestream of Mach number 0.1 and Reynolds number of 0.4 × 10^6. With a limit-cycle roll amplitude of 41.1 deg, the control model is applied, and the results show that within one and one half cycles of oscillation, the wing roll amplitude and velocity are brought to zero.
NASA Technical Reports Server (NTRS)
Newman, Dava J.
1995-01-01
Simulations of astronaut motions during extravehicular activity (EVA) tasks were performed using computational multibody dynamics methods. The application of computational dynamic simulation to EVA was prompted by the realization that physical microgravity simulators have inherent limitations: viscosity in neutral buoyancy tanks; friction in air bearing floors; short duration for parabolic aircraft; and inertia and friction in suspension mechanisms. These limitations can mask critical dynamic effects that later cause problems during actual EVA's performed in space. Methods of formulating dynamic equations of motion for multibody systems are discussed with emphasis on Kane's method, which forms the basis of the simulations presented herein. Formulation of the equations of motion for a two degree of freedom arm is presented as an explicit example. The four basic steps in creating the computational simulations were: system description, in which the geometry, mass properties, and interconnection of system bodies are input to the computer; equation formulation based on the system description; inverse kinematics, in which the angles, velocities, and accelerations of joints are calculated for prescribed motion of the endpoint (hand) of the arm; and inverse dynamics, in which joint torques are calculated for a prescribed motion. A graphical animation and data plotting program, EVADS (EVA Dynamics Simulation), was developed and used to analyze the results of the simulations that were performed on a Silicon Graphics Indigo2 computer. EVA tasks involving manipulation of the Spartan 204 free flying astronomy payload, as performed during Space Shuttle mission STS-63 (February 1995), served as the subject for two dynamic simulations. An EVA crewmember was modeled as a seven segment system with an eighth segment representing the massive payload attached to the hand. 
For both simulations, the initial configuration of the lower body (trunk, upper leg, and lower leg) was a neutral microgravity posture. In the first simulation, the payload was manipulated around a circular trajectory of 0.15 m radius in 10 seconds. It was found that the wrist joint theoretically exceeded its ulnal deviation limit by as much as 49.8 deg and was required to exert torques as high as 26 N-m to accomplish the task, well in excess of the wrist physiological limit of 12 N-m. The largest torque in the first simulation, 52 N-m, occurred in the ankle joint. To avoid these problems, the second simulation placed the arm in a more comfortable initial position and the radius and speed of the circular trajectory were reduced by half. As a result, the joint angles and torques were reduced to values well within their physiological limits. In particular, the maximum wrist torque for the second simulation was only 3 N-m and the maximum ankle torque was only 6 N-m.
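The inverse-kinematics step for the two degree-of-freedom arm example has a classical closed form in the planar case. A sketch with illustrative link lengths (not the paper's anthropometric values), verified by forward kinematics:

```python
import math

def ik_two_link(x, y, l1=0.3, l2=0.3):
    """Planar two-link inverse kinematics: joint angles for a prescribed
    hand position (x, y). Uses the law of cosines for the elbow angle and
    returns the elbow-down branch."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    t2 = math.acos(c2)
    t1 = math.atan2(y, x) - math.atan2(l2 * math.sin(t2),
                                       l1 + l2 * math.cos(t2))
    return t1, t2

def fk_two_link(t1, t2, l1=0.3, l2=0.3):
    """Forward kinematics: hand position from joint angles."""
    return (l1 * math.cos(t1) + l2 * math.cos(t1 + t2),
            l1 * math.sin(t1) + l2 * math.sin(t1 + t2))

t1, t2 = ik_two_link(0.4, 0.2)
print(fk_two_link(t1, t2))   # recovers (0.4, 0.2)
```

In the paper's pipeline this step feeds inverse dynamics: once joint angles, velocities, and accelerations are known along the prescribed hand trajectory, the joint torques follow from the equations of motion.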
Modeling of shock wave propagation in large amplitude ultrasound.
Pinton, Gianmarco F; Trahey, Gregg E
2008-01-01
The Rankine-Hugoniot relation for shock wave propagation describes the shock speed of a nonlinear wave. This paper investigates time-domain numerical methods that solve the nonlinear parabolic wave equation, or the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation, and the conditions they require to satisfy the Rankine-Hugoniot relation. Two numerical methods commonly used in hyperbolic conservation laws are adapted to solve the KZK equation: Godunov's method and the monotonic upwind scheme for conservation laws (MUSCL). It is shown that they satisfy the Rankine-Hugoniot relation regardless of attenuation. These two methods are compared with the current implicit solution based method. When the attenuation is small, such as in water, the current method requires a degree of grid refinement that is computationally impractical. All three numerical methods are compared in simulations for lithotripters and high intensity focused ultrasound (HIFU) where the attenuation is small compared to the nonlinearity because much of the propagation occurs in water. The simulations are performed on grid sizes that are consistent with present-day computational resources but are not sufficiently refined for the current method to satisfy the Rankine-Hugoniot condition. It is shown that satisfying the Rankine-Hugoniot conditions has a significant impact on metrics relevant to lithotripsy (such as peak pressures) and HIFU (intensity). Because the Godunov and MUSCL schemes satisfy the Rankine-Hugoniot conditions on coarse grids, they are particularly advantageous for three-dimensional simulations.
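As a minimal illustration of why Godunov-type schemes capture the correct shock speed, the sketch below applies a first-order Godunov flux to the inviscid Burgers equation, a scalar stand-in for the nonlinearity in the KZK equation (grid parameters are arbitrary, and this is not the paper's solver):

```python
import numpy as np

def godunov_burgers(u, dx, dt, steps):
    """First-order Godunov scheme for u_t + (u^2/2)_x = 0.

    The conservative update guarantees that captured shocks move at
    the Rankine-Hugoniot speed s = (u_left + u_right) / 2.
    """
    f = lambda v: 0.5 * v * v
    for _ in range(steps):
        ul, ur = u[:-1], u[1:]
        # Godunov flux for a convex flux function.
        flux = np.where(
            ul > ur,
            np.maximum(f(ul), f(ur)),                  # shock
            np.where(ul > 0.0, f(ul),
                     np.where(ur < 0.0, f(ur), 0.0)))  # rarefaction
        u[1:-1] = u[1:-1] - dt / dx * (flux[1:] - flux[:-1])
    return u
```

For a step from u = 1 to u = 0 the captured shock travels at speed 1/2 even on a coarse grid, which is the kind of property the paper verifies for the KZK solvers.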
Evaluation of Honeywell Recoverable Computer System (RCS) in Presence of Electromagnetic Effects
NASA Technical Reports Server (NTRS)
Malekpour, Mahyar
1997-01-01
The design and development of a Closed-Loop System to study and evaluate the performance of the Honeywell Recoverable Computer System (RCS) in electromagnetic environments (EME) is presented. The development of a Windows-based software package to handle the time-critical communication of data and commands between the RCS and flight simulation code in real time, while meeting stringent hard deadlines, is also presented. The performance results of the RCS while exercising flight control laws under ideal conditions as well as in the presence of electromagnetic fields are also discussed.
The Design of Fault Tolerant Quantum Dot Cellular Automata Based Logic
NASA Technical Reports Server (NTRS)
Armstrong, C. Duane; Humphreys, William M.; Fijany, Amir
2002-01-01
As transistor geometries are reduced, quantum effects begin to dominate device performance. At some point, transistors cease to have the properties that make them useful computational components. New computing elements must be developed in order to keep pace with Moore's Law. Quantum dot cellular automata (QCA) represent an alternative paradigm to transistor-based logic. QCA architectures that are robust to manufacturing tolerances and defects must be developed. We are developing software that allows the exploration of fault tolerant QCA gate architectures by automating the specification, simulation, analysis and documentation processes.
NASA Astrophysics Data System (ADS)
Weber, Maria Ann; Browning, Matthew; Nelson, Nicholas
2018-01-01
Starspots are windows into a star’s internal dynamo mechanism. However, the manner by which the dynamo-generated magnetic field traverses the stellar interior to emerge at the surface is not especially well understood. Establishing the details of magnetic flux emergence plays a key role in deciphering stellar dynamos and observed starspot properties. In the solar context, insight into this process has been obtained by assuming the magnetism giving rise to sunspots consists partly of idealized thin flux tubes (TFTs). Here, we present three sets of TFT simulations in rotating spherical shells of convection: one representative of the Sun, the second of a solar-like rapid rotator, and the third of a fully convective M dwarf. Our solar simulations reproduce sunspot observables such as low-latitude emergence, tilting action toward the equator following the Joy’s Law trend, and a phenomenon akin to active longitudes. Further, we compare the evolution of rising flux tubes in our (computationally inexpensive) TFT simulations to buoyant magnetic structures that arise naturally in a unique global simulation of a rapidly rotating Sun. We comment on the role of rapid rotation, the Coriolis force, and external torques imparted by the surrounding convection in establishing the trajectories of the flux tubes across the convection zone. In our fully convective M dwarf simulations, the expected starspot latitudes deviate from the solar trend, favoring significantly poleward latitudes unless the differential rotation is sufficiently prograde or the magnetic field is strongly super-equipartition. Together our work provides a link between dynamo-generated magnetic fields, turbulent convection, and observations of starspots along the lower main sequence.
NASA Astrophysics Data System (ADS)
Benmansour, Abdelkrim; Liazid, Abdelkrim; Logerais, Pierre-Olivier; Durastanti, Jean-Félix
2016-02-01
Cryogenic propellants LOx/H2 are used at very high pressure in rocket engine combustion. The description of the combustion process in such applications is very complex, due essentially to the supercritical regime: the ideal gas law becomes invalid. In order to capture the average characteristics of this combustion process, numerical computations are performed using a model based on a one-phase multi-component approach. Such work requires fluid properties and a correct definition of the mixture behavior, generally described by cubic equations of state with appropriate thermodynamic relations validated against the NIST data. In this study we consider an alternative way to get the effect of real gas by testing the volume-weighted mixing law in association with the component transport properties, using directly the NIST library data fitting, including the supercritical regime range. The numerical simulations are carried out using a 3D RANS approach associated with two tested turbulence models, the standard k-epsilon model and the realizable k-epsilon one. The combustion model is also associated with two chemical reaction mechanisms. The first one is a one-step generic chemical reaction and the second one is a two-step chemical reaction. The obtained results, such as temperature profiles, recirculation zones, visible flame lengths and distributions of OH species, are discussed.
2011-02-01
...transition characteristics as well as the effectiveness of 2-D strip trips to simulate the joint between the nosecap and body of the vehicle and 3-D...diamond shaped trips, to simulate the fasteners on a closeout panel that will be on one side of the flight vehicle. In order to accomplish this, global
NASA Technical Reports Server (NTRS)
Young, L. R.
1975-01-01
Preliminary tests and evaluation are presented of pilot performance during landing (flight paths) using computer generated images (video tapes). Psychophysiological factors affecting pilot visual perception were measured. A turning flight maneuver (pitch and roll) was specifically studied using a training device, and the scaling laws involved were determined. Also presented are medical studies (abstracts) on human response to gravity variations without visual cues, acceleration stimuli effects on the semicircular canals, neurons affecting eye movements, and vestibular tests.
NASA Astrophysics Data System (ADS)
Goldenberg, J.; Libai, B.; Solomon, S.; Jan, N.; Stauffer, D.
2000-09-01
A percolation model is presented, with computer simulations for illustrations, to show how the sales of a new product may penetrate the consumer market. We review the traditional approach in the marketing literature, which is based on differential or difference equations similar to the logistic equation (Bass, Manage. Sci. 15 (1969) 215). This mean-field approach is contrasted with the discrete percolation on a lattice, with simulations of "social percolation" (Solomon et al., Physica A 277 (2000) 239) in two to five dimensions giving power laws instead of exponential growth, and strong fluctuations right at the percolation threshold.
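A minimal site-percolation sketch of this "social percolation" picture follows. The lattice size, seeding, and acceptance probability are illustrative choices, not the paper's parameters:

```python
import numpy as np

def percolation_adoption(p, size=100, seed=0):
    """Site-percolation sketch of product adoption: each consumer
    'accepts' the product with probability p, and adoption spreads
    from one seeded adopter via nearest-neighbour flood fill.

    Returns the fraction of accepting consumers actually reached.
    """
    rng = np.random.default_rng(seed)
    open_site = rng.random((size, size)) < p
    open_site[0, 0] = True          # seed the initial adopter
    reached = np.zeros_like(open_site)
    reached[0, 0] = True
    frontier = [(0, 0)]
    while frontier:
        i, j = frontier.pop()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if (0 <= ni < size and 0 <= nj < size
                    and open_site[ni, nj] and not reached[ni, nj]):
                reached[ni, nj] = True
                frontier.append((ni, nj))
    return reached.sum() / open_site.sum()
```

Below the percolation threshold the reached cluster stays local (sales die out); above it a system-spanning cluster appears, which is the discrete analogue of market takeoff in the mean-field models.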
Passivity-Based Control for Two-Wheeled Robot Stabilization
NASA Astrophysics Data System (ADS)
Uddin, Nur; Aryo Nugroho, Teguh; Agung Pramudito, Wahyu
2018-04-01
A passivity-based control system design for two-wheeled robot (TWR) stabilization is presented. A TWR is a statically unstable nonlinear system. A control system is applied to actively stabilize the TWR. The passivity-based control method is applied to design the control system. The design results in a state feedback control law that makes the TWR closed-loop system globally asymptotically stable (GAS). The GAS property is proven mathematically. The TWR stabilization is demonstrated through computer simulation. The simulation results show that the designed control system is able to stabilize the TWR.
NASA Astrophysics Data System (ADS)
Ishihara, Takashi; Kaneda, Yukio; Morishita, Koji; Yokokawa, Mitsuo; Uno, Atsuya
2017-11-01
We report some results of a series of high resolution direct numerical simulations (DNSs) of forced incompressible isotropic turbulence with up to 12288³ grid points and Taylor microscale Reynolds number Rλ ≈ 2300. The DNSs show that there exists a scale range, approximately at 100 < r/η < 600 (η is the Kolmogorov length scale), where the second-order longitudinal velocity structure function fits well to a simple power-law scaling with respect to the distance r between the two points. However, the magnitude of the structure function depends on Rλ, i.e., the structure function normalized by the mean rate of energy dissipation and r is not independent of Rλ or the viscosity. This implies that the range 100 < r/η < 600 at Rλ up to 2300 is not the 'inertial subrange', whose statistics are assumed to be independent of viscosity and Rλ in many turbulence theories. The measured exponents are not to be confused with those in the 'inertial subrange': the constancy of the scaling exponent of a structure function over a certain range does not necessarily mean that the measured exponent is the scaling exponent in the 'inertial subrange'. This raises the question, "Where is the 'inertial subrange' in experiments and DNSs?" This study used the computational resources of the K computer provided by the RIKEN AICS through the HPCI System Research projects (ID:hp160102 and ID:hp170087). This research was partly supported by JSPS KAKENHI (S)16H06339 and (B)15H03603.
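The power-law diagnostic discussed here, checking whether a structure function has a constant local slope over a range of r, can be sketched as follows (synthetic data standing in for DNS measurements):

```python
import numpy as np

def local_scaling_exponent(r, s2):
    """Local slope zeta(r) = d ln S2 / d ln r.

    A genuine power-law range shows up as a plateau in zeta(r).
    As the abstract stresses, a plateau alone does not prove that r
    lies in the inertial subrange: the prefactor may still depend on
    the Reynolds number.
    """
    return np.gradient(np.log(s2), np.log(r))
```

For an exact power law S2 = r^(2/3) the local exponent is 2/3 at every separation, which is the behavior to look for (and then to test for Rλ dependence) in the DNS data.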
Percutaneous spinal fixation simulation with virtual reality and haptics.
Luciano, Cristian J; Banerjee, P Pat; Sorenson, Jeffery M; Foley, Kevin T; Ansari, Sameer A; Rizzi, Silvio; Germanwala, Anand V; Kranzler, Leonard; Chittiboina, Prashant; Roitberg, Ben Z
2013-01-01
In this study, we evaluated the use of a part-task simulator with 3-dimensional and haptic feedback as a training tool for percutaneous spinal needle placement. To evaluate the learning effectiveness in terms of entry point/target point accuracy of percutaneous spinal needle placement on a high-performance augmented-reality and haptic technology workstation with the ability to control the duration of computer-simulated fluoroscopic exposure, thereby simulating an actual situation. Sixty-three fellows and residents performed needle placement on the simulator. A virtual needle was percutaneously inserted into a virtual patient's thoracic spine derived from an actual patient computed tomography data set. Ten of 126 needle placement attempts by 63 participants ended in failure for a failure rate of 7.93%. From all 126 needle insertions, the average error (15.69 vs 13.91), average fluoroscopy exposure (4.6 vs 3.92), and average individual performance score (32.39 vs 30.71) improved from the first to the second attempt. Performance accuracy yielded P = .04 from a 2-sample t test in which the rejected null hypothesis assumes no improvement in performance accuracy from the first to second attempt in the test session. The experiments showed evidence (P = .04) of performance accuracy improvement from the first to the second percutaneous needle placement attempt. This result, combined with previous learning retention and/or face validity results of using the simulator for open thoracic pedicle screw placement and ventriculostomy catheter placement, supports the efficacy of augmented reality and haptics simulation as a learning tool.
"Mysteries" of the First and Second Laws of Thermodynamics
ERIC Educational Resources Information Center
Battino, Rubin
2007-01-01
The thermodynamic concepts of First and Second Laws with respect to the entropy function are described using atoms and molecules and probability as manifested in statistical mechanics. The First Law is conceptually understood as [Delta]U = Q + W and the Second Law of Thermodynamics and the entropy function have provided the probability and…
Second-law efficiency of solar-thermal cavity receivers
NASA Technical Reports Server (NTRS)
Moynihan, P. I.
1983-01-01
Properly quantified performance of a solar-thermal cavity receiver must not only account for the energy gains and losses as dictated by the First Law of thermodynamics, but it must also account for the quality of that energy. However, energy quality can only be determined from the Second Law. An equation for the Second Law efficiency of a cavity receiver is derived from the definition of available energy, which is a thermodynamic property that measures the maximum amount of work obtainable when a system is allowed to come into unrestrained equilibrium with the surrounding environment. The fundamental concepts of the entropy and availability of radiation were explored from which a workable relationship among the reflected cone half-angle, the insolation, and the concentrator geometric characteristics was developed as part of the derivation of the Second Law efficiency. First and Second Law efficiencies were compared for data collected from two receivers that were designed for different purposes. A Second Law approach to quantifying the performance of a solar-thermal cavity receiver lends greater insight into the total performance than does the conventional First Law method.
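A generic exergy-based sketch of the Second Law efficiency idea is shown below. It uses the simple Carnot availability factor (1 - T0/T) to weight heat by its quality, not the paper's full treatment of radiation entropy and cone half-angle; all names and values are illustrative:

```python
def second_law_efficiency(q_out, T_receiver, q_in, T_source, T0=298.15):
    """Ratio of the availability (exergy) of the heat delivered by the
    receiver to the availability of the incident heat, using the
    Carnot factor (1 - T0/T) as the quality weight.

    q_out, q_in : heat rates (same units); T in kelvin.
    """
    exergy_out = q_out * (1.0 - T0 / T_receiver)
    exergy_in = q_in * (1.0 - T0 / T_source)
    return exergy_out / exergy_in
```

Because the receiver runs far cooler than the effective source temperature, the Second Law efficiency computed this way is always below the First Law (energy) efficiency q_out/q_in, which is the extra information the paper argues the Second Law provides.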
Ontological and Epistemological Issues Regarding Climate Models and Computer Experiments
NASA Astrophysics Data System (ADS)
Vezer, M. A.
2010-12-01
Recent philosophical discussions (Parker 2009; Frigg and Reiss 2009; Winsberg 2009; Morgan 2002, 2003, 2005; Guala 2002) about the ontology of computer simulation experiments and the epistemology of inferences drawn from them are of particular relevance to climate science, as computer modeling and analysis are instrumental in understanding climatic systems. How do computer simulation experiments compare with traditional experiments? Is there an ontological difference between these two methods of inquiry? Are there epistemological considerations that result in one type of inference being more reliable than the other? What are the implications of these questions with respect to climate studies that rely on computer simulation analysis? In this paper, I examine these philosophical questions within the context of climate science, instantiating concerns in the philosophical literature with examples found in analysis of global climate change. I concentrate on Wendy Parker’s (2009) account of computer simulation studies, which offers a treatment of these and other questions relevant to investigations of climate change involving such modelling. Two theses at the center of Parker’s account will be the focus of this paper. The first is that computer simulation experiments ought to be regarded as straightforward material experiments; which is to say, there is no significant ontological difference between computer and traditional experimentation. Parker’s second thesis is that some of the emphasis on the epistemological importance of materiality has been misplaced. I examine both of these claims. First, I inquire as to whether viewing computer and traditional experiments as ontologically similar in the way she does implies that there is no proper distinction between abstract experiments (such as ‘thought experiments’ as well as computer experiments) and traditional ‘concrete’ ones.
Second, I examine the notion of materiality (i.e., the material commonality between object and target systems) and some arguments for the claim that materiality entails some inferential advantage to traditional experimentation. I maintain that Parker’s account of the ontology of computer simulations has some interesting though potentially problematic implications regarding conventional distinctions between abstract and concrete methods of inquiry. With respect to her account of materiality, I outline and defend an alternative account, posited by Mary Morgan (2002, 2003, 2005), which holds that ontological similarity between target and object systems confers some epistemological advantage to traditional forms of experimental inquiry.
Improved Simulations of Astrophysical Plasmas: Computation of New Dielectronic Recombination Data
NASA Technical Reports Server (NTRS)
Gorczyca, T. W.; Korista, K. T.; Zatsarinny, O.; Badnell, N. R.; Savin, D. W.
2002-01-01
Here we recap the works of two posters presented at the 2002 NASA Laboratory Astrophysics Workshop. The first was Shortcomings of the R-Matrix Method for Treating Dielectronic Recombination. The second was Computation of Dielectronic Recombination Data for the Oxygen-Like Isoelectronic Sequence.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Butler, B.D.; Hanley, H.J.M.; Straty, G.C.
An experimental small angle neutron scattering (SANS) study of dense silica gels, prepared from suspensions of 24 nm colloidal silica particles at several volume fractions θ, is discussed. Provided that θ ≲ 0.18, the scattered intensity at small wave vectors q increases as the gelation proceeds, and the structure factor S(q, t → ∞) of the gel exhibits apparent power-law behavior. Power-law behavior is also observed, even for samples with θ > 0.18, when the gel is formed under an applied shear. Shear also enhances the diffraction maximum corresponding to the inter-particle contact distance of the gel. Difficulties encountered when trying to interpret SANS data from these dense systems are outlined. Results of computer simulations intended to mimic gel formation, including computations of S(q, t), are discussed. Comments on a method to extract a fractal dimension characterizing the gel are included.
Non-Parabolic Hydrodynamic Formulations for the Simulation of Inhomogeneous Semiconductor Devices
NASA Technical Reports Server (NTRS)
Smith, A. W.; Brennan, K. F.
1996-01-01
Hydrodynamic models are becoming prevalent design tools for small scale devices and other devices in which high energy effects can dominate transport. Most current hydrodynamic models use a parabolic band approximation to obtain fairly simple conservation equations. Interest in accounting for band structure effects in hydrodynamic device simulation has begun to grow, since parabolic models cannot fully describe the transport in state-of-the-art devices due to the distribution populating non-parabolic states within the band. This paper presents two different non-parabolic formulations of the hydrodynamic model suitable for the simulation of inhomogeneous semiconductor devices. The first formulation uses the Kane dispersion relationship, (ħk)²/2m = W(1 + αW). The second formulation makes use of a power law, (ħk)²/2m = xW^y, for the dispersion relation. Hydrodynamic models which use the first formulation rely on the binomial expansion to obtain moment equations with closed form coefficients. This limits the energy range over which the model is valid. The power law formulation readily produces closed form coefficients similar to those obtained using the parabolic band approximation. However, the fitting parameters (x, y) are only valid over a limited energy range. The physical significance of the band non-parabolicity is discussed, as well as the advantages/disadvantages and approximations of the two non-parabolic models. A companion paper describes device simulations based on the three dispersion relationships: parabolic, Kane dispersion, and power law dispersion.
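The Kane relation (ħk)²/2m = W(1 + αW) can be inverted for the band energy W in closed form, which is a quick way to see how non-parabolicity lowers the energy relative to the parabolic value. The α value below is illustrative:

```python
from math import sqrt

def kane_energy(e_parabolic, alpha):
    """Invert (hbar*k)^2 / 2m = W * (1 + alpha*W) for the band energy W,
    taking the physical (positive) root of the quadratic.

    e_parabolic is the parabolic-band energy (hbar*k)^2 / 2m; as
    alpha -> 0 the result recovers W = e_parabolic.
    """
    return (sqrt(1.0 + 4.0 * alpha * e_parabolic) - 1.0) / (2.0 * alpha)
```

The binomial expansion of this square root is exactly the step that restricts the energy range of validity of the Kane-based moment equations mentioned above.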
NASA Astrophysics Data System (ADS)
Duc-Toan, Nguyen; Tien-Long, Banh; Young-Suk, Kim; Dong-Won, Jung
2011-08-01
In this study, a modified Johnson-Cook (J-C) model and a new method to determine the J-C material parameters are proposed to predict more accurately the stress-strain curves of tensile tests at elevated temperatures. A MATLAB tool is used to determine the material parameters by fitting a curve following Ludwick's hardening law at various elevated temperatures. Those hardening-law parameters are then utilized to determine the modified J-C model parameters. The modified J-C model shows better prediction compared to the conventional one. As a first verification, an FEM tensile test simulation based on the isotropic hardening model for boron sheet steel at elevated temperatures was carried out via a user-material subroutine, using an explicit finite element code, and compared with the measurements. The temperature decrease of all elements due to the air cooling process was then calculated with the modified J-C model and coded in a VUMAT subroutine for tensile test simulation of the cooling process. The modified J-C model showed good agreement between the simulation results and the corresponding experiments. The second investigation applied the model to V-bending spring-back prediction of magnesium alloy sheets at elevated temperatures. Here, the combination of the proposed J-C model with a modified hardening law considering the unusual plastic behaviour of magnesium alloy sheet was adopted for FEM simulation of V-bending spring-back prediction and showed good agreement with the corresponding experiments.
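For reference, the conventional J-C flow stress that the modified model builds on has the well-known multiplicative form sketched below. The parameter values in any real use come from fitting; the defaults here are placeholders, not the paper's values:

```python
from math import log

def johnson_cook_stress(strain, strain_rate, T,
                        A, B, n, C, m,
                        ref_rate=1.0, T_room=293.15, T_melt=1800.0):
    """Conventional Johnson-Cook flow stress:

        sigma = (A + B*eps^n) * (1 + C*ln(epsdot/epsdot0)) * (1 - T*^m)

    with homologous temperature T* = (T - T_room) / (T_melt - T_room).
    """
    t_star = (T - T_room) / (T_melt - T_room)
    return ((A + B * strain ** n)
            * (1.0 + C * log(strain_rate / ref_rate))
            * (1.0 - t_star ** m))
```

At the reference strain rate and room temperature the rate and thermal factors drop out, leaving the Ludwick-type hardening term A + Bε^n, which is precisely the curve the paper fits first.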
Wall Modeled Large Eddy Simulation of Airfoil Trailing Edge Noise
NASA Astrophysics Data System (ADS)
Kocheemoolayil, Joseph; Lele, Sanjiva
2014-11-01
Large eddy simulation (LES) of airfoil trailing edge noise has largely been restricted to low Reynolds numbers due to prohibitive computational cost. Wall modeled LES (WMLES) is a computationally cheaper alternative that makes full-scale Reynolds numbers relevant to large wind turbines accessible. A systematic investigation of trailing edge noise prediction using WMLES is conducted. Detailed comparisons are made with experimental data. The stress boundary condition from a wall model does not constrain the fluctuating velocity to vanish at the wall. This limitation has profound implications for trailing edge noise prediction. The simulation over-predicts the intensity of fluctuating wall pressure and far-field noise. An improved wall model formulation that minimizes the over-prediction of fluctuating wall pressure is proposed and carefully validated. The flow configurations chosen for the study are from the workshop on benchmark problems for airframe noise computations. The large eddy simulation database is used to examine the adequacy of scaling laws that quantify the dependence of trailing edge noise on Mach number, Reynolds number and angle of attack. Simplifying assumptions invoked in engineering approaches towards predicting trailing edge noise are critically evaluated. We gratefully acknowledge financial support from GE Global Research and thank Cascade Technologies Inc. for providing access to their massively-parallel large eddy simulation framework.
gpuSPHASE-A shared memory caching implementation for 2D SPH using CUDA
NASA Astrophysics Data System (ADS)
Winkler, Daniel; Meister, Michael; Rezavand, Massoud; Rauch, Wolfgang
2017-04-01
Smoothed particle hydrodynamics (SPH) is a meshless Lagrangian method that has been successfully applied to computational fluid dynamics (CFD), solid mechanics and many other multi-physics problems. Using the method to solve transport phenomena in process engineering requires the simulation of several days to weeks of physical time. Based on the high computational demand of CFD such simulations in 3D need a computation time of years so that a reduction to a 2D domain is inevitable. In this paper gpuSPHASE, a new open-source 2D SPH solver implementation for graphics devices, is developed. It is optimized for simulations that must be executed with thousands of frames per second to be computed in reasonable time. A novel caching algorithm for Compute Unified Device Architecture (CUDA) shared memory is proposed and implemented. The software is validated and the performance is evaluated for the well established dambreak test case.
Numerical aerodynamic simulation facility preliminary study: Executive study
NASA Technical Reports Server (NTRS)
1977-01-01
A computing system was designed with the capability of providing an effective throughput of one billion floating point operations per second for three dimensional Navier-Stokes codes. The methodology used in defining the baseline design, and the major elements of the numerical aerodynamic simulation facility are described.
NASA Astrophysics Data System (ADS)
Pratomo, Ariawan Wahyu; Muchammad; Tauviqirrahman, Mohammad; Jamari; Bayuseno, Athanasius P.
2016-04-01
Polymer thickened oils are the most preferred materials for modern lubrication applications due to their high shear stability. The present paper explores the lubrication mechanism in a sliding contact lubricated with polymer thickened oil, considering cavitation. Investigations are carried out using a numerical method based on the commercial CFD (computational fluid dynamics) software ANSYS Fluent to assess the tribological characteristics (i.e., the hydrodynamic pressure distribution) of the lubricated sliding contact. The Zwart-Gerber-Belamri cavitation model is adopted in this simulation to predict the extent of the full film region. The polymer thickened oil is characterized as a non-Newtonian power-law fluid. The simulation results show that cavitation leads to a lower pressure profile compared to that without cavitation. In addition, it is concluded that the lubrication performance with polymer thickened oil is strongly dependent on the power-law index of the lubricant.
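The non-Newtonian power-law (Ostwald-de Waele) characterization mentioned above is simply the following relation; the consistency K and index n are fluid-specific fit parameters:

```python
def power_law_viscosity(shear_rate, K, n):
    """Apparent viscosity of an Ostwald-de Waele (power-law) fluid:

        mu = K * shear_rate**(n - 1)

    n < 1 gives shear-thinning behavior (viscosity falls with shear
    rate); n = 1 recovers a Newtonian fluid with constant mu = K.
    """
    return K * shear_rate ** (n - 1.0)
```

In a CFD lubrication model this expression replaces the constant viscosity in the momentum equations, which is why the computed pressure field ends up sensitive to the power-law index n.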
A Molecular Dynamics Simulation of the Turbulent Couette Minimal Flow Unit
NASA Astrophysics Data System (ADS)
Smith, Edward
2016-11-01
What happens to turbulent motions below the Kolmogorov length scale? In order to explore this question, a 300 million molecule Molecular Dynamics (MD) simulation is presented for the minimal Couette channel in which turbulence can be sustained. The regeneration cycle and turbulent statistics show excellent agreement to continuum based computational fluid dynamics (CFD) at Re=400. As MD requires only Newton's laws and a form of inter-molecular potential, it captures a much greater range of phenomena without requiring the assumptions of Newton's law of viscosity, thermodynamic equilibrium, fluid isotropy or the limitation of grid resolution. The fundamental nature of MD means it is uniquely placed to explore the nature of turbulent transport. A number of unique insights from MD are presented, including energy budgets, sub-grid turbulent energy spectra, probability density functions, Lagrangian statistics and fluid wall interactions. EPSRC Post Doctoral Prize Fellowship.
Costas loop lock detection in the advanced receiver
NASA Technical Reports Server (NTRS)
Mileant, A.; Hinedi, S.
1989-01-01
The advanced receiver currently being developed uses a Costas digital loop to demodulate the subcarrier. Previous analyses of lock detector algorithms for Costas loops have ignored the effects of the inherent correlation between the samples of the phase-error process. Accounting for this correlation is necessary to achieve the desired lock-detection probability for a given false-alarm rate. Both analysis and simulations are used to quantify the effects of phase correlation on lock detection for the square-law and the absolute-value type detectors. Results are obtained which depict the lock-detection probability as a function of loop signal-to-noise ratio for a given false-alarm rate. The mathematical model and computer simulation show that the square-law detector experiences less degradation due to phase jitter than the absolute-value detector and that the degradation in detector signal-to-noise ratio is more pronounced for square-wave than for sine-wave signals.
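The two detector statistics compared in the paper can be sketched from in-phase and quadrature arm samples. This is a toy noiseless illustration; the paper's analysis adds phase jitter and the correlation between phase-error samples:

```python
import numpy as np

def lock_detectors(i_arm, q_arm):
    """Square-law and absolute-value Costas lock-detector statistics.

    In lock, signal power concentrates in the in-phase arm, so both
    statistics are positive; out of lock, power splits evenly between
    the arms and both statistics average toward zero.
    """
    square_law = np.mean(i_arm ** 2 - q_arm ** 2)
    abs_value = np.mean(np.abs(i_arm) - np.abs(q_arm))
    return square_law, abs_value
```

Thresholding either statistic against a value chosen for a target false-alarm rate gives the lock-detection decision whose probability the paper quantifies.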
Kirchhoff and Ohm in action: solving electric currents in continuous extended media
NASA Astrophysics Data System (ADS)
Dolinko, A. E.
2018-03-01
In this paper we show a simple and versatile computational simulation method for determining electric currents and electric potential in 2D and 3D media with an arbitrary distribution of resistivity. One of the highlights of the proposed method is that the simulation space containing the distribution of resistivity and the points of externally applied voltage are introduced by means of digital images or bitmaps, which easily allows simulating any phenomena involving distributions of resistivity. The simulation is based on Kirchhoff’s laws of electric currents and is solved by means of an iterative procedure. The method is also generalised to account for media with distributions of reactive impedance. At the end of this work, we show an example application of the simulation, consisting of reproducing the response obtained with the geophysical method of electric resistivity tomography in the presence of soil cracks. This paper is aimed at undergraduate or graduate students interested in computational physics and electricity, and also researchers involved in the area of continuous electric media, who could find in it a simple and powerful tool for investigation.
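A minimal version of the iterative Kirchhoff's-law solve the paper describes might look as follows. Conductance averaging and boundary handling are simplified for clarity, and the conductivity map would normally be loaded from a bitmap as the paper proposes:

```python
import numpy as np

def solve_potential(sigma, fixed, v_fixed, iters=5000):
    """Jacobi iteration enforcing Kirchhoff's current law at every
    interior node of a 2D grid with conductivity map sigma.

    fixed is a boolean mask of nodes held at prescribed potential
    v_fixed (the 'electrodes'); all other nodes relax until the net
    current into each node vanishes.
    """
    V = np.where(fixed, v_fixed, 0.0).astype(float)
    for _ in range(iters):
        Vn = V.copy()
        # Current balance: V = sum(g_nb * V_nb) / sum(g_nb), with the
        # conductance toward each neighbour approximated by sigma there.
        num = (sigma[:-2, 1:-1] * V[:-2, 1:-1] + sigma[2:, 1:-1] * V[2:, 1:-1]
               + sigma[1:-1, :-2] * V[1:-1, :-2] + sigma[1:-1, 2:] * V[1:-1, 2:])
        den = (sigma[:-2, 1:-1] + sigma[2:, 1:-1]
               + sigma[1:-1, :-2] + sigma[1:-1, 2:])
        Vn[1:-1, 1:-1] = num / den
        V = np.where(fixed, v_fixed, Vn)
    return V
```

Once the potential has converged, Ohm's law applied to neighbouring node pairs yields the local current distribution, which is what the resistivity-tomography example interrogates.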
Very high order PNPM schemes on unstructured meshes for the resistive relativistic MHD equations
NASA Astrophysics Data System (ADS)
Dumbser, Michael; Zanotti, Olindo
2009-10-01
In this paper we propose the first better than second order accurate method in space and time for the numerical solution of the resistive relativistic magnetohydrodynamics (RRMHD) equations on unstructured meshes in multiple space dimensions. The nonlinear system under consideration is purely hyperbolic and contains a source term, the one for the evolution of the electric field, that becomes stiff for low values of the resistivity. For the spatial discretization we propose to use high order PNPM schemes as introduced in Dumbser et al. [M. Dumbser, D. Balsara, E.F. Toro, C.D. Munz, A unified framework for the construction of one-step finite volume and discontinuous Galerkin schemes, Journal of Computational Physics 227 (2008) 8209-8253] for hyperbolic conservation laws and a high order accurate unsplit time-discretization is achieved using the element-local space-time discontinuous Galerkin approach proposed in Dumbser et al. [M. Dumbser, C. Enaux, E.F. Toro, Finite volume schemes of very high order of accuracy for stiff hyperbolic balance laws, Journal of Computational Physics 227 (2008) 3971-4001] for one-dimensional balance laws with stiff source terms. The divergence-free character of the magnetic field is accounted for through the divergence cleaning procedure of Dedner et al. [A. Dedner, F. Kemm, D. Kröner, C.-D. Munz, T. Schnitzer, M. Wesenberg, Hyperbolic divergence cleaning for the MHD equations, Journal of Computational Physics 175 (2002) 645-673]. To validate our high order method we first solve some numerical test cases for which exact analytical reference solutions are known and we also show numerical convergence studies in the stiff limit of the RRMHD equations using PNPM schemes from third to fifth order of accuracy in space and time. 
We also present some applications with shock waves such as a classical shock tube problem with different values for the conductivity as well as a relativistic MHD rotor problem and the relativistic equivalent of the Orszag-Tang vortex problem. We have verified that the proposed method can handle equally well the resistive regime and the stiff limit of ideal relativistic MHD. For these reasons it provides a powerful tool for relativistic astrophysical simulations involving the appearance of magnetic reconnection.
Predicted torque equilibrium attitude utilization for Space Station attitude control
NASA Technical Reports Server (NTRS)
Kumar, Renjith R.; Heck, Michael L.; Robertson, Brent P.
1990-01-01
An approximate knowledge of the torque equilibrium attitude (TEA) is shown to improve the performance of a control moment gyroscope (CMG) momentum management/attitude control law for Space Station Freedom. The linearized equations of motion are used in conjunction with a state transformation to obtain a control law which uses full state feedback and the predicted TEA to minimize both attitude excursions and CMG peak and secular momentum. The TEA can be computationally determined either by observing the steady-state attitude of a 'controlled' spacecraft starting from an arbitrary initial attitude, or by simulating a fixed-attitude spacecraft flying in the desired orbit subject to realistic environmental disturbance models.
Analytic derivation and evaluation of a state-trajectory control law for dc-to-dc converters
NASA Technical Reports Server (NTRS)
Burns, W. W., III; Wilson, T. G.
1977-01-01
Mathematical representations of a state-plane switching boundary employed in a state-trajectory control law for dc-to-dc converters are derived. Several levels of approximation to the switching boundary equations are presented, together with an evaluation of the effects of nonideal operating characteristics of converter power stage components on the shape and location of the boundary and the behavior of a system controlled by it. Digital computer simulations of dc-to-dc converters operating in conjunction with each of these levels of control are presented and evaluated with respect to changes in transient and steady-state performance.
Nature of the anomalies in the supercooled liquid state of the mW model of water.
Holten, Vincent; Limmer, David T; Molinero, Valeria; Anisimov, Mikhail A
2013-05-07
The thermodynamic properties of the supercooled liquid state of the mW model of water show anomalous behavior. Like in real water, the heat capacity and compressibility sharply increase upon supercooling. One of the possible explanations of these anomalies, the existence of a second (liquid-liquid) critical point, is not supported by simulations for this model. In this work, we reproduce the anomalies of the mW model with two thermodynamic scenarios: one based on a non-ideal "mixture" with two different types of local order of the water molecules, and one based on weak crystallization theory. We show that both descriptions accurately reproduce the model's basic thermodynamic properties. However, the coupling constant required for the power laws implied by weak crystallization theory is too large relative to the regular backgrounds, contradicting assumptions of weak crystallization theory. Fluctuation corrections outside the scope of this work would be necessary to fit the forms predicted by weak crystallization theory. For the two-state approach, the direct computation of the low-density fraction of molecules in the mW model is in agreement with the prediction of the phenomenological equation of state. The non-ideality of the "mixture" of the two states never becomes strong enough to cause liquid-liquid phase separation, also in agreement with simulation results.
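A minimal sketch of the two-state ("mixture") idea, assuming a regular-solution form of the non-ideality; the free-energy expression and parameter values below are illustrative, not the phenomenological equation of state actually used by the authors:

```python
import numpy as np

def low_density_fraction(dG_over_kT, w_over_kT):
    """Equilibrium fraction x of the low-density local structure for a
    non-ideal two-state mixture with free energy per molecule (units of kT)
        g(x) = x*dG + x*ln(x) + (1-x)*ln(1-x) + w*x*(1-x).
    Setting dg/dx = 0 gives dG + ln(x/(1-x)) + w*(1-2x) = 0; for w < 2
    (non-ideality too weak for liquid-liquid separation, as found for mW)
    the left side is monotonic in x, so bisection finds the unique root."""
    f = lambda x: dG_over_kT + np.log(x / (1.0 - x)) + w_over_kT * (1.0 - 2.0 * x)
    lo, hi = 1e-12, 1.0 - 1e-12
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With a temperature-dependent dG, the same root-finding yields the low-density fraction curve that the abstract compares against direct computation in the mW model.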
Nature of the anomalies in the supercooled liquid state of the mW model of water
NASA Astrophysics Data System (ADS)
Holten, Vincent; Limmer, David T.; Molinero, Valeria; Anisimov, Mikhail A.
2013-05-01
The thermodynamic properties of the supercooled liquid state of the mW model of water show anomalous behavior. Like in real water, the heat capacity and compressibility sharply increase upon supercooling. One of the possible explanations of these anomalies, the existence of a second (liquid-liquid) critical point, is not supported by simulations for this model. In this work, we reproduce the anomalies of the mW model with two thermodynamic scenarios: one based on a non-ideal "mixture" with two different types of local order of the water molecules, and one based on weak crystallization theory. We show that both descriptions accurately reproduce the model's basic thermodynamic properties. However, the coupling constant required for the power laws implied by weak crystallization theory is too large relative to the regular backgrounds, contradicting assumptions of weak crystallization theory. Fluctuation corrections outside the scope of this work would be necessary to fit the forms predicted by weak crystallization theory. For the two-state approach, the direct computation of the low-density fraction of molecules in the mW model is in agreement with the prediction of the phenomenological equation of state. The non-ideality of the "mixture" of the two states never becomes strong enough to cause liquid-liquid phase separation, also in agreement with simulation results.
Use of a computer model in the understanding of erythropoietic control mechanisms
NASA Technical Reports Server (NTRS)
Dunn, C. D. R.
1978-01-01
During an eight-week visit, approximately 200 simulations using the computer model for the regulation of erythropoiesis were carried out in four general areas, including simulations of hypoxia and dehydration with the human model and evaluation of the simulation of dehydration using the mouse model. The experiments led to two considerations for the models: firstly, a direct relationship between erythropoietin concentration and bone marrow sensitivity to the hormone and, secondly, a partial correction of tissue hypoxia prior to compensation by an increased hematocrit. The latter change in particular produced a better simulation of the effects of hypoxia on plasma erythropoietin concentrations.
NASA Technical Reports Server (NTRS)
Yanosy, James L.
1988-01-01
A Model Description Document for the Emulation Simulation Computer Model was published previously. The model consisted of a detailed model (emulation) of a SAWD CO2 removal subsystem which operated with much less detailed (simulation) models of a cabin, crew, and condensing and sensible heat exchangers. The purpose was to explore the utility of such an emulation/simulation combination in the design, development, and test of a piece of ARS hardware, SAWD. Extensions to this original effort are presented. The first extension is an update of the model to reflect changes in the SAWD control logic which resulted from test. Slight changes were also made to the SAWD model to permit restarting and to improve the iteration technique. The second extension is the development of simulation models for more pieces of air and water processing equipment. Models are presented for: EDC, Molecular Sieve, Bosch, Sabatier, a new condensing heat exchanger, SPE, SFWES, Catalytic Oxidizer, and multifiltration. The third extension is to create two system simulations using these models. The first system presented consists of one air and one water processing system. The second consists of a potential air revitalization system.
NASA Technical Reports Server (NTRS)
Yanosy, James L.
1988-01-01
A user's Manual for the Emulation Simulation Computer Model was published previously. The model consisted of a detailed model (emulation) of a SAWD CO2 removal subsystem which operated with much less detailed (simulation) models of a cabin, crew, and condensing and sensible heat exchangers. The purpose was to explore the utility of such an emulation/simulation combination in the design, development, and test of a piece of ARS hardware - SAWD. Extensions to this original effort are presented. The first extension is an update of the model to reflect changes in the SAWD control logic which resulted from the test. In addition, slight changes were also made to the SAWD model to permit restarting and to improve the iteration technique. The second extension is the development of simulation models for more pieces of air and water processing equipment. Models are presented for: EDC, Molecular Sieve, Bosch, Sabatier, a new condensing heat exchanger, SPE, SFWES, Catalytic Oxidizer, and multifiltration. The third extension is to create two system simulations using these models. The first system presented consists of one air and one water processing system, the second a potential Space Station air revitalization system.
NASA Technical Reports Server (NTRS)
Neiner, G. H.; Cole, G. L.; Arpasi, D. J.
1972-01-01
Digital computer control of a mixed-compression inlet is discussed. The inlet was terminated with a choked orifice at the compressor face station to dynamically simulate a turbojet engine. Inlet diffuser exit airflow disturbances were used. A digital version of a previously tested analog control system was used for both normal shock and restart control. Digital computer algorithms were derived using z-transform and finite difference methods. Using a sample rate of 1000 samples per second, the digital normal shock and restart controls essentially duplicated the inlet analog computer control results. At a sample rate of 100 samples per second, the control system performed adequately but was less stable.
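A digital control law derived by z-transform methods ultimately runs as a difference equation updated once per sample. The sketch below shows the idea for a PI structure with the backward-difference mapping s -> (1 - z^-1)/dt; the PI form, the gains, and the use of the 1000 samples-per-second rate (dt = 0.001 s) mentioned above are illustrative assumptions, not the paper's actual shock-control algorithm:

```python
def make_pi_controller(kp, ki, dt):
    """Discrete PI control law from the backward-difference mapping
    s -> (1 - z^-1)/dt, i.e. the difference equations
        integral[n] = integral[n-1] + ki * dt * e[n]
        u[n]        = kp * e[n] + integral[n]."""
    state = {"integral": 0.0}

    def step(error):
        state["integral"] += ki * dt * error   # accumulated (integrated) term
        return kp * error + state["integral"]  # proportional + integral output

    return step

# At 1000 samples per second (as in the inlet tests), dt = 0.001 s.
ctrl = make_pi_controller(kp=2.0, ki=1.0, dt=0.001)
```

Lowering the sample rate (e.g. to 100 samples per second, dt = 0.01 s) changes only `dt` here, but, as the abstract notes, it degrades the stability margin of the closed loop.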
A statistically defined anthropomorphic software breast phantom.
Lau, Beverly A; Reiser, Ingrid; Nishikawa, Robert M; Bakic, Predrag R
2012-06-01
Digital anthropomorphic breast phantoms have emerged in the past decade because of recent advances in 3D breast x-ray imaging techniques. Computer phantoms in the literature have incorporated power-law noise to represent glandular tissue and branching structures to represent linear components such as ducts. When power-law noise is added to those phantoms in one piece, the simulated fibroglandular tissue is distributed randomly throughout the breast, resulting in dense tissue placement that may not be observed in a real breast. The authors describe a method for enhancing an existing digital anthropomorphic breast phantom by adding binarized power-law noise to a limited area of the breast. Phantoms with (0.5 mm)³ voxel size were generated using software developed by Bakic et al. Between 0% and 40% of adipose compartments in each phantom were replaced with binarized power-law noise (β = 3.0) ranging from 0.1 to 0.6 volumetric glandular fraction. The phantoms were compressed to 7.5 cm thickness, then blurred using a 3 × 3 boxcar kernel and up-sampled to (0.1 mm)³ voxel size using trilinear interpolation. Following interpolation, the phantoms were adjusted for volumetric glandular fraction using global thresholding. Monoenergetic phantom projections were created, including quantum noise and simulated detector blur. Texture was quantified in the simulated projections using power-spectrum analysis to estimate the power-law exponent β from 25.6 × 25.6 mm² regions of interest. Phantoms were generated with total volumetric glandular fraction ranging from 3% to 24%. Values for β (averaged per projection view) were found to be between 2.67 and 3.73. Thus, the range of textures of the simulated breasts covers the textures observed in clinical images. Using these new techniques, digital anthropomorphic breast phantoms can be generated with a variety of glandular fractions and patterns. 
β values for this new phantom are comparable with published values for breast tissue in x-ray projection modalities. The combination of conspicuous linear structures and binarized power-law noise added to a limited area of the phantom qualitatively improves its realism. © 2012 American Association of Physicists in Medicine.
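The binarized power-law texture can be sketched as spectral synthesis: filter white noise so its power spectrum falls off as f^(-β), then threshold to a target glandular fraction. Array size, seed, and the thresholding rule are illustrative, not the authors' phantom-generation code:

```python
import numpy as np

def power_law_field(n, beta, rng):
    """2D random field with power spectrum ~ f^(-beta), i.e. Fourier
    amplitude ~ f^(-beta/2), synthesized by filtering white noise."""
    fy = np.fft.fftfreq(n)[:, None]
    fx = np.fft.fftfreq(n)[None, :]
    f = np.hypot(fy, fx)
    f[0, 0] = 1.0 / n                     # avoid division by zero at DC
    spectrum = np.fft.fft2(rng.standard_normal((n, n))) * f ** (-beta / 2.0)
    return np.fft.ifft2(spectrum).real

def binarize(field, glandular_fraction):
    """Threshold so the requested volume fraction of voxels is 'dense'."""
    return field > np.quantile(field, 1.0 - glandular_fraction)

mask = binarize(power_law_field(128, beta=3.0, rng=np.random.default_rng(0)), 0.3)
```

The resulting boolean mask plays the role of the binarized noise patch; in the phantom it would replace only selected adipose compartments rather than the whole volume.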
Extended law of corresponding states for protein solutions
NASA Astrophysics Data System (ADS)
Platten, Florian; Valadez-Pérez, Néstor E.; Castañeda-Priego, Ramón; Egelhaaf, Stefan U.
2015-05-01
The so-called extended law of corresponding states, as proposed by Noro and Frenkel [J. Chem. Phys. 113, 2941 (2000)], involves a mapping of the phase behaviors of systems with short-range attractive interactions. While it has already extensively been applied to various model potentials, here we test its applicability to protein solutions with their complex interactions. We successfully map their experimentally determined metastable gas-liquid binodals, as available in the literature, to the binodals of short-range square-well fluids, as determined by previous as well as new Monte Carlo simulations. This is achieved by representing the binodals as a function of the temperature scaled with the critical temperature (or as a function of the reduced second virial coefficient) and the concentration scaled by the cube of an effective particle diameter, where the scalings take into account the attractive and repulsive contributions to the interaction potential, respectively. The scaled binodals of the protein solutions coincide with simulation data of the adhesive hard-sphere fluid. Furthermore, once the repulsive contributions are taken into account by the effective particle diameter, the temperature dependence of the reduced second virial coefficients follows a master curve that corresponds to a linear temperature dependence of the depth of the square-well potential. We moreover demonstrate that, based on this approach and cloud-point measurements only, second virial coefficients can be estimated, which we show to agree with values determined by light scattering or by Derjaguin-Landau-Verwey-Overbeek (DLVO)-based calculations.
Extended law of corresponding states for protein solutions.
Platten, Florian; Valadez-Pérez, Néstor E; Castañeda-Priego, Ramón; Egelhaaf, Stefan U
2015-05-07
The so-called extended law of corresponding states, as proposed by Noro and Frenkel [J. Chem. Phys. 113, 2941 (2000)], involves a mapping of the phase behaviors of systems with short-range attractive interactions. While it has already extensively been applied to various model potentials, here we test its applicability to protein solutions with their complex interactions. We successfully map their experimentally determined metastable gas-liquid binodals, as available in the literature, to the binodals of short-range square-well fluids, as determined by previous as well as new Monte Carlo simulations. This is achieved by representing the binodals as a function of the temperature scaled with the critical temperature (or as a function of the reduced second virial coefficient) and the concentration scaled by the cube of an effective particle diameter, where the scalings take into account the attractive and repulsive contributions to the interaction potential, respectively. The scaled binodals of the protein solutions coincide with simulation data of the adhesive hard-sphere fluid. Furthermore, once the repulsive contributions are taken into account by the effective particle diameter, the temperature dependence of the reduced second virial coefficients follows a master curve that corresponds to a linear temperature dependence of the depth of the square-well potential. We moreover demonstrate that, based on this approach and cloud-point measurements only, second virial coefficients can be estimated, which we show to agree with values determined by light scattering or by Derjaguin-Landau-Verwey-Overbeek (DLVO)-based calculations.
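The square-well reduced second virial coefficient at the heart of the Noro-Frenkel mapping has a closed form, which makes the mapping easy to sketch; the well range `lam` used below is an illustrative parameter choice:

```python
import numpy as np

def b2_star(eps_over_kT, lam):
    """Reduced second virial coefficient B2/B2_HS of a square-well fluid
    with range lam (in units of the hard-core diameter) and depth eps:
        B2* = 1 - (lam**3 - 1) * (exp(eps/kT) - 1)."""
    return 1.0 - (lam ** 3 - 1.0) * np.expm1(eps_over_kT)

def well_depth_for_b2(b2s, lam):
    """Invert the relation: the eps/kT that yields a target B2*.
    Applied to measured B2*(T), a linear result in 1/T would correspond to
    the linear temperature dependence of the well depth noted above."""
    return np.log1p((1.0 - b2s) / (lam ** 3 - 1.0))
```

A deeper attraction (larger eps/kT) drives B2* more negative, which is the direction of approach to the metastable gas-liquid binodal.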
International benchmarking of longitudinal train dynamics simulators: results
NASA Astrophysics Data System (ADS)
Wu, Qing; Spiryagin, Maksym; Cole, Colin; Chang, Chongyi; Guo, Gang; Sakalo, Alexey; Wei, Wei; Zhao, Xubao; Burgelman, Nico; Wiersma, Pier; Chollet, Hugues; Sebes, Michel; Shamdani, Amir; Melzi, Stefano; Cheli, Federico; di Gialleonardo, Egidio; Bosso, Nicola; Zampieri, Nicolò; Luo, Shihui; Wu, Honghua; Kaza, Guy-Léon
2018-03-01
This paper presents the results of the International Benchmarking of Longitudinal Train Dynamics Simulators which involved participation of nine simulators (TABLDSS, UM, CRE-LTS, TDEAS, PoliTo, TsDyn, CARS, BODYSIM and VOCO) from six countries. Longitudinal train dynamics results and computing time of four simulation cases are presented and compared. The results show that all simulators had basic agreement in simulations of locomotive forces, resistance forces and track gradients. The major differences among different simulators lie in the draft gear models. TABLDSS, UM, CRE-LTS, TDEAS, TsDyn and CARS had general agreement in terms of the in-train forces; minor differences exist as reflections of draft gear model variations. In-train force oscillations were observed in VOCO due to the introduction of wheel-rail contact. In-train force instabilities were sometimes observed in PoliTo and BODYSIM due to the velocity controlled transitional characteristics which could have generated unreasonable transitional stiffness. Regarding computing time per train operational second, the following list is in order of increasing computing speed: VOCO, TsDyn, PoliTO, CARS, BODYSIM, UM, TDEAS, CRE-LTS and TABLDSS (fastest); all simulators except VOCO, TsDyn and PoliTo achieved faster speeds than real-time simulations. Similarly, regarding computing time per integration step, the computing speeds in order are: CRE-LTS, VOCO, CARS, TsDyn, UM, TABLDSS and TDEAS (fastest).
Challenges in reducing the computational time of QSTS simulations for distribution system analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deboever, Jeremiah; Zhang, Xiaochen; Reno, Matthew J.
The rapid increase in penetration of distributed energy resources on the electric power distribution system has created a need for more comprehensive interconnection modelling and impact analysis. Unlike conventional scenario-based studies, quasi-static time-series (QSTS) simulations can realistically model time-dependent voltage controllers and the diversity of potential impacts that can occur at different times of year. However, to accurately model a distribution system with all its controllable devices, a yearlong simulation at 1-second resolution is often required, which could take conventional computers a computational time of 10 to 120 hours when an actual unbalanced distribution feeder is modeled. This computational burden is a clear limitation to the adoption of QSTS simulations in interconnection studies and for determining optimal control solutions for utility operations. Our ongoing research to improve the speed of QSTS simulation has revealed many unique aspects of distribution system modelling and sequential power flow analysis that make fast QSTS a very difficult problem to solve. In this report, the most relevant challenges in reducing the computational time of QSTS simulations are presented: the number of power flows to solve, circuit complexity, time dependence between time steps, multiple valid power flow solutions, controllable element interactions, and extensive accurate simulation analysis.
Santander, Julian E; Tsapatsis, Michael; Auerbach, Scott M
2013-04-16
We have constructed and applied an algorithm to simulate the behavior of zeolite frameworks during liquid adsorption. We applied this approach to compute the adsorption isotherms of furfural-water and hydroxymethyl furfural (HMF)-water mixtures adsorbing in silicalite zeolite at 300 K for comparison with experimental data. We modeled these adsorption processes under two different statistical mechanical ensembles: the grand canonical (V-Nz-μg-T or GC) ensemble keeping volume fixed, and the P-Nz-μg-T (osmotic) ensemble allowing volume to fluctuate. To optimize accuracy and efficiency, we compared pure Monte Carlo (MC) sampling to hybrid MC-molecular dynamics (MD) simulations. For the external furfural-water and HMF-water phases, we assumed the ideal solution approximation and employed a combination of tabulated data and extended ensemble simulations for computing solvation free energies. We found that MC sampling in the V-Nz-μg-T ensemble (i.e., standard GCMC) does a poor job of reproducing both the Henry's law regime and the saturation loadings of these systems. Hybrid MC-MD sampling of the V-Nz-μg-T ensemble, which includes framework vibrations at fixed total volume, provides better results in the Henry's law region, but this approach still does not reproduce experimental saturation loadings. Pure MC sampling of the osmotic ensemble was found to approach experimental saturation loadings more closely, whereas hybrid MC-MD sampling of the osmotic ensemble quantitatively reproduces such loadings because the MC-MD approach naturally allows for locally anisotropic volume changes wherein some pores expand whereas others contract.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Radhakrishnan, B.; Eisenbach, M.; Burress, Timothy A.
2017-01-24
A new scaling approach has been proposed for the spin exchange and the dipole–dipole interaction energy as a function of the system size. The computed scaling laws are used in atomistic Monte Carlo simulations of magnetic moment evolution to predict the transition from a single domain to a vortex structure as the system size increases. The width of a 180° domain wall extracted from the simulated structures is in close agreement with experimentally measured values for an Fe–Si alloy. In conclusion, the transition size from a single domain to a vortex structure is also in close agreement with theoretically predicted and experimentally measured values for Fe.
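A scaling law of this kind is typically extracted as the slope of a least-squares fit on log-log axes. The sketch below uses purely synthetic size/energy data; the prefactor and exponent are illustrative, not values from the paper:

```python
import numpy as np

def fit_power_law(sizes, values):
    """Least-squares fit of values ~ A * sizes**alpha on log-log axes;
    returns (A, alpha)."""
    alpha, log_a = np.polyfit(np.log(sizes), np.log(values), 1)
    return np.exp(log_a), alpha

# Synthetic, exactly scale-free data: E(N) = 2 * N**1.5 (illustrative only).
sizes = np.array([8.0, 16.0, 32.0, 64.0, 128.0])
energies = 2.0 * sizes ** 1.5
a, alpha = fit_power_law(sizes, energies)
```

The same fit applied to computed interaction energies over a range of system sizes yields the exponent that can then be reused inside the Monte Carlo simulation.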
Electrohydrodynamic coalescence of droplets using an embedded potential flow model
NASA Astrophysics Data System (ADS)
Garzon, M.; Gray, L. J.; Sethian, J. A.
2018-03-01
The coalescence, and subsequent satellite formation, of two inviscid droplets is studied numerically. The initial drops are taken to be of equal and different sizes, and simulations have been carried out with and without the presence of an electrical field. The main computational challenge is the tracking of a free surface that changes topology. Coupling level set and boundary integral methods with an embedded potential flow model, we seamlessly compute through these singular events. As a consequence, the various coalescence modes that appear depending upon the relative ratio of the parent droplets can be studied. Computations of first stage pinch-off, second stage pinch-off, and complete engulfment are analyzed and compared to recent numerical studies and laboratory experiments. Specifically, we study the evolution of bridge radii and the related scaling laws, the minimum drop radii evolution from coalescence to satellite pinch-off, satellite sizes, and the upward stretching of the near cylindrical protrusion at the droplet top. Clear evidence of partial coalescence self-similarity is presented for parent droplet ratios between 1.66 and 4. This has been possible due to the fact that computational initial conditions only depend upon the mother droplet size, in contrast with laboratory experiments where the difficulty in establishing the same initial physical configuration is well known. The presence of electric forces changes the coalescence patterns, and it is possible to control the satellite droplet size by tuning the electrical field intensity. All of the numerical results are in very good agreement with recent laboratory experiments for water droplet coalescence.
Methods for High-Order Multi-Scale and Stochastic Problems Analysis, Algorithms, and Applications
2016-10-17
Research topics include: finite volume schemes, discontinuous Galerkin finite element methods, and related methods for solving computational fluid dynamics (CFD) problems; … approximation for finite element methods; (3) the development of methods of simulation and analysis for the study of large-scale stochastic systems. Keywords: … laws, finite element method, Bernstein-Bezier finite elements, weakly interacting particle systems, accelerated Monte Carlo, stochastic networks.
Spatial Evaluation and Verification of Earthquake Simulators
NASA Astrophysics Data System (ADS)
Wilson, John Max; Yoder, Mark R.; Rundle, John B.; Turcotte, Donald L.; Schultz, Kasey W.
2017-06-01
In this paper, we address the problem of verifying earthquake simulators with observed data. Earthquake simulators are a class of computational simulations which attempt to mirror the topological complexity of fault systems on which earthquakes occur. In addition, the physics of friction and elastic interactions between fault elements are included in these simulations. Simulation parameters are adjusted so that natural earthquake sequences are matched in their scaling properties. Physically based earthquake simulators can generate many thousands of years of simulated seismicity, allowing for a robust capture of the statistical properties of large, damaging earthquakes that have long recurrence time scales. Verification of simulations against currently observed earthquake seismicity is necessary, and following past simulator and forecast model verification methods, we address the challenges of applying spatial forecast verification to simulators; namely, that simulator outputs are confined to the modeled faults, while observed earthquake epicenters often occur off of known faults. We present two methods for addressing this discrepancy: a simplistic approach whereby observed earthquakes are shifted to the nearest fault element, and a smoothing method based on the power laws of the epidemic-type aftershock sequence (ETAS) model, which distributes the seismicity of each simulated earthquake over the entire test region at a rate that decays with epicentral distance. To test these methods, a receiver operating characteristic plot was produced by comparing the rate maps to observed m>6.0 earthquakes in California since 1980. We found that the nearest-neighbor mapping produced poor forecasts, while the ETAS power-law method produced rate maps that agreed reasonably well with observations.
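The power-law smoothing step can be sketched as follows: each simulated event deposits a unit of rate over the whole test region through a kernel that decays with epicentral distance. The kernel form (r² + d²)^(-q) and the values of d and q are illustrative assumptions, not the paper's calibrated ETAS parameters:

```python
import numpy as np

def smoothed_rate_map(epicenters, xs, ys, d=5.0, q=1.5):
    """Distribute each simulated event over the test region with a
    power-law kernel (r^2 + d^2)^(-q); the small length d regularizes the
    kernel at zero distance. Each event contributes unit total rate."""
    X, Y = np.meshgrid(xs, ys)
    rate = np.zeros_like(X, dtype=float)
    for ex, ey in epicenters:
        k = ((X - ex) ** 2 + (Y - ey) ** 2 + d * d) ** (-q)
        rate += k / k.sum()               # normalize per event
    return rate
```

Because the kernel is strictly positive everywhere, the resulting rate map assigns nonzero probability to off-fault cells, which is what allows comparison against observed epicenters that fall off the modeled faults.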
NASA Technical Reports Server (NTRS)
Liechty, Derek S.
2014-01-01
The ability to compute rarefied, ionized hypersonic flows is becoming more important as missions such as Earth reentry, landing high mass payloads on Mars, and the exploration of the outer planets and their satellites are being considered. Recently introduced molecular-level chemistry models that predict equilibrium and nonequilibrium reaction rates using only kinetic theory and fundamental molecular properties are extended in the current work to include electronic energy level transitions and reactions involving charged particles. These extensions are shown to agree favorably with reported transition and reaction rates from the literature for near-equilibrium conditions. Also, the extensions are applied to the second flight of the Project FIRE flight experiment at 1634 seconds with a Knudsen number of 0.001 at an altitude of 76.4 km. In order to accomplish this, NASA's direct simulation Monte Carlo code DAC was rewritten to include the ability to simulate charge-neutral ionized flows, take advantage of the recently introduced chemistry model, and to include the extensions presented in this work. The 1634 second data point was chosen for comparisons to be made in order to include a CFD solution. The Knudsen number at this point in time is such that the DSMC simulations are still tractable and the CFD computations are at the edge of what is considered valid because, although near-transitional, the flow is still considered to be continuum. It is shown that the inclusion of electronic energy levels in the DSMC simulation is necessary for flows of this nature and is required for comparison to the CFD solution. The flow field solutions are also post-processed by the nonequilibrium radiation code HARA to compute the radiative portion.
Generalized Arcsine Laws for Fractional Brownian Motion.
Sadhu, Tridib; Delorme, Mathieu; Wiese, Kay Jörg
2018-01-26
The three arcsine laws for Brownian motion are a cornerstone of extreme-value statistics. For a Brownian motion B_t starting from the origin and evolving during time T, one considers the following three observables: (i) the duration t_+ for which the process is positive, (ii) the time t_last at which the process last visits the origin, and (iii) the time t_max at which it achieves its maximum (or minimum). All three observables have the same cumulative probability distribution, expressed as an arcsine function, hence the name arcsine laws. We show how these laws change for fractional Brownian motion X_t, a non-Markovian Gaussian process indexed by the Hurst exponent H, which generalizes standard Brownian motion (H = 1/2). We obtain the three probabilities using a perturbative expansion in ϵ = H - 1/2. While all three probabilities are different, this distinction can only be made at second order in ϵ. Our results are confirmed to high precision by extensive numerical simulations.
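For the standard Brownian case H = 1/2, the first arcsine law is easy to check numerically by simulating random walks; the path and step counts below are illustrative:

```python
import numpy as np

def positive_time_fractions(n_paths, n_steps, rng):
    """Fraction of time each discrete random walk spends above zero.
    For Brownian motion this fraction t_+/T follows the arcsine law
        P(t_+/T <= x) = (2/pi) * arcsin(sqrt(x))."""
    paths = np.cumsum(rng.standard_normal((n_paths, n_steps)), axis=1)
    return (paths > 0).mean(axis=1)

rng = np.random.default_rng(1)
frac = positive_time_fractions(4000, 2000, rng)
# The arcsine density is U-shaped: fractions near 0 or 1 are the most
# likely, i.e. a typical path spends most of its time on one side.
```

For H != 1/2 one would instead generate correlated Gaussian increments (fractional Brownian motion) and compare against the perturbative corrections derived in the paper.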
Time-domain damping models in structural acoustics using digital filtering
NASA Astrophysics Data System (ADS)
Parret-Fréaud, Augustin; Cotté, Benjamin; Chaigne, Antoine
2016-02-01
This paper describes a new approach for formulating well-posed time-domain damping models able to represent various frequency-domain profiles of damping properties. The novelty of this approach is to represent the behavior law of a given material directly in a discrete-time framework as a digital filter, which is synthesized for each material from a discrete set of frequency-domain data, such as the complex modulus, through an optimization process. A key point is the addition of specific constraints to this process in order to guarantee stability, causality and satisfaction of the second law of thermodynamics when transposing the resulting discrete-time behavior law into the time domain. Thus, this method offers a framework which is particularly suitable for time-domain simulations in structural dynamics and acoustics for a wide range of materials (polymers, wood, foam, etc.), making it possible to control and even reduce the distortion effects induced by time-discretization schemes on the frequency response of continuous-time behavior laws.
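The synthesis step can be sketched as a Levy-style linear least-squares fit of a digital filter to complex-modulus samples, followed by the pole check that enforces stability. The first-order filter structure and the sample data are illustrative assumptions; the paper's optimization additionally constrains causality and the second law of thermodynamics:

```python
import numpy as np

def fit_first_order(omega, H):
    """Fit H(z) = (b0 + b1*z^-1) / (1 + a1*z^-1) to samples H(e^{i*omega})
    by linearizing  b0 + b1*z1 - H*a1*z1 = H  (Levy's method) and solving
    the stacked real/imaginary least-squares problem."""
    z1 = np.exp(-1j * omega)
    A = np.column_stack([np.ones_like(z1), z1, -H * z1])
    coef, *_ = np.linalg.lstsq(np.vstack([A.real, A.imag]),
                               np.concatenate([H.real, H.imag]), rcond=None)
    b0, b1, a1 = coef
    return b0, b1, a1

def is_stable(a1):
    """The single pole of 1/(1 + a1*z^-1) sits at z = -a1; stability
    requires it to lie inside the unit circle."""
    return abs(a1) < 1.0
```

In practice the fit would be run over measured complex-modulus data for each material, with the stability check acting as one of the admissibility constraints on the optimized filter.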
NASA Astrophysics Data System (ADS)
Tomaro, Robert F.
1998-07-01
The present research is aimed at developing a higher-order, spatially accurate scheme for both steady and unsteady flow simulations using unstructured meshes. The resulting scheme must work on a variety of general problems to ensure the creation of a flexible, reliable and accurate aerodynamic analysis tool. To calculate the flow around complex configurations, unstructured grids and the associated flow solvers have been developed. Efficient simulations require the minimum use of computer memory and computational time. Unstructured flow solvers typically require more computer memory than structured flow solvers due to the indirect addressing of the cells. The approach taken in the present research was to modify an existing three-dimensional unstructured flow solver, first to decrease the computational time required for a solution and then to increase the spatial accuracy. The terms required to simulate flow involving non-stationary grids were also implemented. First, an implicit solution algorithm was implemented to replace the existing explicit procedure. Several test cases, including internal and external, inviscid and viscous, two-dimensional, three-dimensional and axisymmetric problems, were simulated for comparison between the explicit and implicit solution procedures. The increased efficiency and robustness of the modified code due to the implicit algorithm were demonstrated. Two unsteady test cases, a plunging airfoil and a wing undergoing bending and torsion, were simulated using the implicit algorithm modified to include the terms required for a moving and/or deforming grid. Secondly, a higher than second-order spatially accurate scheme was developed and implemented into the baseline code. Third- and fourth-order spatially accurate schemes were implemented and tested. The original dissipation was modified to include higher-order terms and adapted near shock waves to limit pre- and post-shock oscillations. 
The unsteady cases were repeated using the higher-order spatially accurate code. The new solutions were compared with those obtained using the second-order spatially accurate scheme. Finally, the increased efficiency of using an implicit solution algorithm in a production Computational Fluid Dynamics flow solver was demonstrated for steady and unsteady flows. A third- and fourth-order spatially accurate scheme has been implemented creating a basis for a state-of-the-art aerodynamic analysis tool.
Nonlinear dynamic analysis of a rotor-bearing-seal system under two loading conditions
NASA Astrophysics Data System (ADS)
Ma, Hui; Li, Hui; Niu, Heqiang; Song, Rongze; Wen, Bangchun
2013-11-01
The operating speed of rotating machinery often exceeds the second or even higher order critical speeds in the pursuit of higher efficiency. Restraining the higher order mode instability caused by the nonlinear oil-film force and seal force at high speed has therefore become increasingly important. In this study, a lumped mass model of a rotor-bearing-seal system considering the gyroscopic effect is established. The graphite self-lubricating bearing and the sliding bearing are simulated by a spring-damping model and a nonlinear oil-film force model based on the assumption of short bearings, respectively. The seal is simulated by the Muszynska nonlinear seal force model. Effects of the seal force and oil-film force on the first and second mode instabilities are investigated under two loading conditions determined by API Standard 617 (Axial and Centrifugal Compressors and Expander-compressors for Petroleum, Chemical and Gas Industry Services, Seventh Edition). The research focuses on the effects of exciting force forms and their magnitudes on the first and second mode whips in a rotor-bearing-seal system by using spectrum cascades, vibration waveforms, orbits and Poincaré maps. The first and second mode instability laws are compared by including and excluding the seal effect in a rotor system with a single-diameter shaft and two identical discs. Meanwhile, the instability laws are also verified in a rotor system with a multi-diameter shaft and two different discs. The results show that the second loading condition (out-of-phase unbalances of the two discs) and the nonlinear seal force mainly restrain the first mode instability and have only slight effects on the second mode instability. This study may contribute to a further understanding of the higher order mode instability of such rotor systems with fluid-induced forces from the oil-film bearings and seals.
Simulation of Turbulent Combustion Fields of Shock-Dispersed Aluminum Using the AMR Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuhl, A L; Bell, J B; Beckner, V E
2006-11-02
We present a model for simulating experiments of combustion in Shock-Dispersed-Fuel (SDF) explosions. The SDF charge consisted of a 0.5-g spherical PETN booster, surrounded by 1-g of fuel powder (flake Aluminum). Detonation of the booster charge creates a high-temperature, high-pressure source (PETN detonation products gases) that both disperses the fuel and heats it. Combustion ensues when the fuel mixes with air. The gas phase is governed by the gas-dynamic conservation laws, while the particle phase obeys the continuum mechanics laws for heterogeneous media. The two phases exchange mass, momentum and energy according to inter-phase interaction terms. The kinetics model used an empirical particle burn relation. The thermodynamic model considers the air, fuel and booster products to be of frozen composition, while the Al combustion products are assumed to be in equilibrium. The thermodynamic states were calculated by the Cheetah code; resulting state points were fit with analytic functions suitable for numerical simulations. Numerical simulations of combustion of an Aluminum SDF charge in a 6.4-liter chamber were performed. Computed pressure histories agree with measurements.
Mathematical modelling and simulation of a tennis racket.
Brannigan, M; Adali, S
1981-01-01
By constructing a mathematical model, we consider the dynamics of a tennis racket hit by a ball. Using this model, known experimental results can be simulated on the computer, and it becomes possible to make a parametric study of a racket. Such a simulation is essential in the study of two important problems related to tennis: computation of the resulting forces and moments transferred to the hand should assist understanding of the medical problem 'tennis elbow'; secondly, simulation will enable a study to be made of the relationships between the impact time, tension in the strings, forces transmitted to the rim and return velocity of the ball, all of which can lead to the optimal design of rackets.
A high precision extrapolation method in multiphase-field model for simulating dendrite growth
NASA Astrophysics Data System (ADS)
Yang, Cong; Xu, Qingyan; Liu, Baicheng
2018-05-01
The phase-field method coupled with thermodynamic data has become a trend for predicting the microstructure formation in technical alloys. Nevertheless, the frequent access to thermodynamic databases and calculation of local equilibrium conditions can be time intensive. The extrapolation methods, which are derived from Taylor expansions, can provide approximate results with a high computational efficiency, and have been proven successful in applications. This paper presents a high precision second order extrapolation method for calculating the driving force in phase transformation. To obtain the phase compositions, different methods of solving the quasi-equilibrium condition are tested, and the M-slope approach is chosen for its best accuracy. The developed second order extrapolation method along with the M-slope approach and the first order extrapolation method are applied to simulate dendrite growth in a Ni-Al-Cr ternary alloy. The results of the extrapolation methods are compared with the exact solution with respect to the composition profile and dendrite tip position, which demonstrates the high precision and efficiency of the newly developed algorithm. To accelerate the phase-field and extrapolation computation, a graphics processing unit (GPU) based parallel computing scheme is developed. The application to large-scale simulation of multi-dendrite growth in an isothermal cross-section has demonstrated the ability of the developed GPU-accelerated second order extrapolation approach for the multiphase-field model.
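The second-order extrapolation idea described in this abstract can be sketched in a few lines: replace an expensive thermodynamic evaluation by a Taylor expansion around a reference composition. The free-energy function below is a hypothetical regular-solution stand-in for a database call, not the paper's Ni-Al-Cr thermodynamics, and the step sizes are illustrative assumptions.

```python
import math

def gibbs(c):
    # Hypothetical regular-solution free energy standing in for a
    # thermodynamic-database evaluation (assumption for illustration only)
    return c * math.log(c) + (1 - c) * math.log(1 - c) + 2.0 * c * (1 - c)

def dG(c, h=1e-6):
    # First derivative by central difference
    return (gibbs(c + h) - gibbs(c - h)) / (2 * h)

def d2G(c, h=1e-4):
    # Second derivative by central difference
    return (gibbs(c + h) - 2 * gibbs(c) + gibbs(c - h)) / (h * h)

c0, dc = 0.30, 0.02
exact = gibbs(c0 + dc)                              # fresh "database" evaluation
first = gibbs(c0) + dG(c0) * dc                     # first-order extrapolation
second = first + 0.5 * d2G(c0) * dc * dc            # second-order extrapolation
err_first, err_second = abs(first - exact), abs(second - exact)
```

The second-order estimate should land much closer to the direct evaluation than the first-order one, which is the accuracy gain the abstract describes exploiting in GPU-parallel simulations.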
Fukui, Atsuko; Fujii, Ryuta; Yonezawa, Yorinobu; Sunada, Hisakazu
2004-03-01
The release properties of phenylpropanolamine hydrochloride (PPA) from ethylcellulose (EC) matrix granules prepared by an extrusion granulation method were examined. The release process could be divided into two parts; the first and second stages were analyzed by applying square-root time law and cube-root law equations, respectively. The validity of the treatments was confirmed by the fit of a simulation curve to the measured curve. In the first stage, PPA was released from the gel layer of swollen EC in the matrix granules. In the second stage, the drug existing below the gel layer dissolved and was released through the gel layer. The effect of the binder solution on the release from EC matrix granules was also examined. The binder solutions were prepared from various EC and ethanol (EtOH) concentrations. The media changed from a good solvent to a poor solvent with decreasing EtOH concentration. The matrix structure changed from loose to compact with increasing EC concentration. A preferable EtOH concentration region was identified in which the release process was easily predictable. The time and release ratio at the connection point of the simulation curves were also examined to determine the validity of the analysis.
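The two-stage analysis described here can be sketched numerically: a square-root-of-time (Higuchi-type) first stage joined continuously, at a connection point, to a cube-root (Hixson-Crowell-type) second stage. The rate constants and switch time below are illustrative assumptions, not the paper's fitted values.

```python
import math

def release_fraction(t, t_switch=4.0, k1=0.20, k3=0.02, w0=1.0):
    """Two-stage release profile (illustrative constants, not the paper's fits).

    Stage 1 (t <= t_switch): square-root time law, Q(t) = k1 * sqrt(t).
    Stage 2 (t > t_switch): cube-root law for the drug remaining below the
    gel layer, matched to stage 1 at the connection point so the simulated
    curve is continuous.
    """
    if t <= t_switch:
        return k1 * math.sqrt(t)
    q_sw = k1 * math.sqrt(t_switch)        # fraction released at the switch
    w_sw = (w0 - q_sw) ** (1.0 / 3.0)      # cube root of the remaining drug
    w = max(w_sw - k3 * (t - t_switch), 0.0)
    return w0 - w ** 3

profile = [release_fraction(t) for t in range(0, 25, 4)]
```

The connection point (time and release ratio) is exactly the quantity the abstract examines to validate the two-part analysis.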
Realization of planning design of mechanical manufacturing system by Petri net simulation model
NASA Astrophysics Data System (ADS)
Wu, Yanfang; Wan, Xin; Shi, Weixiang
1991-09-01
Planning design works out a comprehensive, long-term plan. In order to guarantee that a mechanical manufacturing system (MMS) is designed to obtain maximum economic benefit, it is necessary to carry out a reasonable planning design for the system. First, some principles of planning design for MMS are introduced. Problems of production scheduling and their decision rules for computer simulation are presented, and methods for realizing each production scheduling decision rule in a Petri net model are discussed. Second, rules for resolving conflicts that arise while running the Petri net are given. Third, based on the Petri net model of the MMS, which includes part flow and tool flow, and according to the principle of minimum event time advance, a computer dynamic simulation of the Petri net model, that is, a computer dynamic simulation of the MMS, is realized. Finally, the simulation program is applied to an example, so that a planning design scheme for the MMS can be evaluated effectively.
Dissociative diffusion mechanism in vacancy-rich materials according to mass action kinetics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biderman, N. J.; Sundaramoorthy, R.; Haldar, Pradeep
2016-05-13
We conducted two sets of diffusion-reaction numerical simulations using a finite difference method (FDM) in order to investigate fast impurity diffusion via interstitial sites in vacancy-rich materials such as Cu(In,Ga)Se 2 (CIGS) and Cu 2ZnSn(S, Se) 4 (CZTSSe or CZTS) via the dissociative diffusion mechanism, where the interstitial diffuser ultimately reacts with a vacancy to produce a substitutional. The first set of simulations extends the standard interstitial-limited dissociative diffusion theory to vacancy-rich material conditions where vacancies are annihilated in large amounts, introducing non-equilibrium vacancy concentration profiles. The second simulation set explores the vacancy-limited dissociative diffusion where impurity incorporation increases the equilibrium vacancy concentration. In addition to diffusion profiles of varying concentrations and shapes that were obtained in all simulations, some of the profiles can be fitted with the constant- and limited-source solutions of Fick’s second law despite the non-equilibrium condition induced by the interstitial-vacancy reaction. The first set of simulations reveals that the dissociative diffusion coefficient in vacancy-rich materials is inversely proportional to the initial vacancy concentration. In the second set of numerical simulations, impurity-induced changes in the vacancy concentration lead to distinctive diffusion profile shapes. The simulation results are also compared with published data of impurity diffusion in CIGS. According to the characteristic properties of the diffusion profiles from the two sets of simulations, experimental detection of the dissociative diffusion mechanism in vacancy-rich materials may be possible.
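The diffusion core of such finite-difference simulations can be sketched as follows: an explicit FDM solution of Fick's second law with a constant-source boundary, checked against the analytic erfc profile the abstract mentions fitting. The interstitial-vacancy reaction terms are omitted and all parameters are illustrative assumptions.

```python
import math

# Explicit finite-difference solution of Fick's second law in 1-D,
#   dc/dt = D * d2c/dx2,
# with a constant-source boundary c(0, t) = c_s, compared against the
# analytic profile c(x, t) = c_s * erfc(x / (2 * sqrt(D * t))).
D, cs = 1.0e-2, 1.0
nx, dx = 201, 0.05
dt = 0.4 * dx * dx / D        # keeps r = D*dt/dx^2 below the 0.5 stability limit
steps = 2000

c = [0.0] * nx
c[0] = cs                     # constant source at the surface
for _ in range(steps):
    r = D * dt / (dx * dx)
    new = c[:]
    for i in range(1, nx - 1):
        new[i] = c[i] + r * (c[i + 1] - 2 * c[i] + c[i - 1])
    new[0] = cs
    c = new

t = steps * dt
x_probe = 20                  # grid index well inside the diffused region
exact = cs * math.erfc(x_probe * dx / (2.0 * math.sqrt(D * t)))
error = abs(c[x_probe] - exact)
```

With the reaction terms added, the profiles deviate from this erfc shape, which is the diagnostic the paper proposes for detecting the dissociative mechanism.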
Towards real-time photon Monte Carlo dose calculation in the cloud
NASA Astrophysics Data System (ADS)
Ziegenhein, Peter; Kozin, Igor N.; Kamerling, Cornelis Ph; Oelfke, Uwe
2017-06-01
Near real-time application of Monte Carlo (MC) dose calculation in clinic and research is hindered by the long computational runtimes of established software. Currently, fast MC software solutions are available utilising accelerators such as graphical processing units (GPUs) or clusters based on central processing units (CPUs). Both platforms are expensive in terms of purchase costs and maintenance and, in the case of the GPU, offer only limited scalability. In this work we propose a cloud-based MC solution, which offers high scalability of accurate photon dose calculations. The MC simulations run on a private virtual supercomputer that is formed in the cloud. Computational resources can be provisioned dynamically at low cost without upfront investment in expensive hardware. A client-server software solution has been developed which controls the simulations and transports data to and from the cloud efficiently and securely. The client application integrates seamlessly into a treatment planning system. It runs the MC simulation workflow automatically and securely exchanges simulation data with the server side application that controls the virtual supercomputer. Advanced encryption standards were used to add an additional security layer, which encrypts and decrypts patient data on-the-fly at the processor register level. We show that our cloud-based MC framework enables near real-time dose computation. It delivers excellent linear scaling for high-resolution datasets, with absolute runtimes of 1.1 to 10.9 seconds for simulating clinical prostate and liver cases to 1% statistical uncertainty. The computation runtimes include the transportation of data to and from the cloud as well as process scheduling and synchronisation overhead. Cloud-based MC simulations offer a fast, affordable and easily accessible alternative for near real-time accurate dose calculations to the currently used GPU or cluster solutions.
Computational Science: A Research Methodology for the 21st Century
NASA Astrophysics Data System (ADS)
Orbach, Raymond L.
2004-03-01
Computational simulation - a means of scientific discovery that employs computer systems to simulate a physical system according to laws derived from theory and experiment - has attained peer status with theory and experiment. Important advances in basic science are accomplished by a new "sociology" for ultrascale scientific computing capability (USSCC), a fusion of sustained advances in scientific models, mathematical algorithms, computer architecture, and scientific software engineering. Expansion of current capabilities by factors of 100 - 1000 opens up new vistas for scientific discovery: long term climatic variability and change, macroscopic material design from correlated behavior at the nanoscale, design and optimization of magnetic confinement fusion reactors, strong interactions on a computational lattice through quantum chromodynamics, and stellar explosions and element production. The "virtual prototype," made possible by this expansion, can markedly reduce time-to-market for industrial applications such as jet engines and safer, more fuel-efficient, cleaner cars. In order to develop USSCC, the National Energy Research Scientific Computing Center (NERSC) announced the competition "Innovative and Novel Computational Impact on Theory and Experiment" (INCITE), with no requirement for current DOE sponsorship. Fifty-nine proposals for grand challenge scientific problems were submitted for a small number of awards. The successful grants, and their preliminary progress, will be described.
NASA Technical Reports Server (NTRS)
DiSalvo, Roberto; Deaconu, Stelu; Majumdar, Alok
2006-01-01
One of the goals of this program was to develop the experimental and analytical/computational tools required to predict the flow of non-Newtonian fluids through the various components of a propulsion system: pipes, valves, pumps, etc. To achieve this goal we chose to augment the capabilities of NASA's Generalized Fluid System Simulation Program (GFSSP) software. GFSSP is a general-purpose computer program designed to calculate steady state and transient pressure and flow distributions in a complex fluid network. While the current version of the GFSSP code is able to handle various system components, the implicit assumption in the code is that the fluids in the system are Newtonian. To extend the capability of the code to non-Newtonian fluids, such as silica gelled fuels and oxidizers, modifications to the momentum equations of the code have been performed. We have successfully implemented in GFSSP flow equations for fluids with power law behavior. The implementation of the power law fluid behavior into the GFSSP code depends on knowledge of the two fluid coefficients, n and K. The determination of these parameters for the silica gels used in this program was performed experimentally. The n and K parameters for silica water gels were determined experimentally at CFDRC's Special Projects Laboratory, with a constant shear rate capillary viscometer. Batches of 8:1 (by weight) water-silica gel were mixed using CFDRC's 10-gallon gelled propellant mixer. Prior to testing, the gel was allowed to rest in the rheometer tank for at least twelve hours to ensure that the delicate structure of the gel had sufficient time to reform. During the tests silica gel was pressure fed and discharged through stainless steel pipes ranging from 1 in. to 36 in. in length, in three diameters: 0.0237 in., 0.032 in., and 0.047 in. The data collected in these tests included pressure at the tube entrance and volumetric flowrate.
From these data the uncorrected shear rate, shear stress, residence time, and viscosity were evaluated using formulae for non-Newtonian, power law fluids. The maximum shear rates (corrected for entrance effects) obtained in the rheometer with the current setup were in the 150,000 to 170,000 s^-1 range. GFSSP simulations were performed with a flow circuit simulating the capillary rheometer and using power law gel viscosity coefficients from the experimental data. The agreement between the experimental data and the simulated flow curves was within ±4%, given quality entrance-effect data.
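The rheometer reductions described above follow the standard capillary-viscometry formulas for power-law fluids. A sketch of those reductions follows, with illustrative numbers rather than the program's measured gel data:

```python
import math

def wall_shear(q, radius, dp, length, n):
    """Capillary-viscometer reductions for a power-law fluid (sketch).

    q      volumetric flow rate [m^3/s]
    radius capillary radius [m]
    dp     pressure drop over the capillary [Pa]
    length capillary length [m]
    n      power-law index (n < 1: shear thinning, as for silica gels)
    """
    gamma_apparent = 4.0 * q / (math.pi * radius ** 3)         # Newtonian-equivalent rate
    gamma_wall = (3.0 * n + 1.0) / (4.0 * n) * gamma_apparent  # Rabinowitsch correction
    tau_wall = dp * radius / (2.0 * length)                    # wall shear stress
    return gamma_wall, tau_wall

def consistency_index(gamma_wall, tau_wall, n):
    # K from the power-law constitutive relation tau = K * gamma^n
    return tau_wall / gamma_wall ** n

# Illustrative numbers only, not the measured gel data
gamma, tau = wall_shear(q=1.0e-6, radius=0.5e-3, dp=2.0e5, length=0.5, n=0.4)
K = consistency_index(gamma, tau, n=0.4)
viscosity = K * gamma ** (0.4 - 1.0)   # apparent viscosity at the wall
```

For n = 1 the Rabinowitsch factor reduces to 1 and the formulas collapse to the Newtonian case, which is a convenient sanity check on the reduction.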
A Digitally Programmable Cytomorphic Chip for Simulation of Arbitrary Biochemical Reaction Networks.
Woo, Sung Sik; Kim, Jaewook; Sarpeshkar, Rahul
2018-04-01
Prior work has shown that compact analog circuits can faithfully represent and model fundamental biomolecular circuits via efficient log-domain cytomorphic transistor equivalents. Such circuits have emphasized basis functions that are dominant in genetic transcription and translation networks and deoxyribonucleic acid (DNA)-protein binding. Here, we report a system featuring digitally programmable 0.35 μm BiCMOS analog cytomorphic chips that enable arbitrary biochemical reaction networks to be exactly represented thus enabling compact and easy composition of protein networks as well. Since all biomolecular networks can be represented as chemical reaction networks, our protein networks also include the former genetic network circuits as a special case. The cytomorphic analog protein circuits use one fundamental association-dissociation-degradation building-block circuit that can be configured digitally to exactly represent any zeroth-, first-, and second-order reaction including loading, dynamics, nonlinearity, and interactions with other building-block circuits. To address a divergence issue caused by random variations in chip fabrication processes, we propose a unique way of performing computation based on total variables and conservation laws, which we instantiate at both the circuit and network levels. Thus, scalable systems that operate with finite error over infinite time can be built. We show how the building-block circuits can be composed to form various network topologies, such as cascade, fan-out, fan-in, loop, dimerization, or arbitrary networks using total variables. We demonstrate results from a system that combines interacting cytomorphic chips to simulate a cancer pathway and a glycolysis pathway. Both simulations are consistent with conventional software simulations. 
Our highly parallel digitally programmable analog cytomorphic systems can lead to a useful design, analysis, and simulation tool for studying arbitrary large-scale biological networks in systems and synthetic biology.
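The association-dissociation-degradation building block and the total-variable idea described above can be sketched in software as a mass-action integration (a stand-in for the analog circuit; the rate constants are illustrative assumptions):

```python
# Minimal mass-action integration of an association-dissociation-degradation
# building block:
#   A + B -> C  (kf),   C -> A + B  (kr),   C -> 0  (kd)
# With kd = 0 the totals A+C and B+C are conserved: these are the kind of
# "total variables" and conservation laws used to keep long-running
# computations from diverging.

def simulate(a, b, c, kf, kr, kd, dt, steps):
    for _ in range(steps):
        assoc = kf * a * b
        dissoc = kr * c
        degrade = kd * c
        a += dt * (dissoc - assoc)
        b += dt * (dissoc - assoc)
        c += dt * (assoc - dissoc - degrade)
    return a, b, c

a, b, c = simulate(1.0, 0.8, 0.0, kf=2.0, kr=0.5, kd=0.0, dt=1e-3, steps=20000)
```

Composing many such blocks, with shared species coupling them, yields the cascade, fan-in/fan-out and loop topologies the abstract lists.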
A mathematical theorem as the basis for the second law: Thomson's formulation applied to equilibrium
NASA Astrophysics Data System (ADS)
Allahverdyan, A. E.; Nieuwenhuizen, Th. M.
2002-03-01
There are several formulations of the second law, and they may, in principle, have different domains of validity. Here a simple mathematical theorem is proven which serves as the most general basis for the second law, namely the Thomson formulation (“cyclic changes cost energy”), applied to equilibrium. This formulation of the second law is a property akin to particle conservation (normalization of the wave function). It has been strictly proven for a canonical ensemble, and made plausible for a micro-canonical ensemble. As the derivation does not assume time-inversion invariance, it is applicable to situations where persistent currents occur. This clear-cut derivation allows one to revive the “no perpetuum mobile in equilibrium” formulation of the second law and to criticize some assumptions which are widespread in the literature. The result puts recent results devoted to foundations and limitations of the second law in proper perspective, and lends structure to this relatively new field of research.
Keil, Lorenz; Hartmann, Michael; Lanzmich, Simon; Braun, Dieter
2016-07-27
How can living matter arise from dead matter? All known living systems are built around information stored in RNA and DNA. To protect this information against molecular degradation and diffusion, the second law of thermodynamics imposes the need for a non-equilibrium driving force. Following a series of successful experiments using thermal gradients, we have shown that heat gradients across sub-millimetre pores can drive accumulation, replication, and selection of ever longer molecules, implementing all the necessary parts for Darwinian evolution. For these lab experiments to proceed with ample speed, however, the temperature gradients have to be quite steep, reaching up to 30 K per 100 μm. Here we use computer simulations based on experimental data to show that 2000-fold shallower temperature gradients - down to 100 K over one metre - can still drive the accumulation of protobiomolecules. This finding opens the door for various environments to potentially host the origins of life: volcanic, water-vapour, or hydrothermal settings. Following the trajectories of single molecules in simulation, we also find that they are subjected to frequent temperature oscillations inside these pores, facilitating e.g. template-directed replication mechanisms. The tilting of the pore configuration is the central strategy to achieve replication in a shallow temperature gradient. Our results suggest that shallow thermal gradients across porous rocks could have facilitated the formation of evolutionary machines, significantly increasing the number of potential sites for the origin of life on young rocky planets.
ERIC Educational Resources Information Center
Ellis, Nick C.
2009-01-01
This article presents an analysis of interactions in the usage, structure, cognition, coadaptation of conversational partners, and emergence of linguistic constructions. It focuses on second language development of English verb-argument constructions (VACs: VL, verb locative; VOL, verb object locative; VOO, ditransitive) with particular reference…
NASA Technical Reports Server (NTRS)
Boland, J. S., III
1973-01-01
The conventional six-engine reaction control jet relay attitude control law with deadband is shown to be a good linear approximation to a weighted time-fuel optimal control law. Techniques for evaluating the relative weighting between time and fuel for a particular relay control law are studied, along with techniques to interrelate other parameters of the two control laws. Vehicle attitude control laws employing control moment gyros are then investigated. Steering laws obtained from the expression for the reaction torque of the gyro configuration are compared to a total optimal attitude control law that is derived from optimal linear regulator theory. This total optimal attitude control law has computational disadvantages in solving the matrix Riccati equation. Several computational algorithms for solving the matrix Riccati equation are investigated with respect to accuracy, computational storage requirements, and computational speed.
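As a toy illustration of the Riccati-solving trade-offs mentioned above, here is the scalar case, where a closed-form root can be checked against a simple pseudo-time iteration. This is a sketch only; the algorithms compared in the study operate on full matrices.

```python
import math

# Scalar algebraic Riccati equation from the optimal linear regulator:
#   2*a*P - (b*b/r)*P*P + q = 0
# solved two ways: closed form via the quadratic formula, and pseudo-time
# integration of the Riccati ODE until it reaches steady state.

a, b, q, r = -1.0, 1.0, 1.0, 1.0

# Closed form: the stabilizing (positive) root of the quadratic
P_exact = r * (a + math.sqrt(a * a + q * b * b / r)) / (b * b)

# Iterative: march dP/ds = 2aP - (b^2/r)P^2 + q from P = 0 to steady state
P, ds = 0.0, 1.0e-3
for _ in range(20000):
    P += ds * (2.0 * a * P - (b * b / r) * P * P + q)

gain = b * P / r   # optimal state-feedback gain, u = -gain * x
```

In the matrix case the same trade-off appears between direct (eigenvector or Schur-based) solutions and iterative integration, with accuracy, storage and speed as the comparison axes.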
Three-dimensional time dependent computation of turbulent flow
NASA Technical Reports Server (NTRS)
Kwak, D.; Reynolds, W. C.; Ferziger, J. H.
1975-01-01
The three-dimensional, primitive equations of motion are solved numerically for the case of isotropic box turbulence and the distortion of homogeneous turbulence by irrotational plane strain at large Reynolds numbers. A Gaussian filter is applied to the governing equations to define the large scale field. This gives rise to additional second order computed scale stresses (Leonard stresses). The residual stresses are simulated through an eddy viscosity. Uniform grids are used, with a fourth order differencing scheme in space and a second order Adams-Bashforth predictor for explicit time stepping. The results are compared to the experiments and to statistical information extracted from the computer-generated data.
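The two discretizations named here, fourth-order central differencing in space and second-order Adams-Bashforth in time, can be sketched and order-checked on simple test functions (an illustration on model problems, not the turbulence code itself):

```python
import math

def d4(f, x, h):
    # Fourth-order central approximation to the first derivative
    return (-f(x + 2 * h) + 8 * f(x + h) - 8 * f(x - h) + f(x - 2 * h)) / (12.0 * h)

def ab2(f, y0, dt, steps):
    # Second-order Adams-Bashforth for y' = f(y), bootstrapped with one Euler step
    y_prev, y = y0, y0 + dt * f(y0)
    for _ in range(steps - 1):
        y_prev, y = y, y + dt * (1.5 * f(y) - 0.5 * f(y_prev))
    return y

# Spatial scheme: error should shrink ~16x when h is halved (4th order)
e1 = abs(d4(math.sin, 1.0, 0.1) - math.cos(1.0))
e2 = abs(d4(math.sin, 1.0, 0.05) - math.cos(1.0))

# Time scheme: error should shrink ~4x when dt is halved (2nd order)
decay = lambda y: -y
g1 = abs(ab2(decay, 1.0, 0.01, 100) - math.exp(-1.0))
g2 = abs(ab2(decay, 1.0, 0.005, 200) - math.exp(-1.0))
```

The observed error ratios (about 16 and about 4) confirm the formal orders of the two schemes.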
Quiet Clean Short-haul Experimental Engine (QCSEE) under-the-wing engine simulation report
NASA Technical Reports Server (NTRS)
1977-01-01
Hybrid computer simulations of the under-the-wing engine were constructed to develop the dynamic design of the controls. The engine and control system includes a variable pitch fan and a digital electronic control. Simulation results for throttle bursts from 62 to 100 percent net thrust predict that the engine will accelerate from 62 to 95 percent net thrust in one second.
Evaluating Implementations of Service Oriented Architecture for Sensor Network via Simulation
2011-04-01
Subject: COMPUTER SCIENCE. Approved: Boleslaw Szymanski, Thesis Adviser. Rensselaer Polytechnic Institute, Troy, New York, April 2011 (for graduation May 2011). The simulation supports distributed and centralized composition with a type hierarchy and multiple-service statically-located nodes in a 2-dimensional space. The second simulation
Coulomb interactions in charged fluids.
Vernizzi, Graziano; Guerrero-García, Guillermo Iván; de la Cruz, Monica Olvera
2011-07-01
The use of Ewald summation schemes for calculating long-range Coulomb interactions, originally applied to ionic crystalline solids, is a very common practice in molecular simulations of charged fluids at present. Such a choice imposes an artificial periodicity which is generally absent in the liquid state. In this paper we propose a simple analytical O(N^2) method which is based on Gauss's law for computing exactly the Coulomb interaction between charged particles in a simulation box, when it is averaged over all possible orientations of a surrounding infinite lattice. This method mitigates the periodicity typical of crystalline systems and it is suitable for numerical studies of ionic liquids, charged molecular fluids, and colloidal systems with Monte Carlo and molecular dynamics simulations.
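The O(N^2) cost quoted above is that of a direct double loop over charge pairs. A minimal sketch of that baseline in Gaussian units follows; the orientation-averaged lattice term that distinguishes the proposed method from plain pairwise summation is omitted here, as it is specific to the paper.

```python
import math
import random

def coulomb_energy(charges, positions):
    # Direct O(N^2) pairwise Coulomb sum, q_i * q_j / r_ij (Gaussian units)
    e, n = 0.0, len(charges)
    for i in range(n):
        for j in range(i + 1, n):
            e += charges[i] * charges[j] / math.dist(positions[i], positions[j])
    return e

random.seed(1)
n, box = 16, 10.0
charges = [1.0 if i % 2 == 0 else -1.0 for i in range(n)]   # net-neutral set
positions = [tuple(random.uniform(0.0, box) for _ in range(3)) for _ in range(n)]
energy = coulomb_energy(charges, positions)
```

In an Ewald scheme this double loop is replaced by real- and reciprocal-space sums over periodic images; the method above instead keeps the double loop and averages the image contribution over lattice orientations.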
NASA Technical Reports Server (NTRS)
Halyo, Nesim
1987-01-01
A combined stochastic feedforward and feedback control design methodology was developed. The objective of the feedforward control law is to track the commanded trajectory, whereas the feedback control law tries to maintain the plant state near the desired trajectory in the presence of disturbances and uncertainties about the plant. The feedforward control law design is formulated as a stochastic optimization problem and is embedded into the stochastic output feedback problem where the plant contains unstable and uncontrollable modes. An algorithm to compute the optimal feedforward is developed. In this approach, the use of error integral feedback, dynamic compensation, and control rate command structures is an integral part of the methodology. An incremental implementation is recommended. Results on the eigenvalues of the implemented versus designed control laws are presented. The stochastic feedforward/feedback control methodology is used to design a digital automatic landing system for the ATOPS Research Vehicle, a Boeing 737-100 aircraft. The system control modes include localizer and glideslope capture and track, and flare to touchdown. Results of a detailed nonlinear simulation of the digital control laws, actuator systems, and aircraft aerodynamics are presented.
Jarzynski equality: connections to thermodynamics and the second law.
Palmieri, Benoit; Ronis, David
2007-01-01
The one-dimensional expanding ideal gas model is used to compute the exact nonequilibrium distribution function. The state of the system during the expansion is defined in terms of local thermodynamic quantities. The final equilibrium free energy, obtained a long time after the expansion, is compared against the free energy that appears in the Jarzynski equality. Within this model, where the Jarzynski equality holds rigorously, the free energy change that appears in the equality does not equal the actual free energy change of the system at any time of the process. More generally, the work bound that is obtained from the Jarzynski equality is an upper bound to the upper bound that is obtained from the first and second laws of thermodynamics. The cancellation of the dissipative (nonequilibrium) terms that result in the Jarzynski equality is shown in the framework of response theory. This is used to show that the intuitive assumption that the Jarzynski work bound becomes equal to the average work done when the system evolves quasistatically is incorrect under some conditions.
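The work bound in the Jarzynski equality, ⟨W⟩ ≥ ΔF with ΔF = −(1/β) ln⟨e^(−βW)⟩, can be checked numerically for the textbook case of Gaussian-distributed work values. This is an illustrative assumption, not the paper's expanding-gas model; for Gaussian work, ΔF = ⟨W⟩ − βσ²/2 analytically:

```python
import math
import random

random.seed(0)
beta = 1.0
w_mean, w_sigma = 2.0, 1.0   # hypothetical Gaussian work distribution
works = [random.gauss(w_mean, w_sigma) for _ in range(200_000)]

# Jarzynski estimator: Delta F = -(1/beta) * ln <exp(-beta * W)>
exp_avg = sum(math.exp(-beta * w) for w in works) / len(works)
delta_f = -math.log(exp_avg) / beta

# For Gaussian work, Delta F = <W> - beta * sigma^2 / 2 analytically (1.5 here),
# and the second-law bound <W> >= Delta F must hold.
mean_work = sum(works) / len(works)
```

The gap ⟨W⟩ − ΔF = βσ²/2 is the dissipated work; the paper's point is that this ΔF need not coincide with the system's actual free energy change during the process.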
Characterization of a Recoverable Flight Control Computer System
NASA Technical Reports Server (NTRS)
Malekpour, Mahyar; Torres, Wilfredo
1999-01-01
The design and development of a Closed-Loop System to study and evaluate the performance of the Honeywell Recoverable Computer System (RCS) in electromagnetic environments (EME) is presented. The development of a Windows-based software package to handle the time-critical communication of data and commands between the RCS and flight simulation code in real-time while meeting the stringent hard deadlines is also submitted. The performance results of the RCS and characteristics of its upset recovery scheme while exercising flight control laws under ideal conditions as well as in the presence of electromagnetic fields are also discussed.
On the use of log-transformation vs. nonlinear regression for analyzing biological power laws.
Xiao, Xiao; White, Ethan P; Hooten, Mevin B; Durham, Susan L
2011-10-01
Power-law relationships are among the most well-studied functional relationships in biology. Recently the common practice of fitting power laws using linear regression (LR) on log-transformed data has been criticized, calling into question the conclusions of hundreds of studies. It has been suggested that nonlinear regression (NLR) is preferable, but no rigorous comparison of these two methods has been conducted. Using Monte Carlo simulations, we demonstrate that the error distribution determines which method performs better, with NLR better characterizing data with additive, homoscedastic, normal error and LR better characterizing data with multiplicative, heteroscedastic, lognormal error. Analysis of 471 biological power laws shows that both forms of error occur in nature. While previous analyses based on log-transformation appear to be generally valid, future analyses should choose methods based on a combination of biological plausibility and analysis of the error distribution. We provide detailed guidelines and associated computer code for doing so, including a model averaging approach for cases where the error structure is uncertain.
Modeling digital breast tomosynthesis imaging systems for optimization studies
NASA Astrophysics Data System (ADS)
Lau, Beverly Amy
Digital breast tomosynthesis (DBT) is a new imaging modality for breast imaging. In tomosynthesis, multiple images of the compressed breast are acquired at different angles, and the projection view images are reconstructed to yield images of slices through the breast. One of the main problems to be addressed in the development of DBT is determining the optimal parameter settings to obtain images ideal for detection of cancer. Since it would be unethical to irradiate women multiple times to explore potentially optimum geometries for tomosynthesis, it is preferable to use a computer simulation to generate projection images. Existing tomosynthesis models have modeled scatter and detector effects without accounting for the oblique angles of incidence that tomosynthesis introduces. Moreover, these models frequently use geometry-specific physical factors measured from real systems, which severely limits the robustness of their algorithms for optimization. The goal of this dissertation was to design the framework for a computer simulation of tomosynthesis that would produce images that are sensitive to changes in acquisition parameters, so an optimization study would be feasible. A computer physics simulation of the tomosynthesis system was developed. The x-ray source was modeled as a polychromatic spectrum based on published spectral data, and the inverse-square law was applied. Scatter was applied using a convolution method with angle-dependent scatter point spread functions (sPSFs), followed by scaling using an angle-dependent scatter-to-primary ratio (SPR). Monte Carlo simulations were used to generate sPSFs for a 5-cm breast with a 1-cm air gap. Detector effects were included through geometric propagation of the image onto layers of the detector, which were blurred using depth-dependent detector point-response functions (PRFs). Depth-dependent PRFs were calculated every 5 microns through a 200-micron-thick CsI detector using Monte Carlo simulations.
Electronic noise was added as Gaussian noise as a last step of the model. The sPSFs and detector PRFs were verified to match published data, and the noise power spectrum (NPS) from simulated flat-field images was shown to match empirically measured data from a digital mammography unit. A novel anthropomorphic software breast phantom was developed for 3D imaging simulation. Projection view images of the phantom were shown to have similar structure to real breasts in the spatial frequency domain, using the power-law exponent beta to quantify tissue complexity. The physics simulation and computer breast phantom were used together, following methods from a published study with real tomosynthesis images of real breasts. The simulation model and 3D numerical breast phantoms were able to reproduce the trends in the experimental data. This result demonstrates the ability of the tomosynthesis physics model to generate images sensitive to changes in acquisition parameters.
DOT National Transportation Integrated Search
1981-09-01
Volume II is the second volume of a three volume document describing the computer program HEVSIM for use with buses and heavy duty trucks. This volume is a user's manual describing how to prepare data input and execute the program. A strong effort ha...
Cart3D Simulations for the Second AIAA Sonic Boom Prediction Workshop
NASA Technical Reports Server (NTRS)
Anderson, George R.; Aftosmis, Michael J.; Nemec, Marian
2017-01-01
Simulation results are presented for all test cases prescribed in the Second AIAA Sonic Boom Prediction Workshop. For each of the four nearfield test cases, we compute pressure signatures at specified distances and off-track angles, using an inviscid, embedded-boundary Cartesian-mesh flow solver with output-based mesh adaptation. The cases range in complexity from an axisymmetric body to a full low-boom aircraft configuration with a powered nacelle. For efficiency, boom carpets are decomposed into sets of independent meshes and computed in parallel. This also facilitates the use of more effective meshing strategies - each off-track angle is computed on a mesh with good azimuthal alignment, higher aspect ratio cells, and more tailored adaptation. The nearfield signatures generally exhibit good convergence with mesh refinement. We introduce a local error estimation procedure to highlight regions of the signatures most sensitive to mesh refinement. Results are also presented for the two propagation test cases, which investigate the effects of atmospheric profiles on ground noise. Propagation is handled with an augmented Burgers' equation method (NASA's sBOOM), and ground noise metrics are computed with LCASB.
A second golden age of aeroacoustics?
Lele, Sanjiva K; Nichols, Joseph W
2014-08-13
In 1992, Sir James Lighthill foresaw the dawn of a second golden age in aeroacoustics enabled by computer simulations (Hardin JC, Hussaini MY (eds) 1993 Computational aeroacoustics, New York, NY: Springer (doi:10.1007/978-1-4613-8342-0)). This review traces the progress in large-scale computations to resolve the noise-source processes and the methods devised to predict the far-field radiated sound using this information. Keeping focus on aviation-related noise sources, a brief account of the progress in simulations of jet noise, fan noise and airframe noise is given, highlighting the key technical issues and challenges. The complex geometry of nozzle elements and airframe components as well as the high Reynolds number of target applications require careful assessment of the discretization algorithms on unstructured grids and modelling compromises. High-fidelity simulations with 200-500 million points are not uncommon today and are used to improve scientific understanding of the noise generation process in specific situations. We attempt to discern where the future might take us, especially if exascale computing becomes a reality in 10 years. A pressing question in this context concerns the role of modelling in the coming era. While the sheer scale of the data generated by large-scale simulations will require new methods for data analysis and data visualization, it is our view that suitable theoretical formulations and reduced models will be even more important in future. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
Toward Inverse Control of Physics-Based Sound Synthesis
NASA Astrophysics Data System (ADS)
Pfalz, A.; Berdahl, E.
2017-05-01
Long Short-Term Memory networks (LSTMs) can be trained to realize inverse control of physics-based sound synthesizers. Physics-based sound synthesizers simulate the laws of physics to produce output sound according to input gesture signals. When a user's gestures are measured in real time, she or he can use them to control physics-based sound synthesizers, thereby creating simulated virtual instruments. An intriguing question is how to program a computer to learn to play such physics-based models. This work demonstrates that LSTMs can be trained to accomplish this inverse control task with four physics-based sound synthesizers.
Computer Series, 52: Scientific Exploration with a Microcomputer: Simulations for Nonscientists.
ERIC Educational Resources Information Center
Whisnant, David M.
1984-01-01
Describes two simulations, written for Apple II microcomputers, focusing on scientific methodology. The first is based on the tendency of colloidal iron in high concentrations to stick to fish gills and cause breathing difficulties. The second, modeled after the dioxin controversy, examines a hypothetical chemical thought to cause cancer. (JN)
Reconfigurable Software for Controlling Formation Flying
NASA Technical Reports Server (NTRS)
Mueller, Joseph B.
2006-01-01
Software for a system to control the trajectories of multiple spacecraft flying in formation is being developed to reflect underlying concepts of (1) a decentralized approach to guidance and control and (2) reconfigurability of the control system, including reconfigurability of the software and of control laws. The software is organized as a modular network of software tasks. The computational load for both determining relative trajectories and planning maneuvers is shared equally among all spacecraft in a cluster. The flexibility and robustness of the software are apparent in the fact that tasks can be added, removed, or replaced during flight. In a computational simulation of a representative formation-flying scenario, it was demonstrated that the following are among the services performed by the software: Uploading of commands from a ground station and distribution of the commands among the spacecraft, Autonomous initiation and reconfiguration of formations, Autonomous formation of teams through negotiations among the spacecraft, Working out details of high-level commands (e.g., shapes and sizes of geometrically complex formations), Implementation of a distributed guidance law providing autonomous optimization and assignment of target states, and Implementation of a decentralized, fuel-optimal, impulsive control law for planning maneuvers.
2003-12-01
POPL), pages 146–157, 1988. [HT01] Nevin Heintze and Olivier Tardieu. Ultra-fast aliasing analysis using CLA: A million lines of C code in a second…
Thermodynamics of natural selection III: Landauer's principle in computation and chemistry.
Smith, Eric
2008-05-21
This is the third in a series of three papers devoted to energy flow and entropy changes in chemical and biological processes, and their relations to the thermodynamics of computation. The previous two papers have developed reversible chemical transformations as idealizations for studying physiology and natural selection, and derived bounds from the second law of thermodynamics, between information gain in an ensemble and the chemical work required to produce it. This paper concerns the explicit mapping of chemistry to computation, and particularly the Landauer decomposition of irreversible computations, in which reversible logical operations generating no heat are separated from heat-generating erasure steps which are logically irreversible but thermodynamically reversible. The Landauer arrangement of computation is shown to produce the same entropy-flow diagram as that of the chemical Carnot cycles used in the second paper of the series to idealize physiological cycles. The specific application of computation to data compression and error-correcting encoding also makes possible a Landauer analysis of the somewhat different problem of optimal molecular recognition, which has been considered as an information theory problem. It is shown here that bounds on maximum sequence discrimination from the enthalpy of complex formation, although derived from the same logical model as the Shannon theorem for channel capacity, arise from exactly the opposite model for erasure.
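Landauer's principle sets the scale for the erasure bounds discussed above; a quick numeric check of the minimum heat cost per erased bit at room temperature:

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K (exact in the 2019 SI)
T = 300.0            # room temperature, K

# Landauer bound: erasing one bit dissipates at least k_B * T * ln 2 of heat.
e_min_per_bit = K_B * T * math.log(2.0)   # about 2.87e-21 J
```

The same k_B T ln 2 per bit is what links the chemical-work bounds of the earlier papers in the series to the thermodynamics of computation.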
Corrias, A.; Jie, X.; Romero, L.; Bishop, M. J.; Bernabeu, M.; Pueyo, E.; Rodriguez, B.
2010-01-01
In this paper, we illustrate how advanced computational modelling and simulation can be used to investigate drug-induced effects on cardiac electrophysiology and on specific biomarkers of pro-arrhythmic risk. To do so, we first perform a thorough literature review of proposed arrhythmic risk biomarkers from the ionic to the electrocardiogram levels. The review highlights the variety of proposed biomarkers, the complexity of the mechanisms of drug-induced pro-arrhythmia and the existence of significant animal species differences in drug-induced effects on cardiac electrophysiology. Predicting drug-induced pro-arrhythmic risk solely using experiments is challenging both preclinically and clinically, as attested by the rise in the cost of releasing new compounds to the market. Computational modelling and simulation has significantly contributed to the understanding of cardiac electrophysiology and arrhythmias over the last 40 years. In the second part of this paper, we illustrate how state-of-the-art open source computational modelling and simulation tools can be used to simulate multi-scale effects of drug-induced ion channel block in ventricular electrophysiology at the cellular, tissue and whole ventricular levels for different animal species. We believe that the use of computational modelling and simulation in combination with experimental techniques could be a powerful tool for the assessment of drug safety pharmacology. PMID:20478918
The design and implementation of CRT displays in the TCV real-time simulation
NASA Technical Reports Server (NTRS)
Leavitt, J. B.; Tariq, S. I.; Steinmetz, G. G.
1975-01-01
The design and application of computer graphics to the Terminal Configured Vehicle (TCV) program were described. A Boeing 737-100 series aircraft was modified with a second flight deck and several computers installed in the passenger cabin. One of the elements in support of the TCV program is a sophisticated simulation system developed to duplicate the operation of the aft flight deck. This facility consists of an aft flight deck simulator, equipped with realistic flight instrumentation, a CDC 6600 computer, and an Adage graphics terminal; this terminal presents to the simulator pilot displays similar to those used on the aircraft with equivalent man-machine interactions. These two displays form the primary flight instrumentation for the pilot and are dynamic images depicting critical flight information. The graphics terminal is a high speed interactive refresh-type graphics system. To support the cockpit display, two remote CRTs were wired in parallel with two of the Adage scopes.
Fast Photon Monte Carlo for Water Cherenkov Detectors
NASA Astrophysics Data System (ADS)
Latorre, Anthony; Seibert, Stanley
2012-03-01
We present Chroma, a high performance optical photon simulation for large particle physics detectors, such as the water Cherenkov far detector option for LBNE. This software takes advantage of the CUDA parallel computing platform to propagate photons using modern graphics processing units. In a computer model of a 200 kiloton water Cherenkov detector with 29,000 photomultiplier tubes, Chroma can propagate 2.5 million photons per second, around 200 times faster than the same simulation with Geant4. Chroma uses a surface-based approach to modeling geometry, which offers many benefits over the solid-based modelling approach used in other simulations like Geant4.
Computational methods and software systems for dynamics and control of large space structures
NASA Technical Reports Server (NTRS)
Park, K. C.; Felippa, C. A.; Farhat, C.; Pramono, E.
1990-01-01
This final report on computational methods and software systems for dynamics and control of large space structures covers progress to date, projected developments in the final months of the grant, and conclusions. Pertinent reports and papers that have not appeared in scientific journals (or have not yet appeared in final form) are enclosed. The grant has supported research in two key areas of crucial importance to the computer-based simulation of large space structure. The first area involves multibody dynamics (MBD) of flexible space structures, with applications directed to deployment, construction, and maneuvering. The second area deals with advanced software systems, with emphasis on parallel processing. The latest research thrust in the second area, as reported here, involves massively parallel computers.
Villard, P F; Vidal, F P; Hunt, C; Bello, F; John, N W; Johnson, S; Gould, D A
2009-11-01
We present here a simulator for interventional radiology focusing on percutaneous transhepatic cholangiography (PTC). This procedure consists of inserting a needle into the biliary tree using fluoroscopy for guidance. The requirements of the simulator have been driven by a task analysis. The three main components have been identified: the respiration, the real-time X-ray display (fluoroscopy) and the haptic rendering (sense of touch). The framework for modelling the respiratory motion is based on kinematics laws and on the Chainmail algorithm. The fluoroscopic simulation is performed on the graphics card and makes use of the Beer-Lambert law to compute the X-ray attenuation. Finally, the haptic rendering is integrated into the virtual environment and takes into account the soft-tissue reaction force feedback and maintenance of the initial direction of the needle during the insertion. Five training scenarios have been created using patient-specific data. Each of these provides the user with variable breathing behaviour, fluoroscopic display tuneable to any device parameters and needle force feedback. A detailed task analysis has been used to design and build the PTC simulator described in this paper. The simulator includes real-time respiratory motion with two independent parameters (rib kinematics and diaphragm action), on-line fluoroscopy implemented on the Graphics Processing Unit and haptic feedback to feel the soft-tissue behaviour of the organs during the needle insertion.
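Per ray, the Beer-Lambert evaluation at the heart of such a fluoroscopic simulation reduces to an exponential of the summed optical depth. A minimal sketch with hypothetical attenuation coefficients (the simulator described above evaluates this per pixel on the GPU):

```python
import math

def xray_transmission(i0, segments):
    """Beer-Lambert attenuation along one ray.

    segments: (mu, thickness) pairs, mu in 1/cm and thickness in cm.
    Returns the transmitted intensity I = I0 * exp(-sum(mu_i * d_i)).
    """
    optical_depth = sum(mu * d for mu, d in segments)
    return i0 * math.exp(-optical_depth)

# Hypothetical ray: 3 cm of soft tissue (mu ~ 0.2/cm) then 1 cm of bone (mu ~ 0.5/cm).
i_out = xray_transmission(100.0, [(0.2, 3.0), (0.5, 1.0)])   # 100 * exp(-1.1)
```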
Can Newton's Third Law Be "Derived" from the Second?
NASA Astrophysics Data System (ADS)
Gangopadhyaya, Asim; Harrington, James
2017-04-01
Newton's laws have engendered much discussion over several centuries. Today, the internet is awash with a plethora of information on this topic. We find many references to Newton's laws, often discussions of various types of misunderstandings and ways to explain them. Here we present an intriguing example that shows an assumption hidden in Newton's third law that is often overlooked. As is well known, the first law defines an inertial frame of reference and the second law determines the acceleration of a particle in such a frame due to an external force. The third law describes the forces that two particles in a system exert on each other, and allows us to extend the second law to a system of particles. Students are often taught that the three laws are independent. Here we present an example that challenges this assumption. At first glance, it seems to show that, at least for a special case, the third law follows from the second law. However, a careful examination of the assumptions demonstrates that this is not quite the case. Ultimately, the example does illustrate the significance of the concept of mass in linking Newton's dynamical principles.
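The logical link can be made concrete with a toy two-particle simulation: if internal forces are the only momentum exchange (the hidden assumption the article discusses) and F21 = −F12 is imposed, total momentum is conserved step by step. All masses, spring constants and initial conditions below are illustrative:

```python
# d(p1 + p2)/dt = F12 + F21 = 0 requires F21 = -F12 when internal forces are
# the only momentum exchange -- the assumption hidden in the "derivation".

def spring_force(x1, x2, k=4.0, rest=1.0):
    """Force on particle 1 from particle 2 for a hypothetical linear spring."""
    return k * ((x2 - x1) - rest)

m1, m2 = 1.0, 3.0
x1, x2 = 0.0, 1.5
v1, v2 = 0.2, -0.1
dt, steps = 1e-4, 10_000
p_initial = m1 * v1 + m2 * v2

for _ in range(steps):
    f12 = spring_force(x1, x2)   # on particle 1 from particle 2
    f21 = -f12                   # third law, imposed by construction
    v1 += f12 / m1 * dt          # second law, per particle
    v2 += f21 / m2 * dt
    x1 += v1 * dt
    x2 += v2 * dt

p_final = m1 * v1 + m2 * v2      # conserved to floating-point precision
```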
Application of Local Discretization Methods in the NASA Finite-Volume General Circulation Model
NASA Technical Reports Server (NTRS)
Yeh, Kao-San; Lin, Shian-Jiann; Rood, Richard B.
2002-01-01
We present the basic ideas of the dynamics system of the finite-volume General Circulation Model developed at NASA Goddard Space Flight Center for climate simulations and other applications in meteorology. The dynamics of this model is designed with emphases on conservative and monotonic transport, where the property of Lagrangian conservation is used to maintain the physical consistency of the computational fluid for long-term simulations. As the model benefits from the noise-free solutions of monotonic finite-volume transport schemes, the property of Lagrangian conservation also partly compensates for the loss of transport accuracy caused by the diffusion effects of the monotonicity treatment. By faithfully maintaining the fundamental laws of physics during the computation, this model is able to achieve sufficient accuracy for the global consistency of climate processes. Because the computing algorithms are based on local memory, this model has the advantage of efficiency in parallel computation with distributed memory. Further research is yet desirable to reduce the diffusion effects of monotonic transport for better accuracy, and to mitigate the limitation due to fast-moving gravity waves for better efficiency.
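The trade-off described above, exact conservation and monotonicity at the price of numerical diffusion, is already visible in the simplest monotonic finite-volume scheme, first-order upwind advection. This is a generic sketch, not the model's actual transport scheme:

```python
# First-order upwind finite-volume advection on a periodic domain:
# exactly conservative and monotonic (no new extrema), but diffusive.

def upwind_step(q, c):
    """One conservative upwind step; c = u*dt/dx must lie in (0, 1]."""
    n = len(q)
    flux = [c * q[i - 1] for i in range(n)]   # flux through each cell's left face
    return [q[i] - (flux[(i + 1) % n] - flux[i]) for i in range(n)]

q = [0.0] * 50
q[10:20] = [1.0] * 10                # square pulse of tracer
total_before = sum(q)

for _ in range(200):
    q = upwind_step(q, 0.5)

total_after = sum(q)                 # conservation: totals agree
bounded = all(-1e-12 <= v <= 1.0 + 1e-12 for v in q)   # monotonicity: no overshoot
```

Each updated value is the convex combination (1-c)·q[i] + c·q[i-1], which is why no new extrema can appear; the same averaging is also the source of the diffusion the abstract mentions.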
Rapid Quantification of Energy Absorption and Dissipation Metrics for PPE Padding Materials
2010-01-22
dampers, i.e., Hooke’s Law springs and viscous …absorbing/dissipating materials. Input forces caused by blast pressures, determined from computational fluid dynamics (CFD) analysis and simulation…simple lumped-parameter elements – spring, k (energy storage) – damper, b (energy dissipation)…
Computer Simulation of a 155-mm Projectile in a Scat Gun Assembly
2008-09-01
piston increases its kinetic energy as it is displaced a distance X: M_p V_s²/2 = ∫ (P·A − F) dX, with the gas pressure given by the Ideal Gas Law PV = νRT as P = νRT/(A(L_pw − X)). When the piston is displaced x, the critical pressure can likewise be calculated from PV = νRT as P = νRT/(A(L_pw − x)).
Optimization behavior of brainstem respiratory neurons. A cerebral neural network model.
Poon, C S
1991-01-01
A recent model of respiratory control suggested that the steady-state respiratory responses to CO2 and exercise may be governed by an optimal control law in the brainstem respiratory neurons. It was not certain, however, whether such complex optimization behavior could be accomplished by a realistic biological neural network. To test this hypothesis, we developed a hybrid computer-neural model in which the dynamics of the lung, brain and other tissue compartments were simulated on a digital computer. Mimicking the "controller" was a human subject who pedalled on a bicycle with varying speed (analog of ventilatory output) with a view to minimize an analog signal of the total cost of breathing (chemical and mechanical) which was computed interactively and displayed on an oscilloscope. In this manner, the visuomotor cortex served as a proxy (homolog) of the brainstem respiratory neurons in the model. Results in 4 subjects showed a linear steady-state ventilatory CO2 response to arterial PCO2 during simulated CO2 inhalation and a nearly isocapnic steady-state response during simulated exercise. Thus, neural optimization is a plausible mechanism for respiratory control during exercise and can be achieved by a neural network with cognitive computational ability without the need for an exercise stimulus.
Application of foam-extend on turbulent fluid-structure interaction
NASA Astrophysics Data System (ADS)
Rege, K.; Hjertager, B. H.
2017-12-01
Turbulent flow around flexible structures is likely to induce structural vibrations which may eventually lead to fatigue failure. In order to assess the fatigue life of these structures, it is necessary to take into account not only the action of the flow on the structure, but also the influence of the vibrating structure on the fluid flow. This is achieved by performing fluid-structure interaction (FSI) simulations. In this work, we have investigated the capability of a FSI toolkit for the finite volume computational fluid dynamics software foam-extend to simulate turbulence-induced vibrations of a flexible structure. A large-eddy simulation (LES) turbulence model has been applied to a basic FSI problem of a flexible wall placed in a confined, turbulent flow. This problem was simulated for 2.32 seconds. This short simulation required over 200 computation hours, using 20 processor cores. Thereby, it has been shown that the simulation of FSI with LES is possible, but also computationally demanding. In order to make turbulent FSI simulations with foam-extend more applicable, more sophisticated turbulence models and/or faster FSI iteration schemes should be applied.
On the interpretation of kernels - Computer simulation of responses to impulse pairs
NASA Technical Reports Server (NTRS)
Hung, G.; Stark, L.; Eykhoff, P.
1983-01-01
A method is presented for the use of a unit impulse response and responses to impulse pairs of variable separation in the calculation of the second-degree kernels of a quadratic system. A quadratic system may be built from simple linear terms of known dynamics and a multiplier. Computer simulation results on quadratic systems with building elements of various time constants indicate reasonably that the larger time constant term before multiplication dominates in the envelope of the off-diagonal kernel curves as these move perpendicular to and away from the main diagonal. The smaller time constant term before multiplication combines with the effect of the time constant after multiplication to dominate in the kernel curves in the direction of the second-degree impulse response, i.e., parallel to the main diagonal. Such types of insight may be helpful in recognizing essential aspects of (second-degree) kernels; they may be used in simplifying the model structure and, perhaps, add to the physical/physiological understanding of the underlying processes.
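The impulse-pair construction can be reproduced on a toy quadratic system built, as the abstract describes, from linear terms of known dynamics and a multiplier. The filters and time constants here are hypothetical; subtracting the two single-impulse responses from the pair response isolates the off-diagonal second-degree kernel cross-section:

```python
import math

# Toy quadratic system: y = (h * u) . (g * u), the pointwise product of two
# first-order filters with hypothetical time constants tau_h and tau_g.
def impulse_response(tau, n):
    return [math.exp(-k / tau) for k in range(n)]

def quadratic_response(u, h, g):
    n = len(u)
    def conv(f):
        return [sum(f[k] * u[t - k] for k in range(t + 1)) for t in range(n)]
    a, b = conv(h), conv(g)
    return [a[t] * b[t] for t in range(n)]

n = 64
h = impulse_response(8.0, n)   # larger time constant before the multiplier
g = impulse_response(2.0, n)   # smaller time constant before the multiplier

def input_with_impulses(times):
    u = [0.0] * n
    for t in times:
        u[t] += 1.0
    return u

t1, t2 = 5, 12
y_pair = quadratic_response(input_with_impulses([t1, t2]), h, g)
y1 = quadratic_response(input_with_impulses([t1]), h, g)
y2 = quadratic_response(input_with_impulses([t2]), h, g)

# Pair response minus both single responses leaves only the cross terms:
# cross[t] = h(t - t1) g(t - t2) + h(t - t2) g(t - t1) for t >= t2.
cross = [y_pair[t] - y1[t] - y2[t] for t in range(n)]
```

Sweeping the separation t2 − t1 traces out the off-diagonal kernel curves whose envelopes the abstract attributes to the larger time constant.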
The Next Frontier in Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sarrao, John
2016-11-16
Exascale computing refers to computing systems capable of at least one exaflop, i.e., a billion billion (10^18) calculations per second. That is 50 times faster than the most powerful supercomputers being used today and represents a thousand-fold increase over the first petascale computer that came into operation in 2008. How we use these large-scale simulation resources is the key to solving some of today’s most pressing problems, including clean energy production, nuclear reactor lifetime extension and nuclear stockpile aging.
Heat transfer and thermal management of electric vehicle batteries with phase change materials
NASA Astrophysics Data System (ADS)
Ramandi, M. Y.; Dincer, I.; Naterer, G. F.
2011-07-01
This paper examines a passive thermal management system for electric vehicle batteries, consisting of encapsulated phase change material (PCM) which melts to absorb the heat generated by a battery. A new configuration for the thermal management system, using double series PCM shells, is analyzed with finite volume simulations. A combination of computational fluid dynamics (CFD) and second law analysis is used to evaluate and compare the new system against the single PCM shells. Using a finite volume method, heat transfer in the battery pack is examined and the results are used to analyse the exergy losses. The simulations provide design guidelines for the thermal management system to minimize the size and cost of the system. The thermal conductivity and melting temperature are studied as two important parameters in the configuration of the shells. Heat transfer from the surroundings to the PCM shell in a non-insulated case is found to be infeasible. For a single PCM system, the exergy efficiency is below 50%. For the second case, with other combinations, the exergy efficiencies ranged from 30 to 40%. The second shell content did not have significant influence on the exergy efficiencies. The double PCM shell system showed higher exergy efficiencies than the single PCM shell system (except in one case, for type PCM-1). With respect to the reference environment, it is found that in all cases the exergy efficiencies decrease when the dead-state temperature rises, and the destroyed exergy content increases gradually. For the double shell systems, the efficiencies were very similar at all dead-state temperatures. Except at a dead-state temperature of 302 K, the exergy efficiencies for the different combinations are well over 50%. The range of exergy efficiencies varies widely, between 15 and 85% for a single-shell system and between 30 and 80% for double-shell systems.
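The second-law bookkeeping behind such exergy efficiencies can be sketched with the standard heat-exergy relation, Ex = Q(1 − T0/T), evaluated against a dead-state temperature T0. All temperatures and heat loads below are hypothetical, not the paper's values:

```python
def exergy_of_heat(q, t_source, t0):
    """Exergy content of heat q (J) exchanged at temperature t_source (K)."""
    return q * (1.0 - t0 / t_source)

t0 = 298.0          # dead-state (reference environment) temperature, K
q_battery = 5000.0  # heat released by the battery, J (hypothetical)
t_battery = 320.0   # battery surface temperature, K (hypothetical)
t_pcm = 310.0       # PCM melting temperature, K (hypothetical)

ex_in = exergy_of_heat(q_battery, t_battery, t0)     # exergy supplied by the battery
ex_stored = exergy_of_heat(q_battery, t_pcm, t0)     # exergy stored in the melting PCM
eta_ex = ex_stored / ex_in                           # second-law (exergy) efficiency
```

Raising t0 toward the PCM melting temperature shrinks the numerator faster than the denominator, reproducing the trend in the abstract that efficiencies fall as the dead-state temperature rises.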
Morphing continuum theory for turbulence: Theory, computation, and visualization.
Chen, James
2017-10-01
A high order morphing continuum theory (MCT) is introduced to model highly compressible turbulence. The theory is formulated under the rigorous framework of rational continuum mechanics. A set of linear constitutive equations and balance laws are deduced and presented from the Coleman-Noll procedure and Onsager's reciprocal relations. The governing equations are then arranged in conservation form and solved through the finite volume method with a second-order Lax-Friedrichs scheme for shock preservation. A numerical example of transonic flow over a three-dimensional bump is presented using MCT and the finite volume method. The comparison with experiments shows that MCT-based direct numerical simulation (DNS) provides a better prediction than Navier-Stokes (NS)-based DNS while using less than 10% of the mesh points. A MCT-based and frame-indifferent Q criterion is also derived to show the coherent eddy structure of the downstream turbulence in the numerical example. It should be emphasized that unlike the NS-based Q criterion, the MCT-based Q criterion is objective without the limitation of Galilean invariance.
Darrieus-Landau instability of premixed flames enhanced by fuel droplets
NASA Astrophysics Data System (ADS)
Nicoli, Colette; Haldenwang, Pierre; Denet, Bruno
2017-07-01
Recent experiments on spray flames propagating in a Wilson cloud chamber have established that spray flames are much more sensitive to wrinkles or corrugations than single-phase flames. To offer some elements of explanation, we numerically study the Darrieus-Landau (or hydrodynamic) instability (DL-instability) developing in premixtures that contain an array of fuel droplets. Two approaches are compared: numerical simulation starting from the general conservation laws in reactive media, and numerical computation of Sivashinsky-type model equations for the DL-instability. Both approaches yield results in close agreement. It is first shown that the presence of droplets in fuel-air premixtures induces initial perturbations that are large enough to trigger the DL-instability. Second, the droplets are responsible for additional wrinkles once the DL-instability is developed. The latter wrinkles have length scales shorter than those of the DL-instability, so that DL-unstable spray flames have a larger front surface and therefore propagate faster than single-phase ones subjected to the same instability.
Kinematical line broadening and spatially resolved line profiles from AGN.
NASA Astrophysics Data System (ADS)
Schulz, H.; Muecke, A.; Boer, B.; Dresen, M.; Schmidt-Kaler, T.
1995-03-01
We study geometrical effects for emission-line broadening in the optically thin limit by integrating the projected line emissivity along prespecified lines of sight that intersect rotating or expanding disks or cone-like configurations. Analytical expressions are given for the case that emissivity and velocity follow power laws of the radial distance. The results help to interpret spatially resolved spectra and to check the reliability of numerical computations. In the second part we describe a numerical code applicable to any geometrical configuration. Turbulent motions, atmospheric seeing and effects induced by the size of the observing aperture are simulated with appropriate convolution procedures. An application to narrow-line Hα profiles from the central region of the Seyfert galaxy NGC 7469 is presented. The shapes and asymmetries as well as the relative strengths of the Hα lines from different spatial positions can be explained by emission from a nuclear rotating disk of ionized gas, for which the distribution of Hα line emissivity and the rotation curve are derived. Appreciable turbulent line broadening with a Gaussian σ of ~40% of the rotational velocity has to be included to obtain a satisfactory fit.
Natural convection in a vertical plane channel: DNS results for high Grashof numbers
NASA Astrophysics Data System (ADS)
Kiš, P.; Herwig, H.
2014-07-01
The turbulent natural convection of a gas (Pr = 0.71) between two vertical infinite walls at different but constant temperatures is investigated by means of direct numerical simulation for a wide range of Grashof numbers (6.0 × 10^6 > Gr > 1.0 × 10^3). The maximum Grashof number is almost one order of magnitude higher than those of computations reported in the literature so far. Results for the turbulent transport equations are presented and compared to previous studies, with special attention to the study of Versteegh and Nieuwstadt (Int J Heat Fluid Flow 19:135-149, 1998). All turbulence statistics are available on the TUHH homepage (http://www.tu-harburg.de/tt/dnsdatabase/dbindex.en.html). Accuracy considerations are based on the time-averaged balance equations for kinetic and thermal energy. With the second law of thermodynamics, Nusselt numbers can be determined both by evaluating time-averaged wall temperature gradients and by a volumetric time-averaged integration. Comparing the results of both approaches leads to a direct measure of the physical consistency.
Control of nonlinear systems represented in quasilinear form. Ph.D. Thesis, 1994 Final Report
NASA Technical Reports Server (NTRS)
Coetsee, Josef A.
1993-01-01
Methods to synthesize controllers for nonlinear systems are developed by exploiting the fact that under mild differentiability conditions, systems of the form x-dot = f(x) + G(x)u can be represented in quasilinear form, viz: x-dot = A(x)x + B(x)u. Two classes of control methods are investigated. The first is zero-look-ahead control, where the control input depends only on the current values of A(x) and B(x). For this case the control input is computed by continuously solving a matrix Riccati equation as the system progresses along a trajectory. The second is controllers with look-ahead, where the control input depends on the future behavior of A(x) and B(x). These controllers use the similarity between quasilinear systems and linear time-varying systems to find approximate solutions to optimal-control-type problems. The methods that are developed are not guaranteed to be globally stable. However, in simulation studies they were found to be useful alternatives for synthesizing control laws for a general class of nonlinear systems.
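The "zero-look-ahead" idea above (solving a Riccati equation at the current state and applying the resulting LQR-style feedback) can be sketched for a scalar quasilinear system, where the algebraic Riccati equation has a closed-form positive root. Everything below (the choice of a(x), the weights q and r) is an invented illustration, not the thesis's system or code.

```python
# Toy "zero-look-ahead" control of a scalar quasilinear system
# x' = a(x)*x + b*u. At each step the scalar algebraic Riccati equation
# 2*a*p - (b*p)**2/r + q = 0 is solved in closed form and the feedback
# u = -(b*p/r)*x is applied. a(x), q, r are illustrative choices only.
import math

def riccati_gain(a, b, q, r):
    """Feedback gain from the positive root of the scalar Riccati equation."""
    p = r * (a + math.sqrt(a * a + q * b * b / r)) / (b * b)
    return b * p / r

def simulate(x0, b=1.0, q=1.0, r=1.0, dt=0.01, steps=500):
    x = x0
    for _ in range(steps):
        a = 1.0 + x * x                    # state-dependent, open-loop unstable a(x)
        u = -riccati_gain(a, b, q, r) * x  # gain recomputed along the trajectory
        x += dt * (a * x + b * u)          # explicit Euler integration
    return x

x_final = simulate(x0=1.0)
```

With this gain the closed loop reduces to x' = -sqrt(a^2 + q*b^2/r) * x, so the state decays even though a(x) > 0 everywhere, which is the point of recomputing the Riccati solution as the trajectory evolves.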
Adaptive mesh fluid simulations on GPU
NASA Astrophysics Data System (ADS)
Wang, Peng; Abel, Tom; Kaehler, Ralf
2010-10-01
We describe an implementation of compressible inviscid fluid solvers with block-structured adaptive mesh refinement on Graphics Processing Units using NVIDIA's CUDA. We show that a class of high resolution shock capturing schemes can be mapped naturally onto this architecture. Using the method of lines approach with the second-order total variation diminishing Runge-Kutta time integration scheme, piecewise linear reconstruction, and a Harten-Lax-van Leer Riemann solver, we achieve an overall speedup of approximately 10 times on one graphics card as compared to a single core on the host computer. We attain this speedup in uniform grid runs as well as in problems with deep AMR hierarchies. Our framework can readily be applied to more general systems of conservation laws and extended to higher order shock capturing schemes. This is shown directly by an implementation of a magneto-hydrodynamic solver and a comparison of its performance to the pure hydrodynamic case. Finally, we also combined our CUDA parallel scheme with MPI to make the code run on GPU clusters. Close to ideal speedup is observed on up to four GPUs.
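The second-order TVD (strong-stability-preserving) Runge-Kutta time integration named in this abstract is a two-stage scheme: a forward Euler predictor followed by an averaged corrector. The sketch below applies it to a deliberately simple spatial operator (first-order upwind advection on a periodic grid) rather than the paper's reconstruction/HLL pipeline; grid sizes and the advection speed are invented for illustration.

```python
# Sketch of two-stage TVD (SSP) Runge-Kutta time integration applied to a toy
# method-of-lines semi-discretization: first-order upwind advection, periodic.
# The GPU solver in the paper pairs this with piecewise linear reconstruction
# and an HLL Riemann solver, both omitted here.

def L(u, dx, a=1.0):
    """Upwind spatial operator for u_t + a u_x = 0, a > 0, periodic grid."""
    return [-a * (u[i] - u[i - 1]) / dx for i in range(len(u))]

def ssp_rk2_step(u, dx, dt):
    # stage 1: forward Euler predictor
    Lu = L(u, dx)
    u1 = [u[i] + dt * Lu[i] for i in range(len(u))]
    # stage 2: average the old state with an Euler step from the predictor;
    # this convex combination is what makes the scheme TVD
    Lu1 = L(u1, dx)
    return [0.5 * u[i] + 0.5 * (u1[i] + dt * Lu1[i]) for i in range(len(u))]

# usage: advect a square pulse at CFL = a*dt/dx = 0.32
n, dx, dt = 64, 1.0 / 64, 0.005
u = [1.0 if 16 <= i < 32 else 0.0 for i in range(n)]
for _ in range(100):
    u = ssp_rk2_step(u, dx, dt)
```

Because each stage is a monotone conservative update and the combination is convex, the solution stays within its initial bounds (no new extrema) and the total of the cell averages is preserved.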
Numerical simulation of a full-loop circulating fluidized bed under different operating conditions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Yupeng; Musser, Jordan M.; Li, Tingwen
Both experimental and computational studies of the fluidization of high-density polyethylene (HDPE) particles in a small-scale full-loop circulating fluidized bed are conducted. Experimental measurements of pressure drop are taken at different locations along the bed. The solids circulation rate is measured with an advanced Particle Image Velocimetry (PIV) technique. The bed height of the quasi-static region in the standpipe is also measured. Comparative numerical simulations are performed with a Computational Fluid Dynamics solver utilizing a Discrete Element Method (CFD-DEM). This paper reports a detailed and direct comparison between CFD-DEM results and experimental data for realistic gas-solid fluidization in a full-loop circulating fluidized bed system. The comparison reveals good agreement with respect to system component pressure drop and inventory height in the standpipe. In addition, the effect of different drag laws applied within the CFD simulation is examined and compared with experimental results.
PLNoise: a package for exact numerical simulation of power-law noises
NASA Astrophysics Data System (ADS)
Milotti, Edoardo
2006-08-01
Many simulations of stochastic processes require colored noises: here I describe a small program library that generates samples with a tunable power-law spectral density. The algorithm can be modified to generate more general colored noises, and is exact for all time steps, even when they are unevenly spaced (as may often happen in the case of astronomical data, see e.g. [N.R. Lomb, Astrophys. Space Sci. 39 (1976) 447]). The method is exact in the sense that it reproduces a process that is theoretically guaranteed to produce a range-limited power-law spectrum 1/f^β with -1 < β ⩽ 1. The algorithm has a well-behaved computational complexity, it produces a nearly perfect Gaussian noise, and its computational efficiency depends on the required degree of noise Gaussianity. Program summary. Title of program: PLNoise. Catalogue identifier: ADXV_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXV_v1_0.html. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Licensing provisions: none. Programming language used: ANSI C. Computer: Any computer with an ANSI C compiler: the package has been tested with gcc version 3.2.3 on Red Hat Linux 3.2.3-52 and gcc versions 4.0.0 and 4.0.1 on Apple Mac OS X 10.4. Operating system: All operating systems capable of running an ANSI C compiler. No. of lines in distributed program, including test data, etc.: 6238. No. of bytes in distributed program, including test data, etc.: 52 387. Distribution format: tar.gz. RAM: The code of the test program is very compact (about 50 Kbytes), but the program works with list management and allocates memory dynamically; in a typical run (like the one discussed in Section 4 in the long write-up) with average list length 2·10, the RAM taken by the list is 200 Kbytes. External routines: The package needs external routines to generate uniform and exponential deviates. The implementation described here uses the random number generation library ranlib, freely available from Netlib [B.W.
Brown, J. Lovato, K. Russell, ranlib, available from Netlib, http://www.netlib.org/random/index.html, select the C version ranlib.c], but it has also been successfully tested with the random number routines in Numerical Recipes [W.H. Press, S.A. Teukolsky, W.T. Vetterling, B.P. Flannery, Numerical Recipes in C: The Art of Scientific Computing, second ed., Cambridge Univ. Press, Cambridge, 1992, pp. 274-290]. Notice that ranlib requires a pair of routines from the linear algebra package LINPACK, and that the distribution of ranlib includes the C source of these routines, in case LINPACK is not installed on the target machine. Nature of problem: Exact generation of different types of Gaussian colored noise. Solution method: Random superposition of relaxation processes [E. Milotti, Phys. Rev. E 72 (2005) 056701]. Unusual features: The algorithm is theoretically guaranteed to be exact, and unlike all other existing generators it can generate samples with uneven spacing. Additional comments: The program requires an initialization step; for some parameter sets this may become rather heavy. Running time: Running time varies widely with different input parameters; however, in a test run like the one in Section 4 of this work, the generation routine took on average about 7 ms per sample.
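The solution method quoted above, "random superposition of relaxation processes," can be caricatured in a few lines: colored noise is built by summing exponentially-relaxing components whose relaxation rates are spread uniformly in log-frequency. The sketch below is a crude even-time-step toy of that idea, not the PLNoise library; the rates, weights and sample count are invented, and no claim of spectral exactness is made for it.

```python
# Rough sketch of colored noise as a superposition of relaxation processes.
# Each component is an exact discrete-time AR(1) (Ornstein-Uhlenbeck) update;
# log-spaced relaxation rates make the summed spectrum approximate a power
# law over the covered band. Illustrative only; NOT the PLNoise algorithm.
import math
import random

def colored_noise(n_samples, dt=1.0, n_components=20, seed=1):
    rng = random.Random(seed)
    # relaxation rates log-spaced over ~4 decades
    rates = [10.0 ** (-3 + 4 * k / (n_components - 1)) for k in range(n_components)]
    state = [0.0] * n_components
    out = []
    for _ in range(n_samples):
        total = 0.0
        for j, lam in enumerate(rates):
            a = math.exp(-lam * dt)
            # exact unit-variance AR(1) update of one relaxation component
            state[j] = a * state[j] + math.sqrt(1.0 - a * a) * rng.gauss(0.0, 1.0)
            total += state[j]
        out.append(total)
    return out

x = colored_noise(2000)
```

The library's actual contribution, per the summary, is making this superposition exact for arbitrary (including uneven) time steps, which the fixed-step toy above does not attempt.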
NASA Astrophysics Data System (ADS)
Du, Zhong; Tian, Bo; Wu, Xiao-Yu; Liu, Lei; Sun, Yan
2017-07-01
Subpicosecond or femtosecond optical pulse propagation in an inhomogeneous fiber can be described by a higher-order nonlinear Schrödinger equation with variable coefficients, which is investigated in this paper. Via the Ablowitz-Kaup-Newell-Segur system and symbolic computation, the Lax pair and infinitely-many conservation laws are deduced. Based on the Lax pair and a modified Darboux transformation technique, the first- and second-order rogue wave solutions are constructed. Effects of the group-velocity dispersion and third-order dispersion on the properties of the first- and second-order rogue waves are graphically presented and analyzed. Both dispersions affect the ranges and shapes of the first- and second-order rogue waves. When the group-velocity dispersion and third-order dispersion are chosen as constants, the third-order dispersion can produce a skew angle of the first-order rogue wave, and the skew angle rotates counterclockwise as the group-velocity dispersion increases. When the group-velocity dispersion and third-order dispersion are taken as functions of the propagation distance, linear, X-shaped and parabolic trajectories of the rogue waves are obtained.
Investigating a New Way To Teach Law: A Computer-based Commercial Law Course.
ERIC Educational Resources Information Center
Lloyd, Robert M.
2000-01-01
Describes the successful use of an interactive, computer-based format supplemented by online chats to provide a two-credit-hour commercial law course at the University of Tennessee College of Law. (EV)
Zhu, Jie; Li, Xiaoxi; Huang, Chen; Chen, Ling; Li, Lin
2014-04-15
This work studied the structural changes and the migration of triacetin plasticizer in starch acetate films in the presence of distilled water as a food simulant. Fourier-transform infrared spectroscopy results showed that the macromolecular interaction was enhanced to form a compact aggregation of amorphous chains. The characterization of aggregation structures via wide- and small-angle X-ray scattering techniques indicated that the orderly microregion was compressed and the crystallites inside were "squeezed" to form interference and further aggregation. The compact aggregation structures restricted the mobility of macromolecules, triacetin and water molecules. The overall kinetics and the diffusion model analysis showed that Fick's second law was the predominant mechanism for the short-term migration of triacetin, while the increasing relaxation within the film matrix caused the subsequent migration to deviate from Fick's law. The safe and reasonable application of starch-based materials with restrained plasticizer migration could be accomplished by controlling the molecular interaction and aggregation structures. Crown Copyright © 2014. Published by Elsevier Ltd. All rights reserved.
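Short-term Fickian migration of the kind described above is commonly fitted with Crank's thin-film approximation M_t/M_inf = (4/L) * sqrt(D*t/pi) for a sheet of thickness L releasing from both faces, valid while the ratio is well below 1. The sketch below uses that textbook formula with made-up values of D and L; neither the numbers nor the formula's use here come from this study.

```python
# Short-time Fickian migration from a plane film (Crank's approximation).
# M_t/M_inf = (4/L)*sqrt(D*t/pi); D and L below are hypothetical values,
# not measurements from the starch acetate study.
import math

def fickian_migration_ratio(t, D, L):
    """Fractional migration M_t/M_inf at time t (valid while result << 1)."""
    return (4.0 / L) * math.sqrt(D * t / math.pi)

# usage: fractional release at three times for an assumed diffusivity/thickness
D = 1e-14   # m^2/s, hypothetical diffusion coefficient of the plasticizer
L = 50e-6   # m, hypothetical film thickness
ratios = [fickian_migration_ratio(t, D, L) for t in (60.0, 600.0, 6000.0)]
```

The square-root-of-time signature (a tenfold increase in t raising the ratio by sqrt(10)) is exactly the short-term behavior that lets such data be tested against Fick's second law before relaxation effects take over.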
Modeling of a Sequential Two-Stage Combustor
NASA Technical Reports Server (NTRS)
Hendricks, R. C.; Liu, N.-S.; Gallagher, J. R.; Ryder, R. C.; Brankovic, A.; Hendricks, J. A.
2005-01-01
A sequential two-stage, natural gas fueled power generation combustion system is modeled to examine the fundamental aerodynamic and combustion characteristics of the system. The modeling methodology includes CAD-based geometry definition, and combustion computational fluid dynamics analysis. Graphical analysis is used to examine the complex vortical patterns in each component, identifying sources of pressure loss. The simulations demonstrate the importance of including the rotating high-pressure turbine blades in the computation, as this results in direct computation of combustion within the first turbine stage, and accurate simulation of the flow in the second combustion stage. The direct computation of hot-streaks through the rotating high-pressure turbine stage leads to improved understanding of the aerodynamic relationships between the primary and secondary combustors and the turbomachinery.
NASA Technical Reports Server (NTRS)
Sozen, Mehmet
2003-01-01
In what follows, the model used for combustion of liquid hydrogen (LH2) with liquid oxygen (LOX) using chemical equilibrium assumption, and the novel computational method developed for determining the equilibrium composition and temperature of the combustion products by application of the first and second laws of thermodynamics will be described. The modular FORTRAN code developed as a subroutine that can be incorporated into any flow network code with little effort has been successfully implemented in GFSSP as the preliminary runs indicate. The code provides capability of modeling the heat transfer rate to the coolants for parametric analysis in system design.
NASA Technical Reports Server (NTRS)
Kumar, L.
1978-01-01
A computer program is described for calculating the flexibility coefficients as arm design changes are made for the remote manipulator system. The coefficients obtained are required as input for a second program which reduces the number of payload deployment and retrieval system simulation runs required to simulate the various remote manipulator system maneuvers. The second program calculates end effector flexibility and joint flexibility terms for the torque model of each joint for any arbitrary configurations. The listing of both programs is included in the appendix.
Duality quantum algorithm efficiently simulates open quantum systems
Wei, Shi-Jie; Ruan, Dong; Long, Gui-Lu
2016-01-01
Because of inevitable coupling with the environment, nearly all practical quantum systems are open systems, whose evolution is not necessarily unitary. In this paper, we propose a duality quantum algorithm for simulating Hamiltonian evolution of an open quantum system. In contrast to unitary evolution in a usual quantum computer, the evolution operator in a duality quantum computer is a linear combination of unitary operators. In this duality quantum algorithm, the time evolution of the open quantum system is realized by using Kraus operators, which are naturally implemented in a duality quantum computer. This duality quantum algorithm has two distinct advantages compared to existing quantum simulation algorithms with unitary evolution operations. First, the query complexity of the algorithm is O(d^3), in contrast to O(d^4) for the existing unitary simulation algorithm, where d is the dimension of the open quantum system. Second, by using a truncated Taylor series of the evolution operators, this duality quantum algorithm provides an exponential improvement in precision compared with the previous unitary simulation algorithm. PMID:27464855
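"Evolution by Kraus operators" as used above means rho -> sum_k E_k rho E_k^dagger. The sketch below shows that map for the standard single-qubit amplitude-damping channel, worked in plain Python with 2x2 complex matrices; it only illustrates what a Kraus-operator channel is, and is unrelated to the paper's duality-quantum-computing implementation. The damping strength gamma is an arbitrary illustrative value.

```python
# Generic illustration of open-system evolution via Kraus operators:
# the single-qubit amplitude-damping channel. NOT the duality algorithm.
import math

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dagger(A):
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

def apply_channel(rho, kraus_ops):
    """rho -> sum_k E_k rho E_k^dagger (completely positive, trace preserving)."""
    out = [[0j, 0j], [0j, 0j]]
    for E in kraus_ops:
        term = mat_mul(mat_mul(E, rho), dagger(E))
        for i in range(2):
            for j in range(2):
                out[i][j] += term[i][j]
    return out

gamma = 0.3  # damping strength, illustrative choice
E0 = [[1, 0], [0, math.sqrt(1 - gamma)]]   # no-decay Kraus operator
E1 = [[0, math.sqrt(gamma)], [0, 0]]       # decay |1> -> |0>
rho = [[0j, 0j], [0j, 1 + 0j]]             # start in the excited state |1><1|
rho = apply_channel(rho, [E0, E1])
```

Because sum_k E_k^dagger E_k equals the identity, the map preserves the trace of rho, which is the property that makes Kraus operators the natural description of non-unitary open-system dynamics.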
Performance of distributed multiscale simulations
Borgdorff, J.; Ben Belgacem, M.; Bona-Casas, C.; Fazendeiro, L.; Groen, D.; Hoenen, O.; Mizeranschi, A.; Suter, J. L.; Coster, D.; Coveney, P. V.; Dubitzky, W.; Hoekstra, A. G.; Strand, P.; Chopard, B.
2014-01-01
Multiscale simulations model phenomena across natural scales using monolithic or component-based code, running on local or distributed resources. In this work, we investigate the performance of distributed multiscale computing of component-based models, guided by six multiscale applications with different characteristics and from several disciplines. Three modes of distributed multiscale computing are identified: supplementing local dependencies with large-scale resources, load distribution over multiple resources, and load balancing of small- and large-scale resources. We find that the first mode has the apparent benefit of increasing simulation speed, and the second mode can increase simulation speed if local resources are limited. Depending on resource reservation and model coupling topology, the third mode may result in a reduction of resource consumption. PMID:24982258
Mattei, Lorenza; Di Puccio, Francesca; Joyce, Thomas J; Ciulli, Enrico
2015-03-01
In the present study, numerical and experimental wear investigations on reverse total shoulder arthroplasties (RTSAs) were combined in order to estimate specific wear coefficients, currently not available in the literature. A wear model previously developed by the authors for metal-on-plastic hip implants was adapted to RTSAs and applied in two directions: first, to evaluate specific wear coefficients for RTSAs from experimental results, and second, to predict the wear distribution. In both cases, the Archard wear law (AR) and the wear law of UHMWPE (PE) were considered, assuming four different k functions. The results indicated that both wear laws predict higher wear coefficients for RTSA than for hip implants, particularly the AR law, whose k values are more than twice those of the hip. Such differences can significantly affect the results of predictive wear models for RTSA when non-specific wear coefficients are used. Moreover, the wear maps simulated with the two laws are markedly different, although they provide the same wear volume. A higher wear depth (+51%) is obtained with the AR law, located at the dome of the cup, while with the PE law the most worn region is close to the edge. Taking advantage of the linear trend of the experimental volume losses, the wear coefficients obtained with the AR law should remain valid despite the model neglecting the geometry update. Copyright © 2015 Elsevier Ltd. All rights reserved.
3D Fluid-Structure Interaction Simulation of Aortic Valves Using a Unified Continuum ALE FEM Model.
Spühler, Jeannette H; Jansson, Johan; Jansson, Niclas; Hoffman, Johan
2018-01-01
Due to advances in medical imaging, computational fluid dynamics algorithms and high performance computing, computer simulation is developing into an important tool for understanding the relationship between cardiovascular diseases and intraventricular blood flow. The field of cardiac flow simulation is challenging and highly interdisciplinary. We apply a computational framework for automated solutions of partial differential equations using Finite Element Methods where any mathematical description directly can be translated to code. This allows us to develop a cardiac model where specific properties of the heart such as fluid-structure interaction of the aortic valve can be added in a modular way without extensive efforts. In previous work, we simulated the blood flow in the left ventricle of the heart. In this paper, we extend this model by placing prototypes of both a native and a mechanical aortic valve in the outflow region of the left ventricle. Numerical simulation of the blood flow in the vicinity of the valve offers the possibility to improve the treatment of aortic valve diseases such as aortic stenosis (narrowing of the valve opening) or regurgitation (leaking) and to optimize the design of prosthetic heart valves in a controlled and specific way. The fluid-structure interaction and contact problem are formulated in a unified continuum model using the conservation laws for mass and momentum and a phase function. The discretization is based on an Arbitrary Lagrangian-Eulerian space-time finite element method with streamline diffusion stabilization, and it is implemented in the open source software Unicorn which shows near optimal scaling up to thousands of cores. Computational results are presented to demonstrate the capability of our framework.
Development of a Computational Assay for the Estrogen Receptor
2006-07-01
University Ashley Deline, Senior Thesis in chemistry, "Molecular Dynamic Simulations of a Glycoform and its Constituent Parts Related to Rheumatoid Arthritis" ... involves running a long molecular dynamics (MD) simulation of the uncoupled receptor in order to sample the protein's unique conformations. The second ... Receptor binding domain. * Performed several long molecular dynamics simulations (800 ps - 3 ns) on the ligand-ER system using ligands with known
Numerical Relativity, Black Hole Mergers, and Gravitational Waves: Part II
NASA Technical Reports Server (NTRS)
Centrella, Joan
2012-01-01
This series of 3 lectures will present recent developments in numerical relativity, and their applications to simulating black hole mergers and computing the resulting gravitational waveforms. In this second lecture, we focus on simulations of black hole binary mergers. We highlight the instabilities that plagued the codes for many years, the recent breakthroughs that led to the first accurate simulations, and the current state of the art.
A Lumped Computational Model for Sodium Sulfur Battery Analysis
NASA Astrophysics Data System (ADS)
Wu, Fan
Due to the cost of materials and time-consuming testing procedures, development of new batteries is a slow and expensive practice. The purpose of this study is to develop a computational model and assess the capabilities of such a model designed to aid in the design process and control of sodium sulfur batteries. To this end, a transient lumped computational model derived from an integral analysis of the transport of species, energy and charge throughout the battery has been developed. The computation processes are coupled with the use of Faraday's law, and solutions for the species concentrations, electrical potential and current are produced in a time-marching fashion. Properties required for solving the governing equations are calculated and updated as a function of time based on the composition of each control volume. The proposed model is validated against multi-dimensional simulations and experimental results from the literature, and simulation results using the proposed model are presented and analyzed. The computational model and electrochemical model used to solve the equations for the lumped model are compared with similar ones found in the literature. The results obtained from the current model compare favorably with those from experiments and other models.
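The Faraday's-law coupling that drives such lumped models can be sketched very simply: a cell current I consumes the active species at a rate I/(zF) mol/s, and the composition of each control volume is marched in time accordingly. The sketch below is a generic one-volume illustration with invented numbers (initial inventory, current, electron count), not the sodium-sulfur parameters or code from the thesis.

```python
# Minimal sketch of the Faraday's-law species update used in lumped battery
# models: dn/dt = -I/(z*F) under constant discharge current. Illustrative
# numbers only; not sodium-sulfur parameters from the thesis.

F = 96485.0  # C/mol, Faraday constant

def march_species(n0_mol, current_A, z, dt_s, steps):
    """Explicit time marching of species moles under constant current."""
    n = n0_mol
    history = [n]
    for _ in range(steps):
        n -= current_A / (z * F) * dt_s   # Faraday's law: dn/dt = -I/(z*F)
        history.append(n)
    return history

# usage: hypothetical 2-electron reaction, 10 A discharge for 1 hour in 60 s steps
hist = march_species(n0_mol=1.0, current_A=10.0, z=2, dt_s=60.0, steps=60)
```

In a full lumped model this update runs per control volume, and the changing composition then feeds back into the properties used by the energy and charge balances at the next step.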
Modeling weakly-ionized plasmas in magnetic field: A new computationally-efficient approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parent, Bernard, E-mail: parent@pusan.ac.kr; Macheret, Sergey O.; Shneider, Mikhail N.
2015-11-01
Despite its success at simulating accurately both non-neutral and quasi-neutral weakly-ionized plasmas, the drift-diffusion model has been observed to be a particularly stiff set of equations. Recently, it was demonstrated that the stiffness of the system could be relieved by rewriting the equations such that the potential is obtained from Ohm's law rather than Gauss's law, while adding some source terms to the ion transport equation to ensure that Gauss's law is satisfied in non-neutral regions. Although the latter was applicable to multicomponent and multidimensional plasmas, it could not be used for plasmas in which the magnetic field was significant. This paper hence proposes a new computationally-efficient set of electron and ion transport equations that can be used not only for a plasma with multiple types of positive and negative ions, but also for a plasma in a magnetic field. Because the proposed set of equations is obtained from the same physical model as the conventional drift-diffusion equations without introducing new assumptions or simplifications, it results in the same exact solution when the grid is refined sufficiently while being more computationally efficient: not only is the proposed approach considerably less stiff, and hence requires fewer iterations to reach convergence, but it yields a converged solution that exhibits a significantly higher resolution. The combined faster convergence and higher resolution is shown to result in a hundredfold increase in computational efficiency for some typical steady and unsteady plasma problems, including non-neutral cathode and anode sheaths as well as quasi-neutral regions.
NASA Technical Reports Server (NTRS)
Chatterjee, Sharmista
1993-01-01
Our first goal in this project was to perform a systems analysis of a closed-loop Environmental Control and Life Support System (ECLSS). This pertains to the development of a model of an existing real system from which to assess the state or performance of that system. Systems analysis is applied to conceptual models obtained from a system design effort. For our modelling purposes we used a simulator tool called ASPEN (Advanced System for Process Engineering). Our second goal was to evaluate the thermodynamic efficiency of the different components comprising an ECLSS. Use is made of the second law of thermodynamics to determine the amount of irreversibility, or energy loss, of each component. This will aid design scientists in selecting the components generating the least entropy, as our ultimate goal is to keep the entropy generation of the whole system at a minimum.
Advanced Simulation & Computing FY15 Implementation Plan Volume 2, Rev. 0.5
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCoy, Michel; Archer, Bill; Matzen, M. Keith
2014-09-16
The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources that support annual stockpile assessment and certification, study advanced nuclear weapons design and manufacturing processes, analyze accident scenarios and weapons aging, and provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balance of resources, including technical staff, hardware, simulation software, and computer science solutions. As the program approaches the end of its second decade, ASC is intently focused on increasing predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (sufficient resolution, dimensionality, and scientific details), quantify critical margins and uncertainties, and resolve increasingly difficult analyses needed for the SSP. Where possible, the program also enables the use of high-performance simulation and computing tools to address broader national security needs, such as foreign nuclear weapon assessments and counternuclear terrorism.
A control method for bilateral teleoperating systems
NASA Astrophysics Data System (ADS)
Strassberg, Yesayahu
1992-01-01
The thesis focuses on control of bilateral master-slave teleoperators. The bilateral control of teleoperators is studied, and a new scheme that overcomes basic unsolved problems is proposed. A performance measure, based on the multiport modeling method, is introduced in order to evaluate and understand the limitations of earlier published bilateral control laws. Based on this evaluation of the different methods, the objective of the thesis is stated. The proposed control law is then introduced, its ideal performance is demonstrated, and conditions for stability and robustness are derived. It is shown that stability, desired performance, and robustness can be obtained under the assumptions that the deviation of the model from the actual system satisfies certain norm inequalities and that the measurement uncertainties are bounded. The proposed scheme is validated by numerical simulation. The simulated system is based on the configuration of the RAL (Robotics and Automation Laboratory) telerobot. The simulation results show that good tracking performance can be obtained. In order to verify the performance of the proposed scheme when applied to real hardware, an experimental setup of a three-degree-of-freedom master-slave teleoperator (i.e., a three-degree-of-freedom master and a three-degree-of-freedom slave robot) was built. Three basic experiments were conducted to verify the performance of the proposed control scheme. The first experiment verified the master control law and its contribution to the robustness and performance of the entire system. The second experiment demonstrated the actual performance of the system while performing a free-motion teleoperating task. The experimental results show that the control law has good performance and is robust to uncertainties in the models of the master and slave.
Hybrid Method for Power Control Simulation of a Single Fluid Plasma Thruster
NASA Astrophysics Data System (ADS)
Jaisankar, S.; Sheshadri, T. S.
2018-05-01
Propulsive plasma flow through a cylindrical-conical diverging thruster is simulated by a power-controlled hybrid method to obtain the basic flow, thermodynamic, and electromagnetic variables. The simulation is based on a single-fluid model, with the electromagnetics described by the potential Poisson equation, Maxwell's equations, and Ohm's law, and the compressible fluid dynamics by the Navier-Stokes equations in cylindrical form. The proposed method solves the electromagnetics and fluid dynamics separately, both to segregate the two prominent scales for efficient computation and to deliver voltage-controlled rated power. The magnetic transport is solved for steady state, while the fluid dynamics is allowed to evolve in time along with an electromagnetic source, using schemes based on generalized finite difference discretization. The multistep methodology with power control is employed to simulate fully ionized propulsive flow of argon plasma through the thruster. The numerical solution shows convergence of every part of the solver, including grid stability, so that the multistep hybrid method converges for a rated power delivery. Simulation results are in reasonable agreement with the reported physics of plasma flow in the thruster, indicating the potential utility of this hybrid computational framework, especially when the single-fluid approximation of plasma is relevant.
Experimental study and simulation of space charge stimulated discharge
NASA Astrophysics Data System (ADS)
Noskov, M. D.; Malinovski, A. S.; Cooke, C. M.; Wright, K. A.; Schwab, A. J.
2002-11-01
The electrical discharge of volume distributed space charge in poly(methylmethacrylate) (PMMA) has been investigated both experimentally and by computer simulation. The experimental space charge was implanted in dielectric samples by exposure to a monoenergetic electron beam of 3 MeV. Electrical breakdown through the implanted space charge region within the sample was initiated by a local electric field enhancement applied to the sample surface. A stochastic-deterministic dynamic model for electrical discharge was developed and used in a computer simulation of these breakdowns. The model employs stochastic rules to describe the physical growth of the discharge channels, and deterministic laws to describe the electric field, the charge, and energy dynamics within the discharge channels and the dielectric. Simulated spatial-temporal and current characteristics of the expanding discharge structure during physical growth are quantitatively compared with the experimental data to confirm the discharge model. It was found that a single fixed set of physically based dielectric parameter values was adequate to simulate the complete family of experimental space charge discharges in PMMA. It is proposed that such a set of parameters also provides a useful means to quantify the breakdown properties of other dielectrics.
NASA Astrophysics Data System (ADS)
Kwon, Deuk-Chul; Shin, Sung-Sik; Yu, Dong-Hun
2017-10-01
In order to reduce the computing time in simulations of radio frequency (rf) plasma sources, various numerical schemes have been developed. It is well known that the upwind, exponential, and power-law schemes can efficiently overcome the limitation on the grid size for fluid transport simulations of high-density plasma discharges. The semi-implicit method is likewise a well-known numerical scheme for overcoming the limitation on the simulation time step. However, despite remarkable advances in numerical techniques and computing power over the last few decades, efficient multi-dimensional modeling of low-temperature plasma discharges has remained a considerable challenge. In particular, time-periodic steady-state problems such as capacitively coupled plasma discharges and rf sheath dynamics have been difficult to parallelize in time, because values of the plasma parameters from the previous time step are used to calculate new values at each time step. We therefore present a parallelization method for time-periodic steady-state problems based on period-slices. In order to evaluate the efficiency of the developed method, one-dimensional fluid simulations are conducted to describe rf sheath dynamics. The results show that speedup can be achieved by using a multithreading method.
Avalanche statistics from data with low time resolution
DOE Office of Scientific and Technical Information (OSTI.GOV)
LeBlanc, Michael; Nawano, Aya; Wright, Wendelin J.
Extracting avalanche distributions from experimental microplasticity data can be hampered by limited time resolution. We compute the effects of low time resolution on avalanche size distributions and give quantitative criteria for diagnosing and circumventing problems associated with low time resolution. We show that traditional analysis of data obtained at low acquisition rates can lead to avalanche size distributions with incorrect power-law exponents or no power-law scaling at all. Furthermore, we demonstrate that it can lead to apparent data collapses with incorrect power-law and cutoff exponents. We propose new methods to analyze low-resolution stress-time series that can recover the size distribution of the underlying avalanches even when the resolution is so low that naive analysis methods give incorrect results. We test these methods on both downsampled simulation data from a simple model and downsampled bulk metallic glass compression data and find that the methods recover the correct critical exponents.
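The resolution pitfall described in this abstract can be illustrated with a toy example (this is an invented synthetic signal, not the authors' model or data; gap and duration parameters are arbitrary): avalanche sizes are extracted from a slip-velocity series at full resolution and again after block-averaging, and the naive analysis of the coarse series merges avalanches whose quiescent gaps are shorter than one averaging block.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic high-resolution slip-velocity signal: bursts (avalanches)
# separated by quiescent gaps.
v = np.zeros(100000)
t = 0
while True:
    t += int(rng.integers(50, 200))      # quiescent gap length
    dur = int(rng.integers(5, 50))       # avalanche duration
    if t + dur >= len(v):
        break
    v[t:t + dur] = rng.random(dur)       # burst amplitudes
    t += dur

def avalanche_sizes(signal):
    """Avalanche sizes = integrals of contiguous positive runs."""
    sizes, s = [], 0.0
    for val in signal:
        if val > 0:
            s += val
        elif s > 0:
            sizes.append(s)
            s = 0.0
    if s > 0:
        sizes.append(s)
    return sizes

fine = avalanche_sizes(v)

# "Low time resolution": average the signal over blocks of k samples.
k = 64
coarse_signal = v[:len(v) // k * k].reshape(-1, k).mean(axis=1)
coarse = avalanche_sizes(coarse_signal)
# Any gap shorter than one block cannot contain a fully quiescent coarse
# sample, so neighboring avalanches merge: the naive coarse analysis
# undercounts events and inflates sizes, distorting the size distribution.
```

Total avalanche "mass" is conserved by the averaging (up to the truncated tail), which is why merged events come out larger rather than lost.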
Realistic radiative MHD simulation of a solar flare
NASA Astrophysics Data System (ADS)
Rempel, Matthias D.; Cheung, Mark; Chintzoglou, Georgios; Chen, Feng; Testa, Paola; Martinez-Sykora, Juan; Sainz Dalda, Alberto; DeRosa, Marc L.; Viktorovna Malanushenko, Anna; Hansteen, Viggo H.; De Pontieu, Bart; Carlsson, Mats; Gudiksen, Boris; McIntosh, Scott W.
2017-08-01
We present a recently developed version of the MURaM radiative MHD code that includes coronal physics in terms of optically thin radiative loss and field-aligned heat conduction. The code employs the "Boris correction" (semi-relativistic MHD with a reduced speed of light) and a hyperbolic treatment of heat conduction, which allow for efficient simulations of the photosphere/corona system by avoiding the severe time-step constraints arising from Alfvén wave propagation and heat conduction. We demonstrate that this approach can be used even in dynamic phases such as a flare. We consider a setup in which a flare is triggered by flux emergence into a pre-existing bipolar active region. After the coronal energy release, efficient transport of energy along field lines leads to the formation of flare ribbons within seconds. In the flare ribbons we find downflows for temperatures lower than ~5 MK and upflows at higher temperatures. The resulting soft X-ray emission shows a fast rise and slow decay, reaching a peak corresponding to a mid C-class flare. The post-reconnection energy release in the corona leads to average particle energies reaching 50 keV (500 MK under the assumption of a thermal plasma). We show that hard X-ray emission from the corona computed under the assumption of thermal bremsstrahlung can produce a power-law spectrum due to the multi-thermal nature of the plasma. The electron energy flux into the flare ribbons (classic heat conduction with free-streaming limit) is highly inhomogeneous and reaches peak values of about 3×10^11 erg/cm^2/s in a small fraction of the ribbons, indicating regions that could potentially produce hard X-ray footpoint sources. We demonstrate that these findings are robust by comparing simulations computed with different values of the saturation heat flux as well as the "reduced speed of light".
Development of MCNPX-ESUT computer code for simulation of neutron/gamma pulse height distribution
NASA Astrophysics Data System (ADS)
Abolfazl Hosseini, Seyed; Vosoughi, Naser; Zangian, Mehdi
2015-05-01
In this paper, the development of the MCNPX-ESUT (MCNPX-Energy Engineering of Sharif University of Technology) computer code for simulation of neutron/gamma pulse height distributions is reported. Since liquid organic scintillators like NE-213 are well suited and routinely used for spectrometry in mixed neutron/gamma fields, this type of detector was selected for simulation in the present study. The proposed simulation algorithm includes four main steps. The first step is the modeling of neutron/gamma particle transport and the particles' interactions with the materials in the environment and detector volume. In the second step, the number of scintillation photons due to charged particles such as electrons, alphas, protons, and carbon nuclei in the scintillator material is calculated. In the third step, the transport of scintillation photons in the scintillator and light guide is simulated. Finally, the resolution corresponding to the experiment is applied in the last step of the simulation. Unlike similar computer codes such as SCINFUL, NRESP7, and PHRESP, the developed computer code is applicable to both neutron and gamma sources. Hence, the discrimination of neutrons and gammas in mixed fields may be performed using the MCNPX-ESUT computer code. The main feature of the MCNPX-ESUT computer code is that the neutron/gamma pulse height simulation may be performed without any post-processing. In the present study, the pulse height distributions due to a monoenergetic neutron/gamma source in an NE-213 detector are simulated using the MCNPX-ESUT computer code. The simulated neutron pulse height distributions are validated through comparison with experimental data (Gohil et al., Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 664 (2012) 304-309) and with results obtained from similar computer codes such as SCINFUL, NRESP7, and Geant4.
The simulated gamma pulse height distribution for a 137Cs source is also compared with the experimental data.
Welter, Michael; Rieger, Heiko
2016-01-01
Tumor vasculature, the blood vessel network supplying a growing tumor with nutrients such as oxygen or glucose, is in many respects different from the hierarchically organized arterio-venous blood vessel network in normal tissues. Angiogenesis (the formation of new blood vessels), vessel cooption (the integration of existing blood vessels into the tumor vasculature), and vessel regression remodel the healthy vascular network into a tumor-specific vasculature. Integrative models, based on detailed experimental data and physical laws, implement, in silico, the complex interplay of molecular pathways, cell proliferation, migration, and death, tissue microenvironment, mechanical and hydrodynamic forces, and the fine structure of the host tissue vasculature. With the help of computer simulations, high-precision information about blood flow patterns, interstitial fluid flow, drug distribution, and oxygen and nutrient distribution can be obtained, and a plethora of therapeutic protocols can be tested before clinical trials. This chapter provides an overview of the current status of computer simulations of vascular remodeling during tumor growth, including interstitial fluid flow, drug delivery, and oxygen supply within the tumor. The model predictions are compared with experimental and clinical data, and a number of longstanding physiological paradigms about tumor vasculature and intratumoral solute transport are critically scrutinized.
Halloran, Jason P; Ackermann, Marko; Erdemir, Ahmet; van den Bogert, Antonie J
2010-10-19
Current computational methods for simulating locomotion have primarily used muscle-driven multibody dynamics, in which neuromuscular control is optimized. Such simulations generally represent joints and soft tissue as simple kinematic or elastic elements for computational efficiency. These assumptions limit application in studies such as ligament injury or osteoarthritis, where local tissue loading must be predicted. Conversely, tissue can be simulated using the finite element method with assumed or measured boundary conditions, but this does not represent the effects of whole body dynamics and neuromuscular control. Coupling the two domains would overcome these limitations and allow prediction of movement strategies guided by tissue stresses. Here we demonstrate this concept in a gait simulation where a musculoskeletal model is coupled to a finite element representation of the foot. Predictive simulations incorporated peak plantar tissue deformation into the objective of the movement optimization, as well as terms to track normative gait data and minimize fatigue. Two optimizations were performed, first without the strain minimization term and second with the term. Convergence to realistic gait patterns was achieved with the second optimization realizing a 44% reduction in peak tissue strain energy density. The study demonstrated that it is possible to alter computationally predicted neuromuscular control to minimize tissue strain while including desired kinematic and muscular behavior. Future work should include experimental validation before application of the methodology to patient care. Copyright © 2010 Elsevier Ltd. All rights reserved.
Numerical modeling of landslides and generated seismic waves: The Bingham Canyon Mine landslides
NASA Astrophysics Data System (ADS)
Miallot, H.; Mangeney, A.; Capdeville, Y.; Hibert, C.
2016-12-01
Landslides are important natural hazards and key erosion processes. They create long-period surface waves that can be recorded by regional and global seismic networks. The seismic signals are generated by the acceleration and deceleration of the mass sliding over the topography. They constitute a unique and powerful tool to detect, characterize, and quantify landslide dynamics. We investigate here the processes at work during the two massive landslides that struck the Bingham Canyon Mine on 10 April 2013. We carry out a combined analysis of the generated seismic signals and the landslide processes computed with 3D modeling on a complex topography. Forces computed by broadband seismic waveform inversion are used to constrain the study, in particular the force source and the bulk dynamics. The source time function is obtained with a 3D model (Shaltop) in which rheological parameters can be adjusted. We first investigate the influence of the initial shape of the sliding mass, which strongly affects the whole landslide dynamics. We also find that the initial shape of the source mass of the first landslide constrains the source mass of the second landslide reasonably well. We then investigate the effect of a rheological parameter, the friction angle, which strongly influences the resulting computed seismic source function. We test several friction laws, such as the Coulomb friction law and a velocity-weakening friction law. Our results show that how well the force waveform fits the observed data depends strongly on these choices.
ERIC Educational Resources Information Center
Ajredini, Fadil; Izairi, Neset; Zajkov, Oliver
2014-01-01
This research investigates the influence of computer simulations (virtual experiments) on one hand and real experiments on the other hand on the conceptual understanding of electrical charging. The investigated sample consists of students in the second year (10th grade) of three gymnasiums in Macedonia. There were two experimental groups and one…
Photonic simulation of entanglement growth and engineering after a spin chain quench.
Pitsios, Ioannis; Banchi, Leonardo; Rab, Adil S; Bentivegna, Marco; Caprara, Debora; Crespi, Andrea; Spagnolo, Nicolò; Bose, Sougato; Mataloni, Paolo; Osellame, Roberto; Sciarrino, Fabio
2017-11-17
The time evolution of quantum many-body systems is one of the most important processes for benchmarking quantum simulators. The most curious feature of such dynamics is the growth of quantum entanglement to an amount proportional to the system size (volume law) even when interactions are local. This phenomenon has great ramifications for fundamental aspects, while its optimisation clearly has an impact on technology (e.g., for on-chip quantum networking). Here we use an integrated photonic chip with a circuit-based approach to simulate the dynamics of a spin chain and maximise the entanglement generation. The resulting entanglement is certified by constructing a second chip, which measures the entanglement between multiple distant pairs of simulated spins, as well as the block entanglement entropy. This is the first photonic simulation and optimisation of the extensive growth of entanglement in a spin chain, and opens up the use of photonic circuits for optimising quantum devices.
NASA Astrophysics Data System (ADS)
Ishii, Ayako; Ohnishi, Naofumi; Nagakura, Hiroki; Ito, Hirotaka; Yamada, Shoichi
2017-11-01
We developed a three-dimensional radiative transfer code for an ultra-relativistic background flow-field by using the Monte Carlo (MC) method in the context of gamma-ray burst (GRB) emission. For obtaining reliable simulation results in the coupled computation of MC radiation transport with relativistic hydrodynamics, which can reproduce GRB emission, we validated the radiative transfer computation in the ultra-relativistic regime and assessed the appropriate simulation conditions. The radiative transfer code was validated through two test calculations: (1) computing in different inertial frames and (2) computing in flow-fields with discontinuous and smeared shock fronts. The simulation results for the angular distribution and spectrum were compared among three different inertial frames and were in good agreement with each other. If the time duration for updating the flow-field was sufficiently small to resolve a photon mean free path into ten steps, the results were fully converged. The spectrum computed in the flow-field with a discontinuous shock front obeyed a power law in frequency with a positive index in the range from 1 to 10 MeV. The number of photons on the high-energy side decreased with the smeared shock front because the photons were less scattered immediately behind the shock wave due to the small electron number density. A large optical depth near the shock front is needed to obtain high-energy photons through bulk Compton scattering. Even the one-dimensional structure of the shock wave can affect the results of the radiation transport computation. Although we examined the effect of the shock structure on the emitted spectrum with a large number of cells, it is hard to employ so many computational cells per dimension in multi-dimensional simulations. Therefore, a further investigation with a smaller number of cells is required for obtaining realistic high-energy photons in multi-dimensional computations.
The Next Frontier in Computing
Sarrao, John
2018-06-13
Exascale computing refers to computing systems capable of at least one exaflop, i.e., a quintillion (10^18) calculations per second. That is roughly 50 times faster than the most powerful supercomputers in use today and represents a thousand-fold increase over the first petascale computer, which came into operation in 2008. How we use these large-scale simulation resources is the key to solving some of today's most pressing problems, including clean energy production, nuclear reactor lifetime extension, and nuclear stockpile aging.
Aiding Design of Wave Energy Converters via Computational Simulations
NASA Astrophysics Data System (ADS)
Jebeli Aqdam, Hejar; Ahmadi, Babak; Raessi, Mehdi; Tootkaboni, Mazdak
2015-11-01
With the increasing interest in renewable energy sources, wave energy converters will continue to gain attention as a viable alternative to current electricity production methods. It is therefore crucial to develop computational tools for the design and analysis of wave energy converters. A successful design requires balance between design performance and cost. Here an analytical solution is used for the approximate analysis of interactions between a flap-type wave energy converter (WEC) and waves. The method is verified against other flow solvers and experimental test cases. The model is then used in conjunction with a powerful heuristic optimization engine, Charged System Search (CSS), to explore the WEC design space. CSS is inspired by the behavior of charged particles. It searches the design space by treating candidate solutions as charged particles and moving them according to Coulomb's law of electrostatics and Newton's laws of motion to find the global optimum. Finally, the impacts of changes in different design parameters on the power take-off of the superior WEC designs are investigated. National Science Foundation, CBET-1236462.
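A minimal sketch of the Charged System Search idea follows, applied to a generic test function rather than the WEC model (the charge rule, force law, and all parameter values below are simplified illustrations, not those of the cited study): candidate solutions carry charges proportional to their fitness, better solutions attract worse ones with a Coulomb-like force, and positions are advanced with a Newtonian velocity update.

```python
import numpy as np

def charged_system_search(f, bounds, n=20, iters=200, seed=0):
    """Simplified CSS sketch: charges from fitness, Coulomb-like attraction
    toward better particles, Newtonian position update, best-so-far memory."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    X = rng.uniform(lo, hi, (n, len(bounds)))          # charged particles
    V = np.zeros_like(X)
    fit = np.array([f(x) for x in X])
    gbest_x, gbest = X[fit.argmin()].copy(), fit.min()
    a = 1.0                                            # illustrative charge radius
    for _ in range(iters):
        # charge ~1 for the best particle, ~0 for the worst
        q = (fit.max() - fit) / (fit.max() - fit.min() + 1e-12)
        F = np.zeros_like(X)
        for i in range(n):
            for j in range(n):
                if fit[j] < fit[i]:                    # attraction toward better only
                    rij = X[j] - X[i]
                    r = np.linalg.norm(rij) + 1e-12
                    # linear force inside radius a, inverse-square outside
                    mag = q[j] * (r / a**3 if r < a else 1.0 / r**2)
                    F[i] += mag * rij / r
        V = 0.5 * V + rng.random(X.shape) * F          # damped Newtonian update
        X = np.clip(X + V, lo, hi)
        fit = np.array([f(x) for x in X])
        if fit.min() < gbest:
            gbest_x, gbest = X[fit.argmin()].copy(), fit.min()
    return gbest_x, gbest

best_x, best_val = charged_system_search(lambda p: float((p ** 2).sum()),
                                         [(-5, 5), (-5, 5)])
```

The best-so-far memory mirrors the "charged memory" of full CSS and guarantees the returned value never regresses below the initial population's best.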
An Informational-Theoretical Formulation of the Second Law of Thermodynamics
ERIC Educational Resources Information Center
Ben-Naim, Arieh
2009-01-01
This paper presents a formulation of the second law of thermodynamics couched in terms of Shannon's measure of information. This formulation has an advantage over other formulations of the second law. First, it shows explicitly what is the thing that changes in a spontaneous process in an isolated system, which is traditionally referred to as the…
Self-evaluation of decision-making: A general Bayesian framework for metacognitive computation.
Fleming, Stephen M; Daw, Nathaniel D
2017-01-01
People are often aware of their mistakes, and report levels of confidence in their choices that correlate with objective performance. These metacognitive assessments of decision quality are important for the guidance of behavior, particularly when external feedback is absent or sporadic. However, a computational framework that accounts for both confidence and error detection is lacking. In addition, accounts of dissociations between performance and metacognition have often relied on ad hoc assumptions, precluding a unified account of intact and impaired self-evaluation. Here we present a general Bayesian framework in which self-evaluation is cast as a "second-order" inference on a coupled but distinct decision system, computationally equivalent to inferring the performance of another actor. Second-order computation may ensue whenever there is a separation between internal states supporting decisions and confidence estimates over space and/or time. We contrast second-order computation against simpler first-order models in which the same internal state supports both decisions and confidence estimates. Through simulations we show that second-order computation provides a unified account of different types of self-evaluation often considered in separate literatures, such as confidence and error detection, and generates novel predictions about the contribution of one's own actions to metacognitive judgments. In addition, the model provides insight into why subjects' metacognition may sometimes be better or worse than task performance. We suggest that second-order computation may underpin self-evaluative judgments across a range of domains. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
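The first-order versus second-order contrast can be made concrete with a small simulation (a generic Gaussian signal-detection setup invented for illustration, not the authors' exact model): an observer whose confidence is computed from the same internal sample that drove the choice can never rate its confidence below 0.5, whereas a second-order observer reading a distinct sample of the same stimulus can, which is what permits error detection.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 20000, 1.0
s = rng.choice([-d, d], size=n)          # true stimulus on each trial
x = s + rng.normal(size=n)               # internal sample driving the decision
c = np.where(x > 0, 1.0, -1.0)           # binary choice
correct = c == np.sign(s)

# First-order: confidence from the same sample x.
# Bayesian confidence in the chosen option is sigmoid(2*d*|x|) >= 0.5,
# so this observer can never flag its own errors.
conf_first = 1.0 / (1.0 + np.exp(-2.0 * d * np.abs(x)))

# Second-order: confidence from a distinct sample y of the same stimulus,
# formally like inferring the performance of another actor.
y = s + rng.normal(size=n)
conf_second = 1.0 / (1.0 + np.exp(-2.0 * d * y * c))
```

Because y can contradict the committed choice, conf_second drops below 0.5 on some trials (an error signal), and it is higher on correct than on error trials, reproducing the confidence-accuracy correlation the abstract describes.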
PROTEUS two-dimensional Navier-Stokes computer code, version 1.0. Volume 2: User's guide
NASA Technical Reports Server (NTRS)
Towne, Charles E.; Schwab, John R.; Benson, Thomas J.; Suresh, Ambady
1990-01-01
A new computer code was developed to solve the two-dimensional or axisymmetric, Reynolds averaged, unsteady compressible Navier-Stokes equations in strong conservation law form. The thin-layer or Euler equations may also be solved. Turbulence is modeled using an algebraic eddy viscosity model. The objective was to develop a code for aerospace applications that is easy to use and easy to modify. Code readability, modularity, and documentation were emphasized. The equations are written in nonorthogonal body-fitted coordinates, and solved by marching in time using a fully-coupled alternating direction-implicit procedure with generalized first- or second-order time differencing. All terms are linearized using second-order Taylor series. The boundary conditions are treated implicitly, and may be steady, unsteady, or spatially periodic. Simple Cartesian or polar grids may be generated internally by the program. More complex geometries require an externally generated computational coordinate system. The documentation is divided into three volumes. Volume 2 is the User's Guide, and describes the program's general features, the input and output, the procedure for setting up initial conditions, the computer resource requirements, the diagnostic messages that may be generated, the job control language used to run the program, and several test cases.
One-Dimensional Model for Mud Flows.
1985-10-01
…law relation between the Chezy coefficient and the flow Reynolds number. Jeyapalan et al. [2], in their analysis of mine tailings dam failures, … RESULTS: The model is compared with several dambreak experiments performed by Jeyapalan et al. [3]. In these … 0.34 seconds per computational node. [Figure residue removed: plot of experimental results (Jeyapalan et al. [3]) versus numerical results over time for Tests 2, 6, and 7.]
NASA Astrophysics Data System (ADS)
Hu, Dawei; Li, Leyuan; Liu, Hui; Zhang, Houkai; Fu, Yuming; Sun, Yi; Li, Liang
It is necessary to process inedible plant biomass into a soil-like substrate (SLS) by bio-composting to realize sustainable utilization of biological resources. Although similar to natural soil in structure and function, SLS often has an uneven water distribution that adversely affects plant growth, due to unsatisfactory porosity, permeability, and gravity distribution. In this article, an SLS plant-growing facility (SLS-PGF) was therefore rotated for cultivating lettuce; the Brinkman equations coupled with laminar flow equations were taken as the governing equations, and boundary conditions were specified from the actual operating characteristics of the rotating SLS-PGF. The optimal open-loop control law for the angular and inflow velocities was determined from the lettuce water requirement and CFD simulations. The experimental results clearly showed that water content was more uniformly distributed in SLS under the action of centrifugal and Coriolis forces, and that the rotating SLS-PGF with the optimal open-loop control law could meet the lettuce water requirement at every growth stage and achieve precise irrigation.
Pair Potential That Reproduces the Shape of Isochrones in Molecular Liquids.
Veldhorst, Arno A; Schrøder, Thomas B; Dyre, Jeppe C
2016-08-18
Many liquids have curves (isomorphs) in their phase diagrams along which structure, dynamics, and some thermodynamic quantities are invariant in reduced units. A substantial part of their phase diagrams is thus effectively one dimensional. The shapes of these isomorphs are described by a material-dependent function of density, h(ρ), which for real liquids is well approximated by a power law, ρ^γ. However, in simulations, a power law is not adequate when density changes are large; typical models, such as Lennard-Jones liquids, show that γ(ρ) ≡ d ln h(ρ)/d ln ρ is a decreasing function of density. This article presents results from computer simulations using a new pair potential that diverges at a nonzero distance and can be tuned to give a more realistic shape of γ(ρ). Our results indicate that the finite size of molecules is an important factor to take into account when modeling liquids over a large density range.
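The decreasing γ(ρ) the abstract describes can be verified numerically for the standard 12-6 Lennard-Jones system, assuming its known analytical density-scaling function h(ρ̃) = (γ₀/2 − 1)ρ̃⁴ − (γ₀/2 − 2)ρ̃² (this is the baseline LJ form, not the new pair potential of this paper; γ₀ = 6 here is an illustrative choice):

```python
import numpy as np

def h(x, g0=6.0):
    """Assumed LJ 12-6 density-scaling function; x = rho/rho_ref and
    g0 = gamma at x = 1. The x^4 term comes from the r^-12 repulsion,
    the x^2 term from the r^-6 attraction."""
    return (g0 / 2 - 1) * x ** 4 - (g0 / 2 - 2) * x ** 2

def gamma(x, g0=6.0, eps=1e-6):
    """gamma(rho) = d ln h / d ln rho, by central difference in ln(rho)."""
    return (np.log(h(x * np.exp(eps), g0))
            - np.log(h(x * np.exp(-eps), g0))) / (2 * eps)
```

With g0 = 6, gamma(1) = 6 by construction, and gamma(x) decays monotonically toward the high-density limit of 4 set by the repulsive r^-12 term, illustrating why a single power law ρ^γ fails over large density ranges.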
Dynamics and Control of Flexible Space Vehicles
NASA Technical Reports Server (NTRS)
Likins, P. W.
1970-01-01
The purpose of this report is twofold: (1) to survey the established analytic procedures for the simulation of controlled flexible space vehicles, and (2) to develop in detail methods that employ a combination of discrete and distributed ("modal") coordinates, i.e., the hybrid-coordinate methods. Analytic procedures are described in three categories: (1) discrete-coordinate methods, (2) hybrid-coordinate methods, and (3) vehicle normal-coordinate methods. Each of these approaches is described and analyzed for its advantages and disadvantages, and each is found to have an area of applicability. The hybrid-coordinate method combines the efficiency of the vehicle normal-coordinate method with the versatility of the discrete-coordinate method, and appears to have the widest range of practical application. The results in this report have practical utility in two areas: (1) complex digital computer simulation of flexible space vehicles of arbitrary configuration subject to realistic control laws, and (2) preliminary control system design based on transfer functions for linearized models of dynamics and control laws.
Stoichiometric network theory for nonequilibrium biochemical systems.
Qian, Hong; Beard, Daniel A; Liang, Shou-dan
2003-02-01
We introduce the basic concepts and develop a theory for nonequilibrium steady-state biochemical systems applicable to analyzing large-scale complex isothermal reaction networks. In terms of the stoichiometric matrix, we demonstrate both Kirchhoff's flux law, Σ_l J_l = 0 over a biochemical species, and the potential law, Σ_l μ_l = 0 around a reaction loop. They reflect mass and energy conservation, respectively. For each reaction, the steady-state flux J can be decomposed into forward and backward one-way fluxes, J = J+ − J−, with chemical potential difference Δμ = RT ln(J−/J+). The product −JΔμ gives the isothermal heat dissipation rate, which is necessarily non-negative according to the second law of thermodynamics. The stoichiometric network theory (SNT) embodies all of the relevant fundamental physics. Knowing J and Δμ of a biochemical reaction, a conductance can be computed which directly reflects the level of gene expression for the particular enzyme. For sufficiently small flux, a linear relationship between J and Δμ can be established as the linear flux-force relation in irreversible thermodynamics, analogous to Ohm's law in electrical circuits.
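The one-way flux decomposition described in this abstract is simple to compute for a single reaction; a minimal sketch (function name and units are our own, not from the paper):

```python
import math

R = 8.314  # gas constant, J/(mol K)

def flux_decomposition(J_plus, J_minus, T=298.15):
    # Net steady-state flux and chemical potential difference for one
    # reaction, per the relations quoted in the abstract:
    #   J = J+ - J-,   delta_mu = R*T*ln(J-/J+)
    # The isothermal heat dissipation rate -J*delta_mu is non-negative,
    # consistent with the second law: J and delta_mu always have
    # opposite signs.
    J = J_plus - J_minus
    delta_mu = R * T * math.log(J_minus / J_plus)
    dissipation = -J * delta_mu
    return J, delta_mu, dissipation
```

Whichever direction dominates, the returned dissipation is positive, vanishing only at equilibrium (J+ = J−).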
A review of second law techniques applicable to basic thermal science research
NASA Astrophysics Data System (ADS)
Drost, M. Kevin; Zamorski, Joseph R.
1988-11-01
This paper reports the results of a review of second law analysis techniques which can contribute to basic research in the thermal sciences. The review demonstrated that second law analysis has a role in basic thermal science research. Unlike traditional techniques, second law analysis accurately identifies the sources and location of thermodynamic losses. This allows the development of innovative solutions to thermal science problems by directing research to the key technical issues. Two classes of second law techniques were identified as being particularly useful. First, system and component investigations can provide information of the source and nature of irreversibilities on a macroscopic scale. This information will help to identify new research topics and will support the evaluation of current research efforts. Second, the differential approach can provide information on the causes and spatial and temporal distribution of local irreversibilities. This information enhances the understanding of fluid mechanics, thermodynamics, and heat and mass transfer, and may suggest innovative methods for reducing irreversibilities.
Quantum power source: putting in order of a Brownian motion without Maxwell's demon
NASA Astrophysics Data System (ADS)
Aristov, Vitaly V.; Nikulov, A. V.
2003-07-01
The problem of possible violation of the second law of thermodynamics is discussed. It is noted that the task of the well-known challenge to the second law called Maxwell's demon is to put in order a chaotic perpetual motion, and that if any ordered Brownian motion exists, then the second law can be broken without this hypothetical intelligent entity. The postulate of absolute randomness of any Brownian motion saved the second law at the beginning of the 20th century, when Brownian motion was realized to be a perpetual motion. This postulate can be proven within the limits of classical mechanics but is not correct according to quantum mechanics. Moreover, some well-known quantum phenomena, such as the persistent current at non-zero resistance, are experimental evidence of non-chaotic Brownian motion with non-zero average velocity. An experimental observation of a dc quantum power source is interpreted as evidence of violation of the second law.
Kinetic Monte Carlo Method for Rule-based Modeling of Biochemical Networks
Yang, Jin; Monine, Michael I.; Faeder, James R.; Hlavacek, William S.
2009-01-01
We present a kinetic Monte Carlo method for simulating chemical transformations specified by reaction rules, which can be viewed as generators of chemical reactions, or equivalently, definitions of reaction classes. A rule identifies the molecular components involved in a transformation, how these components change, conditions that affect whether a transformation occurs, and a rate law. The computational cost of the method, unlike conventional simulation approaches, is independent of the number of possible reactions, which need not be specified in advance or explicitly generated in a simulation. To demonstrate the method, we apply it to study the kinetics of multivalent ligand-receptor interactions. We expect the method will be useful for studying cellular signaling systems and other physical systems involving aggregation phenomena. PMID:18851068
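For context on the stochastic framework this abstract builds on, a minimal Gillespie direct-method loop is sketched below. Note this is the conventional approach, in which reactions are enumerated in advance; the paper's network-free method instead samples reaction rules, so its cost is independent of the number of possible reactions. All names and rate values here are illustrative:

```python
import math
import random

def gillespie(rates, stoich, x0, t_end, seed=0):
    # Minimal Gillespie direct-method simulation. rates[i](x) returns the
    # propensity of reaction i in state x; stoich[i] is its state-change
    # vector. Unlike the rule-based method of the paper, the reaction list
    # here must be specified explicitly up front.
    rng = random.Random(seed)
    t, x = 0.0, list(x0)
    while t < t_end:
        a = [r(x) for r in rates]
        a0 = sum(a)
        if a0 == 0:
            break                                   # no reaction can fire
        t += -math.log(1.0 - rng.random()) / a0     # exponential waiting time
        u, acc = rng.random() * a0, 0.0
        for i, ai in enumerate(a):                  # pick reaction i with
            acc += ai                               # probability a[i]/a0
            if u < acc:
                break
        x = [xj + dj for xj, dj in zip(x, stoich[i])]
    return x

# Irreversible ligand-receptor binding L + R -> LR, rate constant k = 0.01
final = gillespie(
    rates=[lambda x: 0.01 * x[0] * x[1]],
    stoich=[(-1, -1, +1)],
    x0=(100, 100, 0),
    t_end=1e9,
)
```

With equal initial counts and a single irreversible reaction, the simulation runs to completion: all 100 ligand-receptor pairs bind.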
Aeroservoelastic and Flight Dynamics Analysis Using Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Arena, Andrew S., Jr.
1999-01-01
This document is based in large part on the Master's thesis of Cole Stephens. It encompasses a variety of technical and practical issues involved in using the STARS codes for aeroservoelastic analysis of vehicles. The document covers in great detail a number of technical issues and the step-by-step details involved in the simulation of a system in which aerodynamics, structures, and controls are tightly coupled. Comparisons are made to a benchmark experimental program conducted at NASA Langley. One significant advantage of the methodology detailed is that, as a result of the technique used to accelerate the CFD-based simulation, a systems model is produced which is very useful for developing the control law strategy and for subsequent high-speed simulations.
Randles, Amanda; Frakes, David H; Leopold, Jane A
2017-11-01
Noninvasive engineering models are now being used for diagnosing and planning the treatment of cardiovascular disease. Techniques in computational modeling and additive manufacturing have matured concurrently, and results from simulations can inform and enable the design and optimization of therapeutic devices and treatment strategies. The emerging synergy between large-scale simulations and 3D printing is having a two-fold benefit: first, 3D printing can be used to validate the complex simulations, and second, the flow models can be used to improve treatment planning for cardiovascular disease. In this review, we summarize and discuss recent methods and findings for leveraging advances in both additive manufacturing and patient-specific computational modeling, with an emphasis on new directions in these fields and remaining open questions. Copyright © 2017 Elsevier Ltd. All rights reserved.
Fast neural net simulation with a DSP processor array.
Muller, U A; Gunzinger, A; Guggenbuhl, W
1995-01-01
This paper describes the implementation of a fast neural net simulator on a novel parallel distributed-memory computer. A 60-processor system, named MUSIC (multiprocessor system with intelligent communication), is operational and runs the backpropagation algorithm at a speed of 330 million connection updates per second (continuous weight update) using 32-bit floating-point precision. This is equal to 1.4 Gflops of sustained performance. The complete system, with 3.8 Gflops peak performance, consumes less than 800 W of electrical power and fits into a 19-inch rack. While reaching the speed of modern supercomputers, MUSIC can still be used as a personal desktop computer at a researcher's own disposal. In neural net simulation, this gives a single user computing performance that was previously unthinkable. The system's real-time interfaces make it especially useful for embedded applications.
NASA Astrophysics Data System (ADS)
Duru, K.; Dunham, E. M.; Bydlon, S. A.; Radhakrishnan, H.
2014-12-01
Dynamic propagation of shear ruptures on a frictional interface is a useful idealization of a natural earthquake. The conditions relating slip rate and fault shear strength are often expressed as nonlinear friction laws. The corresponding initial boundary value problems are both numerically and computationally challenging. In addition, seismic waves generated by earthquake ruptures must be propagated, far away from fault zones, to seismic stations and remote areas. Therefore, reliable and efficient numerical simulations require both provably stable and high order accurate numerical methods. We present a numerical method for: (a) enforcing nonlinear friction laws, in a consistent and provably stable manner, suitable for efficient explicit time integration; (b) dynamic propagation of earthquake ruptures along rough faults; (c) accurate propagation of seismic waves in heterogeneous media with free surface topography. We solve the first order form of the 3D elastic wave equation on a boundary-conforming curvilinear mesh, in terms of particle velocities and stresses that are collocated in space and time, using summation-by-parts finite differences in space. The finite difference stencils are 6th order accurate in the interior and 3rd order accurate close to the boundaries. Boundary and interface conditions are imposed weakly using penalties. By deriving semi-discrete energy estimates analogous to the continuous energy estimates we prove numerical stability. Time stepping is performed with a 4th order accurate explicit low storage Runge-Kutta scheme. We have performed extensive numerical experiments using a slip-weakening friction law on non-planar faults, including recent SCEC benchmark problems. We also show simulations on fractal faults revealing the complexity of rupture dynamics on rough faults. We are presently extending our method to rate-and-state friction laws and off-fault plasticity.
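The slip-weakening friction law mentioned in this abstract has a simple closed form; a sketch with illustrative parameter values (loosely following SCEC TPV-style benchmark setups, not taken from this work):

```python
def slip_weakening_strength(sigma_n, slip, mu_s=0.677, mu_d=0.525, D_c=0.40):
    # Linear slip-weakening friction: the fault strength drops linearly
    # from the static level mu_s*sigma_n to the dynamic level mu_d*sigma_n
    # over a critical slip distance D_c, then stays at the dynamic level.
    # sigma_n is the effective normal stress (Pa), slip the accumulated
    # slip (m). Parameter values are illustrative assumptions.
    f = mu_s - (mu_s - mu_d) * min(slip, D_c) / D_c
    return f * sigma_n
```

The stress drop (static minus dynamic strength) over the weakening distance D_c is what drives the rupture in these simulations.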
How I Teach the Second Law of Thermodynamics
ERIC Educational Resources Information Center
Kincanon, Eric
2013-01-01
An alternative method of presenting the second law of thermodynamics in introductory courses is presented. The emphasis is on statistical approaches as developed by Atkins. This has the benefit of stressing the statistical nature of the law.
Two ways to model voltage-current curves of adiabatic MgB2 wires
NASA Astrophysics Data System (ADS)
Stenvall, A.; Korpela, A.; Lehtonen, J.; Mikkonen, R.
2007-08-01
Usually overheating of the sample destroys attempts to measure voltage-current curves of conduction cooled high critical current MgB2 wires at low temperatures. Typically, when a quench occurs a wire burns out due to massive heat generation and negligible cooling. It has also been suggested that high n values measured with MgB2 wires and coils are not an intrinsic property of the material but arise due to heating during the voltage-current measurement. In addition, quite recently low n values for MgB2 wires have been reported. In order to find out the real properties of MgB2 an efficient computational model is required to simulate the voltage-current measurement. In this paper we go back to basics and consider two models to couple electromagnetic and thermal phenomena. In the first model the magnetization losses are computed according to the critical state model and the flux creep losses are considered separately. In the second model the superconductor resistivity is described by the widely used power law. Then the coupled current diffusion and heat conduction equations are solved with the finite element method. In order to compare the models, example runs are carried out with an adiabatic slab. Both models produce a similar significant temperature rise near the critical current which leads to fictitiously high n values.
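The "widely used power law" for superconductor resistivity referred to in this abstract is the E-J characteristic E = Ec (J/Jc)^n; a sketch, with the conventional 1 uV/cm field criterion as an assumed value of Ec:

```python
import math

def power_law_E(J, Jc, n, Ec=1e-4):
    # Power-law E-J characteristic for a superconductor:
    #   E = Ec * (J/Jc)**n
    # Ec = 1e-4 V/m is the common 1 uV/cm electric-field criterion; the
    # n value measures the sharpness of the transition at Jc.
    return Ec * (J / Jc) ** n

def n_value(J1, J2, E1, E2):
    # The n value is the slope of log E versus log J between two measured
    # points on the voltage-current curve.
    return math.log(E2 / E1) / math.log(J2 / J1)
```

A heating-distorted measurement steepens the apparent E-J curve, which is how the spuriously ("fictitiously") high n values discussed in the paper arise.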
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lowery, P.S.; Lessor, D.L.
Waste glass melter and in situ vitrification (ISV) processes represent the combination of electrical, thermal, and fluid flow phenomena to produce a stable waste-form product. Computational modeling of the thermal and fluid flow aspects of these processes provides a useful tool for assessing the potential performance of proposed system designs. These computations can be performed at a fraction of the cost of experiments. Consequently, computational modeling of vitrification systems can also provide an economical means of assessing the suitability of a proposed process application. The computational model described in this paper employs finite difference representations of the basic continuum conservation laws governing the thermal, fluid flow, and electrical aspects of the vitrification process -- i.e., conservation of mass, momentum, energy, and electrical charge. The resulting code is a member of the TEMPEST family of codes developed at the Pacific Northwest Laboratory (operated by Battelle for the US Department of Energy). This paper provides an overview of the numerical approach employed in TEMPEST. In addition, results from several TEMPEST simulations of sample waste glass melter and ISV processes are provided to illustrate the insights to be gained from computational modeling of these processes. 3 refs., 13 figs.
Design of teleoperation system with a force-reflecting real-time simulator
NASA Technical Reports Server (NTRS)
Hirata, Mitsunori; Sato, Yuichi; Nagashima, Fumio; Maruyama, Tsugito
1994-01-01
We developed a force-reflecting teleoperation system that uses a real-time graphic simulator. This system eliminates the effects of communication time delays in remote robot manipulation. The simulator provides the operator with predictive display and feedback of computed contact forces through a six-degree-of-freedom (6-DOF) master arm on a real-time basis. With this system, peg-in-hole tasks involving round-trip communication time delays of up to a few seconds were performed at three support levels: a real image alone, a predictive display with a real image, and a real-time graphic simulator with computed-contact-force reflection and a predictive display. The experimental results indicate that the best teleoperation efficiency was achieved by using the force-reflecting simulator with two images. The shortest work time, lowest sensor maximum, and a 100 percent success rate were obtained. These results demonstrate the effectiveness of simulated force reflection for teleoperation efficiency.
Evaluation of Enthalpy Diagrams for NH3-H2O Absorption Refrigerator
NASA Astrophysics Data System (ADS)
Takei, Toshitaka; Saito, Kiyoshi; Kawai, Sunao
The protection of the environment is becoming a grave problem nowadays, and the absorption refrigerator, which does not use Freon as a refrigerant, is attracting close attention. Among absorption refrigerators, a number of ammonia-water absorption refrigerators are used in realms such as refrigeration and ice accumulation, since this type of refrigerator can produce below-zero-degree products. It is essential to investigate the characteristics of the ammonia-water absorption refrigerator in detail by means of computer simulation in order to realize low-cost, highly efficient operation. Unfortunately, there have been a number of problems in conducting such computer simulations. Firstly, Merkel's enthalpy diagram does not give relational equations. Secondly, although relational equations have been proposed by Ziegler, simpler equations that can be applied to computer simulation are yet to be proposed. In this research, simpler equations based on Ziegler's equations have been derived to make computer simulation of the performance of the ammonia-water absorption refrigerator possible. The results of computer simulations using the simple equations and Merkel's enthalpy diagram, respectively, have been compared with actual experimental data from a one-stage ammonia-water absorption refrigerator. Consequently, it is clarified that the results from Ziegler's equations agree with the experimental data better than those from Merkel's enthalpy diagram.
PROTEUS two-dimensional Navier-Stokes computer code, version 1.0. Volume 3: Programmer's reference
NASA Technical Reports Server (NTRS)
Towne, Charles E.; Schwab, John R.; Benson, Thomas J.; Suresh, Ambady
1990-01-01
A new computer code was developed to solve the 2-D or axisymmetric, Reynolds-averaged, unsteady compressible Navier-Stokes equations in strong conservation law form. The thin-layer or Euler equations may also be solved. Turbulence is modeled using an algebraic eddy viscosity model. The objective was to develop a code for aerospace applications that is easy to use and easy to modify. Code readability, modularity, and documentation were emphasized. The equations are written in nonorthogonal body-fitted coordinates, and solved by marching in time using a fully-coupled alternating-direction-implicit procedure with generalized first- or second-order time differencing. All terms are linearized using second-order Taylor series. The boundary conditions are treated implicitly, and may be steady, unsteady, or spatially periodic. Simple Cartesian or polar grids may be generated internally by the program. More complex geometries require an externally generated computational coordinate system. The documentation is divided into three volumes. Volume 3 is the Programmer's Reference, and describes the program structure, the FORTRAN variables stored in common blocks, and the details of each subprogram.
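The alternating-direction-implicit procedure mentioned above factors each time step into one-dimensional implicit sweeps, and for a scalar model problem each sweep reduces to a tridiagonal solve (the fully coupled equations of the code lead to block-tridiagonal systems). A sketch of the standard Thomas algorithm for such systems, not PROTEUS's actual FORTRAN implementation:

```python
def thomas(a, b, c, d):
    # Thomas algorithm for a tridiagonal system A x = d, where a is the
    # sub-diagonal (a[0] unused), b the main diagonal, and c the
    # super-diagonal (c[-1] unused). Forward elimination followed by back
    # substitution; O(n) work per solve.
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

Each ADI half-step solves one such system per grid line, which is what makes the implicit treatment affordable.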
Decomposing the aerodynamic forces of low-Reynolds flapping airfoils
NASA Astrophysics Data System (ADS)
Moriche, Manuel; Garcia-Villalba, Manuel; Flores, Oscar
2016-11-01
We present direct numerical simulations of flow around flapping NACA0012 airfoils at a relatively small Reynolds number, Re = 1000. The simulations are carried out with TUCAN, an in-house code that solves the Navier-Stokes equations for an incompressible flow with an immersed boundary method to model the presence of the airfoil. The motion of the airfoil is composed of a vertical translation, heaving, and a rotation about the quarter of the chord, pitching. Both motions are prescribed by sinusoidal laws, with a reduced frequency of k = 1.41, a pitching amplitude of 30 degrees and a heaving amplitude of one chord. Both the mean pitch angle and the phase shift between pitching and heaving motions are varied, to build a database with 18 configurations. Four of these cases are analysed in detail using the force decomposition algorithm of Chang (1992) and Martín Alcántara et al. (2015). This method decomposes the total aerodynamic force into added-mass (translation and rotation of the airfoil), a volumetric contribution from the vorticity (circulatory effects) and a surface contribution proportional to viscosity. In particular we will focus on the second of these, analysing the contribution of the leading and trailing edge vortices that typically appear in these flows. This work has been supported by the Spanish MINECO under Grant TRA2013-41103-P. The authors thankfully acknowledge the computer resources provided by the Red Española de Supercomputación.
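The sinusoidal heaving and pitching laws described in this abstract can be sketched as follows; the exact conventions (the definition of the reduced frequency and where the phase shift enters) are assumptions for illustration, not taken from the paper:

```python
import math

def flapping_kinematics(t, k=1.41, c=1.0, U=1.0,
                        h0=1.0, theta0=math.radians(30),
                        theta_m=0.0, phi=math.pi / 2):
    # Sinusoidal heaving/pitching laws of the kind prescribed in the study.
    # k is the reduced frequency (here assumed k = omega*c/(2U)), h0 the
    # heaving amplitude in chords, theta0 the pitching amplitude, theta_m
    # the mean pitch angle, and phi the pitch-heave phase shift.
    omega = 2 * k * U / c
    h = h0 * c * math.sin(omega * t)                 # heave position
    theta = theta_m + theta0 * math.sin(omega * t + phi)  # pitch angle
    return h, theta
```

Sweeping theta_m and phi over a grid is what generates the 18-configuration database mentioned in the abstract.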
Effect of the Environment and Environmental Uncertainty on Ship Routes
2012-06-01
models consisting of basic differential equations simulating the fluid dynamic process and physics of the environment. Based on Newton's second law of... A simple transit across the Atlantic Ocean can easily become a rough voyage if the ship encounters high winds, which in turn will cause a high sea
A Real-Time Method for Estimating Viscous Forebody Drag Coefficients
NASA Technical Reports Server (NTRS)
Whitmore, Stephen A.; Hurtado, Marco; Rivera, Jose; Naughton, Jonathan W.
2000-01-01
This paper develops a real-time method based on the law of the wake for estimating forebody skin-friction coefficients. The incompressible law-of-the-wake equations are numerically integrated across the boundary layer depth to develop an engineering model that relates longitudinally averaged skin-friction coefficients to local boundary layer thickness. Solutions applicable to smooth surfaces with pressure gradients and rough surfaces with negligible pressure gradients are presented. Model accuracy is evaluated by comparing model predictions with previously measured flight data. This integral law procedure is beneficial in that skin-friction coefficients can be indirectly evaluated in real-time using a single boundary layer height measurement. In this concept a reference pitot probe is inserted into the flow, well above the anticipated maximum thickness of the local boundary layer. Another probe is servomechanism-driven and floats within the boundary layer. A controller regulates the position of the floating probe. The measured servomechanism position of this second probe provides an indirect measurement of both local and longitudinally averaged skin friction. Simulation results showing the performance of the control law for a noisy boundary layer are then presented.
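As a rough illustration of the idea in this abstract, relating a single boundary layer thickness measurement to skin friction, the sketch below evaluates Coles' law of the wake at y = δ and solves for the friction velocity by bisection. The constants and the exact closure are typical flat-plate assumptions, not the paper's engineering model:

```python
import math

def cf_from_delta(delta, U_e, nu, kappa=0.41, B=5.0, Pi=0.55):
    # Estimate the local skin-friction coefficient from a boundary layer
    # thickness measurement. Coles' law of the wake at the edge y = delta:
    #   U_e/u_tau = (1/kappa)*ln(delta*u_tau/nu) + B + 2*Pi/kappa
    # is solved for the friction velocity u_tau by bisection, then
    # Cf = 2*(u_tau/U_e)**2. kappa, B, Pi are typical smooth-flat-plate
    # values (assumptions for illustration).
    def residual(u_tau):
        return (math.log(delta * u_tau / nu) / kappa + B + 2 * Pi / kappa
                - U_e / u_tau)
    lo, hi = 1e-6 * U_e, U_e       # residual is negative at lo, positive at hi
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if residual(lo) * residual(mid) <= 0:
            hi = mid
        else:
            lo = mid
    u_tau = 0.5 * (lo + hi)
    return 2.0 * (u_tau / U_e) ** 2
```

This is the appeal of the integral-law approach: one servomechanism-tracked boundary layer height yields an indirect skin-friction estimate in real time.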
A Collection of Nonlinear Aircraft Simulations in MATLAB
NASA Technical Reports Server (NTRS)
Garza, Frederico R.; Morelli, Eugene A.
2003-01-01
Nonlinear six degree-of-freedom simulations for a variety of aircraft were created using MATLAB. Data for aircraft geometry, aerodynamic characteristics, mass / inertia properties, and engine characteristics were obtained from open literature publications documenting wind tunnel experiments and flight tests. Each nonlinear simulation was implemented within a common framework in MATLAB, and includes an interface with another commercially-available program to read pilot inputs and produce a three-dimensional (3-D) display of the simulated airplane motion. Aircraft simulations include the General Dynamics F-16 Fighting Falcon, Convair F-106B Delta Dart, Grumman F-14 Tomcat, McDonnell Douglas F-4 Phantom, NASA Langley Free-Flying Aircraft for Sub-scale Experimental Research (FASER), NASA HL-20 Lifting Body, NASA / DARPA X-31 Enhanced Fighter Maneuverability Demonstrator, and the Vought A-7 Corsair II. All nonlinear simulations and 3-D displays run in real time in response to pilot inputs, using contemporary desktop personal computer hardware. The simulations can also be run in batch mode. Each nonlinear simulation includes the full nonlinear dynamics of the bare airframe, with a scaled direct connection from pilot inputs to control surface deflections to provide adequate pilot control. Since all the nonlinear simulations are implemented entirely in MATLAB, user-defined control laws can be added in a straightforward fashion, and the simulations are portable across various computing platforms. Routines for trim, linearization, and numerical integration are included. The general nonlinear simulation framework and the specifics for each particular aircraft are documented.
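The numerical integration routines mentioned in this abstract are not specified in detail; a classical fixed-step fourth-order Runge-Kutta step of the kind such a simulation loop might use can be sketched as follows (in Python rather than MATLAB, purely for illustration):

```python
import math

def rk4_step(f, t, x, dt):
    # One classical fourth-order Runge-Kutta step for xdot = f(t, x),
    # where x is a list of state variables and f returns their derivatives.
    k1 = f(t, x)
    k2 = f(t + dt / 2, [xi + dt / 2 * ki for xi, ki in zip(x, k1)])
    k3 = f(t + dt / 2, [xi + dt / 2 * ki for xi, ki in zip(x, k2)])
    k4 = f(t + dt, [xi + dt * ki for xi, ki in zip(x, k3)])
    return [xi + dt / 6 * (a + 2 * b + 2 * c + d)
            for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]

# Demo: integrate xdot = x from x(0) = 1 to t = 1; the result approaches e.
x, t = [1.0], 0.0
for _ in range(100):
    x = rk4_step(lambda t, x: [x[0]], t, x, 0.01)
    t += 0.01
```

In a flight simulation, f would be the nonlinear six degree-of-freedom equations of motion driven by the pilot inputs and control law.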
NASA Technical Reports Server (NTRS)
Yanosy, James L.
1988-01-01
Over the years, computer modeling has been used extensively in many disciplines to solve engineering problems. A set of computer program tools is proposed to assist the engineer in the various phases of the Space Station program from technology selection through flight operations. The development and application of emulation and simulation transient performance modeling tools for life support systems are examined. The results of the development and the demonstration of the utility of three computer models are presented. The first model is a detailed computer model (emulation) of a solid amine water desorption (SAWD) CO2 removal subsystem combined with much less detailed models (simulations) of a cabin, crew, and heat exchangers. This model was used in parallel with the hardware design and test of this CO2 removal subsystem. The second model is a simulation of an air revitalization system combined with a wastewater processing system to demonstrate the capabilities to study subsystem integration. The third model is that of a Space Station total air revitalization system. The station configuration consists of a habitat module, a lab module, two crews, and four connecting nodes.