Evaluation of Computer Simulations for Teaching Apparel Merchandising Concepts.
ERIC Educational Resources Information Center
Jolly, Laura D.; Sisler, Grovalynn
1988-01-01
The study developed and evaluated computer simulations for teaching apparel merchandising concepts. Evaluation results indicated that teaching method (computer simulation versus case study) does not significantly affect cognitive learning. Student attitudes varied, however, according to topic (profitable merchandising analysis versus retailing…
ERIC Educational Resources Information Center
Tang, Hui; Abraham, Michael R.
2016-01-01
Computer-based simulations can help students visualize chemical representations and understand chemistry concepts, but simulations at different levels of representation may vary in effectiveness on student learning. This study investigated the influence of computer activities that simulate chemical reactions at different levels of representation…
NASA Technical Reports Server (NTRS)
Curran, R. T.; Hornfeck, W. A.
1972-01-01
The functional requirements for the design of an interpretive simulator for the space ultrareliable modular computer (SUMC) are presented. A review of applicable existing computer simulations is included along with constraints on the SUMC simulator functional design. Input requirements, output requirements, and language requirements for the simulator are discussed in terms of a SUMC configuration which may vary according to the application.
Users manual for linear Time-Varying Helicopter Simulation (Program TVHIS)
NASA Technical Reports Server (NTRS)
Burns, M. R.
1979-01-01
A linear time-varying helicopter simulation program (TVHIS) is described. The program is designed as a realistic yet efficient helicopter simulation. It is based on a linear time-varying helicopter model which includes rotor, actuator, and sensor models, as well as a simulation of flight computer logic. The TVHIS can generate a mean trajectory simulation along a nominal trajectory, or propagate covariance of helicopter states, including rigid-body, turbulence, control command, controller states, and rigid-body state estimates.
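The covariance-propagation step described above has a compact general form. As a rough illustration (not the TVHIS code), the following Python sketch propagates the mean state and covariance of a discrete-time linear time-varying system along a nominal trajectory; the system matrices and noise levels are invented for the example.

```python
import numpy as np

def propagate_mean_and_cov(x0, P0, A_of_t, Q_of_t, ts):
    """Propagate mean state and covariance of a discrete-time LTV system:
        x[k+1] = A(t_k) x[k] + w[k],   w[k] ~ N(0, Q(t_k))
    """
    x, P = x0.copy(), P0.copy()
    means, covs = [x.copy()], [P.copy()]
    for t in ts[:-1]:
        A, Q = A_of_t(t), Q_of_t(t)
        x = A @ x                # mean trajectory along the nominal path
        P = A @ P @ A.T + Q      # covariance propagation with process noise
        means.append(x.copy())
        covs.append(P.copy())
    return np.array(means), np.array(covs)

# Toy example: a 2-state system whose dynamics vary slowly with time.
ts = np.linspace(0.0, 10.0, 101)
A_of_t = lambda t: np.array([[1.0, 0.1], [-0.05 * (1 + 0.1 * t), 0.95]])
Q_of_t = lambda t: 1e-4 * np.eye(2)
means, covs = propagate_mean_and_cov(np.array([1.0, 0.0]), 0.01 * np.eye(2),
                                     A_of_t, Q_of_t, ts)
print(means[-1], covs[-1])
```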
Paper simulation techniques in user requirements analysis for interactive computer systems
NASA Technical Reports Server (NTRS)
Ramsey, H. R.; Atwood, M. E.; Willoughby, J. K.
1979-01-01
This paper describes the use of a technique called 'paper simulation' in the analysis of user requirements for interactive computer systems. In a paper simulation, the user solves problems with the aid of a 'computer', as in normal man-in-the-loop simulation. In this procedure, though, the computer does not exist, but is simulated by the experimenters. This allows simulated problem solving early in the design effort, and allows the properties and degree of structure of the system and its dialogue to be varied. The technique, and a method of analyzing the results, are illustrated with examples from a recent paper simulation exercise involving a Space Shuttle flight design task.
[Design of a miniaturized blood temperature-varying system based on computer distributed control].
Xu, Qiang; Zhou, Zhaoying; Peng, Jiegang; Zhu, Junhua
2007-10-01
Blood temperature-varying has been widely applied in clinical practice, such as extracorporeal circulation for whole-body perfusion hyperthermia (WBPH), body rewarming, and blood temperature-varying in organ transplantation. This paper reports a novel DCS (computer distributed control)-based blood temperature-varying system which includes a therapy management function and whose hardware and software can be extended easily. Simulation results illustrate that this system provides precise temperature control with good performance under various operating conditions.
Large Scale Geologic Controls on Hydraulic Stimulation
NASA Astrophysics Data System (ADS)
McLennan, J. D.; Bhide, R.
2014-12-01
When simulating hydraulic fracturing, the analyst has historically prescribed a single planar fracture. Originally (in the 1950s through the 1970s) this was necessitated by computational restrictions. In the latter part of the twentieth century, hydraulic fracture simulation evolved to incorporate vertical propagation controlled by modulus, fluid loss, and the minimum principal stress. With improvements in software, computational capacity, and recognition that in-situ discontinuities are relevant, fully three-dimensional hydraulic fracture simulation is now becoming possible. Advances in simulation capabilities enable coupling structural geologic data (three-dimensional representation of stresses, natural fractures, and stratigraphy) with decision making processes for stimulation - volumes, rates, fluid types, completion zones. Without this interaction between simulation capabilities and geological information, low permeability formation exploitation may linger on the fringes of real economic viability. Comparative simulations have been undertaken in varying structural environments where the stress contrast and the frequency of natural discontinuities cause varying patterns of multiple, hydraulically generated or reactivated flow paths. Stress conditions and the nature of the discontinuities are selected as variables and are used to simulate how fracturing can vary in different structural regimes. The basis of the simulations is commercial distinct element software (Itasca Corporation's 3DEC).
Simulation of a combined-cycle engine
NASA Technical Reports Server (NTRS)
Vangerpen, Jon
1991-01-01
A FORTRAN computer program was developed to simulate the performance of combined-cycle engines. These engines combine features of both gas turbines and reciprocating engines. The computer program can simulate both design point and off-design operation. Widely varying engine configurations can be evaluated for their power, performance, and efficiency, as well as the influence of altitude and air speed. Although the program was developed to simulate aircraft engines, it can be used with equal success for stationary and automotive applications.
Some issues related to simulation of the tracking and communications computer network
NASA Technical Reports Server (NTRS)
Lacovara, Robert C.
1989-01-01
The Communications Performance and Integration branch of the Tracking and Communications Division has an ongoing involvement in the simulation of its flight hardware for Space Station Freedom. Specifically, the communication process between central processor(s) and orbital replaceable units (ORU's) is simulated with varying degrees of fidelity. The results of investigations into three aspects of this simulation effort are given. The most general area involves the use of computer assisted software engineering (CASE) tools for this particular simulation. The second area of interest is simulation methods for systems of mixed hardware and software. The final area investigated is the application of simulation methods to one of the proposed computer network protocols for space station, specifically IEEE 802.4.
NASA Technical Reports Server (NTRS)
Palusinski, O. A.; Allgyer, T. T.; Mosher, R. A.; Bier, M.; Saville, D. A.
1981-01-01
A mathematical model of isoelectric focusing at the steady state has been developed for an M-component system of electrochemically defined ampholytes. The model is formulated from fundamental principles describing the components' chemical equilibria, mass transfer resulting from diffusion and electromigration, and electroneutrality. The model consists of ordinary differential equations coupled with a system of algebraic equations. The model is implemented on a digital computer using FORTRAN-based simulation software. Computer simulation data are presented for several two-component systems showing the effects of varying the isoelectric points and dissociation constants of the constituents.
Polymer Composites Corrosive Degradation: A Computational Simulation
NASA Technical Reports Server (NTRS)
Chamis, Christos C.; Minnetyan, Levon
2007-01-01
A computational simulation of polymer composite corrosive durability is presented. The corrosive environment is assumed to manage the polymer composite degradation on a ply-by-ply basis. The degradation is correlated with a measured pH factor and is represented by voids, temperature, and moisture, which vary through the laminate thickness (parabolically for voids, linearly for temperature and moisture). The simulation is performed by a computational composite mechanics computer code which includes micro, macro, combined stress failure, and laminate theories. The simulation thus starts from constitutive material properties and proceeds up to the laminate scale, at which the laminate is exposed to the corrosive environment. Results obtained for one laminate indicate that the ply-by-ply degradation degrades the laminate down to the last ply or the last several plies. Results also demonstrate that the simulation is applicable to other polymer composite systems as well.
Navier-Stokes simulations of slender axisymmetric shapes in supersonic, turbulent flow
NASA Astrophysics Data System (ADS)
Moran, Kenneth J.; Beran, Philip S.
1994-07-01
Computational fluid dynamics is used to study flows about slender, axisymmetric bodies at very high speeds. Numerical experiments are conducted to simulate a broad range of flight conditions. Mach number is varied from 1.5 to 8 and Reynolds number from 1 × 10^6/m to 10^8/m. The primary objective is to develop and validate a computational methodology for the accurate simulation of a wide variety of flow structures. Accurate results are obtained for detached bow shocks, recompression shocks, corner-point expansions, base-flow recirculations, and turbulent boundary layers. Accuracy is assessed through comparison with theory and experimental data; computed surface pressure, shock structure, base-flow structure, and velocity profiles are within measurement accuracy throughout the range of conditions tested. The methodology is both practical and general: general in its applicability, and practical in its performance. To achieve high accuracy, modifications to previously reported techniques are implemented in the scheme. These modifications improve computed results in the vicinity of symmetry lines and in the base-flow region, including the turbulent wake.
Thrust Augmentation Study of Cross-Flow Fan for Vertical Take-Off and Landing Aircraft
2012-09-01
configuration by varying the gap between the CFFs. Computational fluid simulations of the dual CFF configuration were performed using ANSYS CFX to find the thrust generated as well as the optimal operating point.
ERIC Educational Resources Information Center
Schmitt, T. A.; Sass, D. A.; Sullivan, J. R.; Walker, C. M.
2010-01-01
Imposed time limits on computer adaptive tests (CATs) can result in examinees having difficulty completing all items, thus compromising the validity and reliability of ability estimates. In this study, the effects of speededness were explored in a simulated CAT environment by varying examinee response patterns to end-of-test items. Expectedly,…
Exploring Biomolecular Recognition by Modeling and Simulation
NASA Astrophysics Data System (ADS)
Wade, Rebecca
2007-12-01
Biomolecular recognition is complex. The balance between the different molecular properties that contribute to molecular recognition, such as shape, electrostatics, dynamics and entropy, varies from case to case. This, along with the extent of experimental characterization, influences the choice of appropriate computational approaches to study biomolecular interactions. I will present computational studies in which we aim to make concerted use of bioinformatics, biochemical network modeling and molecular simulation techniques to study protein-protein and protein-small molecule interactions and to facilitate computer-aided drug design.
Understanding Resonance Graphs Using Easy Java Simulations (EJS) and Why We Use EJS
ERIC Educational Resources Information Center
Wee, Loo Kang; Lee, Tat Leong; Chew, Charles; Wong, Darren; Tan, Samuel
2015-01-01
This paper reports a computer model simulation created using Easy Java Simulation (EJS) for learners to visualize how the steady-state amplitude of a driven oscillating system varies with the frequency of the periodic driving force. The simulation shows (N = 100) identical spring-mass systems being subjected to (1) a periodic driving force of…
The LHCb Grid Simulation: Proof of Concept
NASA Astrophysics Data System (ADS)
Hushchyn, M.; Ustyuzhanin, A.; Arzymatov, K.; Roiser, S.; Baranov, A.
2017-10-01
The Worldwide LHC Computing Grid provides researchers in different geographical locations with access to data and the computational resources to analyze it. The grid has a hierarchical topology with multiple sites distributed over the world with varying numbers of CPUs, amounts of disk storage, and connection bandwidth. Job scheduling and data distribution strategy are key elements of grid performance. Optimization of algorithms for those tasks requires testing them on the real grid, which is hard to achieve. Having a grid simulator might simplify this task and therefore lead to more optimal scheduling and data placement algorithms. In this paper we demonstrate a grid simulator for the LHCb distributed computing software.
Human operator identification model and related computer programs
NASA Technical Reports Server (NTRS)
Kessler, K. M.; Mohr, J. N.
1978-01-01
Four computer programs which provide computational assistance in the analysis of man/machine systems are reported. The programs are: (1) Modified Transfer Function Program (TF); (2) Time Varying Response Program (TVSR); (3) Optimal Simulation Program (TVOPT); and (4) Linear Identification Program (SCIDNT). The TF program converts the time-domain state-variable system representation to a frequency-domain transfer-function system representation. The TVSR program computes time histories of the input/output responses of the human operator model. The TVOPT program is an optimal simulation program and is similar to TVSR in that it produces time histories of system states associated with an operator-in-the-loop system. The differences between the two programs are presented. The SCIDNT program is an open-loop identification code which operates on the simulated data from TVOPT (or TVSR) or real operator data from motion simulators.
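The state-variable-to-transfer-function conversion performed by the TF program corresponds to a standard operation. A minimal Python sketch using scipy.signal.ss2tf is shown below; the example system is an arbitrary second-order lag, not one of the report's operator models.

```python
import numpy as np
from scipy import signal

# State-space model of a simple second-order lag element:
#   x' = A x + B u,   y = C x + D u
A = np.array([[0.0, 1.0], [-4.0, -2.0]])
B = np.array([[0.0], [4.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

num, den = signal.ss2tf(A, B, C, D)   # transfer-function coefficients
print("H(s) numerator:  ", num)       # [[0. 0. 4.]]  ->  4
print("H(s) denominator:", den)       # [1. 2. 4.]    ->  s^2 + 2s + 4
```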
A digital computer simulation and study of a direct-energy-transfer power-conditioning system
NASA Technical Reports Server (NTRS)
Burns, W. W., III; Owen, H. A., Jr.; Wilson, T. G.; Rodriguez, G. E.; Paulkovich, J.
1974-01-01
A digital computer simulation technique, which can be used to study such composite power-conditioning systems, was applied to a spacecraft direct-energy-transfer power-processing system. The results obtained duplicate actual system performance with considerable accuracy. The validity of the approach and its usefulness in studying various aspects of system performance such as steady-state characteristics and transient responses to severely varying operating conditions are demonstrated experimentally.
Automated inverse computer modeling of borehole flow data in heterogeneous aquifers
NASA Astrophysics Data System (ADS)
Sawdey, J. R.; Reeve, A. S.
2012-09-01
A computer model has been developed to simulate borehole flow in heterogeneous aquifers where the vertical distribution of permeability may vary significantly. In crystalline fractured aquifers, flow into or out of a borehole occurs at discrete locations of fracture intersection. Under these circumstances, flow simulations are defined by independent variables of transmissivity and far-field heads for each flow contributing fracture intersecting the borehole. The computer program, ADUCK (A Downhole Underwater Computational Kit), was developed to automatically calibrate model simulations to collected flowmeter data providing an inverse solution to fracture transmissivity and far-field head. ADUCK has been tested in variable borehole flow scenarios, and converges to reasonable solutions in each scenario. The computer program has been created using open-source software to make the ADUCK model widely available to anyone who could benefit from its utility.
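A toy version of the inverse problem ADUCK solves can be written in a few lines. The sketch below (not the ADUCK code) assumes each fracture contributes inflow q_i = T_i(h_i - h_w) and uses two flow surveys at different borehole heads so that transmissivities and far-field heads are jointly identifiable; all numbers are synthetic.

```python
import numpy as np
from scipy.optimize import least_squares

# Minimal sketch: three fractures intersect the borehole, and each
# contributes inflow q_i = T_i * (h_i - h_w) for a given borehole head
# h_w. Two flow surveys (ambient and pumped) make the six unknowns
# (T_i, h_i) jointly identifiable. Data are synthetic.
h_w_ambient, h_w_pumped = 10.0, 8.5              # assumed borehole heads (m)
q_ambient = np.array([3.0e-4, 1.1e-4, 4.0e-5])   # per-fracture inflows (m^3/s)
q_pumped = np.array([4.5e-4, 2.6e-4, 1.9e-4])

def residuals(params):
    T, h = params[:3], params[3:]        # transmissivities, far-field heads
    r1 = T * (h - h_w_ambient) - q_ambient
    r2 = T * (h - h_w_pumped) - q_pumped
    return np.concatenate([r1, r2])

x0 = np.concatenate([1e-4 * np.ones(3), 11.0 * np.ones(3)])
fit = least_squares(residuals, x0)
print("T estimates:", fit.x[:3])
print("far-field head estimates:", fit.x[3:])
```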
Flow in curved ducts of varying cross-section
NASA Astrophysics Data System (ADS)
Sotiropoulos, F.; Patel, V. C.
1992-07-01
Two numerical methods for solving the incompressible Navier-Stokes equations are compared with each other by applying them to calculate laminar and turbulent flows through curved ducts of regular cross-section. Detailed comparisons, between the computed solutions and experimental data, are carried out in order to validate the two methods and to identify their relative merits and disadvantages. Based on the conclusions of this comparative study a numerical method is developed for simulating viscous flows through curved ducts of varying cross-sections. The proposed method is capable of simulating the near-wall turbulence using fine computational meshes across the sublayer in conjunction with a two-layer k-epsilon model. Numerical solutions are obtained for: (1) a straight transition duct geometry, and (2) a hydroturbine draft-tube configuration at model scale Reynolds number for various inlet swirl intensities. The report also provides a detailed literature survey that summarizes all the experimental and computational work in the area of duct flows.
Structural Composites Corrosive Management by Computational Simulation
NASA Technical Reports Server (NTRS)
Chamis, Christos C.; Minnetyan, Levon
2006-01-01
A simulation of corrosive management of polymer composite durability is presented. The corrosive environment is assumed to manage the polymer composite degradation on a ply-by-ply basis. The degradation is correlated with a measured pH factor and is represented by voids, temperature, and moisture, which vary through the laminate thickness (parabolically for voids, linearly for temperature and moisture). The simulation is performed by a computational composite mechanics computer code which includes micro, macro, combined stress failure, and laminate theories. The simulation thus starts from constitutive material properties and proceeds up to the laminate scale, at which the laminate is exposed to the corrosive environment. Results obtained for one laminate indicate that the ply-by-ply managed degradation degrades the laminate down to the last ply or the last several plies. Results also demonstrate that the simulation is applicable to other polymer composite systems as well.
Mendoza, Patricia; d'Anjou, Marc-André; Carmel, Eric N; Fournier, Eric; Mai, Wilfried; Alexander, Kate; Winter, Matthew D; Zwingenberger, Allison L; Thrall, Donald E; Theoret, Christine
2014-01-01
Understanding radiographic anatomy and the effects of varying patient and radiographic tube positioning on image quality can be a challenge for students. The purposes of this study were to develop and validate a novel technique for creating simulated radiographs using computed tomography (CT) datasets. A DICOM viewer (ORS Visual) plug-in was developed with the ability to move and deform cuboidal volumetric CT datasets, and to produce images simulating the effects of tube-patient-detector distance and angulation. Computed tomographic datasets were acquired from two dogs, one cat, and one horse. Simulated radiographs of different body parts (n = 9) were produced using different angles to mimic conventional projections, before actual digital radiographs were obtained using the same projections. These studies (n = 18) were then submitted to 10 board-certified radiologists who were asked to score visualization of anatomical landmarks, depiction of patient positioning, realism of distortion/magnification, and image quality. No significant differences between simulated and actual radiographs were found for anatomic structure visualization and patient positioning in the majority of body parts. For the assessment of radiographic realism, no significant differences were found between simulated and digital radiographs for canine pelvis, equine tarsus, and feline abdomen body parts. Overall, image quality and contrast resolution of simulated radiographs were considered satisfactory. Findings from the current study indicated that radiographs simulated using this new technique are comparable to actual digital radiographs. Further studies are needed to apply this technique in developing interactive tools for teaching radiographic anatomy and the effects of varying patient and tube positioning. © 2013 American College of Veterinary Radiology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gelbard, F.; Fitzgerald, J.W.; Hoppel, W.A.
1998-07-01
We present the theoretical framework and computational methods that were used by Fitzgerald et al. [this issue (a), (b)] describing a one-dimensional sectional model to simulate multicomponent aerosol dynamics in the marine boundary layer. The concepts and limitations of modeling spatially varying multicomponent aerosols are elucidated. New numerical sectional techniques are presented for simulating multicomponent aerosol growth, settling, and eddy transport, coupled to time-dependent and spatially varying condensing vapor concentrations. Comparisons are presented with new exact solutions for settling and particle growth by simultaneous dynamic condensation of one vapor and by instantaneous equilibration with a spatially varying second vapor. © 1998 American Geophysical Union.
Adding computationally efficient realism to Monte Carlo turbulence simulation
NASA Technical Reports Server (NTRS)
Campbell, C. W.
1985-01-01
Frequently in aerospace vehicle flight simulation, random turbulence is generated using the assumption that the craft is small compared to the length scales of turbulence. The turbulence is presumed to vary only along the flight path of the vehicle but not across the vehicle span. The addition of the realism of three-dimensionality is a worthy goal, but any such attempt will not gain acceptance in the simulator community unless it is computationally efficient. A concept for adding three-dimensional realism with a minimum of computational complexity is presented. The concept involves the use of close rational approximations to irrational spectra and cross-spectra so that systems of stable, explicit difference equations can be used to generate the turbulence.
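The idea of replacing an irrational spectrum with a close rational approximation, so that turbulence can be generated by a stable, explicit difference equation, can be illustrated with a first-order recursive filter. The sketch below is a generic example assuming a Dryden-like exponential autocorrelation; V, L, and sigma are illustrative values, not from the report.

```python
import numpy as np

# Minimal sketch: a first-order difference equation driven by white
# noise generates turbulence with an approximately exponential
# autocorrelation -- cheap enough for real-time flight simulation.
rng = np.random.default_rng(0)
V, L, sigma = 100.0, 300.0, 1.5    # airspeed (m/s), length scale (m), rms (m/s)
dt, n = 0.01, 10_000

a = np.exp(-V * dt / L)            # pole of the discrete filter
b = sigma * np.sqrt(1.0 - a * a)   # scales output to the desired rms
u = np.zeros(n)
for k in range(n - 1):
    u[k + 1] = a * u[k] + b * rng.standard_normal()

print("sample rms:", u.std())      # ~ sigma
```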
Use of computational fluid dynamics in respiratory medicine.
Fernández Tena, Ana; Casan Clarà, Pere
2015-06-01
Computational Fluid Dynamics (CFD) is a computer-based tool for simulating fluid movement. The main advantages of CFD over other fluid mechanics studies include: substantial savings in time and cost, the analysis of systems or conditions that are very difficult to simulate experimentally (as is the case of the airways), and a practically unlimited level of detail. We used the Ansys-Fluent CFD program to develop a conducting airway model to simulate different inspiratory flow rates and the deposition of inhaled particles of varying diameters, obtaining results consistent with those reported in the literature using other procedures. We hope this approach will enable clinicians to further individualize the treatment of different respiratory diseases. Copyright © 2014 SEPAR. Published by Elsevier España. All rights reserved.
Merritt, M.L.
1993-01-01
The simulation of the transport of injected freshwater in a thin brackish aquifer, overlain and underlain by confining layers containing more saline water, is shown to be influenced by the choice of the finite-difference approximation method, the algorithm for representing vertical advective and dispersive fluxes, and the values assigned to parametric coefficients that specify the degree of vertical dispersion and molecular diffusion that occurs. Computed potable water recovery efficiencies will differ depending upon the choice of algorithm and approximation method, as will dispersion coefficients estimated based on the calibration of simulations to match measured data. A comparison of centered and backward finite-difference approximation methods shows that substantially different transition zones between injected and native waters are depicted by the different methods, and computed recovery efficiencies vary greatly. Standard and experimental algorithms and a variety of values for molecular diffusivity, transverse dispersivity, and vertical scaling factor were compared in simulations of freshwater storage in a thin brackish aquifer. Computed recovery efficiencies vary considerably, and appreciable differences are observed in the distribution of injected freshwater in the various cases tested. The results demonstrate both a qualitatively different description of transport using the experimental algorithms and the interrelated influences of molecular diffusion and transverse dispersion on simulated recovery efficiency. When simulating natural aquifer flow in cross-section, flushing of the aquifer occurred for all tested coefficient choices using both standard and experimental algorithms. © 1993.
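The sensitivity to the finite-difference approximation that Merritt describes is easy to reproduce in one dimension. The following sketch (a generic illustration, not the study's model) advects a sharp freshwater front with a backward-in-space (upwind) update and compares the width of the computed transition zone with the exact, unsmeared front.

```python
import numpy as np

# Minimal sketch of why the finite-difference choice matters: 1-D
# advection of a sharp freshwater front. Upwind differencing is stable
# but adds numerical dispersion that smears the transition zone,
# mimicking extra mixing; the exact solution just shifts the front.
nx, nt, c = 200, 100, 0.5                    # cells, steps, Courant number
u = np.where(np.arange(nx) < 40, 1.0, 0.0)   # injected "freshwater" = 1
u_up = u.copy()
for _ in range(nt):
    u_up[1:] -= c * (u_up[1:] - u_up[:-1])   # backward-in-space update

exact = np.where(np.arange(nx) < 40 + int(nt * c), 1.0, 0.0)
width = np.sum((u_up > 0.05) & (u_up < 0.95))
print("numerical transition-zone width (cells):", width)   # > 0: smeared
print("exact transition-zone width (cells):",
      np.sum((exact > 0.05) & (exact < 0.95)))              # 0: sharp
```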
A Comparative Study of High and Low Fidelity Fan Models for Turbofan Engine System Simulation
NASA Technical Reports Server (NTRS)
Reed, John A.; Afjeh, Abdollah A.
1991-01-01
In this paper, a heterogeneous propulsion system simulation method is presented. The method is based on the formulation of a cycle model of a gas turbine engine. The model includes the nonlinear characteristics of the engine components via use of empirical data. The potential to simulate the entire engine operation on a computer without the aid of data is demonstrated by numerically generating "performance maps" for a fan component using two flow models of varying fidelity. The suitability of the fan models was evaluated by comparing the computed performance with experimental data. A discussion of the potential benefits and/or difficulties in connecting simulation solutions of differing fidelity is given.
Simulating the Gradually Deteriorating Performance of an RTG
NASA Technical Reports Server (NTRS)
Wood, Eric G.; Ewell, Richard C.; Patel, Jagdish; Hanks, David R.; Lozano, Juan A.; Snyder, G. Jeffrey; Noon, Larry
2008-01-01
Degra (now in version 3) is a computer program that simulates the performance of a radioisotope thermoelectric generator (RTG) over its lifetime. Degra is provided with a graphical user interface that is used to edit input parameters that describe the initial state of the RTG and the time-varying loads and environment to which it will be exposed. Performance is computed by modeling the flows of heat from the radioactive source and through the thermocouples, also allowing for losses, to determine the temperature drop across the thermocouples. This temperature drop is used to determine the open-circuit voltage, electrical resistance, and thermal conductance of the thermocouples. Output power can then be computed by relating the open-circuit voltage and the electrical resistance of the thermocouples to a specified time-varying load voltage. Degra accounts for the gradual deterioration of performance attributable primarily to decay of the radioactive source and secondarily to gradual deterioration of the thermoelectric material. To provide guidance to an RTG designer, given a minimum of input, Degra computes the dimensions, masses, and thermal conductances of important internal structures as well as the overall external dimensions and total mass.
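The chain of calculations Degra performs can be caricatured in a few lines. The sketch below is not Degra; it strings together radioisotope decay, a lumped thermal conductance, and a constant Seebeck coefficient and internal resistance to get power into a specified load voltage, with all parameter values invented for illustration.

```python
# Minimal sketch (not Degra itself) of the chain described above:
# radioisotope decay -> heat flow through the thermocouples ->
# temperature drop -> open-circuit voltage and internal resistance ->
# power delivered into a specified load voltage. Values are illustrative.
def rtg_power(t_years, V_load):
    Q0, half_life = 4400.0, 87.7           # initial heat (W); Pu-238 half-life (yr)
    Q = Q0 * 0.5 ** (t_years / half_life)  # decayed thermal power
    K = 8.0                                # lumped thermal conductance, W/K
    dT = Q / K                             # temperature drop across the couples
    S, R = 0.12, 2.0                       # lumped Seebeck (V/K), resistance (ohm)
    V_oc = S * dT                          # open-circuit voltage
    I = max((V_oc - V_load) / R, 0.0)      # current into the load bus
    return V_load * I                      # electrical output power, W

for yr in (0, 5, 10, 14):
    print(yr, "yr:", round(rtg_power(yr, 28.0), 1), "W")
```

A fuller model would also degrade S and R over time to capture the secondary deterioration of the thermoelectric material that the abstract mentions.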
Computing pKa Values with a Mixing Hamiltonian Quantum Mechanical/Molecular Mechanical Approach.
Liu, Yang; Fan, Xiaoli; Jin, Yingdi; Hu, Xiangqian; Hu, Hao
2013-09-10
Accurate computation of the pKa value of a compound in solution is important but challenging. Here, a new mixing quantum mechanical/molecular mechanical (QM/MM) Hamiltonian method is developed to simulate the free-energy change associated with the protonation/deprotonation processes in solution. The mixing Hamiltonian method is designed for efficient quantum mechanical free-energy simulations by alchemically varying the nuclear potential, i.e., the nuclear charge of the transforming nucleus. In the pKa calculation, the charge on the proton is varied in fraction between 0 and 1, corresponding to the fully deprotonated and protonated states, respectively. Inspired by the mixing potential QM/MM free energy simulation method developed previously [H. Hu and W. T. Yang, J. Chem. Phys. 2005, 123, 041102], this method inherits many of the advantages of a large class of λ-coupled free-energy simulation methods and the linear combination of atomic potential approach. The theory and technical details of this method, along with the calculated pKa values of methanol and methanethiol molecules in aqueous solution, are reported. The results show satisfactory agreement with the experimental data.
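The λ-coupling described above feeds a standard thermodynamic-integration estimate of the free-energy change. As a schematic (with made-up ⟨dU/dλ⟩ averages, not the paper's data), the quadrature and the conversion of a relative free energy to a pKa shift look like this:

```python
import numpy as np

# Schematic of the thermodynamic-integration step: sampled averages of
# dU/dlambda at a few lambda windows are integrated to give the
# free-energy change of scaling the proton's nuclear charge, and a
# difference against a reference compound is converted to a pKa shift.
# All numerical values are made up for illustration.
lambdas = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
dU_dlam = np.array([-310.2, -295.8, -280.1, -262.7, -248.9])  # kJ/mol

dG = np.trapz(dU_dlam, lambdas)                 # integrate <dU/dlambda>
dG_reference = -285.0                           # hypothetical reference compound
RT_ln10 = 8.314e-3 * 298.15 * np.log(10.0)      # kJ/mol
pKa_shift = (dG - dG_reference) / RT_ln10
print("Delta G:", round(dG, 2), "kJ/mol; pKa shift vs reference:",
      round(pKa_shift, 2))
```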
GUI for Computational Simulation of a Propellant Mixer
NASA Technical Reports Server (NTRS)
Figueroa, Fernando; Richter, Hanz; Barbieri, Enrique; Granger, Jamie
2005-01-01
Control Panel is a computer program that generates a graphical user interface (GUI) for computational simulation of a rocket-test-stand propellant mixer in which gaseous hydrogen (GH2) is injected into flowing liquid hydrogen (LH2) to obtain a combined flow having desired thermodynamic properties. The GUI is used in conjunction with software that models the mixer as a system having three inputs (the positions of the GH2 and LH2 inlet valves and an outlet valve) and three outputs (the pressure inside the mixer and the outlet flow temperature and flow rate). The user can specify valve characteristics and thermodynamic properties of the input fluids via user-friendly dialog boxes. The user can enter temporally varying input values or temporally varying desired output values. The GUI provides (1) a set-point calculator function for determining fixed valve positions that yield desired output values and (2) simulation functions that predict the response of the mixer to variations in the properties of the LH2 and GH2 and manual- or feedback-control variations in valve positions. The GUI enables scheduling of a sequence of operations that includes switching from manual to feedback control when a certain event occurs.
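The set-point calculator function can be illustrated by a toy mass/energy balance. In the sketch below (an assumption-laden stand-in, not the actual mixer model), a constant specific heat is implied, so it cancels from the energy balance, and scipy.optimize.fsolve finds the two inlet flows that hit a target outlet flow and temperature:

```python
from scipy.optimize import fsolve

# Minimal sketch of a set-point calculation: find the GH2 and LH2 inlet
# flows that hit a target outlet flow rate and temperature. With a
# constant specific heat the cp cancels, so the mixed temperature is a
# mass-weighted average. All numbers are stand-ins, not real hydrogen
# property data.
T_gh2, T_lh2 = 250.0, 21.0       # inlet temperatures, K (illustrative)
T_target, m_target = 30.0, 5.0   # desired outlet temperature (K), flow (kg/s)

def balance(flows):
    m_g, m_l = flows
    m_out = m_g + m_l                            # mass balance
    T_out = (m_g * T_gh2 + m_l * T_lh2) / m_out  # energy balance, constant cp
    return [m_out - m_target, T_out - T_target]

m_g, m_l = fsolve(balance, [1.0, 4.0])
print(f"GH2 flow: {m_g:.3f} kg/s, LH2 flow: {m_l:.3f} kg/s")
```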
Genetic data simulators and their applications: an overview
Peng, Bo; Chen, Huann-Sheng; Mechanic, Leah E.; Racine, Ben; Clarke, John; Gillanders, Elizabeth; Feuer, Eric J.
2016-01-01
Computer simulations have played an indispensable role in the development and application of statistical models and methods for genetic studies across multiple disciplines. The need to simulate complex evolutionary scenarios and pseudo-datasets for various studies has fueled the development of dozens of computer programs with varying reliability, performance, and application areas. To help researchers compare and choose the most appropriate simulators for their studies, we have created the Genetic Simulation Resources (GSR) website, which allows authors of simulation software to register their applications and describe them with more than 160 defined attributes. This article summarizes the properties of 93 simulators currently registered at GSR and provides an overview of the development and applications of genetic simulators. Unlike other review articles that address technical issues or compare simulators for particular application areas, we focus on software development, maintenance, and features of simulators, often from a historical perspective. Publications that cite these simulators are used to summarize both the applications of genetic simulations and the utilization of simulators. PMID:25504286
Zhao, Chenhui; Zhang, Guangcheng; Wu, Yibo
2012-01-01
The resin flow behavior in the vacuum assisted resin infusion molding (VARI) process of foam sandwich composites was studied by both flow visualization experiments and computer simulation. Both experimental and simulation results show that the distribution medium (DM) leads to a shorter mold filling time in grooved foam sandwich composites via the VARI process, and that the mold filling time decreases linearly as the DM/preform ratio increases. The pattern of the resin source has a significant influence on the resin filling time: the filling time of a center source is shorter than that of an edge source, and a point source results in a longer filling time than a linear source. Short edge/center patterns need a longer time to fill the mold compared with long edge/center sources.
Computer image generation: Reconfigurability as a strategy in high fidelity space applications
NASA Technical Reports Server (NTRS)
Bartholomew, Michael J.
1989-01-01
The demand for realistic, high fidelity, computer image generation systems to support space simulation is well established. However, as the number and diversity of space applications increase, the complexity and cost of computer image generation systems also increase. One strategy used to harmonize cost with varied requirements is establishment of a reconfigurable image generation system that can be adapted rapidly and easily to meet new and changing requirements. The reconfigurability strategy through the life cycle of system conception, specification, design, implementation, operation, and support for high fidelity computer image generation systems is discussed. The discussion is limited to those issues directly associated with reconfigurability and adaptability of a specialized scene generation system in a multi-faceted space applications environment. Examples and insights gained through the recent development and installation of the Improved Multi-function Scene Generation System at Johnson Space Center, Systems Engineering Simulator are reviewed and compared with current simulator industry practices. The results are clear; the strategy of reconfigurability applied to space simulation requirements provides a viable path to supporting diverse applications with an adaptable computer image generation system.
The effect of photometric and geometric context on photometric and geometric lightness effects
Lee, Thomas Y.; Brainard, David H.
2014-01-01
We measured the lightness of probe tabs embedded at different orientations in various contextual images presented on a computer-controlled stereo display. Two background context planes met along a horizontal roof-like ridge. Each plane was a graphic rendering of a set of achromatic surfaces with the simulated illumination for each plane controlled independently. Photometric context was varied by changing the difference in simulated illumination intensity between the two background planes. Geometric context was varied by changing the angle between them. We parsed the data into separate photometric effects and geometric effects. For fixed geometry, varying photometric context led to linear changes in both the photometric and geometric effects. Varying geometric context did not produce a statistically reliable change in either the photometric or geometric effects. PMID:24464163
Blast Load Simulator Experiments for Computational Model Validation Report 3
2017-07-01
establish confidence in the results produced by the simulations. This report describes a set of replicate experiments in which a small, non-responding steel...designed to simulate blast waveforms for explosive yields up to 20,000 lb of TNT equivalent at a peak reflected pressure up to 80 psi and a peak...the pressure loading on a non-responding box-type structure at varying obliquities located in the flow of the BLS simulated blast environment for
NASA Astrophysics Data System (ADS)
Chien, Cheng-Chih
In the past thirty years, the effectiveness of computer assisted learning was found to vary across individual studies. Today, with drastic technical improvement, computers have become widespread in schools and are used in a variety of ways. In this study, a design model involving educational technology, pedagogy, and content domain is proposed for effective use of computers in learning. Computer simulation, constructivist and Vygotskian perspectives, and circular motion are the three elements of the specific Chain Model for instructional design. The goal of the physics course is to help students discard ideas that are inconsistent with those of the physics community and rebuild new knowledge. To achieve the learning goal, the strategies of using conceptual conflicts and using language to internalize specific tasks into mental functions were included. Computer simulations and accompanying worksheets were used to help students explore their own ideas and to generate questions for discussion. Using animated images to describe the dynamic processes involved in circular motion may reduce the complexity and possible miscommunication resulting from verbal explanations. The effectiveness of the instructional material on student learning was evaluated. The results of problem solving activities show that students using computer simulations had significantly higher scores than students not using computer simulations. For conceptual understanding, on the pretest students in the non-simulation group had significantly higher scores than students in the simulation group; no significant difference was observed between the two groups on the posttest. The relations of gender, prior physics experience, and frequency of computer use outside the course to student achievement were also studied. There were fewer female students than male students and fewer students using computer simulations than students not using them; these characteristics limit the statistical power for detecting differences. For future research, more simulation interventions could be introduced to explore the potential of computer simulation in helping students learn. A test for conceptual understanding with more problems and an appropriate difficulty level may be needed.
Spectral quality requirements for effluent identification
NASA Astrophysics Data System (ADS)
Czerwinski, R. N.; Seeley, J. A.; Wack, E. C.
2005-11-01
We consider the problem of remotely identifying gaseous materials using passive sensing of long-wave infrared (LWIR) spectral features at hyperspectral resolution. Gaseous materials are distinguishable in the LWIR because of their unique spectral fingerprints. A sensor degraded in capability by noise or limited spectral resolution, however, may be unable to positively identify contaminants, especially if they are present in low concentrations or if the spectral library used for comparisons includes materials with similar spectral signatures. This paper will quantify the relative importance of these parameters and express the relationships between them in a functional form which can be used as a rule of thumb in sensor design or in assessing sensor capability for a specific task. This paper describes the simulation of remote sensing data containing a gas cloud. In each simulation, the spectra are degraded in spectral resolution and through the addition of noise to simulate spectra collected by sensors of varying design and capability. We form a trade space by systematically varying the number of sensor spectral channels and the signal-to-noise ratio over a range of values. For each scenario, we evaluate the capability of the sensor for gas identification by computing the ratio of the F-statistic for the truth gas to the same statistic computed over the rest of the library. The effect of the scope of the library is investigated as well, by computing statistics on the variability of the identification capability as the library composition is varied randomly.
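The figure of merit used here, the ratio of the truth gas's F-statistic to that of its best confuser, can be sketched with synthetic spectra. In the toy example below, library signatures are simple Gaussians and the F-statistic comes from a one-parameter regression; a real analysis would add background estimation and noise whitening, and the functions and values are my own illustration, not the paper's pipeline.

```python
import numpy as np

# Toy figure of merit: regress the observed spectrum on each library
# signature and compare the F-statistic of the true gas with the best F
# among the confusers.
rng = np.random.default_rng(1)
wn = np.linspace(800, 1200, 150)              # wavenumber grid (1/cm)
lib = np.array([np.exp(-0.5 * ((wn - c) / 12) ** 2) for c in (900, 950, 1000)])
truth = 0                                      # index of the true gas
scene = 0.05 * lib[truth] + 0.01 * rng.standard_normal(wn.size)

def f_stat(y, x):
    beta = (x @ y) / (x @ x)              # least-squares fit to one signature
    rss1 = np.sum((y - beta * x) ** 2)    # residual with the signature
    rss0 = np.sum(y ** 2)                 # residual without it
    return (rss0 - rss1) / (rss1 / (y.size - 1))

scores = [f_stat(scene, s) for s in lib]
ratio = scores[truth] / max(s for i, s in enumerate(scores) if i != truth)
print("F ratio (truth vs best confuser):", round(ratio, 2))
```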
ERIC Educational Resources Information Center
Mayrath, Michael C.; Nihalani, Priya K.; Robinson, Daniel H.
2011-01-01
In 2 experiments, 241 undergraduates with low domain knowledge viewed a tutorial on how to use Packet Tracer (PT), a computer-networking training simulation developed by the Cisco Networking Academy. Participants were then tested on retention of tutorial content and transfer using PT. Tutorial modality (text, narration, or narration plus text) was…
ERIC Educational Resources Information Center
Cordell, Curtis C.; And Others
A training effectiveness evaluation of the Navy Advanced Fire Fighting Training System was conducted. This system incorporates simulated fires as well as curriculum materials and instruction. The fires are non-pollutant, computer controlled, and installed in a simulated shipboard environment. Two teams of 15 to 16 persons, with varying amounts of…
NASA Astrophysics Data System (ADS)
Herrington, A. R.; Lauritzen, P. H.; Reed, K. A.
2017-12-01
The spectral element dynamical core of the Community Atmosphere Model (CAM) has recently been coupled to an approximately isotropic, finite-volume grid via implementation of the conservative semi-Lagrangian multi-tracer transport scheme (CAM-SE-CSLAM; Lauritzen et al. 2017). In this framework, the semi-Lagrangian transport of tracers is computed on the finite-volume grid, while the adiabatic dynamics are solved using the spectral element grid. The physical parameterizations are evaluated on the finite-volume grid, as opposed to the unevenly spaced Gauss-Lobatto-Legendre nodes of the spectral element grid. Computing the physics on the finite-volume grid reduces numerical artifacts such as grid imprinting, possibly because the forcing terms are no longer computed at element boundaries where the resolved dynamics are least smooth. The separation of the physics grid and the dynamics grid allows for a unique opportunity to understand the resolution sensitivity in CAM-SE-CSLAM. The observed large sensitivity of CAM to horizontal resolution is a poorly understood impediment to improved simulations of regional climate using global, variable resolution grids. Here, a series of idealized moist simulations are presented in which the finite-volume grid resolution is varied relative to the spectral element grid resolution in CAM-SE-CSLAM. The simulations are carried out at multiple spectral element grid resolutions, in part to provide a companion set of simulations, in which the spectral element grid resolution is varied relative to the finite-volume grid resolution, but more generally to understand if the sensitivity to the finite-volume grid resolution is consistent across a wider spectrum of resolved scales. Results are interpreted in the context of prior ideas regarding resolution sensitivity of global atmospheric models.
Comprehensive national database of tree effects on air quality and human health in the United States
Satoshi Hirabayashi; David J. Nowak
2016-01-01
Trees remove air pollutants through dry deposition processes depending upon forest structure, meteorology, and air quality that vary across space and time. Employing nationally available forest, weather, air pollution and human population data for 2010, computer simulations were performed for deciduous and evergreen trees with varying leaf area index for rural and...
Large-scale expensive black-box function optimization
NASA Astrophysics Data System (ADS)
Rashid, Kashif; Bailey, William; Couët, Benoît
2012-09-01
This paper presents the application of an adaptive radial basis function method to a computationally expensive black-box reservoir simulation model of many variables. An iterative proxy-based scheme is used to tune the control variables, distributed for finer control over a varying number of intervals covering the total simulation period, to maximize asset NPV. The method shows that large-scale simulation-based function optimization of several hundred variables is practical and effective.
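The proxy-based loop has a simple skeleton: fit a radial basis function surrogate to all expensive evaluations so far, optimize the surrogate, evaluate the true model at the proposed controls, and refit. The sketch below uses scipy's RBFInterpolator and replaces the reservoir simulator with an analytic stand-in; it is an illustration of the general adaptive-RBF idea, not the authors' implementation.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
# Stand-in for the expensive black-box reservoir model (maximize "NPV").
expensive_npv = lambda x: -np.sum((x - 0.3) ** 2, axis=-1)

dim, n_init, n_iter = 4, 20, 15
X = rng.uniform(0, 1, (n_init, dim))         # initial control settings
y = expensive_npv(X)

for _ in range(n_iter):
    proxy = RBFInterpolator(X, y)            # cheap surrogate of NPV
    cand = rng.uniform(0, 1, (2000, dim))    # candidate control settings
    x_new = cand[np.argmax(proxy(cand))]     # surrogate-optimal candidate
    X = np.vstack([X, x_new])                # one real "simulation" per loop
    y = np.append(y, expensive_npv(x_new))

print("best NPV found:", y.max().round(4), "at controls:",
      X[np.argmax(y)].round(3))
```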
Simulation of biochemical reactions with time-dependent rates by the rejection-based algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thanh, Vo Hong, E-mail: vo@cosbi.eu; Priami, Corrado, E-mail: priami@cosbi.eu; Department of Mathematics, University of Trento, Trento
We address the problem of simulating biochemical reaction networks with time-dependent rates and propose a new algorithm based on our rejection-based stochastic simulation algorithm (RSSA) [Thanh et al., J. Chem. Phys. 141(13), 134116 (2014)]. The computation for selecting next reaction firings by our time-dependent RSSA (tRSSA) is computationally efficient. Furthermore, the generated trajectory is exact by exploiting the rejection-based mechanism. We benchmark tRSSA on different biological systems with varying forms of reaction rates to demonstrate its applicability and efficiency. We reveal that for nontrivial cases, the selection of reaction firings in existing algorithms introduces approximations because the integration of reaction rates is very computationally demanding and simplifying assumptions are introduced. The selection of the next reaction firing by our approach is easier while preserving the exactness.
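The rejection mechanism that keeps such a scheme exact without integrating a(t) is essentially thinning: sample candidate firing times from a constant bound on the propensity and accept each with probability a(t)/a_bound. A one-reaction sketch of that idea (illustrative, not the authors' implementation):

```python
import numpy as np

# One-reaction thinning sketch: draw candidate firing times from a
# constant bound a_max on the time-dependent propensity a(t), then
# accept each candidate with probability a(t)/a_max. Accepted times are
# exact draws from the inhomogeneous process; no integral of a(t) is
# ever computed.
rng = np.random.default_rng(0)
a = lambda t: 2.0 + 1.5 * np.sin(t)   # oscillating propensity
a_max = 3.5                           # upper bound on a(t)

def next_firing(t):
    while True:
        t += rng.exponential(1.0 / a_max)   # candidate from the bounding process
        if rng.uniform() < a(t) / a_max:    # rejection test keeps exactness
            return t

t, firings = 0.0, []
while t < 20.0:
    t = next_firing(t)
    firings.append(t)
print(len(firings), "firings in [0, 20]; mean rate ~", len(firings) / 20.0)
```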
NASA Astrophysics Data System (ADS)
Sable, Peter; Helminiak, Nathaniel; Harstad, Eric; Gullerud, Arne; Hollenshead, Jeromy; Hertel, Eugene; Sandia National Laboratories Collaboration; Marquette University Collaboration
2017-06-01
With the increasing use of hydrocodes in modeling and system design, experimental benchmarking of software has never been more important. While this has been a large area of focus since the inception of computational design, comparisons with temperature data are sparse due to experimental limitations. A novel temperature measurement technique, magnetic diffusion analysis, has enabled the acquisition of in-flight temperature measurements of hypervelocity projectiles. Using this, an AC-14 bare shaped charge and an LX-14 EFP, both with copper linings, were simulated using CTH to benchmark temperature against experimental results. Particular attention was given to the slug temperature profiles after separation and to the effect of varying equation-of-state and strength models. Simulations are in agreement with experiment, attaining better than 2% error against observed shaped-charge temperatures; this varied notably depending on the strength model used. Similar observations were made when simulating the EFP case, with a minimum 4% deviation. Jet structures compare well with radiographic images and are consistent with ALEGRA simulations previously conducted. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
NASA Astrophysics Data System (ADS)
Zhou, Weizhou; Shi, Baiou; Webb, Edmund
2017-11-01
Recently, there have been many experimental and theoretical studies aimed at understanding and controlling the dynamic spreading of nano-suspension droplets on solid surfaces. However, fundamental understanding of the driving forces dictating the kinetics of nano-suspension wetting and spreading, especially the capillary forces that manifest during the process, is lacking. Here, we present results from atomic-scale simulations that were used to compute forces between suspended particles and advancing liquid fronts. The role of nano-particle size, particle loading, and interaction strength on forces computed from simulations will be discussed. Results demonstrate that increasing the particle size dramatically changes the observed wetting behavior from depinning to pinning. From simulations with varying particle size, a relationship between computed forces and particle size is advanced and compared to existing expressions in the literature. High particle loading significantly slowed spreading kinetics by introducing tortuous transport paths for liquid delivery to the advancing contact line. Lastly, we show how weakening the interaction between the particle and the underlying substrate can change a system from exhibiting pinning behavior to de-pinning.
Virtual reality neurosurgery: a simulator blueprint.
Spicer, Mark A; van Velsen, Martin; Caffrey, John P; Apuzzo, Michael L J
2004-04-01
This article details preliminary studies undertaken to integrate the most relevant advancements across multiple disciplines in an effort to construct a highly realistic neurosurgical simulator based on a distributed computer architecture. Techniques based on modified computational modeling paradigms incorporating finite element analysis are presented, as are current and projected efforts directed toward the implementation of a novel bidirectional haptic device. Patient-specific data derived from noninvasive magnetic resonance imaging sequences are used to construct a computational model of the surgical region of interest. Magnetic resonance images of the brain may be coregistered with those obtained from magnetic resonance angiography, magnetic resonance venography, and diffusion tensor imaging to formulate models of varying anatomic complexity. The majority of the computational burden is encountered in the presimulation reduction of the computational model and allows realization of the required threshold rates for the accurate and realistic representation of real-time visual animations. Intracranial neurosurgical procedures offer an ideal testing site for the development of a totally immersive virtual reality surgical simulator when compared with the simulations required in other surgical subspecialties. The material properties of the brain as well as the typically small volumes of tissue exposed in the surgical field, coupled with techniques and strategies to minimize computational demands, provide unique opportunities for the development of such a simulator. Incorporation of real-time haptic and visual feedback is approached here and likely will be accomplished soon.
NASA Technical Reports Server (NTRS)
Bizzell, G. D.; Crane, G. E.
1976-01-01
A boundary value problem was solved numerically for a liquid that is assumed to be inviscid and incompressible, having a motion that is irrotational and axisymmetric, and having a constant (5 degrees) solid-liquid contact angle. The avoidance of excessive mesh distortion, encountered with strictly Lagrangian or Eulerian kinematics, was achieved by introducing an auxiliary kinematic velocity field along the free surface in order to vary the trajectories used in integrating the ordinary differential equations simulating the moving boundary. The computation of the velocity potential was based upon a nonuniform triangular mesh which was automatically revised to varying depths to accommodate the motion of the free surface. These methods permitted calculation of draining induced axisymmetric slosh through the many (or fractional) finite amplitude oscillations that can occur depending upon the balance of draining, gravitational, and surface tension forces. Velocity fields, evolution of the free surface with time, and liquid residual volumes were computed for three and one half decades of Weber number and for two Bond numbers, tank fill levels, and drain radii. Comparisons with experimental data are very satisfactory.
Radke, Wolfgang
2004-03-05
Simulations of the distribution coefficients of linear polymers and regular combs with various spacings between the arms have been performed. The distribution coefficients were plotted as a function of the number of segments in order to compare the size exclusion chromatography (SEC) elution behavior of combs relative to linear molecules. By comparing the simulated SEC calibration curves it is possible to predict the elution behavior of comb-shaped polymers relative to linear ones. In order to compare the results obtained by computer simulations with experimental data, a variety of comb-shaped polymers varying in side chain length, spacing between the side chains, and molecular weight of the backbone were analyzed by SEC with light-scattering detection. It was found that the computer simulations could predict the molecular weights of linear molecules having the same retention volume with an accuracy of about 10%, i.e. the error in the molecular weight obtained by calculating the molecular weight of the comb polymer from a calibration curve constructed using linear standards and the results of the computer simulations is of the same magnitude as the experimental error of absolute molecular weight determination.
A static data flow simulation study at Ames Research Center
NASA Technical Reports Server (NTRS)
Barszcz, Eric; Howard, Lauri S.
1987-01-01
Demands in computational power, particularly in the area of computational fluid dynamics (CFD), led NASA Ames Research Center to study advanced computer architectures. One architecture being studied is the static data flow architecture based on research done by Jack B. Dennis at MIT. To improve understanding of this architecture, a static data flow simulator, written in Pascal, has been implemented for use on a Cray X-MP/48. A matrix multiply and a two-dimensional fast Fourier transform (FFT), two algorithms used in CFD work at Ames, have been run on the simulator. Execution times can vary by a factor of more than 2 depending on the partitioning method used to assign instructions to processing elements. Service time for matching tokens has proved to be a major bottleneck. Loop control and array address calculation overhead can double the execution time. The best sustained MFLOPS rates were less than 50% of the maximum capability of the machine.
NASA Astrophysics Data System (ADS)
Han, Suyue; Chang, Gary Han; Schirmer, Clemens; Modarres-Sadeghi, Yahya
2016-11-01
We construct a reduced-order model (ROM) to study the Wall Shear Stress (WSS) distributions in image-based patient-specific aneurysm models. The magnitude of WSS has been shown to be a critical factor in growth and rupture of human aneurysms. We start the process by running a training case using Computational Fluid Dynamics (CFD) simulation with time-varying flow parameters, such that these parameters cover the range of parameters of interest. The method of snapshot Proper Orthogonal Decomposition (POD) is utilized to construct the reduced-order bases using the training CFD simulation. The resulting ROM enables us to study the flow patterns and the WSS distributions over a range of system parameters computationally very efficiently with a relatively small number of modes. This enables comprehensive analysis of the model system across a range of physiological conditions without the need to re-compute the simulation for small changes in the system parameters.
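Snapshot POD itself reduces to an SVD of mean-subtracted snapshot data. The sketch below (with random stand-in data rather than CFD fields) extracts a basis capturing 99% of the snapshot energy and reconstructs one field in that basis:

```python
import numpy as np

# Minimal snapshot-POD sketch: collect WSS (or velocity) snapshots as
# columns, subtract the mean, and take the leading left singular
# vectors as the reduced basis. New fields are approximated in that
# basis. Random data stand in for CFD output here.
rng = np.random.default_rng(0)
n_points, n_snap = 5000, 200
snapshots = rng.standard_normal((n_points, n_snap))

mean = snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)

energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.99)) + 1      # modes for 99% of the energy
basis = U[:, :r]

field = snapshots[:, [0]]                       # reconstruct one snapshot
coeffs = basis.T @ (field - mean)               # project onto the POD basis
recon = mean + basis @ coeffs
print("modes kept:", r, "relative error:",
      np.linalg.norm(recon - field) / np.linalg.norm(field))
```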
Method and apparatus for transfer function simulator for testing complex systems
NASA Technical Reports Server (NTRS)
Kavaya, M. J. (Inventor)
1985-01-01
A method and apparatus for testing the operation of a complex stabilization circuit in a closed-loop system are presented. The method employs a programmed analog or digital computing system to implement the transfer function of a load, thereby providing a predictable load. The digital computing system employs a table stored in a microprocessor in which precomputed values of the load transfer function are stored for values of the input signal from the stabilization circuit over the range of interest. This technique may be used not only for isolating faults in the stabilization circuit, but also for analyzing a fault in a faulty load by varying parameters of the computing system so as to simulate operation of the actual load with the fault.
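The table-lookup scheme the abstract describes can be mimicked with a stored input/response table and run-time interpolation. In the sketch below the load transfer function is an invented static curve, and a gain perturbation stands in for varying a parameter to mimic a faulty load:

```python
import numpy as np

# Minimal sketch of the table-lookup idea: precompute the load's
# response over the input range of interest, store it, and interpolate
# at run time so the stabilization circuit sees a predictable "load".
drive = np.linspace(0.0, 5.0, 64)             # input range of interest (V)
response = np.tanh(drive) + 0.1 * drive       # stand-in load transfer function
table = np.column_stack([drive, response])    # what the microprocessor stores

def simulated_load(v_in, fault_gain=1.0):
    # fault_gain != 1 perturbs the table to mimic a faulty load
    return fault_gain * np.interp(v_in, table[:, 0], table[:, 1])

print("nominal:", simulated_load(2.3))
print("faulty :", simulated_load(2.3, fault_gain=0.8))
```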
Quantitative, steady-state properties of Catania's computational model of the operant reserve.
Berg, John P; McDowell, J J
2011-05-01
Catania (2005) found that a computational model of the operant reserve (Skinner, 1938) produced realistic behavior in initial, exploratory analyses. Although Catania's operant reserve computational model demonstrated potential to simulate varied behavioral phenomena, the model was not systematically tested. The current project replicated and extended the Catania model, clarified its capabilities through systematic testing, and determined the extent to which it produces behavior corresponding to matching theory. Significant departures from both classic and modern matching theory were found in behavior generated by the model across all conditions. The results suggest that a simple, dynamic operant model of the reflex reserve does not simulate realistic steady state behavior.
Adaptive quantum computation in changing environments using projective simulation
NASA Astrophysics Data System (ADS)
Tiersch, M.; Ganahl, E. J.; Briegel, H. J.
2015-08-01
Quantum information processing devices need to be robust and stable against external noise and internal imperfections to ensure correct operation. In a setting of measurement-based quantum computation, we explore how an intelligent agent endowed with a projective simulator can act as controller to adapt measurement directions to an external stray field of unknown magnitude in a fixed direction. We assess the agent’s learning behavior in static and time-varying fields and explore composition strategies in the projective simulator to improve the agent’s performance. We demonstrate the applicability by correcting for stray fields in a measurement-based algorithm for Grover’s search. Thereby, we lay out a path for adaptive controllers based on intelligent agents for quantum information tasks.
Biobeam—Multiplexed wave-optical simulations of light-sheet microscopy
Weigert, Martin; Bundschuh, Sebastian T.
2018-01-01
Sample-induced image-degradation remains an intricate wave-optical problem in light-sheet microscopy. Here we present biobeam, an open-source software package that enables simulation of operational light-sheet microscopes by combining data from 10^5-10^6 multiplexed and GPU-accelerated point-spread-function calculations. The wave-optical nature of these simulations leads to the faithful reproduction of spatially varying aberrations, diffraction artifacts, geometric image distortions, adaptive optics, and emergent wave-optical phenomena, and renders image-formation in light-sheet microscopy computationally tractable. PMID:29652879
Composite Load Spectra for Select Space Propulsion Structural Components
NASA Technical Reports Server (NTRS)
Ho, Hing W.; Newell, James F.
1994-01-01
Generic load models are described with multiple levels of progressive sophistication to simulate the composite (combined) load spectra (CLS) that are induced in space propulsion system components, representative of Space Shuttle Main Engines (SSME), such as transfer ducts, turbine blades and liquid oxygen (LOX) posts. These generic (coupled) models combine deterministic models of the dynamic, acoustic, high-pressure, high-rotational-speed, and other load components using statistically varying coefficients. These coefficients are then determined using advanced probabilistic simulation methods with and without strategically selected experimental data. The entire simulation process is included in a CLS computer code. Applications of the computer code to various components in conjunction with the PSAM (Probabilistic Structural Analysis Method) to perform probabilistic load evaluation and life prediction evaluations are also described to illustrate the effectiveness of the coupled model approach.
Reduced order models for assessing CO2 impacts in shallow unconfined aquifers
Keating, Elizabeth H.; Harp, Dylan H.; Dai, Zhenxue; ...
2016-01-28
Risk assessment studies of potential CO2 sequestration projects consider many factors, including the possibility of brine and/or CO2 leakage from the storage reservoir. Detailed multiphase reactive transport simulations have been developed to predict the impact of such leaks on shallow groundwater quality; however, these simulations are computationally expensive and thus difficult to directly embed in a probabilistic risk assessment analysis. Here we present a process for developing computationally fast reduced-order models which emulate key features of the more detailed reactive transport simulations. A large ensemble of simulations that take into account uncertainty in aquifer characteristics and CO2/brine leakage scenarios were performed. Twelve simulation outputs of interest were used to develop response surfaces (RSs) using a MARS (multivariate adaptive regression splines) algorithm (Milborrow, 2015). A key part of this study is to compare different measures of ROM accuracy. We then show that for some computed outputs, MARS performs very well in matching the simulation data. The capability of the RS to predict simulation outputs for parameter combinations not used in RS development was tested using cross-validation. Again, for some outputs, these results were quite good. For other outputs, however, the method performs relatively poorly. Performance was best for predicting the volume of depressed-pH plumes, and was relatively poor for predicting organic and trace metal plume volumes. We believe several factors, including the non-linearity of the problem, complexity of the geochemistry, and granularity in the simulation results, contribute to this varied performance. The reduced order models were developed principally to be used in probabilistic performance analysis where a large range of scenarios are considered and ensemble performance is calculated. We demonstrate that they effectively predict the ensemble behavior. However, the performance of the RSs is much less accurate when used to predict time-varying outputs from a single simulation. If an analysis requires only a small number of scenarios to be investigated, computationally expensive physics-based simulations would likely provide more reliable results. Finally, if the aggregate behavior of a large number of realizations is the focus, as will be the case in probabilistic quantitative risk assessment, the methodology presented here is relatively robust.
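Since scikit-learn ships no MARS implementation, the sketch below substitutes a quadratic polynomial response surface for the MARS fit to illustrate the fit-then-cross-validate workflow on stand-in ensemble data; the substitution and all values are assumptions of ours, not the paper's setup.

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    # Stand-in ensemble: rows are simulations, columns are uncertain aquifer
    # and leakage parameters; y is one output of interest (e.g., plume volume).
    rng = np.random.default_rng(1)
    X = rng.uniform(size=(500, 6))
    y = X[:, 0] ** 2 + 0.5 * X[:, 1] * X[:, 2] + 0.05 * rng.standard_normal(500)

    # Quadratic response surface as a stand-in for MARS.
    rs = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())

    # Cross-validation tests prediction at parameter combinations not used
    # in fitting, as in the ROM accuracy assessment described above.
    scores = cross_val_score(rs, X, y, cv=5, scoring="r2")
    print(scores.mean())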
Shuttle operations simulation model programmers'/users' manual
NASA Technical Reports Server (NTRS)
Porter, D. G.
1972-01-01
The prospective user of the shuttle operations simulation (SOS) model is given sufficient information to enable him to perform simulation studies of the space shuttle launch-to-launch operations cycle. The procedures used for modifying the SOS model to meet user requirements are described. The various control card sequences required to execute the SOS model are given. The report is written for users with varying computer simulation experience. A description of the components of the SOS model is included that presents both an explanation of the logic involved in the simulation of the shuttle operations cycle and a description of the routines used to support the actual simulation.
Phase transformations at interfaces: Observations from atomistic modeling
Frolov, T.; Asta, M.; Mishin, Y.
2016-10-01
Here, we review the recent progress in theoretical understanding and atomistic computer simulations of phase transformations in materials interfaces, focusing on grain boundaries (GBs) in metallic systems. Recently developed simulation approaches enable the search and structural characterization of GB phases in single-component metals and binary alloys, calculation of thermodynamic properties of individual GB phases, and modeling of the effect of the GB phase transformations on GB kinetics. Atomistic simulations demonstrate that the GB transformations can be induced by varying the temperature, loading the GB with point defects, or varying the amount of solute segregation. The atomic-level understanding obtained from such simulations can provide input for further development of thermodynamic theories and continuum models of interface phase transformations while simultaneously serving as a testing ground for validation of theories and models. Such simulations can also help interpret and guide experimental work in this field.
Prediction-Correction Algorithms for Time-Varying Constrained Optimization
Simonetto, Andrea; Dall'Anese, Emiliano
2017-07-26
This article develops online algorithms to track solutions of time-varying constrained optimization problems. Particularly, resembling workhorse Kalman filtering-based approaches for dynamical systems, the proposed methods involve prediction-correction steps to provably track the trajectory of the optimal solutions of time-varying convex problems. The merits of existing prediction-correction methods have been shown for unconstrained problems and for setups where computing the inverse of the Hessian of the cost function is computationally affordable. This paper addresses the limitations of existing methods by tackling constrained problems and by designing first-order prediction steps that rely on the Hessian of the cost function (and do not require the computation of its inverse). In addition, the proposed methods are shown to improve the convergence speed of existing prediction-correction methods when applied to unconstrained problems. Numerical simulations corroborate the analytical results and showcase performance and benefits of the proposed algorithms. A realistic application of the proposed method to real-time control of energy resources is presented.
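A toy prediction-correction loop on an unconstrained time-varying quadratic conveys the idea; the target trajectory, step sizes, and finite-difference prediction below are illustrative assumptions, far simpler than the constrained first-order steps developed in the paper.

    import numpy as np

    # Toy time-varying problem: track x*(t) = argmin_x 0.5*||x - r(t)||^2,
    # where the target r(t) drifts in time.
    def r(t):
        return np.array([np.cos(t), np.sin(t)])

    h = 0.1         # sampling interval
    alpha = 0.5     # correction step size
    x = r(0.0)
    r_prev = r(0.0)

    for k in range(1, 100):
        t = k * h
        # Prediction: first-order (finite-difference) extrapolation of the
        # optimizer drift; no Hessian inverse is required.
        x = x + (r(t) - r_prev)
        # Correction: a few gradient steps on the cost frozen at time t.
        for _ in range(3):
            x = x - alpha * (x - r(t))
        r_prev = r(t)

    print(np.linalg.norm(x - r(99 * h)))    # small tracking error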
NASA Astrophysics Data System (ADS)
Larsen, J. D.; Schaap, M. G.
2013-12-01
Recent advances in computing technology and experimental techniques have made it possible to observe and characterize fluid dynamics at the micro-scale. Many computational methods exist that can adequately simulate fluid flow in porous media. Lattice Boltzmann methods provide the distinct advantage of tracking particles at the microscopic level and returning macroscopic observations. While experimental methods can accurately measure macroscopic fluid dynamics, computational efforts can be used to predict and gain insight into fluid dynamics by utilizing thin sections or computed micro-tomography (CMT) images of core sections. Although substantial efforts have been made to advance non-invasive imaging methods such as CMT, fluid dynamics simulations, and microscale analysis, a true three-dimensional image segmentation technique has only recently been developed. Many competing segmentation techniques are utilized in industry and research settings with varying results. In this study, the lattice Boltzmann method is used to simulate Stokes flow in a macroporous soil column. Two-dimensional CMT images were used to reconstruct a three-dimensional representation of the original sample. Six competing segmentation standards were used to binarize the CMT volumes, which provides the distinction between solid phase and pore space. The permeability of the reconstructed samples was calculated, with Darcy's Law, from lattice Boltzmann simulations of fluid flow in the samples. We compare simulated permeability from differing segmentation algorithms to experimental findings.
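Post-processing a simulated flow field into a permeability via Darcy's law reduces to one line of arithmetic; the numbers below are placeholders, not values from the study.

    # Darcy's law rearranged for permeability, as used to post-process a
    # lattice Boltzmann velocity field.
    mu = 1.0e-3        # dynamic viscosity, Pa*s
    L = 0.05           # sample length along the flow direction, m
    dP = 10.0          # applied pressure drop, Pa
    u_mean = 2.0e-6    # mean superficial velocity from the simulation, m/s

    k = mu * u_mean * L / dP     # permeability, m^2
    print(f"permeability = {k:.3e} m^2")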
Growth and yield models for central hardwoods
Martin E. Dale; Donald E. Hilt
1989-01-01
Over the last 20 years computers have become an efficient tool to estimate growth and yield. Computerized yield estimates vary from simple approximation or interpolation of traditional normal yield tables to highly sophisticated programs that simulate the growth and yield of each individual tree.
Aircraft noise synthesis system
NASA Technical Reports Server (NTRS)
Mccurdy, David A.; Grandle, Robert E.
1987-01-01
A second-generation Aircraft Noise Synthesis System has been developed to provide test stimuli for studies of community annoyance to aircraft flyover noise. The computer-based system generates realistic, time-varying, audio simulations of aircraft flyover noise at a specified observer location on the ground. The synthesis takes into account the time-varying aircraft position relative to the observer; specified reference spectra consisting of broadband, narrowband, and pure-tone components; directivity patterns; Doppler shift; atmospheric effects; and ground effects. These parameters can be specified and controlled in such a way as to generate stimuli in which certain noise characteristics, such as duration or tonal content, are independently varied, while the remaining characteristics, such as broadband content, are held constant. The system can also generate simulations of the predicted noise characteristics of future aircraft. A description of the synthesis system and a discussion of the algorithms and methods used to generate the simulations are provided. An appendix describing the input data and providing user instructions is also included.
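The Doppler-shift ingredient of such a synthesis can be sketched for a level, straight flyover; the geometry and speeds below are invented, and this is not the synthesis system's code.

    import numpy as np

    # Doppler factor for a level flyover, one ingredient of the synthesis.
    c = 340.0                       # speed of sound, m/s
    v = 80.0                        # aircraft speed, m/s
    h = 300.0                       # altitude above the observer, m
    t = np.linspace(-20, 20, 401)   # time, s; overhead at t = 0

    x = v * t                               # along-track position
    cos_theta = -x / np.sqrt(x**2 + h**2)   # angle between velocity and
                                            # the line to the observer
    doppler = 1.0 / (1.0 - (v / c) * cos_theta)

    # An emitted 1 kHz tone is heard at doppler * 1000 Hz, sliding from
    # above to below 1 kHz as the aircraft passes overhead.
    print(doppler.max(), doppler.min())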
Unsteady streamflow simulation using a linear implicit finite-difference model
Land, Larry F.
1978-01-01
A computer program for simulating one-dimensional subcritical, gradually varied, unsteady flow in a stream has been developed and documented. Given upstream and downstream boundary conditions and channel geometry data, roughness coefficients, stage, and discharge can be calculated anywhere within the reach as a function of time. The program uses a linear implicit finite-difference technique that discretizes the partial differential equations. Then it arranges the coefficients of the continuity and momentum equations into a pentadiagonal matrix for solution. Because it is a reasonable compromise between computational accuracy, speed, and ease of use, the technique is one of the most commonly used. The upstream boundary condition is a depth hydrograph. However, options also allow the boundary condition to be discharge or water-surface elevation. The downstream boundary condition is a depth which may be constant, self-setting, or unsteady. The reach may be divided into uneven increments, and the cross sections may be nonprismatic and may vary from one to the other. Tributary and lateral inflow may enter the reach. The digital model will simulate such common problems as (1) flood waves, (2) releases from dams, and (3) channels where storage is a consideration. It may also supply the needed flow information for mass-transport simulation. (Woodard-USGS)
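The pentadiagonal system arising at each time step of such a scheme can be solved with a banded solver; the sketch below builds a small diagonally dominant stand-in system rather than the actual continuity-momentum coefficients.

    import numpy as np
    from scipy.linalg import solve_banded

    # The implicit scheme couples each unknown to its neighbors, giving a
    # banded (here pentadiagonal) system A x = rhs at every time step.
    n = 8
    ab = np.zeros((5, n))     # diagonals, ordered upper to lower
    ab[0, 2:] = -0.1          # second superdiagonal
    ab[1, 1:] = -0.5          # first superdiagonal
    ab[2, :] = 4.0            # main diagonal
    ab[3, :-1] = -0.5         # first subdiagonal
    ab[4, :-2] = -0.1         # second subdiagonal

    rhs = np.ones(n)
    x = solve_banded((2, 2), ab, rhs)    # (lower, upper) bandwidths
    print(x)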
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaurov, Alexander A., E-mail: kaurov@uchicago.edu
The methods for studying the epoch of cosmic reionization vary from full radiative transfer simulations to purely analytical models. While numerical approaches are computationally expensive and are not suitable for generating many mock catalogs, analytical methods are based on assumptions and approximations. We explore the interconnection between both methods. First, we ask how the analytical framework of excursion set formalism can be used for statistical analysis of numerical simulations and visual representation of the morphology of ionization fronts. Second, we explore the methods of training the analytical model on a given numerical simulation. We present a new code which emerged from this study. Its main application is to match the analytical model with a numerical simulation. Then, it allows one to generate mock reionization catalogs with volumes exceeding the original simulation quickly and computationally inexpensively, meanwhile reproducing large-scale statistical properties. These mock catalogs are particularly useful for cosmic microwave background polarization and 21 cm experiments, where large volumes are required to simulate the observed signal.
A multiscale MDCT image-based breathing lung model with time-varying regional ventilation
Yin, Youbing; Choi, Jiwoong; Hoffman, Eric A.; Tawhai, Merryn H.; Lin, Ching-Long
2012-01-01
A novel algorithm is presented that links local structural variables (regional ventilation and deforming central airways) to global function (total lung volume) in the lung over three imaged lung volumes, to derive a breathing lung model for computational fluid dynamics simulation. The algorithm constitutes the core of an integrative, image-based computational framework for subject-specific simulation of the breathing lung. For the first time, the algorithm is applied to three multi-detector row computed tomography (MDCT) volumetric lung images of the same individual. A key technique in linking global and local variables over multiple images is an in-house mass-preserving image registration method. Throughout breathing cycles, cubic interpolation is employed to ensure C1 continuity in constructing time-varying regional ventilation at the whole lung level, flow rate fractions exiting the terminal airways, and airway deformation. The imaged exit airway flow rate fractions are derived from regional ventilation with the aid of a three-dimensional (3D) and one-dimensional (1D) coupled airway tree that connects the airways to the alveolar tissue. An in-house parallel large-eddy simulation (LES) technique is adopted to capture turbulent-transitional-laminar flows in both normal and deep breathing conditions. The results obtained by the proposed algorithm when using three lung volume images are compared with those using only one or two volume images. The three-volume-based lung model produces physiologically-consistent time-varying pressure and ventilation distribution. The one-volume-based lung model under-predicts pressure drop and yields un-physiological lobar ventilation. The two-volume-based model can account for airway deformation and non-uniform regional ventilation to some extent, but does not capture the non-linear features of the lung. PMID:23794749
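The cubic interpolation of lung volume across the imaged states can be sketched with a periodic cubic spline; the three invented volumes (plus a repeat of the first to close the breathing cycle) are one plausible reading of the interpolation step, not the authors' code.

    import numpy as np
    from scipy.interpolate import CubicSpline

    # Three imaged lung volumes sampled within a breathing cycle,
    # interpolated with a periodic cubic spline for smooth time-varying
    # boundary conditions. Values are illustrative.
    t_images = np.array([0.0, 1.6, 3.2, 4.8])   # s; last point closes the cycle
    volumes = np.array([2.8, 4.1, 3.3, 2.8])    # L

    spline = CubicSpline(t_images, volumes, bc_type="periodic")

    t = np.linspace(0.0, 4.8, 200)
    flow_rate = spline.derivative()(t)    # smooth flow-rate waveform, L/s
    print(flow_rate.max())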
NASA Technical Reports Server (NTRS)
Bartels, Robert E.
2012-01-01
This paper presents the implementation of gust modeling capability in the CFD code FUN3D. The gust capability is verified by computing the response of an airfoil to a sharp edged gust. This result is compared with the theoretical result. The present simulations will be compared with other CFD gust simulations. This paper also serves as a users manual for FUN3D gust analyses using a variety of gust profiles. Finally, the development of an Auto-Regressive Moving-Average (ARMA) reduced order gust model using a gust with a Gaussian profile in the FUN3D code is presented. ARMA simulated results of a sequence of one-minus-cosine gusts is shown to compare well with the same gust profile computed with FUN3D. Proper Orthogonal Decomposition (POD) is combined with the ARMA modeling technique to predict the time varying pressure coefficient increment distribution due to a novel gust profile. The aeroelastic response of a pitch/plunge airfoil to a gust environment is computed with a reduced order model, and compared with a direct simulation of the system in the FUN3D code. The two results are found to agree very well.
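A one-minus-cosine gust profile of the kind referenced above is simple to generate; the amplitude and duration below are arbitrary, not values from the FUN3D study.

    import numpy as np

    # One-minus-cosine discrete gust profile.
    U0 = 10.0          # peak gust velocity, m/s
    T = 0.5            # gust duration, s

    def gust(t):
        t = np.asarray(t, dtype=float)
        u = 0.5 * U0 * (1.0 - np.cos(2.0 * np.pi * t / T))
        return np.where((t >= 0.0) & (t <= T), u, 0.0)

    t = np.linspace(-0.1, 0.7, 200)
    u_g = gust(t)      # feed as a time-varying inflow perturbation
    print(u_g.max())   # equals U0 at t = T/2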
NASA Astrophysics Data System (ADS)
Marchetti, Luca; Priami, Corrado; Thanh, Vo Hong
2016-07-01
This paper introduces HRSSA (Hybrid Rejection-based Stochastic Simulation Algorithm), a new efficient hybrid stochastic simulation algorithm for spatially homogeneous biochemical reaction networks. HRSSA is built on top of RSSA, an exact stochastic simulation algorithm which relies on propensity bounds to select next reaction firings and to reduce the average number of reaction propensity updates needed during the simulation. HRSSA exploits the computational advantage of propensity bounds to manage time-varying transition propensities and to apply dynamic partitioning of reactions, which constitute the two most significant bottlenecks of hybrid simulation. A comprehensive set of simulation benchmarks is provided for evaluating performance and accuracy of HRSSA against other state of the art algorithms.
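For orientation, the exact SSA that RSSA and HRSSA build upon can be reduced to a few lines for a single bimolecular reaction; propensity bounds, rejection sampling, and hybrid partitioning are omitted, and all rate values are invented.

    import numpy as np

    # Minimal exact SSA (Gillespie direct method) for A + B -> C.
    rng = np.random.default_rng(2)
    x = np.array([1000, 800, 0])      # copy numbers of A, B, C
    k = 1.0e-3                        # stochastic rate constant
    t, t_end = 0.0, 5.0

    while t < t_end:
        a = k * x[0] * x[1]           # propensity of the single reaction
        if a <= 0.0:
            break
        t += rng.exponential(1.0 / a) # time to the next firing
        x += np.array([-1, -1, 1])    # apply the stoichiometry

    print(t, x)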
Heterogeneous multiscale Monte Carlo simulations for gold nanoparticle radiosensitization.
Martinov, Martin P; Thomson, Rowan M
2017-02-01
To introduce the heterogeneous multiscale (HetMS) model for Monte Carlo simulations of gold nanoparticle dose-enhanced radiation therapy (GNPT), a model characterized by its varying levels of detail on different length scales within a single phantom; to apply the HetMS model in two different scenarios relevant for GNPT and to compare computed results with others published. The HetMS model is implemented using an extended version of the EGSnrc user-code egs_chamber; the extended code is tested and verified via comparisons with recently published data from independent GNP simulations. Two distinct scenarios for the HetMS model are then considered: (a) monoenergetic photon beams (20 keV to 1 MeV) incident on a cylinder (1 cm radius, 3 cm length); (b) isotropic point source (brachytherapy source spectra) at the center of a 2.5 cm radius sphere with gold nanoparticles (GNPs) diffusing outwards from the center. Dose enhancement factors (DEFs) are compared for different source energies, depths in phantom, gold concentrations, GNP sizes, and modeling assumptions, as well as with independently published values. Simulation efficiencies are investigated. The HetMS MC simulations account for the competing effects of photon fluence perturbation (due to gold in the scatter media) coupled with enhanced local energy deposition (due to modeling discrete GNPs within subvolumes). DEFs are most sensitive to these effects for the lower source energies, varying with distance from the source; DEFs below unity (i.e., dose decreases, not enhancements) can occur at energies relevant for brachytherapy. For example, in the cylinder scenario, the 20 keV photon source has a DEF of 3.1 near the phantom's surface, decreasing to less than unity by 0.7 cm depth (for 20 mg/g). Compared to discrete modeling of GNPs throughout the gold-containing (treatment) volume, efficiencies are enhanced by up to a factor of 122 with the HetMS approach. For the spherical phantom, DEFs vary with time for diffusion, radionuclide, and radius; DEFs differ considerably from those computed using a widely applied analytic approach. By combining geometric models of varying complexity on different length scales within a single simulation, the HetMS model can effectively account for both macroscopic and microscopic effects which must both be considered for accurate computation of energy deposition and DEFs for GNPT. Efficiency gains with the HetMS approach enable diverse calculations which would otherwise be prohibitively long. The HetMS model may be extended to diverse scenarios relevant for GNPT, providing further avenues for research and development.
Computer simulation analysis of the behavior of renal-regulating hormones during hypogravic stress
NASA Technical Reports Server (NTRS)
Leonard, J. I.
1982-01-01
A computer simulation of a mathematical circulation model is used to study the alterations of body fluids and their electrolyte composition that occur in weightlessness. The behavior of the renal-regulating hormones which control these alterations is compared in simulations of several one-g analogs of weightlessness and space flight. It is shown that the renal-regulating hormones represent a tightly coupled system that responds acutely to volume disturbances and chronically to electrolyte disturbances. During hypogravic conditions these responses lead to an initial suppression of hormone levels and a long-term effect which varies depending on metabolic factors that can alter the plasma electrolytes. In addition, it is found that if pressure effects normalize rapidly, a transition phase may exist which leads to a dynamic multiphasic endocrine response.
Modeling the Impact of Motivation, Personality, and Emotion on Social Behavior
NASA Astrophysics Data System (ADS)
Miller, Lynn C.; Read, Stephen J.; Zachary, Wayne; Rosoff, Andrew
Models seeking to predict human social behavior must contend with multiple sources of individual and group variability that underlie social behavior. One set of interrelated factors that strongly contribute to that variability - motivations, personality, and emotions - has been only minimally incorporated in previous computational models of social behavior. The Personality, Affect, Culture (PAC) framework is a theory-based computational model that addresses this gap. PAC is used to simulate social agents whose social behavior varies according to their personalities and emotions, which, in turn, vary according to their motivations and underlying motive control parameters. Examples involving disease spread and counter-insurgency operations show how PAC can be used to study behavioral variability in different social contexts.
Analyzing Spacecraft Telecommunication Systems
NASA Technical Reports Server (NTRS)
Kordon, Mark; Hanks, David; Gladden, Roy; Wood, Eric
2004-01-01
Multi-Mission Telecom Analysis Tool (MMTAT) is a C-language computer program for analyzing proposed spacecraft telecommunication systems. MMTAT utilizes parameterized input and computational models that can be run on standard desktop computers to perform fast and accurate analyses of telecommunication links. MMTAT is easy to use and can easily be integrated with other software applications and run as part of almost any computational simulation. It is distributed as either a stand-alone application program with a graphical user interface or a linkable library with a well-defined set of application programming interface (API) calls. As a stand-alone program, MMTAT provides both textual and graphical output. The graphs make it possible to understand, quickly and easily, how telecommunication performance varies with variations in input parameters. A delimited text file that can be read by any spreadsheet program is generated at the end of each run. The API in the linkable-library form of MMTAT enables the user to control simulation software and to change parameters during a simulation run. Results can be retrieved either at the end of a run or by use of a function call at any time step.
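The core link-budget arithmetic such a tool evaluates can be sketched as follows; every number is a placeholder of ours, not an MMTAT default or API call.

    import numpy as np

    # Illustrative deep-space link budget in decibel form.
    p_tx_dbw = 10.0            # transmitter power, dBW
    g_tx_db = 25.0             # transmit antenna gain, dBi
    g_rx_db = 60.0             # receive antenna gain, dBi
    f_hz = 8.4e9               # X-band downlink frequency
    d_m = 2.0e11               # spacecraft-to-Earth range, m
    c = 2.998e8                # speed of light, m/s

    # Free-space path loss: 20*log10(4*pi*d*f/c).
    fspl_db = 20.0 * np.log10(4.0 * np.pi * d_m * f_hz / c)
    p_rx_dbw = p_tx_dbw + g_tx_db - fspl_db + g_rx_db
    print(f"received power = {p_rx_dbw:.1f} dBW")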
Miller, Robert T.; Delin, G.N.
1994-01-01
A three-dimensional, anisotropic, nonisothermal, ground-water-flow, and thermal-energy-transport model was constructed to simulate the four short-term test cycles. The model was used to simulate the entire short-term testing period of approximately 400 days. The only model properties varied during model calibration were longitudinal and transverse thermal dispersivities, which, for final calibration, were simulated as 3.3 and 0.33 meters, respectively. The model was calibrated by comparing model-computed results to (1) measured temperatures at selected altitudes in four observation wells, (2) measured temperatures at the production well, and (3) calculated thermal efficiencies of the aquifer. Model-computed withdrawal-water temperatures were within an average of about 3 percent of measured values and model-computed aquifer-thermal efficiencies were within an average of about 5 percent of calculated values for the short-term test cycles. These data indicate that the model accurately simulated thermal-energy storage within the Franconia-Ironton-Galesville aquifer.
Computational Planning in Facial Surgery.
Zachow, Stefan
2015-10-01
This article reflects the research of the last two decades in computational planning for cranio-maxillofacial surgery. Model-guided and computer-assisted surgery planning has developed tremendously due to ever-increasing computational capabilities. Simulators for education, planning, and training of surgery are often compared with flight simulators, where maneuvers are also trained to reduce a possible risk of failure. Meanwhile, digital patient models can be derived from medical image data with astonishing accuracy and thus can serve for model surgery to derive a surgical template model that represents the envisaged result. Computerized surgical planning approaches, however, are often still explorative, meaning that a surgeon tries to find a therapeutic concept based on his or her expertise using computational tools that mimic real procedures. A future perspective for improved computerized planning is that surgical objectives will be generated algorithmically by employing mathematical modeling, simulation, and optimization techniques. Planning systems would thus act as intelligent decision support systems. Surgeons can still use the existing tools to vary the proposed approach, but they mainly focus on how to transfer objectives into reality. Such a development may result in a paradigm shift for future surgery planning.
MODELING THE FATE OF TOXIC ORGANIC MATERIALS IN AQUATIC ENVIRONMENTS
Documentation is given for PEST, a dynamic simulation model for evaluating the fate of toxic organic materials (TOM) in freshwater environments. PEST represents the time-varying concentration (in ppm) of a given TOM in each of as many as 16 carrier compartments; it also computes ...
Numerical Investigation of Flow in an Over-Expanded Nozzle with Porous Surfaces
NASA Technical Reports Server (NTRS)
Elmiligui, Alaa; Abdol-Hamid, K. S.; Hunter, Craig A.
2005-01-01
A new porous condition has been implemented in the PAB3D solver for simulating the flow over porous surfaces. The newly-added boundary condition is utilized to compute the flow field of a non-axisymmetric, convergent-divergent nozzle incorporating porous cavities for shock-boundary layer interaction control. The nozzle has an expansion ratio (exit area/throat area) of 1.797 and a design nozzle pressure ratio of 8.78. The flow fields for a baseline nozzle (no porosity) and for a nozzle with porous surfaces (10% porosity ratio) are computed for NPR varying from 2.01 to 9.54. Computational model results indicate that the over-expanded nozzle flow was dominated by shock-induced boundary-layer separation. Porous configurations were capable of controlling off-design separation in the nozzle by encouraging stable separation of the exhaust flow. Computational simulation results, wall centerline pressure, Mach contours, and thrust efficiency ratio are presented and discussed. Computed results are in excellent agreement with experimental data.
Numerical Investigation of Flow in an Over-expanded Nozzle with Porous Surfaces
NASA Technical Reports Server (NTRS)
Abdol-Hamid, Khaled S.; Elmilingui, Alaa A.; Hunter, Craig A.
2006-01-01
A new porous condition has been implemented in the PAB3D solver for simulating the flow over porous surfaces. The newly-added boundary condition is utilized to compute the flow field of a non-axisymmetric, convergent-divergent nozzle incorporating porous cavities for shock-boundary layer interaction control. The nozzle has an expansion ratio (exit area/throat area) of 1.797 and a design nozzle pressure ratio of 8.78. The flow fields for a baseline nozzle (no porosity) and for a nozzle with porous surfaces (10% porosity ratio) are computed for NPR varying from 2.01 to 9.54. Computational model results indicate that the over-expanded nozzle flow is dominated by shock-induced boundary-layer separation. Porous configurations are capable of controlling off-design separation in the nozzle by encouraging stable separation of the exhaust flow. Computational simulation results, wall centerline pressure, Mach contours, and thrust efficiency ratio are presented and discussed. Computed results are in excellent agreement with experimental data.
Using Adaptive Mesh Refinement to Simulate Storm Surge
NASA Astrophysics Data System (ADS)
Mandli, K. T.; Dawson, C.
2012-12-01
Coastal hazards related to strong storms such as hurricanes and typhoons are among the most frequently recurring and widespread hazards to coastal communities. Storm surges are among the most devastating effects of these storms, and their prediction and mitigation through numerical simulations is of great interest to coastal communities that need to plan for the subsequent rise in sea level during these storms. Unfortunately, these simulations require a large amount of resolution in regions of interest to capture relevant effects, resulting in a computational cost that may be intractable. This problem is exacerbated in situations where a large number of similar runs is needed, such as in design of infrastructure or forecasting with ensembles of probable storms. One solution to address the problem of computational cost is to employ adaptive mesh refinement (AMR) algorithms. AMR functions by decomposing the computational domain into regions which may vary in resolution as time proceeds. Decomposing the domain as the flow evolves makes this class of methods effective at ensuring that computational effort is spent only where it is needed. AMR also allows for placement of computational resolution independent of user interaction and expectation of the dynamics of the flow, as well as particular regions of interest such as harbors. The simulation of many different applications has only been made possible by using AMR-type algorithms, which have allowed otherwise impractical simulations to be performed for much less computational expense. Our work involves studying how storm surge simulations can be improved with AMR algorithms. We have implemented relevant storm surge physics in the GeoClaw package and tested how Hurricane Ike's surge into Galveston Bay and up the Houston Ship Channel compares to available tide gauge data. We will also discuss issues dealing with refinement criteria, optimal resolution and refinement ratios, and inundation.
Testing the Use of Implicit Solvent in the Molecular Dynamics Modelling of DNA Flexibility
NASA Astrophysics Data System (ADS)
Mitchell, J.; Harris, S.
DNA flexibility controls packaging, looping and in some cases sequence specific protein binding. Molecular dynamics simulations carried out with a computationally efficient implicit solvent model are potentially a powerful tool for studying larger DNA molecules than can be currently simulated when water and counterions are represented explicitly. In this work we compare DNA flexibility at the base pair step level modelled using an implicit solvent model to that previously determined from explicit solvent simulations and database analysis. Although much of the sequence dependent behaviour is preserved in implicit solvent, the DNA is considerably more flexible when the approximate model is used. In addition we test the ability of the implicit solvent to model stress induced DNA disruptions by simulating a series of DNA minicircle topoisomers which vary in size and superhelical density. When compared with previously run explicit solvent simulations, we find that while the levels of DNA denaturation are similar using both computational methodologies, the specific structural form of the disruptions is different.
Numerical Simulation of a High-Lift Configuration with Embedded Fluidic Actuators
NASA Technical Reports Server (NTRS)
Vatsa, Veer N.; Casalino, Damiano; Lin, John C.; Appelbaum, Jason
2014-01-01
Numerical simulations have been performed for a vertical tail configuration with deflected rudder. The suction surface of the main element of this configuration is embedded with an array of 32 fluidic actuators that produce oscillating sweeping jets. Such oscillating jets have been found to be very effective for flow control applications in the past. In the current paper, a high-fidelity computational fluid dynamics (CFD) code known as the PowerFLOW(Registered TradeMark) code is used to simulate the entire flow field associated with this configuration, including the flow inside the actuators. The computed results for the surface pressure and integrated forces compare favorably with measured data. In addition, numerical solutions predict the correct trends in forces with active flow control compared to the no control case. Effect of varying yaw and rudder deflection angles are also presented. In addition, computations have been performed at a higher Reynolds number to assess the performance of fluidic actuators at flight conditions.
Physically-Based Modelling and Real-Time Simulation of Fluids.
NASA Astrophysics Data System (ADS)
Chen, Jim Xiong
1995-01-01
Simulating physically realistic complex fluid behaviors presents an extremely challenging problem for computer graphics researchers. Such behaviors include the effects of driving boats through water, blending differently colored fluids, rain falling and flowing on a terrain, fluids interacting in a Distributed Interactive Simulation (DIS), etc. Such capabilities are useful in computer art, advertising, education, entertainment, and training. We present a new method for physically-based modeling and real-time simulation of fluids in computer graphics and dynamic virtual environments. By solving the 2D Navier-Stokes equations using a CFD method, we map the surface into 3D using the corresponding pressures in the fluid flow field. This achieves realistic real-time fluid surface behaviors by employing the physical governing laws of fluids but avoiding extensive 3D fluid dynamics computations. To complement the surface behaviors, we calculate fluid volume and external boundary changes separately to achieve full 3D general fluid flow. To simulate physical activities in a DIS, we introduce a mechanism which uses a uniform time scale proportional to the clock-time and variable time-slicing to synchronize physical models such as fluids in the networked environment. Our approach can simulate many different fluid behaviors by changing the internal or external boundary conditions. It can model different kinds of fluids by varying the Reynolds number. It can simulate objects moving or floating in fluids. It can also produce synchronized general fluid flows in a DIS. Our model can serve as a testbed to simulate many other fluid phenomena which have never been successfully modeled previously.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dolly, S; University of Missouri, Columbia, MO; Chen, H
Purpose: Local noise power spectrum (NPS) properties are significantly affected by calculation variables and CT acquisition and reconstruction parameters, but a thoughtful analysis of these effects is absent. In this study, we performed a complete analysis of the effects of calculation and imaging parameters on the NPS. Methods: The uniformity module of a Catphan phantom was scanned with a Philips Brilliance 64-slice CT simulator using various scanning protocols. Images were reconstructed using both FBP and iDose4 reconstruction algorithms. From these images, local NPS were calculated for regions of interest (ROI) of varying locations and sizes, using four image background removal methods. Additionally, using a predetermined ground truth, NPS calculation accuracy for various calculation parameters was compared for computer simulated ROIs. A complete analysis of the effects of calculation, acquisition, and reconstruction parameters on the NPS was conducted. Results: The local NPS varied with ROI size and image background removal method, particularly at low spatial frequencies. The image subtraction method was the most accurate according to the computer simulation study, and was also the most effective at removing low frequency background components in the acquired data. However, first-order polynomial fitting using residual sum of squares and principal component analysis provided comparable accuracy under certain situations. Similar general trends were observed when comparing the NPS for FBP to that of iDose4 while varying other calculation and scanning parameters. However, while iDose4 reduces the noise magnitude compared to FBP, this reduction is spatial-frequency dependent, further affecting NPS variations at low spatial frequencies. Conclusion: The local NPS varies significantly depending on calculation parameters, image acquisition parameters, and reconstruction techniques. Appropriate local NPS calculation should be performed to capture spatial variations of noise; calculation methodology should be selected with consideration of image reconstruction effects and the desired purpose of CT simulation for radiotherapy tasks.
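A bare-bones local NPS estimate using the image-subtraction background removal found most accurate above can be sketched as follows; white-noise arrays stand in for real CT ROIs, and the pixel size and ROI count are invented.

    import numpy as np

    # Subtract two repeat-scan ROIs so the structured background cancels,
    # then average periodograms: NPS = (dx*dy / (Nx*Ny)) * <|DFT(ROI)|^2>.
    dx = dy = 0.5                      # pixel size, mm
    n = 64                             # ROI side length, pixels
    rng = np.random.default_rng(3)
    rois_a = rng.normal(size=(20, n, n))   # stand-ins for repeat-scan ROIs
    rois_b = rng.normal(size=(20, n, n))

    diff = (rois_a - rois_b) / np.sqrt(2.0)   # subtraction doubles variance
    periodograms = np.abs(np.fft.fft2(diff)) ** 2
    nps = periodograms.mean(axis=0) * dx * dy / (n * n)
    print(nps.mean())   # ~ noise variance * dx * dy = 0.25 here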
Illumination discrimination in real and simulated scenes
Radonjić, Ana; Pearce, Bradley; Aston, Stacey; Krieger, Avery; Dubin, Hilary; Cottaris, Nicolas P.; Brainard, David H.; Hurlbert, Anya C.
2016-01-01
Characterizing humans' ability to discriminate changes in illumination provides information about the visual system's representation of the distal stimulus. We have previously shown that humans are able to discriminate illumination changes and that sensitivity to such changes depends on their chromatic direction. Probing illumination discrimination further would be facilitated by the use of computer-graphics simulations, which would, in practice, enable a wider range of stimulus manipulations. There is no a priori guarantee, however, that results obtained with simulated scenes generalize to real illuminated scenes. To investigate this question, we measured illumination discrimination in real and simulated scenes that were well-matched in mean chromaticity and scene geometry. Illumination discrimination thresholds were essentially identical for the two stimulus types. As in our previous work, these thresholds varied with illumination change direction. We exploited the flexibility offered by the use of graphics simulations to investigate whether the differences across direction are preserved when the surfaces in the scene are varied. We show that varying the scene's surface ensemble in a manner that also changes mean scene chromaticity modulates the relative sensitivity to illumination changes along different chromatic directions. Thus, any characterization of sensitivity to changes in illumination must be defined relative to the set of surfaces in the scene. PMID:28558392
Shock Interaction with Random Spherical Particle Beds
NASA Astrophysics Data System (ADS)
Neal, Chris; Mehta, Yash; Salari, Kambiz; Jackson, Thomas L.; Balachandar, S. "Bala"; Thakur, Siddharth
2016-11-01
In this talk we present results on fully resolved simulations of shock interaction with a randomly distributed bed of particles. Multiple simulations were carried out by varying the number of particles to isolate the effect of volume fraction. The major focus of these simulations was to understand (1) the effect of the shockwave and volume fraction on the forces experienced by the particles, (2) the effect of particles on the shock wave, and (3) fluid-mediated particle-particle interactions. Peak drag forces for particles at different volume fractions show a downward trend as the depth of the bed increases. This can be attributed to dissipation of energy as the shockwave travels through the bed of particles. One of the fascinating observations from these simulations was the fluctuations in different quantities due to the presence of multiple particles and their random distribution. These are large simulations with hundreds of particles, resulting in a large amount of data. We present statistical analysis of the data and make relevant observations. Average pressure in the computational domain is computed to characterize the strengths of the reflected and transmitted waves. We also present flow field contour plots to support our observations. U.S. Department of Energy, National Nuclear Security Administration, Advanced Simulation and Computing Program, as a Cooperative Agreement under the Predictive Science Academic Alliance Program, under Contract No. DE-NA0002378.
The Variation of Slat Noise with Mach and Reynolds Numbers
NASA Technical Reports Server (NTRS)
Lockard, David P.; Choudhari, Meelan M.
2011-01-01
The slat noise from the 30P30N high-lift system has been computed using a computational fluid dynamics code in conjunction with a Ffowcs Williams-Hawkings solver. By varying the Mach number from 0.13 to 0.25, the noise was found to vary roughly with the 5th power of the speed. Slight changes in the behavior with directivity angle could easily account for the different speed dependencies reported in the literature. Varying the Reynolds number from 1.4 to 2.4 million resulted in almost no differences, and primarily served to demonstrate the repeatability of the results. However, changing the underlying hybrid Reynolds-averaged-Navier-Stokes/Large-Eddy-Simulation turbulence model significantly altered the mean flow because of changes in the flap separation. Nevertheless, the general trends observed in both the acoustics and near-field fluctuations were similar for both models.
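The speed exponent can be recovered from levels at several Mach numbers with a log-log fit; the data below are invented to be roughly consistent with the reported 5th-power scaling, not the study's measurements.

    import numpy as np

    # Fit a line to log10(intensity) versus log10(Mach number).
    mach = np.array([0.13, 0.17, 0.21, 0.25])
    spl_db = np.array([70.0, 75.4, 80.0, 83.8])     # hypothetical levels

    intensity_log10 = spl_db / 10.0                 # SPL = 10*log10(I/I0)
    exponent = np.polyfit(np.log10(mach), intensity_log10, 1)[0]
    print(f"speed exponent ~ {exponent:.1f}")       # close to 5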
Statistical computation of tolerance limits
NASA Technical Reports Server (NTRS)
Wheeler, J. T.
1993-01-01
Based on a new theory, two computer codes were developed specifically to calculate the exact statistical tolerance limits for normal distributions with unknown means and variances for the one-sided and two-sided cases of the tolerance factor, k. The quantity k is defined equivalently in terms of the noncentral t-distribution by the probability equation. Two of the four mathematical methods employ the theory developed for the numerical simulation. Several algorithms for numerically integrating and iteratively root-solving the working equations are written to augment the program simulation. The program codes generate tables of k's associated with varying values of the proportion and sample size for each given probability to show the accuracy obtained for small sample sizes.
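For the one-sided case, the exact tolerance factor follows directly from the noncentral t-distribution; the SciPy sketch below illustrates it, while the two-sided case requires the iterative root-solving described above and is omitted.

    import numpy as np
    from scipy.stats import norm, nct

    # Exact one-sided tolerance factor:
    # k = t'_{conf}(df = n - 1, nc = z_p * sqrt(n)) / sqrt(n).
    def k_one_sided(n, proportion, confidence):
        nc = norm.ppf(proportion) * np.sqrt(n)
        return nct.ppf(confidence, df=n - 1, nc=nc) / np.sqrt(n)

    # Example: cover 90% of the population with 95% confidence from a
    # sample of 10; xbar + k*s is then the upper tolerance limit.
    print(k_one_sided(10, 0.90, 0.95))    # about 2.35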
NASA Technical Reports Server (NTRS)
Karmali, M. S.; Phatak, A. V.
1982-01-01
Results of a study to investigate, by means of a computer simulation, the performance sensitivity of helicopter IMC DSAL operations as a function of navigation system parameters are presented. A mathematical model representing generically a navigation system is formulated. The scenario simulated consists of a straight-in helicopter approach to landing along a 6 deg glideslope. The deceleration magnitude chosen is 0.3g. The navigation model parameters are varied and the statistics of the total system errors (TSE) computed. These statistics are used to determine the critical navigation system parameters that affect the performance of the closed-loop navigation, guidance, and control system of a UH-1H helicopter.
Ab Initio Molecular-Dynamics Simulation of Neuromorphic Computing in Phase-Change Memory Materials.
Skelton, Jonathan M; Loke, Desmond; Lee, Taehoon; Elliott, Stephen R
2015-07-08
We present an in silico study of the neuromorphic-computing behavior of the prototypical phase-change material, Ge2Sb2Te5, using ab initio molecular-dynamics simulations. Stepwise changes in structural order in response to temperature pulses of varying length and duration are observed, and a good reproduction of the spike-timing-dependent plasticity observed in nanoelectronic synapses is demonstrated. Short above-melting pulses lead to instantaneous loss of structural and chemical order, followed by delayed partial recovery upon structural relaxation. We also investigate the link between structural order and electrical and optical properties. These results pave the way toward a first-principles understanding of phase-change physics beyond binary switching.
A polymorphic reconfigurable emulator for parallel simulation
NASA Technical Reports Server (NTRS)
Parrish, E. A., Jr.; Mcvey, E. S.; Cook, G.
1980-01-01
Microprocessor and arithmetic support chip technology was applied to the design of a reconfigurable emulator for real time flight simulation. The system developed consists of a master control system to perform all man-machine interactions and to configure the hardware to emulate a given aircraft, and numerous slave compute modules (SCM) which comprise the parallel computational units. It is shown that all parts of the state equations can be worked on simultaneously but that the algebraic equations cannot (unless they are slowly varying). Attempts to obtain algorithms that will allow parallel updates are reported. The word length and step size to be used in the SCMs is determined, and the architecture of the hardware and software is described.
Visual Complexity in Orthographic Learning: Modeling Learning across Writing System Variations
ERIC Educational Resources Information Center
Chang, Li-Yun; Plaut, David C.; Perfetti, Charles A.
2016-01-01
The visual complexity of orthographies varies across writing systems. Prior research has shown that complexity strongly influences the initial stage of reading development: the perceptual learning of grapheme forms. This study presents a computational simulation that examines the degree to which visual complexity leads to grapheme learning…
Wildfire simulation using LES with synthetic-velocity SGS models
NASA Astrophysics Data System (ADS)
McDonough, J. M.; Tang, Tingting
2016-11-01
Wildland fires are becoming more prevalent and intense worldwide as climate change leads to warmer, drier conditions, and large-eddy simulation (LES) is receiving increasing attention for fire spread predictions as computing power continues to improve. We report results from wildfire simulations over general terrain employing implicit LES for solution of the incompressible Navier-Stokes (N.-S.) and thermal energy equations with Boussinesq approximation, altered with Darcy, Forchheimer and Brinkman extensions, to represent forested regions as porous media with varying (in both space and time) porosity and permeability. We focus on subgrid-scale (SGS) behaviors computed with a synthetic-velocity model, a discrete dynamical system based on the poor man's N.-S. equations, and investigate the ability of this model to produce fire whirls (tornadoes of fire) at the (unresolved) SGS level.
A method for computing ion energy distributions for multifrequency capacitive discharges
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Alan C. F.; Lieberman, M. A.; Verboncoeur, J. P.
2007-03-01
The ion energy distribution (IED) at a surface is an important parameter for processing in multiple radio frequency driven capacitive discharges. An analytical model is developed for the IED in a low pressure discharge based on a linear transfer function that relates the time-varying sheath voltage to the time-varying ion energy response at the surface. This model is in good agreement with particle-in-cell simulations over a wide range of single, dual, and triple frequency driven capacitive discharge excitations.
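One way to sketch the transfer-function idea is to low-pass filter the RF sheath voltage with an assumed ion response time and histogram the result over many cycles; the filter form and all parameters below are illustrative assumptions, not the paper's model coefficients.

    import numpy as np

    # Ions with finite transit time respond to a filtered version of the RF
    # sheath voltage; the histogram of that response over a cycle gives the
    # characteristic bimodal IED.
    f_rf = 13.56e6                    # drive frequency, Hz
    tau_ion = 0.3 / f_rf              # assumed effective ion response time, s
    dt = 1.0 / (f_rf * 400.0)
    t = np.arange(0, 50 / f_rf, dt)

    v_sheath = 300.0 + 250.0 * np.sin(2.0 * np.pi * f_rf * t)   # volts

    # First-order low-pass filter standing in for the linear transfer function.
    e_ion = np.empty_like(v_sheath)
    e_ion[0] = v_sheath[0]
    a = dt / (tau_ion + dt)
    for i in range(1, len(t)):
        e_ion[i] = e_ion[i - 1] + a * (v_sheath[i] - e_ion[i - 1])

    hist, edges = np.histogram(e_ion, bins=100)   # bimodal, saddle-shaped IED
    print(edges[np.argmax(hist)])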
Kuo, Alexander S; Vijjeswarapu, Mary A; Philip, James H
2016-03-01
Inhaled induction with spontaneous respiration is a technique used for difficult airways. One of the proposed advantages is that if airway patency is lost, the anesthetic agent will spontaneously redistribute until anesthetic depth is reduced and airway patency can be recovered. There are few and conflicting clinical or experimental data regarding the kinetics of this anesthetic technique. We used computer simulation to investigate this situation. We used GasMan, a computer simulation of inhaled anesthetic kinetics. For each simulation, alveolar ventilation was initiated with a set anesthetic induction concentration. When the vessel-rich group level reached the simulation-specified airway obstruction threshold, alveolar ventilation was set at 0 to simulate complete airway obstruction. The time until the vessel-rich group anesthetic level decreased below the airway obstruction threshold was designated time to spontaneous recovery. We varied the parameters for each simulation, exploring the use of sevoflurane and halothane, airway obstruction threshold from 0.5 to 2 minimum alveolar concentration (MAC), anesthetic induction concentration 2 to 4 MAC sevoflurane and 4 to 6 MAC halothane, cardiac output 2.5 to 10 L/min, functional residual capacity 1.5 to 3.5 L, and relative vessel-rich group perfusion 67% to 85%. In each simulation, there were 3 general phases: anesthetic wash-in, obstruction and overshoot, and then slow redistribution. During the first 2 phases, there was a large gradient between the alveolar and vessel-rich group. Alveolar levels do not reflect vessel-rich group anesthetic levels until the late third phase. Time to spontaneous recovery varied between 35 and 749 seconds for sevoflurane and 13 and 222 seconds for halothane, depending on the simulation parameters. Halothane had a faster time to spontaneous recovery because of the lower alveolar gradient and less overshoot of the vessel-rich group, not faster redistribution. Higher airway obstruction thresholds, decreased anesthetic induction concentration, and higher cardiac output reduced time to spontaneous recovery. To a lesser extent, decreased functional residual capacity and decreased relative vessel-rich group perfusion also reduced the time to spontaneous recovery. Spontaneous recovery after complete airway obstruction during inhaled induction is plausible, but the recovery time is highly variable and depends on the clinical and physiologic situation. These results emphasize that induction is a non-steady-state situation; thus, effect-site anesthetic levels should be modeled in future research, not alveolar concentration. Finally, this study provides an example of using computer simulation to explore situations that are difficult to investigate clinically.
NASA Astrophysics Data System (ADS)
Zhu, Zichen; Wang, Yongzhi; Bian, Shuhua; Hu, Zejian; Liu, Jianqiang; Liu, Lejun
2017-11-01
We modified the sediment incipient motion in a numerical model and evaluated the impact of this modification using a study case of the coastal area around Weihai, China. The modified and unmodified versions of the model were validated by comparing simulated and observed data of currents, waves, and suspended sediment concentrations (SSC) measured from July 25th to July 26th, 2006. A fitted Shields diagram was introduced into the sediment model so that the critical erosional shear stress could vary with time. Thus, the simulated SSC patterns were improved to more closely reflect the observed values, so that the relative error of the variation range decreased by up to 34.5% and the relative error of the simulated temporally averaged SSC decreased by up to 36%. In the modified model, the critical shear stress values of the simulated silt with a diameter of 0.035 mm and mud with a diameter of 0.004 mm varied from 0.05 to 0.13 N/m2 and from 0.05 to 0.14 N/m2, respectively, instead of remaining constant as in the unmodified model. In addition, a method of applying spatially varying fractions of the mixed grain size sediment improved the simulated SSC distribution to fit the remote sensing map better and reproduced the zonal area with high SSC between Heini Bay and the erosion groove in the modified model. The Relative Mean Absolute Error was reduced by between 6% and 79%, depending on the regional attributes, when we used the modified method to simulate incipient sediment motion. The modification achieved this higher accuracy at the cost of a 1.52% decrease in computation speed.
NASA Astrophysics Data System (ADS)
Lin, Mingpei; Xu, Ming; Fu, Xiaoyu
2017-05-01
Currently, a tremendous amount of space debris in Earth's orbit imperils operational spacecraft. It is essential to undertake risk assessments of collisions and predict dangerous encounters in space. However, collision predictions for an enormous amount of space debris give rise to large-scale computations. In this paper, a parallel algorithm is established on the Compute Unified Device Architecture (CUDA) platform of NVIDIA Corporation for collision prediction. According to the parallel structure of NVIDIA graphics processors, a block decomposition strategy is adopted in the algorithm. Space debris is divided into batches, and the computation and data transfer operations of adjacent batches overlap. As a consequence, the latency to access shared memory during the entire computing process is significantly reduced, and a higher computing speed is reached. Theoretically, a simulation of collision prediction for space debris of any amount and for any time span can be executed. To verify this algorithm, a simulation example including 1382 pieces of debris, whose operational time scales vary from 1 min to 3 days, is conducted on an NVIDIA Tesla C2075. The simulation results demonstrate that with the same computational accuracy as that of a CPU, the computing speed of the parallel algorithm on a GPU is 30 times that on a CPU. Based on this algorithm, collision prediction of over 150 Chinese spacecraft for a time span of 3 days can be completed in less than 3 h on a single computer, which meets the timeliness requirement of the initial screening task. Furthermore, the algorithm can be adapted for multiple tasks, including particle filtration, constellation design, and Monte-Carlo simulation of an orbital computation.
NASA Technical Reports Server (NTRS)
Taylor, B. K.; Casasent, D. P.
1989-01-01
The use of simplified error models to accurately simulate and evaluate the performance of an optical linear-algebra processor is described. The optical architecture used to perform banded matrix-vector products is reviewed, along with a linear dynamic finite-element case study. The laboratory hardware and ac-modulation technique used are presented. The individual processor error-source models and their simulator implementation are detailed. Several significant simplifications are introduced to ease the computational requirements and complexity of the simulations. The error models are verified with a laboratory implementation of the processor, and are used to evaluate its potential performance.
Simulation of radar reflectivity and surface measurements of rainfall
NASA Technical Reports Server (NTRS)
Chandrasekar, V.; Bringi, V. N.
1987-01-01
Raindrop size distributions (RSDs) are often estimated using surface raindrop sampling devices (e.g., disdrometers) or optical array (2D-PMS) probes. A number of authors have used these measured distributions to compute certain higher-order RSD moments that correspond to radar reflectivity, attenuation, optical extinction, etc. Scatter plots of these RSD moments versus disdrometer-measured rainrates are then used to deduce physical relationships between radar reflectivity, attenuation, etc., which are measured by independent instruments (e.g., radar), and rainrate. In this paper RSDs of the gamma form as well as radar reflectivity (via time series simulation) are simulated to study the correlation structure of radar estimates versus rainrate as opposed to RSD moment estimates versus rainrate. The parameters N0, D0 and m of a gamma distribution are varied over the range normally found in rainfall, as well as varying the device sampling volume. The simulations are used to explain some possible features related to discrepancies which can arise when radar rainfall measurements are compared with surface or aircraft-based sampling devices.
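A hedged sketch of the sampling experiment follows: drops are Poisson-sampled from a gamma RSD, and the scatter of a reflectivity-like (6th) moment against a rainrate-like (~3.67th) moment is examined as the sampling volume varies. The DSD constants and the 3.67 exponent are illustrative assumptions, not the paper's values.

import numpy as np
from scipy.special import gamma as gamma_fn

rng = np.random.default_rng(1)

# Gamma RSD: N(D) = N0 * D**mu * exp(-lam*D), with D in mm.
N0, D0, mu = 8000.0, 1.5, 2.0          # intercept, median volume diameter (mm), shape
lam = (3.67 + mu) / D0                 # slope (1/mm), standard gamma-DSD relation
n_total = N0 * gamma_fn(mu + 1) / lam ** (mu + 1)   # mean drop count per m^3

def moment_estimates(volume, trials=1000):
    """Per-trial estimates of the 6th (reflectivity) and 3.67th (~rainrate) moments."""
    z, r = np.empty(trials), np.empty(trials)
    for k in range(trials):
        n = rng.poisson(n_total * volume)           # drops caught in this volume
        d = rng.gamma(mu + 1, 1.0 / lam, size=n)    # their diameters, mm
        z[k] = np.sum(d ** 6) / volume
        r[k] = np.sum(d ** 3.67) / volume
    return z, r

for vol in (0.1, 1.0, 10.0):           # sampling volume, m^3
    z, r = moment_estimates(vol)
    rho = np.corrcoef(z, r)[0, 1]
    print(f"V = {vol:5.1f} m^3: corr(Z-hat, R-hat) = {rho:.3f}, "
          f"CV(Z-hat) = {z.std() / z.mean():.2f}")

Shrinking the sampling volume inflates the scatter of the high-order moment estimates, which is the kind of device-dependent discrepancy the paper investigates.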
NASA Astrophysics Data System (ADS)
Daya Sagar, B. S.
2005-01-01
Spatio-temporal patterns of small water bodies (SWBs) under the influence of temporally varied stream flow discharge are simulated in discrete space by employing geomorphologically realistic expansion and contraction transformations. Cascades of expansion-contraction are systematically performed by synchronizing them with stream flow discharge simulated via the logistic map. Templates with definite characteristic information are defined from the stream flow discharge pattern as the basis to model the spatio-temporal organization of randomly situated surface water bodies of various sizes and shapes. These spatio-temporal patterns under varied parameters (λs) controlling stream flow discharge patterns are characterized by estimating their fractal dimensions. At various values of the nonlinear control parameter λ, we show that the union of the boundaries of the water bodies, which traverses the water-body and non-water-body spaces, acts as a geomorphic attractor. The computed fractal dimensions of these attractors are 1.58, 1.53, 1.78, 1.76, 1.84, and 1.90, respectively, at λ values of 1, 2, 3, 3.46, 3.57, and 3.99. These values are in line with general visual observations.
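For reference, the logistic-map discharge driver is easy to reproduce; the sketch below iterates the map at the λ values reported above. How discharge values are mapped onto expansion or contraction templates is the paper's construction and is not reproduced here.

def logistic_series(lam, x0=0.5, n=100, burn=500):
    """Iterate x -> lam * x * (1 - x) and return n post-transient values."""
    x = x0
    for _ in range(burn):          # discard transients
        x = lam * x * (1.0 - x)
    out = []
    for _ in range(n):
        x = lam * x * (1.0 - x)
        out.append(x)
    return out

for lam in (1.0, 2.0, 3.0, 3.46, 3.57, 3.99):
    xs = logistic_series(lam)
    print(f"lambda = {lam}: min = {min(xs):.3f}, max = {max(xs):.3f}")

The progression from a fixed point (λ = 1, 2) through period doubling (λ = 3, 3.46) to chaos (λ = 3.57, 3.99) gives increasingly irregular discharge sequences to drive the cascades.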
Simulation of Cold Flow in a Truncated Ideal Nozzle with Film Cooling
NASA Technical Reports Server (NTRS)
Braman, K. E.; Ruf, J. H.
2015-01-01
Flow transients during rocket start-up and shut-down can lead to significant side loads on rocket nozzles. The capability to estimate these side loads computationally can streamline the nozzle design process. Towards this goal, the flow in a truncated ideal contour (TIC) nozzle has been simulated using RANS and URANS for a range of nozzle pressure ratios (NPRs) aimed to match a series of cold flow experiments performed at the NASA MSFC Nozzle Test Facility. These simulations were performed with varying turbulence model choices and for four approximations of the supersonic film injection geometry, each of which was created with a different simplification of the test article geometry. The results show that although a reasonable match to experiment can be obtained with varying levels of geometric fidelity, the modeling choices made do not fully represent the physics of flow separation in a TIC nozzle with film cooling.
Multinomial mixture model with heterogeneous classification probabilities
Holland, M.D.; Gray, B.R.
2011-01-01
Royle and Link (Ecology 86(9):2505-2512, 2005) proposed an analytical method that allowed estimation of multinomial distribution parameters and classification probabilities from categorical data measured with error. While useful, we demonstrate algebraically and by simulations that this method yields biased multinomial parameter estimates when the probabilities of correct category classifications vary among sampling units. We address this shortcoming by treating these probabilities as logit-normal random variables within a Bayesian framework. We use Markov chain Monte Carlo to compute Bayes estimates from a simulated sample from the posterior distribution. Based on simulations, this elaborated Royle-Link model yields nearly unbiased estimates of multinomial parameters and correct classification probabilities when classification probabilities are allowed to vary according to the normal distribution on the logit scale or according to the beta distribution. The method is illustrated using categorical submersed aquatic vegetation data. © 2010 Springer Science+Business Media, LLC.
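The data-generating process at issue can be sketched as follows: each sampling unit draws its own correct-classification probability from a logit-normal distribution, and misclassified observations are spread over the remaining categories (evenly here, which is a simplifying assumption, as are all constants).

import numpy as np

rng = np.random.default_rng(7)

p_true = np.array([0.5, 0.3, 0.2])     # multinomial category probabilities
mu, sigma = 1.5, 1.0                   # logit-scale mean/sd of P(correct)
n_units, n_per_unit = 200, 50

counts = np.zeros(3)
for _ in range(n_units):
    theta = 1.0 / (1.0 + np.exp(-rng.normal(mu, sigma)))   # unit-level P(correct)
    true_cat = rng.choice(3, size=n_per_unit, p=p_true)
    correct = rng.random(n_per_unit) < theta
    obs = true_cat.copy()
    for i in np.where(~correct)[0]:                        # misclassify the rest
        others = [c for c in range(3) if c != true_cat[i]]
        obs[i] = rng.choice(others)
    counts += np.bincount(obs, minlength=3)

print("observed category fractions:", counts / counts.sum())
print("true category probabilities:", p_true)

Fitting a constant-θ model to data generated this way is what produces the bias the authors correct.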
A new hybrid-Lagrangian numerical scheme for gyrokinetic simulation of tokamak edge plasma
Ku, S.; Hager, R.; Chang, C. S.; ...
2016-04-01
In order to enable kinetic simulation of non-thermal edge plasmas at a reduced computational cost, a new hybrid-Lagrangian δf scheme has been developed that utilizes the phase space grid in addition to the usual marker particles, taking advantage of the computational strengths of both sides. The new scheme splits the particle distribution function of a kinetic equation into two parts. Marker particles contain the fast space-time varying, δf, part of the distribution function, and the coarse-grained phase-space grid contains the slow space-time varying part. The coarse-grained phase-space grid reduces the memory requirement and the computing cost, while the marker particles provide scalable computing ability for the fine-grained physics. Weights of the marker particles are determined by a direct weight evolution equation instead of the differential form weight evolution equations that conventional delta-f schemes use. The particle weight can be slowly transferred to the phase space grid, thereby reducing the growth of the particle weights. The non-Lagrangian part of the kinetic equation – e.g., collision operation, ionization, charge exchange, heat-source, radiative cooling, and others – can be operated directly on the phase space grid. Deviation of the particle distribution function on the velocity grid from a Maxwellian distribution function – driven by ionization, charge exchange and wall loss – is allowed to be arbitrarily large. The numerical scheme is implemented in the gyrokinetic particle code XGC1, which specializes in simulating the tokamak edge plasma that crosses the magnetic separatrix and is in contact with the material wall.
Quantum Computing Architectural Design
NASA Astrophysics Data System (ADS)
West, Jacob; Simms, Geoffrey; Gyure, Mark
2006-03-01
Large scale quantum computers will invariably require scalable architectures in addition to high fidelity gate operations. Quantum computing architectural design (QCAD) addresses the problems of actually implementing fault-tolerant algorithms given physical and architectural constraints beyond those of basic gate-level fidelity. Here we introduce a unified framework for QCAD that enables the scientist to study the impact of varying error correction schemes, architectural parameters including layout and scheduling, and physical operations native to a given architecture. Our software package, aptly named QCAD, provides compilation, manipulation/transformation, multi-paradigm simulation, and visualization tools. We demonstrate various features of the QCAD software package through several examples.
NASA Astrophysics Data System (ADS)
Hardie, Russell C.; Power, Jonathan D.; LeMaster, Daniel A.; Droege, Douglas R.; Gladysz, Szymon; Bose-Pillai, Santasri
2017-07-01
We present a numerical wave propagation method for simulating imaging of an extended scene under anisoplanatic conditions. While isoplanatic simulation is relatively common, few tools are specifically designed for simulating the imaging of extended scenes under anisoplanatic conditions. We provide a complete description of the proposed simulation tool, including the wave propagation method used. Our approach computes an array of point spread functions (PSFs) for a two-dimensional grid on the object plane. The PSFs are then used in a spatially varying weighted sum operation, with an ideal image, to produce a simulated image with realistic optical turbulence degradation. The degradation includes spatially varying warping and blurring. To produce the PSF array, we generate a series of extended phase screens. Simulated point sources are numerically propagated from an array of positions on the object plane, through the phase screens, and ultimately to the focal plane of the simulated camera. Note that the optical path for each PSF will be different, and thus, pass through a different portion of the extended phase screens. These different paths give rise to a spatially varying PSF to produce anisoplanatic effects. We use a method for defining the individual phase screen statistics that we have not seen used in previous anisoplanatic simulations. We also present a validation analysis. In particular, we compare simulated outputs with the theoretical anisoplanatic tilt correlation and a derived differential tilt variance statistic. This is in addition to comparing the long- and short-exposure PSFs and isoplanatic angle. We believe this analysis represents the most thorough validation of an anisoplanatic simulation to date. The current work is also unique in that we simulate and validate both constant and varying Cn2(z) profiles. Furthermore, we simulate sequences with both temporally independent and temporally correlated turbulence effects. Temporal correlation is introduced by generating even larger extended phase screens and translating this block of screens in front of the propagation area. Our validation analysis shows an excellent match between the simulation statistics and the theoretical predictions. Thus, we think this tool can be used effectively to study optical anisoplanatic turbulence and to aid in the development of image restoration methods.
A mathematical model for computer image tracking.
Legters, G R; Young, T Y
1982-06-01
A mathematical model using an operator formulation for a moving object in a sequence of images is presented. Time-varying translation and rotation operators are derived to describe the motion. A variational estimation algorithm is developed to track the dynamic parameters of the operators. The occlusion problem is alleviated by using a predictive Kalman filter to keep the tracking on course during severe occlusion. The tracking algorithm (variational estimation in conjunction with Kalman filter) is implemented to track moving objects with occasional occlusion in computer-simulated binary images.
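The predictive-filter idea can be sketched with a standard constant-velocity Kalman filter that simply coasts on its prediction while measurements are unavailable; the motion model and noise levels below are assumptions, not the paper's operator formulation.

import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
Q = 0.01 * np.eye(4)                   # process noise
R = 1.0 * np.eye(2)                    # measurement noise

x, P = np.zeros(4), 10.0 * np.eye(4)   # state estimate and covariance
rng = np.random.default_rng(0)
truth = np.zeros(4)
truth[2:] = [1.0, 0.5]                 # constant true velocity

for frame in range(30):
    truth[:2] += dt * truth[2:]
    occluded = 10 <= frame < 15        # simulated severe occlusion
    x, P = F @ x, F @ P @ F.T + Q      # predict
    if not occluded:                   # update only when the object is visible
        z = truth[:2] + rng.normal(0.0, 1.0, size=2)
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P
    print(f"frame {frame:2d} {'occluded' if occluded else 'visible '} "
          f"err = {np.linalg.norm(x[:2] - truth[:2]):.2f}")

During the occluded frames the covariance grows but the position estimate keeps tracking on the predicted course, which is exactly the role the abstract assigns to the Kalman filter.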
NASA Astrophysics Data System (ADS)
Engelhardt, Larry
2015-12-01
We discuss how computers can be used to solve the ordinary differential equations that provide a quantum mechanical description of magnetic resonance. By varying the parameters in these equations and visually exploring how these parameters affect the results, students can quickly gain insights into the nature of magnetic resonance that go beyond the standard presentation found in quantum mechanics textbooks. The results were generated using an IPython notebook, which we provide as an online supplement with interactive plots and animations.
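In the same spirit (though this is not the paper's notebook), one can integrate the classical Bloch equation dM/dt = γ M × B(t) for a static field plus an oscillating transverse field and scan the drive frequency through resonance; units are chosen so the Larmor frequency is 1, and the field amplitudes are illustrative.

import numpy as np
from scipy.integrate import solve_ivp

GAMMA, B0, B1 = 1.0, 1.0, 0.02          # gyromagnetic ratio, static and drive fields

def bloch(t, m, w_drive):
    """dM/dt = gamma * M x B(t), with an oscillating transverse drive field."""
    b = np.array([B1 * np.cos(w_drive * t), 0.0, B0])
    return GAMMA * np.cross(m, b)

for w in (0.90, 0.95, 1.00, 1.05, 1.10):      # drive frequencies near resonance
    sol = solve_ivp(bloch, (0.0, 400.0), [0.0, 0.0, 1.0],
                    args=(w,), max_step=0.05, rtol=1e-8)
    print(f"drive frequency {w:.2f}: min Mz = {sol.y[2].min():+.2f}")

On resonance the longitudinal component inverts fully over a Rabi period, while detuning by a few Rabi widths leaves it nearly undisturbed; varying B1 and the detuning interactively is precisely the kind of exploration the paper advocates.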
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahmad, Israr, E-mail: iak-2000plus@yahoo.com; Saaban, Azizan Bin, E-mail: azizan.s@uum.edu.my; Ibrahim, Adyda Binti, E-mail: adyda@uum.edu.my
This paper addresses a comparative computational study on the synchronization quality, cost, and convergence speed for two pairs of identical chaotic and hyperchaotic systems with unknown time-varying parameters. It is assumed that the unknown time-varying parameters are bounded. Based on the Lyapunov stability theory and using the adaptive control method, a single proportional controller is proposed to achieve the goal of complete synchronization. Accordingly, appropriate adaptive laws are designed to identify the unknown time-varying parameters. The designed control strategy is easy to implement in practice. Numerical simulation results are provided to verify the effectiveness of the proposed synchronization scheme.
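A minimal sketch of adaptive proportional synchronization is given below for a pair of identical Lorenz systems with known, constant parameters: the response system receives u = -k e, with the gain adapted as dk/dt = γ|e|². This is a simplification — the paper's systems, time-varying parameters, and adaptive laws differ — and the gain law here is only one common choice.

import numpy as np

S, R, B = 10.0, 28.0, 8.0 / 3.0          # standard Lorenz parameters

def lorenz(v):
    x, y, z = v
    return np.array([S * (y - x), x * (R - z) - y, x * y - B * z])

dt, gamma = 1e-3, 0.5
drive = np.array([1.0, 1.0, 1.0])
resp = np.array([-8.0, 7.0, 30.0])
k = 0.0
for step in range(int(60.0 / dt)):
    e = resp - drive
    drive = drive + dt * lorenz(drive)
    resp = resp + dt * (lorenz(resp) - k * e)    # single proportional controller
    k += dt * gamma * float(e @ e)               # adaptive gain law
    if step % 10000 == 0:
        print(f"t = {step * dt:5.1f}: |e| = {np.linalg.norm(e):.2e}, k = {k:.2f}")

The gain grows only while the error persists, so it settles once the two trajectories lock together.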
Decision-Making and Thought Processes among Poker Players
ERIC Educational Resources Information Center
St. Germain, Joseph; Tenenbaum, Gershon
2011-01-01
This study was aimed at delineating decision-making and thought processing among poker players who vary in skill-level. Forty-five participants, 15 in each group, comprised expert, intermediate, and novice poker players. They completed the Computer Poker Simulation Task (CPST) comprised of 60 hands of No-Limit Texas Hold 'Em. During the CPST, they…
Intensity dependence of focused ultrasound lesion position
NASA Astrophysics Data System (ADS)
Meaney, Paul M.; Cahill, Mark D.; ter Haar, Gail R.
1998-04-01
Knowledge of the spatial distribution of intensity loss from an ultrasonic beam is critical to predicting lesion formation in focused ultrasound surgery. To date, most models have used linear propagation models to predict the intensity profiles needed to compute the temporally varying temperature distributions. These can be used to compute thermal dose contours that in turn predict the extent of thermal damage. However, these simulations fail to adequately describe the abnormal lesion formation behavior observed in in vitro experiments in which the transducer drive levels are varied over a wide range. In these experiments, the extent of thermal damage has been observed to move significantly closer to the transducer with increasing transducer drive levels than would be predicted using linear propagation models. The simulations described herein utilize the KZK (Khokhlov-Zabolotskaya-Kuznetsov) nonlinear propagation model with the parabolic approximation for highly focused ultrasound waves to demonstrate that the positions of the peak intensity and the lesion do indeed move closer to the transducer. This illustrates that for accurate modeling of heating during FUS, nonlinear effects must be considered.
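The thermal-dose step mentioned above is commonly implemented as cumulative equivalent minutes at 43 °C (the Sapareto-Dewey formulation); a minimal sketch follows, with a toy two-second exposure and the usual assumption that damage accrues where CEM43 exceeds roughly 240 min.

import numpy as np

def cem43(temps_c, dt_min):
    """Cumulative equivalent minutes at 43 C for a sampled temperature history."""
    temps_c = np.asarray(temps_c, dtype=float)
    r = np.where(temps_c >= 43.0, 0.5, 0.25)     # Sapareto-Dewey base
    return float(np.sum(r ** (43.0 - temps_c) * dt_min))

dt = 1.0 / 60.0                                  # 1-s samples, expressed in minutes
for peak in (50.0, 55.0, 60.0):
    history = np.full(2, peak)                   # two seconds held at the peak
    dose = cem43(history, dt)
    print(f"{peak:.0f} C for 2 s -> CEM43 = {dose:.1f} min "
          f"({'damaging' if dose > 240.0 else 'sub-threshold'})")

Because the dose is so steep in temperature, a modest nonlinear shift of the peak intensity toward the transducer moves the predicted lesion position substantially.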
Computational methods for diffusion-influenced biochemical reactions.
Dobrzynski, Maciej; Rodríguez, Jordi Vidal; Kaandorp, Jaap A; Blom, Joke G
2007-08-01
We compare stochastic computational methods accounting for space and the discrete nature of reactants in biochemical systems. Implementations based on Brownian dynamics (BD) and the reaction-diffusion master equation are applied to a simplified gene expression model and to a signal transduction pathway in Escherichia coli. In the regime where the number of molecules is small and reactions are diffusion-limited, predicted fluctuations in the product number vary between the methods, while the average is the same. Computational approaches at the level of the reaction-diffusion master equation compute the same fluctuations as the reference result obtained from the particle-based method if the size of the sub-volumes is comparable to the diameter of the reactants. Using numerical simulations of reversible binding of a pair of molecules, we argue that the disagreement in predicted fluctuations is due to different modeling of inter-arrival times between reaction events. Simulations for a more complex biological study show that the different approaches lead to different results due to modeling issues. Finally, we present the physical assumptions behind the mesoscopic models for reaction-diffusion systems. Input files for the simulations and the source code of GMP can be found at the following address: http://www.cwi.nl/projects/sic/bioinformatics2007/
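As a well-mixed point of reference for the reversible-binding test case (A + B ⇌ C), a Gillespie simulation is sketched below; the rate constants are illustrative and, unlike the BD and reaction-diffusion master equation methods compared in the paper, no spatial diffusion is modeled.

import numpy as np

rng = np.random.default_rng(5)

k_on, k_off = 1.0, 0.5                   # illustrative rate constants
a, b, c, t, t_end = 1, 1, 0, 0.0, 1000.0
time_bound = 0.0

while t < t_end:
    rates = np.array([k_on * a * b, k_off * c])
    total = rates.sum()
    dt = rng.exponential(1.0 / total)    # exponential inter-arrival time
    if c == 1:
        time_bound += dt                 # accumulate time spent in bound state
    t += dt
    if rng.random() * total < rates[0]:  # binding event
        a, b, c = a - 1, b - 1, c + 1
    else:                                # unbinding event
        a, b, c = a + 1, b + 1, c - 1

print(f"fraction of time bound ~ {time_bound / t:.3f} "
      f"(expected {k_on / (k_on + k_off):.3f})")

The exponential inter-arrival assumption built into this algorithm is exactly the modeling point on which, per the abstract, the spatial methods disagree.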
DOE Office of Scientific and Technical Information (OSTI.GOV)
Preece, D.S.; Knudsen, S.D.
The spherical element computer code DMC (Distinct Motion Code) used to model rock motion resulting from blasting has been enhanced to allow routine computer simulations of bench blasting. The enhancements required for bench blast simulation include: (1) modifying the gas flow portion of DMC, (2) adding a new explosive gas equation of state capability, (3) modifying the porosity calculation, and (4) accounting for blastwell spacing parallel to the face. A parametric study performed with DMC shows logical variation of the face velocity as burden, spacing, blastwell diameter and explosive type are varied. These additions represent a significant advance in the capability of DMC, which will not only aid in understanding the physics involved in blasting but will also become a blast design tool. 8 refs., 7 figs., 1 tab.
Cycle-averaged dynamics of a periodically driven, closed-loop circulation model
NASA Technical Reports Server (NTRS)
Heldt, T.; Chang, J. L.; Chen, J. J. S.; Verghese, G. C.; Mark, R. G.
2005-01-01
Time-varying elastance models have been used extensively in the past to simulate the pulsatile nature of cardiovascular waveforms. Frequently, however, one is interested in dynamics that occur over longer time scales, in which case a detailed simulation of each cardiac contraction becomes computationally burdensome. In this paper, we apply circuit-averaging techniques to a periodically driven, closed-loop, three-compartment recirculation model. The resultant cycle-averaged model is linear and time invariant, and greatly reduces the computational burden. It is also amenable to systematic order reduction methods that lead to further efficiencies. Despite its simplicity, the averaged model captures the dynamics relevant to the representation of a range of cardiovascular reflex mechanisms. © 2004 Elsevier Ltd. All rights reserved.
Inverse dynamics of adaptive structures used as space cranes
NASA Technical Reports Server (NTRS)
Das, S. K.; Utku, S.; Wada, B. K.
1990-01-01
As a precursor to the real-time control of fast moving adaptive structures used as space cranes, a formulation is given for the flexibility induced motion relative to the nominal motion (i.e., the motion that assumes no flexibility) and for obtaining the open loop time varying driving forces. An algorithm is proposed for the computation of the relative motion and driving forces. The governing equations are given in matrix form with explicit functional dependencies. A simulator is developed to implement the algorithm on a digital computer. In the formulations, the distributed mass of the crane is lumped by two schemes, viz., 'trapezoidal' lumping and 'Simpson's rule' lumping. The effects of the mass lumping schemes are shown by simulator runs.
Propulsive efficiency of the underwater dolphin kick in humans.
von Loebbecke, Alfred; Mittal, Rajat; Fish, Frank; Mark, Russell
2009-05-01
Three-dimensional fully unsteady computational fluid dynamic simulations of five Olympic-level swimmers performing the underwater dolphin kick are used to estimate the swimmer's propulsive efficiencies. These estimates are compared with those of a cetacean performing the dolphin kick. The geometries of the swimmers and the cetacean are based on laser and CT scans, respectively, and the stroke kinematics is based on underwater video footage. The simulations indicate that the propulsive efficiency for human swimmers varies over a relatively wide range from about 11% to 29%. The efficiency of the cetacean is found to be about 56%, which is significantly higher than the human swimmers. The computed efficiency is found not to correlate with either the slender body theory or with the Strouhal number.
Cosmic Reionization On Computers III. The Clumping Factor
Kaurov, Alexander A.; Gnedin, Nickolay Y.
2015-09-09
We use fully self-consistent numerical simulations of cosmic reionization, completed under the Cosmic Reionization On Computers project, to explore how well the recombinations in the ionized intergalactic medium (IGM) can be quantified by the effective "clumping factor." The density distribution in the simulations (and, presumably, in a real universe) is highly inhomogeneous and more-or-less smoothly varying in space. However, even in highly complex and dynamic environments, the concept of the IGM remains reasonably well-defined; the largest ambiguity comes from the unvirialized regions around galaxies that are over-ionized by the local enhancement in the radiation field ("proximity zones"). This ambiguity precludes computing the IGM clumping factor to better than about 20%. Furthermore, we discuss a "local clumping factor," defined over a particular spatial scale, and quantify its scatter on a given scale and its variation as a function of scale.
NASA Technical Reports Server (NTRS)
Crane, J. M.; Boucek, G. P., Jr.; Smith, W. D.
1986-01-01
A flight management computer (FMC) control display unit (CDU) test was conducted to compare two types of input devices: a fixed legend (dedicated) keyboard and a programmable legend (multifunction) keyboard. The task used for comparison was operation of the flight management computer for the Boeing 737-300. The same tasks were performed by twelve pilots on the FMC control display unit configured with a programmable legend keyboard and with the currently used B737-300 dedicated keyboard. Flight simulator work activity levels and input task complexity were varied during each pilot session. Half of the pilots tested were previously familiar with the B737-300 dedicated keyboard CDU and half had no prior experience with it. The data collected included simulator flight parameters, keystroke time and sequences, and pilot questionnaire responses. A timeline analysis was also used for evaluation of the two keyboard concepts.
NASA Technical Reports Server (NTRS)
Poole, L. R.
1976-01-01
An initial attempt was made to verify the Langley Research Center and Virginia Institute of Marine Science mid-Atlantic continental-shelf wave refraction model. The model was used to simulate refraction occurring during a continental-shelf remote sensing experiment conducted on August 17, 1973. Simulated wave spectra compared favorably, in a qualitative sense, with the experimental spectra. However, it was observed that most of the wave energy resided at frequencies higher than those for which refraction and shoaling effects were predicted. In addition, variations among the experimental spectra were so small that they were not considered statistically significant. In order to verify the refraction model, simulation must be performed in conjunction with a set of significantly varying spectra in which a considerable portion of the total energy resides at frequencies for which refraction and shoaling effects are likely.
Preprocessor and postprocessor computer programs for a radial-flow finite-element model
Pucci, A.A.; Pope, D.A.
1987-01-01
Preprocessing and postprocessing computer programs that enhance the utility of the U.S. Geological Survey radial-flow model have been developed. The preprocessor program: (1) generates a triangular finite element mesh from minimal data input, (2) produces graphical displays and tabulations of data for the mesh, and (3) prepares an input data file to use with the radial-flow model. The postprocessor program is a version of the radial-flow model, which was modified to (1) produce graphical output for simulation and field results, (2) generate a statistic for comparing the simulation results with observed data, and (3) allow hydrologic properties to vary in the simulated region. Examples of the use of the processor programs for a hypothetical aquifer test are presented. Instructions for the data files, format instructions, and a listing of the preprocessor and postprocessor source codes are given in the appendixes. (Author's abstract)
NASA Astrophysics Data System (ADS)
van Heerwaarden, Chiel C.; van Stratum, Bart J. H.; Heus, Thijs; Gibbs, Jeremy A.; Fedorovich, Evgeni; Mellado, Juan Pedro
2017-08-01
This paper describes MicroHH 1.0, a new and open-source (www.microhh.org) computational fluid dynamics code for the simulation of turbulent flows in the atmosphere. It is primarily made for direct numerical simulation but also supports large-eddy simulation (LES). The paper covers the description of the governing equations, their numerical implementation, and the parameterizations included in the code. Furthermore, the paper presents the validation of the dynamical core in the form of convergence and conservation tests, and comparison of simulations of channel flows and slope flows against well-established test cases. The full numerical model, including the associated parameterizations for LES, has been tested for a set of cases under stable and unstable conditions, under the Boussinesq and anelastic approximations, and with dry and moist convection under stationary and time-varying boundary conditions. The paper presents performance tests showing good scaling from 256 to 32 768 processes. The graphical processing unit (GPU)-enabled version of the code can reach a speedup of more than an order of magnitude for simulations that fit in the memory of a single GPU.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaplan, C.R.; Shaddix, C.R.; Smyth, K.C.
This paper presents time-dependent numerical simulations of both steady and time-varying CH₄/air diffusion flames to examine the differences in combustion conditions that lead to the observed enhancement of soot production in flickering flames. The numerical model solves the two-dimensional, time-dependent, reactive-flow Navier-Stokes equations coupled with submodels for soot formation and radiation transport. Qualitative comparisons between the experimental and computed steady flame show good agreement for the soot burnout height and overall flame shape except near the burner lip. Quantitative comparisons between experimental and computed radial profiles of temperature and soot volume fraction for the steady flame show good to excellent agreement at mid-flame heights, but some discrepancies near the burner lip and at high flame heights. For the time-varying CH₄/air flame, the simulations successfully predict that the maximum soot concentration increases by more than four times compared to the steady flame with the same mean fuel and air velocities. By numerically tracking fluid parcels in the flowfield, the temperature and stoichiometry history were followed along their convective pathlines. Results for the pathline which passes through the maximum sooting region show that flickering flames exhibit much longer residence times, during which the local temperatures and stoichiometries are favorable for soot production. The simulations also suggest that soot inception occurs later in flickering flames, at slightly higher temperatures and under somewhat leaner conditions compared to the steady flame. The integrated soot model of Syed et al., which was developed from a steady CH₄/air flame, successfully predicts soot production in the time-varying CH₄/air flames.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahn, Tae-Hyuk; Sandu, Adrian; Watson, Layne T.
2015-08-01
Ensembles of simulations are employed to estimate the statistics of possible future states of a system, and are widely used in important applications such as climate change and biological modeling. Ensembles of runs can naturally be executed in parallel. However, when the CPU times of individual simulations vary considerably, a simple strategy of assigning an equal number of tasks per processor can lead to serious work imbalances and low parallel efficiency. This paper presents a new probabilistic framework to analyze the performance of dynamic load balancing algorithms for ensembles of simulations where many tasks are mapped onto each processor, and where the individual compute times vary considerably among tasks. Four load balancing strategies are discussed: most-dividing, all-redistribution, random-polling, and neighbor-redistribution. Simulation results with a stochastic budding yeast cell cycle model are consistent with the theoretical analysis. It is especially significant that there is a provable global decrease in load imbalance for the local rebalancing algorithms, given the scalability concerns of the global rebalancing algorithms. The overall simulation time is reduced by up to 25%, and the total processor idle time by 85%.
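The imbalance problem itself is easy to demonstrate. The toy experiment below compares a static equal-count assignment against a dynamic task queue (greedy list scheduling) when per-task CPU times are heavy-tailed; it is a simplification, not one of the paper's four strategies, and the lognormal cost model is an assumption.

import heapq
import numpy as np

rng = np.random.default_rng(3)

n_tasks, n_procs = 512, 16
times = rng.lognormal(mean=0.0, sigma=1.5, size=n_tasks)   # heavy-tailed task costs

# static: tasks dealt out round-robin; makespan is the busiest processor
static = np.zeros(n_procs)
for i, t in enumerate(times):
    static[i % n_procs] += t

# dynamic: each idle processor pulls the next task from a shared queue
finish = [0.0] * n_procs
heapq.heapify(finish)
for t in times:
    heapq.heappush(finish, heapq.heappop(finish) + t)

ideal = times.sum() / n_procs
print(f"ideal makespan      {ideal:8.1f}")
print(f"static round-robin  {static.max():8.1f} ({static.max() / ideal:.2f}x ideal)")
print(f"dynamic task queue  {max(finish):8.1f} ({max(finish) / ideal:.2f}x ideal)")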
Community as client: environmental issues in the real world. A SimCity computer simulation.
Bareford, C G
2001-01-01
The ability to think critically has become a crucial part of professional practice and education. SimCity, a popular computer simulation game, provides an opportunity to practice community assessment and interventions using a systems approach. SimCity is an interactive computer simulation game in which the player takes an active part in community planning. SimCity is supported on either a Windows 95/98 or a Macintosh platform and is available on CD-ROM at retail stores or at www.simcity.com. Students complete a tutorial and then apply a selected scenario in SimCity. Scenarios consist of hypothetical communities that have varying types and degrees of environmental problems, e.g., traffic, crime, nuclear meltdown, flooding, fire, and earthquakes. In problem solving with the simulated scenarios, students (a) identify systems and subsystems within the community that are critical factors impacting the environmental health of the community, (b) create changes in the systems and subsystems in an effort to solve the environmental health problem, and (c) evaluate the effectiveness of interventions based on the game score, demographic and fiscal data, and amount of community support. Because the consequences of planned intervention are part of the simulation, nursing students are able to develop critical-thinking skills. The simulation provides essential content in community planning in an interesting and interactive format.
Population Synthesis of Radio & Gamma-Ray Millisecond Pulsars
NASA Astrophysics Data System (ADS)
Frederick, Sara; Gonthier, P. L.; Harding, A. K.
2014-01-01
In recent years, the number of known gamma-ray millisecond pulsars (MSPs) in the Galactic disk has risen substantially thanks to confirmed detections by Fermi Gamma-ray Space Telescope (Fermi). We have developed a new population synthesis of gamma-ray and radio MSPs in the galaxy which uses Markov Chain Monte Carlo techniques to explore the large and small worlds of the model parameter space and allows for comparisons of the simulated and detected MSP distributions. The simulation employs empirical radio and gamma-ray luminosity models that are dependent upon the pulsar period and period derivative with freely varying exponents. Parameters associated with the birth distributions are also free to vary. The computer code adjusts the magnitudes of the model luminosities to reproduce the number of MSPs detected by a group of ten radio surveys, thus normalizing the simulation and predicting the MSP birth rates in the Galaxy. Computing many Markov chains leads to preferred sets of model parameters that are further explored through two statistical methods. Marginalized plots define confidence regions in the model parameter space using maximum likelihood methods. A secondary set of confidence regions is determined in parallel using Kuiper statistics calculated from comparisons of cumulative distributions. These two techniques provide feedback to affirm the results and to check for consistency. Radio flux and dispersion measure constraints have been imposed on the simulated gamma-ray distributions in order to reproduce realistic detection conditions. The simulated and detected distributions agree well for both sets of radio and gamma-ray pulsar characteristics, as evidenced by our various comparisons.
Three-Dimensional Computational Model for Flow in an Over-Expanded Nozzle With Porous Surfaces
NASA Technical Reports Server (NTRS)
Abdol-Hamid, K. S.; Elmiligui, Alaa; Hunter, Craig A.; Massey, Steven J.
2006-01-01
A three-dimensional computational model is used to simulate flow in a non-axisymmetric, convergent-divergent nozzle incorporating porous cavities for shock-boundary layer interaction control. The nozzle has an expansion ratio (exit area/throat area) of 1.797 and a design nozzle pressure ratio of 8.78. Flow fields for the baseline nozzle (no porosity) and for the nozzle with porous surfaces of 10% openness are computed for nozzle pressure ratio (NPR) varying from 1.29 to 9.54. The three-dimensional computational results indicate that baseline (no porosity) nozzle performance is dominated by unstable, shock-induced boundary-layer separation at over-expanded conditions. For NPR less than or equal to 1.8, the separation is three dimensional, somewhat unsteady, and confined to a bubble (with partial reattachment over the nozzle flap). For NPR greater than or equal to 2.0, separation is steady and fully detached, and becomes more two dimensional as NPR increases. Numerical simulation of the porous configurations indicates that a porous patch is capable of controlling off-design separation in the nozzle by either alleviating separation or by encouraging stable separation of the exhaust flow. In the present paper, computational simulation results, wall centerline pressure, Mach contours, and thrust efficiency ratio are presented, discussed, and compared with experimental data. The comparisons show good agreement with the experimental data, and the three-dimensional simulation improves the comparisons for over-expanded flow conditions relative to two-dimensional assumptions.
Exploring Discretization Error in Simulation-Based Aerodynamic Databases
NASA Technical Reports Server (NTRS)
Aftosmis, Michael J.; Nemec, Marian
2010-01-01
This work examines the level of discretization error in simulation-based aerodynamic databases and introduces strategies for error control. Simulations are performed using a parallel, multi-level Euler solver on embedded-boundary Cartesian meshes. Discretization errors in user-selected outputs are estimated using the method of adjoint-weighted residuals, and we use adaptive mesh refinement to reduce these errors to specified tolerances. Using this framework, we examine the behavior of discretization error throughout a token database computed for a NACA 0012 airfoil consisting of 120 cases. We compare the cost and accuracy of two approaches for aerodynamic database generation. In the first approach, mesh adaptation is used to compute all cases in the database to a prescribed level of accuracy. The second approach conducts all simulations using the same computational mesh without adaptation. We quantitatively assess the error landscape and computational costs in both databases. This investigation highlights sensitivities of the database under a variety of conditions. The presence of transonic shocks or the stiffness in the governing equations near the incompressible limit are shown to dramatically increase discretization error, requiring additional mesh resolution to control. Results show that such pathologies lead to error levels that vary by over a factor of 40 when using a fixed mesh throughout the database. Alternatively, controlling this sensitivity through mesh adaptation leads to mesh sizes which span two orders of magnitude. We propose strategies to minimize simulation cost in sensitive regions and discuss the role of error-estimation in database quality.
Multilevel Monte Carlo simulation of Coulomb collisions
Rosin, M. S.; Ricketson, L. F.; Dimits, A. M.; ...
2014-05-29
We present a multilevel Monte Carlo numerical method for simulating Coulomb collisions that is new to plasma physics and highly efficient. The method separates and optimally minimizes the finite-timestep and finite-sampling errors inherent in the Langevin representation of the Landau–Fokker–Planck equation. It does so by combining multiple solutions to the underlying equations with varying numbers of timesteps. For a desired level of accuracy ε, the computational cost of the method is O(ε⁻²) or O(ε⁻²(ln ε)²), depending on the underlying discretization, Milstein or Euler–Maruyama respectively. This is to be contrasted with a cost of O(ε⁻³) for direct simulation Monte Carlo or binary collision methods. We successfully demonstrate the method with a classic beam diffusion test case in 2D, making use of the Lévy area approximation for the correlated Milstein cross terms, and generating a computational saving of a factor of 100 for ε = 10⁻⁵. Lastly, we discuss the importance of the method for problems in which collisions constitute the computational rate limiting step, and its limitations.
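The multilevel idea can be sketched on a toy problem. Below, a two-level estimator for E[X_T] of a geometric Brownian motion couples coarse and fine Euler-Maruyama paths through shared Brownian increments, so the correction term has small variance and needs far fewer samples than the fine level alone; this stands in for the Landau-Fokker-Planck setting, which it does not attempt to reproduce.

import numpy as np

rng = np.random.default_rng(42)

MU, SIG, T, X0 = 0.05, 0.2, 1.0, 1.0    # GBM drift, volatility, horizon, start

def endpoints(n_steps, dw):
    """Euler-Maruyama endpoints of GBM driven by the given per-step increments."""
    x = np.full(dw.shape[1], X0)
    dt = T / n_steps
    for k in range(n_steps):
        x = x + MU * x * dt + SIG * x * dw[k]
    return x

fine_steps, n0, n1 = 64, 100_000, 20_000
# level 0: coarse solver alone, with many cheap samples
dw_c = rng.normal(0.0, np.sqrt(T / (fine_steps // 2)), (fine_steps // 2, n0))
p0 = endpoints(fine_steps // 2, dw_c).mean()
# level 1: coupled fine/coarse correction, with few expensive samples
dw_f = rng.normal(0.0, np.sqrt(T / fine_steps), (fine_steps, n1))
fine = endpoints(fine_steps, dw_f)
coarse = endpoints(fine_steps // 2, dw_f[0::2] + dw_f[1::2])  # shared increments
estimate = p0 + (fine - coarse).mean()
print(f"two-level estimate {estimate:.4f} vs exact {X0 * np.exp(MU * T):.4f}")

Summing pairs of fine increments to drive the coarse path is the coupling that makes the correction variance small; the full method extends this telescoping sum over many levels.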
Operations analysis (study 2.1): Program manual and users guide for the LOVES computer code
NASA Technical Reports Server (NTRS)
Wray, S. T., Jr.
1975-01-01
Information necessary to use the LOVES Computer Program in its existing state, or to modify the program to include studies not properly handled by the basic model, is provided. The Users Guide defines the basic elements assembled together to form the model for servicing satellites in orbit. As the program is a simulation, the method of attack is to disassemble the problem into a sequence of events, each occurring instantaneously and each creating one or more other events in the future. The main driving force of the simulation is the deterministic launch schedule of satellites and the subsequent failure of the various modules which make up the satellites. The LOVES Computer Program uses a random number generator to simulate the failure of module elements and therefore operates over a long span of time, typically 10 to 15 years. The sequence of events is varied by making several runs in succession with different random numbers, resulting in a Monte Carlo technique to determine statistical parameters of minimum value, average value, and maximum value.
Application of local indentations for film cooling of gas turbine blade leading edge
NASA Astrophysics Data System (ADS)
Petelchyts, V. Yu.; Khalatov, A. A.; Pysmennyi, D. N.; Dashevskyy, Yu. Ya.
2016-09-01
The paper presents results of computer simulation of the film cooling on the turbine blade leading edge model where the air coolant is supplied through radial holes and row of cylindrical inclined holes placed inside hemispherical dimples or trench. The blowing factor was varied from 0.5 to 2.0. The model size and key initial parameters for simulation were taken as for a real blade of a high-pressure high-performance gas turbine. Simulation was performed using commercial software code ANSYS CFX. The simulation results were compared with reference variant (no dimples or trench) both for the leading edge area and for the flat plate downstream of the leading edge.
Physics-based interactive volume manipulation for sharing surgical process.
Nakao, Megumi; Minato, Kotaro
2010-05-01
This paper presents a new set of techniques by which surgeons can interactively manipulate patient-specific volumetric models for sharing surgical process. To handle physical interaction between the surgical tools and organs, we propose a simple surface-constraint-based manipulation algorithm to consistently simulate common surgical manipulations such as grasping, holding and retraction. Our computation model is capable of simulating soft-tissue deformation and incision in real time. We also present visualization techniques in order to rapidly visualize time-varying, volumetric information on the deformed image. This paper demonstrates the success of the proposed methods in enabling the simulation of surgical processes, and the ways in which this simulation facilitates preoperative planning and rehearsal.
NASA Astrophysics Data System (ADS)
Wu, Lianglong; Fu, Xiquan; Guo, Xing
2013-03-01
In this paper, we propose a modified adaptive algorithm (MAA) for dealing with high chirp, to efficiently simulate the propagation of chirped pulses along an optical fiber for propagation distances shorter than the "temporal focal length". The basis of the MAA is that the chirp term of the initial pulse is treated as the rapidly varying part, by analogy with the idea of the slowly varying envelope approximation (SVEA). Numerical simulations validate the performance of the MAA and show that the proposed method can decrease the number of sampling points by orders of magnitude. In addition, the computational efficiency of the MAA relative to the time-domain beam propagation method (BPM) increases with the chirp of the initial pulse.
A high-order language for a system of closely coupled processing elements
NASA Technical Reports Server (NTRS)
Feyock, S.; Collins, W. R.
1986-01-01
The research reported in this paper was occasioned by the requirements of the Real-Time Digital Simulator (RTDS) project under way at NASA Lewis Research Center. The RTDS simulation scheme employs a network of CPUs running lock-step cycles in the parallel computation of jet airplane simulations. The project's need for a high order language (HOL) that would allow non-experts to write simulation applications, and that could be implemented on a possibly varying network, can best be fulfilled by using the programming language Ada. We describe how the simulation problems can be modeled in Ada, how to map a single, multi-processing Ada program into code for individual processors regardless of network reconfiguration, and why some Ada language features are particularly well-suited to network simulations.
NASA Astrophysics Data System (ADS)
Marrero, Carlos Sosa; Aubert, Vivien; Ciferri, Nicolas; Hernández, Alfredo; de Crevoisier, Renaud; Acosta, Oscar
2017-11-01
Understanding the response to irradiation in cancer radiotherapy (RT) may help in devising new strategies with improved tumor local control. Computational models may allow us to unravel the underlying radiosensitivity mechanisms intervening in the dose-response relationship. Extensive simulations allow a wide range of parameters to be evaluated, providing insights on tumor response and generating useful data for planning modified treatments. We propose in this paper a computational model of tumor growth and radiation response which allows a whole RT protocol to be simulated. Proliferation of tumor cells, the cell life-cycle, oxygen diffusion, radiosensitivity, RT response, and resorption of killed cells were implemented in a multiscale framework. The model was developed in C++, using the Multi-formalism Modeling and Simulation Library (M2SL). Radiosensitivity parameters extracted from the literature enabled us to simulate prostate cell tissue on a regular (voxel-wise) grid. Histopathological specimens with different aggressiveness levels, extracted from patients after prostatectomy, were used to initialize the in silico simulations. Results on tumor growth exhibit good agreement with data from in vitro studies. Moreover, a standard fractionation of 2 Gy/fraction with a total dose of 80 Gy, as in a real RT treatment, was applied with varying radiosensitivity and oxygen diffusion parameters. As expected, the high influence of these parameters was observed by measuring the percentage of surviving tumor cells after RT. This work paves the way to further models allowing the simulation of increased doses in modified hypofractionated schemes and the development of new patient-specific combined therapies.
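A cell-level response rule of the kind such models typically use can be sketched with the linear-quadratic model applied per 2 Gy fraction, with radiosensitivity scaled by local oxygenation through a crude OER-like factor; all parameter values below are illustrative assumptions, not those of the paper.

import numpy as np

ALPHA, BETA = 0.15, 0.05       # Gy^-1, Gy^-2 for well-oxygenated cells (assumed)
D_FRAC, N_FRAC = 2.0, 40       # 2 Gy x 40 = 80 Gy, as in the simulated protocol

def surviving_fraction(p_o2_rel, n_fractions=N_FRAC):
    """Survival after the full course; p_o2_rel in [0, 1] scales effective dose."""
    oer = 1.0 + 2.0 * p_o2_rel          # crude oxygen enhancement: 1 (anoxic) to 3
    d_eff = D_FRAC * oer / 3.0          # effective dose per fraction
    sf_one = np.exp(-ALPHA * d_eff - BETA * d_eff ** 2)   # linear-quadratic model
    return sf_one ** n_fractions

for o2 in (1.0, 0.5, 0.1):
    print(f"relative pO2 {o2:.1f}: surviving fraction = {surviving_fraction(o2):.2e}")

Even this toy rule reproduces the qualitative sensitivity the abstract reports: poorly oxygenated voxels survive the protocol at rates many orders of magnitude higher than well-oxygenated ones.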
Investigating the Temperature Problem in Narrow Line Emitting AGN
NASA Astrophysics Data System (ADS)
Jenkins, Sam; Richardson, Chris T.
2018-06-01
Our research investigates the physical conditions in gas clouds around the narrow line region of AGN. Specifically, we explore the necessary conditions for anomalously high electron temperatures, Te, in those clouds. Our 321 galaxy data set was acquired from SDSS DR14 after requiring S/N > 5.0 in [OIII] 4363 and S/N > 3.0 in all BPT diagram emission lines, to ensure both accurate Te and galaxy classification, with 0.04 < z < 1.0. Interestingly, our data set contained no LINERs. We ran simulations using the simulation code Cloudy, and focused on matching the emission exhibited by the hottest of the 70 AGN in our data set. We used multicore computing to cut down on run time, which drastically improved the efficiency of our simulations. We varied hydrogen density, ionization parameter, and metallicity, Z, only to find these three parameters alone were incapable of recreating anomalously high Te, but successfully matched galaxies showing low- to moderate Te. These highest temperature simulations were at low Z, and were able to facilitate higher temperatures because they avoided the cooling effects of high Z. Our most successful simulations varied Z and grain content, which matched approximately 10% of our high temperature data. Our simulations with the highest grain content produced the highest Te because of the photoelectric heating effect that grains provide, which we confirmed by monitoring each heating mechanism as a function of depth. In the near future, we plan to run simulations varying grain content and ionization parameter in order to study the effects these conditions have on gas cloud Te.
ERIC Educational Resources Information Center
Yao, Lihua
2012-01-01
Multidimensional computer adaptive testing (MCAT) can provide higher precision and reliability or reduce test length when compared with unidimensional CAT or with the paper-and-pencil test. This study compared five item selection procedures in the MCAT framework for both domain scores and overall scores through simulation by varying the structure…
DKIST Adaptive Optics System: Simulation Results
NASA Astrophysics Data System (ADS)
Marino, Jose; Schmidt, Dirk
2016-05-01
The 4 m class Daniel K. Inouye Solar Telescope (DKIST), currently under construction, will be equipped with an ultra high order solar adaptive optics (AO) system. The requirements and capabilities of such a solar AO system are beyond those of any other solar AO system currently in operation, so we must rely on solar AO simulations to estimate and quantify its performance. We present performance estimation results for the DKIST AO system obtained with a new solar AO simulation tool. This simulation tool is a flexible and fast end-to-end solar AO simulator which produces accurate solar AO simulations while taking advantage of current multi-core computer technology. It relies on full imaging simulations of the extended-field Shack-Hartmann wavefront sensor (WFS), which directly include important secondary effects such as field dependent distortions and varying contrast of the WFS sub-aperture images.
Design of 2D time-varying vector fields.
Chen, Guoning; Kwatra, Vivek; Wei, Li-Yi; Hansen, Charles D; Zhang, Eugene
2012-10-01
Design of time-varying vector fields, i.e., vector fields that can change over time, has a wide variety of important applications in computer graphics. Existing vector field design techniques do not address time-varying vector fields. In this paper, we present a framework for the design of time-varying vector fields, both for planar domains as well as manifold surfaces. Our system supports the creation and modification of various time-varying vector fields with desired spatial and temporal characteristics through several design metaphors, including streamlines, pathlines, singularity paths, and bifurcations. These design metaphors are integrated into an element-based design to generate the time-varying vector fields via a sequence of basis field summations or spatial constrained optimizations at the sampled times. The key-frame design and field deformation are also introduced to support other user design scenarios. Accordingly, a spatial-temporal constrained optimization and the time-varying transformation are employed to generate the desired fields for these two design scenarios, respectively. We apply the time-varying vector fields generated using our design system to a number of important computer graphics applications that require controllable dynamic effects, such as evolving surface appearance, dynamic scene design, steerable crowd movement, and painterly animation. Many of these are difficult or impossible to achieve via prior simulation-based methods. In these applications, the time-varying vector fields have been applied as either orientation fields or advection fields to control the instantaneous appearance or evolving trajectories of the dynamic effects.
Simulation of Local Blood Flow in Human Brain under Altered Gravity
NASA Technical Reports Server (NTRS)
Kim, Chang Sung; Kiris, Cetin; Kwak, Dochan
2003-01-01
In addition to the altered gravitational forces, the specific shapes and connections of arteries in the brain vary across the human population (Cebral et al., 2000; Ferrandez et al., 2002). Considering the geometric variations, pulsatile unsteadiness, and moving walls, a computational approach to analyzing altered blood circulation offers an economical alternative to experiments. This paper presents a computational approach for modeling the local blood flow through the human brain under altered gravity. This computational approach has been verified against steady and unsteady experimental measurements and then applied to the unsteady blood flows through a carotid bifurcation model and an idealized Circle of Willis (COW) configuration under altered gravity conditions.
Varying execution discipline to increase performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campbell, P.L.; Maccabe, A.B.
1993-12-22
This research investigates the relationship between execution discipline and performance. The hypothesis has two parts: 1. Different execution disciplines exhibit different performance for different computations, and 2. These differences can be effectively predicted by heuristics. A machine model is developed that can vary its execution discipline. That is, the model can execute a given program using either the control-driven, data-driven or demand-driven execution discipline. This model is referred to as a ``variable-execution-discipline`` machine. The instruction set for the model is the Program Dependence Web (PDW). The first part of the hypothesis will be tested by simulating the execution of the machine model on a suite of computations, based on the Livermore Fortran Kernel (LFK) Test (a.k.a. the Livermore Loops), using all three execution disciplines. Heuristics are developed to predict relative performance. These heuristics predict (a) the execution time under each discipline for one iteration of each loop and (b) the number of iterations taken by that loop; then the heuristics use those predictions to develop a prediction for the execution of the entire loop. Similar calculations are performed for branch statements. The second part of the hypothesis will be tested by comparing the results of the simulated execution with the predictions produced by the heuristics. If the hypothesis is supported, then the door is open for the development of machines that can vary execution discipline to increase performance.
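A minimal sketch of the stated loop heuristic, with placeholder per-iteration estimates rather than measured values: predict the per-iteration cost under each discipline, scale by the predicted iteration count, and rank the disciplines for that loop:

    def predict_loop_time(per_iter_time, n_iters):
        """per_iter_time: dict mapping discipline -> predicted seconds per iteration."""
        return {d: t * n_iters for d, t in per_iter_time.items()}

    est = predict_loop_time(
        {"control-driven": 1.2e-6, "data-driven": 0.9e-6, "demand-driven": 1.5e-6},
        n_iters=10_000)
    best = min(est, key=est.get)   # discipline predicted fastest for this loop
    print(best, est[best])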
Simulating the x-ray image contrast to setup techniques with desired flaw detectability
NASA Astrophysics Data System (ADS)
Koshti, Ajay M.
2015-04-01
The paper provides simulation data extending previous work by the author on a model for estimating the detectability of crack-like flaws in radiography. The methodology was developed to help implement the NASA Special x-ray radiography qualification, but is generically applicable to radiography. The paper describes a method for characterizing the detector resolution, discusses the applicability of ASTM E 2737 resolution requirements to the model, and describes a model for simulating the detector resolution. A computer calculator application, discussed here, also performs predicted contrast and signal-to-noise ratio calculations. Results of various simulation runs calculating the x-ray flaw size parameter and image contrast for varying input parameters such as crack depth, crack width, part thickness, x-ray angle, part-to-detector distance, part-to-source distance, source size, and detector sensitivity and resolution are given as 3D surfaces. These results demonstrate the effect of the input parameters on the flaw size parameter and on the simulated image contrast of the crack, and they demonstrate the utility of the flaw size parameter model in setting up x-ray techniques that provide the desired flaw detectability. The method is applicable to film radiography, computed radiography, and digital radiography.
Time and length scales within a fire and implications for numerical simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
TIESZEN,SHELDON R.
2000-02-02
A partial non-dimensionalization of the Navier-Stokes equations is used to obtain order-of-magnitude estimates of the rate-controlling transport processes in the reacting portion of a fire plume as a function of length scale. Over continuum length scales, buoyant time scales vary as the square root of the length scale, advection time scales vary as the length scale, and diffusion time scales vary as the square of the length scale. Due to the variation with length scale, each process is dominant over a given range. The relationship of buoyancy and baroclinic vorticity generation is highlighted. For numerical simulation, first-principles solution of fire problems is not possible with foreseeable computational hardware in the near future. Filtered transport equations with subgrid modeling will be required, as two to three decades of length scale are captured by solution of discretized conservation equations. Whatever filtering process one employs, one must have humble expectations for the accuracy obtainable by numerical simulation for practical fire problems that contain important multi-physics/multi-length-scale coupling with up to 10 orders of magnitude in length scale.
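A worked example of the quoted scalings, with order-of-magnitude placeholder constants (not values from the report): buoyancy times grow as sqrt(L), advection as L, and diffusion as L^2, so each mechanism dominates in a different length-scale band:

    import math

    g, U, D = 9.81, 1.0, 1e-5    # gravity [m/s^2], velocity [m/s], diffusivity [m^2/s]
    for L in (1e-3, 1e-1, 1e1):  # length scales [m]
        t_buoy = math.sqrt(L / g)   # buoyant time scale ~ sqrt(L)
        t_adv = L / U               # advective time scale ~ L
        t_diff = L**2 / D           # diffusive time scale ~ L^2
        print(f"L={L:.0e} m  buoyancy {t_buoy:.1e} s  "
              f"advection {t_adv:.1e} s  diffusion {t_diff:.1e} s")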
Analysis of the economics of photovoltaic-diesel-battery energy systems for remote applications
NASA Technical Reports Server (NTRS)
Brainard, W. A.
1983-01-01
Computer simulations were conducted to analyze the performance and operating cost of a photovoltaic energy source combined with a diesel generator system and battery storage. The simulations were based on the load demand profiles used for the design of an all-photovoltaic energy system installed in the remote Papago Indian Village of Schuchuli, Arizona. Twenty-year simulations were run using solar insolation data from Phoenix SOLMET tapes. Total energy produced, energy consumed, and operation and maintenance costs were calculated. The life cycle and levelized energy costs were determined for a variety of system configurations (i.e., varying amounts of photovoltaic array and battery storage).
Simulation of Cold Flow in a Truncated Ideal Nozzle with Film Cooling
NASA Technical Reports Server (NTRS)
Braman, Kalen; Ruf, Joseph
2015-01-01
Flow transients during rocket start-up and shut-down can lead to significant side loads on rocket nozzles. The capability to estimate these side loads computationally can streamline the nozzle design process. Towards this goal, the flow in a truncated ideal contour (TIC) nozzle has been simulated for a range of nozzle pressure ratios (NPRs) aimed to match a series of cold flow experiments performed at the NASA MSFC Nozzle Test Facility. These simulations were performed with varying turbulence model choices and with four different versions of the TIC nozzle model geometry, each of which was created with a different simplification to the test article geometry.
1971-08-14
N-243 Flight and Guidance Centrifuge: Used for spacecraft mission simulations and adaptable to two configurations. Configuration 1: The cab will accommodate a three-man crew for space mission research. The accelerations and rates are intended to be smoothly applicable at very low values so that navigation and guidance procedures using a high-accuracy, out-the-window display may be simulated. Configuration 2: The simulator can use a one-man cab for human tolerance studies and performance testing. Atmosphere and temperature can be varied as stress inducements. This simulator is operated closed-loop with digital or analog computation. It is currently man-rated for 3.5g maximum.
Two-dimensional Lagrangian simulation of suspended sediment
Schoellhamer, David H.
1988-01-01
A two-dimensional laterally averaged model for suspended sediment transport in steady gradually varied flow that is based on the Lagrangian reference frame is presented. The layered Lagrangian transport model (LLTM) for suspended sediment simulates laterally averaged suspended sediment concentrations. The elevations of nearly horizontal streamlines and the simulation time step are selected to optimize model stability and efficiency. The computational elements are parcels of water that are moved along the streamlines in the Lagrangian sense and are mixed with neighboring parcels. Three applications show that the LLTM can accurately simulate theoretical and empirical nonequilibrium suspended sediment distributions and slug injections of suspended sediment in a laboratory flume.
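A hedged sketch of the parcel idea (array ends wrap here purely for brevity; the LLTM's actual mixing, settling, and streamline geometry are more elaborate):

    import numpy as np

    def step(conc, u, dx, dt, mix, settle):
        """conc[layer, parcel]: laterally averaged concentration in water parcels."""
        for k in range(conc.shape[0]):             # Lagrangian advection: move each
            shift = int(round(u[k] * dt / dx))     # layer's parcels along its
            conc[k] = np.roll(conc[k], shift)      # streamline (wraps at the ends)
        neighbors = np.roll(conc, 1, 0) - 2 * conc + np.roll(conc, -1, 0)
        conc = conc + mix * dt * neighbors         # mix with vertically adjacent parcels
        return conc * (1.0 - settle * dt)          # first-order settling loss

    c = np.zeros((4, 50))
    c[1, 10] = 1.0                                 # slug injection into one parcel
    for _ in range(20):
        c = step(c, u=np.array([1.0, 2.0, 2.0, 1.0]), dx=1.0, dt=1.0,
                 mix=0.05, settle=0.01)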
PAB3D Simulations for the CAWAPI F-16XL
NASA Technical Reports Server (NTRS)
Elmiligui, Alaa; Abdol-Hamid, K. S.; Massey, Steven J.
2007-01-01
Numerical simulations of the flow around the F-16XL are performed as a contribution to the Cranked Arrow Wing Aerodynamics Project International (CAWAPI) using the PAB3D CFD code. Two turbulence models are used in the calculations: a standard k-ε model and the Shih-Zhu-Lumley (SZL) algebraic stress model (ASM). Seven flight conditions are simulated, in which the free-stream Mach number varies from 0.242 to 0.97 and the angle of attack varies from 0 deg to 20 deg. Computational results, surface static pressure, boundary layer velocity profiles, and skin friction are presented and compared with flight data. Numerical results are generally in good agreement with flight data, considering that only one grid resolution is utilized for the different flight conditions simulated in this study. The ASM results are closer to the flight data than the k-ε model results. The ASM predicted a stronger primary vortex; however, the origin of the vortex and its footprint are approximately the same as in the k-ε predictions.
WE-D-303-00: Computational Phantoms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewis, John; Brigham and Women’s Hospital and Dana-Farber Cancer Institute, Boston, MA
2015-06-15
Modern medical physics deals with complex problems such as 4D radiation therapy and imaging quality optimization. Such problems involve a large number of radiological parameters, and anatomical and physiological breathing patterns. A major challenge is how to develop, test, evaluate and compare various new imaging and treatment techniques, which often involves testing over a large range of radiological parameters as well as varying patient anatomies and motions. It would be extremely challenging, if not impossible, both ethically and practically, to test every combination of parameters and every task on every type of patient under clinical conditions. Computer-based simulation using computational phantoms offers a practical technique with which to evaluate, optimize, and compare imaging technologies and methods. Within simulation, the computerized phantom provides a virtual model of the patient’s anatomy and physiology. Imaging data can be generated from it as if it were a live patient, using accurate models of the physics of the imaging and treatment process. With sophisticated simulation algorithms, it is possible to perform virtual experiments entirely on the computer. By serving as virtual patients, computational phantoms hold great promise in solving some of the most complex problems in modern medical physics. In this proposed symposium, we will present the history and recent developments of computational phantom models, share experiences in their application to advanced imaging and radiation applications, and discuss their promises and limitations. Learning Objectives: Understand the need and requirements of computational phantoms in medical physics research; Discuss the developments and applications of computational phantoms; Know the promises and limitations of computational phantoms in solving complex problems.
1D-3D hybrid modeling-from multi-compartment models to full resolution models in space and time.
Grein, Stephan; Stepniewski, Martin; Reiter, Sebastian; Knodel, Markus M; Queisser, Gillian
2014-01-01
Investigation of cellular and network dynamics in the brain by means of modeling and simulation has evolved into a highly interdisciplinary field that uses sophisticated modeling and simulation approaches to understand distinct areas of brain function. Depending on the underlying complexity, these models vary in their level of detail in order to cope with the attached computational cost. Hence, for large network simulations, single neurons are typically reduced to time-dependent signal processors, dismissing the spatial aspect of each cell. For single cells or networks with relatively small numbers of neurons, general purpose simulators allow for space- and time-dependent simulations of electrical signal processing, based on cable equation theory. An emerging field in Computational Neuroscience encompasses a new level of detail by incorporating the full three-dimensional morphology of cells and organelles into three-dimensional, space- and time-dependent simulations. While every approach has its advantages and limitations, such as computational cost, integrated and methods-spanning simulation approaches could, depending on the network size, establish new ways to investigate the brain. In this paper we present a hybrid simulation approach that makes use of reduced 1D models (using, e.g., the NEURON simulator), which couple to fully resolved models for simulating cellular and sub-cellular dynamics, including the detailed three-dimensional morphology of neurons and organelles. In order to couple 1D and 3D simulations, we present a geometry, membrane potential, and intracellular concentration mapping framework with which graph-based morphologies, e.g., in the swc or hoc format, are mapped to full surface and volume representations of the neuron, and computational data from 1D simulations can be used as boundary conditions for full 3D simulations and vice versa. Thus, established models and data based on general purpose 1D simulators can be directly coupled to the emerging field of fully resolved, highly detailed 3D modeling approaches. We present the developed general framework for 1D/3D hybrid modeling and apply it to investigate electrically active neurons and their intracellular spatio-temporal calcium dynamics.
Automated problem scheduling and reduction of synchronization delay effects
NASA Technical Reports Server (NTRS)
Saltz, Joel H.
1987-01-01
It is anticipated that in order to make effective use of many future high performance architectures, programs will have to exhibit at least a medium-grained parallelism. A framework is presented for partitioning very sparse triangular systems of linear equations that is designed to produce favorable performance results in a wide variety of parallel architectures. Efficient methods for solving these systems are of interest because: (1) they provide a useful model problem for use in exploring heuristics for the aggregation, mapping and scheduling of relatively fine grained computations whose data dependencies are specified by directed acyclic graphs, and (2) such efficient methods can find direct application in the development of parallel algorithms for scientific computation. Simple expressions are derived that describe how to schedule computational work with varying degrees of granularity. The Encore Multimax was used as a hardware simulator to investigate the performance effects of using the partitioning techniques presented in shared memory architectures with varying relative synchronization costs.
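A small sketch of one standard way to expose that parallelism, level scheduling of a sparse lower-triangular solve (illustrative; the paper's aggregation heuristics go further): unknowns in the same level share no mutual dependencies and can be solved concurrently, and granularity can be coarsened by grouping several levels per task.

    def levels_from_dag(n, deps):
        """deps[i] = list of unknowns j < i that unknown i depends on (row i nonzeros)."""
        level = [0] * n
        for i in range(n):
            level[i] = 1 + max((level[j] for j in deps[i]), default=-1)
        buckets = {}
        for i, l in enumerate(level):
            buckets.setdefault(l, []).append(i)
        return [buckets[l] for l in sorted(buckets)]   # parallel wavefronts, in order

    # unknown 2 needs 0 and 1; unknown 3 needs 2 -> levels [[0, 1], [2], [3]]
    print(levels_from_dag(4, {0: [], 1: [], 2: [0, 1], 3: [2]}))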
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simonetto, Andrea; Dall'Anese, Emiliano
This article develops online algorithms to track solutions of time-varying constrained optimization problems. Particularly, resembling workhorse Kalman filtering-based approaches for dynamical systems, the proposed methods involve prediction-correction steps to provably track the trajectory of the optimal solutions of time-varying convex problems. The merits of existing prediction-correction methods have been shown for unconstrained problems and for setups where computing the inverse of the Hessian of the cost function is computationally affordable. This paper addresses the limitations of existing methods by tackling constrained problems and by designing first-order prediction steps that rely on the Hessian of the cost function (and do not require the computation of its inverse). In addition, the proposed methods are shown to improve the convergence speed of existing prediction-correction methods when applied to unconstrained problems. Numerical simulations corroborate the analytical results and showcase performance and benefits of the proposed algorithms. A realistic application of the proposed method to real-time control of energy resources is presented.
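A minimal unconstrained prediction-correction sketch under stated assumptions (a quadratic toy cost; the paper's constrained, first-order variants are more involved): the prediction solves a linear system with the Hessian rather than forming its inverse, and the correction takes a gradient step on the cost at the new time.

    import numpy as np

    def track(x, grad, hess, grad_t, times, alpha=0.1):
        xs = []
        for k in range(len(times) - 1):
            dt = times[k + 1] - times[k]
            # prediction: H(x,t) dx = -dt * d(grad f)/dt, solved without inverting H
            dx = np.linalg.solve(hess(x, times[k]), -dt * grad_t(x, times[k]))
            x = x + dx
            # correction: one gradient step on the cost at the new time
            x = x - alpha * grad(x, times[k + 1])
            xs.append(x)
        return xs

    # track the minimizer of f(x; t) = 0.5 * ||x - c(t)||^2 with c(t) = [t, -t]
    c = lambda t: np.array([t, -t])
    xs = track(np.zeros(2),
               grad=lambda x, t: x - c(t),
               hess=lambda x, t: np.eye(2),
               grad_t=lambda x, t: -np.array([1.0, -1.0]),   # d(grad)/dt = -c'(t)
               times=np.linspace(0, 1, 11))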
NASA Astrophysics Data System (ADS)
Homainejad, Amir S.; Satari, Mehran
2000-05-01
VR brings users toward reality by computer, and a VE is a simulated world that can take users to any point and viewing direction on the object. VR and VE can be very useful if accurate and precise data are used, allowing users to work with a realistic model. Photogrammetry is a technique able to collect and provide accurate and precise data for building a 3D model in a computer. Data can be collected from various sensors and cameras, and methods of data collection vary with the method of image acquisition. VR encompasses real-time graphics, 3D models, and display, and it has applications in the entertainment industry, flight simulators, and industrial design.
Leake, S.A.; Galloway, D.L.
2007-01-01
A new computer program was developed to simulate vertical compaction in models of regional ground-water flow. The program simulates ground-water storage changes and compaction in discontinuous interbeds or in extensive confining units, accounting for stress-dependent changes in storage properties. The new program is a package for MODFLOW, the U.S. Geological Survey modular finite-difference ground-water flow model. Several features of the program make it useful for application in shallow, unconfined flow systems. Geostatic stress can be treated as a function of water-table elevation, and compaction is a function of computed changes in effective stress at the bottom of a model layer. Thickness of compressible sediments in an unconfined model layer can vary in proportion to saturated thickness.
NASA Technical Reports Server (NTRS)
Ha Minh, H.; Viegas, J. R.; Rubesin, M. W.; Spalart, P.; Vandromme, D. D.
1989-01-01
The turbulent boundary layer under a freestream whose velocity varies sinusoidally in time around a zero mean is computed using two second-order turbulence closure models. The time- or phase-dependent behavior of the Reynolds stresses is analyzed, and results are compared with those of a previous Spalart-Baldwin direct simulation. Comparisons show that the second-order modeling is quite satisfactory for almost all phase angles, except in the relaminarization period, where the computations lead to a relatively high wall shear stress.
R2 effect-size measures for mediation analysis
Fairchild, Amanda J.; MacKinnon, David P.; Taborga, Marcia P.; Taylor, Aaron B.
2010-01-01
R2 effect-size measures are presented to assess variance accounted for in mediation models. The measures offer a means to evaluate both component paths and the overall mediated effect in mediation models. Statistical simulation results indicate acceptable bias across varying parameter and sample-size combinations. The measures are applied to a real-world example using data from a team-based health promotion program to improve the nutrition and exercise habits of firefighters. SAS and SPSS computer code are also provided for researchers to compute the measures in their own data. PMID:19363189
R2 effect-size measures for mediation analysis.
Fairchild, Amanda J; Mackinnon, David P; Taborga, Marcia P; Taylor, Aaron B
2009-05-01
R2 effect-size measures are presented to assess variance accounted for in mediation models. The measures offer a means to evaluate both component paths and the overall mediated effect in mediation models. Statistical simulation results indicate acceptable bias across varying parameter and sample-size combinations. The measures are applied to a real-world example using data from a team-based health promotion program to improve the nutrition and exercise habits of firefighters. SAS and SPSS computer code are also provided for researchers to compute the measures in their own data.
Connected word recognition using a cascaded neuro-computational model
NASA Astrophysics Data System (ADS)
Hoya, Tetsuya; van Leeuwen, Cees
2016-10-01
We propose a novel framework for processing a continuous speech stream that contains a varying number of words, as well as non-speech periods. Speech samples are segmented into word-tokens and non-speech periods. An augmented version of an earlier-proposed, cascaded neuro-computational model is used for recognising individual words within the stream. Simulation studies using both a multi-speaker-dependent and speaker-independent digit string database show that the proposed method yields a recognition performance comparable to that obtained by a benchmark approach using hidden Markov models with embedded training.
Efficient and Flexible Computation of Many-Electron Wave Function Overlaps.
Plasser, Felix; Ruckenbauer, Matthias; Mai, Sebastian; Oppel, Markus; Marquetand, Philipp; González, Leticia
2016-03-08
A new algorithm for the computation of the overlap between many-electron wave functions is described. This algorithm allows for the extensive use of recurring intermediates and thus provides high computational efficiency. Because of the general formalism employed, overlaps can be computed for varying wave function types, molecular orbitals, basis sets, and molecular geometries. This paves the way for efficiently computing nonadiabatic interaction terms for dynamics simulations. In addition, other application areas can be envisaged, such as the comparison of wave functions constructed at different levels of theory. Aside from explaining the algorithm and evaluating the performance, a detailed analysis of the numerical stability of wave function overlaps is carried out, and strategies for overcoming potential severe pitfalls due to displaced atoms and truncated wave functions are presented.
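For orientation, the basic building block for single-determinant wave functions is a determinant of molecular-orbital overlaps; a toy sketch under stated assumptions (orthonormal AOs, random coefficients; the paper's reuse of recurring intermediates across many CI determinant pairs is not shown):

    import numpy as np

    def det_overlap(C1, C2, S_ao):
        # <Phi1|Phi2> = det(C1^T S_ao C2) for occupied-MO coefficient matrices
        return np.linalg.det(C1.T @ S_ao @ C2)

    rng = np.random.default_rng(0)
    n_ao, n_occ = 6, 3
    C1 = rng.standard_normal((n_ao, n_occ))   # occupied MOs of wave function 1
    C2 = rng.standard_normal((n_ao, n_occ))   # occupied MOs of wave function 2
    S_ao = np.eye(n_ao)                       # orthonormal AOs for this toy example
    print(det_overlap(C1, C2, S_ao))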
DSMC simulations of shock interactions about sharp double cones
NASA Astrophysics Data System (ADS)
Moss, James N.
2001-08-01
This paper presents the results of a numerical study of shock interactions resulting from Mach 10 flow about sharp double cones. Computations are made by using the direct simulation Monte Carlo (DSMC) method of Bird. The sensitivity and characteristics of the interactions are examined by varying flow conditions, model size, and configuration. The range of conditions investigated includes those for which experiments have been or will be performed in the ONERA R5Ch low-density wind tunnel and the Calspan-University of Buffalo Research Center (CUBRC) Large Energy National Shock (LENS) tunnel.
DSMC Simulations of Shock Interactions About Sharp Double Cones
NASA Technical Reports Server (NTRS)
Moss, James N.
2000-01-01
This paper presents the results of a numerical study of shock interactions resulting from Mach 10 flow about sharp double cones. Computations are made by using the direct simulation Monte Carlo (DSMC) method of Bird. The sensitivity and characteristics of the interactions are examined by varying flow conditions, model size, and configuration. The range of conditions investigated includes those for which experiments have been or will be performed in the ONERA R5Ch low-density wind tunnel and the Calspan-University of Buffalo Research Center (CUBRC) Large Energy National Shock (LENS) tunnel.
Monte Carlo simulation of nonadiabatic expansion in cometary atmospheres - Halley
NASA Astrophysics Data System (ADS)
Hodges, R. R.
1990-02-01
Monte Carlo methods developed for the characterization of velocity-dependent collision processes and ballistic transport in planetary exospheres form the basis of the present computer simulation of icy comet atmospheres, which iteratively undertakes the simultaneous determination of velocity distributions for five neutral species (water, together with suprathermal OH, H2, O, and H) in a flow regime varying from the hydrodynamic to the ballistic. Experimental data from the neutral mass spectrometer carried by Giotto during its March 1986 encounter with Halley are compared with a model atmosphere.
Can biophysical properties of submersed macrophytes be determined by remote sensing?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malthus, T.J.; Ciraolo, G.; La Loggia, G.
1997-06-01
This paper details the development of a computationally efficient Monte Carlo simulation program to model photon transport through submersed plant canopies, with emphasis on Seagrass communities. The model incorporates three components: the transmission of photons through a water column of varying depth and turbidity; the interaction of photons within a submersed plant canopy of varying biomass; and interactions with the bottom substrate. The three components of the model are discussed. Simulations were performed based on measured parameters for Posidonia oceanica and compared to measured subsurface reflectance spectra made over comparable seagrass communities in Sicilian coastal waters. It is shown that the output is realistic. Further simulations are undertaken to investigate the effect of depth and turbidity of the overlying water column. Both sets of results indicate the rapid loss of canopy signal as depth increases and water column phytoplankton concentrations increase. The implications for the development of algorithms for the estimation of submersed canopy biophysical parameters are briefly discussed.
Nonlinear to Linear Elastic Code Coupling in 2-D Axisymmetric Media.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Preston, Leiph
Explosions within the earth nonlinearly deform the local media, but at typical seismological observation distances, the seismic waves can be considered linear. Although nonlinear algorithms can simulate explosions in the very near field well, these codes are computationally expensive and inaccurate at propagating these signals to great distances. A linearized wave propagation code, coupled to a nonlinear code, provides an efficient mechanism to both accurately simulate the explosion itself and to propagate these signals to distant receivers. To this end we have coupled Sandia's nonlinear simulation algorithm CTH to a linearized elastic wave propagation code for 2-D axisymmetric media (axiElasti) by passing information from the nonlinear to the linear code via time-varying boundary conditions. In this report, we first develop the 2-D axisymmetric elastic wave equations in cylindrical coordinates. Next we show how we design the time-varying boundary conditions passing information from CTH to axiElasti, and finally we demonstrate the coupling code via a simple study of the elastic radius.
Xiong, Guanglei; Figueroa, C. Alberto; Xiao, Nan; Taylor, Charles A.
2011-01-01
Simulation of blood flow using image-based models and computational fluid dynamics has found widespread application to quantifying hemodynamic factors relevant to the initiation and progression of cardiovascular diseases and for planning interventions. Methods for creating subject-specific geometric models from medical imaging data have improved substantially in the last decade but for many problems still require significant user interaction. In addition, while fluid–structure interaction methods are being employed to model blood flow and vessel wall dynamics, tissue properties are often assumed to be uniform. In this paper, we propose a novel workflow for simulating blood flow using subject-specific geometry and spatially varying wall properties. The geometric model construction is based on 3D segmentation and geometric processing. Variable wall properties are assigned to the model based on combining centerline-based and surface-based methods. We finally demonstrate these new methods using an idealized cylindrical model and two subject-specific vascular models with thoracic and cerebral aneurysms. PMID:21765984
NASA Astrophysics Data System (ADS)
Johnson, Ryan Federick; Chelliah, Harsha Kumar
2017-01-01
For a range of flow and chemical timescales, numerical simulations of two-dimensional laminar flow over a reacting carbon surface were performed to understand further the complex coupling between heterogeneous and homogeneous reactions. An open-source computational package (OpenFOAM®) was used with previously developed lumped heterogeneous reaction models for carbon surfaces and a detailed homogeneous reaction model for CO oxidation. The influence of finite-rate chemical kinetics was explored by varying the surface temperatures from 1800 to 2600 K, while flow residence time effects were explored by varying the free-stream velocity up to 50 m/s. The reacting boundary layer structure dependence on the residence time was analysed by extracting the ratio of chemical source and species diffusion terms. The important contributions of radical species reactions on overall carbon removal rate, which is often neglected in multi-dimensional simulations, are highlighted. The results provide a framework for future development and validation of lumped heterogeneous reaction models based on multi-dimensional reacting flow configurations.
Archer, A.W.; Maples, C.G.
1989-01-01
Numerous departures from ideal relationships are revealed by Monte Carlo simulations of widely accepted binomial coefficients. For example, simulations incorporating varying levels of matrix sparseness (presence of zeros indicating lack of data) and computation of expected values reveal that not only are all common coefficients influenced by zero data, but also that some coefficients do not discriminate between sparse or dense matrices (few zero data). Such coefficients computationally merge mutually shared and mutually absent information and do not exploit all the information incorporated within the standard 2 × 2 contingency table; therefore, the commonly used formulae for such coefficients are more complicated than the actual range of values produced. Other coefficients do differentiate between mutual presences and absences; however, a number of these coefficients do not demonstrate a linear relationship to matrix sparseness. Finally, simulations using nonrandom matrices with known degrees of row-by-row similarities signify that several coefficients either do not display a reasonable range of values or are nonlinear with respect to known relationships within the data. Analyses with nonrandom matrices yield clues as to the utility of certain coefficients for specific applications. For example, coefficients such as Jaccard, Dice, and Baroni-Urbani and Buser are useful if correction of sparseness is desired, whereas the Russell-Rao coefficient is useful when sparseness correction is not desired. © 1989 International Association for Mathematical Geology.
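For reference, these coefficients operate on the 2 × 2 table entries a (mutual presences), b and c (mismatches), and d (mutual absences); Jaccard and Dice discard d, while Russell-Rao keeps it in the denominator, which is why sparseness affects them differently:

    def jaccard(a, b, c, d):
        return a / (a + b + c)            # ignores mutual absences d

    def dice(a, b, c, d):
        return 2 * a / (2 * a + b + c)    # ignores d, weights matches double

    def russell_rao(a, b, c, d):
        return a / (a + b + c + d)        # d inflates only the denominator

    # a sparse matrix (many shared zeros) leaves Jaccard unchanged but drags
    # Russell-Rao toward zero:
    print(jaccard(5, 2, 3, 0), russell_rao(5, 2, 3, 0))    # 0.5  0.5
    print(jaccard(5, 2, 3, 90), russell_rao(5, 2, 3, 90))  # 0.5  0.05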
Protocols for efficient simulations of long-time protein dynamics using coarse-grained CABS model.
Jamroz, Michal; Kolinski, Andrzej; Kmiecik, Sebastian
2014-01-01
Coarse-grained (CG) modeling is a well-acknowledged simulation approach for gaining insight into long-time-scale protein folding events at reasonable computational cost. Depending on the design of a CG model, the simulation protocols vary from highly case-specific protocols, which require user-defined assumptions about the folding scenario, to more sophisticated blind prediction methods for which only a protein sequence is required. Here we describe the framework protocol for simulations of the long-term dynamics of globular proteins, with the use of the CABS CG protein model and sequence data. The simulations can start from a random or a selected (e.g., native) structure. The described protocol has been validated against experimental data for protein folding model systems; the prediction results agreed well with the experimental results.
Knowledge-based computer systems for radiotherapy planning.
Kalet, I J; Paluszynski, W
1990-08-01
Radiation therapy is one of the first areas of clinical medicine to utilize computers in support of routine clinical decision making. The role of the computer has evolved from simple dose calculations to elaborate interactive graphic three-dimensional simulations. These simulations can combine external irradiation from megavoltage photons, electrons, and particle beams with interstitial and intracavitary sources. With the flexibility and power of modern radiotherapy equipment and the ability of computer programs that simulate anything the machinery can do, we now face a challenge to utilize this capability to design more effective radiation treatments. How can we manage the increased complexity of sophisticated treatment planning? A promising approach will be to use artificial intelligence techniques to systematize our present knowledge about design of treatment plans, and to provide a framework for developing new treatment strategies. Far from replacing the physician, physicist, or dosimetrist, artificial intelligence-based software tools can assist the treatment planning team in producing more powerful and effective treatment plans. Research in progress using knowledge-based (AI) programming in treatment planning already has indicated the usefulness of such concepts as rule-based reasoning, hierarchical organization of knowledge, and reasoning from prototypes. Problems to be solved include how to handle continuously varying parameters and how to evaluate plans in order to direct improvements.
Optimization behavior of brainstem respiratory neurons. A cerebral neural network model.
Poon, C S
1991-01-01
A recent model of respiratory control suggested that the steady-state respiratory responses to CO2 and exercise may be governed by an optimal control law in the brainstem respiratory neurons. It was not certain, however, whether such complex optimization behavior could be accomplished by a realistic biological neural network. To test this hypothesis, we developed a hybrid computer-neural model in which the dynamics of the lung, brain and other tissue compartments were simulated on a digital computer. Mimicking the "controller" was a human subject who pedalled on a bicycle with varying speed (analog of ventilatory output) with a view to minimize an analog signal of the total cost of breathing (chemical and mechanical) which was computed interactively and displayed on an oscilloscope. In this manner, the visuomotor cortex served as a proxy (homolog) of the brainstem respiratory neurons in the model. Results in 4 subjects showed a linear steady-state ventilatory CO2 response to arterial PCO2 during simulated CO2 inhalation and a nearly isocapnic steady-state response during simulated exercise. Thus, neural optimization is a plausible mechanism for respiratory control during exercise and can be achieved by a neural network with cognitive computational ability without the need for an exercise stimulus.
1D-3D hybrid modeling—from multi-compartment models to full resolution models in space and time
Grein, Stephan; Stepniewski, Martin; Reiter, Sebastian; Knodel, Markus M.; Queisser, Gillian
2014-01-01
Investigation of cellular and network dynamics in the brain by means of modeling and simulation has evolved into a highly interdisciplinary field that uses sophisticated modeling and simulation approaches to understand distinct areas of brain function. Depending on the underlying complexity, these models vary in their level of detail in order to cope with the attached computational cost. Hence, for large network simulations, single neurons are typically reduced to time-dependent signal processors, dismissing the spatial aspect of each cell. For single cells or networks with relatively small numbers of neurons, general purpose simulators allow for space- and time-dependent simulations of electrical signal processing, based on cable equation theory. An emerging field in Computational Neuroscience encompasses a new level of detail by incorporating the full three-dimensional morphology of cells and organelles into three-dimensional, space- and time-dependent simulations. While every approach has its advantages and limitations, such as computational cost, integrated and methods-spanning simulation approaches could, depending on the network size, establish new ways to investigate the brain. In this paper we present a hybrid simulation approach that makes use of reduced 1D models (using, e.g., the NEURON simulator), which couple to fully resolved models for simulating cellular and sub-cellular dynamics, including the detailed three-dimensional morphology of neurons and organelles. In order to couple 1D and 3D simulations, we present a geometry, membrane potential, and intracellular concentration mapping framework with which graph-based morphologies, e.g., in the swc or hoc format, are mapped to full surface and volume representations of the neuron, and computational data from 1D simulations can be used as boundary conditions for full 3D simulations and vice versa. Thus, established models and data based on general purpose 1D simulators can be directly coupled to the emerging field of fully resolved, highly detailed 3D modeling approaches. We present the developed general framework for 1D/3D hybrid modeling and apply it to investigate electrically active neurons and their intracellular spatio-temporal calcium dynamics. PMID:25120463
Implementation of tetrahedral-mesh geometry in Monte Carlo radiation transport code PHITS
NASA Astrophysics Data System (ADS)
Furuta, Takuya; Sato, Tatsuhiko; Han, Min Cheol; Yeom, Yeon Soo; Kim, Chan Hyeong; Brown, Justin L.; Bolch, Wesley E.
2017-06-01
A new function to treat tetrahedral-mesh geometry was implemented in the Particle and Heavy Ion Transport code System (PHITS). To accelerate the computational speed in the transport process, an original algorithm was introduced to initially prepare decomposition maps for the container box of the tetrahedral-mesh geometry. The computational performance was tested by conducting radiation transport simulations of 100 MeV protons and 1 MeV photons in a water phantom represented by tetrahedral mesh. The simulation was repeated with a varying number of meshes, and the required computational times were then compared with those of the conventional voxel representation. Our results show that the computational costs for each boundary crossing of the region mesh are essentially equivalent for the two representations. This study suggests that the tetrahedral-mesh representation offers not only a flexible description of the transport geometry but also improved computational efficiency for radiation transport. Because tetrahedrons adapt in both size and shape, dosimetrically equivalent objects can be represented with far fewer meshes than in a voxelized representation. Our study additionally included dosimetric calculations using a computational human phantom. A significant acceleration of the computational speed, about a factor of 4, was confirmed by the adoption of a tetrahedral mesh over the traditional voxel mesh geometry.
Implementation of tetrahedral-mesh geometry in Monte Carlo radiation transport code PHITS.
Furuta, Takuya; Sato, Tatsuhiko; Han, Min Cheol; Yeom, Yeon Soo; Kim, Chan Hyeong; Brown, Justin L; Bolch, Wesley E
2017-06-21
A new function to treat tetrahedral-mesh geometry was implemented in the Particle and Heavy Ion Transport code System (PHITS). To accelerate the computational speed in the transport process, an original algorithm was introduced to initially prepare decomposition maps for the container box of the tetrahedral-mesh geometry. The computational performance was tested by conducting radiation transport simulations of 100 MeV protons and 1 MeV photons in a water phantom represented by tetrahedral mesh. The simulation was repeated with a varying number of meshes, and the required computational times were then compared with those of the conventional voxel representation. Our results show that the computational costs for each boundary crossing of the region mesh are essentially equivalent for the two representations. This study suggests that the tetrahedral-mesh representation offers not only a flexible description of the transport geometry but also improved computational efficiency for radiation transport. Because tetrahedrons adapt in both size and shape, dosimetrically equivalent objects can be represented with far fewer meshes than in a voxelized representation. Our study additionally included dosimetric calculations using a computational human phantom. A significant acceleration of the computational speed, about a factor of 4, was confirmed by the adoption of a tetrahedral mesh over the traditional voxel mesh geometry.
Convection- and SASI-driven flows in parametrized models of core-collapse supernova explosions
Endeve, E.; Cardall, C. Y.; Budiardja, R. D.; ...
2016-01-21
We present initial results from three-dimensional simulations of parametrized core-collapse supernova (CCSN) explosions obtained with our astrophysical simulation code General Astrophysical Simulation System (GenASIS). We are interested in nonlinear flows resulting from neutrino-driven convection and the standing accretion shock instability (SASI) in the CCSN environment prior to and during the explosion. By varying parameters in our model that control neutrino heating and shock dissociation, our simulations result in convection-dominated and SASI-dominated evolution. We describe this initial set of simulation results in some detail. To characterize the turbulent flows in the simulations, we compute and compare velocity power spectra from convection-dominated and SASI-dominated (both non-exploding and exploding) models. When compared to SASI-dominated models, convection-dominated models exhibit significantly more power on small spatial scales.
NASA Astrophysics Data System (ADS)
Shy, L. Y.; Eichinger, B. E.
1989-05-01
Computer simulations of the formation of trifunctional and tetrafunctional polydimethylsiloxane networks that are crosslinked by condensation of telechelic chains with multifunctional crosslinking agents have been carried out on systems containing up to 1.05×10^6 chains. Eigenvalue spectra of Kirchhoff matrices for these networks have been evaluated at two levels of approximation: (1) inclusion of all midchain modes, and (2) suppression of midchain modes. By use of the recursion method of Haydock and Nex, we have been able to effectively diagonalize matrices with 730 498 rows and columns without actually constructing matrices of this size. The small eigenvalues have been computed by use of the Lanczos algorithm. We demonstrate the following results: (1) The smallest eigenvalues (with chain modes suppressed) vary as μ^(-2/3) for sufficiently large μ, where μ is the number of junctions in the network; (2) the eigenvalue spectra of the Kirchhoff matrices are well described by McKay's theory for random regular graphs in the range of the larger eigenvalues, but there are significant departures in the region of small eigenvalues, where computed spectra have many more small eigenvalues than random regular graphs; (3) the smallest eigenvalues vary as n^(-1.78), where n is the number of Rouse beads in the chains that comprise the network. Computations are done for both monodisperse and polydisperse chain length distributions. Large eigenvalues associated with localized motion of the junctions are found, as predicted by theory. The relationship between the small eigenvalues and the equilibrium modulus of elasticity is discussed, as is the relationship between viscoelasticity and the band edge of the spectrum.
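A small-scale sketch of the linear-algebra step (using SciPy's Lanczos-based eigsh on a toy network; the recursion-method machinery needed for million-chain systems is beyond a snippet): the Kirchhoff matrix is the graph Laplacian K = D - A, and the smallest eigenvalues come from the sparse symmetric eigensolver.

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import eigsh

    def kirchhoff(n, edges):
        i, j = zip(*edges)
        A = sp.coo_matrix((np.ones(len(edges)), (i, j)), shape=(n, n))
        A = A + A.T                                    # undirected network
        deg = np.asarray(A.sum(axis=1)).ravel()
        return (sp.diags(deg) - A).tocsr()             # K = D - A

    K = kirchhoff(4, [(0, 1), (1, 2), (2, 3), (3, 0)])  # a 4-junction ring
    vals = eigsh(K, k=3, which="SA")[0]                 # smallest eigenvalues (Lanczos)
    print(vals)                                         # ~ [0, 2, 2] for the ring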
Digital data processing system dynamic loading analysis
NASA Technical Reports Server (NTRS)
Lagas, J. J.; Peterka, J. J.; Tucker, A. E.
1976-01-01
Simulation and analysis of the Space Shuttle Orbiter Digital Data Processing System (DDPS) are reported. The mated flight and postseparation flight phases of the space shuttle's approach and landing test configuration were modeled utilizing the Information Management System Interpretative Model (IMSIM) in a computerized simulation modeling of the ALT hardware, software, and workload. System requirements simulated for the ALT configuration were defined. Sensitivity analyses determined areas of potential data flow problems in DDPS operation. Based on the defined system requirements and the sensitivity analyses, a test design is described for adapting, parameterizing, and executing the IMSIM. Varying load and stress conditions for the model execution are given. The analyses of the computer simulation runs were documented as results, conclusions, and recommendations for DDPS improvements.
Space shuttle orbiter digital data processing system timing sensitivity analysis OFT ascent phase
NASA Technical Reports Server (NTRS)
Lagas, J. J.; Peterka, J. J.; Becker, D. A.
1977-01-01
Dynamic loads were investigated to provide simulation and analysis of the space shuttle orbiter digital data processing system (DDPS). Segments of the ascent test (OFT) configuration were modeled utilizing the information management system interpretive model (IMSIM) in a computerized simulation modeling of the OFT hardware and software workload. System requirements for simulation of the OFT configuration were defined, and sensitivity analyses determined areas of potential data flow problems in DDPS operation. Based on the defined system requirements and these sensitivity analyses, a test design was developed for adapting, parameterizing, and executing IMSIM, using varying load and stress conditions for model execution. Analyses of the computer simulation runs are documented, including results, conclusions, and recommendations for DDPS improvements.
Implicit Multibody Penalty-Based Distributed Contact.
Xu, Hongyi; Zhao, Yili; Barbic, Jernej
2014-09-01
The penalty method is a simple and popular approach to resolving contact in computer graphics and robotics. Penalty-based contact, however, suffers from stability problems due to the highly variable and unpredictable net stiffness, and this is particularly pronounced in simulations with time-varying distributed geometrically complex contact. We employ semi-implicit integration, exact analytical contact gradients, symbolic Gaussian elimination and a SVD solver to simulate stable penalty-based frictional contact with large, time-varying contact areas, involving many rigid objects and articulated rigid objects in complex conforming contact and self-contact. We also derive implicit proportional-derivative control forces for real-time control of articulated structures with loops. We present challenging contact scenarios such as screwing a hexbolt into a hole, bowls stacked in perfectly conforming configurations, and manipulating many objects using actively controlled articulated mechanisms in real time.
NASA Astrophysics Data System (ADS)
Li, Zheng; Jiang, Yi-han; Duan, Lian; Zhu, Chao-zhe
2017-08-01
Objective. Functional near infra-red spectroscopy (fNIRS) is a promising brain imaging technology for brain-computer interfaces (BCI). Future clinical uses of fNIRS will likely require operation over long time spans, during which neural activation patterns may change. However, current decoders for fNIRS signals are not designed to handle changing activation patterns. The objective of this study is to test via simulations a new adaptive decoder for fNIRS signals, the Gaussian mixture model adaptive classifier (GMMAC). Approach. GMMAC can simultaneously classify and track activation pattern changes without the need for ground-truth labels. This adaptive classifier uses computationally efficient variational Bayesian inference to label new data points and update mixture model parameters, using the previous model parameters as priors. We test GMMAC in simulations in which neural activation patterns change over time and compare to static decoders and unsupervised adaptive linear discriminant analysis classifiers. Main results. Our simulation experiments show GMMAC can accurately decode under time-varying activation patterns: shifts of activation region, expansions of activation region, and combined contractions and shifts of activation region. Furthermore, the experiments show the proposed method can track the changing shape of the activation region. Compared to prior work, GMMAC performed significantly better than the other unsupervised adaptive classifiers on a difficult activation pattern change simulation: 99% versus <54% in two-choice classification accuracy. Significance. We believe GMMAC will be useful for clinical fNIRS-based brain-computer interfaces, including neurofeedback training systems, where operation over long time spans is required.
Software Would Largely Automate Design of Kalman Filter
NASA Technical Reports Server (NTRS)
Chuang, Jason C. H.; Negast, William J.
2005-01-01
Embedded Navigation Filter Automatic Designer (ENFAD) is a computer program being developed to automate the most difficult tasks in designing embedded software to implement a Kalman filter in a navigation system. The most difficult tasks are the selection of the error states of the filter and the tuning of filter parameters, which are time-consuming trial-and-error tasks that require expertise and rarely yield optimum results. An optimum selection of error states and filter parameters depends on navigation-sensor and vehicle characteristics, and on filter processing time. ENFAD would include a simulation module that would incorporate all possible error states with respect to a given set of vehicle and sensor characteristics. The first of two iterative optimization loops would vary the selection of error states until the best filter performance was achieved in Monte Carlo simulations. For a fixed selection of error states, the second loop would vary the filter parameter values until an optimal performance value was obtained. Design constraints would be satisfied in the optimization loops. Users would supply vehicle and sensor test data that would be used to refine digital models in ENFAD. Filter processing time and filter accuracy would be computed by ENFAD.
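The two loops can be pictured as a nested search (a hypothetical sketch, not ENFAD's implementation; 'simulate' and the state names stand in for the Monte Carlo evaluation and error states the program would use):

    import itertools

    def tune_filter(candidate_states, param_grid, simulate):
        """Outer loop: error-state subsets; inner loop: filter parameter values."""
        best_score, best_subset, best_params = float("inf"), None, None
        for r in range(1, len(candidate_states) + 1):
            for subset in itertools.combinations(candidate_states, r):
                for params in param_grid:
                    score = simulate(subset, params)   # Monte Carlo performance metric
                    if score < best_score:
                        best_score, best_subset, best_params = score, subset, params
        return best_subset, best_params

    # toy usage with a made-up scoring function
    states = ["gyro_bias", "accel_bias", "clock_drift"]
    grid = [{"q": q} for q in (0.01, 0.1, 1.0)]
    print(tune_filter(states, grid, lambda s, p: len(s) * p["q"]))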
Computational modelling of cosmic rays in the neighbourhood of the Sun
NASA Astrophysics Data System (ADS)
Potgieter, M. S.; Strauss, R. D.
2017-10-01
The heliosphere is defined as the plasmatic influence sphere of the Sun and stretches far beyond the solar system. Cosmic rays, charged particles with energies between about 1 MeV and millions of GeV arriving from our own Galaxy and beyond, penetrate the heliosphere and encounter the solar wind and embedded magnetic field, so that when observed they contain useful information about the basic features of the heliosphere. In order to interpret these observations, obtained on and near the Earth and farther away by several space missions, and to gain understanding of the underlying physics, called heliophysics, we need to simulate the heliosphere and the acceleration, propagation and transport of these astroparticles with numerical models. These types of models vary from magnetohydrodynamic-based approaches for simulating the heliosphere to standard finite-difference numerical schemes that solve transport-type partial differential equations of varying complexity. A large number of these models have been developed locally to do internationally competitive research and have as such become an important training tool for human capacity development in computational physics in South Africa. How these models are applied to various aspects of heliospheric space physics is discussed in this overview, with illustrative examples.
A stochastic Markov chain approach for tennis: Monte Carlo simulation and modeling
NASA Astrophysics Data System (ADS)
Aslam, Kamran
This dissertation describes the computational formulation of probability density functions (pdfs) that facilitate head-to-head match simulations in tennis, along with ranking systems developed from their use. A background on the statistical method used to develop the pdfs, the Monte Carlo method, and the resulting rankings are included, along with a discussion of ranking methods currently used both in professional sports and in other applications. Using an analytical theory developed by Newton and Keller in [34] that defines a tennis player's probability of winning a game, set, match and single elimination tournament, a computational simulation has been developed in Matlab that allows further modeling not previously possible with the analytical theory alone. Such experimentation consists of the exploration of non-iid effects, considers the varying importance of points in a match and allows an unlimited number of matches to be simulated between unlikely opponents. The results of these studies have provided pdfs that accurately model an individual tennis player's ability along with a realistic, fair and mathematically sound platform for ranking them.
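For concreteness, the game-level piece of such an analytical theory has a familiar closed form when each point is won independently with probability p (a standard i.i.d. result consistent with the Newton-Keller framework; q = 1 - p): win to love, 15, or 30, or reach deuce and win it with probability p^2 / (1 - 2pq).

    def p_game(p):
        """Probability of winning a game when points are i.i.d. with win probability p."""
        q = 1.0 - p
        p_deuce = 20 * p**3 * q**3 * (p**2 / (1 - 2 * p * q))  # reach deuce, then win
        return p**4 * (1 + 4 * q + 10 * q**2) + p_deuce

    print(p_game(0.50))   # 0.5 by symmetry
    print(p_game(0.55))   # ~0.62: a small point edge is amplified at the game level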
MONTE CARLO SIMULATIONS OF PERIODIC PULSED REACTOR WITH MOVING GEOMETRY PARTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cao, Yan; Gohar, Yousry
2015-11-01
In a periodic pulsed reactor, the reactor state varies periodically from slightly subcritical to slightly prompt supercritical to produce periodic power pulses. Such a periodic state change is accomplished by a periodic movement of specific reactor parts, such as control rods or reflector sections. The analysis of such a reactor is difficult to perform with current reactor physics computer programs. Based on past experience, utilization of the point kinetics approximations gives considerable errors in predicting the magnitude and the shape of the power pulse if the reactor has significantly different neutron lifetimes in different zones. To accurately simulate the dynamics of this type of reactor, a Monte Carlo procedure using the transfer function TRCL/TR of the MCNP/MCNPX computer programs is utilized to model the movable reactor parts. In this paper, two algorithms simulating the geometry part movements during neutron history tracking have been developed. Several test cases have been developed to evaluate these procedures. The numerical test cases have shown that the developed algorithms can be utilized to simulate the reactor dynamics with movable geometry parts.
Scientific Visualization and Computational Science: Natural Partners
NASA Technical Reports Server (NTRS)
Uselton, Samuel P.; Lasinski, T. A. (Technical Monitor)
1995-01-01
Scientific visualization is developing rapidly, stimulated by computational science, which is gaining acceptance as a third alternative to theory and experiment. Computational science is based on numerical simulations of mathematical models derived from theory. But each individual simulation is like a hypothetical experiment; initial conditions are specified, and the result is a record of the observed conditions. Experiments can be simulated for situations that cannot really be created or controlled. Results impossible to measure can be computed. Even for observable values, computed samples are typically much denser. Numerical simulations also extend scientific exploration where the mathematics is analytically intractable. Numerical simulations are used to study phenomena from subatomic to intergalactic scales and from abstract mathematical structures to pragmatic engineering of everyday objects. But computational science methods would be almost useless without visualization. The obvious reason is that the huge amounts of data produced require the high bandwidth of the human visual system, and interactivity adds to the power. Visualization systems also provide a single context for all the activities involved, from debugging the simulations to exploring the data to communicating the results. Most of the presentations today have their roots in image processing, where the fundamental task is: given an image, extract information about the scene. Visualization has developed from computer graphics, which addresses the inverse task: given a scene description, make an image. Visualization extends the graphics paradigm by expanding the possible input. The goal is still to produce images; the difficulty is that the input is not a scene description displayable by standard graphics methods. Visualization techniques must either transform the data into a scene description or extend graphics techniques to display this odd input. Computational science is a fertile field for visualization research because the results vary so widely and include things that have no known appearance. The amount of data creates additional challenges for both hardware and software systems. Evaluations of visualization should ultimately reflect the insight gained into the scientific phenomena. So making good visualizations requires consideration of the characteristics of the user and the purpose of the visualization. Knowledge about human perception and graphic design is also relevant. It is this breadth of knowledge that stimulates proposals for multidisciplinary visualization teams and intelligent visualization assistant software. Visualization is an immature field, but computational science is stimulating research on a broad front.
Initial Computations of Vertical Displacement Events with NIMROD
NASA Astrophysics Data System (ADS)
Bunkers, Kyle; Sovinec, C. R.
2014-10-01
Disruptions associated with vertical displacement events (VDEs) have potential for causing considerable physical damage to ITER and other tokamak experiments. We report on initial computations of generic axisymmetric VDEs using the NIMROD code [Sovinec et al., JCP 195, 355 (2004)]. An implicit thin-wall computation has been implemented to couple separate internal and external regions without numerical stability limitations. A simple rectangular cross-section domain generated with the NIMEQ code [Howell and Sovinec, CPC (2014)] modified to use a symmetry condition at the midplane is used to test linear and nonlinear axisymmetric VDE computation. As current in simulated external coils for large-R/a cases is varied, there is a clear n = 0 stability threshold which lies below the decay-index criterion for the current-loop model of a tokamak to model VDEs [Mukhovatov and Shafranov, Nucl. Fusion 11, 605 (1971)]; a scan of wall distance indicates the offset is due to the influence of the conducting wall. Results with a vacuum region surrounding a resistive wall will also be presented. Initial nonlinear computations show large vertical displacement of an intact simulated tokamak. This effort is supported by U.S. Department of Energy Grant DE-FG02-06ER54850.
The effect of denaturant on protein stability: a Monte Carlo lattice simulation
NASA Astrophysics Data System (ADS)
Choi, Ho Sup; Huh, June; Jo, Won Ho
2003-03-01
Denaturants are reagents that decrease protein stability by interacting with both nonpolar and polar surfaces of a protein when added to the aqueous solvent. However, the physical nature of these interactions is not clearly understood, and it is not easy to elucidate theoretically or experimentally. Even in computer simulations, denaturant atoms cannot be treated explicitly because of the enormous computational cost. We have used a lattice model of protein and denaturant. By varying the concentration of denaturant and the interaction energy between protein and denaturant, we have measured the change in stability of the protein. This simple model reflects the experimental observation that the free energy of unfolding is a linear function of denaturant concentration in the transition range. We have also performed a simulation under isotropic perturbation. In this case, denaturant molecules are not included and a biasing potential is introduced to increase the radius of gyration of the protein, which incorporates the effect of denaturant implicitly. The calculated free energy landscape and conformational ensembles sampled under this condition are very close to those of the simulation using denaturant molecules interacting with the protein. We have applied this simple approach to simulating the effect of denaturant on real proteins.
NASA Technical Reports Server (NTRS)
Boyalakuntla, Kishore; Soni, Bharat K.; Thornburg, Hugh J.; Yu, Robert
1996-01-01
During the past decade, computational simulation of fluid flow around complex configurations has progressed significantly and many notable successes have been reported; however, unsteady time-dependent solutions are not easily obtainable. The present effort involves unsteady, time-dependent simulation of temporally deforming geometries. Grid generation for a complex configuration can be a time-consuming process, and temporally varying geometries necessitate the regeneration of such grids for every time step. Traditional grid generation techniques have been tried and shown to be inadequate for such simulations. Non-Uniform Rational B-spline (NURBS) based techniques provide a compact and accurate representation of the geometry. This definition can be coupled with a distribution mesh for user-defined spacing. The present method greatly reduces CPU requirements for time-dependent remeshing, facilitating the simulation of more complex unsteady problems. A thrust-vectoring nozzle has been chosen to demonstrate the capability, as it is of current interest in the aerospace industry for better maneuverability of fighter aircraft in close combat and in post-stall regimes. This current effort is the first step towards multidisciplinary design optimization, which involves coupling aerodynamic, heat transfer, and structural analysis techniques. Applications include simulation of temporally deforming bodies and aeroelastic problems.
NASA Astrophysics Data System (ADS)
Yu, Huidan (Whitney); Chen, Xi; Chen, Rou; Wang, Zhiqiang; Lin, Chen; Kralik, Stephen; Zhao, Ye
2015-11-01
In this work, we demonstrate the validity of 4-D patient-specific computational hemodynamics (PSCH) based on 3-D time-of-flight (TOF) MR angiography (MRA) and 2-D electrocardiogram (ECG) gated phase contrast (PC) images. The mesoscale lattice Boltzmann method (LBM) is employed to segment morphological arterial geometry from TOF MRA, to extract velocity profiles from ECG-gated PC images, and to simulate fluid dynamics, all on a unified GPU-accelerated computational platform. Two healthy volunteers were recruited to participate in the study. For each volunteer, a 3-D high-resolution TOF MRA image and 10 2-D ECG-gated PC images were acquired to provide the morphological geometry and the time-varying flow velocity profiles as the necessary inputs to the PSCH. Validation results will be presented through comparisons of LBM vs. 4D Flow Software for flow rates and LBM simulation vs. MRA measurement for blood flow velocity maps. Indiana University Health (IUH) Values Fund.
Model reduction for agent-based social simulation: coarse-graining a civil violence model.
Zou, Yu; Fonoberov, Vladimir A; Fonoberova, Maria; Mezic, Igor; Kevrekidis, Ioannis G
2012-06-01
Agent-based modeling (ABM) constitutes a powerful computational tool for the exploration of phenomena involving emergent dynamic behavior in the social sciences. This paper demonstrates a computer-assisted approach that bridges the significant gap between the single-agent microscopic level and the macroscopic (coarse-grained population) level, where fundamental questions must be rationally answered and policies guiding the emergent dynamics devised. Our approach will be illustrated through an agent-based model of civil violence. This spatiotemporally varying ABM incorporates interactions between a heterogeneous population of citizens [active (insurgent), inactive, or jailed] and a population of police officers. Detailed simulations exhibit an equilibrium punctuated by periods of social upheavals. We show how to effectively reduce the agent-based dynamics to a stochastic model with only two coarse-grained degrees of freedom: the number of jailed citizens and the number of active ones. The coarse-grained model captures the ABM dynamics while drastically reducing the computation time (by a factor of approximately 20).
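To make the coarse-graining concrete, the sketch below evolves only the two coarse variables named above, a jailed fraction J and an active fraction A, with Euler-Maruyama time stepping. It is a minimal illustrative sketch in Python; the drift rates and noise amplitude are hypothetical placeholders, not the terms the authors extracted from the ABM.

import numpy as np

def step(J, A, dt, rng):
    # Hypothetical drift: quiet citizens activate, active citizens are
    # arrested or calm down, and jail terms expire.
    arrest, release, activate, calm = 0.3, 0.05, 0.2, 0.1
    dJ = arrest * A - release * J
    dA = activate * (1.0 - J - A) - (arrest + calm) * A
    # Additive noise standing in for finite-population fluctuations.
    nJ, nA = rng.normal(0.0, 0.02 * np.sqrt(dt), 2)
    return max(J + dJ * dt + nJ, 0.0), max(A + dA * dt + nA, 0.0)

rng = np.random.default_rng(0)
J, A = 0.0, 0.01
trajectory = [(J, A)]
for _ in range(20000):
    J, A = step(J, A, 0.01, rng)
    trajectory.append((J, A))

Replacing thousands of interacting agents with two stochastic equations of this shape is what yields the roughly 20-fold speedup reported above.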
Gao, Xi; Kong, Bo; Vigil, R Dennis
2017-01-01
A comprehensive quantitative model incorporating the effects of fluid flow patterns, light distribution, and algal growth kinetics on biomass growth rate is developed in order to predict the performance of a Taylor vortex algal photobioreactor for culturing Chlorella vulgaris. A commonly used Lagrangian strategy for coupling the various factors influencing algal growth was employed, whereby results from computational fluid dynamics and radiation transport simulations were used to compute numerous microorganism light exposure histories, and this information in turn was used to estimate the global biomass specific growth rate. The simulations provide good quantitative agreement with experimental data and correctly predict the trend in reactor performance as a key reactor operating parameter (inner cylinder rotation speed) is varied. However, biomass growth curves are consistently over-predicted, and potential causes for these over-predictions and drawbacks of the Lagrangian approach are addressed. Copyright © 2016 Elsevier Ltd. All rights reserved.
Adapting to life: ocean biogeochemical modelling and adaptive remeshing
NASA Astrophysics Data System (ADS)
Hill, J.; Popova, E. E.; Ham, D. A.; Piggott, M. D.; Srokosz, M.
2013-11-01
An outstanding problem in biogeochemical modelling of the ocean is that many of the key processes occur intermittently at small scales, such as the sub-mesoscale, that are not well represented in global ocean models. As an example, state-of-the-art models give values of primary production approximately two orders of magnitude lower than those observed in the ocean's oligotrophic gyres, which cover a third of the Earth's surface. This is partly due to their failure to resolve sub-mesoscale phenomena, which play a significant role in nutrient supply. Simply increasing the resolution of the models may be an inefficient computational solution to this problem. An approach based on recent advances in adaptive mesh computational techniques may offer an alternative. Here the first steps in such an approach are described, using the example of a simple vertical-column (quasi-1-D) ocean biogeochemical model. We present a novel method of simulating ocean biogeochemical behaviour on a vertically adaptive computational mesh, where the mesh changes in response to the biogeochemical and physical state of the system throughout the simulation. We show that the model reproduces the general physical and biological behaviour at three ocean stations (India, Papa and Bermuda) as compared to a high-resolution fixed-mesh simulation and to observations. The simulations capture both the seasonal and inter-annual variations. The use of an adaptive mesh does not increase the computational error, but reduces the number of mesh elements by a factor of 2-3, thereby reducing computational overhead. We then show the potential of this method in two case studies where we change the metric used to determine the varying mesh sizes in order to capture the dynamics of chlorophyll at Bermuda and sinking detritus at Papa. We therefore demonstrate that adaptive meshes may provide a suitable numerical technique for simulating seasonal or transient biogeochemical behaviour at high spatial resolution whilst minimising computational cost.
Optimal social-networking strategy is a function of socioeconomic conditions.
Oishi, Shigehiro; Kesebir, Selin
2012-12-01
In the two studies reported here, we examined the relation among residential mobility, economic conditions, and optimal social-networking strategy. In study 1, a computer simulation showed that regardless of economic conditions, having a broad social network with weak friendship ties is advantageous when friends are likely to move away. By contrast, having a small social network with deep friendship ties is advantageous when the economy is unstable but friends are not likely to move away. In study 2, we examined the validity of the computer simulation using a sample of American adults. Results were consistent with the simulation: American adults living in a zip code where people are residentially stable but economically challenged were happier if they had a narrow but deep social network, whereas in other socioeconomic conditions, people were generally happier if they had a broad but shallow networking strategy. Together, our studies demonstrate that the optimal social-networking strategy varies as a function of socioeconomic conditions.
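A deliberately simplified Monte Carlo sketch of how the strategy comparison in study 1 might be posed is shown below; the support rule and all parameter values are hypothetical, chosen only to illustrate how residential mobility (friends moving away) and economic instability (crises in which only deep ties help) enter such a simulation.

import numpy as np

def mean_support(n_friends, depth, p_move, p_crisis, rng, trials=5000):
    # Toy rule: each remaining tie contributes `depth` units of support,
    # but during a crisis only deep ties (depth > 0.5) provide any.
    total = 0.0
    for _ in range(trials):
        remaining = (rng.random(n_friends) > p_move).sum()
        if rng.random() < p_crisis and depth <= 0.5:
            continue
        total += depth * remaining
    return total / trials

rng = np.random.default_rng(1)
for label, (n, d) in {"broad/shallow": (20, 0.2), "narrow/deep": (5, 0.9)}.items():
    print(label, mean_support(n, d, p_move=0.1, p_crisis=0.5, rng=rng))

Sweeping p_move and p_crisis over a grid would produce the kind of strategy-by-condition comparison the study reports.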
NASA Technical Reports Server (NTRS)
Assanis, D. N.; Ekchian, J. E.; Frank, R. M.; Heywood, J. B.
1985-01-01
A computer simulation of the turbocharged turbocompounded direct-injection diesel engine system was developed in order to study the performance characteristics of the total system as major design parameters and materials are varied. Quasi-steady flow models of the compressor, turbines, manifolds, intercooler, and ducting are coupled with a multicylinder reciprocator diesel model, where each cylinder undergoes the same thermodynamic cycle. The master cylinder model describes the reciprocator intake, compression, combustion and exhaust processes in sufficient detail to define the mass and energy transfers in each subsystem of the total engine system. Appropriate thermal loading models relate the heat flow through critical system components to material properties and design details. From this information, the simulation predicts the performance gains, and assesses the system design trade-offs which would result from the introduction of selected heat transfer reduction materials in key system components, over a range of operating conditions.
Effective height of chimney for biomass cook stove simulated by computational fluid dynamics
NASA Astrophysics Data System (ADS)
Faisal; Setiawan, A.; Wusnah; Khairil; Luthfi
2018-02-01
This paper presents the results of numerical modelling of temperature distribution and flow pattern in a biomass cooking stove using CFD simulation. The biomass stove has been designed to suit the household cooking process. The stove consists of two pots. The first is the main pot, located on top of the combustion chamber, where the heat from the combustion process is directly received. The second pot absorbs heat from the exhaust gas. A chimney installed at the end of the stove releases the exhaust gas to the ambient air. During the tests, the height of the chimney was varied to find the highest temperatures at both pots. Results showed that the chimney height giving the highest pot temperatures is 1.65 m. This chimney height was validated by developing a computational fluid dynamics model. The experimental and simulation results show good agreement and help in fine-tuning the design of the biomass cooking stove.
Gilet, Estelle; Diard, Julien; Bessière, Pierre
2011-01-01
In this paper, we study the collaboration of perception and action representations involved in cursive letter recognition and production. We propose a mathematical formulation for the whole perception–action loop, based on probabilistic modeling and Bayesian inference, which we call the Bayesian Action–Perception (BAP) model. Because it models both the perception and action processes, the purpose of this model is to study their interaction. More precisely, the model includes a feedback loop from motor production, which implements an internal simulation of movement. Motor knowledge can therefore be involved during perception tasks. In this paper, we formally define the BAP model and show how it solves the following six varied cognitive tasks using Bayesian inference: i) letter recognition (purely sensory), ii) writer recognition, iii) letter production (with different effectors), iv) copying of trajectories, v) copying of letters, and vi) letter recognition (with internal simulation of movements). We present computer simulations of each of these cognitive tasks, and discuss experimental predictions and theoretical developments. PMID:21674043
Numerical Experiments with a Turbulent Single-Mode Rayleigh-Taylor Instability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cloutman, L.D.
2000-04-01
Direct numerical simulation is a powerful tool for studying turbulent flows. Unfortunately, it is also computationally expensive and often beyond the reach of the largest, fastest computers. Consequently, a variety of turbulence models have been devised to allow tractable and affordable simulations of averaged flow fields. These, however, present a variety of practical difficulties, including the incorporation of varying degrees of empiricism and phenomenology, which leads to a lack of universality. This unsatisfactory state of affairs has led to the speculation that one can avoid the expense and bother of using a turbulence model by relying on the grid and numerical diffusion of the computational fluid dynamics algorithm to introduce a spectral cutoff on the flow field and to provide dissipation at the grid scale, thereby mimicking two main effects of a large eddy simulation model. This paper shows numerical examples of a single-mode Rayleigh-Taylor instability in which this procedure produces questionable results. We then show a dramatic improvement when two simple subgrid-scale models are employed. This study also illustrates the extreme sensitivity to initial conditions that is a common feature of turbulent flows.
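The two subgrid-scale models are not named in this abstract, but the classic Smagorinsky closure illustrates what such a model supplies that numerical diffusion only mimics: an explicit eddy viscosity acting at the filter scale,

$$\nu_t = (C_s \Delta)^2 \left(2\,\bar{S}_{ij}\bar{S}_{ij}\right)^{1/2}, \qquad \bar{S}_{ij} = \frac{1}{2}\left(\frac{\partial \bar{u}_i}{\partial x_j} + \frac{\partial \bar{u}_j}{\partial x_i}\right),$$

with Δ the filter (grid) scale and C_s typically in the range 0.1 to 0.2.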
Mathematical modeling based on ordinary differential equations: A promising approach to vaccinology
Bonin, Carla Rezende Barbosa; Fernandes, Guilherme Cortes; dos Santos, Rodrigo Weber; Lobosco, Marcelo
2017-01-01
New contributions that aim to accelerate the development or to improve the efficacy and safety of vaccines arise from many different areas of research and technology. One of these areas is computational science, which traditionally participates in the initial steps, such as the pre-screening of active substances that have the potential to become a vaccine antigen. In this work, we present another promising way to use computational science in vaccinology: mathematical and computational models of important cell and protein dynamics of the immune system. A system of Ordinary Differential Equations represents different immune system populations, such as B cells and T cells, antigen presenting cells and antibodies. In this way, it is possible to simulate, in silico, the immune response to vaccines under development or under study. Distinct scenarios can be simulated by varying parameters of the mathematical model. As a proof of concept, we developed a model of the immune response to vaccination against the yellow fever. Our simulations have shown consistent results when compared with experimental data available in the literature. The model is generic enough to represent the action of other diseases or vaccines in the human immune system, such as dengue and Zika virus. PMID:28027002
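As a sketch of the approach (not the authors' yellow fever model, whose populations and parameters are more detailed), a three-population toy system in Python shows how distinct scenarios are generated by varying the parameter tuple:

import numpy as np
from scipy.integrate import solve_ivp

def immune(t, y, r, k, a, d, p):
    # Hypothetical toy dynamics: V = antigen, B = B cells, A = antibody.
    V, B, A = y
    dV = r * V - k * A * V           # antigen replicates, is neutralized
    dB = a * V * B - d * (B - 1.0)   # B cells expand on antigen, relax to baseline
    dA = p * B - d * A               # antibody is secreted and cleared
    return [dV, dB, dA]

sol = solve_ivp(immune, (0.0, 60.0), [1.0, 1.0, 0.0],
                args=(0.5, 2.0, 0.3, 0.1, 0.5), max_step=0.1)
# Varying the args tuple simulates distinct vaccination scenarios.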
An Examination of Parameters Affecting Large Eddy Simulations of Flow Past a Square Cylinder
NASA Technical Reports Server (NTRS)
Mankbadi, M. R.; Georgiadis, N. J.
2014-01-01
Separated flow over a bluff body is analyzed via large eddy simulations. The turbulent flow around a square cylinder features a variety of complex flow phenomena such as highly unsteady vortical structures, reverse flow in the near-wall region, and wake turbulence. The formation of spanwise vortices is often artificially suppressed in computations by either insufficient depth or a coarse spanwise resolution. As the resolution is refined and the domain extended, the artificial turbulent energy exchange between spanwise and streamwise turbulence is eliminated within the wake region. A parametric study is performed highlighting the effects of spanwise vortices where the spanwise computational domain's resolution and depth are varied. For Re=22,000, the mean and turbulent statistics computed from the numerical large eddy simulations (NLES) are in good agreement with experimental data. Von Karman shedding is observed in the wake of the cylinder. Mesh independence is illustrated by comparing a mesh resolution of 2 million with one of 16 million. Sensitivity to time stepping was minimized, and no sensitivity to sampling frequency was observed. While increasing the spanwise depth and resolution can be costly, this practice was found to be necessary to eliminate the artificial turbulent energy exchange.
Rojas, David; Kapralos, Bill; Dubrowski, Adam
2016-01-01
Next to practice, feedback is the most important variable in skill acquisition. Feedback can vary in content and in the way it is delivered. Health professions education research has extensively examined the effects of different feedback methodologies. In this paper we compared two types of knowledge of performance (KP) feedback: video-based KP feedback and computer-generated KP feedback. The results of this study showed that computer-generated performance feedback is more effective than video-based performance feedback. The combination of the two feedback methodologies provides trainees with a better understanding.
A Worst-Case Approach for On-Line Flutter Prediction
NASA Technical Reports Server (NTRS)
Lind, Rick C.; Brenner, Martin J.
1998-01-01
Worst-case flutter margins may be computed for a linear model with respect to a set of uncertainty operators using the structured singular value. This paper considers an on-line implementation to compute these robust margins in a flight test program. Uncertainty descriptions are updated at test points to account for unmodeled time-varying dynamics of the airplane by ensuring the robust model is not invalidated by measured flight data. Robust margins computed with respect to this uncertainty remain conservative to the changing dynamics throughout the flight. A simulation clearly demonstrates that this method can improve the efficiency of flight testing by accurately predicting the flutter margin, improving safety while reducing the necessary flight time.
NASA Technical Reports Server (NTRS)
Turso, James A.; Litt, Jonathan S.
2004-01-01
A method for accommodating engine deterioration via a scheduled Linear Parameter Varying Quadratic Lyapunov Function (LPVQLF)-based controller is presented. The LPVQLF design methodology provides a means for developing unconditionally stable, robust control of Linear Parameter Varying (LPV) systems. The controller is scheduled on the Engine Deterioration Index, a function of estimated parameters that relate to engine health, and is computed using a multilayer feedforward neural network. Acceptable thrust response and tight control of exhaust gas temperature (EGT) are accomplished by adjusting the performance weights on these parameters for different levels of engine degradation. Nonlinear simulations demonstrate that the controller achieves specified performance objectives while being robust to engine deterioration as well as engine-to-engine variations.
A complex valued radial basis function network for equalization of fast time varying channels.
Gan, Q; Saratchandran, P; Sundararajan, N; Subramanian, K R
1999-01-01
This paper presents a complex-valued radial basis function (RBF) network for equalization of fast time-varying channels. A new method for calculating the centers of the RBF network is given. The method allows the number of RBF centers to be fixed even as the equalizer order is increased, so that good performance is obtained by a high-order RBF equalizer with a small number of centers. Simulations are performed on time-varying channels using a Rayleigh fading channel model to compare the performance of our RBF equalizer with an adaptive maximum-likelihood sequence estimator (MLSE) consisting of a channel estimator and an MLSE implemented by the Viterbi algorithm. The results show that the RBF equalizer produces superior performance with less computational complexity.
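The forward evaluation of such an equalizer is compact: a real-valued Gaussian kernel measures the distance of the received signal vector from each complex channel-state center, and complex weights combine the activations. The Python sketch below shows only this evaluation; the paper's contribution, the center calculation and training, is omitted, and all values here are random stand-ins.

import numpy as np

def rbf_equalize(received, centers, weights, sigma):
    # Squared distance of the received window from each complex center...
    d2 = np.sum(np.abs(received[None, :] - centers) ** 2, axis=1)
    phi = np.exp(-d2 / (2.0 * sigma ** 2))   # ...feeds a real Gaussian kernel,
    return np.dot(weights, phi)              # combined with complex weights.

rng = np.random.default_rng(0)
centers = rng.normal(size=(8, 3)) + 1j * rng.normal(size=(8, 3))
weights = rng.normal(size=8) + 1j * rng.normal(size=8)
received = rng.normal(size=3) + 1j * rng.normal(size=3)
print(rbf_equalize(received, centers, weights, sigma=1.0))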
Direct numerical simulation of annular flows
NASA Astrophysics Data System (ADS)
Batchvarov, Assen; Kahouadji, Lyes; Chergui, Jalel; Juric, Damir; Shin, Seungwon; Craster, Richard V.; Matar, Omar K.
2017-11-01
Vertical counter-current two-phase flows are investigated using direct numerical simulations. The computations are carried out using Blue, a front-tracking-based CFD solver. Preliminary results show good qualitative agreement with experimental observations in terms of interfacial phenomena; these include three-dimensional, large-amplitude wave formation, the development of long ligaments, and droplet entrainment. The flooding phenomena in these counter-current systems are closely investigated. The onset of flooding in our simulations is compared to existing empirical correlations of the Kutateladze and Wallis types. The effect of varying tube diameter and fluid properties on the flooding phenomena is also investigated in this work. EPSRC, UK, MEMPHIS program Grant (EP/K003976/1), RAEng Research Chair (OKM).
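For context, a Wallis-type flooding correlation relates the dimensionless superficial velocities of the two phases at the onset of flooding,

$$\left(j_g^*\right)^{1/2} + m\left(j_l^*\right)^{1/2} = C, \qquad j_k^* = \frac{j_k\,\rho_k^{1/2}}{\left[g\,D\,(\rho_l - \rho_g)\right]^{1/2}},$$

where j_k is the superficial velocity of phase k, D the tube diameter, and m and C empirical constants of order one. Kutateladze-type correlations take the same form but replace the tube diameter with a capillary length scale, which is why a scan over tube diameter, as mentioned above, can discriminate between the two.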
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Judith C.
The purpose of this grant is to develop multi-scale theoretical methods to describe the nanoscale oxidation of metal thin films, as the PI (Yang) has extensive previous experience in the experimental elucidation of the initial stages of Cu oxidation, primarily by in situ transmission electron microscopy. Through the use and development of computational tools at varying length (and time) scales, from atomistic quantum mechanical calculations and force-field mesoscale simulations to large-scale kinetic Monte Carlo (KMC) modeling, the fundamental underpinnings of the initial stages of Cu oxidation have been elucidated. The development of computational modeling tools allows for accelerated materials discovery. The theoretical tools developed in this program impact a wide range of technologies that depend on surface reactions, including corrosion, catalysis, and nanomaterials fabrication.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Bin; Pettitt, Bernard M.
Electrostatic free energies of solvation for 15 neutral amino acid side chain analogs are computed. We compare three methods of varying computational complexity and accuracy: free energy simulations, Poisson-Boltzmann (PB), and the linear response approximation (LRA), using the AMBER, CHARMM, and OPLSAA force fields. We find that deviations from simulation start at low charges for the solutes. The approximate PB and LRA methods overestimate electrostatic solvation free energies for most of the molecules studied here. These deviations are remarkably systematic. The variations among force fields are almost as large as the variations found among methods. Our study confirms that the success of the approximate methods for electrostatic solvation free energies comes from their ability to evaluate free energy differences accurately.
: A Scalable and Transparent System for Simulating MPI Programs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perumalla, Kalyan S
2010-01-01
is a scalable, transparent system for experimenting with the execution of parallel programs on simulated computing platforms. The level of simulated detail can be varied for application behavior as well as for machine characteristics. Unique features of are repeatability of execution, scalability to millions of simulated (virtual) MPI ranks, scalability to hundreds of thousands of host (real) MPI ranks, portability of the system to a variety of host supercomputing platforms, and the ability to experiment with scientific applications whose source code is available. The set of source-code interfaces supported by is being expanded to support a wider set of applications, and MPI-based scientific computing benchmarks are being ported. In proof-of-concept experiments, has been successfully exercised to spawn and sustain very large-scale executions of an MPI test program given in source code form. Low slowdowns are observed, due to its use of a purely discrete event style of execution, and due to the scalability and efficiency of the underlying parallel discrete event simulation engine, sik. In the largest runs, has been executed on up to 216,000 cores of a Cray XT5 supercomputer, successfully simulating over 27 million virtual MPI ranks, each virtual rank containing its own thread context, and all ranks fully synchronized by virtual time.
Simulating the X-Ray Image Contrast to Set-Up Techniques with Desired Flaw Detectability
NASA Technical Reports Server (NTRS)
Koshti, Ajay M.
2015-01-01
The paper provides simulation data extending the author's previous work on a model for estimating the detectability of crack-like flaws in radiography. The methodology is being developed to help in implementation of NASA Special x-ray radiography qualification, but is generically applicable to radiography. The paper describes a method for characterizing X-ray detector resolution for crack detection. Applicability of ASTM E 2737 resolution requirements to the model is also discussed. The paper describes a model for simulating the detector resolution. A computer calculator application, discussed here, also performs predicted contrast and signal-to-noise ratio calculations. Results of various simulation runs calculating the x-ray flaw size parameter and image contrast for varying input parameters such as crack depth, crack width, part thickness, x-ray angle, part-to-detector distance, part-to-source distance, source size, and detector sensitivity and resolution are given as 3D surfaces. These results demonstrate the effect of the input parameters on the flaw size parameter and the simulated image contrast of the crack. The simulations demonstrate the utility of the flaw size parameter model in setting up x-ray techniques that provide desired flaw detectability in radiography. The method is applicable to film radiography, computed radiography, and digital radiography.
Numerical Modeling of Internal Flow Aerodynamics. Part 2: Unsteady Flows
2004-01-01
Prediction: in this simulation, we want to assess the effect of a change in SRM geometry and propellant. One characteristic time describes when the burning surface reaches the structure; a third characteristic time describes the slow evolution of the internal geometry. The code incorporates a fluid-structure coupling facility and is parallel. MOPTI® manages exchanges between two principal computational modules, one of them a varying...
Demonstration of Self-Training Autonomous Neural Networks in Space Vehicle Docking Simulations
NASA Technical Reports Server (NTRS)
Patrick, M. Clinton; Thaler, Stephen L.; Stevenson-Chavis, Katherine
2006-01-01
Neural Networks have been under examination for decades in many areas of research, with varying degrees of success and acceptance. Key goals of computer learning, rapid problem solution, and automatic adaptation have been elusive at best. This paper summarizes efforts at NASA's Marshall Space Flight Center harnessing such technology to autonomous space vehicle docking for the purpose of evaluating applicability to future missions.
Prediction of blood pressure and blood flow in stenosed renal arteries using CFD
NASA Astrophysics Data System (ADS)
Jhunjhunwala, Pooja; Padole, P. M.; Thombre, S. B.; Sane, Atul
2018-04-01
In the present work, an attempt is made to develop a diagnostic tool for renal artery stenosis (RAS) that is inexpensive and in vitro. To analyse the effects of increasing stenosis severity on hypertension and blood flow, haemodynamic parameters are studied by performing numerical simulations. A total of 16 stenosed models with degrees of stenosis severity varying from 0 to 97.11% are assessed numerically. Blood is modelled as a shear-thinning, non-Newtonian fluid using the Carreau model. Computational Fluid Dynamics (CFD) analysis is carried out to compute the values of flow parameters, such as the maximum velocity and maximum pressure attained by blood due to stenosis under pulsatile flow. These values are further used to compute the increase in blood pressure and the decrease in blood flow available to the kidney. The computed available blood flow and secondary hypertension for varying extents of stenosis are mapped by a curve-fitting technique in MATLAB, and a mathematical model is developed. Based on these mathematical models, a quantification tool is developed for tentative prediction of the probable blood flow available to the kidney and the severity of stenosis when secondary hypertension is known.
Visualizing Time-Varying Phenomena In Numerical Simulations Of Unsteady Flows
NASA Technical Reports Server (NTRS)
Lane, David A.
1996-01-01
Streamlines, contour lines, vector plots, and volume slices (cutting planes) are commonly used for flow visualization. These techniques are sometimes referred to as instantaneous flow visualization techniques because calculations are based on an instant of the flowfield in time. Although instantaneous flow visualization techniques are effective for depicting phenomena in steady flows, they sometimes do not adequately depict time-varying phenomena in unsteady flows. Streaklines and timelines are effective visualization techniques for depicting vortex shedding, vortex breakdown, and shock waves in unsteady flows. These techniques are examples of time-dependent flow visualization techniques, which are based on many instants of the flowfield in time. This paper describes the algorithms for computing streaklines and timelines. Using numerically simulated unsteady flows, streaklines and timelines are compared with streamlines, contour lines, and vector plots. It is shown that streaklines and timelines reveal vortex shedding and vortex breakdown more clearly than instantaneous flow visualization techniques.
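A streakline is the locus of all particles released continuously from a fixed seed point, each advected through the time-varying velocity field; a timeline is the same computation with a whole line of particles released at a single instant. A minimal Python sketch of the streakline case, using forward-Euler advection and a hypothetical velocity field for brevity (production codes would use a Runge-Kutta integrator and interpolate velocities from the simulation grid):

import numpy as np

def streakline(velocity, seed, t0, t1, dt, release_every=1):
    # velocity(p, t) returns velocities for positions p (N x 2) at time t.
    particles = []
    t, step = t0, 0
    while t < t1:
        if step % release_every == 0:
            particles.append(np.array(seed, dtype=float))  # release a particle
        pts = np.array(particles)
        pts += dt * velocity(pts, t)                       # advect all particles
        particles = list(pts)
        t += dt
        step += 1
    return np.array(particles)   # current positions trace the streakline

vel = lambda p, t: np.stack([-p[:, 1] * np.cos(t), p[:, 0]], axis=1)
line = streakline(vel, seed=(1.0, 0.0), t0=0.0, t1=6.0, dt=0.01)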
Effect of varying internal geometry on the static performance of rectangular thrust-reverser ports
NASA Technical Reports Server (NTRS)
Re, Richard J.; Mason, Mary L.
1987-01-01
An investigation has been conducted to evaluate the effects of several geometric parameters on the internal performance of rectangular thrust-reverser ports for nonaxisymmetric nozzles. Internal geometry was varied with a test apparatus which simulated a forward-flight nozzle with a single, fully deployed reverser port. The test apparatus was designed to simulate thrust reversal (conceptually) either in the convergent section of the nozzle or in the constant-area duct just upstream of the nozzle. The main geometric parameters investigated were port angle, port corner radius, port location, and internal flow blocker angle. For all reverser port geometries, the port opening had an aspect ratio (throat width to throat height) of 6.1 and had a constant passage area from the geometric port throat to the exit. Reverser-port internal performance and thrust-vector angles computed from force-balance measurements are presented.
The Distribution of Cook’s D Statistic
Muller, Keith E.; Mok, Mario Chen
2013-01-01
Cook (1977) proposed a diagnostic to quantify the impact of deleting an observation on the estimated regression coefficients of a General Linear Univariate Model (GLUM). Simulations of models with Gaussian response and predictors demonstrate that his suggestion of comparing the diagnostic to the median of the F distribution for the overall regression captures an erratically varying proportion of the values. We describe the exact distribution of Cook’s statistic for a GLUM with Gaussian predictors and response. We also present computational forms, simple approximations, and asymptotic results. A simulation supports the accuracy of the results. The methods allow accurate evaluation of a single value or the maximum value from a regression analysis. The approximations work well for a single value, but less well for the maximum. In contrast, the cut-point suggested by Cook provides widely varying tail probabilities. As with all diagnostics, the data analyst must use scientific judgment in deciding how to treat highlighted observations. PMID:24363487
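Cook's statistic itself is cheap to obtain from a single fit via the hat matrix; the sketch below computes the standard quantity whose distribution the paper derives (simulated Gaussian data stand in for a real design):

import numpy as np

def cooks_d(X, y):
    # D_i = e_i^2 h_ii / (p s^2 (1 - h_ii)^2), from one fit, no refitting.
    n, p = X.shape
    H = X @ np.linalg.solve(X.T @ X, X.T)   # hat matrix
    h = np.diag(H)
    resid = y - H @ y
    s2 = resid @ resid / (n - p)            # residual variance estimate
    return resid**2 * h / (p * s2 * (1.0 - h)**2)

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.normal(size=(50, 2))])
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(size=50)
print(cooks_d(X, y).max())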
Two-dimensional dynamic stall as simulated in a varying freestream
NASA Technical Reports Server (NTRS)
Pierce, G. A.; Kunz, D. L.; Malone, J. B.
1978-01-01
A low-speed wind tunnel equipped with an axial gust generator to simulate the aerodynamic environment of a helicopter rotor was used to study the dynamic stall of a pitching blade, in an effort to ascertain to what extent harmonic velocity perturbations in the freestream affect dynamic stall. The aerodynamic moment on a two-dimensional pitching blade model was measured in both constant and pulsating airstreams. An operational analog computer was used to perform on-line data reduction, and plots of moment versus angle of attack and of work done by the moment were obtained. The data taken in the varying freestream were then compared to constant-freestream data and to the results of two analytical methods. These comparisons show that the velocity perturbations have a significant effect on the pitching moment which cannot be consistently predicted by the analytical methods, but had no drastic effect on blade stability.
NASA Astrophysics Data System (ADS)
Clarke, Peter; Varghese, Philip; Goldstein, David
2018-01-01
A discrete velocity method is developed for gas mixtures of diatomic molecules with both rotational and vibrational energy states. A fully quantized model is described, and rotation-translation and vibration-translation energy exchanges are simulated using a Larsen-Borgnakke exchange model. Elastic and inelastic molecular interactions are modeled during every simulated collision to help produce smooth internal energy distributions. The method is verified by comparing simulations of homogeneous relaxation by our discrete velocity method to numerical solutions of the Jeans and Landau-Teller equations, and to direct simulation Monte Carlo. We compute the structure of a 1D shock using this method, and determine how the rotational energy distribution varies with spatial location in the shock and with position in velocity space.
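The Landau-Teller benchmark used in the verification has a simple closed form: the mean vibrational energy relaxes exponentially toward equilibrium (the Jeans equation for rotational energy has the same structure with its own time constant). A Python sketch with illustrative values:

import numpy as np

def landau_teller(Ev0, Eeq, tau, t):
    # Solution of dEv/dt = (Eeq - Ev)/tau, the homogeneous-relaxation target.
    return Eeq + (Ev0 - Eeq) * np.exp(-t / tau)

t = np.linspace(0.0, 5.0, 101)
Ev = landau_teller(Ev0=0.1, Eeq=1.0, tau=1.0, t=t)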
Constant-Elasticity-of-Substitution Simulation
NASA Technical Reports Server (NTRS)
Reiter, G.
1986-01-01
This program simulates a constant-elasticity-of-substitution (CES) production function. The CES function is used by economic analysts to examine production costs as well as uncertainties in production. The user provides input parameters such as the price of labor, the price of capital, and dispersion levels. CES minimizes the expected cost to produce a capital-uncertainty pair. By varying the capital-value input, one obtains a series of capital-uncertainty pairs, which are then used to generate several cost curves. The CES program is menu driven and features a specific print menu for examining selected output curves. The program is written in BASIC for interactive execution and implemented on an IBM PC-series computer.
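A sketch of the kind of computation such a program performs, with a generic constrained optimizer and hypothetical parameter values standing in for the BASIC implementation:

import numpy as np
from scipy.optimize import minimize

def ces_output(K, L, A=1.0, delta=0.4, rho=0.5):
    # CES production function; the elasticity of substitution is 1/(1+rho).
    return A * (delta * K**(-rho) + (1 - delta) * L**(-rho))**(-1.0 / rho)

def min_cost(q_target, pK, pL):
    # Cheapest capital/labor mix producing at least q_target.
    objective = lambda x: pK * x[0] + pL * x[1]
    feasible = {"type": "ineq", "fun": lambda x: ces_output(x[0], x[1]) - q_target}
    res = minimize(objective, x0=[1.0, 1.0], constraints=[feasible],
                   bounds=[(1e-6, None), (1e-6, None)])
    return res.x, res.fun

(K, L), cost = min_cost(1.0, pK=2.0, pL=1.0)

Repeating this minimization while sweeping the capital input and dispersion levels would yield the capital-uncertainty pairs from which the cost curves are built.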
Chang, C T; Zeng, F; Li, X J; Dong, W S; Lu, S H; Gao, S; Pan, F
2016-01-07
The simulation of synaptic plasticity using new materials is critical in the study of brain-inspired computing. Devices composed of Ba(CF3SO3)2-doped polyethylene oxide (PEO) electrolyte film were fabricated, and their pulse responses were found to resemble the synaptic short-term plasticity (STP) of both short-term depression (STD) and short-term facilitation (STF) synapses. The values of the charge and discharge peaks of the pulse responses did not vary with input number when the pulse frequency was sufficiently low (~1 Hz). However, when the frequency was increased, the charge and discharge peaks decreased and increased, respectively, in gradual trends and approached stable values with respect to the input number. These stable values varied with the input frequency, which resulted in the depressed and potentiated weight modifications of the charge and discharge peaks, respectively. These electrical properties simulated the high and low band-pass filtering effects of STD and STF, respectively. The simulations were consistent with biological results, and the corresponding biological parameters were successfully extracted. The study verified the feasibility of using organic electrolytes to mimic STP.
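The frequency-dependent depression and facilitation described here map naturally onto the standard Tsodyks-Markram rate model of short-term plasticity; the Python sketch below reproduces the qualitative band-pass behavior, though it is a phenomenological synapse model, not the electrolyte device physics.

import numpy as np

def stp_efficacies(spike_times, U, tau_rec, tau_fac):
    # Each spike uses a fraction u of the available resource x; x recovers
    # with tau_rec (depression), u decays back to U with tau_fac (facilitation).
    x, u, t_prev = 1.0, 0.0, None
    out = []
    for t in spike_times:
        if t_prev is not None:
            dt = t - t_prev
            x = 1.0 - (1.0 - x) * np.exp(-dt / tau_rec)
            u = U + (u - U) * np.exp(-dt / tau_fac)
        u = u + U * (1.0 - u)    # facilitation jump at the spike
        out.append(u * x)        # efficacy seen by this spike
        x = x - u * x            # resource depletion
        t_prev = t
    return out

spikes = np.arange(0.0, 1.0, 0.05)                                  # 20 Hz train
print(stp_efficacies(spikes, 0.5, tau_rec=0.8, tau_fac=0.05)[:5])   # STD-like
print(stp_efficacies(spikes, 0.1, tau_rec=0.05, tau_fac=0.8)[:5])   # STF-like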
Frequency Distribution in Domestic Microwave Ovens and Its Influence on Heating Pattern.
Luan, Donglei; Wang, Yifen; Tang, Juming; Jain, Deepali
2017-02-01
In this study, snapshots of operating frequency profiles of domestic microwave ovens were collected to reveal the extent of microwave frequency variations under different operation conditions. A computer simulation model was developed based on the finite difference time domain method to analyze the influence of the shifting frequency on heating patterns of foods in a microwave oven. The results showed that the operating frequencies of empty and loaded domestic microwave ovens varied widely even among ovens of the same model purchased on the same date. Each microwave oven had its unique characteristic operating frequencies, which were also affected by the location and shape of the load. The simulated heating patterns of a gellan gel model food when heated on a rotary plate agreed well with the experimental results, which supported the reliability of the developed simulation model. Simulation indicated that the heating patterns of a stationary model food load changed with the varying operating frequency. However, the heating pattern of a rotary model food load was not sensitive to microwave frequencies due to the severe edge heating overshadowing the effects of the frequency variations. © 2016 Institute of Food Technologists®.
Pizzi, Rita; Wang, Rui; Rossetti, Danilo
2016-01-01
This paper describes a computational approach to the theoretical problems involved in the Young's single-photon double-slit experiment, focusing on a simulation of this experiment in the absence of measuring devices. Specifically, the human visual system is used in place of a photomultiplier or similar apparatus. Beginning with the assumption that the human eye perceives light in the presence of very few photons, we measure human eye performance as a sensor in a double-slit one-photon-at-a-time experimental setup. To interpret the results, we implement a simulation algorithm and compare its results with those of human subjects under identical experimental conditions. In order to evaluate the perceptive parameters exactly, which vary depending on the light conditions and on the subject’s sensitivity, we first review the existing literature on the biophysics of the human eye in the presence of a dim light source, and then use the known values of the experimental variables to set the parameters of the computational simulation. The results of the simulation and their comparison with the experiment involving human subjects are reported and discussed. It is found that, while the computer simulation indicates that the human eye has the capacity to detect the corpuscular nature of photons under these conditions, this was not observed in practice. The possible reasons for the difference between theoretical prediction and experimental results are discussed. PMID:26816029
Nelson, Matthew A.; Brown, Michael J.; Halverson, Scot A.; ...
2016-07-28
Here, the Quick Urban & Industrial Complex (QUIC) atmospheric transport and dispersion modelling system was evaluated against the Joint Urban 2003 tracer-gas measurements. This was done using the wind and turbulence fields computed by the Weather Research and Forecasting (WRF) model. We compare the simulated and observed plume transport when using WRF-model-simulated wind fields and local on-site wind measurements. Degradation of the WRF-model-based plume simulations was caused by errors in the simulated wind direction and limitations in reproducing the small-scale wind-field variability. We explore two methods for importing turbulence from the WRF model simulations into the QUIC system. The first method uses parametrized turbulence profiles computed from WRF-model-computed boundary-layer similarity parameters; the second method directly imports turbulent kinetic energy from the WRF model. Using the WRF model's Mellor-Yamada-Janjic boundary-layer scheme, the parametrized turbulence profiles and the direct import of turbulent kinetic energy were found to overpredict and underpredict the observed turbulence quantities, respectively. Near-source building effects were found to propagate several km downwind. These building effects and the temporal/spatial variations in the observed wind field were often found to have a stronger influence over the lateral and vertical plume spread than the intensity of turbulence. Correcting the WRF model wind directions using a single observational location improved the performance of the WRF-model-based simulations, but using the spatially varying flow fields generated from multiple observation profiles generally provided the best performance.
Dietterich, Hannah; Lev, Einat; Chen, Jiangzhi; Richardson, Jacob A.; Cashman, Katharine V.
2017-01-01
Numerical simulations of lava flow emplacement are valuable for assessing lava flow hazards, forecasting active flows, designing flow mitigation measures, interpreting past eruptions, and understanding the controls on lava flow behavior. Existing lava flow models vary in simplifying assumptions, physics, dimensionality, and the degree to which they have been validated against analytical solutions, experiments, and natural observations. In order to assess existing models and guide the development of new codes, we conduct a benchmarking study of computational fluid dynamics (CFD) models for lava flow emplacement, including VolcFlow, OpenFOAM, FLOW-3D, COMSOL, and MOLASSES. We model viscous, cooling, and solidifying flows over horizontal planes, sloping surfaces, and into topographic obstacles. We compare model results to physical observations made during well-controlled analogue and molten basalt experiments, and to analytical theory when available. Overall, the models accurately simulate viscous flow with some variability in flow thickness where flows intersect obstacles. OpenFOAM, COMSOL, and FLOW-3D can each reproduce experimental measurements of cooling viscous flows, and OpenFOAM and FLOW-3D simulations with temperature-dependent rheology match results from molten basalt experiments. We assess the goodness-of-fit of the simulation results and the computational cost. Our results guide the selection of numerical simulation codes for different applications, including inferring emplacement conditions of past lava flows, modeling the temporal evolution of ongoing flows during eruption, and probabilistic assessment of lava flow hazard prior to eruption. Finally, we outline potential experiments and desired key observational data from future flows that would extend existing benchmarking data sets.
The intensity dependence of lesion position shift during focused ultrasound surgery.
Meaney, P M; Cahill, M D; ter Haar, G R
2000-03-01
Knowledge of the spatial distribution of intensity loss from an ultrasonic beam is critical for predicting lesion formation in focused ultrasound (US) surgery (FUS). To date, most models have used linear propagation models to predict the intensity profiles required to compute the temporally varying temperature distributions from which thermal dose contours are derived; these in turn are used to predict the extent of thermal damage. However, such simulations fail to describe adequately the abnormal lesion formation behaviour observed during ex vivo experiments in which the transducer drive levels are varied over a wide range. In such experiments, the extent of thermal damage has been observed to move significantly closer to the transducer with increased transducer drive levels than would be predicted using linear-propagation models. The first set of simulations described herein uses the KZK (Khokhlov-Zabolotskaya-Kuznetsov) nonlinear propagation model with the parabolic approximation for highly focused US waves to demonstrate that both the peak intensity and the lesion positions do, indeed, move closer to the transducer. This illustrates that, for accurate modelling of heating during FUS, nonlinear effects should be considered. Additionally, a first-order approximation has been employed that attempts to account for the abnormal heat deposition distributions that accompany high-drive-level FUS exposures where cavitation and boiling may be present. The results of these simulations are presented. It is suggested that this type of approach may be a useful tool in understanding thermal damage mechanisms.
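The thermal dose referred to above is conventionally the cumulative equivalent minutes at 43 °C (the Sapareto-Dewey formulation); a short Python sketch of the standard computation:

import numpy as np

def cem43(temps_c, dt_s):
    # CEM43 = sum R^(43 - T) * dt, with R = 0.5 above 43 C and 0.25 below.
    temps_c = np.asarray(temps_c, dtype=float)
    R = np.where(temps_c >= 43.0, 0.5, 0.25)
    return np.sum(R ** (43.0 - temps_c)) * dt_s / 60.0

# Example: 30 s at 56 C far exceeds the ~240 CEM43 often taken as the
# threshold for thermal necrosis.
print(cem43(np.full(30, 56.0), dt_s=1.0))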
Krujatz, Felix; Illing, Rico; Krautwer, Tobias; Liao, Jing; Helbig, Karsten; Goy, Katharina; Opitz, Jörg; Cuniberti, Gianaurelio; Bley, Thomas; Weber, Jost
2015-12-01
Externally illuminated photobioreactors (PBRs) are widely used in studies on the use of phototrophic microorganisms as sources of bioenergy and in other photobiotechnology research. In this work, straightforward simulation techniques were used to describe the effects of varying fluid flow conditions in a continuous hydrogen-producing PBR on the rate of photofermentative hydrogen production (rH2) by Rhodobacter sphaeroides DSM 158. A ZEMAX optical ray tracing simulation was performed to quantify the illumination intensity reaching the interior of the cylindrical PBR vessel; 24.2% of the emitted energy was lost through optical effects or did not reach the PBR surface. In a dense culture of continuously producing bacteria during chemostatic cultivation, the illumination intensity became completely attenuated within the first centimeter of the PBR radius, as described by an empirical three-parameter model implemented in Mathcad. The bacterial movement under chemostatic steady-state conditions was influenced by varying the fluid Reynolds number. The "Computational Fluid Dynamics" and "Particle Tracing" tools of COMSOL Multiphysics were used to visualize the fluid flow pattern and cellular trajectories through well-illuminated zones near the PBR periphery and dark zones in the center of the PBR. Moderate turbulence (Reynolds number = 12,600) and fluctuating illumination of 1.5 Hz were found to yield the highest continuous rH2 by R. sphaeroides DSM 158 (170.5 mL L⁻¹ h⁻¹) in this study. © 2015 Wiley Periodicals, Inc.
WE-D-303-01: Development and Application of Digital Human Phantoms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Segars, P.
2015-06-15
Modern medical physics deals with complex problems such as 4D radiation therapy and imaging quality optimization. Such problems involve a large number of radiological parameters and anatomical and physiological breathing patterns. A major challenge is how to develop, test, evaluate, and compare various new imaging and treatment techniques, which often involves testing over a large range of radiological parameters as well as varying patient anatomies and motions. It would be extremely challenging, if not impossible, both ethically and practically, to test every combination of parameters and every task on every type of patient under clinical conditions. Computer-based simulation using computational phantoms offers a practical technique with which to evaluate, optimize, and compare imaging technologies and methods. Within simulation, the computerized phantom provides a virtual model of the patient's anatomy and physiology. Imaging data can be generated from it as if it were a live patient, using accurate models of the physics of the imaging and treatment process. With sophisticated simulation algorithms, it is possible to perform virtual experiments entirely on the computer. By serving as virtual patients, computational phantoms hold great promise in solving some of the most complex problems in modern medical physics. In this proposed symposium, we will present the history and recent developments of computational phantom models, share experiences in their application to advanced imaging and radiation applications, and discuss their promises and limitations. Learning Objectives: Understand the need and requirements of computational phantoms in medical physics research. Discuss the developments and applications of computational phantoms. Know the promises and limitations of computational phantoms in solving complex problems.
On the simulation and mitigation of anisoplanatic optical turbulence for long range imaging
NASA Astrophysics Data System (ADS)
Hardie, Russell C.; LeMaster, Daniel A.
2017-05-01
We describe a numerical wave propagation method for simulating long range imaging of an extended scene under anisoplanatic conditions. Our approach computes an array of point spread functions (PSFs) for a 2D grid on the object plane. The PSFs are then used in a spatially varying weighted sum operation, with an ideal image, to produce a simulated image with realistic optical turbulence degradation. To validate the simulation we compare simulated outputs with the theoretical anisoplanatic tilt correlation and differential tilt variance. This is in addition to comparing the long- and short-exposure PSFs, and the isoplanatic angle. Our validation analysis shows an excellent match between the simulation statistics and the theoretical predictions. The simulation tool is also used here to quantitatively evaluate a recently proposed block-matching and Wiener filtering (BMWF) method for turbulence mitigation. In this method, a block-matching registration algorithm is used to provide geometric correction for each of the individual input frames. The registered frames are then averaged and processed with a Wiener filter for restoration. A novel aspect of the proposed BMWF method is that the PSF model used for restoration takes into account the level of geometric correction achieved during image registration. This way, the Wiener filter is able to fully exploit the reduced blurring achieved by registration. The BMWF method is relatively simple computationally, and yet has excellent performance in comparison to state-of-the-art benchmark methods.
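The spatially varying weighted-sum step lends itself to a compact sketch. The version below blends convolutions of an ideal image with a handful of kernels using per-pixel weights; the random kernels and weights are stand-ins for the propagated PSF grid and its interpolation weights, which the paper computes by numerical wave propagation:

```python
import numpy as np
from scipy.ndimage import convolve

rng = np.random.default_rng(0)

# Blend of per-region blurs: convolve the ideal image with each PSF, then
# combine with per-pixel weights (random stand-ins for the propagated PSFs).
def degrade(ideal, psfs, weights):
    """psfs: list of 2-D kernels; weights: (len(psfs), H, W) summing to 1."""
    blurred = np.stack([convolve(ideal, k, mode="nearest") for k in psfs])
    return np.sum(weights * blurred, axis=0)

h = w = 64
ideal = rng.random((h, w))
psfs = [k / k.sum() for k in rng.random((4, 7, 7))]
weights = rng.random((4, h, w))
weights /= weights.sum(axis=0)
degraded = degrade(ideal, psfs, weights)
```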
Cloud-Based Orchestration of a Model-Based Power and Data Analysis Toolchain
NASA Technical Reports Server (NTRS)
Post, Ethan; Cole, Bjorn; Dinkel, Kevin; Kim, Hongman; Lee, Erich; Nairouz, Bassem
2016-01-01
The proposed Europa Mission concept contains many engineering and scientific instruments that consume varying amounts of power and produce varying amounts of data throughout the mission. System-level power and data usage must be well understood and analyzed to verify design requirements. Numerous cross-disciplinary tools and analysis models are used to simulate the system-level spacecraft power and data behavior. This paper addresses the problem of orchestrating a consistent set of models, tools, and data in a unified analysis toolchain when ownership is distributed among numerous domain experts. An analysis and simulation environment was developed as a way to manage the complexity of the power and data analysis toolchain and to reduce the simulation turnaround time. A system model data repository is used as the trusted store of high-level inputs and results while other remote servers are used for archival of larger data sets and for analysis tool execution. Simulation data passes through numerous domain-specific analysis tools and end-to-end simulation execution is enabled through a web-based tool. The use of a cloud-based service facilitates coordination among distributed developers and enables scalable computation and storage needs, and ensures a consistent execution environment. Configuration management is emphasized to maintain traceability between current and historical simulation runs and their corresponding versions of models, tools and data.
Combustion-Powered Actuation for Dynamic Stall Suppression - Simulations and Low-Mach Experiments
NASA Technical Reports Server (NTRS)
Matalanis, Claude G.; Min, Byung-Young; Bowles, Patrick O.; Jee, Solkeun; Wake, Brian E.; Crittenden, Tom; Woo, George; Glezer, Ari
2014-01-01
An investigation on dynamic-stall suppression capabilities of combustion-powered actuation (COMPACT) applied to a tabbed VR-12 airfoil is presented. In the first section, results from computational fluid dynamics (CFD) simulations carried out at Mach numbers from 0.3 to 0.5 are presented. Several geometric parameters are varied including the slot chordwise location and angle. Actuation pulse amplitude, frequency, and timing are also varied. The simulations suggest that cycle-averaged lift increases of approximately 4% and 8% with respect to the baseline airfoil are possible at Mach numbers of 0.4 and 0.3 for deep and near-deep dynamic-stall conditions. In the second section, static-stall results from low-speed wind-tunnel experiments are presented. Low-speed experiments and high-speed CFD suggest that slots oriented tangential to the airfoil surface produce stronger benefits than slots oriented normal to the chordline. Low-speed experiments confirm that chordwise slot locations suitable for Mach 0.3-0.4 stall suppression (based on CFD) will also be effective at lower Mach numbers.
Performance of DPSK with convolutional encoding on time-varying fading channels
NASA Technical Reports Server (NTRS)
Mui, S. Y.; Modestino, J. W.
1977-01-01
The bit error probability performance of a differentially-coherent phase-shift keyed (DPSK) modem with convolutional encoding and Viterbi decoding on time-varying fading channels is examined. Both the Rician and the lognormal channels are considered. Bit error probability upper bounds on fully-interleaved (zero-memory) fading channels are derived and substantiated by computer simulation. It is shown that the resulting coded system performance is a relatively insensitive function of the choice of channel model provided that the channel parameters are related according to the correspondence developed as part of this paper. Finally, a comparison of DPSK with a number of other modulation strategies is provided.
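For background, the classical closed-form result that such bounds build on is the binary DPSK error rate on the AWGN channel, with fading handled by averaging over the instantaneous SNR (a standard textbook relation, not a formula quoted from the paper):

```latex
P_b^{\mathrm{AWGN}} \;=\; \tfrac{1}{2}\, e^{-E_b/N_0},
\qquad
\bar{P}_b \;=\; \int_0^{\infty} \tfrac{1}{2}\, e^{-\gamma}\, p_\gamma(\gamma)\, \mathrm{d}\gamma ,
```

where γ is the instantaneous Eb/N0 and p_γ its Rician or lognormal density on the fully interleaved channel.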
Computer simulation of position and maximum of linear polarization of asteroids
NASA Astrophysics Data System (ADS)
Petrov, Dmitry; Kiselev, Nikolai
2018-01-01
The ground-based observations of near-Earth asteroids at large phase angles have shown a notable feature: the linear polarization maximum of the high-albedo E-type asteroids is shifted markedly towards smaller phase angles (αmax ≈ 70°) with respect to that of the moderate-albedo S-type asteroids (αmax ≈ 110°), weakly depending on the wavelength. To study this phenomenon, a theoretical approach and the modified T-matrix method (the so-called Sh-matrices method) were used. The theoretical approach was devoted to finding the values of αmax corresponding to the maximal values of positive polarization Pmax. Computer simulations were performed for an ensemble of random Gaussian particles, whose scattering properties were averaged over different particle orientations and over size parameters in the range X = 2.0 ... 21.0 with the power-law distribution X^(-k), where k = 3.6. The real part of the refractive index mr was 1.5, 1.6, or 1.7. The imaginary part of the refractive index varied from mi = 0.0 to mi = 0.5. Both the theoretical approach and the computer simulations showed that the value of αmax strongly depends on the refractive index. An increase of mi leads to increased αmax and Pmax. In addition, the computer simulations show that an increase of the real part of the refractive index reduces Pmax. Since E-type high-albedo asteroids have smaller values of mi than S-type asteroids, we can conclude that αmax of E-type asteroids should be smaller than that of S-type ones. This is in qualitative agreement with the observed effect in asteroids.
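Locating αmax numerically is straightforward once a phase function is fixed. The sketch below uses a Lumme-Muinonen-style trigonometric phase curve, a form often fitted to atmosphereless bodies; its coefficients are invented for illustration and are not the paper's Sh-matrix results:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Assumed empirical polarization phase curve (Lumme-Muinonen-style form);
# all coefficients below are illustrative, not fitted values from the paper.
def polarization(alpha_deg, b=20.0, c1=0.6, c2=0.3, alpha0=20.0):
    a = np.radians(alpha_deg)
    return (b * np.sin(a) ** c1 * np.cos(a / 2.0) ** c2
            * np.sin(np.radians(alpha_deg - alpha0)))

res = minimize_scalar(lambda a: -polarization(a), bounds=(30.0, 160.0),
                      method="bounded")
print(f"alpha_max = {res.x:.1f} deg, P_max = {-res.fun:.2f} %")
```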
Metabolic flexibility of mitochondrial respiratory chain disorders predicted by computer modelling.
Zieliński, Łukasz P; Smith, Anthony C; Smith, Alexander G; Robinson, Alan J
2016-11-01
Mitochondrial respiratory chain dysfunction causes a variety of life-threatening diseases affecting about 1 in 4300 adults. These diseases are genetically heterogeneous, but have the same outcome: reduced activity of mitochondrial respiratory chain complexes, causing decreased ATP production and potentially toxic accumulation of metabolites. Severity and tissue specificity of these effects vary between patients by unknown mechanisms, and treatment options are limited. So far most research has focused on the complexes themselves, and the impact on overall cellular metabolism is largely unclear. To illustrate how computer modelling can be used to better understand the potential impact of these disorders and inspire new research directions and treatments, we simulated them using a computer model of human cardiomyocyte mitochondrial metabolism containing over 300 characterised reactions and transport steps, with experimental parameters taken from the literature. Overall, simulations were consistent with patient symptoms, supporting their biological and medical significance. These simulations predicted: complex I deficiencies could be compensated using multiple pathways; complex II deficiencies had less metabolic flexibility due to impacting both the TCA cycle and the respiratory chain; and complex III and IV deficiencies caused the greatest decreases in ATP production, with metabolic consequences that parallel hypoxia. Our study demonstrates how results from computer models can be compared to a clinical phenotype and used as a tool for hypothesis generation for subsequent experimental testing. These simulations can enhance understanding of dysfunctional mitochondrial metabolism and suggest new avenues for research into treatment of mitochondrial disease and other areas of mitochondrial dysfunction. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Resolution Enhancement In Ultrasonic Imaging By A Time-Varying Filter
NASA Astrophysics Data System (ADS)
Ching, N. H.; Rosenfeld, D.; Braun, M.
1987-09-01
The study reported here investigates the use of a time-varying filter to compensate for the spreading of ultrasonic pulses due to the frequency dependence of attenuation by tissues. The effect of this pulse spreading is to degrade progressively the axial resolution with increasing depth. The form of compensation required to correct for this effect is impossible to realize exactly. A novel time-varying filter utilizing a bank of bandpass filters is proposed as a realizable approximation of the required compensation. The performance of this filter is evaluated by means of a computer simulation. The limits of its application are discussed. Apart from improving the axial resolution, and hence the accuracy of axial measurements, the compensating filter could be used in implementing tissue characterization algorithms based on attenuation data.
A Computing based Simulation Model for Missile Guidance in Planar Domain
NASA Astrophysics Data System (ADS)
Chauhan, Deepak Singh; Sharma, Rajiv
2017-10-01
This paper presents the design, development and implementation of a computing-based simulation model of interceptor missile guidance for countering an anti-ship missile through a navigation law. It investigates the possibility of deriving, testing and implementing an efficient variation of the proportional navigation (PN) and retro-proportional navigation (RPN) laws. A new guidance law [true combined proportional navigation (TCPN)] that combines the strengths of both PN and RPN and has superior capturability in a specified zone of interest is presented. The TCPN guidance law is modeled in a two-dimensional planar engagement model, and its performance is studied with respect to a varying navigation ratio (N) that depends on the heading error (HE) and the missile lead angle. The advantage of a varying navigation ratio is that, if N' > 2, Vc > 0 and Vm > 0, the sign of the navigation ratio is determined by cos(ɛ + HE): for cos(ɛ + HE) ≥ 0 and N > 0 the formulation reduces to PN, and for cos(ɛ + HE) < 0 and N < 0 it reduces to RPN. Hence, depending on the value of cos(ɛ + HE), the presented guidance strategy switches between the PN navigation ratio and the RPN navigation ratio. The theoretical framework of the TCPN guidance law is implemented in a two-dimensional setting. An important feature of TCPN is the heading error, and the aim is to achieve low values of the heading error in simulation. The results presented in this paper show the efficiency of the simulation model and also establish that TCPN can be an accurate guidance strategy with its own range of application and suitability.
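The switching logic can be condensed into a few lines. The sketch below evaluates the planar PN command a = N'·Vc·λ̇ and flips the sign of the effective navigation ratio on cos(ɛ + HE), as the abstract describes; the state representation and function names are illustrative, not the authors' code:

```python
import numpy as np

# Planar PN command a = N' * Vc * lambda_dot, with the sign of the effective
# navigation ratio switched on cos(eps + HE) as in the TCPN description.
def tcpn_accel(r_rel, v_rel, n_prime, eps_plus_he):
    r = np.linalg.norm(r_rel)
    lam_dot = (r_rel[0] * v_rel[1] - r_rel[1] * v_rel[0]) / r ** 2  # LOS rate
    vc = -np.dot(r_rel, v_rel) / r                                  # closing speed
    n_eff = n_prime if np.cos(eps_plus_he) >= 0.0 else -n_prime     # PN / RPN
    return n_eff * vc * lam_dot                 # lateral acceleration command

r_rel = np.array([5000.0, 2000.0])              # target relative position, m
v_rel = np.array([-300.0, -40.0])               # relative velocity, m/s
print(tcpn_accel(r_rel, v_rel, n_prime=3.0, eps_plus_he=0.4))
```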
OASIS - ORBIT ANALYSIS AND SIMULATION SOFTWARE
NASA Technical Reports Server (NTRS)
Wu, S. C.
1994-01-01
The Orbit Analysis and Simulation Software, OASIS, is a software system developed for covariance and simulation analyses of problems involving earth satellites, especially the Global Positioning System (GPS). It provides a flexible, versatile and efficient accuracy analysis tool for earth satellite navigation and GPS-based geodetic studies. To make future modifications and enhancements easy, the system is modular, with five major modules: PATH/VARY, REGRES, PMOD, FILTER/SMOOTHER, and OUTPUT PROCESSOR. PATH/VARY generates satellite trajectories. Among the factors taken into consideration are: 1) the gravitational effects of the planets, moon and sun; 2) space vehicle orientation and shapes; 3) solar pressure; 4) solar radiation reflected from the surface of the earth; 5) atmospheric drag; and 6) space vehicle gas leaks. The REGRES module reads the user's input, then determines if a measurement should be made based on geometry and time. PMOD modifies a previously generated REGRES file to facilitate various analysis needs. FILTER/SMOOTHER is especially suited to a multi-satellite precise orbit determination and geodetic-type problems. It can be used for any situation where parameters are simultaneously estimated from measurements and a priori information. Examples of nonspacecraft areas of potential application might be Very Long Baseline Interferometry (VLBI) geodesy and radio source catalogue studies. OUTPUT PROCESSOR translates covariance analysis results generated by FILTER/SMOOTHER into user-desired easy-to-read quantities, performs mapping of orbit covariances and simulated solutions, transforms results into different coordinate systems, and computes post-fit residuals. The OASIS program was developed in 1986. It is designed to be implemented on a DEC VAX 11/780 computer using VAX VMS 3.7 or higher. It can also be implemented on a Micro VAX II provided sufficient disk space is available.
Kanarska, Yuliya; Walton, Otis
2015-11-30
Fluid-granular flows are common phenomena in nature and industry. Here, an efficient computational technique based on the distributed Lagrange multiplier method is utilized to simulate complex fluid-granular flows. Each particle is explicitly resolved on an Eulerian grid as a separate domain, using solid volume fractions. The fluid equations are solved through the entire computational domain; however, Lagrange multiplier constraints are applied inside the particle domain such that the fluid within any volume associated with a solid particle moves as an incompressible rigid body. The particle–particle interactions are implemented using explicit force-displacement interactions for frictional inelastic particles, similar to the DEM method, with some modifications using the volume of an overlapping region as an input to the contact forces. Here, a parallel implementation of the method is based on the SAMRAI (Structured Adaptive Mesh Refinement Application Infrastructure) library.
Miller, Ross H; Hamill, Joseph
2009-08-01
Biomechanical aspects of running injuries are often inferred from external loading measurements. However, previous research has suggested that relationships between external loading and potential injury-inducing internal loads can be complex and nonintuitive. Further, the loading response to training interventions can vary widely between subjects. In this study, we use a subject-specific computer simulation approach to estimate internal and external loading of the distal tibia during the impact phase for two runners when running in shoes with different midsole cushioning parameters. The results suggest that: (1) changes in tibial loading induced by footwear are not reflected by changes in ground reaction force (GRF) magnitudes; (2) the GRF loading rate is a better surrogate measure of tibial loading and stress fracture risk than the GRF magnitude; and (3) averaging results across groups may potentially mask differential responses to training interventions between individuals.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Long, Daniel J.; Lee, Choonsik; Tien, Christopher
2013-01-15
Purpose: To validate the accuracy of a Monte Carlo source model of the Siemens SOMATOM Sensation 16 CT scanner using organ doses measured in physical anthropomorphic phantoms. Methods: The x-ray output of the Siemens SOMATOM Sensation 16 multidetector CT scanner was simulated within the Monte Carlo radiation transport code, MCNPX version 2.6. The resulting source model was able to perform various simulated axial and helical computed tomographic (CT) scans of varying scan parameters, including beam energy, filtration, pitch, and beam collimation. Two custom-built anthropomorphic phantoms were used to take dose measurements on the CT scanner: an adult male and a 9-month-old. The adult male is a physical replica of the University of Florida reference adult male hybrid computational phantom, while the 9-month-old is a replica of the University of Florida Series B 9-month-old voxel computational phantom. Each phantom underwent a series of axial and helical CT scans, during which organ doses were measured using fiber-optic coupled plastic scintillator dosimeters developed at the University of Florida. The physical setup was reproduced and simulated in MCNPX using the CT source model and the computational phantoms upon which the anthropomorphic phantoms were constructed. Average organ doses were then calculated based upon these MCNPX results. Results: For all CT scans, good agreement was seen between measured and simulated organ doses. For the adult male, the percent differences were within 16% for axial scans, and within 18% for helical scans. For the 9-month-old, the percent differences were all within 15% for both the axial and helical scans. These results are comparable to previously published validation studies using GE scanners and commercially available anthropomorphic phantoms. Conclusions: Overall results of this study show that the Monte Carlo source model can be used to accurately and reliably calculate organ doses for patients undergoing a variety of axial or helical CT examinations on the Siemens SOMATOM Sensation 16 scanner.
Numerical Performance Prediction of a Miniature Ramjet at Mach 4
2012-09-01
… with the computational fluid dynamics (CFD) code from ANSYS, CFX. The nozzle-throat area was varied to increase the backpressure, and this pushed the … normal shock that was sitting within the inlet out to the lip of the inlet cowl. Using the eddy dissipation combustion model in ANSYS CFX, a … improved accuracy in turbulence modeling. Subject terms: Mach 4, ramjet, drag, turbulence modeling, simulation, ANSYS CFX.
European Scientific Notes. Volume 37, Number 1.
1983-01-31
The instantaneous sea-state condition can be computed from a special data base coded… between the variables of accuracy, practicality, realism, and expense… Because the… tidal current variables have been played into some of the simulator runs… …tions vary widely in their realism, with some producing dynamic color pictures… approach channels, the alignment of jetties, and the establishment of… The system certainly seems to be valid, and the smooth dynamics, realism, and…
Development of Aspen: A microanalytic simulation model of the US economy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pryor, R.J.; Basu, N.; Quint, T.
1996-02-01
This report describes the development of an agent-based microanalytic simulation model of the US economy. The microsimulation model capitalizes on recent technological advances in evolutionary learning and parallel computing. Results are reported for a test problem that was run using the model. The test results demonstrate the model's ability to predict business-like cycles in an economy where prices and inventories are allowed to vary. Since most economic forecasting models have difficulty predicting any kind of cyclic behavior, these results show the potential of microanalytic simulation models to improve economic policy analysis and to provide new insights into underlying economic principles. Work has already begun on a more detailed model.
High-fidelity meshes from tissue samples for diffusion MRI simulations.
Panagiotaki, Eleftheria; Hall, Matt G; Zhang, Hui; Siow, Bernard; Lythgoe, Mark F; Alexander, Daniel C
2010-01-01
This paper presents a method for constructing detailed geometric models of tissue microstructure for synthesizing realistic diffusion MRI data. We construct three-dimensional mesh models from confocal microscopy image stacks using the marching cubes algorithm. Random-walk simulations within the resulting meshes provide synthetic diffusion MRI measurements. Experiments optimise simulation parameters and the complexity of the meshes to achieve accuracy and reproducibility while minimizing computation time. Finally, we assess the quality of the data synthesized from the mesh models by comparison with scanner data, as well as with synthetic data from simple geometric models and from simplified meshes that vary only in two dimensions. The results support the extra complexity of the three-dimensional mesh compared to simpler models, although the results are quite robust to the mesh resolution.
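The random-walk ingredient can be sketched compactly. The version below simulates free (unrestricted) diffusion under the narrow-pulse approximation and checks the signal against the analytic Gaussian result E = exp(-q²DT); the mesh confinement that is the paper's actual contribution is omitted:

```python
import numpy as np

rng = np.random.default_rng(1)

# Narrow-pulse random-walk sketch with FREE diffusion; the paper instead
# confines walkers inside 3-D meshes built from confocal microscopy, which
# is omitted here so the result can be checked against the Gaussian formula.
def diffusion_signal(q, n_walkers=20_000, n_steps=500, dt=1e-5, D=2e-9):
    step = np.sqrt(2.0 * D * dt)                  # rms step per axis, m
    x = np.zeros((n_walkers, 3))
    for _ in range(n_steps):                      # unrestricted Brownian steps
        x += step * rng.standard_normal((n_walkers, 3))
    return abs(np.mean(np.exp(1j * (x @ q))))     # E = <exp(i q . dx)>

q = np.array([2e5, 0.0, 0.0])                     # diffusion wave vector, rad/m
T = 500 * 1e-5                                    # total diffusion time, s
print(diffusion_signal(q), np.exp(-q[0] ** 2 * 2e-9 * T))   # should agree
```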
NASA Technical Reports Server (NTRS)
Abedin, M. N.; Prabhu, D. R.; Winfree, W. P.; Johnston, P. H.
1992-01-01
The effect on the system acoustic response of variations in the adhesive thickness, coupling thickness, and paint thickness is considered. Both simulations and experimental measurements are used to characterize and classify A-scans from test regions, and to study the effects of various parameters such as paint thickness and epoxy thickness on the variations in the reflected signals. A 1D model of sound propagation in multilayered structures is used to verify the validity of the measured signals, and is also used to computationally generate signals for a class of test locations with gradually varying parameters. This approach exploits the ability of numerical simulations to provide a good understanding of the ultrasonic pulses reflected at disbonds.
Effect of Mesostructure and Fragmentation on Planar Shock Response of Dry Sand
NASA Astrophysics Data System (ADS)
Dwivedi, Sunil; Hatanpaa, Benjamin; Effs, Kijana; Ferri, Brian; Thadhani, Naresh
2017-06-01
The objective of the present work is to gain insight into the role of grain arrangements (mesostructure) and fragmentation on the shock response of dry sand under planar plate impact loading. Mesoscale simulations of the dry sand sample were carried out for initial porosities of 20% and 30% using CUBIT, LS-DYNA, and TECPLOT software. The mesostructure was varied as ordered (grains with edge contacts) and disordered (grains with point contacts) for the same porosity. Grain fragmentation was modeled by an erosion method with erosion parameters of 0.5 and 0.75. The results show that the computed Us-Up slope for 20% porosity with the ordered mesostructure is negative at lower impact velocities and changes to positive when the velocity is increased. However, the disordered mesostructure yields a positive Us-Up slope at 20% porosity irrespective of the impact velocity. The Us-Up slope for 30% porous sand is positive irrespective of the mesostructure and impact velocity. More importantly, when grain fragmentation is allowed, the in-situ average longitudinal stress is reduced from the computed Hugoniot stress by more than 25%. These results suggest the need for detailed simulations with varying mesostructure and a more realistic fragmentation model, as well as experiments on a dry sand sample at lower porosities. Work supported by HDTRA-1-12-1-0004 and FA9550-12-1-0128 Grants.
Computational Models of Protein Kinematics and Dynamics: Beyond Simulation
Gipson, Bryant; Hsu, David; Kavraki, Lydia E.; Latombe, Jean-Claude
2016-01-01
Physics-based simulation represents a powerful method for investigating the time-varying behavior of dynamic protein systems at high spatial and temporal resolution. Such simulations, however, can be prohibitively difficult or lengthy for large proteins or when probing the lower-resolution, long-timescale behaviors of proteins generally. Importantly, not all questions about a protein system require full space and time resolution to produce an informative answer. For instance, by avoiding the simulation of uncorrelated, high-frequency atomic movements, a larger, domain-level picture of protein dynamics can be revealed. The purpose of this review is to highlight the growing body of complementary work that goes beyond simulation. In particular, this review focuses on methods that address kinematics and dynamics, as well as those that address larger organizational questions and can quickly yield useful information about the long-timescale behavior of a protein. PMID:22524225
The use of three-parameter rating table lookup programs, RDRAT and PARM3, in hydraulic flow models
Sanders, C.L.
1995-01-01
Subroutines RDRAT and PARM3 enable computer programs such as the BRANCH open-channel unsteady-flow model to route flows through or over combinations of critical-flow sections, culverts, bridges, road-overflow sections, fixed spillways, and/or dams. The subroutines also obstruct upstream flow to simulate operation of flapper-type tide gates. A multiplier can be applied by date and time to simulate varying numbers of tide gates being open or alternative construction scenarios for multiple culverts. The subroutines use three-parameter (headwater, tailwater, and discharge) rating table lookup methods. These tables may be manually prepared using other programs that do step-backwater computations or compute flow through bridges and culverts or over dams. The subroutines, therefore, preclude the necessity of incorporating considerable hydraulic computational code into the client program, and provide complete flexibility for users of the model in routing flow through almost any fixed structure or combination of structures. The subroutines are written in Fortran 77, and have minimal exchange of information with the BRANCH model or other possible client programs. The report documents the interpolation methodology, data input requirements, and software.
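The core lookup reduces to two-dimensional interpolation in a (headwater, tailwater) → discharge table. A minimal sketch, with invented table values and the tide-gate multiplier shown as a constant rather than a date-time series:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical three-parameter rating table: discharge as a function of
# headwater and tailwater stage (all values invented for illustration).
hw = np.array([1.0, 2.0, 3.0, 4.0])          # headwater stage, m
tw = np.array([0.5, 1.0, 1.5, 2.0])          # tailwater stage, m
q = np.array([[ 5.0,  4.0,  2.0,  0.0],
              [12.0, 10.0,  7.0,  3.0],
              [20.0, 18.0, 14.0,  9.0],
              [30.0, 27.0, 23.0, 17.0]])     # discharge, m^3/s
lookup = RegularGridInterpolator((hw, tw), q)

gate_multiplier = 0.5                        # e.g., half the tide gates open
print(gate_multiplier * lookup([[2.6, 1.2]]))
```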
Lemkul, Justin A; Roux, Benoît; van der Spoel, David; MacKerell, Alexander D
2015-07-15
Explicit treatment of electronic polarization in empirical force fields used for molecular dynamics simulations represents an important advancement in simulation methodology. A straightforward means of treating electronic polarization in these simulations is the inclusion of Drude oscillators, which are auxiliary, charge-carrying particles bonded to the cores of atoms in the system. The additional degrees of freedom make these simulations more computationally expensive relative to simulations using traditional fixed-charge (additive) force fields. Thus, efficient tools are needed for conducting these simulations. Here, we present the implementation of highly scalable algorithms in the GROMACS simulation package that allow for the simulation of polarizable systems using extended Lagrangian dynamics with a dual Nosé-Hoover thermostat as well as simulations using a full self-consistent field treatment of polarization. The performance of systems of varying size is evaluated, showing that the present code parallelizes efficiently and is the fastest implementation of the extended Lagrangian methods currently available for simulations using the Drude polarizable force field. © 2015 Wiley Periodicals, Inc.
An engineering closure for heavily under-resolved coarse-grid CFD in large applications
NASA Astrophysics Data System (ADS)
Class, Andreas G.; Yu, Fujiang; Jordan, Thomas
2016-11-01
Even though high performance computation allows a very detailed description of a wide range of scales in scientific computations, engineering simulations used for design studies commonly resolve only the large scales, thus speeding up simulation time. The coarse-grid CFD (CGCFD) methodology is developed for flows with repeated flow patterns, as often observed in heat exchangers or porous structures. It is proposed to use the inviscid Euler equations on a very coarse numerical mesh. This coarse mesh need not conform to the geometry in all details. To reinstate the physics on all smaller scales, cheap subgrid models are employed. Subgrid models are systematically constructed by analyzing well-resolved, generic, representative simulations. By varying the flow conditions in these simulations, correlations are obtained. These provide, for each individual coarse-mesh cell, a volume force vector and a volume porosity; moreover, surface porosities are derived for all vertices. CGCFD is related to the immersed boundary method, as both exploit volume forces and non-body-conformal meshes. Yet CGCFD differs with respect to the coarser mesh and the use of the Euler equations. We will describe the methodology based on a simple test case and the application of the method to a 127-pin wire-wrap fuel bundle.
Challenges to the development of complex virtual reality surgical simulations.
Seymour, N E; Røtnes, J S
2006-11-01
Virtual reality simulation in surgical training has become more widely used and intensely investigated in an effort to develop safer, more efficient, measurable training processes. The development of virtual reality simulation of surgical procedures has begun, but well-described technical obstacles must be overcome to permit varied training in a clinically realistic computer-generated environment. These challenges include development of realistic surgical interfaces and physical objects within the computer-generated environment, modeling of realistic interactions between objects, rendering of the surgical field, and development of signal processing for complex events associated with surgery. Of these, the realistic modeling of tissue objects that are fully responsive to surgical manipulations is the most challenging. Threats to early success include relatively limited resources for development and procurement, as well as smaller potential for return on investment than in other simulation industries that face similar problems. Despite these difficulties, steady progress continues to be made in these areas. If executed properly, virtual reality offers inherent advantages over other training systems in creating a realistic surgical environment and facilitating measurement of surgeon performance. Once developed, complex new virtual reality training devices must be validated for their usefulness in formative training and assessment of skill to be established.
Discrete Particle Method for Simulating Hypervelocity Impact Phenomena.
Watson, Erkai; Steinhauser, Martin O
2017-04-02
In this paper, we introduce a computational model for the simulation of hypervelocity impact (HVI) phenomena which is based on the Discrete Element Method (DEM). Our paper constitutes the first application of DEM to the modeling and simulating of impact events for velocities beyond 5 km s⁻¹. We present here the results of a systematic numerical study on HVI of solids. For modeling the solids, we use discrete spherical particles that interact with each other via potentials. In our numerical investigations we are particularly interested in the dynamics of material fragmentation upon impact. We model a typical HVI experiment configuration where a sphere strikes a thin plate and investigate the properties of the resulting debris cloud. We provide a quantitative computational analysis of the resulting debris cloud caused by impact and a comprehensive parameter study by varying key parameters of our model. We compare our findings from the simulations with recent HVI experiments performed at our institute. Our findings are that the DEM method leads to very stable, energy-conserving simulations of HVI scenarios that map the experimental setup where a sphere strikes a thin plate at hypervelocity speed. Our chosen interaction model works particularly well in the velocity range where the local stresses caused by impact shock waves markedly exceed the ultimate material strength. PMID:28772739
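The basic DEM ingredients, pairwise repulsion between overlapping spheres and an explicit time integrator, can be sketched briefly. The linear contact law and all parameters below are illustrative; the paper's interaction potentials and fragmentation treatment are more elaborate:

```python
import numpy as np

# Minimal DEM-style sketch: spheres with a linear repulsive contact force and
# velocity-Verlet integration. Contact law and parameters are illustrative.
def forces(x, radius=1.0, k=1e7):
    f = np.zeros_like(x)
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            d = x[j] - x[i]
            dist = np.linalg.norm(d)
            overlap = 2.0 * radius - dist
            if overlap > 0.0:                      # repel only on overlap
                fij = k * overlap * d / dist
                f[i] -= fij
                f[j] += fij
    return f

def step(x, v, m=1.0, dt=1e-6):
    v_half = v + 0.5 * dt * forces(x) / m          # velocity-Verlet half-kick
    x_new = x + dt * v_half
    v_new = v_half + 0.5 * dt * forces(x_new) / m
    return x_new, v_new

x = np.array([[0.0, 0.0, 0.0], [2.1, 0.0, 0.0]])
v = np.array([[500.0, 0.0, 0.0], [0.0, 0.0, 0.0]])  # head-on at 500 m/s
for _ in range(2000):
    x, v = step(x, v)
print(v)                                # momentum exchanged in the soft contact
```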
Leake, S.A.; Prudic, David E.
1988-01-01
The process of permanent compaction is not routinely included in simulations of groundwater flow. To simulate storage changes from both elastic and inelastic compaction, a computer program was written for use with the U. S. Geological Survey modular finite-difference groundwater flow model. The new program is called the Interbed-Storage Package. In the Interbed-Storage Package, elastic compaction or expansion is assumed to be proportional to change in head. The constant of proportionality is the product of skeletal component of elastic specific storage and thickness of the sediments. Similarly, inelastic compaction is assumed to be proportional to decline in head. The constant of proportionality is the product of the skeletal component of inelastic specific storage and the thickness of the sediments. Storage changes are incorporated into the groundwater flow model by adding an additional term to the flow equation. Within a model time step, the package appropriately apportions storage changes between elastic and inelastic components on the basis of the relation of simulated head to the previous minimum head. Another package that allows for a time-varying specified-head boundary is also documented. This package was written to reduce the data requirements for test simulations of the Interbed-Storage Package. (USGS)
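The elastic/inelastic apportioning logic can be paraphrased in a few lines. Head declines below the running minimum (preconsolidation) head accumulate inelastic compaction; changes above it are elastic. The function below is an illustrative paraphrase, not the Fortran of the Interbed-Storage Package:

```python
# Illustrative paraphrase of the Interbed-Storage apportioning: sske/sskv are
# elastic/inelastic skeletal specific storages (1/m), b is interbed thickness
# (m); a positive return value means compaction.
def compaction_increment(h_new, h_old, h_min, sske, sskv, b):
    if h_new >= h_min:                       # head stays above preconsolidation
        return sske * b * (h_old - h_new), h_min
    elastic = sske * b * (h_old - h_min)     # elastic part down to h_min
    inelastic = sskv * b * (h_min - h_new)   # inelastic part below h_min
    return elastic + inelastic, h_new        # h_new becomes the new minimum

dc, h_min = compaction_increment(h_new=9.0, h_old=10.5, h_min=10.0,
                                 sske=1e-6, sskv=1e-4, b=20.0)
print(dc, h_min)                             # mostly inelastic compaction
```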
Huang, Jie; Zeng, Xiaoping; Jian, Xin; Tan, Xiaoheng; Zhang, Qi
2017-01-01
The spectrum allocation for cognitive radio sensor networks (CRSNs) has received considerable research attention under the assumption that the spectrum environment is static. However, in practice, the spectrum environment varies over time due to primary user/secondary user (PU/SU) activity and mobility, resulting in time-varied spectrum resources. This paper studies resource allocation for chunk-based multi-carrier CRSNs with time-varied spectrum resources. We present a novel opportunistic capacity model, based on a continuous-time semi-Markov chain (CTSMC), to describe the time-varied spectrum resources of chunks and, based on this, propose a joint power and chunk allocation model that considers the opportunistically available capacity of chunks. To reduce the computational complexity, we split this model into two sub-problems and solve them via the Lagrangian dual method. Simulation results illustrate that the proposed opportunistic capacity-based resource allocation algorithm can achieve better performance compared with traditional algorithms when the spectrum environment is time-varied. PMID:28106803
NASA Astrophysics Data System (ADS)
Vallières, Martin; Laberge, Sébastien; Diamant, André; El Naqa, Issam
2017-11-01
Texture-based radiomic models constructed from medical images have the potential to support cancer treatment management via personalized assessment of tumour aggressiveness. While the identification of stable texture features under varying imaging settings is crucial for the translation of radiomics analysis into routine clinical practice, we hypothesize in this work that a complementary optimization of image acquisition parameters prior to texture feature extraction could enhance the predictive performance of texture-based radiomic models. As a proof of concept, we evaluated the possibility of enhancing a model constructed for the early prediction of lung metastases in soft-tissue sarcomas by optimizing PET and MR image acquisition protocols via computerized simulations of image acquisitions with varying parameters. Simulated PET images from 30 STS patients were acquired by varying the extent of axial data combined per slice (‘span’). Simulated T1-weighted and T2-weighted MR images were acquired by varying the repetition time and echo time in a spin-echo pulse sequence, respectively. We analyzed the impact of the variations of PET and MR image acquisition parameters on individual textures, and we investigated how these variations could enhance the global response and the predictive properties of a texture-based model. Our results suggest that it is feasible to identify an optimal set of image acquisition parameters to improve prediction performance. The model constructed with textures extracted from simulated images acquired with a standard clinical set of acquisition parameters reached an average AUC of 0.84 ± 0.01 in bootstrap testing experiments. In comparison, the model performance significantly increased using an optimal set of image acquisition parameters (p = 0.04), with an average AUC of 0.89 ± 0.01. Ultimately, specific acquisition protocols optimized to generate superior radiomics measurements for a given clinical problem could be developed and standardized via dedicated computer simulations and thereafter validated using clinical scanners.
NASA Technical Reports Server (NTRS)
Sohn, Andrew; Biswas, Rupak
1996-01-01
Solving the hard Satisfiability Problem is time consuming even for modest-sized problem instances. Solving the Random L-SAT Problem is especially difficult due to the ratio of clauses to variables. This report presents a parallel synchronous simulated annealing method for solving the Random L-SAT Problem on a large-scale distributed-memory multiprocessor. In particular, we use a parallel synchronous simulated annealing procedure, called Generalized Speculative Computation, which guarantees the same decision sequence as sequential simulated annealing. To demonstrate the performance of the parallel method, we have selected problem instances varying in size from 100-variables/425-clauses to 5000-variables/21,250-clauses. Experimental results on the AP1000 multiprocessor indicate that our approach can satisfy 99.9 percent of the clauses while giving almost a 70-fold speedup on 500 processors.
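The sequential kernel that Generalized Speculative Computation parallelizes can be sketched as plain simulated annealing over variable flips. The cooling schedule and the random instance below are illustrative; only the sequential decision process, not the parallel machinery, is reproduced:

```python
import math
import random

random.seed(0)

# Plain sequential simulated annealing on a random 3-SAT instance; the paper
# parallelizes an equivalent decision sequence on a multiprocessor.
def unsat_count(clauses, assign):
    return sum(not any(assign[abs(l)] == (l > 0) for l in c) for c in clauses)

def anneal(clauses, n_vars, t0=2.0, cooling=0.9995, steps=20_000):
    assign = [False] + [random.random() < 0.5 for _ in range(n_vars)]
    cost, t = unsat_count(clauses, assign), t0
    for _ in range(steps):
        v = random.randint(1, n_vars)
        assign[v] = not assign[v]                  # propose a single flip
        new = unsat_count(clauses, assign)
        if new <= cost or random.random() < math.exp((cost - new) / t):
            cost = new                             # accept the flip
        else:
            assign[v] = not assign[v]              # reject: undo the flip
        t *= cooling
    return cost

n_vars, n_clauses = 100, 425                       # smallest instance size cited
clauses = [[v if random.random() < 0.5 else -v
            for v in random.sample(range(1, n_vars + 1), 3)]
           for _ in range(n_clauses)]
print(1.0 - anneal(clauses, n_vars) / n_clauses, "fraction of clauses satisfied")
```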
Design and testing of a magnetic suspension and damping system for a space telescope
NASA Technical Reports Server (NTRS)
Ockman, N. J.
1972-01-01
The basic equations of motion are derived for a two dimensional, three degree of freedom simulation of a space telescope coupled to a spacecraft by means of a magnetic suspension and isolation system. The system consists of paramagnetic or ferromagnetic discs confined to the magnetic field between two Helmholtz coils. Damping is introduced by varying the magnetic field in proportion to a velocity signal derived from the telescope. The equations of motion are nonlinear, similar in behavior to the one-dimensional Van der Pol equation. The computer simulation was verified by testing a 264-kilogram air bearing platform which simulates the telescope in a frictionless environment. The simulation demonstrated effective isolation capabilities for disturbance frequencies above resonance. Damping in the system improved the response near resonance and prevented the build-up of large oscillatory amplitudes.
Man-vehicle systems research facility advanced aircraft flight simulator throttle mechanism
NASA Technical Reports Server (NTRS)
Kurasaki, S. S.; Vallotton, W. C.
1985-01-01
The Advanced Aircraft Flight Simulator is equipped with a motorized mechanism that simulates a two-engine throttle control system that can be operated via a computer-driven performance management system or manually by the pilots. The throttle control system incorporates features to simulate normal engine operations and thrust reverse, and to vary the force feel to meet a variety of research needs. The mechanical aspects function correctly, so the additional work required for integration and testing is now principally in software design. The mechanism is an important part of the flight control system and provides the capability to conduct human factors research on flight crews with advanced aircraft systems under various flight conditions, such as go-arounds, coupled instrument flight rule approaches, normal and ground operations, and emergencies that would or would not normally be experienced in actual flight.
Self-consistent field theory simulations of polymers on arbitrary domains
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ouaknin, Gaddiel, E-mail: gaddielouaknin@umail.ucsb.edu; Laachi, Nabil; Delaney, Kris
2016-12-15
We introduce a framework for simulating the mesoscale self-assembly of block copolymers in arbitrary confined geometries subject to Neumann boundary conditions. We employ a hybrid finite difference/volume approach to discretize the mean-field equations on an irregular domain represented implicitly by a level-set function. The numerical treatment of the Neumann boundary conditions is sharp, i.e. it avoids an artificial smearing in the irregular domain boundary. This strategy enables the study of self-assembly in confined domains and enables the computation of physically meaningful quantities at the domain interface. In addition, we employ adaptive grids encoded with Quad-/Oc-trees in parallel to automatically refine the grid where the statistical fields vary rapidly as well as at the boundary of the confined domain. This approach results in a significant reduction in the number of degrees of freedom and makes the simulations in arbitrary domains using effective boundary conditions computationally efficient in terms of both speed and memory requirement. Finally, in the case of regular periodic domains, where pseudo-spectral approaches are superior to finite differences in terms of CPU time and accuracy, we use the adaptive strategy to store chain propagators, reducing the memory footprint without loss of accuracy in computed physical observables.
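For context, the mean-field equations referenced here revolve around the standard SCFT chain-propagator diffusion equation, solved subject to the no-flux condition on the implicit boundary (textbook SCFT background, not an equation quoted from the paper):

```latex
\frac{\partial q(\mathbf{r},s)}{\partial s}
  \;=\; \nabla^{2} q(\mathbf{r},s) \;-\; w(\mathbf{r})\, q(\mathbf{r},s),
\qquad
\nabla q \cdot \mathbf{n} \;=\; 0 \quad \text{on } \partial\Omega ,
```

where q is the chain propagator, w the mean field, and ∂Ω the level-set domain boundary.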
NASA Astrophysics Data System (ADS)
Wolfs, Vincent; Willems, Patrick
2013-10-01
Many applications in support of water management decisions require hydrodynamic models with limited calculation time, including real-time control of river flooding, uncertainty and sensitivity analyses by Monte-Carlo simulations, and long-term simulations in support of the statistical analysis of the model simulation results (e.g. flood frequency analysis). Several computationally efficient hydrodynamic models exist, but little attention is given to the modelling of floodplains. This paper presents a methodology that can emulate output from a full hydrodynamic model by predicting one or several water levels in a floodplain, together with the flow rate between river and floodplain. The overtopping of the embankment is modelled as an overflow at a weir. Adaptive neuro-fuzzy inference systems (ANFIS) are exploited to cope with the varying factors affecting the flow. Different input sets and identification methods are considered in model construction. Because of the dual use of simplified physically based equations and data-driven techniques, the ANFIS consist of very few rules with a low number of input variables. A second calculation scheme can be followed for exceptionally large floods. The obtained nominal emulation model was tested for four floodplains along the river Dender in Belgium. Results show that the obtained models are accurate with low computational cost.
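The weir treatment of embankment overtopping admits a one-line sketch. Below is a free-overflow broad-crested weir relation in SI units with an illustrative coefficient; submergence corrections and the ANFIS emulation that replaces the full hydrodynamics are omitted:

```python
# Free-overflow broad-crested weir in SI units (illustrative cd ~ 1.7
# m^0.5/s); submerged-flow corrections and the ANFIS correction are omitted.
def weir_flow(h_river, h_crest, width, cd=1.7):
    head = max(h_river - h_crest, 0.0)       # overtopping head, m
    return cd * width * head ** 1.5          # discharge into floodplain, m^3/s

print(weir_flow(h_river=6.3, h_crest=6.0, width=45.0))   # about 12.6 m^3/s
```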
Singh, Karandeep; Ahn, Chang-Won; Paik, Euihyun; Bae, Jang Won; Lee, Chun-Hee
2018-01-01
Artificial life (ALife) examines systems related to natural life, its processes, and its evolution, using simulations with computer models, robotics, and biochemistry. In this article, we focus on the computer modeling, or "soft," aspects of ALife and prepare a framework for scientists and modelers to be able to support such experiments. The framework is designed and built to be a parallel as well as distributed agent-based modeling environment, and does not require end users to have expertise in parallel or distributed computing. Furthermore, we use this framework to implement a hybrid model using microsimulation and agent-based modeling techniques to generate an artificial society. We leverage this artificial society to simulate and analyze population dynamics using Korean population census data. The agents in this model derive their decisional behaviors from real data (microsimulation feature) and interact among themselves (agent-based modeling feature) to proceed in the simulation. The behaviors, interactions, and social scenarios of the agents are varied to perform an analysis of population dynamics. We also estimate the future cost of pension policies based on the future population structure of the artificial society. The proposed framework and model demonstrates how ALife techniques can be used by researchers in relation to social issues and policies.
Yılmaz, Bülent; Çiftçi, Emre
2013-06-01
Extracorporeal Shock Wave Lithotripsy (ESWL) is based on disintegration of the kidney stone by delivering high-energy shock waves that are created outside the body and transmitted through the skin and body tissues. Nowadays high-energy shock waves are also used in orthopedic operations and are being investigated for use in the treatment of myocardial infarction and cancer. Because of these new application areas, novel lithotriptor designs are needed for different kinds of treatment strategies. In this study our aim was to develop a versatile computer simulation environment that would give device designers working on the various medical applications that use the shock wave principle a substantial amount of flexibility while testing the effects of new parameters such as reflector size, material properties of the medium, water temperature, and different clinical scenarios. For this purpose, we created a finite-difference time-domain (FDTD)-based computational model in which most of the physical system parameters were defined as an input and/or as a variable in the simulations. We constructed a realistic computational model of a commercial electrohydraulic lithotriptor and optimized our simulation program using the results that were obtained by the manufacturer in an experimental setup. We then compared the simulation results with the results from an experimental setup in which the oxygen level in water was varied. Finally, we studied the effects of changing input parameters such as ellipsoid size and material, temperature change in the wave propagation media, and shock wave source point misalignment. The simulation results were consistent with the experimental results and with the expected effects of variation in the physical parameters of the system. The results of this study encourage further investigation and provide adequate evidence that the numerical modeling of a shock wave therapy system is feasible and can provide a practical means to test novel ideas in new device design procedures. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
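The FDTD core of such a model can be illustrated in one dimension. The staggered-grid acoustic update below (pressure p, velocity u, water-like parameters, with a CFL check) only demonstrates the scheme; the study's lithotriptor model is multi-dimensional with a reflector geometry:

```python
import numpy as np

# 1-D staggered-grid acoustic FDTD (pressure p, velocity u) with water-like
# parameters; purely illustrative of the scheme, not the lithotriptor model.
c, rho = 1482.0, 1000.0            # sound speed (m/s), density (kg/m^3)
nx, dx, dt = 400, 1e-4, 1e-8       # grid cells, cell size (m), time step (s)
assert c * dt / dx < 1.0           # CFL stability condition
p = np.zeros(nx)
u = np.zeros(nx + 1)
for n in range(1000):
    u[1:-1] -= dt / (rho * dx) * (p[1:] - p[:-1])          # momentum update
    p -= dt * rho * c ** 2 / dx * (u[1:] - u[:-1])         # pressure update
    p[0] = np.exp(-(((n * dt) - 2e-6) / 5e-7) ** 2)        # Gaussian source
```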
An experiment on the use of disposable plastics as a reinforcement in concrete beams
NASA Technical Reports Server (NTRS)
Chowdhury, Mostafiz R.
1992-01-01
The concept of reinforced concrete structures is illustrated here through computer simulation and an inexpensive hands-on design experiment. The students in our construction management program use disposable plastics as reinforcement to demonstrate their understanding of reinforced concrete and prestressed concrete beams. The plastics used for such an experiment vary from plastic bottles to steel-reinforced auto tires. This experiment shows the extent to which plastic reinforcement increases the strength of a concrete beam. The procedure of using such throw-away plastics in an experiment to explain the interaction between the reinforcement material and concrete is discussed, along with a comparison of the test results for different types of waste plastics. A computer analysis that simulates the structural response is used for comparison with the test results and to convey the analytical background of reinforced concrete design. This combination of using computers to analyze structures and relating the output to real experimentation is found to be a very useful method for teaching a math-based analytical subject to our non-engineering students.
Mitigating Communication Delays in Remotely Connected Hardware-in-the-loop Experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cale, James; Johnson, Brian; Dall'Anese, Emiliano
2018-03-30
This paper introduces a potential approach for mitigating the effects of communication delays between multiple closed-loop hardware-in-the-loop experiments that are virtually connected yet physically separated. The method consists of an analytical approach for the compensation of communication delays, along with the supporting computational and communication infrastructure. The control design leverages tools for the design of observers for the compensation of measurement errors in systems with time-varying delays. The proposed methodology is validated through computer simulation and hardware experimentation connecting hardware-in-the-loop experiments conducted between laboratories separated by a distance of over 100 km.
NASA Technical Reports Server (NTRS)
Bonhaus, Daryl L.; Wornom, Stephen F.
1991-01-01
Two codes which solve the 3-D Thin Layer Navier-Stokes (TLNS) equations are used to compute the steady state flow for two test cases representing typical finite wings at transonic conditions. Several grids of C-O topology and varying point densities are used to determine the effects of grid refinement. After a description of each code and test case, standards for determining code efficiency and accuracy are defined and applied to determine the relative performance of the two codes in predicting turbulent transonic wing flows. Comparisons of computed surface pressure distributions with experimental data are made.
Improved first-order uncertainty method for water-quality modeling
Melching, C.S.; Anmangandla, S.
1992-01-01
Uncertainties are unavoidable in water-quality modeling and subsequent management decisions. Monte Carlo simulation and first-order uncertainty analysis (involving linearization at central values of the uncertain variables) have been frequently used to estimate probability distributions for water-quality model output due to their simplicity. Each method has its drawbacks: Monte Carlo simulation's is mainly computational time; first-order analysis's are mainly questions of accuracy and representativeness, especially for nonlinear systems and extreme conditions. An improved (advanced) first-order method is presented, where the linearization point varies to match the output level whose exceedance probability is sought. The advanced first-order method is tested on the Streeter-Phelps equation to estimate the probability distribution of critical dissolved-oxygen deficit and critical dissolved oxygen, using two hypothetical examples from the literature. The advanced first-order method provides a close approximation of the exceedance probability for the Streeter-Phelps model output estimated by Monte Carlo simulation using less computer time - by two orders of magnitude - regardless of the probability distributions assumed for the uncertain model parameters.
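The conventional first-order (FOSM) step that the advanced method improves on can be sketched directly on the Streeter-Phelps deficit equation: linearize at the parameter means and propagate variances through the numerical gradient. The parameter means and coefficients of variation below are illustrative:

```python
import numpy as np

# Conventional first-order (FOSM) propagation for the Streeter-Phelps deficit,
# linearized at the parameter means; the paper's "advanced" method instead
# re-linearizes at each sought output level. Means and CVs are illustrative.
def deficit(kd, ka, L0, D0, t):
    return (kd * L0 / (ka - kd) * (np.exp(-kd * t) - np.exp(-ka * t))
            + D0 * np.exp(-ka * t))

mu = np.array([0.3, 0.7, 20.0, 1.0])     # kd, ka (1/day), L0, D0 (mg/L)
sd = 0.2 * mu                            # 20% coefficients of variation
t = 2.0                                  # days of downstream travel time
grad = np.empty(4)
for i in range(4):
    p = mu.copy()
    p[i] *= 1.001                        # forward finite difference
    grad[i] = (deficit(*p, t) - deficit(*mu, t)) / (0.001 * mu[i])
var = np.sum((grad * sd) ** 2)           # independent parameters assumed
print(deficit(*mu, t), np.sqrt(var))     # mean deficit and its std, mg/L
```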
Numerical Simulation of the Water Cycle Change Over the 20th Century
NASA Technical Reports Server (NTRS)
Bosilovich, Michael G.; Schubert, Siegfried D.
2003-01-01
We have used numerical models to test the impact of the change in Sea Surface Temperatures (SSTs) and carbon dioxide (CO2) concentration on the global circulation, particularly focusing on the hydrologic cycle, namely the global cycling of water and continental recycling of water. We have run four numerical simulations using mean annual SST from the early part of the 20th century (1900-1920) and the later part (1980-2000). In addition, we vary the CO2 concentrations for these periods as well. The duration of the simulations is 15 years, and the spatial resolution is 2 degrees. We use passive tracers to study the geographical sources of water. Surface evaporation from predetermined continental and oceanic regions provides the source of water for each passive tracer. In this way, we compute the percent of precipitation of each region over the globe. This can also be used to estimate precipitation recycling. In addition, we are using the passive tracers to independently compute the global cycling of water (compared to the traditional, Q/P calculation).
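The tracer bookkeeping amounts to attributing precipitation at each point to evaporative source regions and normalizing. A toy sketch with invented fluxes:

```python
# Tracer bookkeeping in miniature: precipitation at one grid cell attributed
# to evaporative source regions (fluxes invented for illustration, mm/day).
precip_by_source = {"local_continent": 1.2, "ocean_A": 2.3, "ocean_B": 0.5}
total = sum(precip_by_source.values())
recycling_ratio = precip_by_source["local_continent"] / total
print(f"{recycling_ratio:.1%} of precipitation is continentally recycled")
```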
Line-of-sight pointing accuracy/stability analysis and computer simulation for small spacecraft
NASA Astrophysics Data System (ADS)
Algrain, Marcelo C.; Powers, Richard M.
1996-06-01
This paper presents a case study where a comprehensive computer simulation is developed to determine the driving factors contributing to spacecraft pointing accuracy and stability. The simulation is implemented using XMATH/SystemBuild software from Integrated Systems, Inc. The paper is written in a tutorial manner and models for major system components are described. Among them are the spacecraft bus, attitude controller, reaction wheel assembly, star-tracker unit, inertial reference unit, and gyro drift estimators (Kalman filter). The predicted spacecraft performance is analyzed for a variety of input commands and system disturbances. The primary deterministic inputs are desired attitude angles and rate setpoints. The stochastic inputs include random torque disturbances acting on the spacecraft, random gyro bias noise, gyro random walk, and star-tracker noise. These inputs are varied over a wide range to determine their effects on pointing accuracy and stability. The results are presented in the form of trade-off curves designed to facilitate the proper selection of subsystems so that overall spacecraft pointing accuracy and stability requirements are met.
Turbulent flow in a 180 deg bend: Modeling and computations
NASA Technical Reports Server (NTRS)
Kaul, Upender K.
1989-01-01
A low Reynolds number k-epsilon turbulence model was presented which yields accurate predictions of the kinetic energy near the wall. The model is validated with the experimental channel flow data of Kreplin and Eckelmann. The predictions are also compared with earlier results from direct simulation of turbulent channel flow. The model is especially useful for internal flows where the inflow boundary condition of epsilon is not easily prescribed. The model partly derives from some observations based on earlier direct simulation results of near-wall turbulence. The low Reynolds number turbulence model together with an existing curvature correction appropriate to spinning cylinder flows was used to simulate the flow in a U-bend with the same radius of curvature as the Space Shuttle Main Engine (SSME) Turn-Around Duct (TAD). The present computations indicate a space varying curvature correction parameter as opposed to a constant parameter as used in the spinning cylinder flows. Comparison with limited available experimental data is made. The comparison is favorable, but detailed experimental data is needed to further improve the curvature model.
Computer-aided software development process design
NASA Technical Reports Server (NTRS)
Lin, Chi Y.; Levary, Reuven R.
1989-01-01
The authors describe an intelligent tool designed to aid managers of software development projects in planning, managing, and controlling the development process of medium- to large-scale software projects. Its purpose is to reduce uncertainties in the budget, personnel, and schedule planning of software development projects. It is based on a dynamic model of the software development and maintenance life-cycle process. This dynamic process is composed of a number of time-varying, interacting developmental phases, each characterized by its intended functions and requirements. System dynamics is used as the modeling methodology. The resulting Software Life-Cycle Simulator (SLICS) and the hybrid expert simulation system of which it is a subsystem are described.
Ballistic missile precession frequency extraction by spectrogram's texture
NASA Astrophysics Data System (ADS)
Wu, Longlong; Xu, Shiyou; Li, Gang; Chen, Zengping
2013-10-01
In order to extract the precession frequency, a crucial parameter in ballistic target recognition that reflects kinematical characteristics as well as structural and mass-distribution features, we developed a dynamic RCS signal model for a conical ballistic missile warhead with log-normal multiplicative noise in place of the familiar additive noise. We derived formulas for the micro-Doppler induced by precession motion, analyzed the time-varying micro-Doppler features using time-frequency transforms, extracted the precession frequency by measuring the spectrogram's texture, and verified the approach through computer simulation studies. The simulations demonstrate the excellent performance of the proposed method in extracting the precession frequency, especially in the case of low SNR.
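The abstract does not spell out its texture measure, but the overall pipeline (spectrogram of the micro-Doppler signal, then extraction of its dominant periodicity) can be sketched as follows. The synthetic signal, window sizes, and the autocorrelation-based period estimate below are stand-in assumptions, not the authors' algorithm:

    import numpy as np
    from scipy.signal import find_peaks, spectrogram

    # Synthetic stand-in for a precessing target's return: a tone whose frequency
    # is modulated at f_prec. All values here are illustrative, not the paper's.
    fs, f_prec = 1000.0, 2.0
    t = np.arange(0, 10, 1 / fs)
    x = np.cos(2 * np.pi * (100 * t + 20 * np.sin(2 * np.pi * f_prec * t)))
    x += 0.1 * np.random.default_rng(0).normal(size=t.size)

    f, tau, S = spectrogram(x, fs=fs, nperseg=128, noverlap=96)

    # Texture proxy: follow the strongest frequency bin over time, then take the
    # dominant periodicity of that track from its autocorrelation.
    ridge = f[np.argmax(S, axis=0)]
    ridge = ridge - ridge.mean()
    ac = np.correlate(ridge, ridge, mode="full")[ridge.size - 1:]
    peaks, _ = find_peaks(ac, height=0.3 * ac[0])
    f_est = 1.0 / (peaks[0] * (tau[1] - tau[0]))
    print(f"estimated precession frequency: {f_est:.2f} Hz (true value {f_prec} Hz)")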
The Formation of Filamentary Structures in Radiative Cluster Winds
NASA Astrophysics Data System (ADS)
Rodríguez-González, Ary; Esquivel, Alejandro; Raga, Alejandro C.; Cantó, Jorge
We explore the dynamics of a "cluster wind" flow in the regime in which the shocks resulting from the interaction of winds from nearby stars are radiative. We show that for a cluster of low- to intermediate-mass stars, the wind interactions are indeed likely to be radiative. We then compute three-dimensional, radiative simulations of a cluster of 75 young stars, exploring the effects of varying the wind parameters and the density of the initial ISM that permeates the volume of the cluster. These simulations show that the ISM is compressed by the action of the winds into a structure of dense knots and filaments.
Computer Simulations of Deltas with Varying Fluvial Input and Tidal Forcing
NASA Astrophysics Data System (ADS)
Sun, T.
2015-12-01
Deltas are important depositional systems because many of the world's large hydrocarbon reservoirs are found in delta deposits. Deltas form when water and sediments carried by fluvial channels empty into an open body of water, building delta-shaped deposits. Depending on the relative importance of the physical processes that control the formation and growth of deltas, they are often classified into three types: fluvial-, tidal-, and wave-dominated deltas. Many previous works, using examples from modern systems, tank experiments, outcrops, and 2D and 3D seismic data sets, have studied the shape, morphology, and stratigraphic architectures corresponding to each delta type. However, few studies have focused on how these properties change as a function of the relative change of the key controls, and most of those studies are qualitative. Here, using computer simulations, the dynamics of delta evolution under increasing amounts of tidal influence are studied. The computer model used is fully based on the physics of fluid flow and sediment transport. In the model, tidal influences are taken into account by setting proper boundary conditions that vary both temporally and spatially. The model is capable of capturing many important natural geomorphic and sedimentary processes in fluvial and tidal systems, such as channel initiation, formation of channel levees, growth of mouth bars, bifurcation of channels around channel mouth bars, and channel avulsion. By systematically varying tidal range and fluvial input, the following properties are investigated quantitatively: (1) the presence and form of tidal beds as a function of tidal range, (2) the change in stratigraphic architecture of distributary channel mouth bars or tidal bars as tidal range changes, (3) the transport and sorting of different grain sizes and the overall facies distributions in the delta under different tidal ranges, and (4) the conditions and locations of mud drapes under different magnitudes of tidal forcing.
Understanding resonance graphs using Easy Java Simulations (EJS) and why we use EJS
NASA Astrophysics Data System (ADS)
Wee, Loo Kang; Lee, Tat Leong; Chew, Charles; Wong, Darren; Tan, Samuel
2015-03-01
This paper reports a computer model simulation created using Easy Java Simulation (EJS) for learners to visualize how the steady-state amplitude of a driven oscillating system varies with the frequency of the periodic driving force. The simulation shows (N = 100) identical spring-mass systems being subjected to (1) a periodic driving force of equal amplitude but different driving frequencies, and (2) different amounts of damping. The simulation aims to create a visually intuitive way of understanding how the series of amplitude versus driving frequency graphs is obtained by showing how the displacement of the system changes over time as it transits from the transient to the steady state. A suggested ‘how to use’ guide is added to help educators and students in their teaching and learning, in which we explain the theoretical steady-state equation, the time conditions after which the model begins recording maximum amplitudes so that they closely match the theoretical equation, and the steps for collecting runs with different degrees of damping. We also discuss two of the design features of our computer model: displaying the instantaneous oscillation together with the achieved steady-state amplitudes, and explicitly overlaying the world view with the scientific representation for runs with different degrees of damping. Three advantages of using EJS are: (1) open source code and Creative Commons attribution licenses that allow scaling up of interactively engaging educational practices; (2) the models made can run on almost any device, including Android and iOS; and (3) it allows the redefinition of physics educational practices through computer modeling.
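The steady-state curves such a simulation visualizes follow the standard driven, damped mass-spring result A(w) = (F0/m) / sqrt((w0^2 - w^2)^2 + (b*w/m)^2). A short sketch, with parameter values that are illustrative rather than taken from the EJS model:

    import numpy as np

    # Steady-state amplitude of a driven, damped mass-spring system:
    #   A(w) = (F0/m) / sqrt((w0**2 - w**2)**2 + (b*w/m)**2)
    # evaluated for several damping coefficients b. All values are illustrative.
    m, k, F0 = 1.0, 100.0, 1.0            # mass, spring constant, drive amplitude
    w0 = np.sqrt(k / m)                   # natural angular frequency
    w = np.linspace(0.1, 2.5 * w0, 500)   # driving angular frequencies

    for b in (0.5, 1.0, 2.0, 5.0):        # increasing damping lowers and flattens the peak
        A = (F0 / m) / np.sqrt((w0**2 - w**2) ** 2 + (b * w / m) ** 2)
        print(f"b={b}: peak amplitude {A.max():.3f} at w = {w[np.argmax(A)]:.2f}")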
NASA Astrophysics Data System (ADS)
Saxena, Nishank; Hows, Amie; Hofmann, Ronny; Alpak, Faruk O.; Freeman, Justin; Hunter, Sander; Appel, Matthias
2018-06-01
This study defines the optimal operating envelope of the Digital Rock technology from the perspective of imaging and numerical simulations of transport properties. Imaging larger volumes of rock for Digital Rock Physics (DRP) analysis improves the chances of achieving a Representative Elementary Volume (REV), at which flow-based simulations (1) do not vary with changes in rock volume and (2) are insensitive to the choice of boundary conditions. However, this often comes at the expense of image resolution. This trade-off exists due to the finiteness of current state-of-the-art imaging detectors. Imaging and analyzing digital rocks that sample the REV and still sufficiently resolve pore throats is critical to ensure simulation quality and robustness of rock property trends for further analysis. We find that at least 10 voxels are needed to sufficiently resolve pore throats for single-phase fluid flow simulations. If this condition is not met, additional analyses and corrections may allow for meaningful comparisons between simulation results and laboratory measurements of permeability, but some cases may fall outside the current technical feasibility of DRP. On the other hand, we find that the ratio of field of view to effective grain size provides a reliable measure of the REV for siliciclastic rocks. If this ratio is greater than 5, the coefficient of variation for single-phase permeability simulations drops below 15%. These imaging considerations are crucial when comparing digitally computed rock flow properties with those measured in the laboratory. We find that current imaging methods are sufficient to achieve both the REV (with respect to numerical boundary conditions) and the image resolution required to perform digital core analysis for coarse- to fine-grained sandstones.
Bethge, Anja; Schumacher, Udo; Wree, Andreas; Wedemann, Gero
2012-01-01
Metastasis formation remains an enigmatic process, and one of the main questions recently asked is whether metastases are able to generate further metastases. Different models have been proposed to answer this question; however, their clinical significance remains unclear. Therefore, a computer model was developed that permits quantitative comparison of the different models with clinical data and that additionally predicts the outcome of treatment interventions. The computer model is based on a discrete-event simulation approach. On the basis of a case of an untreated patient with hepatocellular carcinoma and its multiple metastases in the liver, it was evaluated whether metastases are able to metastasise and, in particular, whether late disseminated tumour cells are still capable of forming metastases. Additionally, the resection of the primary tumour was simulated. The simulation results were compared with clinical data. The results reveal that the number of metastases varies significantly between scenarios where metastases metastasise and scenarios where they do not. In contrast, the total tumour mass is nearly unaffected by the two different modes of metastasis formation. Furthermore, the results provide evidence that metastasis formation is an early event and that late disseminated tumour cells are still capable of forming metastases. The simulations also allow estimating how much the resection of the primary tumour delays the patient's death. The simulation results indicate that for this particular case of hepatocellular carcinoma, late metastases, i.e., metastases from metastases, are irrelevant in terms of total tumour mass. Hence metastases seeded from metastases are clinically irrelevant in our model system. Only the first metastases seeded from the primary tumour contribute significantly to the tumour burden and thus cause the patient's death.
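A toy, time-stepped version of this comparison can be written in a few lines: tumours grow along a Gompertz curve and seed new metastases at a size-proportional rate, with a flag toggling whether metastases themselves seed. All parameter values below are illustrative and not those of the published model:

    import math
    import numpy as np

    # Toy time-stepped version of the discrete-event comparison: tumours grow
    # along a Gompertz curve and seed new metastases at a size-proportional
    # rate; METS_METASTASISE toggles whether metastases themselves seed.
    rng = np.random.default_rng(1)
    METS_METASTASISE = True
    GROWTH, CAPACITY, SEED_COEFF = 0.006, 7e10, 3e-11   # 1/day, cells, 1/(cell*day)

    def gompertz(age_days):        # cells in a tumour that started from one cell
        return CAPACITY ** (1.0 - math.exp(-GROWTH * age_days))

    T_END, DT = 1000.0, 1.0
    birthdays = [0.0]              # start times; index 0 is the primary tumour
    t = 0.0
    while t < T_END:
        seeders = birthdays if METS_METASTASISE else birthdays[:1]
        for t0 in list(seeders):   # snapshot: tumours seeded now start next step
            new = rng.poisson(SEED_COEFF * gompertz(t - t0) * DT)
            birthdays.extend([t] * new)
        t += DT

    mass = sum(gompertz(T_END - t0) for t0 in birthdays)
    print(f"metastases: {len(birthdays) - 1}, total tumour cells: {mass:.2e}")

Flipping METS_METASTASISE changes the metastasis count far more than the total mass, which is the qualitative behaviour the abstract reports.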
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ouyang, L; Yan, H; Jia, X
2014-06-01
Purpose: A moving blocker based strategy has shown promising results for scatter correction in cone-beam computed tomography (CBCT). Different parameters of the system design affect its performance in scatter estimation and image reconstruction accuracy. The goal of this work is to optimize the geometric design of the moving blocker system. Methods: In the moving blocker system, a blocker consisting of lead strips is inserted between the x-ray source and the imaged object and moves back and forth along the rotation axis during CBCT acquisition. A CT image of an anthropomorphic pelvic phantom was used in the simulation study. Scatter signal was simulated by Monte Carlo calculation with various combinations of the lead strip width and the gap between neighboring lead strips, ranging from 4 mm to 80 mm (projected at the detector plane). Scatter signal in the unblocked region was estimated by cubic B-spline interpolation from the blocked region. Scatter estimation accuracy was quantified as relative root mean squared error by comparing the interpolated scatter to the Monte Carlo simulated scatter. CBCT was reconstructed by total variation minimization from the unblocked region under various combinations of the lead strip width and gap. Reconstruction accuracy in each condition was quantified by the CT number error compared to a CBCT reconstructed from unblocked full projection data. Results: Scatter estimation error varied from 0.5% to 2.6% as the lead strip width and the gap varied from 4 mm to 80 mm. CT number error in the reconstructed CBCT images varied from 12 to 44. The highest reconstruction accuracy is achieved when the blocker lead strip width is 8 mm and the gap is 48 mm. Conclusions: Accurate scatter estimation can be achieved over a large range of combinations of lead strip width and gap. However, image reconstruction accuracy is greatly affected by the geometric design of the blocker.
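The interpolation step is conceptually simple. A one-dimensional illustration, with a made-up "true" scatter profile and a strip layout echoing the optimal geometry (centers every 56 mm, i.e. 8 mm strips separated by 48 mm gaps); none of these values come from the abstract's Monte Carlo data:

    import numpy as np
    from scipy.interpolate import CubicSpline

    # 1-D illustration of the scatter-estimation step: scatter is "measured" only
    # behind the blocker strips and interpolated across the unblocked region.
    u = np.linspace(0.0, 400.0, 801)                 # detector coordinate (mm)
    true_scatter = 100.0 + 40.0 * np.exp(-((u - 200.0) / 120.0) ** 2)

    strip_centers = np.arange(20.0, 400.0, 56.0)     # hypothetical strip positions
    measured = np.interp(strip_centers, u, true_scatter)

    est = CubicSpline(strip_centers, measured)(u)    # spline estimate everywhere
    mask = (u >= strip_centers[0]) & (u <= strip_centers[-1])   # skip extrapolation
    rmse = np.sqrt(np.mean((est[mask] - true_scatter[mask]) ** 2))
    print(f"relative RMSE of interpolated scatter: {100 * rmse / true_scatter[mask].mean():.2f}%")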
Computing the Thermodynamic State of a Cryogenic Fluid
NASA Technical Reports Server (NTRS)
Willen, G. Scott; Hanna, Gregory J.; Anderson, Kevin R.
2005-01-01
The Cryogenic Tank Analysis Program (CTAP) predicts the time-varying thermodynamic state of a cryogenic fluid in a tank or a Dewar flask. CTAP is designed to be compatible with EASY5x, which is a commercial software package that can be used to simulate a variety of processes and equipment systems. The mathematical model implemented in CTAP is a first-order differential equation for the pressure as a function of time.
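As a minimal sketch of integrating such a first-order pressure ODE; the right-hand side below (a constant heat leak times an assumed dP/dU) is a placeholder, not CTAP's actual model:

    from scipy.integrate import solve_ivp

    # Generic sketch of integrating dP/dt = f(P, t) for a closed tank.
    Q_LEAK = 5.0        # heat leak into the tank, W (illustrative)
    DP_DU = 2.0e-3      # assumed (dP/dU) at constant volume, Pa/J (illustrative)

    def dPdt(t, P):
        return [DP_DU * Q_LEAK]          # self-pressurization from the heat leak

    sol = solve_ivp(dPdt, (0.0, 86400.0), y0=[101325.0], max_step=600.0)
    print(f"tank pressure after 24 h: {sol.y[0, -1] / 1000.0:.1f} kPa")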
Computational modeling and analysis for left ventricle motion using CT/Echo image fusion
NASA Astrophysics Data System (ADS)
Kim, Ji-Yeon; Kang, Nahyup; Lee, Hyoung-Euk; Kim, James D. K.
2014-03-01
In order to diagnose heart diseases such as myocardial infarction, 2D strain from speckle tracking echocardiography (STE) or tagged MRI is often used. However, out-of-plane strain measurement using STE or tagged MRI is inaccurate. Therefore, whole-organ strain computed by simulating a 3D cardiac model can be applied in clinical diagnosis. To simulate cardiac contraction over a cycle, the cardiac physical properties should be reflected in the model. The myocardial wall of the left ventricle is represented as a transversely orthotropic hyperelastic material, with the fiber orientation varying sequentially from the epicardial surface, through about 0° at the midwall, to the endocardial surface. A time-varying elastance model is used to contract the myocardial fibers, and physiological intraventricular systolic pressure curves are employed for the cardiac dynamics simulation over a cycle. An exact description of the cardiac motion must also be acquired so that the essential boundary conditions for the cardiac simulation can be obtained effectively. Real-time cardiac motion can be acquired using echocardiography, and an exact 3D model of the cardiac geometry can be reconstructed from 3D CT data. In this research, image fusion of CT and echocardiography is employed to account for patient-specific left ventricle movement. Finally, longitudinal strain from speckle tracking echocardiography, which is known to fit actual left ventricle deformation relatively well, is used to verify the results.
[Application of ordinary Kriging method in entomologic ecology].
Zhang, Runjie; Zhou, Qiang; Chen, Cuixian; Wang, Shousong
2003-01-01
Geostatistics is a statistical method based on regionalized variables that uses the variogram as a tool to analyze the spatial structure and patterns of organisms. When simulating the variogram over a large range, an optimal fit cannot be obtained directly, but an interactive human-computer procedure can be used to optimize the parameters of the spherical models. In this paper, this procedure and weighted polynomial regression were used to fit a one-step spherical model, a two-step spherical model, and a linear function model, and the available nearby samples were used in the ordinary kriging procedure, which provides the best linear unbiased estimate under the unbiasedness constraint. The sums of squared deviations between estimated and measured values for the various theoretical models were computed, and the corresponding graphs are shown. The results showed that the two-step spherical model gave the best fit, and the one-step spherical model was better than the linear function model.
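A compact ordinary kriging sketch in the spirit of the procedure above, using a one-step spherical variogram; the sample locations, values, nugget, sill, and range are all illustrative:

    import numpy as np

    def spherical(h, nugget=0.1, sill=1.0, a=10.0):
        """One-step spherical variogram gamma(h): gamma(0) = 0, sill reached at range a."""
        g = nugget + (sill - nugget) * (1.5 * h / a - 0.5 * (h / a) ** 3)
        return np.where(h < a, g, sill) * (h > 0)

    pts = np.array([[1.0, 1.0], [3.0, 4.0], [6.0, 2.0], [8.0, 7.0]])
    vals = np.array([2.1, 3.4, 1.8, 4.0])
    target = np.array([5.0, 5.0])

    # Ordinary kriging system: variogram among samples, bordered by the
    # unbiasedness constraint (weights sum to one, via a Lagrange multiplier).
    n = len(pts)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = spherical(np.linalg.norm(pts[:, None] - pts[None, :], axis=-1))
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = spherical(np.linalg.norm(pts - target, axis=-1))

    w = np.linalg.solve(A, b)                # kriging weights plus multiplier
    print(f"BLUE estimate at {target}: {w[:n] @ vals:.3f}")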
Kreck, Cara A; Mancera, Ricardo L
2014-02-20
Molecular dynamics simulations allow detailed study of the experimentally inaccessible liquid state of supercooled water below its homogeneous nucleation temperature and the characterization of the glass transition. Simple, nonpolarizable intermolecular potentials are commonly used in classical molecular dynamics simulations of water and aqueous systems due to their lower computational cost and their ability to reproduce a wide range of properties. Because the quality of these predictions varies between the potentials, the predicted glass transition of water is likely to be influenced by the choice of potential. We have thus conducted an extensive comparative investigation of various three-, four-, five-, and six-point water potentials in both the NPT and NVT ensembles. The glass transition temperature Tg predicted from NPT simulations is strongly correlated with the temperature of minimum density, whereas the maximum in the heat capacity plot corresponds to the minimum in the thermal expansion coefficient. In the NVT ensemble, these points are instead related to the maximum in the internal pressure and the minimum of its derivative, respectively. A detailed analysis of the hydrogen-bonding properties at the glass transition reveals that the number of hydrogen bonds lost upon melting of the glassy state is related to the height of the heat capacity peak and varies between water potentials.
A Very High Order, Adaptable MESA Implementation for Aeroacoustic Computations
NASA Technical Reports Server (NTRS)
Dyson, Roger W.; Goodrich, John W.
2000-01-01
Since computational efficiency and wave resolution scale with accuracy, the ideal would be infinitely high accuracy for problems with widely varying wavelength scales. Currently, many computational aeroacoustics methods are limited to 4th order accurate Runge-Kutta methods in time, which limits their resolution and efficiency. However, a new procedure for implementing the Modified Expansion Solution Approximation (MESA) schemes, based upon Hermitian divided differences, is presented which extends the effective accuracy of the MESA schemes to 57th order in space and time when using 128-bit floating point precision. This new approach has the advantages of reducing round-off error, being easy to program, and being more computationally efficient than previous approaches. Its accuracy is limited only by the floating point hardware. The advantages of this new approach are demonstrated by solving the linearized Euler equations in an open bi-periodic domain. A 500th order MESA scheme can now be created in seconds, making these schemes ideally suited for the next generation of high performance 256-bit (double quadruple) or higher precision computers. This ease of creation makes it possible to adapt the algorithm to the mesh in time instead of its converse, which is ideal for resolving the varying wavelength scales that occur in noise generation simulations. Finally, the sources of round-off error that affect the very high order methods are examined, and remedies are provided that effectively increase the accuracy of the MESA schemes while using current computer technology.
Garcia, A G; Godoy, W A C
2017-06-01
Studies of the influence of biological parameters on the spatial distribution of lepidopteran insects can provide useful information for managing agricultural pests, since the larvae of many species cause serious impacts on crops. Computational models to simulate the spatial dynamics of insect populations are increasingly used, because of their efficiency in representing insect movement. In this study, we used a cellular automata model to explore different patterns of population distribution of Spodoptera frugiperda (J.E. Smith) (Lepidoptera: Noctuidae), when the values of two biological parameters that are able to influence the spatial pattern (larval viability and adult longevity) are varied. We mapped the spatial patterns observed as the parameters varied. Additionally, by using population data for S. frugiperda obtained in different hosts under laboratory conditions, we were able to describe the expected spatial patterns occurring in corn, cotton, millet, and soybean crops based on the parameters varied. The results are discussed from the perspective of insect ecology and pest management. We concluded that computational approaches can be important tools to study the relationship between the biological parameters and spatial distributions of lepidopteran insect pests.
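A toy cellular-automaton sketch of this kind of model: each cell holds an insect density, and each step applies reproduction scaled by larval viability, adult survival standing in for longevity, and nearest-neighbour dispersal. The parameter values are invented for illustration, not taken from the paper:

    import numpy as np

    # Toy cellular automaton on a periodic grid; all values are illustrative.
    SIZE, STEPS = 60, 50
    VIABILITY, ADULT_SURVIVAL, FECUNDITY, DISPERSAL = 0.6, 0.7, 3.0, 0.25

    grid = np.zeros((SIZE, SIZE))
    grid[SIZE // 2, SIZE // 2] = 10.0          # initial infestation focus

    for _ in range(STEPS):
        grid = grid * ADULT_SURVIVAL + grid * FECUNDITY * VIABILITY   # growth
        grid = np.minimum(grid, 100.0)                                # crude carrying capacity
        moving = DISPERSAL * grid              # fraction dispersing each step
        grid = grid - moving + 0.25 * (np.roll(moving, 1, 0) + np.roll(moving, -1, 0)
                                       + np.roll(moving, 1, 1) + np.roll(moving, -1, 1))

    print(f"occupied cells (> 1 insect): {(grid > 1.0).mean():.2%}")

Varying VIABILITY and ADULT_SURVIVAL in such a model changes how far and how densely the population spreads, which is the kind of parameter sweep the study describes.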
Multi-level Hierarchical Poly Tree computer architectures
NASA Technical Reports Server (NTRS)
Padovan, Joe; Gute, Doug
1990-01-01
Based on the concept of hierarchical substructuring, this paper develops an optimal multi-level Hierarchical Poly Tree (HPT) parallel computer architecture scheme which is applicable to the solution of finite element and difference simulations. Emphasis is given to minimizing computational effort, in-core/out-of-core memory requirements, and the data transfer between processors. In addition, a simplified communications network that reduces the number of I/O channels between processors is presented. HPT configurations that yield optimal superlinearities are also demonstrated. Moreover, to generalize the scope of applicability, special attention is given to developing: (1) multi-level reduction trees which provide an orderly/optimal procedure by which model densification/simplification can be achieved, as well as (2) methodologies enabling processor grading that yields architectures with varying types of multi-level granularity.
Diffusion across the modified polyethylene separator GX in the heat-sterilizable AgO-Zn battery
NASA Technical Reports Server (NTRS)
Lutwack, R.
1973-01-01
Models of diffusion across an inert membrane have been studied using the computer program CINDA. The models were constructed to simulate various conditions encountered when considering the diffusion of Ag(OH)2 ions in the AgO-Zn battery. The effects on concentrations across the membrane at the steady state and on the flux out as a function of time were used to examine the consequences of stepwise reducing the number of sources of ions, of stepwise blocking the source and sink surfaces, of varying the magnitude of the diffusion coefficient for a uniform membrane, of varying the diffusion coefficient across the membrane, and of excluding volumes to diffusion.
The use of digital games and simulators in veterinary education: an overview with examples.
de Bie, M H; Lipman, L J A
2012-01-01
In view of current technological possibilities and the popularity of games, interest in games for educational purposes is clearly on the rise. This article outlines the (future) use of (digital) games and simulators in several disciplines, especially in the veterinary curriculum. The different types of game-based learning (GBL), varying from simple interactive computer board games to more complex virtual simulation strategies, will be discussed, as well as the benefits, possibilities, and limitations of the educational use of games. The real breakthrough seems to be a few years away. Technological developments in the future might diminish the limitations and stumbling blocks that currently exist. Consequently, educational games will play a new and increasingly important role in the future veterinary curriculum, providing an attractive and useful way of learning.
Experimental and numerical modeling of heat transfer in directed thermoplates
Khalil, Imane; Hayes, Ryan; Pratt, Quinn; ...
2018-03-20
We present three-dimensional numerical simulations to quantify the design specifications of a directional thermoplate expanded channel heat exchanger, also called a dimpleplate. Parametric thermofluidic simulations were performed independently varying the number of spot welds, the diameter of the spot welds, and the thickness of the fluid channel within the laminar flow regime. Results from computational fluid dynamics simulations show that an improvement in heat transfer is achieved under a variety of conditions: when the thermoplate has a relatively large cross-sectional area normal to the flow, a ratio of spot weld spacing to channel length of 0.2, and a ratio of spot weld diameter to channel width of 0.3. Lastly, experimental results performed to validate the model are also presented.
Time-varying changes in the simulated structure of the Brewer-Dobson Circulation
NASA Astrophysics Data System (ADS)
Garfinkel, Chaim I.; Aquila, Valentina; Waugh, Darryn W.; Oman, Luke D.
2017-01-01
A series of simulations using the NASA Goddard Earth Observing System Chemistry Climate Model are analyzed in order to assess changes in the Brewer-Dobson Circulation (BDC) over the past 55 years. When trends are computed over the past 55 years, the BDC accelerates throughout the stratosphere, consistent with previous modeling results. However, over the second half of the simulations (i.e., since the late 1980s), the model simulates structural changes in the BDC as the temporal evolution of the BDC varies between regions in the stratosphere. In the mid-stratosphere in the midlatitude Northern Hemisphere, the BDC does not accelerate in the ensemble mean of our simulations despite increases in greenhouse gas concentrations and warming sea surface temperatures, and it even decelerates in one ensemble member. This deceleration is reminiscent of changes inferred from satellite instruments and in situ measurements. In contrast, the BDC in the lower stratosphere continues to accelerate. The main forcing agents for the recent slowdown in the mid-stratosphere appear to be declining ozone-depleting substance (ODS) concentrations and the timing of volcanic eruptions. Changes in both mean age of air and the tropical upwelling of the residual circulation indicate a lack of recent acceleration. We therefore clarify that the statement that is often made that climate models simulate a decreasing age throughout the stratosphere only applies over long time periods and is not necessarily the case for the past 25 years, when most tracer measurements were taken.
Need for speed: An optimized gridding approach for spatially explicit disease simulations.
Sellman, Stefan; Tsao, Kimberly; Tildesley, Michael J; Brommesson, Peter; Webb, Colleen T; Wennergren, Uno; Keeling, Matt J; Lindström, Tom
2018-04-01
Numerical models for simulating outbreaks of infectious diseases are powerful tools for informing surveillance and control strategy decisions. However, large-scale spatially explicit models can be limited by the amount of computational resources they require, which poses a problem when multiple scenarios need to be explored to provide policy recommendations. We introduce an easily implemented method that can reduce computation time in a standard Susceptible-Exposed-Infectious-Removed (SEIR) model without introducing any further approximations or truncations. It is based on a hierarchical infection process that operates on entire groups of spatially related nodes (cells in a grid) in order to efficiently filter out large volumes of susceptible nodes that would otherwise have required expensive calculations. After the filtering of the cells, only a subset of the nodes that were originally at risk are then evaluated for actual infection. The increase in efficiency is sensitive to the exact configuration of the grid, and we describe a simple method to find an estimate of the optimal configuration of a given landscape as well as a method to partition the landscape into a grid configuration. To investigate its efficiency, we compare the introduced methods to other algorithms and evaluate computation time, focusing on simulated outbreaks of foot-and-mouth disease (FMD) on the farm population of the USA, the UK and Sweden, as well as on three randomly generated populations with varying degrees of clustering. The introduced method provided up to 500 times faster calculations than pairwise computation, and consistently performed as well or better than other available methods. This enables large-scale, spatially explicit simulations, such as for the entire continental USA, without sacrificing realism or predictive power.
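The cell-level filter can be sketched with a thinning argument: per grid cell, draw the number of candidate infections under an overestimated probability computed from the shortest source-to-cell distance, then keep each candidate with the exact distance-based probability. The kernel and all values below are illustrative, and the scheme is a simplified stand-in for the paper's conditional-evaluation algorithm:

    import numpy as np

    rng = np.random.default_rng(3)
    N, CELL, NCELL = 20000, 10.0, 10
    xy = rng.uniform(0.0, CELL * NCELL, (N, 2))      # node coordinates (km)

    def kernel(d):                                   # infection probability vs distance
        return 0.05 / (1.0 + (d / 2.0) ** 3)

    cell_of = (xy // CELL).astype(int)
    source, infected = xy[0], []
    for cx in range(NCELL):
        for cy in range(NCELL):
            members = np.flatnonzero((cell_of[:, 0] == cx) & (cell_of[:, 1] == cy))
            members = members[members != 0]          # exclude the source itself
            corner = np.array([cx, cy]) * CELL
            nearest = np.clip(source, corner, corner + CELL)  # closest point in cell
            p_max = kernel(np.linalg.norm(nearest - source))  # cell-wide overestimate
            k = rng.binomial(members.size, p_max)    # candidates drawn in one shot
            if k == 0:
                continue                             # whole cell filtered cheaply
            for i in rng.choice(members, size=k, replace=False):
                if rng.random() < kernel(np.linalg.norm(xy[i] - source)) / p_max:
                    infected.append(i)               # thinning keeps the exact probability
    print(f"node 0 infects {len(infected)} of {N - 1} susceptible nodes")

Because the kernel decreases with distance, kernel(d) never exceeds the cell's p_max, so the thinning step reproduces the exact per-node infection probability while most cells are rejected in a single draw.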
Bott, Oliver Johannes; Dresing, Klaus; Wagner, Markus; Raab, Björn-Werner; Teistler, Michael
2011-01-01
Mobile image intensifier systems (C-arms) are used frequently in orthopedic and reconstructive surgery, especially in trauma and emergency settings, but image quality and radiation exposure levels may vary widely, depending on the extent of the C-arm operator's knowledge and experience. Current training programs consist mainly of theoretical instruction in C-arm operation, the physical foundations of radiography, and radiation avoidance, and are largely lacking in hands-on application. A computer-based simulation program such as that tested by the authors may be one way to improve the effectiveness of C-arm training. In computer simulations of various scenarios commonly encountered in the operating room, trainees using the virtX program interact with three-dimensional models to test their knowledge base and improve their skill levels. Radiographs showing the simulated patient anatomy and surgical implants are "reconstructed" from data computed on the basis of the trainee's positioning of models of a C-arm, patient, and table, and are displayed in real time on the desktop monitor. Trainee performance is signaled in real time by color graphics in several control panels and, on completion of the exercise, is compared in detail with the performance of an expert operator. Testing of this computer-based training program in continuing medical education courses for operating room personnel showed an improvement in the overall understanding of underlying principles of intraoperative radiography performed with a C-arm, with resultant higher image quality, lower overall radiation exposure, and greater time efficiency.
Boundary Conditions for Jet Flow Computations
NASA Technical Reports Server (NTRS)
Hayder, M. E.; Turkel, E.
1994-01-01
Ongoing activities are focused on capturing the sound source in a supersonic jet through careful large eddy simulation (LES). One issue that is addressed is the effect of the boundary conditions, both inflow and outflow, on the predicted flow fluctuations, which represent the sound source. In this study, we examine the accuracy of several boundary conditions to determine their suitability for computations of time-dependent flows. Various boundary conditions are used to compute the flow field of a laminar axisymmetric jet excited at the inflow by a disturbance given by the corresponding eigenfunction of the linearized stability equations. We solve the full time dependent Navier-Stokes equations by a high order numerical scheme. For very small excitations, the computed growth of the modes closely corresponds to that predicted by the linear theory. We then vary the excitation level to see the effect of the boundary conditions in the nonlinear flow regime.
Neuromorphic Kalman filter implementation in IBM’s TrueNorth
NASA Astrophysics Data System (ADS)
Carney, R.; Bouchard, K.; Calafiura, P.; Clark, D.; Donofrio, D.; Garcia-Sciveres, M.; Livezey, J.
2017-10-01
Following the advent of a post-Moore’s law field of computation, novel architectures continue to emerge. With composite, multi-million connection neuromorphic chips like IBM’s TrueNorth, neural engineering has now become a feasible technology in this novel computing paradigm. High Energy Physics experiments are continuously exploring new methods of computation and data handling, including neuromorphic, to support the growing challenges of the field and be prepared for future commodity computing trends. This work details the first instance of a Kalman filter implementation in IBM’s neuromorphic architecture, TrueNorth, for both parallel and serial spike trains. The implementation is tested on multiple simulated systems and its performance is evaluated with respect to an equivalent non-spiking Kalman filter. The limits of the implementation are explored whilst varying the size of weight and threshold registers, the number of spikes used to encode a state, size of neuron block for spatial encoding, and neuron potential reset schemes.
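For reference, the non-spiking baseline such an implementation is evaluated against is the standard predict/update recursion. A scalar-measurement sketch with illustrative model matrices and noise levels (not the paper's test systems):

    import numpy as np

    # Classic Kalman filter tracking a constant-velocity state from noisy
    # position measurements. All matrices and noise levels are illustrative.
    rng = np.random.default_rng(7)
    dt, Q, R = 0.1, 1e-3, 0.25
    F = np.array([[1.0, dt], [0.0, 1.0]])    # state transition (position, velocity)
    H = np.array([[1.0, 0.0]])               # measure position only

    x_true = np.array([0.0, 1.0])
    x, P = np.zeros(2), np.eye(2)
    for _ in range(100):
        x_true = F @ x_true
        z = H @ x_true + rng.normal(0.0, np.sqrt(R))
        x = F @ x                            # predict state
        P = F @ P @ F.T + Q * np.eye(2)      # predict covariance
        K = P @ H.T / (H @ P @ H.T + R)      # Kalman gain
        x = x + (K * (z - H @ x)).ravel()    # update state with the innovation
        P = (np.eye(2) - K @ H) @ P          # update covariance
    print(f"true position {x_true[0]:.2f}, filter estimate {x[0]:.2f}")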
NASA Technical Reports Server (NTRS)
Hirt, Stefanie M.; Reich, David B.; O'Connor, Michael B.
2010-01-01
Computational fluid dynamics was used to study the effectiveness of micro-ramp vortex generators to control oblique shock boundary layer interactions. Simulations were based on experiments previously conducted in the 15 x 15 cm supersonic wind tunnel at NASA Glenn Research Center. Four micro-ramp geometries were tested at Mach 2.0 varying the height, chord length, and spanwise spacing between micro-ramps. The overall flow field was examined. Additionally, key parameters such as boundary-layer displacement thickness, momentum thickness and incompressible shape factor were also examined. The computational results predicted the effects of the micro-ramps well, including the trends for the impact that the devices had on the shock boundary layer interaction. However, computing the shock boundary layer interaction itself proved to be problematic since the calculations predicted more pronounced adverse effects on the boundary layer due to the shock than were seen in the experiment.
NASA Technical Reports Server (NTRS)
Hirt, Stephanie M.; Reich, David B.; O'Connor, Michael B.
2012-01-01
Computational fluid dynamics was used to study the effectiveness of micro-ramp vortex generators to control oblique shock boundary layer interactions. Simulations were based on experiments previously conducted in the 15- by 15-cm supersonic wind tunnel at the NASA Glenn Research Center. Four micro-ramp geometries were tested at Mach 2.0 varying the height, chord length, and spanwise spacing between micro-ramps. The overall flow field was examined. Additionally, key parameters such as boundary-layer displacement thickness, momentum thickness and incompressible shape factor were also examined. The computational results predicted the effects of the microramps well, including the trends for the impact that the devices had on the shock boundary layer interaction. However, computing the shock boundary layer interaction itself proved to be problematic since the calculations predicted more pronounced adverse effects on the boundary layer due to the shock than were seen in the experiment.
NASA Astrophysics Data System (ADS)
Roth, Steven; Oakes, Jessica; Shadden, Shawn
2015-11-01
Particle deposition in the human lungs can occur with every breath. Airborne particles range from toxic constituents (e.g., tobacco smoke and air pollution) to aerosolized particles designed for drug treatment (e.g., insulin to treat diabetes). The effect of various realistic airway geometries on complex flow structures, and thus particle deposition sites, has yet to be extensively investigated using computational fluid dynamics (CFD). In this work, we created an image-based geometric airway model of the human lung and performed CFD simulations by employing multi-domain methods. Following the flow simulations, Lagrangian particle tracking was used to study the effect of cross-sectional shape on deposition sites in the conducting airways. From a single human lung model, the cross-sectional ellipticity (the ratio of major and minor diameters) of the left and right main bronchi was varied systematically from 2:1 to 1:1. The influence of the airway ellipticity on the surrounding flow field and particle deposition was determined.
Cosmic reionization on computers: The faint end of the galaxy luminosity function
Gnedin, Nickolay Y.
2016-07-01
Using numerical cosmological simulations completed under the "Cosmic Reionization On Computers" project, I explore theoretical predictions for the faint end of the galaxy UV luminosity functions at $z \gtrsim 6$. A commonly used Schechter function approximation with the magnitude cut at $M_{\rm cut} \sim -13$ provides a reasonable fit to the actual luminosity function of simulated galaxies. When the Schechter functional form is forced on the luminosity functions from the simulations, the magnitude cut $M_{\rm cut}$ is found to vary between -12 and -14 with a mild redshift dependence. Here, an analytical model of reionization from Madau et al., as used by Robertson et al., provides a good description of the simulated results, which can be improved even further by adding two physically motivated modifications to the original Madau et al. equation.
Failure Analysis of a Sheet Metal Blanking Process Based on Damage Coupling Model
NASA Astrophysics Data System (ADS)
Wen, Y.; Chen, Z. H.; Zang, Y.
2013-11-01
In this paper, a sheet metal blanking process is studied by numerical simulation and experimental observation. The effects of varying technological parameters on product quality are investigated. An elastoplastic constitutive equation accounting for isotropic ductile damage is implemented into the finite element code ABAQUS with a user-defined material subroutine (UMAT). Simulations of the damage evolution and ductile fracture in a sheet metal blanking process have been carried out by the FEM. In order to guarantee computational accuracy and avoid numerical divergence during large plastic deformation, a specified remeshing technique is applied successively when severe element distortion occurs. In the simulation, the evolution of damage at different stages of the blanking process has been evaluated, and the damage distributions obtained from the simulation are in reasonable agreement with the experimental results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Odlyzko, Michael L.; Mkhoyan, K. Andre; Himmetoglu, Burak
2016-07-15
Annular dark field scanning transmission electron microscopy (ADF-STEM) image simulations were performed for zone-axis-oriented light-element single crystals, using a multislice method adapted to include charge redistribution due to chemical bonding. Examination of these image simulations alongside calculations of the propagation of the focused electron probe reveals that the evolution of the probe intensity with thickness exhibits significant sensitivity to interatomic charge transfer, accounting for the observed thickness-dependent bonding sensitivity of contrast in all ADF-STEM imaging conditions. Because changes in image contrast relative to conventional neutral-atom simulations scale directly with the net interatomic charge transfer, the strongest effects are seen in crystals with highly polar bonding, while no effects are seen for nonpolar bonding. Although the bonding dependence of ADF-STEM image contrast varies with detector geometry, imaging parameters, and material temperature, these simulations predict the bonding effects to be experimentally measurable.
The YAV-8B simulation and modeling. Volume 2: Program listing
NASA Technical Reports Server (NTRS)
1983-01-01
Detailed mathematical models of varying complexity representative of the YAV-8B aircraft are defined and documented. These models are used in parameter estimation and in linear analysis computer programs while investigating YAV-8B aircraft handling qualities. Both a six degree of freedom nonlinear model and linearized three degree of freedom longitudinal and lateral-directional models were developed. The nonlinear model is based on the mathematical model used on the MCAIR YAV-8B manned flight simulator. This simulator model has undergone periodic updating based on the results of approximately 360 YAV-8B flights and 8000 hours of wind tunnel testing. Qualified YAV-8B flight test pilots have commented that the handling qualities characteristics of the simulator are quite representative of the real aircraft. These comments are validated herein by comparing data from both static and dynamic flight test maneuvers to corresponding data obtained using the nonlinear program.
Deblurring for spatial and temporal varying motion with optical computing
NASA Astrophysics Data System (ADS)
Xiao, Xiao; Xue, Dongfeng; Hui, Zhao
2016-05-01
A way to estimate and remove spatially and temporally varying motion blur is proposed, based on an optical computing system. The translation and rotation motion can be independently estimated from the joint transform correlator (JTC) system without iterative optimization. The inspiration comes from the fact that the JTC system is immune to rotation motion in a Cartesian coordinate system. The work scheme of the JTC system is designed to keep switching between the Cartesian coordinate system and the polar coordinate system in different time intervals with a ping-pong handover. In the ping interval, the JTC system works in the Cartesian coordinate system to obtain a translation motion vector at optical computing speed. In the pong interval, the JTC system works in the polar coordinate system, where rotation motion is transformed to translation motion through the coordinate transformation; the rotation motion vector can then also be obtained from the JTC instantaneously. To deal with continuous spatially variant motion blur, sub-motion vectors based on the projective motion path blur model are proposed. The sub-motion vector model is more effective and accurate at modeling spatially variant motion blur than conventional methods. The simulation and real experiment results demonstrate its overall effectiveness.
Performance of computer vision in vivo flow cytometry with low fluorescence contrast
NASA Astrophysics Data System (ADS)
Markovic, Stacey; Li, Siyuan; Niedre, Mark
2015-03-01
Detection and enumeration of circulating cells in the bloodstream of small animals are important in many areas of preclinical biomedical research, including cancer metastasis, immunology, and reproductive medicine. Optical in vivo flow cytometry (IVFC) represents a class of technologies that allow noninvasive and continuous enumeration of circulating cells without drawing blood samples. We recently developed a technique termed computer vision in vivo flow cytometry (CV-IVFC) that uses a high-sensitivity fluorescence camera and an automated computer vision algorithm to interrogate relatively large circulating blood volumes in the ear of a mouse. We detected circulating cells at concentrations as low as 20 cells/mL. In the present work, we characterized the performance of CV-IVFC under low-contrast imaging conditions with (1) weak cell fluorescent labeling, using cell-simulating fluorescent microspheres of varying brightness, and (2) high background tissue autofluorescence, by varying the autofluorescence properties of optical phantoms. Our analysis indicates that CV-IVFC can robustly track and enumerate circulating cells with at least 50% sensitivity even with contrast degraded by two orders of magnitude relative to our previous in vivo work. These results support the significant potential utility of CV-IVFC in a wide range of in vivo biological models.
NASA Technical Reports Server (NTRS)
Chaussee, Denny S.
1993-01-01
The steady 3D viscous flow past the ONERA M6 wing and a slender delta wing-body with trailing-edge control surfaces has been computed. A cell-centered finite-volume Navier-Stokes patched zonal method has been used for the numerical simulation. Both diagonalized and LU-SGS schemes have been implemented. Besides the standard nonplanar zonal interfacing techniques, a new virtual zone capability has been employed. For code validation, the transonic flow past the ONERA M6 wing is calculated for angles-of-attack of 3.06 deg and 5.06 deg and compared with the available experiments. The wing-body computational results are compared with experimental data with both trailing-edge flaps deflected. The experimental flow conditions are M∞ = 0.4, a turbulent Reynolds number of 5.41 million based on a mean aerodynamic chord of 25.959 inches, an adiabatic wall, and angles-of-attack varying from 0 deg to 23.85 deg. The computational results are presented for the 23.85 deg angle-of-attack case. The effects of the base flow due to a model sting, of varying the second- and fourth-order numerical dissipation, and of the turbulence model are all considered.
Ahmed, Muneeb; Liu, Zhengjun; Humphries, Stanley; Goldberg, S Nahum
2008-11-01
To use an established computer simulation model of radiofrequency (RF) ablation to characterize the combined effects of varying perfusion, and electrical and thermal conductivity on RF heating. Two-compartment computer simulation of RF heating using 2-D and 3-D finite element analysis (ETherm) was performed in three phases (n = 88 matrices, 144 data points each). In each phase, RF application was systematically modeled on a clinically relevant template of application parameters (i.e., varying tumor and surrounding tissue perfusion: 0-5 kg/m³·s) for internally cooled 3 cm single and 2.5 cm cluster electrodes, for tumor diameters ranging from 2-5 cm and RF application times of 6-20 min. In the first phase, outer thermal conductivity was changed to reflect three common clinical scenarios: soft tissue, fat, and ascites (0.5, 0.23, and 0.7 W/m·°C, respectively). In the second phase, electrical conductivity was changed to reflect different tumor electrical conductivities (0.5 and 4.0 S/m, representing soft tissue and adjuvant saline injection, respectively) and background electrical conductivities representing soft tissue, lung, and kidney (0.5, 0.1, and 3.3 S/m, respectively). In the third phase, the best and worst combinations of electrical and thermal conductivity characteristics were modeled in combination. Tissue heating patterns and the time required to heat the entire tumor ± a 5 mm margin to >50 °C were assessed. Increasing background tissue thermal conductivity increases the time required to achieve a 50 °C isotherm for all tumor sizes and electrode types, but enabled ablation of a given tumor size at higher tissue perfusions. An inner thermal conductivity equivalent to soft tissue (0.5 W/m·°C) surrounded by fat (0.23 W/m·°C) permitted the greatest degree of tumor heating in the shortest time, while soft tissue surrounded by ascites (0.7 W/m·°C) took longer to achieve the 50 °C isotherm, and complete ablation could not be achieved at higher inner/outer perfusions (>4 kg/m³·s). For varied electrical conductivities in the setting of varied perfusion, the greatest RF heating occurred for inner electrical conductivities simulating injection of saline around the electrode with an outer electrical conductivity of soft tissue, and the least heating occurred when simulating renal cell carcinoma in normal kidney. Characterization of these scenarios demonstrated the role of electrical and thermal conductivity interactions, with the greatest differences in effect seen in the 3-4 cm tumor range, as almost all 2 cm tumors and almost no 5 cm tumors could be treated. Optimal combinations of thermal and electrical conductivity can partially negate the effect of perfusion. For clinically relevant tumor sizes, thermal and electrical conductivity affect which tumors can be successfully ablated, even in the setting of almost nonexistent perfusion.
Qiao, Shan; Shen, Guofeng; Bai, Jingfeng; Chen, Yazhu
2013-08-01
In high-intensity focused ultrasound treatment of liver tumors, ultrasound propagation is affected by the rib cage. Because of diffraction and absorption by the bone, the sound distribution at the focal plane is altered and, more importantly, overheating of the rib surface might occur. To overcome these problems, a geometric correction method is applied that turns off the elements blocked by the ribs. The potential of steering the focus of the phased array along the propagation direction to improve transcostal treatment was investigated by simulations and experiments using different rib models and transducers. The ultrasound propagation through the ribs was computed by a hybrid method including the Rayleigh-Sommerfeld integral, the k-space method, and the angular spectrum method. A modified correction method was proposed to adjust the output of elements based on their relative area in the projected "shadow" of the ribs. The simulation results showed that an increase of up to 300% in the specific absorption rate gain was obtained by varying the focal length, although the optimal value varied in each situation. Therefore, acoustic simulation is required for each clinical case to determine a satisfactory treatment plan.
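The modified correction amounts to scaling each element by its unblocked aperture fraction rather than switching blocked elements fully off. A schematic comparison, with blockage fractions randomly invented for illustration:

    import numpy as np

    # Two corrections for rib blockage of a phased array: binary on/off versus
    # scaling each element's output by its open-area fraction. The blockage
    # fractions below are synthetic, not derived from any rib geometry.
    rng = np.random.default_rng(5)
    n_elem = 256
    blocked_frac = np.clip(rng.normal(0.3, 0.25, n_elem), 0.0, 1.0)

    binary_on = (blocked_frac < 0.5).astype(float)   # drop mostly-blocked elements
    area_weighted = 1.0 - blocked_frac               # scale by unblocked aperture

    for name, w in (("binary on/off", binary_on), ("area-weighted", area_weighted)):
        print(f"{name}: mean active aperture = {w.mean():.2f} of full array")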
NASA Technical Reports Server (NTRS)
Elmiligui, Alaa A.; Abdol-Hamid, Khaled S.; Massey, Steven J.
2009-01-01
In this chapter, numerical simulations of the flow around the F-16XL are performed as a contribution to the Cranked Arrow Wing Aerodynamic Project International (CAWAPI) using the PAB3D CFD code. Two turbulence models are used in the calculations: a standard k-epsilon model, and the Shih-Zhu-Lumley (SZL) algebraic stress model. Seven flight conditions are simulated for the flow around the F-16XL, where the free stream Mach number varies from 0.242 to 0.97 and the angle of attack varies from 0 deg to 20 deg. Computational results for surface static pressure, boundary layer velocity profiles, and skin friction are presented and compared with flight data. Numerical results are generally in good agreement with flight data, considering that only one grid resolution is utilized for the different flight conditions simulated in this study. The Algebraic Stress Model (ASM) results are closer to the flight data than the k-epsilon model results. The ASM predicted a stronger primary vortex; however, the origin and footprint of the vortex are approximately the same as in the k-epsilon predictions.
A minimal model of self-sustaining turbulence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomas, Vaughan L.; Gayme, Dennice F.; Farrell, Brian F.
2015-10-15
In this work, we examine the turbulence maintained in a Restricted Nonlinear (RNL) model of plane Couette flow. This model is a computationally efficient approximation of the second order statistical state dynamics obtained by partitioning the flow into a streamwise averaged mean flow and perturbations about that mean, a closure referred to herein as the RNL∞ model. The RNL model investigated here employs a single member of the infinite ensemble that comprises the covariance of the RNL∞ dynamics. The RNL system has previously been shown to support self-sustaining turbulence with a mean flow and structural features that are consistent with direct numerical simulations (DNS). Regardless of the number of streamwise Fourier components used in the simulation, the RNL system’s self-sustaining turbulent state is supported by a small number of streamwise varying modes. Remarkably, further truncation of the RNL system’s support to as few as one streamwise varying mode can suffice to sustain the turbulent state. The close correspondence between RNL simulations and DNS that has been previously demonstrated, along with the results presented here, suggests that the fundamental mechanisms underlying wall turbulence can be analyzed using these highly simplified RNL systems.
A Novel Mean-Value Model of the Cardiovascular System Including a Left Ventricular Assist Device.
Ochsner, Gregor; Amacher, Raffael; Schmid Daners, Marianne
2017-06-01
Time-varying elastance models (TVEMs) are often used for simulation studies of the cardiovascular system with a left ventricular assist device (LVAD). Because these models are computationally expensive, they cannot be used for long-term simulation studies. In addition, their equilibria are periodic solutions, which prevent the extraction of a linear time-invariant model that could be used e.g. for the design of a physiological controller. In the current paper, we present a new type of model to overcome these problems: the mean-value model (MVM). The MVM captures the behavior of the cardiovascular system by representative mean values that do not change within the cardiac cycle. For this purpose, each time-varying element is manually converted to its mean-value counterpart. We compare the derived MVM to a similar TVEM in two simulation experiments. In both cases, the MVM is able to fully capture the inter-cycle dynamics of the TVEM. We hope that the new MVM will become a useful tool for researchers working on physiological control algorithms. This paper provides a plant model that enables for the first time the use of tools from classical control theory in the field of physiological LVAD control.
NASA Technical Reports Server (NTRS)
Beacom, John Francis; Dominik, Kurt G.; Melott, Adrian L.; Perkins, Sam P.; Shandarin, Sergei F.
1991-01-01
Results are presented from a series of gravitational clustering simulations in two dimensions. These simulations are a significant departure from previous work, since in two dimensions one can have large dynamic range in both length scale and mass using present computer technology. Controlled experiments were conducted by varying the slope of power-law initial density fluctuation spectra and varying cutoffs at large k, while holding constant the phases of individual Fourier components and the scale of nonlinearity. Filaments are found in many different simulations, even with pure power-law initial conditions. By direct comparison, filaments, called 'second-generation pancakes', are shown to arise as a consequence of mild nonlinearity on scales much larger than the correlation length; they are not relics of an initial lattice or artifacts of sparse sampling of the Fourier components. Bumps of low amplitude in the two-point correlation are found to be generic but are usually only statistical fluctuations. Power spectra are much easier to relate to initial conditions and seem to follow a simple triangular shape (on a log-log plot) in the nonlinear regime. The rms density fluctuation with Gaussian smoothing is the most stable indicator of nonlinearity.
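The controlled-experiment setup described, varying the spectral slope and large-k cutoff while holding Fourier phases fixed, can be sketched as follows. The normalization and the shortcut of taking the real part (rather than enforcing Hermitian symmetry) are simplifications, not the authors' code.

```python
import numpy as np

def power_law_field(n, slope, k_cut=None, seed=42):
    """2-D Gaussian random field with P(k) ~ k**slope; a fixed seed
    fixes the Fourier phases so different slopes/cutoffs share phases."""
    k = 2 * np.pi * np.fft.fftfreq(n)
    KX, KY = np.meshgrid(k, k)
    kk = np.hypot(KX, KY)
    kk[0, 0] = 1.0                          # avoid division by zero at k=0
    amp = kk ** (slope / 2.0)               # sqrt of the power spectrum
    if k_cut is not None:
        amp[kk > k_cut] = 0.0               # cutoff at large k
    rng = np.random.default_rng(seed)       # same seed -> same phases
    phases = np.exp(2j * np.pi * rng.random((n, n)))
    delta = np.fft.ifft2(amp * phases).real
    return delta - delta.mean()

d1 = power_law_field(256, slope=-1.0)             # pure power law
d2 = power_law_field(256, slope=-1.0, k_cut=1.0)  # with cutoff, same phases
```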
Design, Construction and Testing of an In-Pile Loop for PWR (Pressurized Water Reactor) Simulation.
1987-06-01
computer modeling remains at best semiempirical (C-i), this large variation in scaling factor makes extrapolation of data impossible. The DIDO Water...in a full scale PWR are not practical. The reactor plant is not controlled to tolerances necessary for research, and utilities are reluctant to vary...MIT Reactor Safeguards Committee, in revision 1 to the PCCL Safety Evaluation Report (SER), for final approval to begin in-pile testing and
An Isopycnal Box Model with predictive deep-ocean structure for biogeochemical cycling applications
NASA Astrophysics Data System (ADS)
Goodwin, Philip
2012-07-01
To simulate global ocean biogeochemical tracer budgets a model must accurately determine both the volume and surface origins of each water-mass. Water-mass volumes are dynamically linked to the ocean circulation in General Circulation Models, but at the cost of high computational load. In computationally efficient Box Models the water-mass volumes are simply prescribed and do not vary when the circulation transport rates or water mass densities are perturbed. A new computationally efficient Isopycnal Box Model is presented in which the sub-surface box volumes are internally calculated from the prescribed circulation using a diffusive conceptual model of the thermocline, in which upwelling of cold dense water is balanced by a downward diffusion of heat. The volumes of the sub-surface boxes are set so that the density stratification satisfies an assumed link between diapycnal diffusivity, κd, and buoyancy frequency, N: κd = c/N^α, where c and α are user prescribed parameters. In contrast to conventional Box Models, the volumes of the sub-surface ocean boxes in the Isopycnal Box Model are dynamically linked to circulation, and automatically respond to circulation perturbations. This dynamical link allows an important facet of ocean biogeochemical cycling to be simulated in a highly computationally efficient model framework.
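A brief sketch of the stated closure, assuming it reads κd = c/N^α with N the buoyancy frequency derived from a prescribed density profile; the profile and parameter values below are invented for illustration.

```python
import numpy as np

# Diffusive thermocline closure: kappa_d = c / N**alpha, with N the
# buoyancy frequency computed from a prescribed density profile.
g, rho0 = 9.81, 1027.0          # gravity (m s^-2), reference density (kg m^-3)
c, alpha = 1e-7, 1.0            # user-prescribed closure parameters

z = np.linspace(0.0, -4000.0, 401)                 # depth (m), surface at 0
rho = rho0 + 0.5 * (1.0 - np.exp(z / 1000.0))      # idealized stratification

N2 = -(g / rho0) * np.gradient(rho, z)             # buoyancy frequency squared
N = np.sqrt(np.clip(N2, 1e-12, None))
kappa_d = c / N**alpha                             # diapycnal diffusivity
```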
Sources of spurious force oscillations from an immersed boundary method for moving-body problems
NASA Astrophysics Data System (ADS)
Lee, Jongho; Kim, Jungwoo; Choi, Haecheon; Yang, Kyung-Soo
2011-04-01
When a discrete-forcing immersed boundary method is applied to moving-body problems, it produces spurious force oscillations on a solid body. In the present study, we identify two sources of these force oscillations. One source is the spatial discontinuity in the pressure across the immersed boundary when a grid point located inside the solid body becomes a fluid point as the body moves. The addition of a mass source/sink together with the momentum forcing proposed by Kim et al. [J. Kim, D. Kim, H. Choi, An immersed-boundary finite volume method for simulations of flow in complex geometries, Journal of Computational Physics 171 (2001) 132-150] reduces the spurious force oscillations by alleviating this pressure discontinuity. The other source is the temporal discontinuity in the velocity at grid points where fluid becomes solid as the body moves. The magnitude of this velocity discontinuity decreases with decreasing grid spacing near the immersed boundary. Four moving-body problems are simulated by varying the grid spacing at a fixed computational time step and at a constant CFL number, respectively. It is found that the spurious force oscillations decrease with decreasing grid spacing and increasing computational time step size, but they depend more on the grid spacing than on the time step size.
A unified dislocation density-dependent physical-based constitutive model for cold metal forming
NASA Astrophysics Data System (ADS)
Schacht, K.; Motaman, A. H.; Prahl, U.; Bleck, W.
2017-10-01
Dislocation-density-dependent, physically based constitutive models of metal plasticity, while computationally efficient and history-dependent, can accurately account for varying process parameters such as strain, strain rate, and temperature; different loading modes such as continuous deformation, creep, and relaxation; microscopic metallurgical processes; and varying chemical composition within an alloy family. Since these models are founded on the essential phenomena dominating the deformation, they have a larger range of usability and validity. They are also suitable for manufacturing-chain simulations, since they can efficiently compute the cumulative effect of the various manufacturing processes by following the material state through the entire manufacturing chain, including interpass periods, and give a realistic prediction of the material behavior and final product properties. In the physical-based constitutive model of cold metal plasticity introduced in this study, the physical processes influencing cold and warm plastic deformation in polycrystalline metals are described using physical/metallurgical internal variables such as dislocation density and effective grain size. The evolution of these internal variables is calculated using equations that describe the physical processes dominating the material behavior during cold plastic deformation. For validation, the model is numerically implemented in a general implicit isotropic elasto-viscoplasticity algorithm as a user-defined material subroutine (UMAT) in ABAQUS/Standard and used for finite element simulation of upsetting tests and a complete cold forging cycle of a case-hardenable MnCr steel family.
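The abstract does not reproduce the evolution equations; as a generic illustration of the dislocation-density model family it belongs to, here is a Kocks-Mecking-type sketch (storage minus dynamic recovery) combined with a Taylor-law flow stress. All coefficients are illustrative, not the paper's calibration.

```python
import numpy as np

def kocks_mecking_flow(eps_max, d_eps=1e-4, rho0=1e12,
                       k1=1e8, k2=10.0, alpha=0.3,
                       G=80e9, b=2.5e-10, M=3.0):
    """Integrate d(rho)/d(eps) = k1*sqrt(rho) - k2*rho (storage minus
    dynamic recovery) and return the Taylor-law flow stress curve."""
    n = int(eps_max / d_eps)
    rho = rho0
    eps, sigma = np.zeros(n), np.zeros(n)
    for i in range(n):
        rho += (k1 * np.sqrt(rho) - k2 * rho) * d_eps
        eps[i] = (i + 1) * d_eps
        sigma[i] = M * alpha * G * b * np.sqrt(rho)   # Taylor equation
    return eps, sigma

strain, stress = kocks_mecking_flow(eps_max=0.5)
# stress saturates as rho -> (k1/k2)**2, here at roughly 180 MPa
```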
Sharifi, Tayebeh; Ghayeb, Yousef
2018-05-01
Peroxisome proliferator-activated receptors (PPARs) compose a family of nuclear receptors, PPARα, PPARβ, and PPARγ, which mediate the effects of lipidic ligands at the transcriptional level. Among these, PPARγ is known to regulate adipocyte differentiation, fatty acid storage, and glucose metabolism, and is a target of antidiabetic drugs. In this work, the interactions between PPARγ and its six known antagonists were investigated using computational methods such as molecular docking, molecular dynamics (MD) simulations, and hybrid quantum mechanics/molecular mechanics (QM/MM). The binding energies evaluated by molecular docking varied between −22.59 and −35.15 kJ mol⁻¹. In addition, MD simulations were performed to investigate the binding modes and PPARγ conformational changes upon binding of antagonists. Analysis of the root-mean-square fluctuations (RMSF) of backbone atoms shows that H3 of PPARγ has a higher mobility in the absence of antagonists, and moderate conformational changes were observed. The interaction energies between antagonists and each PPARγ residue involved in the interactions were studied by QM/MM calculations. These calculations reveal that antagonists with different structures show different interaction energies with the same residue of PPARγ. Therefore, it can be concluded that the key residues vary depending on the structure of the ligand, which binds to PPARγ.
Dynamic partitioning for hybrid simulation of the bistable HIV-1 transactivation network.
Griffith, Mark; Courtney, Tod; Peccoud, Jean; Sanders, William H
2006-11-15
The stochastic kinetics of a well-mixed chemical system, governed by the chemical Master equation, can be simulated using the exact methods of Gillespie. However, these methods do not scale well as systems become more complex and larger models are built to include reactions with widely varying rates, since the computational burden of simulation increases with the number of reaction events. Continuous models may provide an approximate solution and are computationally less costly, but they fail to capture the stochastic behavior of small populations of macromolecules. In this article we present a hybrid simulation algorithm that dynamically partitions the system into subsets of continuous and discrete reactions, approximates the continuous reactions deterministically as a system of ordinary differential equations (ODE) and uses a Monte Carlo method for generating discrete reaction events according to a time-dependent propensity. Our approach to partitioning is improved such that we dynamically partition the system of reactions, based on a threshold relative to the distribution of propensities in the discrete subset. We have implemented the hybrid algorithm in an extensible framework, utilizing two rigorous ODE solvers to approximate the continuous reactions, and use an example model to illustrate the accuracy and potential speedup of the algorithm when compared with exact stochastic simulation. Software and benchmark models used for this publication can be made available upon request from the authors.
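A schematic of the dynamic partitioning step in Python: reactions whose propensities greatly exceed the bulk of the discrete subset are promoted to the continuous (ODE) subset. The threshold rule below, relative to the median, is a stand-in for the authors' distribution-based criterion, not their code.

```python
import numpy as np

def partition(propensities, rel_threshold=100.0):
    """Split reaction indices into 'continuous' (fast) and 'discrete'
    (slow) subsets, using a threshold relative to the median of the
    propensities currently in the discrete subset."""
    a = np.asarray(propensities, dtype=float)
    discrete = set(range(len(a)))
    cutoff = rel_threshold * np.median(a[list(discrete)])
    continuous = {j for j in discrete if a[j] > cutoff}
    return sorted(continuous), sorted(discrete - continuous)

fast, slow = partition([1e6, 2e5, 3.0, 0.4, 7.0])
# fast reactions -> deterministic ODE system; slow -> Gillespie-style
# event generation with time-dependent propensities
```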
NASA Astrophysics Data System (ADS)
Kacprzak, T.; Herbel, J.; Amara, A.; Réfrégier, A.
2018-02-01
Approximate Bayesian Computation (ABC) is a method to obtain a posterior distribution without a likelihood function, using simulations and a set of distance metrics. For that reason, it has recently been gaining popularity as an analysis tool in cosmology and astrophysics. Its drawback, however, is a slow convergence rate. We propose a novel method, which we call qABC, to accelerate ABC with Quantile Regression. In this method, we create a model of quantiles of the distance measure as a function of input parameters. This model is trained on a small number of simulations and estimates which regions of the prior space are likely to be accepted into the posterior. Other regions are then immediately rejected. This procedure is repeated as more simulations become available. We apply it to the practical problem of estimating the redshift distribution of cosmological samples, using forward modelling developed in previous work. The qABC method converges to nearly the same posterior as basic ABC while using only 20% of the number of simulations, achieving a fivefold gain in execution time for our problem. For other problems the acceleration rate may vary; it depends on how close the prior is to the final posterior. We discuss possible improvements and extensions to this method.
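A toy sketch of the qABC idea using scikit-learn's quantile-loss gradient boosting: learn a low quantile of the distance as a function of the parameters, then discard proposals whose predicted quantile is too large. The forward model and cut values are invented, and the paper's forward-modelling pipeline is far more involved.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)

def simulate_distance(theta):
    """Toy forward model: distance between simulated and 'observed' data."""
    return abs(np.sin(theta) + 0.1 * rng.standard_normal() - 0.5)

# Stage 1: a small number of simulations over the prior
theta_train = rng.uniform(0.0, np.pi, 200)
d_train = np.array([simulate_distance(t) for t in theta_train])

# Model the 10th percentile of distance as a function of the parameter
q10 = GradientBoostingRegressor(loss="quantile", alpha=0.1)
q10.fit(theta_train.reshape(-1, 1), d_train)

# Stage 2: reject prior regions whose predicted quantile is too large
theta_prop = rng.uniform(0.0, np.pi, 10_000)
keep = q10.predict(theta_prop.reshape(-1, 1)) < np.quantile(d_train, 0.2)
theta_survivors = theta_prop[keep]  # only these get expensive simulations
```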
Modelling of Dispersed Gas-Liquid Flow using LBGK and LPT Approach
NASA Astrophysics Data System (ADS)
Agarwal, Alankar; Prakash, Akshay; Ravindra, B.
2017-11-01
The dynamics of gas bubbles play a significant, if not crucial, role in a large variety of industrial processes that involve reactors. Many of these processes are still not well understood in terms of optimal scale-up strategies. Accurate modeling of bubbles and bubble swarms therefore becomes important for high-fidelity bioreactor simulations. This study is part of the development of robust bubble-fluid interaction modules for the simulation of industrial-scale reactors. The work presents the simulation of a single bubble rising in a quiescent water tank using current models from the literature for bubble-fluid interaction. In this multiphase benchmark problem, the continuous phase (water) is discretized using the Lattice Bhatnagar-Gross-Krook (LBGK) model of the Lattice Boltzmann Method (LBM), while the dispersed gas phase (an air bubble) is modeled with the Lagrangian particle tracking (LPT) approach. A computationally cheap clipped fourth-order polynomial function is used to model the interaction between the two phases. The model is validated by comparing the simulated terminal velocity of a bubble at varying bubble diameters, and the influence of bubble motion on the liquid velocity, with theoretical and previously available experimental data. This work was supported by the Centre for Development of Advanced Computing (C-DAC), Pune, which provided the advanced computational facility PARAM Yuva-II.
NASA Astrophysics Data System (ADS)
Ramotar, Lokendra; Rohrauer, Greg L.; Filion, Ryan; MacDonald, Kathryn
2017-03-01
A dynamic thermal battery model for hybrid and electric vehicles is developed. A thermal equivalent circuit model is created which aims to capture and understand the heat propagation from the cells through the entire pack and to the environment, using a production vehicle battery pack for model validation. The inclusion of production hardware and the liquid battery thermal management system components in the model considers physical and geometric properties to calculate the thermal resistances of components (conduction, convection, and radiation) along with their associated heat capacities. Various heat sources/sinks comprise the remaining model elements. Analog equivalent circuit simulations using PSpice are compared to experimental results to validate internal temperature nodes and heat rates measured through various elements, which are then employed to refine the model further. Agreement with experimental results indicates the proposed method allows for comprehensive real-time battery pack analysis at little computational expense compared to other types of computer-based simulations. Elevated road and ambient conditions in Mesa, Arizona are simulated on a parked vehicle with varying quiescent cooling rates to examine the effect on the diurnal battery temperature for longer-term static exposure. A typical daily driving schedule is also simulated and examined.
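The thermal-equivalent-circuit idea maps heat paths to resistances and heat capacities. Below is a one-node lumped sketch in Python rather than PSpice, with invented values; the paper's network has many more nodes and radiation/convection branches.

```python
import numpy as np

# One-node lumped thermal model: C * dT/dt = Q - (T - T_amb) / R
R = 0.5        # K/W, cell-to-ambient thermal resistance
C = 4000.0     # J/K, lumped heat capacity of the module
T_amb = 45.0   # deg C, hot ambient (e.g., a parked vehicle)
Q = 30.0       # W, internal heat generation while driving

dt, t_end = 1.0, 6 * 3600.0
t = np.arange(0.0, t_end, dt)
T = np.empty_like(t)
T[0] = 45.0
for i in range(1, len(t)):
    T[i] = T[i-1] + dt / C * (Q - (T[i-1] - T_amb) / R)
# steady state approaches T_amb + Q*R = 60 deg C
```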
Numerical Simulation of a High-Lift Configuration Embedded with High Momentum Fluidic Actuators
NASA Technical Reports Server (NTRS)
Vatsa, Veer N.; Duda, Benjamin; Fares, Ehab; Lin, John C.
2016-01-01
Numerical simulations have been performed for a vertical tail configuration with deflected rudder. The suction surface of the main element of this configuration, just upstream of the hinge line, is embedded with an array of 32 fluidic actuators that produce oscillating sweeping jets. Such oscillating jets have been found to be very effective for flow control applications in the past. In the current paper, a high-fidelity computational fluid dynamics (CFD) code known as PowerFLOW is used to simulate the entire flow field associated with this configuration, including the flow inside the actuators. A fully compressible version of the PowerFLOW code, valid for high-speed flows, is used for the present simulations to accurately represent the transonic flow regimes encountered in the flow field due to the actuators operating at the higher mass flow (momentum) rates required to mitigate reverse-flow regions on a highly deflected rudder surface. The computed results for the surface pressure and integrated forces compare favorably with measured data. In addition, numerical solutions predict the correct trends in forces with active flow control compared to the no-control case. The effect of varying the rudder deflection angle on integrated forces and surface pressures is also presented.
GPU-powered model analysis with PySB/cupSODA.
Harris, Leonard A; Nobile, Marco S; Pino, James C; Lubbock, Alexander L R; Besozzi, Daniela; Mauri, Giancarlo; Cazzaniga, Paolo; Lopez, Carlos F
2017-11-01
A major barrier to the practical utilization of large, complex models of biochemical systems is the lack of open-source computational tools to evaluate model behaviors over high-dimensional parameter spaces. This is due to the high computational expense of performing the thousands to millions of model simulations required for statistical analysis. To address this need, we have implemented a user-friendly interface between cupSODA, a GPU-powered kinetic simulator, and PySB, a Python-based modeling and simulation framework. For three example models of varying size, we show that for large numbers of simulations PySB/cupSODA achieves order-of-magnitude speedups relative to a CPU-based ordinary differential equation integrator. The PySB/cupSODA interface has been integrated into the PySB modeling framework (version 1.4.0), which can be installed from the Python Package Index (PyPI) using a Python package manager such as pip. cupSODA source code and precompiled binaries (Linux, Mac OS/X, Windows) are available at github.com/aresio/cupSODA (requires an Nvidia GPU; developer.nvidia.com/cuda-gpus). Additional information about PySB is available at pysb.org. Contact: paolo.cazzaniga@unibg.it or c.lopez@vanderbilt.edu. Supplementary data are available at Bioinformatics online.
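A sketch of the described many-simulation workflow, assuming the CupSodaSimulator class exposed by recent PySB releases and its run(param_values=...) signature; an NVIDIA GPU and the cupSODA binaries are required, and the Robertson example model ships with PySB.

```python
import numpy as np
# Assumes PySB >= 1.4.0; CupSodaSimulator additionally needs an NVIDIA
# GPU and the cupSODA binaries (see github.com/aresio/cupSODA).
from pysb.examples.robertson import model
from pysb.simulator import CupSodaSimulator

tspan = np.linspace(0, 50, 501)

# Thousands of parameter sets, e.g., for a sensitivity analysis
n_sims = 10_000
base = np.array([p.value for p in model.parameters])
param_values = base * np.random.lognormal(0.0, 0.3, (n_sims, len(base)))

sim = CupSodaSimulator(model, tspan=tspan)
results = sim.run(param_values=param_values)  # one GPU launch, many ODEs
trajectories = results.observables            # one trajectory per parameter set
```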
Adaptive Mesh Refinement for Microelectronic Device Design
NASA Technical Reports Server (NTRS)
Cwik, Tom; Lou, John; Norton, Charles
1999-01-01
Finite element and finite volume methods are used in a variety of design simulations when it is necessary to compute fields throughout regions that contain varying materials or geometry. Convergence of the simulation can be assessed by uniformly increasing the mesh density until an observable quantity stabilizes. Depending on the electrical size of the problem, uniform refinement of the mesh may be computationally infeasible due to memory limitations. Similarly, depending on the geometric complexity of the object being modeled, uniform refinement can be inefficient, since regions that do not need refinement add to the computational expense. In either case, convergence to the correct (measured) solution is not guaranteed. Adaptive mesh refinement methods attempt to selectively refine the region of the mesh that is estimated to contain proportionally higher solution errors. The refinement may be obtained by decreasing the element size (h-refinement), by increasing the order of the element (p-refinement), or by a combination of the two (h-p refinement). A successful adaptive strategy refines the mesh to produce an accurate solution, measured against the correct fields, without undue computational expense. This is accomplished by the use of a) reliable a posteriori error estimates, b) hierarchical elements, and c) automatic adaptive mesh generation. Adaptive methods are also useful when problems with multi-scale field variations are encountered. These occur in active electronic devices that have thin doped layers and also when mixed physics is used in the calculation. The mesh needs to be fine at and near the thin layer to capture rapid field or charge variations, but can coarsen away from these layers where field variations smoothen and charge densities are uniform. This poster will present an adaptive mesh refinement package that runs on parallel computers and is applied to specific microelectronic device simulations. Passive sensors that operate in the infrared portion of the spectrum, as well as active device simulations that model charge transport and Maxwell's equations, will be presented.
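The h-refinement loop described above can be caricatured in one dimension: estimate a per-element error, bisect the worst elements, repeat. The error indicator and tolerance below are illustrative, not the package's estimator.

```python
import numpy as np

def refine(nodes, f, tol=1e-3, max_passes=20):
    """1-D h-refinement: bisect elements whose midpoint interpolation
    error indicator exceeds tol."""
    x = np.asarray(nodes, dtype=float)
    for _ in range(max_passes):
        mid = 0.5 * (x[:-1] + x[1:])
        # indicator: | f(mid) - linear interpolant evaluated at mid |
        err = np.abs(f(mid) - 0.5 * (f(x[:-1]) + f(x[1:])))
        bad = err > tol
        if not bad.any():
            break
        x = np.sort(np.concatenate([x, mid[bad]]))
    return x

# Sharp layer near x = 0.5, loosely analogous to a thin doped layer
f = lambda x: np.tanh((x - 0.5) / 0.01)
mesh = refine(np.linspace(0.0, 1.0, 11), f)
# the mesh ends up dense near the layer and coarse elsewhere
```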
Terascale direct numerical simulations of turbulent combustion using S3D
NASA Astrophysics Data System (ADS)
Chen, J. H.; Choudhary, A.; de Supinski, B.; DeVries, M.; Hawkes, E. R.; Klasky, S.; Liao, W. K.; Ma, K. L.; Mellor-Crummey, J.; Podhorszki, N.; Sankaran, R.; Shende, S.; Yoo, C. S.
2009-01-01
Computational science is paramount to the understanding of underlying processes in internal combustion engines of the future that will utilize non-petroleum-based alternative fuels, including carbon-neutral biofuels, and burn in new combustion regimes that will attain high efficiency while minimizing emissions of particulates and nitrogen oxides. Next-generation engines will likely operate at higher pressures, with greater amounts of dilution, and utilize alternative fuels that exhibit a wide range of chemical and physical properties. Therefore, there is a significant role for high-fidelity simulations, direct numerical simulations (DNS) specifically designed to capture key turbulence-chemistry interactions in these relatively uncharted combustion regimes, and in particular, to discriminate the effects of differences in fuel properties. In DNS, all of the relevant turbulence and flame scales are resolved numerically using high-order accurate numerical algorithms. As a consequence, terascale DNS are computationally intensive, require massive amounts of computing power, and generate tens of terabytes of data. Recent results from terascale DNS of turbulent flames are presented here, illustrating its role in elucidating flame stabilization mechanisms in a lifted turbulent hydrogen/air jet flame in a hot air coflow, and the flame structure of a fuel-lean turbulent premixed jet flame. Computing at this scale requires close collaborations between computer and combustion scientists to provide optimized scalable algorithms and software for terascale simulations, efficient collective parallel I/O, tools for volume visualization of multiscale, multivariate data, and automation of the combustion workflow. The enabling computer science, applied to combustion science, is also required in many other terascale physics and engineering simulations. In particular, performance monitoring is used to identify the performance of key kernels in the DNS code, S3D, and especially memory-intensive loops in the code. Through the careful application of loop transformations, data reuse in cache is exploited, thereby reducing memory bandwidth needs and hence improving S3D's nodal performance. To enhance collective parallel I/O in S3D, an MPI-I/O caching design is used to construct a two-stage write-behind method for improving the performance of write-only operations. The simulations generate tens of terabytes of data requiring analysis. Interactive exploration of the simulation data is enabled by multivariate time-varying volume visualization. The visualization highlights spatial and temporal correlations between multiple reactive scalar fields using an intuitive user interface based on parallel coordinates and time histograms. Finally, an automated combustion workflow is designed using Kepler to manage large-scale data movement, data morphing, and archival, and to provide a graphical display of run-time diagnostics.
Fiore, Vincenzo G; Kottler, Benjamin; Gu, Xiaosi; Hirth, Frank
2017-01-01
The central complex in the insect brain is a composite of midline neuropils involved in processing sensory cues and mediating behavioral outputs to orchestrate spatial navigation. Despite recent advances, however, the neural mechanisms underlying sensory integration and motor action selections have remained largely elusive. In particular, it is not yet understood how the central complex exploits sensory inputs to realize motor functions associated with spatial navigation. Here we report an in silico interrogation of central complex-mediated spatial navigation with a special emphasis on the ellipsoid body. Based on known connectivity and function, we developed a computational model to test how the local connectome of the central complex can mediate sensorimotor integration to guide different forms of behavioral outputs. Our simulations show integration of multiple sensory sources can be effectively performed in the ellipsoid body. This processed information is used to trigger continuous sequences of action selections resulting in self-motion, obstacle avoidance and the navigation of simulated environments of varying complexity. The motor responses to perceived sensory stimuli can be stored in the neural structure of the central complex to simulate navigation relying on a collective of guidance cues, akin to sensory-driven innate or habitual behaviors. By comparing behaviors under different conditions of accessible sources of input information, we show the simulated insect computes visual inputs and body posture to estimate its position in space. Finally, we tested whether the local connectome of the central complex might also allow the flexibility required to recall an intentional behavioral sequence, among different courses of actions. Our simulations suggest that the central complex can encode combined representations of motor and spatial information to pursue a goal and thus successfully guide orientation behavior. Together, the observed computational features identify central complex circuitry, and especially the ellipsoid body, as a key neural correlate involved in spatial navigation.
Adapting to life: ocean biogeochemical modelling and adaptive remeshing
NASA Astrophysics Data System (ADS)
Hill, J.; Popova, E. E.; Ham, D. A.; Piggott, M. D.; Srokosz, M.
2014-05-01
An outstanding problem in biogeochemical modelling of the ocean is that many of the key processes occur intermittently at small scales, such as the sub-mesoscale, that are not well represented in global ocean models. This is partly due to their failure to resolve sub-mesoscale phenomena, which play a significant role in vertical nutrient supply. Simply increasing the resolution of the models may be an inefficient computational solution to this problem. An approach based on recent advances in adaptive mesh computational techniques may offer an alternative. Here the first steps in such an approach are described, using the example of a simple vertical column (quasi-1-D) ocean biogeochemical model. We present a novel method of simulating ocean biogeochemical behaviour on a vertically adaptive computational mesh, where the mesh changes in response to the biogeochemical and physical state of the system throughout the simulation. We show that the model reproduces the general physical and biological behaviour at three ocean stations (India, Papa and Bermuda) as compared to a high-resolution fixed mesh simulation and to observations. The use of an adaptive mesh does not increase the computational error, but reduces the number of mesh elements by a factor of 2-3. Unlike previous work, the adaptivity metric used is flexible, and we show that capturing the physical behaviour of the model is paramount to achieving a reasonable solution. Adding biological quantities to the adaptivity metric further refines the solution. We then show the potential of this method in two case studies where we change the adaptivity metric used to determine the varying mesh sizes in order to capture the dynamics of chlorophyll at Bermuda and sinking detritus at Papa. We therefore demonstrate that adaptive meshes may provide a suitable numerical technique for simulating seasonal or transient biogeochemical behaviour at high vertical resolution whilst minimising the number of elements in the mesh. More work is required to move this to fully 3-D simulations.
Xiao, Lin; Liao, Bolin; Li, Shuai; Chen, Ke
2018-02-01
In order to solve general time-varying linear matrix equations (LMEs) more efficiently, this paper proposes two nonlinear recurrent neural networks based on two nonlinear activation functions. According to Lyapunov theory, the two nonlinear recurrent neural networks are proved to converge within finite time. Besides, by solving the differential equation, the upper bounds of the finite convergence time are determined analytically. Compared with existing recurrent neural networks, the proposed two nonlinear recurrent neural networks have a better convergence property (i.e., a lower upper bound), and thus the accurate solutions of general time-varying LMEs can be obtained in less time. Finally, a variety of situations have been considered by setting different coefficient matrices of general time-varying LMEs, and a great variety of computer simulations (including an application to robot manipulators) have been conducted to validate the better finite-time convergence of the proposed two nonlinear recurrent neural networks.
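A minimal sketch of a finite-time-convergent recurrent ("Zhang-type") network for a time-varying LME A(t)X(t) = B(t). The sign-power activation below is one common choice and not necessarily either of the paper's two activation functions; the coefficient matrices are invented.

```python
import numpy as np

def phi(E, r=0.5):
    """Sign-power activation, a typical finite-time-convergent choice."""
    return np.sign(E) * np.abs(E) ** r

def znn_solve(A, B, dA, dB, t_end=2.0, dt=1e-4, gamma=10.0):
    """Track X(t) with A(t)X(t) = B(t) by integrating the implicit
    dynamics A*dX/dt = -gamma*phi(A@X - B) - dA@X + dB."""
    t = 0.0
    X = np.zeros_like(B(0.0))
    while t < t_end:
        At = A(t)
        E = At @ X - B(t)
        dX = np.linalg.solve(At, -gamma * phi(E) - dA(t) @ X + dB(t))
        X, t = X + dt * dX, t + dt
    return X

# Example time-varying coefficients (hypothetical)
A  = lambda t: np.array([[2.0 + np.sin(t), 0.1], [0.1, 2.0 + np.cos(t)]])
dA = lambda t: np.array([[np.cos(t), 0.0], [0.0, -np.sin(t)]])
B  = lambda t: np.array([[np.sin(t)], [np.cos(t)]])
dB = lambda t: np.array([[np.cos(t)], [-np.sin(t)]])

X = znn_solve(A, B, dA, dB)
print(np.linalg.norm(A(2.0) @ X - B(2.0)))  # residual stays small
```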
Zhang, Zhijun; Li, Zhijun; Zhang, Yunong; Luo, Yamei; Li, Yuanqing
2015-12-01
We propose a dual-arm cyclic-motion-generation (DACMG) scheme by a neural-dynamic method, which can remedy the joint-angle-drift phenomenon of a humanoid robot. In particular, according to a neural-dynamic design method, first, a cyclic-motion performance index is exploited and applied. This cyclic-motion performance index is then integrated into a quadratic programming (QP)-type scheme with time-varying constraints, called the time-varying-constrained DACMG (TVC-DACMG) scheme. The scheme includes the kinematic motion equations of two arms and the time-varying joint limits. The scheme can not only generate the cyclic motion of two arms for a humanoid robot but also control the arms to move to the desired position. In addition, the scheme considers the physical limit avoidance. To solve the QP problem, a recurrent neural network is presented and used to obtain the optimal solutions. Computer simulations and physical experiments demonstrate the effectiveness and the accuracy of such a TVC-DACMG scheme and the neural network solver.
Comparison of two methods to determine fan performance curves using computational fluid dynamics
NASA Astrophysics Data System (ADS)
Onma, Patinya; Chantrasmi, Tonkid
2018-01-01
This work investigates a systematic numerical approach that employs Computational Fluid Dynamics (CFD) to obtain performance curves of a backward-curved centrifugal fan. Generating the performance curves requires a number of three-dimensional simulations with varying system loads at a fixed rotational speed. Two methods were used and their results compared to experimental data. The first method incrementally changes the mass flow rate through the inlet boundary condition, while the second method utilizes a series of meshes representing the physical damper blade at various angles. The performance curves generated by both methods are compared with experimental data obtained with a setup conforming to the AMCA fan performance testing standard.
NASA Astrophysics Data System (ADS)
Mekhonoshina, E. V.; Modorskii, V. Ya.
2016-10-01
This paper describes computational experiments simulating oscillation modes in elastic rotor supports, including the influence of the gas-dynamic flow on a rotor in a magnetic suspension. The ANSYS 15.0 engineering analysis system was used as the numerical tool. The finite volume method was applied for the gas dynamics, and the finite element method for evaluating components of the stress-strain state (SSS). The simulations varied the magnetic suspension rigidity and estimated the SSS components in the coupled system "gas-dynamic flow - compressor rotor - magnetic suspension." The influence of aeroelastic effects on the impeller and rotor, and on the deformability of the magnetic suspension under vibration, was detected.
NASA Technical Reports Server (NTRS)
Vasu, George; Pack, George J
1951-01-01
Correlation has been established between transient engine and control data obtained experimentally and data obtained by simulating the engine and control with an analog computer. This correlation was established at sea-level conditions for a turbine-propeller engine with a relay-type speed control. The behavior of the controlled engine at altitudes of 20,000 and 35,000 feet was determined with an analog computer, using the altitude pressure and temperature generalization factors to calculate the new engine constants for these altitudes. Because the engine response varies considerably at altitude, some type of compensation appears desirable, and four methods of compensation are discussed.
Users manual for flight control design programs
NASA Technical Reports Server (NTRS)
Nalbandian, J. Y.
1975-01-01
Computer programs for the design of analog and digital flight control systems are documented. The program DIGADAPT uses linear-quadratic-gaussian synthesis algorithms in the design of command response controllers and state estimators, and it applies covariance propagation analysis to the selection of sampling intervals for digital systems. Program SCHED executes correlation and regression analyses for the development of gain and trim schedules to be used in open-loop explicit-adaptive control laws. A linear-time-varying simulation of aircraft motions is provided by the program TVHIS, which includes guidance and control logic, as well as models for control actuator dynamics. The programs are coded in FORTRAN and are compiled and executed on both IBM and CDC computers.
Atomistic study of mixing at high Z / low Z interfaces at Warm Dense Matter Conditions
NASA Astrophysics Data System (ADS)
Haxhimali, Tomorr; Glosli, James; Rudd, Robert; Lawrence Livermore National Laboratory Team
2016-10-01
We use atomistic simulations to study different aspects of mixing occurring at an initially sharp interface between high Z and low Z plasmas in the Warm/Hot Dense Matter regime. We consider a system of diamond (the low Z component) in contact with Ag (the high Z component), which undergoes rapid isochoric heating from room temperature up to 10 eV, rapidly changing the solids into warm dense matter at solid density. We simulate the motion of ions via a screened Coulomb potential. The electric field, the electron density, and the ionization levels are computed on the fly by solving the Poisson equation. The spatially varying screening lengths computed from the electron cloud are included in this effective interaction; the electrons are not simulated explicitly. We compute the electric field generated at the Ag-C interface as well as the dynamics of the ions during the mixing process occurring at the plasma interface. Preliminary results indicate an anomalous transport of high Z ions (Ag) into the low Z component (C), a phenomenon that is partially related to the enhanced transport of ions due to the generated electric field. These results are in agreement with recent experimental observations on an Au-diamond plasma interface. This work was performed under the auspices of the US Dept. of Energy by Lawrence Livermore National Security, LLC under Contract DE-AC52-07NA27344.
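The ion-ion interaction described is a Yukawa-type screened Coulomb potential with a spatially varying screening length. A sketch in SI units follows; the charge states and screening length are chosen only for illustration.

```python
import numpy as np

def screened_coulomb(r, Z1, Z2, lam):
    """Yukawa pair potential: V(r) = Z1*Z2*e^2/(4*pi*eps0*r) * exp(-r/lam)."""
    e = 1.602176634e-19          # elementary charge, C
    eps0 = 8.8541878128e-12      # vacuum permittivity, F/m
    return Z1 * Z2 * e**2 / (4 * np.pi * eps0 * r) * np.exp(-r / lam)

# Illustrative mean charge states for Ag and C at ~10 eV, with a local
# screening length that would, in the simulation, come from the electron cloud
r = np.linspace(0.5e-10, 10e-10, 200)        # ion separations, m
V = screened_coulomb(r, Z1=3.0, Z2=2.0, lam=1.0e-10)
V_eV = V / 1.602176634e-19                   # convert J -> eV for inspection
```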
NASA Astrophysics Data System (ADS)
Adhikari, S.; Ivins, E. R.; Larour, E. Y.
2015-12-01
Perturbations in gravitational and rotational potentials caused by climate driven mass redistribution on the earth's surface, such as ice sheet melting and terrestrial water storage, affect the spatiotemporal variability in global and regional sea level. Here we present a numerically accurate, computationally efficient, high-resolution model for sea level. Unlike contemporary models that are based on spherical-harmonic formulation, the model can operate efficiently in a flexible embedded finite-element mesh system, thus capturing the physics operating at km-scale yet capable of simulating geophysical quantities that are inherently of global scale with minimal computational cost. One obvious application is to compute evolution of sea level fingerprints and associated geodetic and astronomical observables (e.g., geoid height, gravity anomaly, solid-earth deformation, polar motion, and geocentric motion) as a companion to a numerical 3-D thermo-mechanical ice sheet simulation, thus capturing global signatures of climate driven mass redistribution. We evaluate some important time-varying signatures of GRACE inferred ice sheet mass balance and continental hydrological budget; for example, we identify dominant sources of ongoing sea-level change at the selected tide gauge stations, and explain the relative contribution of different sources to the observed polar drift. We also report our progress on ice-sheet/solid-earth/sea-level model coupling efforts toward realistic simulation of Pine Island Glacier over the past several hundred years.
Ponnath, Abhilash
2010-01-01
Sensitivity to acoustic amplitude modulation in crickets differs between species and depends on carrier frequency (e.g., calling song vs. bat-ultrasound bands). Using computational tools, we explore how Ca2+-dependent mechanisms underlying selective attention can contribute to such differences in amplitude modulation sensitivity. For omega neuron 1 (ON1), selective attention is mediated by Ca2+-dependent feedback: [Ca2+]internal increases with excitation, activating a Ca2+-dependent after-hyperpolarizing current. We propose that the Ca2+ removal rate and the size of the after-hyperpolarizing current can determine ON1's temporal modulation transfer function (TMTF). This is tested using a conductance-based simulation calibrated to responses in vivo. The model shows that parameter values that simulate responses to single pulses are sufficient in simulating responses to modulated stimuli: no special modulation-sensitive mechanisms are necessary, as the high- and low-pass portions of the TMTF are due to Ca2+-dependent spike frequency adaptation and post-synaptic potential depression, respectively. Furthermore, variance in the two biophysical parameters is sufficient to produce TMTFs of varying bandwidth, shifting amplitude modulation sensitivity like that in different species and in response to different carrier frequencies. Thus, the hypothesis that the size of the after-hyperpolarizing current and the rate of Ca2+ removal can affect amplitude modulation sensitivity is computationally validated.
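A caricature of the two knobs tested, the Ca2+ removal rate and the AHP strength, in a leaky integrate-and-fire cell; this is a simplification of the paper's conductance-based ON1 model, and all constants are invented.

```python
import numpy as np

def on1_response(stim, dt=1e-4, tau_ca=0.05, g_ahp=8.0):
    """Leaky integrate-and-fire cell with a Ca-dependent AHP current.
    tau_ca (Ca removal time constant) and g_ahp (AHP strength) are the
    two knobs proposed to shape the temporal modulation transfer function."""
    tau_m, v_th, e_k = 0.01, 1.0, -0.5
    v, ca = 0.0, 0.0
    spikes = np.zeros(len(stim), dtype=bool)
    for i, s in enumerate(stim):
        i_ahp = g_ahp * ca * (v - e_k)   # Ca-gated K+ current
        v += dt / tau_m * (-v + s - i_ahp)
        ca += -dt * ca / tau_ca          # first-order Ca removal
        if v >= v_th:                    # spike and reset
            v = 0.0
            ca += 0.1                    # Ca influx per spike
            spikes[i] = True
    return spikes

t = np.arange(0.0, 1.0, 1e-4)
am_stim = 2.0 * (1.0 + np.sin(2 * np.pi * 20 * t)) / 2.0   # 20 Hz AM stimulus
spike_train = on1_response(am_stim)
```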
Chao, Edmund Y S; Armiger, Robert S; Yoshida, Hiroaki; Lim, Jonathan; Haraguchi, Naoki
2007-03-08
The ability to combine physiology and engineering analyses with computer sciences has opened the door to the possibility of creating the "Virtual Human" reality. This paper presents a broad foundation for a full-featured biomechanical simulator for the human musculoskeletal system physiology. This simulation technology unites the expertise in biomechanical analysis and graphic modeling to investigate joint and connective tissue mechanics at the structural level and to visualize the results in both static and animated forms together with the model. Adaptable anatomical models including prosthetic implants and fracture fixation devices and a robust computational infrastructure for static, kinematic, kinetic, and stress analyses under varying boundary and loading conditions are incorporated on a common platform, the VIMS (Virtual Interactive Musculoskeletal System). Within this software system, a manageable database containing long bone dimensions, connective tissue material properties and a library of skeletal joint system functional activities and loading conditions are also available and they can easily be modified, updated and expanded. Application software is also available to allow end-users to perform biomechanical analyses interactively. Examples using these models and the computational algorithms in a virtual laboratory environment are used to demonstrate the utility of these unique database and simulation technology. This integrated system, model library and database will impact on orthopaedic education, basic research, device development and application, and clinical patient care related to musculoskeletal joint system reconstruction, trauma management, and rehabilitation.
Richmond, Paul; Buesing, Lars; Giugliano, Michele; Vasilaki, Eleni
2011-05-04
High performance computing on the Graphics Processing Unit (GPU) is an emerging field driven by the promise of high computational power at a low cost. However, GPU programming is a non-trivial task, and architectural limitations raise the question of whether investing effort in this direction is worthwhile. In this work, we use GPU programming to simulate a two-layer network of Integrate-and-Fire neurons with varying degrees of recurrent connectivity and investigate its ability to learn a simplified navigation task using a policy-gradient learning rule stemming from Reinforcement Learning. The purpose of this paper is twofold. First, we want to support the use of GPUs in the field of Computational Neuroscience. Second, using GPU computing power, we investigate the conditions under which the said architecture and learning rule demonstrate best performance. Our work indicates that networks featuring strong Mexican-Hat-shaped recurrent connections in the top layer, where decision making is governed by the formation of a stable activity bump in the neural population (a "non-democratic" mechanism), achieve mediocre learning results at best. In the absence of recurrent connections, where all neurons "vote" independently ("democratic") for a decision via population vector readout, the task is generally learned better and more robustly. Our study would have been extremely difficult on a desktop computer without the use of GPU programming. We present the routines developed for this purpose and show that a speed improvement of 5x to 42x is achieved versus optimised Python code. The higher speed is achieved when we exploit the parallelism of the GPU in the search for learning parameters. This suggests that efficient GPU programming can significantly reduce the time needed for simulating networks of spiking neurons, particularly when multiple parameter configurations are investigated.
High Speed Jet Noise Prediction Using Large Eddy Simulation
NASA Technical Reports Server (NTRS)
Lele, Sanjiva K.
2002-01-01
Current methods for predicting the noise of high speed jets are largely empirical. These empirical methods are based on jet noise data gathered by varying primarily the jet flow speed and jet temperature for a fixed nozzle geometry. Efforts have been made to correlate the noise data of co-annular (multi-stream) jets and the changes associated with forward flight within these empirical correlations. But ultimately these empirical methods fail to provide suitable guidance in the selection of new, low-noise nozzle designs. This motivates the development of a new class of prediction methods based on computational simulations, in an attempt to remove the empiricism of present-day noise predictions.
Adaptive-Grid Methods for Phase Field Models of Microstructure Development
NASA Technical Reports Server (NTRS)
Provatas, Nikolas; Goldenfeld, Nigel; Dantzig, Jonathan A.
1999-01-01
In this work the authors show how the phase field model can be solved in a computationally efficient manner that opens a new large-scale simulational window on solidification physics. Our method uses a finite element, adaptive-grid formulation, and exploits the fact that the phase and temperature fields vary significantly only near the interface. We illustrate how our method allows efficient simulation of phase-field models in very large systems, and verify the predictions of solvability theory at intermediate undercooling. We then present new results at low undercoolings that suggest that solvability theory may not give the correct tip speed in that regime. We model solidification using the phase-field model used by Karma and Rappel.
Facility requirements for cockpit traffic display research
NASA Technical Reports Server (NTRS)
Chappell, S. L.; Kreifeldt, J. G.
1982-01-01
It is pointed out that much research is being conducted regarding the use of a cockpit display of traffic information (CDTI) for safe and efficient air traffic flow. A CDTI is a graphic display which shows the pilot the position of other aircraft relative to his or her aircraft. The present investigation is concerned with the facility requirements for CDTI research. The facilities currently used for this research vary in fidelity from one CDTI-equipped simulator with computer-generated traffic, to four simulators with autopilot-like controls, all having a CDTI. Three groups of subjects were employed in the study. Each group included one controller, and three airline and four general aviation pilots.
Simulation of trading strategies in the electricity market
NASA Astrophysics Data System (ADS)
Charkiewicz, Kamil; Nowak, Robert
2011-10-01
The main objective of the energy market's existence is reduction of the total cost of production, transport, and distribution of energy, and thus of the prices paid by end consumers. The energy market comprises several sub-markets with differing operational rules; the important segments analyzed in the presented approach are the Futures Contract Market and the Next Day Market. A computer system was developed to simulate the Polish energy market. This system uses a multi-agent approach, where each agent is a separate shared library with a defined interface. The software was used to compare strategies for players in the energy market, where the strategies use auto-regression, k-nearest neighbours, a neural network, and a mixed algorithm to predict the next price.
Intra-arterial pressure measurement in neonates: dynamic response requirements.
van Genderingen, H R; Gevers, M; Hack, W W
1995-02-01
A computer simulation of a catheter manometer system was used to quantify measurement errors in neonatal blood pressure parameters. Accurate intra-arterial pressure recordings of 21 critically ill newborns were fed into this simulated system. The dynamic characteristics, natural frequency and damping coefficient, were varied from 2.5 to 60 Hz and from 0.1 to 1.4, respectively. As a result, errors in systolic, diastolic and pulse arterial pressure were obtained as a function of natural frequency and damping coefficient. Iso-error curves for 2%, 5% and 10% were constructed. Using these curves, the maximum inaccuracy of any neonatal catheter manometer system can be determined and used in the clinical setting.
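The simulated catheter-manometer system is, in essence, a second-order low-pass filter parameterized by natural frequency and damping coefficient. Below is a sketch of the error-quantification idea with scipy on a synthetic waveform; the clinical recordings and iso-error construction are not reproduced.

```python
import numpy as np
from scipy import signal

def measured_pressure(p_true, fs, fn_hz, zeta):
    """Filter a 'true' arterial waveform through a second-order
    catheter-manometer model with natural frequency fn_hz and damping zeta."""
    wn = 2 * np.pi * fn_hz
    system = signal.TransferFunction([wn**2], [1.0, 2 * zeta * wn, wn**2])
    t = np.arange(len(p_true)) / fs
    _, p_meas, _ = signal.lsim(system, p_true, t)
    return p_meas

# Synthetic neonatal-like waveform: mean ~45 mmHg, heart rate 150 bpm (2.5 Hz)
fs = 1000.0
t = np.arange(0.0, 5.0, 1.0 / fs)
p_true = 45 + 10 * np.sin(2 * np.pi * 2.5 * t) + 4 * np.sin(2 * np.pi * 5.0 * t)

p_meas = measured_pressure(p_true, fs, fn_hz=10.0, zeta=0.2)  # underdamped
err_systolic = p_meas.max() - p_true.max()  # systolic overshoot, mmHg
```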
Depth-of-interaction estimates in pixelated scintillator sensors using Monte Carlo techniques
NASA Astrophysics Data System (ADS)
Sharma, Diksha; Sze, Christina; Bhandari, Harish; Nagarkar, Vivek; Badano, Aldo
2017-01-01
Image quality in thick scintillator detectors can be improved by minimizing parallax errors through depth-of-interaction (DOI) estimation. A novel sensor for low-energy single-photon imaging having a thick, transparent, crystalline pixelated micro-columnar CsI:Tl scintillator structure has been described, with possible future application in small-animal single photon emission computed tomography (SPECT) imaging when using thicker structures under development. In order to understand the fundamental limits of this new structure, we introduce cartesianDETECT2, an open-source optical transport package that uses Monte Carlo methods to obtain estimates of DOI for improving the spatial resolution of nuclear imaging applications. Optical photon paths are calculated as a function of varying simulation parameters such as columnar surface roughness, bulk absorption, and top-surface absorption. We use scanning electron microscope images to estimate appropriate surface roughness coefficients. Simulation results are analyzed to model and establish patterns between DOI and photon scattering. The effect of varying the starting locations of optical photons on the spatial response is studied. Bulk and top-surface absorption fractions were varied to investigate their effect on the spatial response as a function of DOI. We investigated the accuracy of our DOI estimation model for a particular screen with various training and testing sets; in all cases the percent error between the estimated and actual DOI was within ±5% over the majority of the detector thickness, with a maximum error of up to ±10% at deeper DOIs. In addition, we found that cartesianDETECT2 is computationally five times more efficient than MANTIS. Findings indicate that DOI estimates can be extracted from a double-Gaussian model of the detector response. We observed that our model predicts DOI in pixelated scintillator detectors reasonably well.
Simultaneously optimizing dose and schedule of a new cytotoxic agent.
Braun, Thomas M; Thall, Peter F; Nguyen, Hoang; de Lima, Marcos
2007-01-01
Traditionally, phase I clinical trial designs are based upon one predefined course of treatment while varying among patients the dose given at each administration. In actual medical practice, patients receive a schedule comprised of several courses of treatment, and some patients may receive one or more dose reductions or delays during treatment. Consequently, the overall risk of toxicity for each patient is a function of both the actual schedule of treatment and the differing doses used at each administration. Our goal is to provide a practical phase I clinical trial design that more accurately reflects actual medical practice by accounting for both dose per administration and schedule. We propose an outcome-adaptive Bayesian design that simultaneously optimizes both dose and schedule in terms of the overall risk of toxicity, based on time-to-toxicity outcomes. We use computer simulation as a tool to calibrate design parameters. We describe a phase I trial in allogeneic bone marrow transplantation that was designed and is currently being conducted using our new method. Our computer simulations demonstrate that our method outperforms any method that searches for an optimal dose but does not allow the schedule to vary, both in terms of the probability of identifying optimal (dose, schedule) combinations and the numbers of patients assigned to those combinations in the trial. Our design requires greater sample sizes than those seen in traditional phase I studies due to the larger number of treatment combinations examined. Our design also assumes that the effects of multiple administrations are independent of each other and that the hazard of toxicity is the same for all administrations. Our design is the first for phase I clinical trials that is sufficiently flexible and practical to truly reflect clinical practice by varying both dose and the timing and number of administrations given to each patient.
Large-eddy simulation, fuel rod vibration and grid-to-rod fretting in pressurized water reactors
Christon, Mark A.; Lu, Roger; Bakosi, Jozsef; ...
2016-10-01
Grid-to-rod fretting (GTRF) in pressurized water reactors is a flow-induced vibration phenomenon that results in wear and fretting of the cladding material on fuel rods. GTRF is responsible for over 70% of the fuel failures in pressurized water reactors in the United States. Predicting the GTRF wear and the concomitant interval between failures is important because of the large costs associated with reactor shutdown and replacement of fuel rod assemblies. The GTRF-induced wear process involves turbulent flow, mechanical vibration, tribology, and time-varying irradiated material properties in complex fuel assembly geometries. This paper presents a new approach for predicting GTRF-induced fuel rod wear that uses high-resolution implicit large-eddy simulation to drive nonlinear transient dynamics computations. The GTRF fluid-structure problem is separated into the simulation of the turbulent flow field in the complex-geometry fuel-rod bundles using implicit large-eddy simulation, the calculation of statistics of the resulting fluctuating structural forces, and the nonlinear transient dynamics analysis of the fuel rod. Ultimately, the methods developed here can be used, in conjunction with operational management, to improve reactor core designs in which fuel rod failures are minimized or potentially eliminated. Furthermore, the robustness of both the structural forces computed from the turbulent flow simulations and the results from the transient dynamics analyses highlights the progress made towards achieving a predictive simulation capability for the GTRF problem.
Hydrodynamics of Fishlike Swimming: Effects of swimming kinematics and Reynolds number
NASA Astrophysics Data System (ADS)
Gilmanov, Anvar; Posada, Nicolas; Sotiropoulos, Fotis
2003-11-01
We carry out a series of numerical simulations to investigate the effects of swimming kinematics and Reynolds number on the flow past a three-dimensional fishlike body undergoing undulatory motion. The simulated body shape is that of a real mackerel fish. The mackerel was frozen and subsequently sliced into several thin fillets whose dimensions were carefully measured and used to construct the fishlike body shape used in the simulations. The flow induced by the undulating body is simulated by solving the 3D, unsteady, incompressible Navier-Stokes equations with the second-order accurate, hybrid Cartesian/Immersed Boundary formulation of Gilmanov and Sotiropoulos (J. Comp. Physics, under review, 2003). We consider in-line swimming at constant speed and carry out simulations for various types of swimming kinematics, varying the tailbeat amplitude, frequency, and Reynolds number (300
NASA Technical Reports Server (NTRS)
Ross, M. D.; Linton, S. W.; Parnas, B. R.
2000-01-01
A quasi-three-dimensional finite-volume numerical simulator was developed to study passive voltage spread in vestibular macular afferents. The method, borrowed from computational fluid dynamics, discretizes events transpiring in small volumes over time. The simulated afferent had three calyces with processes. The number of processes and synapses, and the direction and timing of synapse activation, were varied. Simultaneous synapse activation resulted in the shortest latency, while directional activation (proximal to distal and distal to proximal) yielded the most regular discharges. Color-coded visualizations showed that the simulator discretized events and demonstrated that discharge produced a distal spread of voltage from the spike initiator into the ending. The simulations indicate that directional input, morphology, and timing of synapse activation can affect discharge properties, as must the distal spread of voltage from the spike initiator. The finite volume method has generality and can be applied to more complex neurons to explore discrete synaptic effects in four dimensions.
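A finite-volume treatment of passive voltage spread amounts to updating small volumes each time step from their neighbors' potentials plus local synaptic current. Below is a minimal 1-D sketch in that spirit, not the study's quasi-three-dimensional simulator; the geometry, conductances, and the proximal-to-distal synapse schedule are all illustrative assumptions.

```python
import numpy as np

# Minimal 1-D finite-volume sketch of passive voltage spread along an
# afferent process (illustrative parameters, arbitrary units).
n, dt, steps = 100, 5e-6, 4000          # compartments, time step, steps
g_axial, g_leak, c_m = 50.0, 1.0, 1.0   # coupling, leak, capacitance
v = np.zeros(n)                          # membrane voltage, rest = 0

syn_sites = [10, 50, 90]                 # compartments receiving synapses
for step in range(steps):
    i_syn = np.zeros(n)
    # proximal-to-distal activation: synapses fire in sequence
    active = syn_sites[min(step // 1000, len(syn_sites) - 1)]
    i_syn[active] = 20.0                 # injected current at active synapse
    # axial (neighbor) currents via a discrete Laplacian, no-flux ends
    lap = np.zeros(n)
    lap[1:-1] = v[:-2] - 2 * v[1:-1] + v[2:]
    lap[0], lap[-1] = v[1] - v[0], v[-2] - v[-1]
    v += (g_axial * lap - g_leak * v + i_syn) * dt / c_m

print("peak voltage:", v.max(), "at compartment", v.argmax())
```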
NASA Technical Reports Server (NTRS)
Whitten, G. Z.; Hogo, H.
1976-01-01
Jet aircraft emissions data from the literature were used as initial conditions for a series of computer simulations of photochemical smog formation in static air. The chemical kinetics mechanism used in these simulations was an updated version which contains certain parameters designed to account for hydrocarbon reactivity. These parameters were varied to simulate the reaction rate constants and average carbon numbers associated with the jet emissions. The roles of surface effects, variable light sources, the NO/NO2 ratio, continuous emissions, and untested mechanistic parameters were also assessed. The results of these calculations indicate that present jet emissions are capable of producing oxidant by themselves. The hydrocarbon/nitrogen oxides ratio of present jet aircraft emissions is much higher than that of automobiles. These two ratios appear to bracket the hydrocarbon/nitrogen oxides ratio that maximizes ozone production. Hence an enhanced effect is seen in the simulation when jet exhaust emissions are mixed with automobile emissions.
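The core of such a photochemical simulation is numerical integration of a chemical mechanism. A heavily reduced sketch, assuming a two-reaction NO/NO2/O3 photostationary cycle with placeholder rate constants (the study's updated mechanism is far larger), might look like:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy NO/NO2/O3 photostationary cycle (rate constants are illustrative
# placeholders, not the mechanism used in the study):
#   NO2 + hv -> NO + O3   (rate k1, lumping O + O2 -> O3)
#   NO + O3  -> NO2 + O2  (rate k2)
k1, k2 = 0.5, 20.0     # min^-1 and ppm^-1 min^-1 (assumed)

def rhs(t, y):
    no, no2, o3 = y
    j = k1 * no2           # photolysis source
    r = k2 * no * o3       # titration sink
    return [j - r, r - j, j - r]

# Initial mixing ratios (ppm) loosely representing an exhaust parcel.
sol = solve_ivp(rhs, (0, 120), [0.1, 0.05, 0.0], max_step=0.1)
print("final O3 (ppm):", sol.y[2, -1])
```

At steady state the ozone level satisfies k1[NO2] = k2[NO][O3], which is why the hydrocarbon chemistry that shifts the NO/NO2 balance controls oxidant production.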
Keane, Robert E.; Rollins, Matthew; Zhu, Zhi-Liang
2007-01-01
Canopy and surface fuels in many fire-prone forests of the United States have increased over the last 70 years as a result of modern fire exclusion policies, grazing, and other land management activities. The Healthy Forest Restoration Act and National Fire Plan establish a national commitment to reduce fire hazard and restore fire-adapted ecosystems across the USA. The primary index used to prioritize treatment areas across the nation is Fire Regime Condition Class (FRCC), computed as the departure of current conditions from historical fire and landscape conditions. This paper describes a process that uses an extensive set of ecological models to map FRCC from a departure statistic computed from simulated time series of historical landscape composition. This mapping process uses a data-driven, biophysical approach in which georeferenced field data, biogeochemical simulation models, and spatial data libraries are integrated using spatial statistical modeling to map environmental gradients that are then used to predict vegetation and fuels characteristics over space. These characteristics are then fed into a landscape fire and succession simulation model to simulate a time series of historical landscape compositions that are compared to the composition of current landscapes to compute departure and the FRCC values. Intermediate products from this process are then used to create ancillary vegetation, fuels, and fire regime layers that are useful in the eventual planning and implementation of fuel and restoration treatments at local scales. The complex integration of varied ecological models at different scales is described, and problems encountered during the implementation of this process in the LANDFIRE prototype project are addressed.
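The departure computation can be pictured as comparing the current landscape composition vector against a simulated historical time series of compositions. The sketch below uses the common sum-of-minima similarity; the class names, the synthetic Dirichlet "historical" compositions, and the condition-class thresholds are illustrative stand-ins for the LANDFIRE computation, not its actual inputs.

```python
import numpy as np

# Sketch of an FRCC-style departure statistic.
rng = np.random.default_rng(0)
classes = ["early", "mid", "late", "open"]

# Simulated historical compositions (each row sums to 1), standing in
# for output of a landscape fire-succession model; values are synthetic.
historical = rng.dirichlet([8, 5, 4, 3], size=500)
current = np.array([0.10, 0.15, 0.60, 0.15])

reference = historical.mean(axis=0)
# similarity = sum of class-wise minima; departure = 100 * (1 - sim)
similarity = np.minimum(current, reference).sum()
departure = 100 * (1 - similarity)
frcc = 1 if departure <= 33 else 2 if departure <= 66 else 3
print(f"departure = {departure:.1f}%, condition class {frcc}")
```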
The manual control of vehicles undergoing slow transitions in dynamic characteristics
NASA Technical Reports Server (NTRS)
Moriarty, T. E.
1974-01-01
The manual control of a vehicle with slowly time-varying dynamics was studied to develop the analytic and computer techniques necessary for the study of time-varying systems. The human operator is considered as he controls a time-varying plant in which the changes are neither abrupt nor so slow that the time variations are unimportant. An experiment in which pilots controlled the longitudinal mode of a simulated time-varying aircraft is described. The vehicle changed from a pure double integrator to a damped second-order system, either instantaneously or smoothly over time intervals of 30, 75, or 120 seconds. The regulator task consisted of trying to null the error term resulting from injected random disturbances with bandwidths of 0.8, 1.4, and 2.0 radians per second. Each of the twelve experimental conditions was replicated ten times. It is shown that the pilot's performance in the time-varying task is essentially equivalent to his performance in stationary tasks corresponding to various points in the transition. A rudimentary model for the pilot-vehicle-regulator is presented.
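The transitioning plant is straightforward to reproduce numerically: blend the stiffness and damping terms from zero (pure double integrator) up to their final second-order values over the transition interval. The sketch below uses forward-Euler integration and illustrative values for the damping ratio, natural frequency, and transition time; it is not the experiment's actual vehicle model.

```python
import numpy as np

# Double integrator morphing into a damped second-order system over T s.
T, dt = 75.0, 0.01        # transition time (s), integration step (s)
zeta, wn = 0.7, 2.0       # final damping ratio and natural frequency
t_end = 150.0

def blend(t):             # 0 -> 1 ramp during the transition
    return np.clip(t / T, 0.0, 1.0)

x = np.zeros(2)           # state: [position, rate]
u = 1.0                   # constant control input for the demo
for t in np.arange(0.0, t_end, dt):
    g = blend(t)
    a, b = g * 2 * zeta * wn, g * wn**2     # time-varying damping, stiffness
    xdot = np.array([x[1], -a * x[1] - b * x[0] + u])
    x = x + dt * xdot
print("steady-state position, approx u/wn^2:", x[0])
```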
Hu, Suxing; Collins, Lee A.; Goncharov, V. N.; ...
2016-05-26
Using first-principles (FP) methods, we have performed ab initio computations of the equation of state (EOS), thermal conductivity, and opacity of deuterium-tritium (DT) over a wide range of densities and temperatures for inertial confinement fusion (ICF) applications. These systematic investigations have recently been expanded to accurately compute the plasma properties of CH ablators under extreme conditions. In particular, the first-principles EOS and thermal-conductivity tables of CH are self-consistently built from such FP calculations, which are benchmarked by experimental measurements. When compared with the traditional models used for these plasma properties in hydrocodes, significant differences have been identified in the warm dense plasma regime. When these FP-calculated properties of DT and CH were used in our hydrodynamic simulations of ICF implosions, we found that the target performance in terms of neutron yield and energy gain can vary by a factor of 2 to 3, relative to traditional model simulations.
Fast modeling of flux trapping cascaded explosively driven magnetic flux compression generators.
Wang, Yuwei; Zhang, Jiande; Chen, Dongqun; Cao, Shengguang; Li, Da; Liu, Chebo
2013-01-01
To predict the performance of flux trapping cascaded flux compression generators, a calculation model based on an equivalent circuit is investigated. The system circuit is analyzed according to its operation characteristics in different steps. Flux conservation coefficients are added to the driving terms of the circuit differential equations to account for intrinsic flux losses. To calculate the currents in the circuit by solving the circuit equations, a simple zero-dimensional model is used to calculate the time-varying inductance and dc resistance of the generator. A fast computer code is then programmed based on this calculation model. As an example, a two-staged flux trapping generator is simulated using this computer code. Good agreement is achieved when the simulation results are compared with the measurements. Furthermore, this fast calculation model can easily be applied to predict the performance of other flux trapping cascaded flux compression generators with complex structures, such as conical stator or conical armature sections, for design purposes.
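Such a circuit model reduces to an ODE in which the generator inductance is a prescribed function of time and a flux-conservation coefficient scales the inductance-change drive term. A zero-dimensional single-stage sketch, with every component value assumed for illustration rather than taken from the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Flux compression circuit: d(L I)/dt = -R I, with a flux-conservation
# coefficient k < 1 folded into the drive term to model intrinsic flux
# losses (all values illustrative).
L0, L_load, R = 10e-6, 0.5e-6, 2e-3   # H, H, ohm
t_burn, k = 100e-6, 0.9               # generator run time (s), flux coeff.

def L(t):   # generator inductance ramps down as the armature expands
    return L0 * max(1.0 - t / t_burn, 0.0) + L_load

def dLdt(t):
    return -L0 / t_burn if t < t_burn else 0.0

def rhs(t, y):
    i = y[0]
    # k scales the inductance-change term to represent flux loss
    return [-(R + k * dLdt(t)) * i / L(t)]

sol = solve_ivp(rhs, (0, t_burn), [1e3], max_step=1e-7)   # 1 kA seed
print(f"current gain: {sol.y[0, -1] / 1e3:.1f}x")
```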
NASA Astrophysics Data System (ADS)
Algrain, Marcelo C.; Powers, Richard M.
1997-05-01
A case study, written in a tutorial manner, is presented where a comprehensive computer simulation is developed to determine the driving factors contributing to spacecraft pointing accuracy and stability. Models for major system components are described. Among them are spacecraft bus, attitude controller, reaction wheel assembly, star-tracker unit, inertial reference unit, and gyro drift estimators (Kalman filter). The predicted spacecraft performance is analyzed for a variety of input commands and system disturbances. The primary deterministic inputs are the desired attitude angles and rate set points. The stochastic inputs include random torque disturbances acting on the spacecraft, random gyro bias noise, gyro random walk, and star-tracker noise. These inputs are varied over a wide range to determine their effects on pointing accuracy and stability. The results are presented in the form of trade-off curves designed to facilitate the proper selection of subsystems so that overall spacecraft pointing accuracy and stability requirements are met.
Optimal subinterval selection approach for power system transient stability simulation
Kim, Soobae; Overbye, Thomas J.
2015-10-21
Power system transient stability analysis requires an appropriate integration time step to avoid numerical instability as well as to reduce computational demands. For fast system dynamics, which vary more rapidly than what the time step covers, a fraction of the time step, called a subinterval, is used. However, the optimal value of this subinterval is not easily determined because analysis of the system dynamics might be required. This selection is usually made from engineering experience, and perhaps trial and error. This paper proposes an optimal subinterval selection approach for power system transient stability analysis, which is based on modal analysis using a single machine infinite bus (SMIB) system. Fast system dynamics are identified with the modal analysis, and the SMIB system is used focusing on fast local modes. An appropriate subinterval time step from the proposed approach can reduce the computational burden and achieve accurate simulation responses as well. As a result, the performance of the proposed method is demonstrated with the GSO 37-bus system.
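The selection idea can be sketched as: linearize the SMIB swing equation, extract the fastest modal frequency from the eigenvalues, and size the subinterval to resolve that mode. The machine parameters, the main step, and the rule of roughly 20 points per cycle below are assumptions for illustration, not the paper's calibrated values.

```python
import numpy as np

# Modal-analysis-based subinterval sizing for an SMIB system.
H, D, Pmax = 3.0, 1.0, 2.0          # inertia (s), damping, max power (pu)
w_s = 2 * np.pi * 60                # synchronous speed (rad/s)
delta0 = np.arcsin(0.8 / Pmax)      # operating angle for 0.8 pu loading

# State x = [delta, dw/w_s]; linearized swing equation about delta0.
A = np.array([[0.0, w_s],
              [-Pmax * np.cos(delta0) / (2 * H), -D / (2 * H)]])
eigvals = np.linalg.eigvals(A)
f_max = np.max(np.abs(eigvals.imag)) / (2 * np.pi)   # fastest modal freq.

dt_main = 0.05                                       # main step (s), assumed
dt_sub = min(dt_main, 1.0 / (20.0 * f_max))          # ~20 points per cycle
print(f"fastest mode: {f_max:.2f} Hz -> subinterval {dt_sub * 1e3:.1f} ms")
```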
A Computational Observer For Performing Contrast-Detail Analysis Of Ultrasound Images
NASA Astrophysics Data System (ADS)
Lopez, H.; Loew, M. H.
1988-06-01
Contrast-Detail (C/D) analysis allows the quantitative determination of an imaging system's ability to display a range of varying-size targets as a function of contrast. Using this technique, a contrast-detail plot is obtained which can, in theory, be used to compare image quality from one imaging system to another. The C/D plot, however, is usually obtained by using data from human observer readings. We have shown earlier [7] that the performance of human observers in the task of threshold detection of simulated lesions embedded in random ultrasound noise is highly inaccurate and non-reproducible for untrained observers. We present an objective, computational method for the determination of the C/D curve for ultrasound images. This method utilizes digital images of the C/D phantom developed at CDRH, and lesion-detection algorithms that simulate the Bayesian approach using the likelihood function for an ideal observer. We present the results of this method, and discuss the relationship to the human observer and to the comparability of image quality between systems.
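For a known lesion profile in white Gaussian noise, the ideal observer's likelihood-ratio test reduces to a matched filter. The following sketch, which assumes a Gaussian lesion profile and Gaussian noise rather than the CDRH phantom images and speckle statistics, estimates hit and false-alarm rates from the correlator score:

```python
import numpy as np

# Ideal-observer (matched filter) sketch for lesion detection.
rng = np.random.default_rng(1)

size, sigma = 32, 1.0
y, x = np.mgrid[:size, :size]
r2 = (x - size / 2) ** 2 + (y - size / 2) ** 2
lesion = 0.3 * np.exp(-r2 / (2 * 5.0 ** 2))     # contrast x Gaussian disk

def trial(signal_present):
    img = rng.normal(0.0, sigma, (size, size))
    if signal_present:
        img += lesion
    # log-likelihood ratio for a known signal in white Gaussian noise
    return (lesion * img).sum() / sigma ** 2

scores_s = [trial(True) for _ in range(2000)]
scores_n = [trial(False) for _ in range(2000)]
thresh = np.median(scores_n + scores_s)
hit = np.mean(np.array(scores_s) > thresh)
fa = np.mean(np.array(scores_n) > thresh)
print(f"hit rate {hit:.2f}, false-alarm rate {fa:.2f}")
```

Sweeping the lesion contrast at each lesion size until the observer reaches a threshold detection rate traces out the C/D curve.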
NASA Astrophysics Data System (ADS)
Cheng, Shiwang; Carrillo, Jan-Michael Y.; Carroll, Bobby; Sumpter, Bobby G.; Sokolov, Alexei P.
There is growing experimental evidence for the existence of an interfacial layer of finite thickness with slowed dynamics in polymer nanocomposites (PNCs). Moreover, it is believed that the interfacial layer plays a significant role in various macroscopic properties of PNCs. A thicker interfacial layer is found to have a more pronounced effect on macroscopic properties such as mechanical enhancement. However, it is not clear what molecular parameter controls the interfacial layer thickness. Inspired by our recent computer simulations, which showed that chain rigidity correlates well with the interfacial layer thickness, we performed systematic experimental studies on different polymer nanocomposites by varying the chain stiffness. Combining small-angle X-ray scattering, broadband dielectric spectroscopy and temperature-modulated differential scanning calorimetry, we find a good correlation between the polymer Kuhn length and the thickness of the interfacial layer, confirming the earlier computer simulation results. Our findings provide direct guidance for the design of new PNCs with desired properties.
Cai, Zuowei; Huang, Lihong; Zhang, Lingling
2015-05-01
This paper investigates the problem of exponential synchronization of time-varying delayed neural networks with discontinuous neuron activations. Under the extended Filippov differential inclusion framework, by designing a discontinuous state-feedback controller and using some analytic techniques, new testable algebraic criteria are obtained to realize two different kinds of global exponential synchronization of the drive-response system. Moreover, we give the estimated rate of exponential synchronization, which depends on the delays and system parameters. The obtained results extend some previous works on synchronization of delayed neural networks with not only continuous but also discontinuous activations. Finally, numerical examples are provided to show the correctness of our analysis via computer simulations. Our method and theoretical results have significant implications for the design of synchronized neural network circuits involving discontinuous factors and time-varying delays.
User data dissemination concepts for earth resources
NASA Technical Reports Server (NTRS)
Davies, R.; Scott, M.; Mitchell, C.; Torbett, A.
1976-01-01
Domestic data dissemination networks for earth-resources data in the 1985-1995 time frame were evaluated. The following topics were addressed: (1) earth-resources data sources and expected data volumes, (2) future user demand in terms of data volume and timeliness, (3) space-to-space and earth point-to-point transmission link requirements and implementation, (4) preprocessing requirements and implementation, (5) network costs, and (6) technological development to support this implementation. This study was parametric in that the data input (supply) was varied by a factor of about fifteen while the user request (demand) was varied by a factor of about nineteen. Correspondingly, the time from observation to delivery to the user was varied. This parametric evaluation was performed by a computer simulation that was based on network alternatives and resulted in preliminary transmission and preprocessing requirements. The earth-resource data sources considered were: shuttle sorties, synchronous satellites (e.g., SEOS), aircraft, and satellites in polar orbits.
Computationally efficient statistical differential equation modeling using homogenization
Hooten, Mevin B.; Garlick, Martha J.; Powell, James A.
2013-01-01
Statistical models using partial differential equations (PDEs) to describe dynamically evolving natural systems are appearing in the scientific literature with some regularity in recent years. Often such studies seek to characterize the dynamics of temporal or spatio-temporal phenomena such as invasive species, consumer-resource interactions, community evolution, and resource selection. Specifically, in the spatial setting, data are often available at varying spatial and temporal scales. Additionally, the necessary numerical integration of a PDE may be computationally infeasible over the spatial support of interest. We present an approach to impose computationally advantageous changes of support in statistical implementations of PDE models and demonstrate its utility through simulation using a form of PDE known as “ecological diffusion.” We also apply a statistical ecological diffusion model to a data set involving the spread of mountain pine beetle (Dendroctonus ponderosae) in Idaho, USA.
Validation of computer simulation training for esophagogastroduodenoscopy: Pilot study.
Sedlack, Robert E
2007-08-01
Little is known regarding the value of esophagogastroduodenoscopy (EGD) simulators in education. The purpose of the present paper was to validate the use of computer simulation in novice EGD training. In phase 1, expert endoscopists evaluated various aspects of simulation fidelity as compared to live endoscopy. Additionally, computer-recorded performance metrics were assessed by comparing the recorded scores from users of three different experience levels. In phase 2, the transfer of simulation-acquired skills to the clinical setting was assessed in a two-group, randomized pilot study. The setting was a large gastroenterology (GI) fellowship training program; in phase 1, 21 subjects (seven each of expert, intermediate, and novice endoscopists) made up the three experience groups. In phase 2, eight novice GI fellows were involved in the two-group, randomized portion of the study examining the transfer of simulation skills to the clinical setting. During the initial validation phase, each of the 21 subjects completed two standardized EGD scenarios on a computer simulator and their performance scores were recorded for seven parameters. Following this, staff participants completed a questionnaire evaluating various aspects of the simulator's fidelity. Finally, four novice GI fellows were randomly assigned to receive 6 h of simulator-augmented training (SAT group) in EGD prior to beginning 1 month of patient-based EGD training. The remaining fellows experienced 1 month of patient-based training alone (PBT group). Results of the seven measured performance parameters were compared between the three groups of varying experience using a Wilcoxon rank sum test. The staff's simulator fidelity survey used a 7-point Likert scale (1, very unrealistic; 4, neutral; 7, very realistic) for each of the parameters examined. During the second phase of this study, supervising staff rated both SAT and PBT fellows' patient-based performance daily. Scoring in each skill was completed using a 7-point Likert scale (1, strongly disagree; 4, neutral; 7, strongly agree). Median scores were compared between groups using the Wilcoxon rank sum test. Staff evaluations of fidelity found that only two of the parameters examined (anatomy and scope maneuverability) had a significant degree of realism. The remaining areas were felt to be limited in their fidelity. Of the computer-recorded performance scores, only the novice group could be reliably identified from the other two experience groups. In the clinical application phase, the median Patient Discomfort ratings were superior in the PBT group (6; interquartile range [IQR], 5-6) as compared to the SAT group (5; IQR, 4-6; P = 0.015). PBT fellows' ratings were also superior in Sedation, Patient Discomfort, Independence and Competence during various phases of the evaluation. At no point were SAT fellows rated higher than the PBT group in any of the parameters examined. This EGD simulator has limitations in its degree of fidelity and can differentiate only novice endoscopists from other levels of experience. Finally, skills learned during EGD simulation training do not appear to translate well into patient-based endoscopy skills. These findings argue against a key element of validity for the use of this computer simulator in novice EGD training.
A novel grid-based mesoscopic model for evacuation dynamics
NASA Astrophysics Data System (ADS)
Shi, Meng; Lee, Eric Wai Ming; Ma, Yi
2018-05-01
This study presents a novel grid-based mesoscopic model for evacuation dynamics. In this model, the evacuation space is discretised into larger cells than those used in microscopic models. This approach directly computes the dynamic changes in crowd densities in cells over the course of an evacuation. The density flow is driven by the density-speed correlation. The computation is faster than in traditional cellular automata evacuation models, which determine density by computing the movements of each pedestrian. To demonstrate the feasibility of this model, we apply it to a series of practical scenarios and conduct a parameter sensitivity study of the effect of changes in the time step δ. The simulation results show that within the valid range of δ, changing δ has only a minor impact on the simulation. The model also makes it possible to directly acquire key information such as bottleneck areas from a time-varying dynamic density map, even when a relatively large time step is adopted. We use the commercial software AnyLogic to evaluate the model. The result shows that the mesoscopic model is more efficient than the microscopic model and provides more in-situ details (e.g., pedestrian movement patterns) than the macroscopic models.
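A mesoscopic update needs only cell densities, a density-speed relation, and inter-cell fluxes. The sketch below assumes a linear (Greenshields-type) density-speed correlation on a 1-D corridor draining toward an exit; the paper's 2-D model, calibration, and scenarios are not reproduced, and all parameters are illustrative.

```python
import numpy as np

# Grid-based mesoscopic evacuation sketch on a 1-D corridor.
n, dx, dt = 20, 2.0, 0.5          # cells, cell size (m), time step (s)
rho_max, v_max = 5.0, 1.5         # jam density (1/m), free speed (m/s)
rho = np.full(n, 2.0)             # initial density; exit lies past cell 0

for step in range(300):
    v = v_max * np.maximum(1.0 - rho / rho_max, 0.0)  # density-speed law
    flux = np.minimum(rho * v * dt / dx, rho)         # movers per cell
    rho -= flux                                       # step toward exit
    rho[:-1] += flux[1:]                              # arrivals; flux[0] exits
print(f"occupants remaining: {rho.sum() * dx:.1f}")
```

Because the state is a density field rather than individual pedestrians, the bottleneck pattern can be read directly off the density array at any step.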
Effect of image scaling and segmentation in digital rock characterisation
NASA Astrophysics Data System (ADS)
Jones, B. D.; Feng, Y. T.
2016-04-01
Digital material characterisation from microstructural geometry is an emerging field in computer simulation. For permeability characterisation, a variety of studies exist where the lattice Boltzmann method (LBM) has been used in conjunction with computed tomography (CT) imaging to simulate fluid flow through microscopic rock pores. While these previous works show that the technique is applicable, the use of binary image segmentation and the bounceback boundary condition results in a loss of grain surface definition when the modelled geometry is compared to the original CT image. We apply the immersed moving boundary (IMB) condition of Noble and Torczynski as a partial bounceback boundary condition which may be used to better represent the geometric definition provided by a CT image. The IMB condition is validated against published work on idealised porous geometries in both 2D and 3D. Following this, greyscale image segmentation is applied to a CT image of Diemelstadt sandstone. By varying the mapping of CT voxel densities to lattice sites, it is shown that binary image segmentation may underestimate the true permeability of the sample. A CUDA-C-based code, LBM-C, was developed specifically for this work and leverages GPU hardware in order to carry out computations.
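The key step is replacing the binary pore/grain decision with a voxelwise solid fraction that feeds the IMB blending weight. A sketch, using the Noble-Torczynski weighting B = eps*(tau - 1/2)/((1 - eps) + (tau - 1/2)) with an assumed linear greyscale ramp (the threshold values and relaxation time are placeholders, not those of the study):

```python
import numpy as np

# Greyscale-to-solid-fraction mapping for the immersed moving boundary
# (IMB) condition: each CT voxel maps to a partial solid fraction that
# blends bounce-back with the ordinary LBM collision.
tau = 1.0                                # LBM relaxation time (assumed)

def solid_fraction(ct, lo=1200.0, hi=2400.0):
    """Linear ramp: voxels below `lo` are pure pore, above `hi` pure
    grain. Thresholds are illustrative CT greyscale values."""
    return np.clip((ct - lo) / (hi - lo), 0.0, 1.0)

def imb_weight(eps, tau):
    # Noble-Torczynski weighting for partially solid lattice sites
    return eps * (tau - 0.5) / ((1.0 - eps) + (tau - 0.5))

ct_slice = np.array([[800.0, 1500.0], [2000.0, 3000.0]])
eps = solid_fraction(ct_slice)
print("solid fractions:\n", eps)
print("IMB weights:\n", imb_weight(eps, tau))
```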
An investigation of unsteady 3D effects on trailing edge flaps
NASA Astrophysics Data System (ADS)
Jost, E.; Fischer, A.; Lutz, T.; Krämer, E.
2016-09-01
The present study investigates the impact of unsteady and viscous three-dimensional aerodynamic effects on a wind turbine blade with a trailing edge flap by means of CFD. Harmonic oscillations are simulated on the DTU 10 MW rotor with a flap of 10% chord extent ranging from 70% to 80% blade radius. The deflection frequency is varied in the range between 1p and 6p. To quantify 3D effects, rotor simulations are compared to 2D airfoil computations. A significant influence of trailing and shed vortex structures has been found, which leads to a reduction of the lift amplitude and hysteresis effects in the lift response with regard to the flap deflection. In the 3D rotor results, greater amplitude reductions and weaker hysteresis have been found compared to the 2D airfoil simulations.
In-depth analysis and modelling of self-heating effects in nanometric DGMOSFETs
NASA Astrophysics Data System (ADS)
Roldán, J. B.; González, B.; Iñiguez, B.; Roldán, A. M.; Lázaro, A.; Cerdeira, A.
2013-01-01
Self-heating effects (SHEs) in nanometric symmetrical double-gate MOSFETs (DGMOSFETs) have been analysed. An equivalent thermal circuit for the transistors has been developed to characterise thermal effects, where the temperature and thickness dependency of the thermal conductivity of the silicon and oxide layers within the devices has been included. The equivalent thermal circuit is consistent with simulations using a commercial technology computer-aided design (TCAD) tool (Sentaurus by Synopsys). In addition, a model for DGMOSFETs has been developed where SHEs have been considered in detail, taking into account the temperature dependence of the low-field mobility, saturation velocity, and inversion charge. The model correctly reproduces Sentaurus simulation data for the typical bias range used in integrated circuits. Lattice temperatures predicted by simulation are coherently reproduced by the model for varying silicon layer geometry.
Parallel ALLSPD-3D: Speeding Up Combustor Analysis Via Parallel Processing
NASA Technical Reports Server (NTRS)
Fricker, David M.
1997-01-01
The ALLSPD-3D Computational Fluid Dynamics code for reacting flow simulation was run on a set of benchmark test cases to determine its parallel efficiency. These test cases included non-reacting and reacting flow simulations with varying numbers of processors. Also, the tests explored the effects of scaling the simulation with the number of processors in addition to distributing a constant size problem over an increasing number of processors. The test cases were run on a cluster of IBM RS/6000 Model 590 workstations with ethernet and ATM networking plus a shared memory SGI Power Challenge L workstation. The results indicate that the network capabilities significantly influence the parallel efficiency, i.e., a shared memory machine is fastest and ATM networking provides acceptable performance. The limitations of ethernet greatly hamper the rapid calculation of flows using ALLSPD-3D.
A Novel Cost Based Model for Energy Consumption in Cloud Computing
Horri, A.; Dastghaibyfard, Gh.
2015-01-01
Cloud data centers consume enormous amounts of electrical energy. To support green cloud computing, providers also need to minimize cloud infrastructure energy consumption while maintaining QoS. In this study, an energy consumption model is proposed for the time-shared policy in the virtualization layer of cloud environments. The cost and energy usage of the time-shared policy were modeled in the CloudSim simulator based upon results obtained from a real system, and the proposed model was then evaluated in different scenarios. In the proposed model, cache interference costs, based upon the size of the data, were considered. The proposed model was implemented in the CloudSim simulator, and the related simulation results indicate that the energy consumption may be considerable and can vary with parameters such as the quantum parameter, data size, and the number of VMs on a host. Measured results validate the model and demonstrate that there is a tradeoff between energy consumption and QoS in the cloud environment. PMID:25705716
Simulating the Heliosphere with Kinetic Hydrogen and Dynamic MHD Source Terms
Heerikhuisen, Jacob; Pogorelov, Nikolai; Zank, Gary
2013-04-01
The interaction between the ionized plasma of the solar wind (SW) emanating from the sun and the partially ionized plasma of the local interstellar medium (LISM) creates the heliosphere. The heliospheric interface is characterized by the tangential discontinuity known as the heliopause that separates the SW and LISM plasmas, and a termination shock on the SW side along with a possible bow shock on the LISM side. Neutral hydrogen of interstellar origin plays a critical role in shaping the heliospheric interface, since it freely traverses the heliopause. Charge-exchange between H-atoms and plasma protons couples the ions and neutrals, but the mean free paths are large, resulting in non-equilibrated energetic ion and neutral components. In our model, source terms for the MHD equations are generated using a kinetic approach for hydrogen, and the key computational challenge is to resolve these sources with sufficient statistics. For steady-state simulations, statistics can accumulate over arbitrarily long time intervals. In this paper we discuss an approach for improving the statistics in time-dependent calculations, and present results from simulations of the heliosphere where the SW conditions at the inner boundary of the computation vary according to an idealized solar cycle.
NASA Astrophysics Data System (ADS)
Heister, Timo; Dannberg, Juliane; Gassmöller, Rene; Bangerth, Wolfgang
2017-08-01
Computations have helped elucidate the dynamics of Earth's mantle for several decades already. The numerical methods that underlie these simulations have greatly evolved within this time span, and today include dynamically changing and adaptively refined meshes, sophisticated and efficient solvers, and parallelization to large clusters of computers. At the same time, many of the methods - discussed in detail in a previous paper in this series - were developed and tested primarily using model problems that lack many of the complexities that are common to the realistic models our community wants to solve today. With several years of experience solving complex and realistic models, we here revisit some of the algorithm designs of the earlier paper and discuss the incorporation of more complex physics. In particular, we re-consider time stepping and mesh refinement algorithms, evaluate approaches to incorporate compressibility, and discuss dealing with strongly varying material coefficients, latent heat, and how to track chemical compositions and heterogeneities. Taken together and implemented in a high-performance, massively parallel code, the techniques discussed in this paper then allow for high resolution, 3-D, compressible, global mantle convection simulations with phase transitions, strongly temperature dependent viscosity and realistic material properties based on mineral physics data.
Bilinauskaite, Milda; Mantha, Vishveshwar Rajendra; Rouboa, Abel Ilah; Ziliukas, Pranas; Silva, Antonio Jose
2013-01-01
The aim of this paper is to determine the hydrodynamic characteristics of scanned models of a swimmer's hand for various combinations of the angle of attack, the sweepback angle, and the shape and velocity of the hand, simulating separate underwater arm stroke phases of freestyle (front crawl) swimming. Four realistic 3D models of a swimmer's hand, corresponding to different combinations of separated/closed finger positions, were used to simulate different underwater front crawl phases. The fluid flow was simulated using FLUENT (ANSYS, PA, USA). Drag force and drag coefficient were calculated using computational fluid dynamics (CFD) in steady state. Results showed that the drag force and coefficient varied with flow velocity for all shapes of the hand, and variation was observed for different hand positions corresponding to different stroke phases. The models of the hand with the thumb adducted and abducted generated the highest drag forces and drag coefficients. The current study suggests that realistic variation of both orientation angles produced higher values of drag, lift, and resultant coefficients and forces. To augment the resultant force, which affects the swimmer's propulsion, the swimmer should concentrate on effectively optimising achievable hand areas during crucial propulsive phases. PMID:23691493
Full cell simulation and the evaluation of the buffer system on air-cathode microbial fuel cell
NASA Astrophysics Data System (ADS)
Ou, Shiqi; Kashima, Hiroyuki; Aaron, Douglas S.; Regan, John M.; Mench, Matthew M.
2017-04-01
This paper presents a computational model of a single chamber, air-cathode MFC. The model considers losses due to mass transport, as well as biological and electrochemical reactions, in both the anode and cathode half-cells. Computational fluid dynamics and Monod-Nernst analysis are incorporated into the reactions for the anode biofilm and cathode Pt catalyst and biofilm. The integrated model provides a macro-perspective of the interrelation between the anode and cathode during power production, while incorporating microscale contributions of mass transport within the anode and cathode layers. Model considerations include the effects of pH (H+/OH- transport) and electric field-driven migration on concentration overpotential, effects of various buffers and various amounts of buffer on the pH in the whole reactor, and overall impacts on the power output of the MFC. The simulation results fit the experimental polarization and power density curves well. Further, this model provides insight regarding mass transport at varying current density regimes and quantitative delineation of overpotentials at the anode and cathode. Overall, this comprehensive simulation is designed to accurately predict MFC performance based on fundamental fluid and kinetic relations and guide optimization of the MFC system.
Monte Carlo simulation of electrothermal atomization on a desktop personal computer
NASA Astrophysics Data System (ADS)
Histen, Timothy E.; Güell, Oscar A.; Chavez, Iris A.; Holcombe, James A.
1996-07-01
Monte Carlo simulations have been applied to electrothermal atomization (ETA) using a tubular atomizer (e.g. graphite furnace) because of the complexity in the geometry, heating, molecular interactions, etc. The intense computational time needed to accurately model ETA often limited its effective implementation to the use of supercomputers. However, with the advent of more powerful desktop processors, this is no longer the case. A C-based program has been developed and can be used under Windows™ or DOS. With this program, basic parameters such as furnace dimensions, sample placement, and furnace heating, and kinetic parameters such as activation energies for desorption and adsorption, can be varied to show the absorbance profile dependence on these parameters. Even data such as the time-dependent spatial distribution of analyte inside the furnace can be collected. The DOS version also permits input of external temperature-time data to permit comparison of simulated profiles with experimentally obtained absorbance data. The run-time versions are provided along with the source code. This article is an electronic publication in Spectrochimica Acta Electronica (SAE), the electronic section of Spectrochimica Acta Part B (SAB). The hardcopy text is accompanied by a diskette with a program (PC format), data files and text files.
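The structure of such a Monte Carlo ETA code is simple even though the physics details matter: atoms desorb from the furnace wall with an Arrhenius probability set by the temperature ramp, then random-walk in the vapor until they escape the tube ends. The sketch below is 1-D with invented parameters and is not the published program.

```python
import numpy as np

# Monte Carlo sketch of electrothermal atomization (illustrative values).
rng = np.random.default_rng(3)
R, Ea = 8.314, 200e3                 # J/mol/K, desorption activation energy
n_atoms, length, dt = 5000, 100, 1e-3
pos = np.zeros(n_atoms)              # axial position of vapor atoms
on_wall = np.ones(n_atoms, bool)     # atoms start adsorbed at tube center
lost = np.zeros(n_atoms, bool)
signal = []

for step in range(3000):
    T = 300.0 + 800.0 * step * dt    # linear wall temperature ramp (K)
    p_des = min(1e9 * np.exp(-Ea / (R * T)) * dt, 1.0)   # Arrhenius
    desorb = on_wall & (rng.random(n_atoms) < p_des)
    on_wall[desorb] = False
    vapor = ~on_wall & ~lost
    pos[vapor] += rng.choice([-1, 1], vapor.sum())       # 1-D random walk
    lost |= np.abs(pos) > length / 2                     # escaped the tube
    signal.append(int((vapor & ~lost).sum()))            # ~ absorbance

print("peak vapor population:", max(signal))
```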
Multiscale computational models in physical systems biology of intracellular trafficking.
Tourdot, Richard W; Bradley, Ryan P; Ramakrishnan, Natesan; Radhakrishnan, Ravi
2014-10-01
In intracellular trafficking, a definitive understanding of the interplay between protein binding and membrane morphology remains incomplete. The authors describe a computational approach that integrates coarse-grained molecular dynamics (CGMD) simulations with continuum Monte Carlo (CM) simulations of the membrane to study protein-membrane interactions and the ensuing membrane curvature. They relate the curvature field strength discerned at the molecular level to its effect at the cellular length-scale. They perform thermodynamic integration on the CM model to describe the free energy landscape of vesiculation in clathrin-mediated endocytosis. The method presented here delineates membrane morphologies and maps out the free energy changes associated with membrane remodeling due to varying coat sizes, coat curvature strengths, membrane bending rigidities, and tensions; furthermore, several constraints on mechanisms underlying clathrin-mediated endocytosis have also been identified. Their CGMD simulations have revealed the importance of PIP2 for stable binding of the proteins essential for curvature induction in the bilayer and have provided a molecular basis for the positive curvature induction by the epsin N-terminal homology (ENTH) domain. Calculation of the free energy landscape for vesicle budding has identified the critical size and curvature strength of a clathrin coat required for nucleation and stabilisation of a mature vesicle.
Whitbeck, David E.
2006-01-01
The Lamoreux Potential Evapotranspiration (LXPET) Program computes potential evapotranspiration (PET) using inputs from four different meteorological sources: temperature, dewpoint, wind speed, and solar radiation. PET and the same four meteorological inputs are used with precipitation data in the Hydrological Simulation Program-Fortran (HSPF) to simulate streamflow in the Salt Creek watershed, DuPage County, Illinois. Streamflows from HSPF are routed with the Full Equations (FEQ) model to determine water-surface elevations. Consequently, variations in meteorological inputs have potential to propagate through many calculations. Sensitivity of PET to variation was simulated by increasing the meteorological input values by 20, 40, and 60 percent and evaluating the change in the calculated PET. Increases in temperatures produced the greatest percent changes, followed by increases in solar radiation, dewpoint, and then wind speed. Additional sensitivity of PET was considered for shifts in input temperatures and dewpoints by absolute differences of ±10, ±20, and ±30 degrees Fahrenheit (°F). Again, changes in input temperatures produced the greatest differences in PET. Sensitivity of streamflow simulated by HSPF was evaluated for 20-percent increases in meteorological inputs. These simulations showed that increases in temperature produced the greatest change in flow. Finally, peak water-surface elevations for nine storm events were compared among unmodified meteorological inputs and inputs with values predicted 6, 24, and 48 hours preceding the simulated peak. Results of this study can be applied to determine how errors specific to a hydrologic system will affect computations of system streamflow and water-surface elevations.
Leake, S.A.; Leahy, P.P.; Navoy, A.S.
1994-01-01
Transient leakage into or out of a compressible fine-grained confining unit results from ground- water storage changes within the unit. The computer program described in this report provides a new method of simulating transient leakage using the U.S. Geological Survey modular finite- difference ground-water flow model (MODFLOW). The new program is referred to as the Transient- Leakage Package. The Transient-Leakage Package solves integrodifferential equations that describe flow across the upper and lower boundaries of confining units. For each confining unit, vertical hydraulic conductivity, thickness, and specific storage are specified in input arrays. These properties can vary from cell to cell and the confining unit need not be present at all locations in the grid; however, the confining units must be bounded above and below by model layers in which head is calculated or specified. The package was used in an example problem to simulate drawdown around a pumping well in a system with two aquifers separated by a confining unit. For drawdown values in excess of 1 centimeter, the solution using the new package closely matched an exact analytical solution. The problem also was simulated without the new package by using a separate model layer to represent the confining unit. That simulation was refined by using two model layers to represent the confining unit. The simulation using the Transient-Leakage Package was faster and more accurate than either of the simulations using model layers to represent the confining unit.
Shrestha, Suman; Vedantham, Srinivasan; Karellas, Andrew
2017-01-01
In digital breast tomosynthesis and digital mammography, the x-ray beam filter material and thickness vary between systems. Replacing K-edge filters with Al was investigated with the intent to reduce exposure duration and to simplify system design. Tungsten target x-ray spectra were simulated with K-edge filters (50μm Rh; 50μm Ag) and Al filters of varying thickness. Monte Carlo simulations were conducted to quantify the x-ray scatter from various filters alone, the scatter-to-primary ratio (SPR) with compressed breasts, and the radiation dose to the breast. These data were used to analytically compute the signal-difference-to-noise ratio (SDNR) at unit (1 mGy) mean glandular dose (MGD) for W/Rh and W/Ag spectra. At SDNR matched between K-edge and Al filtered spectra, the reductions in exposure duration and MGD were quantified for three strategies: (i) fixed Al thickness and matched tube potential in kilovolts (kV); (ii) fixed Al thickness and varying the kV to match the half-value layer (HVL) between Al and K-edge filtered spectra; and (iii) matched kV and varying the Al thickness to match the HVL between Al and K-edge filtered spectra. Monte Carlo simulations indicate that the SPR with and without the breast was not different between Al and K-edge filters. Modelling for fixed Al thickness (700μm) and kV matched to K-edge filtered spectra, identical SDNR was achieved with 37–57% reduction in exposure duration and with 2–20% reduction in MGD, depending on breast thickness. Modelling for fixed Al thickness (700μm) and HVL matched by increasing the kV over a [0, 4] kV range, identical SDNR was achieved with 62–65% decrease in exposure duration and with 2–24% reduction in MGD, depending on breast thickness. For kV and HVL matched to K-edge filtered spectra by varying the Al filter thickness over the [700, 880]μm range, identical SDNR was achieved with 23–56% reduction in exposure duration and 2–20% reduction in MGD, depending on breast thickness. These simulations indicate that increased fluence with an Al filter of fixed or variable thickness substantially decreases exposure duration while providing similar image quality with a moderate reduction in MGD. PMID:28075335
NASA Astrophysics Data System (ADS)
Leier, André; Marquez-Lago, Tatiana T.; Burrage, Kevin
2008-05-01
The delay stochastic simulation algorithm (DSSA) by Barrio et al. [PLoS Comput. Biol. 2, e117 (2006)] was developed to simulate delayed processes in cell biology in the presence of intrinsic noise, that is, when there are small-to-moderate numbers of certain key molecules present in a chemical reaction system. These delayed processes can faithfully represent complex interactions and mechanisms that imply a number of spatiotemporal processes often not explicitly modeled, such as transcription and translation, basic in the modeling of cell signaling pathways. However, for systems with widely varying reaction rate constants or large numbers of molecules, the simulation time steps of both the stochastic simulation algorithm (SSA) and the DSSA can become very small, causing considerable computational overheads. In order to overcome the limit of small step sizes, various τ-leap strategies have been suggested for improving the computational performance of the SSA. In this paper, we present a binomial τ-DSSA method that extends the τ-leap idea to the delay setting and avoids drawing insufficient numbers of reactions, a common shortcoming of existing binomial τ-leap methods that becomes evident when dealing with complex chemical interactions. The resulting inaccuracies are most evident in the delayed case, even when considering reaction products as potential reactants within the same time step in which they are produced. Moreover, we extend the framework to account for multicellular systems with different degrees of intercellular communication. We apply these ideas to two important genetic regulatory models, namely, the hes1 gene, implicated as a molecular clock, and a Her1/Her7 model for coupled oscillating cells.
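The binomial τ-leap idea caps each reaction count by its available reactants, so a leap can never drive counts negative, while delayed channels schedule their products for release after the delay elapses. A minimal single-gene sketch under these assumptions (far simpler than the binomial τ-DSSA itself; the rate constants and delay are invented):

```python
import numpy as np
from collections import deque

# Binomial tau-leaping with one delayed production channel.
rng = np.random.default_rng(5)
tau, t_end, delay = 0.05, 50.0, 5.0
k_prod, k_deg = 20.0, 0.5           # production (delayed) and degradation
x = 0                               # molecule count
pending = deque()                   # (completion time, count) of delayed products

t = 0.0
while t < t_end:
    # release delayed products whose completion time has arrived
    while pending and pending[0][0] <= t:
        x += pending.popleft()[1]
    # delayed production: initiation events complete `delay` later
    n_init = rng.poisson(k_prod * tau)
    pending.append((t + delay, n_init))
    # degradation: the binomial draw guarantees n_deg <= x
    p = min(k_deg * tau, 1.0)
    n_deg = rng.binomial(x, p) if x > 0 else 0
    x -= n_deg
    t += tau
print("steady state ~ k_prod/k_deg =", k_prod / k_deg, "; simulated x =", x)
```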
Numerical investigation of coupled density-driven flow and hydrogeochemical processes below playas
NASA Astrophysics Data System (ADS)
Hamann, Enrico; Post, Vincent; Kohfahl, Claus; Prommer, Henning; Simmons, Craig T.
2015-11-01
Numerical modeling approaches of varying complexity were explored to investigate coupled groundwater flow and geochemical processes in saline basins. Long-term model simulations of a playa system provide insights into the complex feedback mechanisms between density-driven flow and the spatiotemporal patterns of precipitating evaporites and evolving brines. Using a reactive multicomponent transport modeling approach, the simulations reproduced, for the first time in a numerical study, the evaporite precipitation sequences frequently observed in saline basins ("bull's eyes"). Playa-specific flow, evapoconcentration, and chemical divides were found to be the primary controls on the location of the evaporites formed and the resulting brine chemistry. Comparative simulations with the computationally far less demanding surrogate single-species transport models showed that these were still able to replicate the major flow patterns obtained by the more complex reactive transport simulations. However, the simulated degree of salinization was clearly lower than in the reactive multicomponent transport simulations. For example, in the late stages of the simulations, when the brine becomes halite-saturated, the nonreactive simulation overestimated the solute mass by almost 20%. The simulations highlight the importance of considering reactive transport processes for understanding and quantifying geochemical patterns, concentrations of individual dissolved solutes, and evaporite evolution.
Lee, Juhyun; Moghadam, Mahdi Esmaily; Kung, Ethan; Cao, Hung; Beebe, Tyler; Miller, Yury; Roman, Beth L; Lien, Ching-Ling; Chi, Neil C; Marsden, Alison L; Hsiai, Tzung K
2013-01-01
Peristaltic contraction of the embryonic heart tube produces time- and spatial-varying wall shear stress (WSS) and pressure gradients (∇P) across the atrioventricular (AV) canal. Zebrafish (Danio rerio) are a genetically tractable system to investigate cardiac morphogenesis. The use of Tg(fli1a:EGFP) (y1) transgenic embryos allowed for delineation and two-dimensional reconstruction of the endocardium. This time-varying wall motion was then prescribed in a two-dimensional moving domain computational fluid dynamics (CFD) model, providing new insights into spatial and temporal variations in WSS and ∇P during cardiac development. The CFD simulations were validated with particle image velocimetry (PIV) across the atrioventricular (AV) canal, revealing an increase in both velocities and heart rates, but a decrease in the duration of atrial systole from early to later stages. At 20-30 hours post fertilization (hpf), simulation results revealed bidirectional WSS across the AV canal in the heart tube in response to peristaltic motion of the wall. At 40-50 hpf, the tube structure undergoes cardiac looping, accompanied by a nearly 3-fold increase in WSS magnitude. At 110-120 hpf, distinct AV valve, atrium, ventricle, and bulbus arteriosus form, accompanied by incremental increases in both WSS magnitude and ∇P, but a decrease in bi-directional flow. Laminar flow develops across the AV canal at 20-30 hpf, and persists at 110-120 hpf. Reynolds numbers at the AV canal increase from 0.07±0.03 at 20-30 hpf to 0.23±0.07 at 110-120 hpf (p< 0.05, n=6), whereas Womersley numbers remain relatively unchanged from 0.11 to 0.13. Our moving domain simulations highlights hemodynamic changes in relation to cardiac morphogenesis; thereby, providing a 2-D quantitative approach to complement imaging analysis.
Sub-discretized surface model with application to contact mechanics in multi-body simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, S; Williams, J
2008-02-28
The mechanics of contact between rough and imperfectly spherical adhesive powder grains are often complicated by a variety of factors, including several which vary over sub-grain length scales. These include traction factors that vary spatially over the surface of the individual grains, including high energy electron and acceptor sites (electrostatic), hydrophobic and hydrophilic sites (electrostatic and capillary), surface energy (general adhesion), geometry (van der Waals and mechanical), and elasto-plastic deformation (mechanical). For mechanical deformation and reaction, coupled motions, such as twisting with bending and sliding, as well as surface roughness add an asymmetry to the contact force which invalidates assumptions for popular models of contact, such as the Hertzian and its derivatives, for the non-adhesive case, and the JKR and DMT models for adhesive contacts. Though several contact laws have been offered to ameliorate these drawbacks, they are often constrained to particular loading paths (most often normal loading) and are relatively complicated for computational implementation. This paper offers a simple and general computational method for augmenting contact law predictions in multi-body simulations through characterization of the contact surfaces using a hierarchically-defined surface sub-discretization. For the case of adhesive contact between powder grains in low stress regimes, this technique can allow a variety of existing contact laws to be resolved across scales, allowing for moments and torques about the contact area as well as normal and tangential tractions to be resolved. This is especially useful for multi-body simulation applications where the modeler desires statistical distributions and calibration for parameters in contact laws commonly used for resolving near-surface contact mechanics. The approach is verified against analytical results for the case of rough, elastic spheres.
Validation of the Monte Carlo simulator GATE for indium-111 imaging.
Assié, K; Gardin, I; Véra, P; Buvat, I
2005-07-07
Monte Carlo simulations are useful for optimizing and assessing single photon emission computed tomography (SPECT) protocols, especially when aiming at measuring quantitative parameters from SPECT images. Before Monte Carlo simulated data can be trusted, the simulation model must be validated. The purpose of this work was to validate the use of GATE, a new Monte Carlo simulation platform based on GEANT4, for modelling indium-111 SPECT data, the quantification of which is of foremost importance for dosimetric studies. To that end, acquisitions of (111)In line sources in air and in water and of a cylindrical phantom were performed, together with the corresponding simulations. The simulation model included Monte Carlo modelling of the camera collimator and of a back-compartment accounting for photomultiplier tubes and associated electronics. Energy spectra, spatial resolution, sensitivity values, images and count profiles obtained for experimental and simulated data were compared. An excellent agreement was found between experimental and simulated energy spectra. For source-to-collimator distances varying from 0 to 20 cm, simulated and experimental spatial resolution differed by less than 2% in air, while the simulated sensitivity values were within 4% of the experimental values. The simulation of the cylindrical phantom closely reproduced the experimental data. These results suggest that GATE enables accurate simulation of (111)In SPECT acquisitions.
Opinion formation in time-varying social networks: The case of the naming game
NASA Astrophysics Data System (ADS)
Maity, Suman Kalyan; Manoj, T. Venkat; Mukherjee, Animesh
2012-09-01
We study the dynamics of the naming game as an opinion formation model on time-varying social networks. This agent-based model captures the essential features of the agreement dynamics by means of a memory-based negotiation process. Our study focuses on the impact of time-varying properties of the social network of the agents on the naming game dynamics. In particular, we perform a computational exploration of this model using simulations on top of real networks. We investigate the outcomes of the dynamics on two different types of time-varying data: (1) the networks vary on a day-to-day basis and (2) the networks vary within very short intervals of time (20 sec). In the first case, we find that networks with strong community structure hinder the system from reaching global agreement; the evolution of the naming game in these networks maintains clusters of coexisting opinions indefinitely leading to metastability. In the second case, we investigate the evolution of the naming game in perfect synchronization with the time evolution of the underlying social network shedding new light on the traditional emergent properties of the game that differ largely from what has been reported in the existing literature.
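The memory-based negotiation at the heart of the naming game is compact enough to state in full. The sketch below runs it on a stand-in time-varying network where a fresh random pair interacts each step; the study instead replays empirical day-scale and 20-second network snapshots, which is where the community-structure effects arise.

```python
import random

# Minimal naming game with per-agent name inventories.
random.seed(42)
n_agents, n_steps = 100, 20000
inventories = [set() for _ in range(n_agents)]
next_name = 0

def pair_at(t):
    """Stand-in for the time-varying network: one random interacting
    pair per step (real data would supply the recorded snapshots)."""
    a, b = random.randrange(n_agents), random.randrange(n_agents)
    return (a, b) if a != b else pair_at(t)

for t in range(n_steps):
    speaker, hearer = pair_at(t)
    if not inventories[speaker]:            # invent a brand-new name
        inventories[speaker].add(next_name)
        next_name += 1
    word = random.choice(sorted(inventories[speaker]))
    if word in inventories[hearer]:         # success: both collapse to it
        inventories[speaker] = {word}
        inventories[hearer] = {word}
    else:                                   # failure: hearer memorizes it
        inventories[hearer].add(word)

names = {w for inv in inventories for w in inv}
print("distinct names remaining:", len(names))
```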
Electroosmotic mixing in microchannels.
Glasgow, Ian; Batton, John; Aubry, Nadine
2004-12-01
Mixing is an essential, yet challenging, process step for many Lab on a Chip (LOC) applications. This paper presents a method of mixing for microfluidic devices that relies upon electroosmotic flow. In physical tests and in computer simulations, we periodically vary the electric field with time to mix two aqueous solutions. Good mixing is shown to occur when the electroosmotic flow at the two inlets pulse out of phase, the Strouhal number is on the order of 1, and the pulse volumes are on the order of the intersection volume.
Computational Fluid Dynamics: Algorithms and Supercomputers
1988-03-01
[Report excerpt; only fragments survive extraction: a citation to Pulliam, T. and Steger, J., "Implicit Finite Difference Simulations of Three Dimensional Compressible Flow," AIAA Journal, Vol. 18, No. 2; a remark that the asymptotic speedup result holds as M approaches infinity with N bounded, while actual performance for finite M and varying N is a different matter; and a partial table of vectorization fractions and speedups for the PARTICLE-IN-CELL, WEATHER FORECAST, SEISMIC MIGRATION, MONTE CARLO, and LATTICE GAUGE benchmark codes.]
Computational model of a vector-mediated epidemic
NASA Astrophysics Data System (ADS)
Dickman, Adriana Gomes; Dickman, Ronald
2015-05-01
We discuss a lattice model of vector-mediated transmission of a disease to illustrate how simulations can be applied in epidemiology. The population consists of two species, human hosts and vectors, which contract the disease from one another. Hosts are sedentary, while vectors (mosquitoes) diffuse in space. Examples of such diseases are malaria, dengue fever, and Pierce's disease in vineyards. The model exhibits a phase transition between an absorbing (infection free) phase and an active one as parameters such as infection rates and vector density are varied.
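A stripped-down sketch of such a host-vector lattice model is given below; the rates, lattice size, and update rules are illustrative guesses for exposition, not the authors' implementation.

```python
# Toy host-vector lattice model: hosts are fixed on a grid, vectors random-walk
# and exchange infection with the host on their current site.
import random

random.seed(2)
L, n_vec, steps = 20, 60, 2000
p_hv, p_vh, p_rec = 0.3, 0.3, 0.05            # transmission / recovery rates
host = [[False] * L for _ in range(L)]
host[L // 2][L // 2] = True                   # one initially infected host
vec = [[random.randrange(L), random.randrange(L), False] for _ in range(n_vec)]

for _ in range(steps):
    for v in vec:
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        v[0], v[1] = (v[0] + dx) % L, (v[1] + dy) % L   # diffuse (periodic box)
        x, y = v[0], v[1]
        if host[x][y] and random.random() < p_hv:
            v[2] = True                        # infected host infects vector
        if v[2] and random.random() < p_vh:
            host[x][y] = True                  # infected vector infects host
    for x in range(L):
        for y in range(L):
            if host[x][y] and random.random() < p_rec:
                host[x][y] = False             # host recovers (susceptible again)

print(sum(map(sum, host)), "infected hosts at the end")
```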
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brinkman, Kyle; Bordia, Rajendra; Reifsnider, Kenneth
This project fabricated model multiphase ceramic waste forms with processing-controlled microstructures followed by advanced characterization with synchrotron and electron microscopy-based 3D tomography to provide elemental and chemical state-specific information resulting in compositional phase maps of ceramic composites. Details of 3D microstructural features were incorporated into computer-based simulations using durability data for individual constituent phases as inputs in order to predict the performance of multiphase waste forms with varying microstructure and phase connectivity.
Laser-driven planar Rayleigh-Taylor instability experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glendinning, S.G.; Weber, S.V.; Bell, P.
1992-08-24
We have performed a series of experiments on the Nova Laser Facility to examine the hydrodynamic behavior of directly driven planar foils with initial perturbations of varying wavelength. The foils were accelerated with a single, frequency doubled, smoothed and temporally shaped laser beam at 0.8{times}10{sup 14} W/cm{sup 2}. The experiments are in good agreement with numerical simulations using the computer codes LASNEX and ORCHID which show growth rates reduced to about 70% of classical for this nonlinear regime.
Using Sobol sequences for planning computer experiments
NASA Astrophysics Data System (ADS)
Statnikov, I. N.; Firsov, G. I.
2017-12-01
We discuss the use of the Planning LP-search (PLP-search) method for problems of multicriteria synthesis of dynamic systems. The method not only allows the parameter space to be surveyed with simulation-model experiments within specified ranges of parameter variation, but, owing to the specially randomized planning of these experiments, also supports a quantitative statistical evaluation of the influence of the varied parameters and their pairwise combinations on the properties of the dynamic system.
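The quasi-random plans underlying LP-search are LP-tau (Sobol) sequences. As a small illustration, the sketch below draws a scrambled Sobol design over a two-parameter box using SciPy's qmc module; the parameter names and ranges are hypothetical.

```python
# Sketch: space-filling Sobol design for simulation experiments.
import numpy as np
from scipy.stats import qmc

sampler = qmc.Sobol(d=2, scramble=True, seed=0)
unit_pts = sampler.random_base2(m=7)           # 2^7 = 128 points in [0, 1)^2

# Rescale to the engineering ranges to be explored (illustrative bounds).
lower, upper = [0.5, 10.0], [2.0, 50.0]        # e.g., stiffness and damping
design = qmc.scale(unit_pts, lower, upper)

print(design[:5])                              # first five parameter combinations
```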
An Experimental Comparison Between Flexible and Rigid Airfoils at Low Reynolds Numbers
NASA Astrophysics Data System (ADS)
Uzodinma, Jaylon; Macphee, David
2017-11-01
This study uses experimental and computational research methods to compare the aerodynamic performance of rigid and flexible airfoils at a low Reynolds number throughout varying angles of attack. This research can be used to improve the design of small wind turbines, micro-aerial vehicles, and any other devices that operate at low Reynolds numbers. Experimental testing was conducted in the University of Alabama's low-speed wind tunnel, and computational testing was conducted using the open-source CFD code OpenFOAM. For experimental testing, polyurethane-based (rigid) airfoils and silicone-based (flexible) airfoils were constructed using acrylic molds for NACA 0012 and NACA 2412 airfoil profiles. Computer models of the previously-specified airfoils were also created for a computational analysis. Both experimental and computational data were analyzed to examine the critical angles of attack, the lift and drag coefficients, and the occurrence of laminar boundary separation for each airfoil. Moreover, the computational simulations were used to examine the resulting flow fields, in order to provide possible explanations for the aerodynamic performances of each airfoil type. EEC 1659710.
Identification of gene regulation models from single-cell data
NASA Astrophysics Data System (ADS)
Weber, Lisa; Raymond, William; Munsky, Brian
2018-09-01
In quantitative analyses of biological processes, one may use many different scales of models (e.g. spatial or non-spatial, deterministic or stochastic, time-varying or at steady-state) or many different approaches to match models to experimental data (e.g. model fitting or parameter uncertainty/sloppiness quantification with different experiment designs). These different analyses can lead to surprisingly different results, even when applied to the same data and the same model. We use a simplified gene regulation model to illustrate many of these concerns, especially for ODE analyses of deterministic processes, chemical master equation and finite state projection analyses of heterogeneous processes, and stochastic simulations. For each analysis, we employ MATLAB and Python software to consider a time-dependent input signal (e.g. a kinase nuclear translocation) and several model hypotheses, along with simulated single-cell data. We illustrate different approaches (e.g. deterministic and stochastic) to identify the mechanisms and parameters of the same model from the same simulated data. For each approach, we explore how uncertainty in parameter space varies with respect to the chosen analysis approach or specific experiment design. We conclude with a discussion of how our simulated results relate to the integration of experimental and computational investigations to explore signal-activated gene expression models in yeast (Neuert et al 2013 Science 339 584–7) and human cells (Senecal et al 2014 Cell Rep. 8 75–83).
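For readers unfamiliar with the stochastic analyses mentioned above, the sketch below runs a Gillespie stochastic simulation of a minimal two-state gene model (the gene toggles on and off, mRNA is transcribed while on and degrades at a constant rate); the rate constants are illustrative, not those of the cited studies.

```python
# Gillespie SSA for a two-state gene: off <-> on, on -> on + mRNA, mRNA -> 0.
import numpy as np

rng = np.random.default_rng(1)
k_on, k_off, k_tx, k_deg = 0.1, 0.05, 2.0, 0.2   # illustrative rates (1/min)

def ssa(t_end=200.0):
    t, gene, mrna, traj = 0.0, 0, 0, []
    while t < t_end:
        rates = np.array([k_on * (1 - gene), k_off * gene,
                          k_tx * gene, k_deg * mrna])
        total = rates.sum()                       # always > 0 in this model
        t += rng.exponential(1.0 / total)         # time to next reaction
        r = rng.choice(4, p=rates / total)        # which reaction fires
        if r == 0:   gene = 1
        elif r == 1: gene = 0
        elif r == 2: mrna += 1
        else:        mrna -= 1
        traj.append((t, gene, mrna))
    return traj

print(ssa()[-1])   # final (time, gene state, mRNA copy number)
```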
The SCEC/USGS dynamic earthquake rupture code verification exercise
Harris, R.A.; Barall, M.; Archuleta, R.; Dunham, E.; Aagaard, Brad T.; Ampuero, J.-P.; Bhat, H.; Cruz-Atienza, Victor M.; Dalguer, L.; Dawson, P.; Day, S.; Duan, B.; Ely, G.; Kaneko, Y.; Kase, Y.; Lapusta, N.; Liu, Yajing; Ma, S.; Oglesby, D.; Olsen, K.; Pitarka, A.; Song, S.; Templeton, E.
2009-01-01
Numerical simulations of earthquake rupture dynamics are now common, yet it has been difficult to test the validity of these simulations because there have been few field observations and no analytic solutions with which to compare the results. This paper describes the Southern California Earthquake Center/U.S. Geological Survey (SCEC/USGS) Dynamic Earthquake Rupture Code Verification Exercise, where codes that simulate spontaneous rupture dynamics in three dimensions are evaluated and the results produced by these codes are compared using Web-based tools. This is the first time that a broad and rigorous examination of numerous spontaneous rupture codes has been performed—a significant advance in this science. The automated process developed to attain this achievement provides for a future where testing of codes is easily accomplished. Scientists who use computer simulations to understand earthquakes utilize a range of techniques. Most of these assume that earthquakes are caused by slip at depth on faults in the Earth, but hereafter the strategies vary. Among the methods used in earthquake mechanics studies are kinematic approaches and dynamic approaches. The kinematic approach uses a computer code that prescribes the spatial and temporal evolution of slip on the causative fault (or faults). These types of simulations are very helpful, especially since they can be used in seismic data inversions to relate the ground motions recorded in the field to slip on the fault(s) at depth. However, these kinematic solutions generally provide no insight into the physics driving the fault slip or information about why the involved fault(s) slipped that much (or that little). In other words, these kinematic solutions may lack information about the physical dynamics of earthquake rupture that will be most helpful in forecasting future events. To help address this issue, some researchers use computer codes to numerically simulate earthquakes and construct dynamic, spontaneous rupture (hereafter called “spontaneous rupture”) solutions. For these types of numerical simulations, rather than prescribing the slip function at each location on the fault(s), just the friction constitutive properties and initial stress conditions are prescribed. The subsequent stresses and fault slip spontaneously evolve over time as part of the elasto-dynamic solution. Therefore, spontaneous rupture computer simulations of earthquakes allow us to include everything that we know, or think that we know, about earthquake dynamics and to test these ideas against earthquake observations.
Efficient techniques for wave-based sound propagation in interactive applications
NASA Astrophysics Data System (ADS)
Mehra, Ravish
Sound propagation techniques model the effect of the environment on sound waves and predict their behavior from the point of emission at the source to the final point of arrival at the listener. Sound is a pressure wave produced by mechanical vibration of a surface that propagates through a medium such as air or water, and the problem of sound propagation can be formulated mathematically as a second-order partial differential equation called the wave equation. Accurate techniques based on solving the wave equation, also called wave-based techniques, are too expensive computationally and memory-wise. Therefore, these techniques face many challenges in terms of their applicability in interactive applications, including sound propagation in large environments, time-varying source and listener directivity, and high simulation cost for mid-frequencies. In this dissertation, we propose a set of efficient wave-based sound propagation techniques that solve these three challenges and enable the use of wave-based sound propagation in interactive applications. Firstly, we propose a novel equivalent source technique for interactive wave-based sound propagation in large scenes spanning hundreds of meters. It is based on the equivalent source theory used for solving radiation and scattering problems in acoustics and electromagnetics. Instead of using a volumetric or surface-based approach, this technique takes an object-centric approach to sound propagation. The proposed equivalent source technique generates realistic acoustic effects and takes orders of magnitude less runtime memory compared to prior wave-based techniques. Secondly, we present an efficient framework for handling time-varying source and listener directivity for interactive wave-based sound propagation. The source directivity is represented as a linear combination of elementary spherical harmonic sources. This spherical harmonic-based representation of source directivity can support analytical, data-driven, rotating or time-varying directivity functions at runtime. Unlike previous approaches, the listener directivity approach can be used to compute spatial audio (3D audio) for a moving, rotating listener at interactive rates. Lastly, we propose an efficient GPU-based time-domain solver for the wave equation that enables wave simulation up to the mid-frequency range in tens of minutes on a desktop computer. It is demonstrated that by carefully mapping all the components of the wave simulator to match the parallel processing capabilities of the graphics processors, significant improvement in performance can be achieved compared to CPU-based simulators, while maintaining numerical accuracy. We validate these techniques with offline numerical simulations and measured data recorded in an outdoor scene. We present results of preliminary user evaluations conducted to study the impact of these techniques on users' immersion in the virtual environment. We have integrated these techniques with the Half-Life 2 game engine, Oculus Rift head-mounted display, and Xbox game controller to enable users to experience high-quality acoustic effects and spatial audio in the virtual environment.
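As a toy illustration of the time-domain wave solvers discussed above (not the author's GPU implementation), a one-dimensional finite-difference scheme for the scalar wave equation can be written in a few lines; the grid size, wave speed, and source are arbitrary choices.

```python
# Minimal 1D FDTD solver for the scalar wave equation p_tt = c^2 * p_xx.
import numpy as np

c, dx = 343.0, 0.05                 # speed of sound (m/s), grid spacing (m)
dt = 0.9 * dx / c                   # time step satisfying the CFL condition
n, steps = 400, 600
p_prev, p, p_next = (np.zeros(n) for _ in range(3))

for step in range(steps):
    lap = p[:-2] - 2.0 * p[1:-1] + p[2:]                     # discrete Laplacian
    p_next[1:-1] = 2.0 * p[1:-1] - p_prev[1:-1] + (c * dt / dx) ** 2 * lap
    p_next[n // 4] += np.sin(2 * np.pi * 500.0 * step * dt)  # 500 Hz point source
    p_prev, p, p_next = p, p_next, p_prev                    # rotate time levels

print(float(np.abs(p).max()))       # peak pressure amplitude on the grid
```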
NASA Astrophysics Data System (ADS)
Saxena, Nishank; Hofmann, Ronny; Alpak, Faruk O.; Berg, Steffen; Dietderich, Jesse; Agarwal, Umang; Tandon, Kunj; Hunter, Sander; Freeman, Justin; Wilson, Ove Bjorn
2017-11-01
We generate a novel reference dataset to quantify the impact of numerical solvers, boundary conditions, and simulation platforms. We consider a variety of microstructures ranging from idealized pipes to digital rocks. Pore throats of the digital rocks considered are large enough to be well resolved with state-of-the-art micro-computed tomography technology. Permeability is computed using multiple numerical engines, 12 in total, including Lattice-Boltzmann, computational fluid dynamics, voxel based, fast semi-analytical, and known empirical models. Thus, we provide a measure of the uncertainty associated with flow computations of digital media. Moreover, the reference and standards dataset generated is the first of its kind and can be used to test and improve new fluid flow algorithms. We find that there is overall good agreement between solvers for idealized cross-section shape pipes. As expected, the disagreement increases with the complexity of the pore space. Numerical solutions for pipes with sinusoidal variation of cross section show larger variability compared to pipes of constant cross-section shape. We notice relatively larger variability in the computed permeability of digital rocks, with a coefficient of variation of up to 25% between the various solvers. Still, these differences are small given other subsurface uncertainties. The observed differences between solvers can be attributed to several causes, including differences in boundary conditions, numerical convergence criteria, and parameterization of fundamental physics equations. Solvers that perform additional meshing of irregular pore shapes require an additional step in practical workflows, one which involves skill and can introduce further uncertainty. Computation times for digital rocks vary from minutes to several days depending on the algorithm and available computational resources. We find that more stringent convergence criteria can improve solver accuracy, but at the expense of longer computation time.
Capsule modeling of high foot implosion experiments on the National Ignition Facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clark, D. S.; Kritcher, A. L.; Milovich, J. L.
This study summarizes the results of detailed, capsule-only simulations of a set of high foot implosion experiments conducted on the National Ignition Facility (NIF). These experiments span a range of ablator thicknesses, laser powers, and laser energies, and modeling these experiments as a set is important to assess whether the simulation model can reproduce the trends seen experimentally as the implosion parameters were varied. Two-dimensional (2D) simulations have been run including a number of effects—both nominal and off-nominal—such as hohlraum radiation asymmetries, surface roughness, the capsule support tent, and hot electron pre-heat. Selected three-dimensional simulations have also been run to assess the validity of the 2D axisymmetric approximation. As a composite, these simulations represent the current state of understanding of NIF high foot implosion performance using the best and most detailed computational model available. While the most detailed simulations show approximate agreement with the experimental data, it is evident that the model remains incomplete and further refinements are needed. Nevertheless, avenues for improved performance are clearly indicated.
PetriScape - A plugin for discrete Petri net simulations in Cytoscape.
Almeida, Diogo; Azevedo, Vasco; Silva, Artur; Baumbach, Jan
2016-06-04
Systems biology plays a central role for biological network analysis in the post-genomic era. Cytoscape is the standard bioinformatics tool offering the community an extensible platform for computational analysis of the emerging cellular network together with experimental omics data sets. However, only few apps/plugins/tools are available for simulating network dynamics in Cytoscape 3. Many approaches of varying complexity exist but none of them have been integrated into Cytoscape as app/plugin yet. Here, we introduce PetriScape, the first Petri net simulator for Cytoscape. Although discrete Petri nets are quite simplistic models, they are capable of modeling global network properties and simulating their behaviour. In addition, they are easily understood and well visualizable. PetriScape comes with the following main functionalities: (1) import of biological networks in SBML format, (2) conversion into a Petri net, (3) visualization as Petri net, and (4) simulation and visualization of the token flow in Cytoscape. PetriScape is the first Cytoscape plugin for Petri nets. It allows a straightforward Petri net model creation, simulation and visualization with Cytoscape, providing clues about the activity of key components in biological networks.
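To make the token-flow idea concrete, here is a minimal discrete Petri net simulator in plain Python (independent of Cytoscape and PetriScape themselves); the two-place net is a made-up example.

```python
# Minimal discrete Petri net: fire enabled transitions, moving integer tokens.
import random

places = {"A": 3, "B": 0}                     # marking: tokens per place
transitions = [
    {"in": {"A": 1}, "out": {"B": 1}},        # t1: A -> B
    {"in": {"B": 2}, "out": {"A": 1}},        # t2: 2B -> A
]

def enabled(t):
    """A transition is enabled if every input place holds enough tokens."""
    return all(places[p] >= k for p, k in t["in"].items())

random.seed(0)
for _ in range(10):                           # ten firing rounds
    ready = [t for t in transitions if enabled(t)]
    if not ready:
        break                                 # deadlock: nothing can fire
    t = random.choice(ready)
    for p, k in t["in"].items():
        places[p] -= k                        # consume input tokens
    for p, k in t["out"].items():
        places[p] += k                        # produce output tokens
    print(places)
```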
Forster, Jeri E.; MaWhinney, Samantha; Ball, Erika L.; Fairclough, Diane
2011-01-01
Dropout is common in longitudinal clinical trials and when the probability of dropout depends on unobserved outcomes even after conditioning on available data, it is considered missing not at random and therefore nonignorable. To address this problem, mixture models can be used to account for the relationship between a longitudinal outcome and dropout. We propose a Natural Spline Varying-coefficient mixture model (NSV), which is a straightforward extension of the parametric Conditional Linear Model (CLM). We assume that the outcome follows a varying-coefficient model conditional on a continuous dropout distribution. Natural cubic B-splines are used to allow the regression coefficients to semiparametrically depend on dropout and inference is therefore more robust. Additionally, this method is computationally stable and relatively simple to implement. We conduct simulation studies to evaluate performance and compare methodologies in settings where the longitudinal trajectories are linear and dropout time is observed for all individuals. Performance is assessed under conditions where model assumptions are both met and violated. In addition, we compare the NSV to the CLM and a standard random-effects model using an HIV/AIDS clinical trial with probable nonignorable dropout. The simulation studies suggest that the NSV is an improvement over the CLM when dropout has a nonlinear dependence on the outcome. PMID:22101223
NASA Astrophysics Data System (ADS)
Mateo, Cherry May R.; Yamazaki, Dai; Kim, Hyungjun; Champathong, Adisorn; Vaze, Jai; Oki, Taikan
2017-10-01
Global-scale river models (GRMs) are core tools for providing consistent estimates of global flood hazard, especially in data-scarce regions. Due to former limitations in computational power and input datasets, most GRMs have been developed to use simplified representations of flow physics and run at coarse spatial resolutions. With increasing computational power and improved datasets, the application of GRMs to finer resolutions is becoming a reality. To support development in this direction, the suitability of GRMs for application to finer resolutions needs to be assessed. This study investigates the impacts of spatial resolution and flow connectivity representation on the predictive capability of a GRM, CaMa-Flood, in simulating the 2011 extreme flood in Thailand. Analyses show that when single downstream connectivity (SDC) is assumed, simulation results deteriorate with finer spatial resolution; Nash-Sutcliffe efficiency coefficients decreased by more than 50 % between simulation results at 10 km resolution and 1 km resolution. When multiple downstream connectivity (MDC) is represented, simulation results slightly improve with finer spatial resolution. The SDC simulations result in excessive backflows on very flat floodplains due to the restrictive flow directions at finer resolutions. MDC channels attenuated these effects by maintaining flow connectivity and flow capacity between floodplains at varying spatial resolutions. While a regional-scale flood was chosen as a test case, these findings should be universal and may have significant impacts on large- to global-scale simulations, especially in regions where mega deltas exist. These results demonstrate that a GRM can be used for higher resolution simulations of large-scale floods, provided that MDC in rivers and floodplains is adequately represented in the model structure.
Simulation and phases of macroscopic particles in vortex flow
NASA Astrophysics Data System (ADS)
Rice, Heath Eric
Granular materials are an interesting class of media in that they exhibit many disparate characteristics depending on conditions. The same set of particles may behave like a solid, liquid, gas, something in-between, or something completely unique depending on the conditions. Practically speaking, granular materials are used in many aspects of manufacturing; therefore, any new information gleaned about them may help refine these techniques. For example, learning of a possible instability may help avoid it in practical application, saving machinery, money, and even personnel. To that end, we intend to simulate a granular medium under tornado-like vortex airflow by varying particle parameters and observing the behaviors that arise. The simulation itself was written in Python from the ground up, starting from the basic simulation equations in Poschel [1]. From there, particle spin, viscous friction, and vertical and tangential airflow were added. The simulations were then run in batches on a local cluster computer, varying the parameters of radius, flow force, density, and friction. Phase plots were created after observing the behaviors of the simulations, and the regions and borders were analyzed. Most of the results were as expected: smaller particles behaved more like a gas, larger particles behaved more like a solid, and most intermediate simulations behaved like a liquid. A small subset formed an interesting crossover region in the center, and under moderate forces began to throw a few particles at a time upward from the center in a fountain-like effect. Most borders between regions appeared to agree with analysis, following a parabolic critical rotational velocity at which the parabolic surface of the material dips to the bottom of the mass of particles. The fountain effects seemed to occur at speeds along and slightly faster than this division. [1] Please see thesis for references.
Computational Study of Uniaxial Deformations in Silica Aerogel Using a Coarse-Grained Model.
Ferreiro-Rangel, Carlos A; Gelb, Lev D
2015-07-09
Simulations of a flexible coarse-grained model are used to study silica aerogels. This model, introduced in a previous study (J. Phys. Chem. C 2007, 111, 15792), consists of spherical particles which interact through weak nonbonded forces and strong interparticle bonds that may form and break during the simulations. Small-deformation simulations are used to determine the elastic moduli of a wide range of material models, and large-deformation simulations are used to probe structural evolution and plastic deformation. Uniaxial deformation at constant transverse pressure is simulated using two methods: a hybrid Monte Carlo approach combining molecular dynamics for the motion of individual particles and stochastic moves for transverse stress equilibration, and isothermal molecular dynamics simulations at fixed Poisson ratio. Reasonable agreement on elastic moduli is obtained except at very low densities. The model aerogels exhibit Poisson ratios between 0.17 and 0.24, with higher-density gels clustered around 0.20, and Young's moduli that vary with aerogel density according to a power-law dependence with an exponent near 3.0. These results are in agreement with reported experimental values. The models are shown to satisfy the expected homogeneous isotropic linear-elastic relationship between bulk and Young's moduli at higher densities, but there are systematic deviations at the lowest densities. Simulations of large compressive and tensile strains indicate that these materials display a ductile-to-brittle transition as the density is increased, and that the tensile strength varies with density according to a power law, with an exponent in reasonable agreement with experiment. Auxetic behavior is observed at large tensile strains in some models. Finally, at maximum tensile stress very few broken bonds are found in the materials, in accord with the theory that only a small fraction of the material structure is actually load-bearing.
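The power-law scalings reported above (Young's modulus and tensile strength versus density) are the kind of relation easily extracted from simulation output with a log-log fit; the sketch below does this on made-up (density, modulus) pairs, not the paper's data.

```python
# Log-log least-squares fit of E = a * rho^n (synthetic illustrative data).
import numpy as np

rho = np.array([0.10, 0.15, 0.22, 0.33, 0.50])    # g/cm^3 (made up)
E   = np.array([0.004, 0.013, 0.042, 0.14, 0.50]) # GPa    (made up)

n, log_a = np.polyfit(np.log(rho), np.log(E), 1)  # slope is the exponent
print(f"exponent n = {n:.2f}, prefactor a = {np.exp(log_a):.3f} GPa")
```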
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall-Anese, Emiliano; Simonetto, Andrea
This paper focuses on the design of online algorithms based on prediction-correction steps to track the optimal solution of a time-varying constrained problem. Existing prediction-correction methods have been shown to work well for unconstrained convex problems and for settings where obtaining the inverse of the Hessian of the cost function can be computationally affordable. The prediction-correction algorithm proposed in this paper addresses the limitations of existing methods by tackling constrained problems and by designing a first-order prediction step that relies on the Hessian of the cost function (and does not require computing its inverse). Analytical results are established to quantify the tracking error. Numerical simulations corroborate the analytical results and showcase the performance and benefits of the algorithms.
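A minimal sketch of the prediction-correction idea follows (hedged: a generic first-order variant for an unconstrained time-varying quadratic, not the paper's constrained algorithm): predict how the optimizer drifts using a finite difference of the gradient in time, then correct with a gradient step on the new cost.

```python
# Prediction-correction tracking of the drifting optimum of f(x, t) = (x - r(t))^2 / 2.
import numpy as np

dt, alpha = 0.1, 0.5
r = lambda t: np.sin(t)                     # drifting target (illustrative)
grad = lambda x, t: x - r(t)                # gradient of the time-varying cost

x, t = 0.0, 0.0
for k in range(100):
    # Prediction: Newton-style compensation for the drift of the optimum,
    # using a finite difference of the gradient in time (the Hessian is 1 here).
    x -= grad(x, t + dt) - grad(x, t)
    t += dt
    x -= alpha * grad(x, t)                 # correction: plain gradient step
    if k % 20 == 0:
        print(f"t={t:5.1f}  x={x:+.3f}  x*(t)={r(t):+.3f}")
```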
Performance of computer vision in vivo flow cytometry with low fluorescence contrast
Markovic, Stacey; Li, Siyuan; Niedre, Mark
2015-01-01
Detection and enumeration of circulating cells in the bloodstream of small animals are important in many areas of preclinical biomedical research, including cancer metastasis, immunology, and reproductive medicine. Optical in vivo flow cytometry (IVFC) represents a class of technologies that allow noninvasive and continuous enumeration of circulating cells without drawing blood samples. We recently developed a technique termed computer vision in vivo flow cytometry (CV-IVFC) that uses a high-sensitivity fluorescence camera and an automated computer vision algorithm to interrogate relatively large circulating blood volumes in the ear of a mouse. We detected circulating cells at concentrations as low as 20 cells/mL. In the present work, we characterized the performance of CV-IVFC under low-contrast imaging conditions with (1) weak cell fluorescent labeling, using cell-simulating fluorescent microspheres with varying brightness, and (2) high background tissue autofluorescence, by varying the autofluorescence properties of optical phantoms. Our analysis indicates that CV-IVFC can robustly track and enumerate circulating cells with at least 50% sensitivity even in conditions with contrast degraded by two orders of magnitude relative to our previous in vivo work. These results support the significant potential utility of CV-IVFC in a wide range of in vivo biological models. PMID:25822954
Touch-screen tablet user configurations and case-supported tilt affect head and neck flexion angles.
Young, Justin G; Trudeau, Matthieu; Odell, Dan; Marinelli, Kim; Dennerlein, Jack T
2012-01-01
The aim of this study was to determine how head and neck postures vary when using two media tablet (slate) computers in four common user configurations. Fifteen experienced media tablet users completed a set of simulated tasks with two media tablets in four typical user configurations. The four configurations were: on the lap and held with the user's hands, on the lap and in a case, on a table and in a case, and on a table and in a case set at a high angle for watching movies. An infra-red LED marker based motion analysis system measured head/neck postures. Head and neck flexion significantly varied across the four configurations and across the two tablets tested. Head and neck flexion angles during tablet use were greater, in general, than angles previously reported for desktop and notebook computing. Postural differences between tablets were driven by case designs, which provided significantly different tilt angles, while postural differences between configurations were driven by gaze and viewing angles. Head and neck posture during tablet computing can be improved by placing the tablet higher to avoid low gaze angles (i.e. on a table rather than on the lap) and through the use of a case that provides optimal viewing angles.
3-D Analysis of Flanged Joints Through Various Preload Methods Using ANSYS
NASA Astrophysics Data System (ADS)
Murugan, Jeyaraj Paul; Kurian, Thomas; Jayaprakash, Janardhan; Sreedharapanickar, Somanath
2015-10-01
Flanged joints are employed in aerospace solid rocket motor hardware for the integration of various systems or subsystems. Hence, the design of flanged joints is very important in ensuring the integrity of the motor while functioning. As these joints are subjected to high loads due to internal pressure acting inside the motor chamber, an appropriate preload is required to be applied to the joint before subjecting it to external load. Preload, also known as clamp load, is applied on the fastener and helps to hold the mating flanges together. Generally, preload is simulated as a thermal load, and the exact preload is obtained through a number of iterations. In fact, more iterations are required when considering the material nonlinearity of the bolt. This way of simulation takes more computational time to generate the required preload. Nowadays, most commercial software packages use pretension elements for simulating the preload. This element does not require iterations for inducing the preload and can be solved in a single iteration. This approach takes less computational time, and thus one can easily study the characteristics of the joint by varying the preload. When the structure contains a large number of joints with different sizes of fasteners, pretension elements are preferable to the thermal load approach for simulating each size of fastener. This paper covers the details of analyses carried out simulating the preload through various options, viz. a thermal load, the initial state command, and pretension elements, using the ANSYS finite element package.
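When preload is simulated as a thermal load, a first estimate of the temperature drop that produces a target clamp force in a fully restrained bolt follows from F = alpha * dT * E * A; the sketch below evaluates this for illustrative M10-bolt numbers (the iterations the authors mention are needed when joint compliance and material nonlinearity make this first estimate inexact).

```python
# First-cut equivalent temperature drop for a target bolt preload, F = alpha*dT*E*A.
import math

F     = 25_000.0          # target preload (N), illustrative
E     = 200e9             # steel Young's modulus (Pa)
alpha = 12e-6             # thermal expansion coefficient (1/K)
d     = 0.010             # nominal bolt diameter (m), roughly M10
A     = math.pi * d**2 / 4

dT = F / (alpha * E * A)  # temperature drop applied to the bolt elements
print(f"apply dT = -{dT:.1f} K to induce ~{F/1000:.0f} kN preload")
```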
Scalability Test of Multiscale Fluid-Platelet Model for Three Top Supercomputers
Zhang, Peng; Zhang, Na; Gao, Chao; Zhang, Li; Gao, Yuxiang; Deng, Yuefan; Bluestein, Danny
2016-01-01
We have tested the scalability of three supercomputers: the Tianhe-2, Stampede and CS-Storm, with multiscale fluid-platelet simulations, in which a highly-resolved and efficient numerical model for nanoscale biophysics of platelets in microscale viscous biofluids is considered. Three experiments involving varying problem sizes were performed: Exp-S: 680,718-particle single-platelet; Exp-M: 2,722,872-particle 4-platelet; and Exp-L: 10,891,488-particle 16-platelet. Our implementation of a multiple time-stepping (MTS) algorithm improved on the performance of single time-stepping (STS) in all experiments. Using MTS, our model achieved the following simulation rates: 12.5, 25.0, 35.5 μs/day for Exp-S and 9.09, 6.25, 14.29 μs/day for Exp-M on Tianhe-2, CS-Storm 16-K80 and Stampede K20. The best rate for Exp-L was 6.25 μs/day, on Stampede. Utilizing current advanced HPC resources, the simulation rates achieved by our algorithms bring within reach complex multiscale simulations for solving vexing problems at the interface of biology and engineering, such as thrombosis in blood flow, which combines millisecond-scale hematology with microscale blood flow at resolutions of micro-to-nanoscale cellular components of platelets. This study of the performance characteristics of supercomputers running advanced computational algorithms, chosen for the optimal trade-offs they offer in computational performance, demonstrates that such simulations are feasible with currently available HPC resources. PMID:27570250
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spong, D.A.
The design techniques and physics analysis of modern stellarator configurations for magnetic fusion research rely heavily on high performance computing and simulation. Stellarators, which are fundamentally 3-dimensional in nature, offer significantly more design flexibility than more symmetric devices such as the tokamak. By varying the outer boundary shape of the plasma, a variety of physics features, such as transport, stability, and heating efficiency, can be optimized. Scientific visualization techniques are an important adjunct to this effort as they provide a necessary ergonomic link between the numerical results and the intuition of the human researcher. The authors have developed a variety of visualization techniques for stellarators which both facilitate the design optimization process and allow the physics simulations to be more readily understood.
Decoupled 1D/3D analysis of a hydraulic valve
NASA Astrophysics Data System (ADS)
Mehring, Carsten; Zopeya, Ashok; Latham, Matt; Ihde, Thomas; Massie, Dan
2014-10-01
Analysis approaches during product development of fluid valves and other aircraft fluid delivery components vary greatly depending on the development stage. Traditionally, empirical or simplistic one-dimensional tools are deployed during preliminary design, whereas detailed analyses such as CFD (Computational Fluid Dynamics) are used to refine a selected design during the detailed design stage. In recent years, combined 1D/3D co-simulation has been deployed specifically for system-level simulations requiring an increased level of analysis detail for one or more components. This paper presents a decoupled 1D/3D analysis approach where 3D CFD analysis results are utilized to enhance the fidelity of a dynamic 1D model in the context of an aircraft fuel valve.
Joint-space adaptive control of a 6 DOF end-effector with closed-kinematic chain mechanism
NASA Technical Reports Server (NTRS)
Nguyen, Charles C.; Zhou, Zhen-Lei
1989-01-01
The development is presented for a joint-space adaptive scheme that controls the joint position of a six-degree-of-freedom (DOF) robot end-effector performing fine and precise motion within a very limited workspace. The end-effector was built to study autonomous assembly of NASA hardware in space. The design of the adaptive controller is based on the concept of model reference adaptive control (MRAC) and the Lyapunov direct method. In the development, it is assumed that the end-effector performs slowly varying motion. Computer simulation is performed to investigate the performance of the developed control scheme on position control of the end-effector. Simulation results show that the adaptive control scheme provides excellent tracking of several test paths.
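A minimal sketch of the MRAC concept the controller is built on follows (the classic MIT-rule form for a scalar first-order plant, not the paper's six-DOF Lyapunov-based design): adapt feedforward and feedback gains so the plant output tracks a reference model.

```python
# MIT-rule MRAC: plant y' = -a*y + b*u, reference model ym' = -am*ym + am*r.
import numpy as np

a, b = 1.0, 0.5                 # plant parameters (unknown to the controller)
am, gamma, dt = 2.0, 1.0, 0.001
y = ym = th_r = th_y = 0.0
for k in range(40000):
    r = 1.0 if (k * dt) % 8 < 4 else -1.0   # square-wave reference command
    u = th_r * r - th_y * y                 # adaptive control law
    y  += dt * (-a * y + b * u)             # plant update (Euler)
    ym += dt * (-am * ym + am * r)          # reference model update
    e = y - ym                              # tracking error
    th_r += dt * (-gamma * e * r)           # MIT-rule gain adaptation
    th_y += dt * ( gamma * e * y)
print(f"th_r={th_r:.2f} (ideal {am/b:.2f}), th_y={th_y:.2f} (ideal {(am-a)/b:.2f})")
```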
Hormone Purification by Isoelectric Focusing
NASA Technical Reports Server (NTRS)
Bier, M.
1985-01-01
Various ground-based research approaches are being applied to a more definitive evaluation of the natures and degrees of electroosmosis effects on the separation capabilities of the Isoelectric Focusing (IEF) process. A primary instrumental system for this work involves rotationally stabilized, horizontal electrophoretic columns specially adapted for the IEF process. Representative adaptations include segmentation, baffles/screens, and surface coatings. Comparative performance and development testing are pursued against the type of column or cell established as an engineering model. Previously developed computer simulation capabilities are used to predict low-gravity behavior patterns and performance for IEF apparatus geometries of direct project interest. Three existing mathematical models plus potential new routines for particular aspects of simulating instrument fluid patterns with varied wall electroosmosis influences are being exercised.
Clustering and phase transitions on a neutral landscape
NASA Astrophysics Data System (ADS)
Scott, Adam D.; King, Dawn M.; Marić, Nevena; Bahar, Sonya
2013-06-01
Recent computational studies have shown that speciation can occur under neutral conditions, i.e., when the simulated organisms all have identical fitness. These works bear comparison with mathematical studies of clustering on neutral landscapes in the context of branching and coalescing random walks. Here, we show that sympatric clustering/speciation can occur on a neutral landscape whose dimensions specify only the simulated organisms’ phenotypes. We demonstrate that clustering occurs not only in the case of assortative mating, but also in the case of asexual fission; it is not observed in the control case of random mating. We find that the population size and the number of clusters undergo a second-order non-equilibrium phase transition as the maximum mutation size is varied.
Nomura, Ken-Ichi; Kalia, Rajiv K; Nakano, Aiichiro; Vashishta, Priya; van Duin, Adri C T; Goddard, William A
2007-10-05
Mechanical stimuli in energetic materials initiate chemical reactions at shock fronts prior to detonation. Shock sensitivity measurements provide widely varying results, and quantum-mechanical calculations are unable to handle systems large enough to describe shock structure. Recent developments in reactive force-field molecular dynamics (ReaxFF-MD) combined with advances in parallel computing have paved the way to accurately simulate reaction pathways along with the structure of shock fronts. Our multimillion-atom ReaxFF-MD simulations of 1,3,5-trinitro-1,3,5-triazine (RDX) reveal that detonation is preceded by a transition from a diffuse shock front with well-ordered molecular dipoles behind it to a disordered dipole distribution behind a sharp front.
NASA Technical Reports Server (NTRS)
Druyan, Leonard M.
2012-01-01
Climate modeling is a very broad topic, so a single volume can only offer a small sampling of relevant research activities. This volume of 14 chapters includes descriptions of a variety of modeling studies for a variety of geographic regions by an international roster of authors. The climate research community generally uses the rubric "climate models" to refer to organized sets of computer instructions that produce simulations of climate evolution. The code is based on physical relationships that describe the shared variability of meteorological parameters such as temperature, humidity, precipitation rate, circulation, radiation fluxes, etc. Three-dimensional climate models are integrated over time in order to compute the temporal and spatial variations of these parameters. Model domains can be global or regional, and the horizontal and vertical resolutions of the computational grid vary from model to model. Considering the entire climate system requires accounting for interactions between solar insolation and atmospheric, oceanic and continental processes, the latter including land hydrology and vegetation. Model simulations may concentrate on one or more of these components, but the most sophisticated models will estimate the mutual interactions of all of these environments. Advances in computer technology have prompted investments in more complex model configurations that consider more phenomena interactions than were possible with yesterday's computers. However, not every attempt to add to the computational layers is rewarded by better model performance. Extensive research is required to test and document any advantages gained by greater sophistication in model formulation. One purpose for publishing climate model research results is to present purported advances for evaluation by the scientific community.
Reciprocal-space mapping of epitaxic thin films with crystallite size and shape polydispersity.
Boulle, A; Conchon, F; Guinebretière, R
2006-01-01
A development is presented that allows the simulation of reciprocal-space maps (RSMs) of epitaxic thin films exhibiting fluctuations in the size and shape of the crystalline domains over which diffraction is coherent (crystallites). Three different crystallite shapes are studied, namely parallelepipeds, trigonal prisms and hexagonal prisms. For each shape, two cases are considered. Firstly, the overall size is allowed to vary but with a fixed thickness/width ratio. Secondly, the thickness and width are allowed to vary independently. The calculations are performed assuming three different size probability density functions: the normal distribution, the lognormal distribution and a general histogram distribution. In all cases considered, the computation of the RSM only requires a two-dimensional Fourier integral and the integrand has a simple analytical expression, i.e. there is no significant increase in computing times by taking size and shape fluctuations into account. The approach presented is compatible with most lattice disorder models (dislocations, inclusions, mosaicity, ...) and allows a straightforward account of the instrumental resolution. The applicability of the model is illustrated with the case of an yttria-stabilized zirconia film grown on sapphire.
NASA Technical Reports Server (NTRS)
Ahuja, K. K.; Mendoza, J.
1995-01-01
This report documents the results of an experimental investigation on the response of a cavity to external flowfields. The primary objective of this research was to acquire benchmark data on the effects of cavity length, width, depth, upstream boundary layer, and flow temperature on cavity noise. These data were to be used for validation of computational aeroacoustic (CAA) codes for cavity noise. To achieve this objective, a systematic set of acoustic and flow measurements was made for subsonic turbulent flows approaching a cavity. These measurements were conducted in the research facilities of the Georgia Tech Research Institute. Two cavity models were designed, one for heated flow and another for unheated flow studies. Both models were designed such that the cavity length (L) could easily be varied while holding fixed the depth (D) and width (W) dimensions of the cavity. Depth and width blocks were manufactured so that these dimensions could be varied as well. A wall jet issuing from a rectangular nozzle was used to simulate flows over the cavity.
Parallel Computation of the Regional Ocean Modeling System (ROMS)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, P; Song, Y T; Chao, Y
2005-04-05
The Regional Ocean Modeling System (ROMS) is a regional ocean general circulation modeling system solving the free surface, hydrostatic, primitive equations over varying topography. It is free software distributed world-wide for studying both complex coastal ocean problems and the basin-to-global scale ocean circulation. The original ROMS code could only be run on shared-memory systems. With the increasing need to simulate larger model domains with finer resolutions and on a variety of computer platforms, there is a need in the ocean-modeling community to have a ROMS code that can be run on any parallel computer ranging from 10 to hundreds of processors. Recently, we have explored parallelization for ROMS using the MPI programming model. In this paper, an efficient parallelization strategy for such a large-scale scientific software package, based on an existing shared-memory computing model, is presented. In addition, scientific applications and data-performance issues on a couple of SGI systems, including Columbia, the world's third-fastest supercomputer, are discussed.
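An MPI port of this kind will typically rely on halo (ghost-cell) exchanges between neighboring subdomains; the sketch below shows the generic one-dimensional pattern with mpi4py, unrelated to the actual ROMS source.

```python
# Generic 1D domain decomposition with halo exchange (run under mpirun -n N).
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
left  = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

local = np.full(10 + 2, float(rank))          # 10 interior cells + 2 ghost cells
send_r, recv_l = local[-2:-1], local[0:1]     # contiguous views into the array
send_l, recv_r = local[1:2],  local[-1:]

# Exchange: my last interior cell fills the right neighbor's left ghost cell,
# and vice versa; MPI.PROC_NULL turns boundary exchanges into no-ops.
comm.Sendrecv(send_r, dest=right, recvbuf=recv_l, source=left)
comm.Sendrecv(send_l, dest=left,  recvbuf=recv_r, source=right)
print(rank, local[0], local[-1])              # ghost cells now hold neighbor data
```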
Carswell, Dave; Hilton, Andy; Chan, Chris; McBride, Diane; Croft, Nick; Slone, Avril; Cross, Mark; Foster, Graham
2013-08-01
The objective of this study was to demonstrate the potential of Computational Fluid Dynamics (CFD) simulations in predicting the levels of haemolysis in ventricular assist devices (VADs). Three different prototypes of a radial flow VAD have been examined experimentally and computationally using CFD modelling to assess device haemolysis. The flow field was computed using a CFD model developed with the commercial software Ansys CFX 13 and a set of custom haemolysis analysis tools. Experimental values for the Normalised Index of Haemolysis (NIH) were calculated as 0.020 g/100 L, 0.014 g/100 L and 0.0042 g/100 L for the three designs. Numerical analysis predicts an NIH of 0.021 g/100 L, 0.017 g/100 L and 0.0057 g/100 L, respectively. The actual differences between experimental and numerical results vary between 0.0012 and 0.003 g/100 L, with a variation of 5% for Pump 1 and slightly larger percentage differences for the other pumps. The work detailed herein demonstrates how CFD simulation and, more importantly, the numerical prediction of haemolysis may be used as an effective tool to help the designers of VADs manage the flow paths within pumps, resulting in a less haemolytic device.
A Multi-Fidelity Surrogate Model for the Equation of State for Mixtures of Real Gases
NASA Astrophysics Data System (ADS)
Ouellet, Frederick; Park, Chanyoung; Koneru, Rahul; Balachandar, S.; Rollin, Bertrand
2017-11-01
The explosive dispersal of particles is a complex multiphase and multi-species fluid flow problem. In these flows, the products of detonated explosives must be treated as real gases while the ideal gas equation of state is used for the ambient air. As the products expand outward, they mix with the air and create a region where both state equations must be satisfied. One of the most accurate, yet expensive, methods to handle this problem is an algorithm that iterates between both state equations until both pressure and thermal equilibrium are achieved inside of each computational cell. This work creates a multi-fidelity surrogate model to replace this process. This is achieved by using a Kriging model to produce a curve fit which interpolates selected data from the iterative algorithm. The surrogate is optimized for computing speed and model accuracy by varying the number of sampling points chosen to construct the model. The performance of the surrogate with respect to the iterative method is tested in simulations using a finite volume code. The model's computational speed and accuracy are analyzed to show the benefits of this novel approach. This work was supported by the U.S. Department of Energy, National Nuclear Security Administration, Advanced Simulation and Computing Program, as a Cooperative Agreement under the Predictive Science Academic Alliance Program, under Contract No. DE-NA00023.
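A hedged sketch of the surrogate idea follows, using scikit-learn's Gaussian process regressor as a stand-in Kriging engine and a toy one-dimensional pressure-density relation rather than the real mixture equation of state; all names and values are illustrative.

```python
# Kriging (Gaussian process) surrogate replacing an expensive EOS iteration.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_eos(rho):                 # stand-in for the iterative equilibrium solver
    return 1.0e5 * rho ** 1.4           # toy pressure-density relation (Pa)

rho_train = np.linspace(0.5, 5.0, 15).reshape(-1, 1)   # sampling points
p_train = expensive_eos(rho_train).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
gp.fit(rho_train, p_train)              # fit once, then query cheaply per cell

rho_test = np.array([[2.37]])
p_hat, sigma = gp.predict(rho_test, return_std=True)
print(f"surrogate p = {p_hat[0]:.3e} Pa (+/- {sigma[0]:.1e}),"
      f" true p = {expensive_eos(2.37):.3e} Pa")
```

Varying the number of training points trades accuracy against fitting and evaluation cost, which mirrors the optimization described in the abstract.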
Range Finding with a Plenoptic Camera
2014-03-27
[Front-matter residue; only table-of-contents fragments survive extraction: Experimental Results; Simulated Camera Analysis: Varying Lens Diameter; Simulated Camera Analysis: Varying Detector Size; Simulated Matching Framework; Simulated Camera Performance with SIFT.]
Donkin, Chris; Averell, Lee; Brown, Scott; Heathcote, Andrew
2009-11-01
Cognitive models of the decision process provide greater insight into response time and accuracy than do standard ANOVA techniques. However, such models can be mathematically and computationally difficult to apply. We provide instructions and computer code for three methods for estimating the parameters of the linear ballistic accumulator (LBA), a new and computationally tractable model of decisions between two or more choices. These methods (a Microsoft Excel worksheet, scripts for the statistical program R, and code implementing the LBA in the Bayesian sampling software WinBUGS) vary in their flexibility and user accessibility. We also provide scripts in R that produce a graphical summary of the data and model predictions. In a simulation study, we explored the effect of sample size on parameter recovery for each method. The materials discussed in this article may be downloaded as a supplement from http://brm.psychonomic-journals.org/content/supplemental.
Computational Investigation of the NASA Cascade Cyclonic Separation Device
NASA Technical Reports Server (NTRS)
Hoyt, Nathaniel C.; Kamotani, Yasuhiro; Kadambi, Jaikrishnan; McQuillen, John B.; Sankovic, John M.
2008-01-01
Devices designed to replace the absent buoyancy separation mechanism within a microgravity environment are of considerable interest to NASA, as the functionality of many spacecraft systems is dependent on the proper sequestration of interpenetrating gas and liquid phases. Accordingly, a full multifluid Euler-Euler computational fluid dynamics investigation has been undertaken to evaluate the performance characteristics of one such device, the Cascade Cyclonic Separator, across a full range of inlet volumetric quality with combined volumetric injection rates varying from 1 L/min to 20 L/min. These simulations have delimited the general modes of operation of this class of devices and have proven able to describe the complicated vortex structure and induced pressure gradients that arise. The computational work has furthermore been utilized to analyze design modifications that enhance the overall performance of these devices. The promising results indicate that proper CFD modeling may be successfully used as a tool for microgravity separator design.
The Influence of Realistic Reynolds Numbers on Slat Noise Simulations
NASA Technical Reports Server (NTRS)
Lockard, David P.; Choudhari, Meelan M.
2012-01-01
The slat noise from the 30P/30N high-lift system has been computed using a computational fluid dynamics code in conjunction with a Ffowcs Williams-Hawkings solver. Varying the Reynolds number from 1.71 to 12.0 million based on the stowed chord resulted in slight changes in the radiated noise. Tonal features in the spectra were robust and evident for all Reynolds numbers and even when a spanwise flow was imposed. The general trends observed in near-field fluctuations were also similar for all the different Reynolds numbers. Experiments on simplified, subscale high-lift systems have exhibited noticeable dependencies on the Reynolds number and tripping, although primarily for tonal features rather than the broadband portion of the spectra. Either the 30P/30N model behaves differently, or the computational model is unable to capture these effects. Hence, the results underscore the need for more detailed measurements of the slat cove flow.
Nair, Sankaran N; Czaja, Sara J; Sharit, Joseph
2007-06-01
This article explores the role of age, cognitive abilities, prior experience, and knowledge in skill acquisition for a computer-based simulated customer service task. Fifty-two participants aged 50-80 performed the task over 4 consecutive days following training. They also completed a battery that assessed prior computer experience and cognitive abilities. The data indicated that overall quality and efficiency of performance improved with practice. The predictors of initial level of performance and rate of change in performance varied according to the performance parameter assessed. Age and fluid intelligence predicted initial level and rate of improvement in overall quality, whereas crystallized intelligence and age predicted initial e-mail processing time, and crystallized intelligence predicted rate of change in e-mail processing time over days. We discuss the implications of these findings for the design of intervention strategies.
Air-gas exchange reevaluated: clinically important results of a computer simulation.
Shunmugam, Manoharan; Shunmugam, Sudhakaran; Williamson, Tom H; Laidlaw, D Alistair
2011-10-21
The primary aim of this study was to evaluate the efficiency of air-gas exchange techniques and the factors that influence the final concentration of an intraocular gas tamponade. Parameters were varied to find the optimum method of performing an air-gas exchange in ideal circumstances. A computer model of the eye was designed using 3D software with fluid flow analysis capabilities. Factors such as the angular distance between ports, gas infusion gauge, and exhaust vent gauge and depth were varied in the model. Flow rate and axial length were also modulated to simulate faster injections and more myopic eyes, respectively. The flush volumes of gas required to achieve a 97% intraocular gas fraction were compared. Modulating individual factors did not reveal any clinically significant differences for the angular distance between ports, exhaust vent size and depth, or rate of gas injection. In combination, however, there was a 28% increase in air-gas exchange efficiency when comparing the most efficient with the least efficient parameters studied in this model. The gas flush volume required to achieve a 97% gas fill also increased proportionately, at a ratio of 5.5 to 6.2 times the volume of the eye. A 35-mL flush is adequate for eyes up to 25 mm in axial length; however, eyes longer than this would require a much greater flush volume, and surgeons should consider using two separate 50-mL gas syringes to ensure optimal gas concentration for eyes greater than 25 mm in axial length.
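For context on the 5.5 to 6.2 eye-volume figure, an idealized well-mixed washout model gives a lower bound: if each increment of injected gas mixed instantly, the residual air fraction would decay as exp(-V/V_eye), so a 97% fill would need only about 3.5 eye volumes; the larger simulated ratios reflect imperfect mixing. The quick check below is my illustration, not a calculation from the paper.

```python
# Ideal well-mixed washout: gas fraction f(V) = 1 - exp(-V / V_eye).
import math

V_eye = 4.5                                 # ml, typical vitreous volume (assumed)
V_97 = -V_eye * math.log(1 - 0.97)          # flush volume for a 97% fill
print(f"ideal flush: {V_97:.1f} ml ({V_97 / V_eye:.1f} eye volumes)")
# The simulated exchanges needed 5.5-6.2 eye volumes, i.e. real mixing is
# substantially less efficient than this well-mixed limit.
```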
NASA Technical Reports Server (NTRS)
Mccurdy, D. A.
1985-01-01
A laboratory experiment was conducted to compare the flyover noise annoyance of proposed advanced turboprop aircraft with that of conventional turboprop and jet aircraft. The effects of fundamental frequency and tone-to-broadband noise ratio on advanced turboprop annoyance were also examined. A computer synthesis system was used to generate 18 realistic, time-varying simulations of propeller aircraft takeoff noise in which the harmonic content was systematically varied to represent the factorial combinations of six fundamental frequencies ranging from 67.5 Hz to 292.5 Hz and three tone-to-broadband noise ratios of 0, 15, and 30 dB. These advanced turboprop simulations, along with recordings of five conventional turboprop takeoffs and five conventional jet takeoffs, were presented at D-weighted sound pressure levels of 70, 80, and 90 dB to 32 subjects in an anechoic chamber. Analyses of the subjects' annoyance judgments compare the three categories of aircraft and examine the effects of the differences in harmonic content among the advanced turboprop noises. The annoyance prediction ability of various noise measurement procedures and corrections is also examined.
NASA Technical Reports Server (NTRS)
Mccurdy, David A.
1988-01-01
A laboratory experiment was conducted to quantify the annoyance of people to the flyover noise of advanced turboprop aircraft with counter-rotating propellers (CRP) having an equal number of blades on each rotor. The objectives were to determine the effects of tonal content on annoyance and to compare annoyance to n x n CRP advanced turboprop aircraft with annoyance to conventional turboprop and jet aircraft. A computer synthesis system was used to generate 27 realistic, time-varying simulations of advanced turboprop takeoff noise in which the tonal content was systematically varied to represent the factorial combinations of nine fundamental frequencies and three tone-to-broadband noise ratios. These advanced turboprop simulations, along with recordings of five conventional turboprop takeoffs and five conventional jet takeoffs, were presented at three D-weighted sound pressure levels to 64 subjects in an anechoic chamber. Analyses of the subjects' annoyance judgments compared the three aircraft types and examined the effects of the differences in tonal content among the advanced turboprop noises. The annoyance prediction ability of various noise metrics was also examined.
Strategies for Interactive Visualization of Large Scale Climate Simulations
NASA Astrophysics Data System (ADS)
Xie, J.; Chen, C.; Ma, K.; Parvis
2011-12-01
With the advances in computational methods and supercomputing technology, climate scientists are able to perform large-scale simulations at unprecedented resolutions. These simulations produce data that are time-varying, multivariate, and volumetric, and the data may contain thousands of time steps, with each time step having billions of voxels and each voxel recording dozens of variables. Visualizing such time-varying 3D data to examine correlations between different variables thus becomes a daunting task. We have been developing strategies for interactive visualization and correlation analysis of multivariate data. The primary task is to find connections and correlations among the data. Given the many complex interactions among the Earth's oceans, atmosphere, land, ice and biogeochemistry, and the sheer size of observational and climate model data sets, interactive exploration helps identify which processes matter most for a particular climate phenomenon. We may consider time-varying data as a set of samples (e.g., voxels or blocks), each of which is associated with a vector of representative or collective values over time. We refer to such a vector as a temporal curve. Correlation analysis thus operates on the temporal curves of data samples. A temporal curve can be treated as a two-dimensional function whose two dimensions are time and data value. It can also be treated as a point in a high-dimensional space; in this case, to facilitate effective analysis, it is often necessary to transform temporal curve data from the original space to a space of lower dimensionality. Clustering and segmentation of temporal curve data in the original or transformed space provides a way to categorize and visualize data of different patterns, which reveals connections or correlations among different variables or at different spatial locations. We have employed the power of GPUs to enable interactive correlation visualization for studying the variability and correlations of a single variable or a pair of variables. It is desirable to create a succinct volume classification that summarizes the connections among all correlation volumes with respect to various reference locations. Since each reference location corresponds to a voxel position, the number of possible correlation volumes equals the total number of voxels. A brute-force solution takes all correlation volumes as the input and classifies their corresponding voxels according to the distances between their correlation volumes. For large-scale time-varying multivariate data, calculating all these correlation volumes on the fly and analyzing the relationships among them is not feasible. We have therefore developed a sampling-based approach to volume classification that reduces the cost of computing the correlation volumes. Users are able to employ their domain knowledge in selecting important samples. The result is a static view that captures the essence of the correlation relationships; i.e., for all voxels in the same cluster, the corresponding correlation volumes are similar. This sampling-based approach enables us to obtain an approximation of the correlation relations in a cost-effective manner, thus leading to a scalable solution for investigating large-scale data sets. These techniques empower climate scientists to study large data from their simulations.
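The temporal-curve correlation analysis described above can be sketched compactly. The following Python fragment is illustrative only (the function names and the toy k-means clustering are our own, not the authors' GPU implementation): it builds a correlation volume for one sampled reference voxel and then classifies voxels by their correlation signatures across a handful of such references.

```python
import numpy as np

def correlation_volume(data, ref):
    """Pearson correlation of every voxel's temporal curve with the curve
    at a reference voxel; `data` has shape (T, X, Y, Z), `ref` is a flat index."""
    T = data.shape[0]
    curves = data.reshape(T, -1) - data.reshape(T, -1).mean(axis=0)
    ref_curve = curves[:, ref]
    num = curves.T @ ref_curve
    den = np.linalg.norm(curves, axis=0) * np.linalg.norm(ref_curve)
    return (num / np.maximum(den, 1e-12)).reshape(data.shape[1:])

def classify_by_samples(data, sample_refs, k=4, iters=20, seed=0):
    """Sampling-based classification: compute correlation volumes only for a
    few user-chosen reference voxels, then cluster all voxels on the
    resulting correlation signatures with a toy k-means."""
    feats = np.stack([correlation_volume(data, r).ravel() for r in sample_refs], axis=1)
    rng = np.random.default_rng(seed)
    centers = feats[rng.choice(feats.shape[0], k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((feats[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = feats[labels == j].mean(axis=0)
    return labels.reshape(data.shape[1:])
```

Voxels sharing a cluster label would then have similar correlation volumes, which is the static summary view the abstract describes.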
Orchestrating TRANSP Simulations for Interpretative and Predictive Tokamak Modeling with OMFIT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grierson, B. A.; Yuan, X.; Gorelenkova, M.
2018-02-21
TRANSP simulations are being used in the OMFIT workflow manager to enable a machine-independent means of experimental analysis, postdictive validation, and predictive time-dependent simulations on the DIII-D, NSTX, JET and C-MOD tokamaks. The procedures for preparing the input data from plasma profile diagnostics and equilibrium reconstruction, as well as processing of the time-dependent heating and current drive sources and assumptions about the neutral recycling, vary across machines, but are streamlined by using a common workflow manager. Settings for TRANSP simulation fidelity are incorporated into the OMFIT framework, contrasting between-shot analysis, power balance, and fast-particle simulations. A previously established series of data consistency metrics are computed, such as comparison of experimental vs. calculated neutron rate, equilibrium stored energy vs. total stored energy from profile and fast-ion pressure, and experimental vs. computed surface loop voltage. Discrepancies between data consistency metrics can indicate errors in input quantities such as electron density profile or Zeff, or indicate anomalous fast-particle transport. Measures to assess the sensitivity of the verification metrics to input quantities are provided by OMFIT, including scans of the input profiles and standardized post-processing visualizations. For predictive simulations, TRANSP uses GLF23 or TGLF to predict core plasma profiles, with user-defined boundary conditions in the outer region of the plasma. ITPA validation metrics are provided in post-processing to assess the transport model validity. By using OMFIT to orchestrate the steps for experimental data preparation, selection of operating mode, submission, post-processing and visualization, we have streamlined and standardized the usage of TRANSP.
Modeling chemical vapor deposition of silicon dioxide in microreactors at atmospheric pressure
NASA Astrophysics Data System (ADS)
Konakov, S. A.; Krzhizhanovskaya, V. V.
2015-01-01
We developed a multiphysics mathematical model for simulation of silicon dioxide Chemical Vapor Deposition (CVD) from a tetraethyl orthosilicate (TEOS) and oxygen mixture in a microreactor at atmospheric pressure. Microfluidics is a promising technology with numerous applications in chemical synthesis due to its high heat and mass transfer efficiency and well-controlled flow parameters. Experimental studies of CVD microreactor technology are slow and expensive. Analytical solution of the governing equations is impossible due to the complexity of the intertwined non-linear physical and chemical processes. Computer simulation is the most effective tool for the design and optimization of microreactors. Our computational fluid dynamics model employs mass, momentum and energy balance equations for a laminar transient flow of a chemically reacting gas mixture at low Reynolds number. Simulation results show the influence of microreactor configuration and process parameters on the SiO2 deposition rate and uniformity. We simulated three microreactors with central channel diameters of 5, 10 and 20 micrometers, varying the gas flow rate in the range of 5-100 microliters per hour and the temperature in the range of 300-800 °C. For each microchannel diameter we found an optimal set of process parameters providing the best quality of deposited material. The model will be used for optimization of the microreactor configuration and technological parameters to facilitate the experimental stage of this research.
Adaptation of the Carter-Tracy water influx calculation to groundwater flow simulation
Kipp, Kenneth L.
1986-01-01
The Carter-Tracy calculation for water influx is adapted to groundwater flow simulation, with additional clarifying explanation not present in the original papers. The Van Everdingen and Hurst aquifer-influence functions for radial flow from an outer aquifer region are employed. This technique, based on convolution of unit-step response functions, offers a simple but approximate method for embedding an inner region of groundwater flow simulation within a much larger aquifer region where flow can be treated in an approximate fashion. The use of aquifer-influence functions in groundwater flow modeling reduces the size of the computational grid, with a corresponding reduction in computer storage and execution time. The Carter-Tracy approximation to the convolution integral enables the aquifer-influence-function calculation to be made with an additional storage requirement of only twice the number of boundary nodes beyond that required for the inner-region simulation. It is a good approximation for constant flow rates but is poor for time-varying flow rates where the variation is large relative to the mean. A variety of outer aquifer region geometries, exterior boundary conditions, and flow rate versus potentiometric head relations can be used. The radial, transient-flow case presented is representative. An analytical approximation to the Van Everdingen and Hurst functions for dimensionless potentiometric head versus dimensionless time is given.
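As a rough illustration of the recursion involved, the sketch below implements the textbook Carter-Tracy update as we understand it, not code from the paper; the late-time logarithmic approximation to the Van Everdingen-Hurst dimensionless pressure is an assumption on our part, and in practice tabulated influence functions or early-time fits would replace it.

```python
import numpy as np

def p_d(td):
    # Infinite-acting radial-flow approximation to the Van Everdingen-Hurst
    # dimensionless pressure (reasonable only for large td; an assumption here).
    return 0.5 * (np.log(td) + 0.80907)

def p_d_prime(td):
    return 0.5 / td

def carter_tracy_influx(td, dp, B):
    """Cumulative influx W_e at dimensionless times `td`, given total
    boundary pressure drops `dp` (p_initial - p(t)) and an aquifer
    constant B (consistent units assumed)."""
    we = np.zeros_like(td)
    for n in range(len(td) - 1):
        pd, pdp = p_d(td[n + 1]), p_d_prime(td[n + 1])
        # Carter-Tracy rate term approximating the convolution integral
        rate = (B * dp[n + 1] - we[n] * pdp) / (pd - td[n] * pdp)
        we[n + 1] = we[n] + rate * (td[n + 1] - td[n])
    return we
```

The appeal noted in the abstract is visible here: each step needs only the previous cumulative influx, not the full pressure history that a direct convolution would require.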
Fermion-to-qubit mappings with varying resource requirements for quantum simulation
NASA Astrophysics Data System (ADS)
Steudtner, Mark; Wehner, Stephanie
2018-06-01
The mapping of fermionic states onto qubit states, as well as the mapping of fermionic Hamiltonians into quantum gates, enables us to simulate electronic systems with a quantum computer. Benefiting the understanding of many-body systems in chemistry and physics, quantum simulation is one of the great promises of the coming age of quantum computers. Interestingly, the minimal number of qubits required for simulating fermions seems to be agnostic of the actual number of particles as well as other symmetries. This leads to qubit requirements that are well above the minimum suggested by combinatorial considerations. In this work, we develop methods that allow us to trade off qubit requirements against the complexity of the resulting quantum circuit. We first show that any classical code used to map the state of a fermionic Fock space to qubits gives rise to a mapping of fermionic models to quantum gates. As an illustrative example, we present a mapping based on a nonlinear classical error-correcting code, which leads to significant qubit savings, albeit at the expense of additional quantum gates. We proceed to use this framework to present a number of simpler mappings that lead to qubit savings with a more modest increase in gate difficulty. We discuss the role of symmetries such as particle conservation, and savings that could be obtained if an experimental platform could easily realize multi-controlled gates.
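The best-known instance of such a mapping, and the baseline this framework generalizes, is the Jordan-Wigner transformation. A minimal numpy sketch (our own illustration, not the authors' code) builds the qubit image of a fermionic annihilation operator and verifies the canonical anticommutation relations numerically.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_all(ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def annihilation(j, n):
    """Jordan-Wigner image of the annihilation operator a_j on n modes:
    a Z-string on modes < j, sigma^- on mode j, identity elsewhere."""
    sigma_minus = (X + 1j * Y) / 2
    return kron_all([Z] * j + [sigma_minus] + [I2] * (n - j - 1))

# sanity check: {a_i, a_j^dagger} = delta_ij on 3 modes
n = 3
for i in range(n):
    for j in range(n):
        ai, aj = annihilation(i, n), annihilation(j, n)
        anti = ai @ aj.conj().T + aj.conj().T @ ai
        assert np.allclose(anti, np.eye(2 ** n) * (i == j))
```

Jordan-Wigner uses exactly one qubit per mode; the mappings in the paper trade some of those qubits away at the price of more complicated gate strings.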
NASA Astrophysics Data System (ADS)
Zhang, Min; Liang, Zuozhong; Wu, Fei; Chen, Jian-Feng; Xue, Chunyu; Zhao, Hong
2017-06-01
We selected crystal structures of ibuprofen with seven common space groups (Cc, P21/c, P212121, P21, Pbca, Pna21, and Pbcn), generated from the ibuprofen molecule by molecular simulation. The predicted crystal structure with space group P21/c has the lowest total energy and the largest density, and is nearly indistinguishable from the experimental result. In addition, the XRD patterns for the predicted crystal structure are highly consistent with those of ibuprofen recrystallized from solvent. This indicates that the simulation can accurately predict the crystal structure of ibuprofen from the molecule. Furthermore, based on this crystal structure, we predicted the crystal habit in vacuum using the attachment energy (AE) method and considered solvent effects in a systematic way using the modified attachment energy (MAE) model. The simulation can thus reproduce the complete process from molecule to crystal structure to morphology prediction. Experimentally, we observed crystal morphologies in four solvents of different polarity (ethanol, acetonitrile, ethyl acetate, and toluene). The aspect ratios of the crystal habits in this ibuprofen system were found to decrease with increasing solvent polarity. Moreover, the modified crystal morphologies are in good agreement with the observed experimental morphologies. Finally, this work may guide computer-aided design of the desirable crystal morphology.
A Comparison of Compressed Sensing and Sparse Recovery Algorithms Applied to Simulation Data
Fan, Ya Ju; Kamath, Chandrika
2016-09-01
The move toward exascale computing for scientific simulations is placing new demands on compression techniques. It is expected that the I/O system will not be able to support the volume of data that is expected to be written out. To enable quantitative analysis and scientific discovery, we are interested in techniques that compress high-dimensional simulation data and can provide perfect or near-perfect reconstruction. In this paper, we explore the use of compressed sensing (CS) techniques to reduce the size of the data before they are written out. Using large-scale simulation data, we investigate how the sufficient sparsity condition and the contrast in the data affect the quality of reconstruction and the degree of compression. Also, we provide suggestions for the practical implementation of CS techniques and compare them with other sparse recovery methods. Finally, our results show that despite longer times for reconstruction, compressed sensing techniques can provide near perfect reconstruction over a range of data with varying sparsity.
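For readers unfamiliar with the sparse recovery side of such comparisons, the fragment below sketches one standard l1 solver, iterative soft thresholding (ISTA), on a toy Gaussian measurement problem. It is a generic illustration under our own assumptions (problem sizes, regularization weight), not the solvers or data used in the paper.

```python
import numpy as np

def ista(A, y, lam=0.05, iters=500):
    """Sparse recovery by iterative soft thresholding (ISTA):
    minimizes 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - (A.T @ (A @ x - y)) / L    # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
    return x

# toy check: recover a sparse vector from random Gaussian measurements
rng = np.random.default_rng(0)
n, m, k = 200, 80, 5                       # ambient dim, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_hat = ista(A, A @ x_true, lam=0.01, iters=2000)
```

The "sufficient sparsity condition" the abstract mentions is visible in this toy: recovery degrades as k grows relative to the number of measurements m.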
NASA Astrophysics Data System (ADS)
Ahmed, Asif; Ferdous, Imam Ul.; Saha, Sumon
2017-06-01
In the present study, three-dimensional numerical simulation of two shell-and-tube heat exchangers (STHXs) with conventional segmental baffles (STHXsSB) and continuous helical baffles (STHXsHB) is carried out and a comparative study is performed based on the simulation results. Both STHXs contain 37 tubes inside a 500 mm long, 200 mm diameter shell, and the mass flow rate of the shell-side fluid is varied from 0.5 kg/s to 2 kg/s. First, physical and mathematical models are developed and numerically simulated using the finite element method (FEM). For validation of the computational model, the shell-side average Nusselt number (Nus) is calculated from the simulation results and compared with the available experimental results. The comparative study shows that STHXsHB has a 72-127% higher heat transfer coefficient per unit pressure drop than the conventional STHXsSB for the same shell-side mass flow rate. Moreover, STHXsHB has a 59-63% lower shell-side pressure drop than STHXsSB.
Topology, structures, and energy landscapes of human chromosomes
Zhang, Bin; Wolynes, Peter G.
2015-01-01
Chromosome conformation capture experiments provide a rich set of data concerning the spatial organization of the genome. We use these data along with a maximum entropy approach to derive a least-biased effective energy landscape for the chromosome. Simulations of the ensemble of chromosome conformations based on the resulting information-theoretic landscape not only accurately reproduce experimental contact probabilities, but also provide a picture of chromosome dynamics and topology. The topology of the simulated chromosomes is probed by computing the distribution of their knot invariants. The simulated chromosome structures are largely free of knots. Topologically associating domains are shown to be crucial for establishing these knotless structures. The simulated chromosome conformations exhibit a tendency to form fibril-like structures like those observed via light microscopy. The topologically associating domains of the interphase chromosome exhibit multistability with varying liquid crystalline ordering that may allow discrete unfolding events, and the landscape is locally funneled toward “ideal” chromosome structures that represent hierarchical fibrils of fibrils. PMID:25918364
NASA Astrophysics Data System (ADS)
Liu, Shuyuan; Zhang, Yong; Feng, Yu; Shi, Changbin; Cao, Yong; Yuan, Wei
2018-02-01
A population balance sectional method (PBSM) coupled with computational fluid dynamics (CFD) is presented to simulate the capture of aerosolized oil droplets (AODs) in a range hood exhaust. The homogeneous nucleation and coagulation processes are modeled and simulated with this CFD-PBSM method. With the design angle α of the range hood exhaust varying from 60° to 30°, AOD capture increases, while the pressure drop between the inlet and the outlet of the range hood also increases, from 8.38 Pa to 175.75 Pa. Increasing inlet flow velocities also result in less AOD capture, although the total suction increases due to the higher flow rates into the range hood. Therefore, the CFD-PBSM method provides insight into the formation and capture of AODs as well as their impact on the operation and design of the range hood exhaust.
NASA Astrophysics Data System (ADS)
WANG, J.; Kim, J.
2014-12-01
In this study, the sensitivity of pollutant dispersion to the turbulent Schmidt number (Sct) was investigated in a street canyon using a computational fluid dynamics (CFD) model. For this, numerical simulations with systematically varied Sct were performed, and the CFD model results were validated against wind-tunnel measurement data. The results showed that the root mean square error (RMSE) was quite dependent on Sct, and the dispersion patterns of a non-reactive scalar pollutant differed considerably among the simulations with different Sct. The RMSE was lowest for Sct = 0.35, and the corresponding dispersion pattern was most similar to the wind-tunnel data. Additional numerical simulations using a spatially weighted Sct were also performed in order to best reproduce the wind-tunnel data. The detailed method and procedure for finding the best reproduction will be presented.
Radiation-MHD simulations for the development of a spark discharge channel.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niederhaus, John Henry; Jorgenson, Roy E.; Warne, Larry K.
The growth of a cylindrical spark discharge channel in water and Lexan is studied using a series of one-dimensional simulations with the finite-element radiation-magnetohydrodynamics code ALEGRA. Computed solutions are analyzed in order to characterize the rate of growth and dynamics of the spark channels during the rising-current phase of the drive pulse. The current ramp rate is varied between 0.2 and 3.0 kA/ns, and values of the mechanical coupling coefficient Kp are extracted for each case. The simulations predict spark channel expansion velocities primarily in the range of 2000 to 3500 m/s, channel pressures primarily in the range 10-40 GPa, and Kp values primarily between 1.1 and 1.4. When Lexan is preheated, slightly larger expansion velocities and smaller Kp values are predicted, but the overall behavior is unchanged.
Swept-Wing Ice Accretion Characterization and Aerodynamics
NASA Technical Reports Server (NTRS)
Broeren, Andy P.; Potapczuk, Mark G.; Riley, James T.; Villedieu, Philippe; Moens, Frederic; Bragg, Michael B.
2013-01-01
NASA, FAA, ONERA, the University of Illinois and Boeing have embarked on a significant, collaborative research effort to address the technical challenges associated with icing on large-scale, three-dimensional swept wings. The overall goal is to improve the fidelity of experimental and computational simulation methods for swept-wing ice accretion formation and resulting aerodynamic effect. A seven-phase research effort has been designed that incorporates ice-accretion and aerodynamic experiments and computational simulations. As the baseline, full-scale, swept-wing-reference geometry, this research will utilize the 65% scale Common Research Model configuration. Ice-accretion testing will be conducted in the NASA Icing Research Tunnel for three hybrid swept-wing models representing the 20%, 64% and 83% semispan stations of the baseline-reference wing. Three-dimensional measurement techniques are being developed and validated to document the experimental ice-accretion geometries. Artificial ice shapes of varying geometric fidelity will be developed for aerodynamic testing over a large Reynolds number range in the ONERA F1 pressurized wind tunnel and in a smaller-scale atmospheric wind tunnel. Concurrent research will be conducted to explore and further develop the use of computational simulation tools for ice accretion and aerodynamics on swept wings. The combined results of this research effort will result in an improved understanding of the ice formation and aerodynamic effects on swept wings. The purpose of this paper is to describe this research effort in more detail and report on the current results and status to date.
Graphic-based musculoskeletal model for biomechanical analyses and animation.
Chao, Edmund Y S
2003-04-01
The ability to combine physiology and engineering analyses with computer sciences has opened the door to the possibility of creating the 'Virtual Human' reality. This paper presents a broad foundation for a full-featured biomechanical simulator for human musculoskeletal system physiology. This simulation technology unites the expertise in biomechanical analysis and graphic modeling to investigate joint and connective tissue mechanics at the structural level and to visualize the results in both static and animated forms together with the model. Adaptable anatomical models, including prosthetic implants and fracture fixation devices, and a robust computational infrastructure for static, kinematic, kinetic, and stress analyses under varying boundary and loading conditions are incorporated on a common platform, the VIMS (Virtual Interactive Musculoskeletal System). Within this software system, a manageable database containing long bone dimensions, connective tissue material properties and a library of skeletal joint system functional activities and loading conditions are also available, and they can easily be modified, updated and expanded. Application software is also available to allow end-users to perform biomechanical analyses interactively. This paper details the design, capabilities, and features of the VIMS development at Johns Hopkins University, an effort possible only through academic and commercial collaborations. Examples using these models and the computational algorithms in a virtual laboratory environment are used to demonstrate the utility of this unique database and simulation technology. This integrated system will have an impact on medical education, basic research, device development and application, and clinical patient care related to musculoskeletal diseases, trauma, and rehabilitation.
Passive and active ventricular elastances of the left ventricle
Zhong, Liang; Ghista, Dhanjoo N; Ng, Eddie YK; Lim, Soo T
2005-01-01
Background Description of the heart as a pump has been dominated by models based on elastance and compliance. Here, we present a somewhat new concept of time-varying passive and active elastance. The mathematical basis of time-varying elastance of the ventricle is presented. We have defined elastance in terms of the relationship between ventricular pressure and volume, as dP = EdV + VdE, where E includes passive (Ep) and active (Ea) elastance. By incorporating this concept in left ventricular (LV) models to simulate filling and systolic phases, we have obtained the time-varying expression for Ea and the LV-volume-dependent expression for Ep. Methods and Results Using the patient's catheterization-ventriculogram data, the values of passive and active elastance are computed, yielding the fitted time-varying expression for Ea and the LV-volume-dependent expression for Ep. Ea is deemed to represent a measure of LV contractility. Hence, peak dP/dt and ejection fraction (EF) are computed from the monitored data and used as the traditional measures of LV contractility. When our computed peak active elastance (Ea,max) is compared against these traditional indices by linear regression, a high degree of correlation is obtained. As regards Ep, it constitutes a volume-dependent stiffness property of the LV and is deemed to represent resistance-to-filling. Conclusions Passive and active ventricular elastance formulae can be evaluated from single-beat P-V data by means of a simple-to-apply LV model. The active elastance (Ea) can be used to characterize the ventricle's contractile state, while the passive elastance (Ep) can represent a measure of resistance-to-filling. PMID:15707494
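To make the pressure-volume relation concrete, here is a small numerical sketch of integrating dP = EdV + VdE along a sampled volume waveform. The parameterizations of Ep(V) and Ea(t) below are hypothetical placeholders chosen only for illustration, not the paper's fitted formulae.

```python
import numpy as np

# Hypothetical parameterizations (not the paper's fitted forms):
def E_passive(V):            # volume-dependent passive elastance, mmHg/mL
    return 0.05 * np.exp(0.02 * V)

def E_active(t, T=0.8):      # time-varying active elastance over one beat (s)
    return 2.0 * np.sin(np.pi * (t % T) / T) ** 2

def simulate_pressure(t, V):
    """Integrate dP = E dV + V dE with E = Ep(V) + Ea(t),
    for sampled time t (s) and LV volume V(t) (mL)."""
    E = E_passive(V) + E_active(t)
    P = np.zeros_like(t)
    P[0] = E[0] * V[0]       # reference state; an assumption for the demo
    for n in range(len(t) - 1):
        dV, dE = V[n + 1] - V[n], E[n + 1] - E[n]
        P[n + 1] = P[n] + E[n] * dV + V[n] * dE
    return P
```

Run against a measured V(t), the same finite-difference relation can be inverted to estimate E(t) from single-beat P-V data, which is the direction the paper takes.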
A Multi-Level Parallelization Concept for High-Fidelity Multi-Block Solvers
NASA Technical Reports Server (NTRS)
Hatay, Ferhat F.; Jespersen, Dennis C.; Guruswamy, Guru P.; Rizk, Yehia M.; Byun, Chansup; Gee, Ken; VanDalsem, William R. (Technical Monitor)
1997-01-01
The integration of high-fidelity Computational Fluid Dynamics (CFD) analysis tools with the industrial design process benefits greatly from robust implementations that are transportable across a wide range of computer architectures. In the present work, a hybrid domain-decomposition and parallelization concept was developed and implemented into the widely used NASA multi-block CFD packages ENSAERO and OVERFLOW. The new parallel solver concept, PENS (Parallel Euler Navier-Stokes Solver), employs both fine and coarse granularity in data partitioning, as well as data coalescing, to obtain the desired load-balance characteristics on the available computer platforms. This multi-level parallelism implementation itself introduces no changes to the numerical results, hence the original fidelity of the packages is identically preserved. The present implementation uses the Message Passing Interface (MPI) library for interprocessor message passing and memory accessing. By choosing an appropriate combination of the available partitioning and coalescing capabilities at execution time, the PENS solver becomes adaptable to different computer architectures, from shared-memory to distributed-memory platforms with varying degrees of parallelism. The PENS implementation on the IBM SP2 distributed-memory environment at the NASA Ames Research Center obtains 85 percent scalable parallel performance using fine-grain partitioning of single-block CFD domains on up to 128 wide computational nodes. Multi-block CFD simulations of complete aircraft achieve 75 percent of perfect load-balanced execution using data coalescing and the two levels of parallelism. The SGI PowerChallenge, SGI Origin 2000, and a cluster of workstations are the other platforms on which the robustness of the implementation is tested. The performance behavior on the other computer platforms with a variety of realistic problems will be included as this ongoing study progresses.
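The partition-and-coalesce idea can be illustrated with a simple largest-first heuristic: split oversized blocks into pieces (fine grain), then coalesce the pieces onto the least-loaded ranks (coarse grain). This is our own generic sketch of that style of load balancing, not the PENS source.

```python
import heapq, math

def balance_blocks(block_sizes, nranks):
    """Blocks larger than the balanced share are split into pieces
    (fine-grain partitioning); pieces are then coalesced greedily onto
    the least-loaded rank (largest-first heuristic)."""
    share = sum(block_sizes) / nranks
    pieces = []
    for b, size in enumerate(block_sizes):
        nparts = max(1, math.ceil(size / share))
        pieces += [(size / nparts, b)] * nparts
    pieces.sort(reverse=True)
    heap = [(0.0, r) for r in range(nranks)]       # (load, rank)
    heapq.heapify(heap)
    assignment = [[] for _ in range(nranks)]
    for size, b in pieces:
        load, r = heapq.heappop(heap)
        assignment[r].append(b)                    # a piece of block b -> rank r
        heapq.heappush(heap, (load + size, r))
    return assignment, sorted((r, load) for load, r in heap)

# e.g. balance_blocks([100, 60, 30, 10], 2) yields two ranks of equal load,
# which mirrors the paper's goal of load balance across heterogeneous blocks.
```

In an MPI setting each rank would then advance only its assigned pieces, exchanging halo data each iteration; the balancing logic itself is independent of the flow solver.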
Multi-injector modeling of transverse combustion instability experiments
NASA Astrophysics Data System (ADS)
Shipley, Kevin J.
Concurrent simulations and experiments are used to study combustion instabilities in a multiple-injector-element combustion chamber. The experiments employ a linear array of seven coaxial injector elements positioned atop a rectangular chamber. Different levels of instability are driven in the combustor by varying the operating and geometry parameters of the outer driving injector elements located near the chamber end-walls. The objectives of the study are to apply a reduced three-injector model to generate a computational test bed for the evaluation of injector response to transverse instability, to apply a full seven-injector model to investigate the inter-element coupling between injectors in response to transverse instability, and to further develop this integrated approach as a key element in a predictive methodology that relies heavily on subscale test and simulation. To measure the effects of the transverse wave on a central study injector element, two opposing windows are placed in the chamber to allow optical access. The chamber is extensively instrumented with high-frequency pressure transducers. High-fidelity computational fluid dynamics simulations are used to model the experiment; specifically, three-dimensional detached eddy simulations (DES) are used. Two computational approaches are investigated. The first approach models the combustor with the three center injectors and forces transverse waves in the chamber with a wall velocity function at the chamber side walls. Different pressure oscillation amplitudes are possible by varying the amplitude of the forcing function. The purpose of this method is to focus on the combustion response of the study element. In the second approach, all seven injectors are modeled and self-excited combustion instability is achieved. This realistic model of the chamber allows the study of inter-element flow dynamics, e.g., how the resonant motions in the injector tubes are coupled through the transverse pressure waves in the chamber. The computational results are analyzed and compared with experimental results in the time, frequency and modal domains. Results from the three-injector model show how applying different velocity forcing amplitudes changes the amplitude and spatial location of heat release from the center injector. The instability amplitudes in the simulation can be tuned to match the experiments and produce similar modal combustion responses of the center injector. The reaction model applied was found to play an important role in the spatial and temporal heat release response; only when the model was calibrated to ignition delay measurements did the heat release response reflect measurements in the experiment. While insightful, the simulations are not truly predictive because the driving frequency and forcing function amplitude are inputs to the simulation. However, the use of this approach as a tool to investigate combustion response is demonstrated. Results from the seven-injector simulations provide an insightful look at the mechanisms driving the instability in the combustor. The instability was studied over a range of pressure fluctuations, up to 70% of mean chamber pressure in the self-excited simulation. At low amplitudes the transverse instability was found to be supported by both flame impingement on the side wall and vortex shedding at the primary acoustic frequency.
As the instability level grew, the primary supporting mechanism shifted to vortex impingement on the side walls, and the greatest growth was seen as additional vortices began impinging between injector elements at the primary acoustic frequency. This research reveals the advantages and limitations of applying these two modeling techniques to simulate multiple-injector experiments. The advantage of the three-injector model is a simplified geometry, which results in faster model development and the ability to more rapidly study the injector response under varying velocity amplitudes. The potentially faster run time is offset, though, by the need to run multiple cases to calibrate the model to the experiment. The model is also limited to studying the central injector effect, and it lacks heat release sources from the outer injectors and the additional vortex interactions seen in the seven-injector simulation. The advantage of the seven-injector model is that the whole domain can be explored to provide a better understanding of the influential processes, but it requires longer development and run time due to the extensive gridding requirement. Both simulations have proven useful in exploring transverse combustion instability and show the need to further develop subscale experiments and companion simulations in building a full-scale combustion instability prediction capability.
Euler-Lagrange Simulations of Shock Wave-Particle Cloud Interaction
NASA Astrophysics Data System (ADS)
Koneru, Rahul; Rollin, Bertrand; Ouellet, Frederick; Park, Chanyoung; Balachandar, S.
2017-11-01
Numerical experiments of a shock wave interacting with evolving and fixed clouds of particles are performed. In these simulations we use an Eulerian-Lagrangian approach along with state-of-the-art point-particle force and heat transfer models. For validation, we use the Sandia Multiphase Shock Tube experiments and particle-resolved simulations. The particle curtain, upon interaction with the shock wave, is expected to experience Kelvin-Helmholtz (KH) and Richtmyer-Meshkov (RM) instabilities. In the simulations evolving the particle cloud, the initial volume fraction profile matches that of the Sandia Multiphase Shock Tube experiments, and the shock Mach number is limited to M = 1.66. Measurements of particle dispersion are made at different initial volume fractions. A detailed analysis of the influence of initial conditions on the evolution of the particle cloud is presented. The early-time behavior of the models is studied in the fixed-bed simulations at varying volume fractions and shock Mach numbers. The mean gas quantities are measured in the context of 1-way and 2-way coupled simulations. This work was supported by the U.S. Department of Energy, National Nuclear Security Administration, Advanced Simulation and Computing Program, as a Cooperative Agreement under the Predictive Science Academic Alliance Program, Contract No. DE-NA0002378.
Hallock, Michael J.; Stone, John E.; Roberts, Elijah; Fry, Corey; Luthey-Schulten, Zaida
2014-01-01
Simulation of in vivo cellular processes with the reaction-diffusion master equation (RDME) is a computationally expensive task. Our previous software enabled simulation of inhomogeneous biochemical systems for small bacteria over long time scales using the MPD-RDME method on a single GPU. Simulations of larger eukaryotic systems exceed the on-board memory capacity of individual GPUs, and long time simulations of modest-sized cells such as yeast are impractical on a single GPU. We present a new multi-GPU parallel implementation of the MPD-RDME method based on a spatial decomposition approach that supports dynamic load balancing for workstations containing GPUs of varying performance and memory capacity. We take advantage of high-performance features of CUDA for peer-to-peer GPU memory transfers and evaluate the performance of our algorithms on state-of-the-art GPU devices. We present parallel efficiency and performance results for simulations using multiple GPUs as system size, particle counts, and number of reactions grow. We also demonstrate multi-GPU performance in simulations of the Min protein system in E. coli. Moreover, our multi-GPU decomposition and load balancing approach can be generalized to other lattice-based problems. PMID:24882911
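A spatial decomposition across unequal GPUs can be illustrated by a one-axis slab split sized by measured device throughput. This is a schematic of the load-balancing idea only; the function and parameter names are ours, not the MPD-RDME implementation.

```python
def slab_decomposition(nz, gpu_throughput):
    """Split `nz` lattice planes into contiguous slabs sized proportionally
    to each GPU's measured throughput, so faster devices get more planes
    (a simple static form of load balancing across unequal GPUs)."""
    total = sum(gpu_throughput)
    bounds, start, acc = [], 0, 0.0
    for g, w in enumerate(gpu_throughput):
        acc += w
        end = nz if g == len(gpu_throughput) - 1 else round(nz * acc / total)
        bounds.append((start, end))   # planes [start, end) go to GPU g
        start = end
    return bounds

# e.g. three GPUs where the first is twice as fast:
# slab_decomposition(128, [2.0, 1.0, 1.0]) -> [(0, 64), (64, 96), (96, 128)]
```

Dynamic balancing, as the paper describes, would periodically re-measure per-device performance and shift slab boundaries, with peer-to-peer transfers handling the boundary planes exchanged each step.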
Bayesian spatial transformation models with applications in neuroimaging data
Miranda, Michelle F.; Zhu, Hongtu; Ibrahim, Joseph G.
2013-01-01
Summary The aim of this paper is to develop a class of spatial transformation models (STM) to spatially model the varying association between imaging measures in a three-dimensional (3D) volume (or 2D surface) and a set of covariates. Our STMs include a varying Box-Cox transformation model for dealing with the issue of non-Gaussian distributed imaging data and a Gaussian Markov Random Field model for incorporating spatial smoothness of the imaging data. Posterior computation proceeds via an efficient Markov chain Monte Carlo algorithm. Simulations and real data analysis demonstrate that the STM significantly outperforms the voxel-wise linear model with Gaussian noise in recovering meaningful geometric patterns. Our STM is able to reveal important brain regions with morphological changes in children with attention deficit hyperactivity disorder. PMID:24128143
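As a concrete reference for the transformation at the core of the STM, the sketch below implements the scalar Box-Cox transform and a grid-search profile likelihood for its exponent. In the STM itself the exponent varies spatially and is sampled within the MCMC rather than fixed by a grid search; the names and grid here are our own illustrative choices.

```python
import numpy as np

def box_cox(y, lam):
    """Box-Cox transform toward Gaussianity (requires y > 0)."""
    y = np.asarray(y, dtype=float)
    return np.log(y) if abs(lam) < 1e-8 else (y ** lam - 1.0) / lam

def best_lambda(y, grid=np.linspace(-2, 2, 81)):
    """Profile log-likelihood over a lambda grid, including the
    log-Jacobian term (lam - 1) * sum(log y) of the transform."""
    y = np.asarray(y, dtype=float)
    n = y.size
    def loglik(lam):
        z = box_cox(y, lam)
        return -0.5 * n * np.log(z.var()) + (lam - 1.0) * np.log(y).sum()
    return max(grid, key=loglik)
```

Applying such a transform voxel-wise, with spatial smoothing of the exponents via the Gaussian Markov Random Field prior, is what lets the model cope with non-Gaussian imaging intensities.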
Analysis of the Effects of Streamwise Lift Distribution on Sonic Boom Signature
NASA Technical Reports Server (NTRS)
Yoo, Seung Yeun (Paul)
2010-01-01
The streamwise lift distribution of a wing-canard-stabilator-body configuration was varied to study its effect on the near-field sonic boom signature. The investigation was carried out by solving the three-dimensional Euler equations with the OVERFLOW-2 flow solver. The computational meshes were created using the Chimera overset grid topology. The lift distribution was varied by first deflecting the canard and then trimming the aircraft with the wing and the stabilator while maintaining a constant lift coefficient of 0.05. A validation study using experimental results was also performed to determine the required grid resolution and an appropriate numerical scheme. A wide range of streamwise lift distributions was simulated. The results show that the longitudinal wave propagation speed can be controlled through the lift distribution, thus controlling shock coalescence.
NASA Astrophysics Data System (ADS)
Yuan, Chang-Qing; Zhao, Tong-Jun; Zhan, Yong; Zhang, Su-Hua; Liu, Hui; Zhang, Yu-Hong
2009-11-01
Based on the well-accepted Hodgkin-Huxley neuron model, neuronal intrinsic excitability is studied when the neuron is subject to varying environmental temperature, a typical route by which the environment regulates neuronal behavior. Computer simulation shows that altering the environmental temperature can enhance or inhibit the neuronal intrinsic excitability and thereby influence the neuronal spiking properties. The impact of environmental factors can be understood as follows: the neuronal spiking threshold is essentially influenced by fluctuations in the environment. As the environmental temperature varies, burst spiking of the neuronal membrane voltage is realized because of the environment-dependent spiking threshold. This bursting induced by changes in the spiking threshold is different from that excited by input currents or other stimuli.
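Temperature typically enters the Hodgkin-Huxley equations as a Q10 factor scaling the gating kinetics. The sketch below integrates the standard squid-axon model with that scaling; Q10 = 3 and the classic parameter set are assumptions on our part, and the paper's exact formulation may differ.

```python
import numpy as np

def hh_trace(T_celsius=6.3, I_ext=10.0, t_max=100.0, dt=0.01):
    """Hodgkin-Huxley membrane voltage (mV) with gating kinetics scaled by
    the temperature factor phi = 3**((T - 6.3)/10) (Q10 = 3 assumed)."""
    phi = 3.0 ** ((T_celsius - 6.3) / 10.0)
    gNa, gK, gL = 120.0, 36.0, 0.3            # mS/cm^2
    ENa, EK, EL, C = 50.0, -77.0, -54.387, 1.0
    V, m, h, n = -65.0, 0.053, 0.596, 0.317   # resting state
    out = []
    for _ in range(int(t_max / dt)):
        am = 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
        bm = 4.0 * np.exp(-(V + 65) / 18)
        ah = 0.07 * np.exp(-(V + 65) / 20)
        bh = 1.0 / (1 + np.exp(-(V + 35) / 10))
        an = 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
        bn = 0.125 * np.exp(-(V + 65) / 80)
        m += dt * phi * (am * (1 - m) - bm * m)   # phi speeds or slows gating
        h += dt * phi * (ah * (1 - h) - bh * h)
        n += dt * phi * (an * (1 - n) - bn * n)
        I_ion = gNa * m**3 * h * (V - ENa) + gK * n**4 * (V - EK) + gL * (V - EL)
        V += dt * (I_ext - I_ion) / C
        out.append(V)
    return np.array(out)
```

Sweeping T_celsius in such a script shows the qualitative effect the abstract describes: the firing pattern and the effective spiking threshold shift with temperature.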
Using Computer Simulations for Investigating a Sex Education Intervention: An Exploratory Study
Bullock, Seth; Graham, Cynthia A; Ingham, Roger
2017-01-01
Background Sexually transmitted infections (STIs) are ongoing concerns. The best method for preventing the transmission of these infections is the correct and consistent use of condoms. Few studies have explored the use of games in interventions for increasing condom use by challenging the false sense of security associated with judging the presence of an STI based on attractiveness. Objectives The primary purpose of this study was to explore the potential use of computer simulation as a serious game for sex education. Specific aims were to (1) study the influence of a newly designed serious game on self-rated confidence for assessing STI risk and (2) examine whether this varied by gender, age, and scores on sexuality-related personality trait measures. Methods This was a Web-based questionnaire study employing between- and within-subject analyses. A Web-based platform hosted in the United Kingdom was used to deliver male and female stimuli (facial photographs) and collect data. A convenience sample of 66 participants (64% male, 42/66; mean age 22.5 years) completed the Term on the Tides, a computer simulation developed for this study. Participants also completed questionnaires on demographics, sexual preferences, sexual risk evaluations, the Sexual Sensation Seeking Scale (SSS), and the Sexual Inhibition Subscale 2 (SIS2) of the Sexual Inhibition/Sexual Excitation Scales-Short Form (SIS/SES - SF). Results The overall confidence of participants to evaluate sexual risks was reduced after playing the game (P<.005). Age and personality trait measures did not predict the change in confidence in evaluating risk. Women demonstrated larger shifts in confidence than did men (P=.03). Conclusions This study extends the literature by investigating the potential of computer simulations as a serious game for sex education. Engaging in the Term on the Tides game had an impact on participants' confidence in evaluating sexual risks. PMID:28468747
Design and construction of miniature artificial ecosystem based on dynamic response optimization
NASA Astrophysics Data System (ADS)
Hu, Dawei; Liu, Hong; Tong, Ling; Li, Ming; Hu, Enzhu
The miniature artificial ecosystem (MAES) is a combination of man, silkworm, salad and microalgae to partially regenerate O2, sanitary water and food while simultaneously disposing of CO2 and wastes; it therefore has a fundamental life-support function. In order to enhance the safety and reliability of MAES and eliminate the influences of internal variations and external disturbances, it was necessary to configure MAES as a closed-loop control system, and it could be considered a prototype for a future bioregenerative life support system. However, MAES is a complex system possessing large numbers of parameters, intricate nonlinearities, time-varying factors and uncertainties, hence it is difficult to perfectly design and construct a prototype merely by conducting experiments by trial and error. Our research presents an effective way to resolve the preceding problem by use of dynamic response optimization. First, the mathematical model of MAES, comprising first-order nonlinear ordinary differential equations with parameters, was developed based on relevant mechanisms and experimental data; second, a simulation model of MAES was derived on the MATLAB/Simulink platform to perform model validation and further digital simulations; third, reference trajectories of the desired dynamic response of the system outputs were specified according to prescribed requirements; and finally, optimization of the initial values, tuned parameters and independent parameters was carried out using a genetic algorithm and an advanced direct search method, along with parallel computing, through computer simulations. The result showed that all parameters and configurations of MAES were determined after a series of computer experiments, and its transient response performance and steady-state characteristics closely matched the reference curves. Since the prototype is a physical system that represents the mathematical model with reasonable accuracy, the process of designing and constructing a prototype of MAES is the reverse of mathematical modeling, and it must rest on these computer simulation results.
Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies
NASA Astrophysics Data System (ADS)
Williams, Paul; Howe, Nicola; Gregory, Jonathan; Smith, Robin; Joshi, Manoj
2016-04-01
In climate simulations, the impacts of the sub-grid scales on the resolved scales are conventionally represented using deterministic closure schemes, which assume that the impacts are uniquely determined by the resolved scales. Stochastic parameterization relaxes this assumption, by sampling the sub-grid variability in a computationally inexpensive manner. This presentation shows that the simulated climatological state of the ocean is improved in many respects by implementing a simple stochastic parameterization of ocean eddies into a coupled atmosphere-ocean general circulation model. Simulations from a high-resolution, eddy-permitting ocean model are used to calculate the eddy statistics needed to inject realistic stochastic noise into a low-resolution, non-eddy-permitting version of the same model. A suite of four stochastic experiments is then run to test the sensitivity of the simulated climate to the noise definition, by varying the noise amplitude and decorrelation time within reasonable limits. The addition of zero-mean noise to the ocean temperature tendency is found to have a non-zero effect on the mean climate. Specifically, in terms of the ocean temperature and salinity fields both at the surface and at depth, the noise reduces many of the biases in the low-resolution model and causes it to more closely resemble the high-resolution model. The variability of the strength of the global ocean thermohaline circulation is also improved. It is concluded that stochastic ocean perturbations can yield reductions in climate model error that are comparable to those obtained by refining the resolution, but without the increased computational cost. Therefore, stochastic parameterizations of ocean eddies have the potential to significantly improve climate simulations. Reference PD Williams, NJ Howe, JM Gregory, RS Smith, and MM Joshi (2016) Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies. Journal of Climate, under revision.
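The two knobs the study varies, noise amplitude and decorrelation time, are exactly the parameters of a zero-mean Ornstein-Uhlenbeck (red-noise) process; something like the sketch below could generate such a perturbation field to add to a temperature tendency. This is our own schematic of a stochastic parameterization of this kind, not the model's actual implementation.

```python
import numpy as np

def ou_perturbation(nsteps, dt, sigma, tau, shape, seed=0):
    """Zero-mean Ornstein-Uhlenbeck noise field: sigma sets the amplitude
    (stationary standard deviation) and tau the decorrelation time."""
    rng = np.random.default_rng(seed)
    eta = np.zeros((nsteps,) + shape)
    for n in range(1, nsteps):
        eta[n] = (eta[n - 1] * (1 - dt / tau)
                  + sigma * np.sqrt(2 * dt / tau) * rng.standard_normal(shape))
    return eta   # eta[n] would be added to dT/dt at model step n
```

Because the noise has zero mean, any systematic change in the simulated climatology, like the bias reductions reported above, comes from the nonlinearity of the model rather than from a direct forcing.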
Bethge, Anja; Schumacher, Udo
2017-01-01
Background Tumor vasculature is critical for tumor growth, formation of distant metastases and the efficiency of radio- and chemotherapy treatments. However, how the vasculature itself is affected during cancer treatment, with regard to metastatic behavior, has not been thoroughly investigated. Therefore, the aim of this study was to analyze the influence of hypofractionated radiotherapy and cisplatin chemotherapy on vessel tree geometry and metastasis formation in a small cell lung cancer xenograft mouse tumor model, to investigate the spread of malignant cells under different treatment modalities. Methods The biological data gained during these experiments were fed into our previously developed computer model “Cancer and Treatment Simulation Tool” (CaTSiT) to model the growth of the primary tumor, its metastatic deposits and the influence of the different therapies. Furthermore, we performed quantitative histological analyses to verify our predictions in the xenograft mouse tumor model. Results According to the computer simulation, the number of engrafting cells must vary considerably to explain the different weights of the primary tumor at the end of the experiment. Once a primary tumor is established, the fractal dimension of its vasculature correlates with the tumor size. Furthermore, the fractal dimension of the tumor vasculature changes during treatment, indicating that the therapy affects the blood vessels' geometry. We corroborated these findings with a quantitative histological analysis showing that the blood vessel density is depleted during radiotherapy and cisplatin chemotherapy. The CaTSiT computer model reveals that chemotherapy influences the tumor's therapeutic susceptibility and its metastatic spreading behavior. Conclusion Using a systems biology approach in combination with xenograft models and computer simulations revealed that chemotherapy and radiation therapy determine the spreading behavior by changing the blood vessel geometry of the primary tumor. PMID:29107953
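The fractal dimension of a vessel tree is commonly estimated by box counting; a minimal sketch of that estimator on a segmented (binary) vessel image follows. This is a generic illustration, not the CaTSiT code, and it assumes a nonempty 2D mask.

```python
import numpy as np

def box_counting_dimension(mask):
    """Box-counting estimate of the fractal dimension of a binary vessel
    mask (2D boolean array, e.g., from segmented histology)."""
    sizes = [2 ** k for k in range(1, int(np.log2(min(mask.shape))))]
    counts = []
    for s in sizes:
        h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())  # occupied boxes at scale s
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope   # dimension is minus the log-log slope
```

A dense, space-filling plexus pushes the estimate toward 2, while a pruned, treatment-depleted vasculature yields a lower value, which is the geometric change the study tracks.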
Curtin, Lindsay B; Finn, Laura A; Czosnowski, Quinn A; Whitman, Craig B; Cawley, Michael J
2011-08-10
To assess the impact of computer-based simulation on the achievement of student learning outcomes during mannequin-based simulation. Participants were randomly assigned to rapid response teams of 5-6 students, and teams were then randomly assigned to a group that completed either the computer-based or the mannequin-based simulation cases first. In both simulations, students used their critical thinking skills and selected interventions independent of facilitator input. A predetermined rubric was used to record and assess students' performance in the mannequin-based simulations; feedback and student performance scores were generated by the software in the computer-based simulations. More of the teams that completed the computer-based simulation before the mannequin-based simulation achieved the primary outcome for the exercise, survival of the simulated patient (41.2% vs. 5.6%). The majority of students (>90%) recommended the continuation of simulation exercises in the course. Students in both groups felt the computer-based simulation should be completed prior to the mannequin-based simulation. The use of computer-based simulation prior to mannequin-based simulation improved the achievement of learning goals and outcomes. In addition to improving participants' skills, completing the computer-based simulation first may improve participants' confidence during the more real-life setting achieved in the mannequin-based simulation.
A Computational Model Predicting Disruption of Blood Vessel Development
Kleinstreuer, Nicole; Dix, David; Rountree, Michael; Baker, Nancy; Sipes, Nisha; Reif, David; Spencer, Richard; Knudsen, Thomas
2013-01-01
Vascular development is a complex process regulated by dynamic biological networks that vary in topology and state across different tissues and developmental stages. Signals regulating de novo blood vessel formation (vasculogenesis) and remodeling (angiogenesis) come from a variety of biological pathways linked to endothelial cell (EC) behavior, extracellular matrix (ECM) remodeling and the local generation of chemokines and growth factors. Simulating these interactions at a systems level requires sufficient biological detail about the relevant molecular pathways and associated cellular behaviors, and tractable computational models that offset mathematical and biological complexity. Here, we describe a novel multicellular agent-based model of vasculogenesis using the CompuCell3D (http://www.compucell3d.org/) modeling environment supplemented with semi-automatic knowledgebase creation. The model incorporates vascular endothelial growth factor signals, pro- and anti-angiogenic inflammatory chemokine signals, and the plasminogen activating system of enzymes and proteases linked to ECM interactions, to simulate nascent EC organization, growth and remodeling. The model was shown to recapitulate stereotypical capillary plexus formation and structural emergence of non-coded cellular behaviors, such as a heterologous bridging phenomenon linking endothelial tip cells together during formation of polygonal endothelial cords. Molecular targets in the computational model were mapped to signatures of vascular disruption derived from in vitro chemical profiling using the EPA's ToxCast high-throughput screening (HTS) dataset. Simulating the HTS data with the cell-agent based model of vascular development predicted adverse effects of a reference anti-angiogenic thalidomide analog, 5HPP-33, on in vitro angiogenesis with respect to both concentration-response and morphological consequences. These findings support the utility of cell agent-based models for simulating a morphogenetic series of events and for the first time demonstrate the applicability of these models for predictive toxicology. PMID:23592958
Planning additional drilling campaign using two-space genetic algorithm: A game theoretical approach
NASA Astrophysics Data System (ADS)
Kumral, Mustafa; Ozer, Umit
2013-03-01
Grade and tonnage are the most important technical uncertainties in mining ventures because of the use of estimations/simulations, which are mostly generated from drill data. Open pit mines are planned and designed on the basis of the blocks representing the entire orebody. Each block has a different estimation/simulation variance, reflecting uncertainty to some extent. The estimation/simulation realizations are submitted to the mine production scheduling process; however, the use of a block model with varying estimation/simulation variances leads to serious risk in the scheduling. Given multiple simulations, the dispersion variances of blocks can be used to account for technical uncertainties, but the dispersion variance cannot handle the uncertainty associated with varying estimation/simulation variances of blocks. This paper proposes an approach that generates the configuration of the best additional drilling campaign so as to produce more homogeneous estimation/simulation variances of blocks. In other words, the objective is to find the drilling configuration that minimizes grade uncertainty under a budget constraint. The uncertainty measure of the optimization process in this paper is the interpolation variance, which considers data locations and grades. The problem is expressed as a minmax problem, which focuses on finding the best worst-case performance, i.e., minimizing the interpolation variance of the block generating the maximum interpolation variance. Since the optimization model requires computing the interpolation variances of the blocks being simulated/estimated in each iteration, the problem cannot be solved by standard optimization tools. This motivates the use of a two-space genetic algorithm (GA) approach: one space searches feasible drill hole configurations to minimize the interpolation variance, while the other searches drill hole simulations to maximize it. The two spaces interact to find a minmax solution iteratively. A case study was conducted to demonstrate the performance of the approach, and the findings showed that it could be used to plan a new drilling campaign.
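To make the two-space search concrete, here is a minimal co-evolutionary sketch in Python: a stand-in quadratic discrepancy replaces the kriging-based interpolation variance, and the GA operators (elitism plus Gaussian mutation) and all constants are illustrative choices, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def interp_variance(config, scenario):
    # Toy stand-in for the block interpolation variance; the real model
    # would krige each simulated block from the drill-hole configuration.
    return np.sum((config - scenario) ** 2)

def evolve(pop, fitness, keep=5, sigma=0.1):
    # One GA generation: rank by fitness, keep elites, mutate to refill.
    idx = np.argsort(fitness)
    elites = pop[idx[:keep]]
    children = elites[rng.integers(0, keep, len(pop) - keep)]
    children = children + rng.normal(0, sigma, children.shape)
    return np.vstack([elites, children])

n_holes, n_pop, n_gen = 4, 20, 50
configs = rng.uniform(0, 1, (n_pop, n_holes))    # space 1: drill configurations
scenarios = rng.uniform(0, 1, (n_pop, n_holes))  # space 2: orebody simulations

for _ in range(n_gen):
    # Space 2 maximizes the variance against the current best configuration.
    worst = [max(interp_variance(c, s) for s in scenarios) for c in configs]
    best_cfg = configs[np.argmin(worst)]
    scenarios = evolve(scenarios, [-interp_variance(best_cfg, s) for s in scenarios])
    # Space 1 minimizes its worst case over the updated scenarios.
    worst = [max(interp_variance(c, s) for s in scenarios) for c in configs]
    configs = evolve(configs, worst)

print("min-max value:", min(max(interp_variance(c, s) for s in scenarios) for c in configs))
```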
Stochastic Simulation of Biomolecular Networks in Dynamic Environments
Voliotis, Margaritis; Thomas, Philipp; Grima, Ramon; Bowsher, Clive G.
2016-01-01
Simulation of biomolecular networks is now indispensable for studying biological systems, from small reaction networks to large ensembles of cells. Here we present a novel approach for stochastic simulation of networks embedded in the dynamic environment of the cell and its surroundings. We thus sample trajectories of the stochastic process described by the chemical master equation with time-varying propensities. A comparative analysis shows that existing approaches can either fail dramatically, or else can impose impractical computational burdens due to numerical integration of reaction propensities, especially when cell ensembles are studied. Here we introduce the Extrande method which, given a simulated time course of dynamic network inputs, provides a conditionally exact and several orders-of-magnitude faster simulation solution. The new approach makes it feasible to demonstrate—using decision-making by a large population of quorum sensing bacteria—that robustness to fluctuations from upstream signaling places strong constraints on the design of networks determining cell fate. Our approach has the potential to significantly advance both understanding of molecular systems biology and design of synthetic circuits. PMID:27248512
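The core of Extrande is a thinning (rejection) step that avoids numerical integration of time-varying propensities. The sketch below follows the published idea under an assumed interface, not the authors' code: `propensities(x, t)` returns the reaction rate vector, `bound(x, t, L)` must dominate their sum over the look-ahead window, and the birth-death example network with a loose heuristic bound is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def extrande(x0, t_end, propensities, stoich, bound, horizon):
    """Thinning scheme for the chemical master equation with time-varying rates."""
    t, x = 0.0, np.array(x0, dtype=float)
    ts, xs = [t], [x.copy()]
    while t < t_end:
        L = horizon
        B = bound(x, t, L)                 # upper bound on total propensity over [t, t+L]
        dt = rng.exponential(1.0 / B)
        if dt > L:                         # bound expired: advance clock, no reaction
            t += L
        else:
            t += dt
            a = propensities(x, t)
            u = rng.uniform(0.0, B)
            if u <= a.sum():               # accept: pick a reaction proportionally
                j = np.searchsorted(np.cumsum(a), u)
                x += stoich[j]
            # else: the 'extra' (rejected) channel fires and the state is unchanged
        ts.append(t); xs.append(x.copy())
    return np.array(ts), np.array(xs)

# Toy birth-death network with a sinusoidally varying birth rate.
k_on = lambda t: 5.0 + 4.0 * np.sin(2 * np.pi * t / 10.0)
props = lambda x, t: np.array([k_on(t), 0.1 * x[0]])
stoich = np.array([[1.0], [-1.0]])
# Loose heuristic bound for the sketch; Extrande requires a true upper bound.
bound = lambda x, t, L: 9.0 + 0.1 * (x[0] + 20.0)
ts, xs = extrande([20.0], 50.0, props, stoich, bound, horizon=1.0)
print(xs[-1])
```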
Siragusa, Enrico; Haiminen, Niina; Utro, Filippo; Parida, Laxmi
2017-10-09
Computer simulations can be used to study population genetic methods, models and parameters, as well as to predict potential outcomes; for example, in plant populations, the outcome of breeding operations can be studied using simulations. In-silico construction of populations with pre-specified characteristics is an important task in breeding optimization and other population genetic studies. We present two linear time Simulation using Best-fit Algorithms (SimBA) for two classes of problems, each co-fitting two distributions: SimBA-LD fits linkage disequilibrium and minimum allele frequency distributions, while SimBA-hap fits founder-haplotype and polyploid allele dosage distributions. An incremental gap-filling version of the previously introduced SimBA-LD is demonstrated here to accurately fit the target distributions, allowing efficient large-scale simulations. SimBA-hap accuracy and efficiency are demonstrated by simulating tetraploid populations with varying numbers of founder haplotypes; we evaluate both a linear time greedy algorithm and an optimal solution based on mixed-integer programming. SimBA is available at http://researcher.watson.ibm.com/project/5669.
NASA Astrophysics Data System (ADS)
Yousef, Adel K. M.; Taha, Ziad. A.; Shehab, Abeer A.
2011-01-01
This paper describes the development of a computer model used to analyze the heat flow during pulsed Nd:YAG laser spot welding of dissimilar metals: low carbon steel (1020) to aluminum alloy (6061). The model is built using the ANSYS FLUENT 3.6 software, with the simulated environment set up to match the experimental conditions as closely as possible. The simulation analysis is based on conduction heat transfer outside the keyhole, where no melting occurs. The effects of laser power and pulse duration were studied: three peak powers (1, 1.66, and 2.5 kW) were applied at constant pulse energy, and two pulse durations (4 and 8 ms) were applied at constant peak power. The transient temperature distribution and weld pool dimensions were predicted using the present simulation. It was found that the model can guide the choice of suitable laser parameters (pulse duration, peak power, and interaction time) for pulsed laser spot welding of dissimilar metals.
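As a rough illustration of the conduction-mode pulse heating the abstract describes, the following 1D explicit finite-difference sketch applies a surface flux for the pulse duration and then lets the slab cool. The material constants and pulse parameters are illustrative stand-ins, not the paper's FLUENT set-up.

```python
import numpy as np

# Minimal 1D explicit conduction sketch of a pulsed surface heat source.
k, rho, cp = 50.0, 7800.0, 500.0        # W/m.K, kg/m3, J/kg.K (mild-steel-like guesses)
alpha = k / (rho * cp)                  # thermal diffusivity
L, n = 2e-3, 101                        # 2 mm slab, grid points
dx = L / (n - 1)
dt = 0.4 * dx**2 / alpha                # explicit stability: dt <= dx^2 / (2*alpha)
peak_power, spot_r, t_pulse = 1000.0, 0.3e-3, 4e-3
q = peak_power / (np.pi * spot_r**2)    # W/m2 over the spot (crude 1D approximation)

T = np.full(n, 300.0)                   # initial temperature field (K)
t = 0.0
while t < 8e-3:
    Tn = T.copy()
    # Interior update: forward-time, centered-space diffusion.
    Tn[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    flux = q if t < t_pulse else 0.0    # pulse on for t_pulse, then off
    Tn[0] = Tn[1] + flux * dx / k       # surface flux boundary condition
    t += dt
    T = Tn
print(f"peak surface temperature ~ {T[0]:.0f} K")
```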
Efficacy of computer-based video and simulation in ultrasound-guided regional anesthesia training.
Woodworth, Glenn E; Chen, Elliza M; Horn, Jean-Louis E; Aziz, Michael F
2014-05-01
To determine the effectiveness of a short educational video and simulation on improvement of ultrasound (US) image acquisition and interpretation skills. Prospective, randomized study. University medical center. 28 anesthesia residents and community anesthesiologists with varied ultrasound experience were randomized to teaching video with interactive simulation or sham video groups. Participants were assessed preintervention and postintervention on their ability to identify the sciatic nerve and other anatomic structures on static US images, as well as their ability to locate the sciatic nerve with US on live models. Pretest written test scores correlated with reported US block experience (Kendall tau rank r = 0.47) and with live US scanning scores (r = 0.64). The teaching video and simulation significantly improved scores on the written examination (P < 0.001); however, they did not significantly improve live US scanning skills. A short educational video with interactive simulation significantly improved knowledge of US anatomy, but failed to improve hands-on performance of US scanning to localize the nerve. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Jones, William H.
1985-01-01
The Combined Aerodynamic and Structural Dynamic Problem Emulating Routines (CASPER) is a collection of data-base modification computer routines that can be used to simulate Navier-Stokes flow through realistic, time-varying internal flow fields. The Navier-Stokes equation used involves calculations in all three dimensions and retains all viscous terms. The only term neglected in the current implementation is gravitation. The solution approach is of an iterative, time-marching nature. Calculations are based on Lagrangian aerodynamic elements (aeroelements). It is assumed that the relationships between a particular aeroelement and its five nearest neighbor aeroelements are sufficient to make a valid simulation of Navier-Stokes flow on a small scale and that the collection of all small-scale simulations makes a valid simulation of a large-scale flow. In keeping with these assumptions, it must be noted that CASPER produces an imitation or simulation of Navier-Stokes flow rather than a strict numerical solution of the Navier-Stokes equation. CASPER is written to operate under the Parallel, Asynchronous Executive (PAX), which is described in a separate report.
Ren, Jiaping; Wang, Xinjie; Manocha, Dinesh
2016-01-01
We present a biologically plausible dynamics model to simulate swarms of flying insects. Our formulation, which is based on biological conclusions and experimental observations, is designed to simulate large insect swarms of varying densities. We use a force-based model that captures different interactions between the insects and the environment and computes collision-free trajectories for each individual insect. Furthermore, we model noise as a constructive force at the collective level and present a technique to generate noise-induced insect movements in a large swarm that are similar to those observed in real-world trajectories. We use a data-driven formulation that is based on pre-recorded insect trajectories. We also present a novel evaluation metric and a statistical validation approach that take into account various characteristics of insect motions. In practice, the combination of a curl noise function with our dynamics model generates realistic swarm simulations and emergent behaviors. We highlight its performance for simulating large flying swarms of midges, fruit flies, locusts and moths, and demonstrate many collective behaviors, including aggregation, migration, phase transition, and escape responses. PMID:27187068
Bias and inference from misspecified mixed-effect models in stepped wedge trial analysis.
Thompson, Jennifer A; Fielding, Katherine L; Davey, Calum; Aiken, Alexander M; Hargreaves, James R; Hayes, Richard J
2017-10-15
Many stepped wedge trials (SWTs) are analysed by using a mixed-effect model with a random intercept and fixed effects for the intervention and time periods (referred to here as the standard model). However, it is not known whether this model is robust to misspecification. We simulated SWTs with three groups of clusters and two time periods; one group received the intervention during the first period and two groups in the second period. We simulated period and intervention effects that were either common to all clusters or varied between clusters. Data were analysed with the standard model or with additional random effects for the period effect or intervention effect. In a second simulation study, we explored the weight given to within-cluster comparisons by simulating a larger intervention effect in the group of the trial that experienced both the control and intervention conditions and applying the three analysis models described previously. Across 500 simulations, we computed bias and confidence interval coverage of the estimated intervention effect. We found up to 50% bias in intervention effect estimates when period or intervention effects varied between clusters and were treated as fixed effects in the analysis. All misspecified models showed undercoverage of 95% confidence intervals, particularly the standard model. A large weight was given to within-cluster comparisons in the standard model. In the SWTs simulated here, mixed-effect models were highly sensitive to departures from the model assumptions, which can be explained by the high dependence on within-cluster comparisons. Trialists should consider including a random effect for time period in their SWT analysis model. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
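A minimal replicate of the first simulation study can be written with standard tools. In the sketch below the effect sizes, cluster counts and variance components are illustrative guesses rather than the paper's settings, and only the "standard model" (random intercept, fixed period and intervention effects) is fitted, using statsmodels' MixedLM.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)

# One simulated SWT replicate: 3 groups x 2 periods, with an intervention
# effect that varies between clusters (all effect sizes are illustrative).
rows = []
for g, switch in enumerate([0, 1, 1]):           # group 0 is treated from period 0
    for c in range(10):                          # 10 clusters per group
        u = rng.normal(0, 0.5)                   # random cluster intercept
        theta_c = 0.5 + rng.normal(0, 0.3)       # cluster-varying intervention effect
        for period in (0, 1):
            treated = int(period >= switch)
            for _ in range(20):                  # 20 subjects per cluster-period
                y = u + 0.2 * period + theta_c * treated + rng.normal()
                rows.append((f"g{g}c{c}", period, treated, y))
df = pd.DataFrame(rows, columns=["cluster", "period", "treated", "y"])

# 'Standard model': random intercept only, fixed period and intervention effects.
m = smf.mixedlm("y ~ period + treated", df, groups=df["cluster"]).fit()
print(m.params["treated"], m.bse["treated"])
```

Repeating this over many replicates and comparing the estimate and its confidence interval against the true average effect reproduces the bias and coverage computations described above.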
Calculations of High-Temperature Jet Flow Using Hybrid Reynolds-Average Navier-Stokes Formulations
NASA Technical Reports Server (NTRS)
Abdol-Hamid, Khaled S.; Elmiligui, Alaa; Girimaji, Sharath S.
2008-01-01
Two multiscale-type turbulence models are implemented in the PAB3D solver. The models are based on modifying the Reynolds-averaged Navier-Stokes equations. The first scheme is a hybrid Reynolds-averaged Navier-Stokes/large-eddy-simulation model using the two-equation k-epsilon model with a Reynolds-averaged Navier-Stokes/large-eddy-simulation transition function dependent on grid spacing and the computed turbulence length scale. The second scheme is a modified version of the partially averaged Navier-Stokes model in which the unresolved kinetic energy parameter f_k is allowed to vary as a function of grid spacing and the turbulence length scale. This parameter is estimated based on a novel two-stage procedure to efficiently estimate the level of scale resolution possible for a given flow on a given grid for partially averaged Navier-Stokes. It has been found that the prescribed scale resolution can play a major role in obtaining accurate flow solutions. The parameter f_k varies between zero and one and is equal to one in the viscous sublayer and when the Reynolds-averaged Navier-Stokes turbulent viscosity becomes smaller than the large-eddy-simulation viscosity. The formulation, usage methodology, and validation examples are presented to demonstrate the enhancement of PAB3D's time-accurate turbulence modeling capabilities. The accurate simulation of flow and turbulent quantities will provide a valuable tool for accurate jet noise predictions. Solutions from these models are compared with Reynolds-averaged Navier-Stokes results and experimental data for high-temperature jet flows. The current results show promise for the capability of hybrid Reynolds-averaged Navier-Stokes/large-eddy simulation and partially averaged Navier-Stokes in simulating such flow phenomena.
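The grid dependence of the unresolved-kinetic-energy parameter can be illustrated with the estimate commonly used in the PANS literature, f_k ~ c*(Delta/L_t)^(2/3) with L_t = k^(3/2)/epsilon. This is a hedged stand-in for the paper's two-stage procedure, not its actual estimator.

```python
import numpy as np

def f_k(delta, k, eps, c=1.0):
    """Grid-dependent unresolved-kinetic-energy parameter, following the
    common PANS estimate f_k ~ c*(Delta/L_t)^(2/3), clipped to (0, 1].
    Illustrative only; the paper's two-stage estimator is more involved."""
    L_t = k**1.5 / eps                 # turbulence length scale
    return np.clip(c * (delta / L_t) ** (2.0 / 3.0), 1e-3, 1.0)

# Fine grid relative to L_t -> more scales resolved (smaller f_k).
print(f_k(delta=0.01, k=1.0, eps=10.0))   # L_t = 0.1, f_k ~ 0.22
print(f_k(delta=0.2,  k=1.0, eps=10.0))   # coarse grid, f_k clips to 1
```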
NASA Astrophysics Data System (ADS)
Bednar, Earl; Drager, Steven L.
2007-04-01
The objective of quantum information processing is to harness the paradigm shift offered by quantum computing to solve classically hard, computationally challenging problems. Problems of interest include rapid image processing, rapid optimization of logistics, protecting information, secure distributed simulation, and massively parallel computation. Currently, an important obstacle is that quantum computers are difficult to realize because of poor scalability and high error rates. We have therefore supported the development of Quantum eXpress and QuIDD Pro, two quantum computer simulators running on classical computers for the development and testing of new quantum algorithms and processes. This paper examines the different methods used by these two quantum computing simulators. It reviews both simulators, highlighting each simulator's background, interface, and special features, and demonstrates the implementation of current quantum algorithms on each simulator. It concludes with summary comments on both simulators.
NASA Astrophysics Data System (ADS)
Cantero, Francisco; Castro-Orgaz, Oscar; Garcia-Marín, Amanda; Ayuso, José Luis; Dey, Subhasish
2015-10-01
Is the energy equation for gradually-varied flow the best approximation for free surface profile computations in river flows? Determination of flood inundation in rivers and natural waterways is based on the hydraulic computation of flow profiles. This is usually done using energy-based gradually-varied flow models, like HEC-RAS, which adopts a vertical division method for discharge prediction in compound channel sections. However, this discharge prediction method is not so accurate in light of advancements over the last three decades. This paper first presents a study of the impact of discharge prediction on gradually-varied flow computations by comparing thirteen different methods for compound channels, applying both the energy and momentum equations. The discharge, velocity distribution coefficients, specific energy, momentum and flow profiles are determined. After the study of gradually-varied flow predictions, a new theory is developed to produce higher-order energy and momentum equations for rapidly-varied flow in compound channels. These generalized equations describe the flow profiles with more generality than the gradually-varied flow computations. The results show that momentum-based models are in general more accurate for computations of flow in compound channels, whereas the new theory developed for rapidly-varied flow opens a research direction so far not investigated in flows through compound channels.
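For readers unfamiliar with gradually-varied flow computations, the sketch below integrates the classic profile equation dy/dx = (S0 - Sf)/(1 - Fr^2) for a single wide rectangular channel with Manning friction; the compound-channel methods compared in the paper add a discharge-subdivision step on top of this backbone. All channel parameters are illustrative.

```python
import numpy as np

# Minimal gradually-varied-flow (M1 backwater) profile for a wide
# rectangular channel; single section, illustrative parameters.
g, n_man = 9.81, 0.03          # gravity (m/s2), Manning roughness
q, S0 = 2.0, 0.001             # unit discharge (m2/s), bed slope

def dydx(y):
    Fr2 = q**2 / (g * y**3)                    # Froude number squared
    Sf = (n_man * q)**2 / y**(10.0 / 3.0)      # Manning friction slope (wide channel)
    return (S0 - Sf) / (1.0 - Fr2)

# Integrate upstream from a downstream control depth using explicit Euler.
y, x, dx = 2.5, 0.0, -10.0
profile = [(x, y)]
for _ in range(200):
    y += dydx(y) * dx
    x += dx
    profile.append((x, y))
print(profile[-1])   # depth relaxes toward normal depth upstream
```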
Bowtie filters for dedicated breast CT: Theory and computational implementation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kontson, Kimberly, E-mail: Kimberly.Kontson@fda.hhs.gov; Jennings, Robert J.
Purpose: To design bowtie filters with improved properties for dedicated breast CT to improve image quality and reduce dose to the patient. Methods: The authors present three different bowtie filters designed for a cylindrical 14-cm diameter phantom with a uniform composition of 40/60 breast tissue, which vary in their design objectives and performance improvements. Bowtie design #1 is based on single material spectral matching and produces nearly uniform spectral shape for radiation incident upon the detector. Bowtie design #2 uses the idea of basis material decomposition to produce the same spectral shape and intensity at the detector, using two different materials. Bowtie design #3 eliminates the beam hardening effect in the reconstructed image by adjusting the bowtie filter thickness so that the effective attenuation coefficient for every ray is the same. All three designs are obtained using analytical computational methods and linear attenuation coefficients. Thus, the designs do not take into account the effects of scatter. The authors considered this to be a reasonable approach to the filter design problem since the use of Monte Carlo methods would have been computationally intensive. The filter profiles for a cone-angle of 0° were used for the entire length of each filter because the differences between those profiles and the correct cone-beam profiles for the cone angles in our system are very small, and the constant profiles allowed construction of the filters with the facilities available to us. For evaluation of the filters, we used Monte Carlo simulation techniques and the full cone-beam geometry. Images were generated with and without each bowtie filter to analyze the effect on dose distribution, noise uniformity, and contrast-to-noise ratio (CNR) homogeneity. Line profiles through the reconstructed images generated from the simulated projection images were also used as validation for the filter designs. Results: Examples of the three designs are presented. Initial verification of performance of the designs was done using analytical computations of HVL, intensity, and effective attenuation coefficient behind the phantom as a function of fan-angle with a cone-angle of 0°. The performance of the designs depends only weakly on incident spectrum and tissue composition. For all designs, the dynamic range requirement on the detector was reduced compared to the no-bowtie-filter case. Further verification of the filter designs was achieved through analysis of reconstructed images from simulations. Simulation data also showed that the use of our bowtie filters can reduce peripheral dose to the breast by 61% and provide uniform noise and CNR distributions. The bowtie filter design concepts validated in this work were then used to create a computational realization of a 3D anthropomorphic bowtie filter capable of achieving a constant effective attenuation coefficient behind the entire field-of-view of an anthropomorphic breast phantom. Conclusions: Three different bowtie filter designs that vary in performance improvements were described and evaluated using computational and simulation techniques. Results indicate that the designs are robust against variations in breast diameter, breast composition, and tube voltage, and that the use of these filters can reduce patient dose and improve image quality compared to the no-bowtie-filter case.
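Design #3 can be illustrated in a few lines: for a cylindrical phantom, choose the filter thickness at each fan angle so that filter plus phantom present the same effective attenuation along every ray. The attenuation coefficients, geometry and monoenergetic treatment below are assumptions for illustration, not the paper's design values.

```python
import numpy as np

# Sketch of bowtie design #3: choose filter thickness t(theta) so that
# mu_f*t + mu_p*chord is identical for every fan angle (monoenergetic
# approximation; all coefficients and distances are assumed values).
mu_p = 0.02        # phantom (breast-tissue-like) attenuation, 1/mm
mu_f = 0.05        # filter material attenuation, 1/mm
R, d = 70.0, 650.0 # phantom radius, source-to-isocenter distance (mm)
t_min = 1.0        # minimum filter thickness at the central ray (mm)

theta = np.linspace(-np.arcsin(R / d), np.arcsin(R / d), 201)
s = d * np.sin(theta)                          # ray distance from isocenter
chord = 2.0 * np.sqrt(np.clip(R**2 - s**2, 0.0, None))   # path through phantom
t = t_min + (mu_p / mu_f) * (chord[len(theta) // 2] - chord)

# Every ray now sees the same total attenuation:
total = mu_f * t + mu_p * chord
print(total.min(), total.max())                # constant across fan angles
```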
NASA Astrophysics Data System (ADS)
Hochgraf, Kelsey
Auralization methods have been used for a long time to simulate the acoustics of a concert hall for different seat positions. The goal of this thesis was to apply the concept of auralization to a larger audience area that the listener could walk through to compare differences in acoustics for a wide range of seat positions. For this purpose, the acoustics of Rensselaer's Experimental Media and Performing Arts Center (EMPAC) Concert Hall were simulated to create signals for a 136-channel wave field synthesis (WFS) system located at Rensselaer's Collaborative Research Augmented Immersive Virtual Environment (CRAIVE) Laboratory. By allowing multiple people to dynamically experience the concert hall's acoustics at the same time, this research gained perspective on what is important for achieving objective accuracy and subjective plausibility in an auralization. A finite difference time domain (FDTD) simulation on a three-dimensional face-centered cubic grid, combined at a crossover frequency of 800 Hz with a CATT-Acoustic(TM) simulation, was found to have a reverberation time, direct-to-reverberant sound energy ratio, and early reflection pattern that more closely matched measured data from the hall compared to a CATT-Acoustic(TM) simulation and other hybrid simulations. In the CRAIVE lab, nine experienced listeners found all hybrid auralizations (with varying source location, grid resolution, crossover frequency, and number of loudspeakers) to be more perceptually plausible than the CATT-Acoustic(TM) auralization. The FDTD simulation required two days to compute, while the CATT-Acoustic(TM) simulation required three separate TUCT(TM) computations, each taking four hours, to accommodate the large number of receivers. Given the perceptual advantages realized with WFS for auralization of a large, inhomogeneous sound field, it is recommended that hybrid simulations be used in the future to achieve more accurate and plausible auralizations. Predictions are made for a parallelized version of the simulation code that could achieve such auralizations in less than one hour, making the tool practical for everyday application.
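The low/high-band merge of a wave-based and a geometrical-acoustics impulse response can be sketched as a crossover filter pair. The Linkwitz-Riley realization (a squared Butterworth) and the stand-in signals below are assumptions for illustration, not the thesis code.

```python
import numpy as np
from scipy.signal import butter, sosfilt

# Merge a wave-based (FDTD) impulse response below 800 Hz with a
# geometrical-acoustics one above, via 4th-order Linkwitz-Riley filters.
fs, fc = 48000, 800.0
sos_lo = butter(2, fc, "low", fs=fs, output="sos")
sos_hi = butter(2, fc, "high", fs=fs, output="sos")

def lr4(x, sos):
    # 4th-order Linkwitz-Riley = the same 2nd-order Butterworth applied twice.
    return sosfilt(sos, sosfilt(sos, x))

rng = np.random.default_rng(3)
decay = np.exp(-np.arange(fs) / (0.5 * fs))
ir_fdtd = rng.normal(size=fs) * decay    # stand-in low-band impulse response
ir_catt = rng.normal(size=fs) * decay    # stand-in high-band impulse response

ir_hybrid = lr4(ir_fdtd, sos_lo) + lr4(ir_catt, sos_hi)
```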
Large eddy simulation of the FDA benchmark nozzle for a Reynolds number of 6500.
Janiga, Gábor
2014-04-01
This work investigates the flow in a benchmark nozzle model of an idealized medical device proposed by the FDA using computational fluid dynamics (CFD). Earlier work showed that proper modeling of the transitional flow features is particularly challenging, leading to large discrepancies and inaccurate predictions from the different research groups using Reynolds-averaged Navier-Stokes (RANS) modeling. In spite of the relatively simple, axisymmetric computational geometry, the resulting turbulent flow is fairly complex and non-axisymmetric, in particular due to the sudden expansion, and cannot be well predicted with simple modeling approaches. Due to the varying diameters and flow velocities encountered in the nozzle, different typical flow regions and regimes can be distinguished, from laminar to transitional and weakly turbulent. The purpose of the present work is to re-examine the FDA-CFD benchmark nozzle model at a Reynolds number of 6500 using large eddy simulation (LES). The LES results are compared with published experimental data obtained by Particle Image Velocimetry (PIV), and excellent agreement is observed for the temporally averaged flow velocities. Different flow regimes are characterized by computing the temporal energy spectra at different locations along the main axis. Copyright © 2014 Elsevier Ltd. All rights reserved.
Direct Numerical Simulations of a Full Stationary Wind-Turbine Blade
NASA Astrophysics Data System (ADS)
Qamar, Adnan; Zhang, Wei; Gao, Wei; Samtaney, Ravi
2014-11-01
Direct numerical simulation of flow past a full stationary wind-turbine blade is carried out at a Reynolds number Re = 10,000 for angles of attack of 0° and 5°. The study is intended to create a DNS database for verification of solvers and turbulence models used in wind-turbine modeling applications. The full blade comprises a circular cylinder base attached to a spanwise varying airfoil cross-section profile (without twist). An overlapping composite grid technique is utilized to perform these DNS computations, which permits block structure in the mapped computational space. Different flow shedding regimes are observed along the blade length: von Karman shedding occurs in the cylinder shaft region of the blade, while near-body shear layer breakdown is observed along the airfoil cross-section. A long tip vortex originates from the blade tip region and exits the computational domain without being perturbed. Laminar to turbulent flow transition is observed along the blade length. The amplitude of the turbulent fluctuations decreases along the blade length, and the flow remains laminar in the vicinity of the blade tip. The Strouhal number is found to decrease monotonically along the blade length. Average lift and drag coefficients are also reported for the cases investigated. Supported by funding under a KAUST OCRF-CRG grant.
Noise prediction of a subsonic turbulent round jet using the lattice-Boltzmann method
Lew, Phoi-Tack; Mongeau, Luc; Lyrintzis, Anastasios
2010-01-01
The lattice-Boltzmann method (LBM) was used to study the far-field noise generated from a Mach number Mj = 0.4, unheated, turbulent axisymmetric jet. A commercial code based on the LBM kernel was used to simulate the turbulent flow exhausting from a pipe 10 jet radii in length. Near-field flow results such as jet centerline velocity decay rates and turbulence intensities were in agreement with experimental results and with comparable LES studies. The predicted far-field sound pressure levels were within 2 dB of published experimental results. Weak unphysical tones were present at high frequency in the computed radiated sound pressure spectra; these tones are believed to be due to spurious sound wave reflections at boundaries between regions of varying voxel resolution. These “VR tones” did not appear to bias the underlying broadband noise spectrum, and they did not affect the overall levels significantly. The LBM appears to be a viable approach, comparable in accuracy to large eddy simulations, for the problem considered. The main advantages of this approach over Navier–Stokes based finite difference schemes may be a reduced computational cost, ease of including the nozzle in the computational domain, and ease of investigating nozzles with complex shapes. PMID:20815448
Full cell simulation and the evaluation of the buffer system on air-cathode microbial fuel cell
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ou, Shiqi; Kashima, Hiroyuki; Aaron, Douglas S.
This paper presents a computational model of a single chamber, air-cathode MFC. The model considers losses due to mass transport, as well as biological and electrochemical reactions, in both the anode and cathode half-cells. Computational fluid dynamics and Monod-Nernst analysis are incorporated into the reactions for the anode biofilm and cathode Pt catalyst and biofilm. The integrated model provides a macro-perspective of the interrelation between the anode and cathode during power production, while incorporating microscale contributions of mass transport within the anode and cathode layers. Model considerations include the effects of pH (H+/OH- transport) and electric field-driven migration on concentration overpotential, effects of various buffers and various amounts of buffer on the pH in the whole reactor, and overall impacts on the power output of the MFC. The simulation results fit the experimental polarization and power density curves well. Further, this model provides insight regarding mass transport at varying current density regimes and quantitative delineation of overpotentials at the anode and cathode. Altogether, this comprehensive simulation is designed to accurately predict MFC performance based on fundamental fluid and kinetic relations and guide optimization of the MFC system.
Kumar, M Senthil; Schwartz, Russell
2010-12-09
Virus capsid assembly has been a key model system for studies of complex self-assembly but it does pose some significant challenges for modeling studies. One important limitation is the difficulty of determining accurate rate parameters. The large size and rapid assembly of typical viruses make it infeasible to directly measure coat protein binding rates or deduce them from the relatively indirect experimental measures available. In this work, we develop a computational strategy to deduce coat-coat binding rate parameters for viral capsid assembly systems by fitting stochastic simulation trajectories to experimental measures of assembly progress. Our method combines quadratic response surface and quasi-gradient descent approximations to deal with the high computational cost of simulations, stochastic noise in simulation trajectories and limitations of the available experimental data. The approach is demonstrated on a light scattering trajectory for a human papillomavirus (HPV) in vitro assembly system, showing that the method can provide rate parameters that produce accurate curve fits and are in good concordance with prior analysis of the data. These fits provide an insight into potential assembly mechanisms of the in vitro system and give a basis for exploring how these mechanisms might vary between in vitro and in vivo assembly conditions.
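A toy version of this fitting strategy, with a cheap noisy stand-in for the simulation-versus-light-scattering discrepancy, shows how a local quadratic response surface smooths stochastic noise before each quasi-gradient step. All constants, and the two-parameter loss itself, are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def sim_loss(rates):
    # Stand-in for the discrepancy between a stochastic assembly simulation
    # and the light-scattering curve; noisy and, in the real setting, expensive.
    true = np.array([0.8, -0.5])
    return np.sum((rates - true) ** 2) + 0.05 * rng.normal()

def quad_surface_step(center, radius=0.3, n=25, lr=0.2):
    """Fit a local quadratic response surface to noisy losses and take one
    quasi-gradient descent step toward its minimizer (hedged sketch)."""
    X = center + rng.uniform(-radius, radius, (n, 2))
    y = np.array([sim_loss(x) for x in X])
    # Design matrix for a full quadratic in two rate parameters:
    # [1, x1, x2, x1^2, x2^2, x1*x2]
    A = np.column_stack([np.ones(n), X, X**2, X[:, :1] * X[:, 1:]])
    c = np.linalg.lstsq(A, y, rcond=None)[0]
    # Gradient of the fitted surface evaluated at the center.
    grad = np.array([c[1] + 2 * c[3] * center[0] + c[5] * center[1],
                     c[2] + 2 * c[4] * center[1] + c[5] * center[0]])
    return center - lr * grad

theta = np.array([0.0, 0.0])
for _ in range(30):
    theta = quad_surface_step(theta)
print(theta)   # approaches the 'true' rate parameters despite the noise
```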
NASA Astrophysics Data System (ADS)
Noble, David R.; Georgiadis, John G.; Buckius, Richard O.
1996-07-01
The lattice Boltzmann method (LBM) is used to simulate flow in an infinite periodic array of octagonal cylinders. Results are compared with those obtained by a finite difference (FD) simulation solved in terms of streamfunction and vorticity using an alternating direction implicit scheme. Computed velocity profiles are compared along lines common to both the lattice Boltzmann and finite difference grids. Along all such slices, both streamwise and transverse velocity predictions agree to within 0.5% of the average streamwise velocity. The local shear on the surface of the cylinders also compares well, with the only deviations occurring in the vicinity of the corners of the cylinders, where the slope of the shear is discontinuous. When a constant dimensionless relaxation time is maintained, the LBM exhibits the same convergence behaviour as the FD algorithm, with the time step increasing as the square of the grid size. By adjusting the relaxation time such that a constant Mach number is achieved, the time step of the LBM varies linearly with the grid size. The efficiency of the LBM on the CM-5 parallel computer at the National Center for Supercomputing Applications (NCSA) was evaluated by examining each part of the algorithm. Overall, a speed of 13.9 GFLOPS was obtained using 512 processors for a domain size of 2176×2176.
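The two time-step scalings quoted above follow directly from lattice-unit bookkeeping: holding the relaxation time fixed pins the lattice viscosity, so dt scales as dx^2, while holding the lattice velocity (Mach number) fixed gives dt scaling linearly with dx. A minimal sketch, with generic symbols not tied to any particular code:

```python
# Lattice-unit bookkeeping for the two LBM refinement strategies.
cs2 = 1.0 / 3.0            # lattice speed of sound squared (e.g., D2Q9)
nu_phys, u_phys = 1e-4, 0.1  # physical viscosity (m2/s) and velocity (m/s), illustrative

def dt_constant_tau(dx, tau=0.8):
    # Fixed relaxation time: nu_lattice = cs2*(tau - 1/2) is fixed,
    # and matching nu_phys = nu_lattice*dx^2/dt forces dt ~ dx^2.
    nu_lattice = cs2 * (tau - 0.5)
    return nu_lattice * dx**2 / nu_phys

def dt_constant_mach(dx, u_lattice=0.05):
    # Fixed lattice velocity (Mach number): u_phys = u_lattice*dx/dt
    # forces dt ~ dx.
    return u_lattice * dx / u_phys

for dx in (1e-2, 5e-3, 2.5e-3):
    print(dx, dt_constant_tau(dx), dt_constant_mach(dx))
```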
Hybrid ray-FDTD model for the simulation of the ultrasonic inspection of CFRP parts
NASA Astrophysics Data System (ADS)
Jezzine, Karim; Ségur, Damien; Ecault, Romain; Dominguez, Nicolas; Calmon, Pierre
2017-02-01
Carbon Fiber Reinforced Polymers (CFRP) are commonly used in structural parts in the aeronautic industry to reduce the weight of aircraft while maintaining high mechanical performance. Simulation of the ultrasonic inspection of these parts must contend with the highly heterogeneous and anisotropic character of these materials. To model the propagation of ultrasound in these composite structures, we propose two complementary approaches. The first is based on a ray model predicting the propagation of ultrasound in an anisotropic effective medium obtained by homogenization of the material. The ray model is designed to deal with possibly curved parts and the resulting continuously varying anisotropic orientations. The second approach is based on coupling the ray model with a finite-difference time-domain (FDTD) scheme. The ray model handles the ultrasonic propagation between the transducer and the FDTD computation zone that surrounds the composite part; in this way, computational efficiency is preserved and the scattering of ultrasound by the composite structure can be predicted. Inspections of flat or curved composite panels, as well as stiffeners, can be performed. The models have been implemented in the CIVA software platform and compared to experiments. We also present an application of the simulation to the performance demonstration of the adaptive inspection technique SAUL (Surface Adaptive Ultrasound).
A novel adaptive algorithm for 3D finite element analysis to model extracortical bone growth.
Cheong, Vee San; Blunn, Gordon W; Coathup, Melanie J; Fromme, Paul
2018-02-01
Extracortical bone growth with osseointegration of bone onto the shaft of massive bone tumour implants is an important clinical outcome for long-term implant survival. A new computational algorithm combining geometrical shape changes and bone adaptation in 3D Finite Element simulations has been developed, using a soft tissue envelope mesh, a novel concept of osteoconnectivity, and bone remodelling theory. The effects of varying the initial tissue density, spatial influence function and time step were investigated. The methodology demonstrated good correspondence to radiological results for a segmental prosthesis.
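Bone remodelling loops of this general kind update element densities from a mechanical stimulus between finite element solves. The sketch below uses a generic strain-energy-density law with a "lazy zone", a common choice in the remodelling literature and an assumption here, not the authors' algorithm; all constants are illustrative.

```python
import numpy as np

def remodel(rho, sed, k_ref=0.004, B=1.0, dt=1.0, lazy=0.1):
    """One remodelling update. rho: element densities (g/cm^3); sed: strain
    energy density per element from the FE solve. Law: drho/dt = B*(U/rho - k_ref)
    outside a 'lazy zone' of relative half-width `lazy`."""
    stimulus = sed / rho
    hi, lo = k_ref * (1 + lazy), k_ref * (1 - lazy)
    drive = np.where(stimulus > hi, stimulus - hi,
             np.where(stimulus < lo, stimulus - lo, 0.0))
    # Clip densities to physically plausible bounds (up to cortical bone).
    return np.clip(rho + B * drive * dt, 0.01, 1.74)

rho = np.full(5, 0.6)
sed = np.array([0.001, 0.002, 0.0024, 0.004, 0.006])   # would come from the FE solve
for _ in range(10):   # in a real loop, the FE model is re-solved each iteration
    rho = remodel(rho, sed)
print(rho)   # under-loaded elements resorb, over-loaded elements densify
```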
From coupled elementary units to the complexity of the glass transition.
Rehwald, Christian; Rubner, Oliver; Heuer, Andreas
2010-09-10
Supercooled liquids display fascinating properties upon cooling such as the emergence of dynamic length scales. Different models strongly vary with respect to the choice of the elementary subsystems as well as their mutual coupling. Here we show via computer simulations of a glass former that both ingredients can be identified via analysis of finite-size effects within the continuous-time random walk framework. The subsystems already contain complete information about thermodynamics and diffusivity, whereas the coupling determines structural relaxation and the emergence of dynamic length scales.
Nanopowder synthesis based on electric explosion technology
NASA Astrophysics Data System (ADS)
Kryzhevich, D. S.; Zolnikov, K. P.; Korchuganov, A. V.; Psakhie, S. G.
2017-10-01
A computer simulation of the bicomponent nanoparticle formation during the electric explosion of copper and nickel wires was carried out. The calculations were performed in the framework of the molecular dynamics method using many-body potentials of interatomic interaction. As a result of an electric explosion of dissimilar metal wires, bicomponent nanoparticles having different stoichiometry and a block structure can be formed. It is possible to control the process of destruction and the structure of the formed bicomponent nanoparticles by varying the distance between the wires and the loading parameters.
NASA Astrophysics Data System (ADS)
Mohamad, Firdaus; Wisnoe, Wirachman; Nasir, Rizal E. M.; Kuntjoro, Wahyu
2012-06-01
This paper discusses the effect of split drag flaps on the yawing motion of a BWB aircraft. Split drag flaps are used instead of a vertical tail and rudder to generate yawing moment. These surfaces are installed near the tips of the wing. Yawing moment is generated by the combination of side and drag forces produced when the split drag flaps are deflected. The study is carried out using a Computational Fluid Dynamics (CFD) approach at low subsonic speed (Mach 0.1) with various sideslip angles (β) and total flap deflections (δT). The split drag flap deflections are varied up to ±30°. Dimensionless coefficients such as the drag coefficient (CD), side force coefficient (CS), and yawing moment coefficient (Cn) were used to observe the effect of the split drag flaps. The simulation results show that the split drag flaps are effective from ±15° deflection (30° total deflection).
Harik, Polina; Cuddy, Monica M; O'Donovan, Seosaimhin; Murray, Constance T; Swanson, David B; Clauser, Brian E
2009-10-01
The 2000 Institute of Medicine report on patient safety brought renewed attention to the issue of preventable medical errors, and subsequently specialty boards and the National Board of Medical Examiners were encouraged to play a role in setting expectations around safety education. This paper examines potentially dangerous actions taken by examinees during the portion of the United States Medical Licensing Examination Step 3 that is particularly well suited to evaluating lapses in physician decision making, the Computer-based Case Simulation (CCS). Descriptive statistics and a general linear modeling approach were used to analyze dangerous actions ordered by 25,283 examinees who completed the CCS for the first time between November 2006 and January 2008. More than 20% of examinees ordered at least one dangerous action with the potential to cause significant patient harm, and the propensity to order dangerous actions may vary across clinical cases. The CCS format may provide a means of collecting important information about patient-care situations in which examinees may be more likely to commit dangerous actions and about the propensity of examinees to order dangerous tests and treatments.
Competition of information channels in the spreading of innovations
NASA Astrophysics Data System (ADS)
Kocsis, Gergely; Kun, Ferenc
2011-08-01
We study the spreading of information on technological developments in socioeconomic systems where the social contacts of agents are represented by a network of connections. In the model, agents learn about the existence and advantages of new innovations through the advertising activities of producers, followed by interagent information transfer. Computer simulations revealed that, as the strength of the external driving, the strength of the interagent coupling, and the topology of social contacts are varied, the model presents a complex behavior with interesting novel features: On the macrolevel the system exhibits logistic behavior typical of the diffusion of innovations, and the time evolution can be described analytically by an integral equation that captures the nucleation and growth of clusters of informed agents. On the microlevel, small clusters are found to be compact, with a crossover to fractal structures with increasing size. The distribution of cluster sizes has a power-law behavior with a crossover to a higher exponent when long-range social contacts are present in the system. Based on computer simulations we construct an approximate phase diagram of the model on a regular square lattice of agents.
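A minimal lattice rendition of the micro-rules (external advertising plus peer-to-peer transfer) reproduces the logistic macro-behavior. The rates and lattice size below are illustrative, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(5)

# Agents adopt through advertising (probability p per step) or from each
# informed neighbor (probability q per contact); square lattice, periodic BCs.
L, p, q, steps = 100, 0.002, 0.15, 200
informed = np.zeros((L, L), dtype=bool)
curve = []
for _ in range(steps):
    nbrs = (np.roll(informed, 1, 0).astype(int) + np.roll(informed, -1, 0)
            + np.roll(informed, 1, 1) + np.roll(informed, -1, 1))
    # Adoption probability: advertising OR at least one peer transfer succeeds.
    p_adopt = 1.0 - (1.0 - p) * (1.0 - q) ** nbrs
    informed |= rng.uniform(size=(L, L)) < p_adopt
    curve.append(informed.mean())
print(curve[::50])   # S-shaped (logistic-like) adoption curve
```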
Time Series Analysis for Spatial Node Selection in Environment Monitoring Sensor Networks
Bhandari, Siddhartha; Jurdak, Raja; Kusy, Branislav
2017-01-01
Wireless sensor networks are widely used in environmental monitoring. The number of sensor nodes to be deployed varies with the desired spatio-temporal resolution, and selecting an optimal number, position and sampling rate for an array of sensor nodes is a challenging question. Most current solutions are theoretical or simulation-based, tackling the problem with random field theory, computational geometry or computer simulations, which limits their applicability to a given sensor deployment. Using an empirical dataset from a mine rehabilitation monitoring sensor network, this work proposes a data-driven approach in which co-integrated time series analysis is used to select the number of sensors from a short-term deployment of a larger set of potential node positions. Analyses conducted on temperature time series show that 75% of sensors are co-integrated, and that using only 25% of the original nodes a complete dataset can be generated within a 0.5 °C average error bound. Our data-driven approach to sensor position selection is applicable to spatiotemporal monitoring of spatially correlated environmental parameters, minimizing deployment cost without compromising data resolution. PMID:29271880
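The selection idea can be sketched with synthetic data: generate sensor series that share a common stochastic driver, then greedily keep a sensor only if it is not cointegrated with one already kept. The Engle-Granger test, the 0.05 threshold, and the data generator are all assumptions for illustration.

```python
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(6)

# Synthetic 'temperatures': 12 sensors sharing an integrated (random-walk)
# driver plus sensor-specific noise and offsets.
n = 2000
walk = np.cumsum(rng.normal(0, 0.2, n))
series = [walk + rng.normal(0, 0.3, n) + b for b in rng.uniform(-1, 1, 12)]

kept = []
for i, s in enumerate(series):
    # Keep sensor i only if it is not cointegrated with any kept sensor
    # (Engle-Granger p-value above 0.05 = no evidence of cointegration).
    if all(coint(s, series[j])[1] > 0.05 for j in kept):
        kept.append(i)
print("representative sensors:", kept)
```

With a strongly shared driver, a single representative sensor survives the pruning; looser spatial correlation leaves more sensors in the kept set, mirroring the 75%/25% figures reported above.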