Sample records for subsequent computer simulations

  1. A virtual surgical training system that simulates cutting of soft tissue using a modified pre-computed elastic model.

    PubMed

    Toe, Kyaw Kyar; Huang, Weimin; Yang, Tao; Duan, Yuping; Zhou, Jiayin; Su, Yi; Teo, Soo-Kng; Kumar, Selvaraj Senthil; Lim, Calvin Chi-Wan; Chui, Chee Kong; Chang, Stephen

    2015-08-01

    This work presents a surgical training system that incorporates a cutting operation on soft tissue, simulated with a modified pre-computed linear elastic model in the Simulation Open Framework Architecture (SOFA) environment. A pre-computed linear elastic model used for the simulation of soft tissue deformation involves computing the compliance matrix a priori based on the topological information of the mesh. While this process may require a few minutes to several hours, depending on the number of vertices in the mesh, it needs to be computed only once and allows real-time computation of the subsequent soft tissue deformation. However, as the compliance matrix is based on the initial topology of the mesh, it does not allow any topological changes during simulation, such as cutting or tearing of the mesh. This work proposes a way to modify the pre-computed data by correcting the topological connectivity in the compliance matrix, without re-computing the compliance matrix, which is computationally expensive.
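
    The abstract's core idea lends itself to a small illustration. The sketch below is my own toy example (not the authors' SOFA code): it shows the precompute-once / reuse-every-frame pattern of a pre-computed linear elastic model, and why naively re-inverting the stiffness matrix after a cut is the expensive step the paper avoids.

    ```python
    # Toy sketch of a pre-computed elastic model (not the authors' implementation).
    import numpy as np

    def assemble_stiffness(n_nodes, springs, k=1.0):
        """1-D toy stiffness matrix built from (i, j) spring connections."""
        K = np.zeros((n_nodes, n_nodes))
        for i, j in springs:
            K[i, i] += k; K[j, j] += k
            K[i, j] -= k; K[j, i] -= k
        return K + np.eye(n_nodes) * 1e-6      # regularize the rigid-body mode

    springs = [(0, 1), (1, 2), (2, 3)]
    K = assemble_stiffness(4, springs)
    C = np.linalg.inv(K)                       # compliance matrix: expensive, computed once a priori

    f = np.array([0.0, 0.0, 0.0, 1.0])         # tool force applied at a node
    u = C @ f                                  # real-time deformation, reused every frame

    # A cut severing (1, 2) changes the topology. Re-inverting K at run time
    # (shown below only as a brute-force reference) is what the paper's
    # connectivity correction of C is designed to avoid.
    K_cut = assemble_stiffness(4, [(0, 1), (2, 3)])
    u_cut = np.linalg.inv(K_cut) @ f
    ```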

  2. A Parallel Sliding Region Algorithm to Make Agent-Based Modeling Possible for a Large-Scale Simulation: Modeling Hepatitis C Epidemics in Canada.

    PubMed

    Wong, William W L; Feng, Zeny Z; Thein, Hla-Hla

    2016-11-01

    Agent-based models (ABMs) are computer simulation models that define interactions among agents and simulate emergent behaviors that arise from the ensemble of local decisions. ABMs have been increasingly used to examine trends in infectious disease epidemiology. However, the main limitation of ABMs is the high computational cost of large-scale simulation. To improve the computational efficiency of large-scale ABM simulations, we built a parallelizable sliding region algorithm (SRA) for ABM and compared it to a nonparallelizable ABM. We developed a complex agent network and performed two simulations to model hepatitis C epidemics based on real demographic data from Saskatchewan, Canada. The first simulation used the SRA, which processed each postal code subregion in turn. The second simulation processed the entire population simultaneously. It was concluded that the parallelizable SRA showed computational time savings with comparable results in a province-wide simulation. Using the same method, the SRA can be generalized to perform a country-wide simulation. Thus, this parallel algorithm makes it possible to use ABMs for large-scale simulation with limited computational resources.
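
    As a rough illustration of the sliding-region idea (my own hypothetical sketch, not the authors' code), the population can be partitioned by subregion and each subregion updated independently, which makes the work trivially parallelizable:

    ```python
    # Hedged sketch: partition agents by subregion and update subregions in parallel.
    from collections import defaultdict
    from multiprocessing import Pool
    import random

    def step_subregion(agents):
        """One time step of a toy infection rule within a single subregion."""
        infected = sum(a["infected"] for a in agents)
        p = 0.01 * infected / max(len(agents), 1)
        for a in agents:
            if not a["infected"] and random.random() < p:
                a["infected"] = True
        return agents

    if __name__ == "__main__":
        population = [{"region": random.choice("ABCD"), "infected": random.random() < 0.01}
                      for _ in range(10_000)]
        regions = defaultdict(list)
        for a in population:
            regions[a["region"]].append(a)

        with Pool() as pool:                    # one task per subregion
            updated = pool.map(step_subregion, regions.values())
    ```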

  3. New Flutter Analysis Technique for Time-Domain Computational Aeroelasticity

    NASA Technical Reports Server (NTRS)

    Pak, Chan-Gi; Lung, Shun-Fat

    2017-01-01

    A new time-domain approach for computing flutter speed is presented. Based on the time-history result of aeroelastic simulation, the unknown unsteady aerodynamics model is estimated using a system identification technique. The full aeroelastic model is generated via coupling the estimated unsteady aerodynamic model with the known linear structure model. The critical dynamic pressure is computed and used in the subsequent simulation until the convergence of the critical dynamic pressure is achieved. The proposed method is applied to a benchmark cantilevered rectangular wing.
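
    The iteration the abstract describes can be pictured as a simple fixed-point loop (an illustrative sketch only; the inner function is a placeholder for the actual simulate / identify / couple steps):

    ```python
    # Hedged sketch of the outer convergence loop for the critical dynamic pressure.
    def estimate_critical_q(q_trial):
        """Placeholder for: run an aeroelastic time simulation at q_trial,
        identify the unsteady aerodynamic model from the response, couple it
        with the known linear structural model, and return the dynamic
        pressure at which the aeroelastic damping crosses zero."""
        return 0.5 * q_trial + 10.0            # toy contraction, for illustration only

    q = 5.0                                    # initial guess
    for _ in range(50):
        q_new = estimate_critical_q(q)
        if abs(q_new - q) < 1e-6 * max(abs(q), 1.0):
            break
        q = q_new
    print("converged critical dynamic pressure:", q)
    ```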

  4. Modeling cation/anion-water interactions in functional aluminosilicate structures.

    PubMed

    Richards, A J; Barnes, P; Collins, D R; Christodoulos, F; Clark, S M

    1995-02-01

    A need for the computer simulation of hydration/dehydration processes in functional aluminosilicate structures has been noted. Full and realistic simulations of these systems can be somewhat ambitious and require the aid of interactive computer graphics to identify key structural/chemical units, both in the devising of suitable water-ion simulation potentials and in the analysis of hydrogen-bonding schemes in the subsequent simulation studies. In this article, the former is demonstrated by the assembling of a range of essential water-ion potentials. These span the range of formal charges from +4e to -2e, and are evaluated in the context of three types of structure: a porous zeolite, calcium silicate cement, and layered clay. As an example of the latter, the computer graphics output from Monte Carlo computer simulation studies of hydration/dehydration in calcium-zeolite A is presented.

  5. Surrogates for numerical simulations; optimization of eddy-promoter heat exchangers

    NASA Technical Reports Server (NTRS)

    Patera, Anthony T.

    1993-01-01

    Although the advent of fast and inexpensive parallel computers has rendered numerous previously intractable calculations feasible, many numerical simulations remain too resource-intensive to be inserted directly into engineering optimization efforts. An attractive alternative to direct insertion considers models for computational systems: the expensive simulation is invoked only to construct and validate a simplified input-output model; this simplified input-output model then serves as a simulation surrogate in subsequent engineering optimization studies. A simple 'Bayesian-validated' statistical framework for the construction, validation, and purposive application of static computer simulation surrogates is presented. As an example, dissipation-transport optimization of laminar-flow eddy-promoter heat exchangers is considered: parallel spectral element Navier-Stokes calculations serve to construct and validate surrogates for the flowrate and Nusselt number; these surrogates then represent the originating Navier-Stokes equations in the ensuing design process.
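
    The surrogate workflow can be sketched in a few lines (illustrative only; the polynomial fit below stands in for the paper's Bayesian-validated surrogate and the toy function for the spectral-element Navier-Stokes solver):

    ```python
    # Hedged sketch: build, validate, and then optimize on a cheap surrogate.
    import numpy as np

    def expensive_simulation(x):                     # stand-in for a Navier-Stokes run
        return np.sin(3 * x) + 0.5 * x**2

    rng = np.random.default_rng(0)
    x_train = rng.uniform(0, 2, 20)                  # a limited budget of expensive runs
    coeffs = np.polyfit(x_train, expensive_simulation(x_train), deg=5)

    x_val = rng.uniform(0, 2, 10)                    # fresh points used only for validation
    max_err = np.max(np.abs(np.polyval(coeffs, x_val) - expensive_simulation(x_val)))

    x_grid = np.linspace(0, 2, 1000)                 # design optimization touches only the surrogate
    x_opt = x_grid[np.argmin(np.polyval(coeffs, x_grid))]
    print(f"validated max error {max_err:.3f}, surrogate optimum at x = {x_opt:.3f}")
    ```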

  6. Development of Hybrid Computer Programs for AAFSS/COBRA/COIN Weapons Effectiveness Studies. Volume I. Simulating Aircraft Maneuvers and Weapon Firing Runs.

    DTIC Science & Technology

    for the game. Subsequent duels, flown with single armed escorts, calculated reduction in losses and damage states. For the study, hybrid computer...6) a duel between a ground weapon, armed escort, and formation of lift aircraft. (Author)

  7. Instructional support and implementation structure during elementary teachers' science education simulation use

    NASA Astrophysics Data System (ADS)

    Gonczi, Amanda L.; Chiu, Jennifer L.; Maeng, Jennifer L.; Bell, Randy L.

    2016-07-01

    This investigation sought to identify patterns in elementary science teachers' computer simulation use, particularly implementation structures and instructional supports commonly employed by teachers. Data included video-recorded science lessons of 96 elementary teachers who used computer simulations in one or more science lessons. Results indicated teachers used a one-to-one student-to-computer ratio most often either during class-wide individual computer use or during a rotating station structure. Worksheets, general support, and peer collaboration were the most common forms of instructional support. The least common instructional support forms included lesson pacing, initial play, and a closure discussion. Students' simulation use was supported in the fewest ways during a rotating station structure. Results suggest that simulation professional development with elementary teachers needs to explicitly focus on implementation structures and instructional support to enhance participants' pedagogical knowledge and improve instructional simulation use. In addition, research is needed to provide theoretical explanations for the observed patterns that should subsequently be addressed in supporting teachers' instructional simulation use during professional development or in teacher preparation programs.

  8. Strategy and gaps for modeling, simulation, and control of hybrid systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rabiti, Cristian; Garcia, Humberto E.; Hovsapian, Rob

    2015-04-01

    The purpose of this report is to establish a strategy for modeling and simulation of candidate hybrid energy systems. Modeling and simulation are necessary to design, evaluate, and optimize the system's technical and economic performance. Accordingly, this report first establishes the simulation requirements for analyzing candidate hybrid systems. Simulation fidelity levels are established based on the temporal scale, real and synthetic data availability or needs, solution accuracy, and output parameters needed to evaluate case-specific figures of merit. The associated computational and co-simulation resources needed are then established, including physical models when needed, code assembly and integrated solution platforms, mathematical solvers, and data processing. This report attempts to describe the figures of merit, system requirements, and constraints that are necessary and sufficient to characterize the grid and hybrid systems behavior and market interactions. Loss of Load Probability (LOLP) and Effective Cost of Energy (ECE), as opposed to the standard Levelized Cost of Electricity (LCOE), are introduced as technical and economic indices for integrated energy system evaluations. Financial assessment methods are subsequently introduced for evaluation of non-traditional, hybrid energy systems. Algorithms for coupled and iterative evaluation of the technical and economic performance are subsequently discussed. This report further defines modeling objectives, computational tools, solution approaches, and real-time data collection and processing (in some cases using real test units) that will be required to model, co-simulate, and optimize: (a) energy system components (e.g., power generation unit, chemical process, electricity management unit), (b) system domains (e.g., thermal, electrical or chemical energy generation, conversion, and transport), and (c) systems control modules. Co-simulation of complex, tightly coupled, dynamic energy systems requires multiple simulation tools, potentially developed in several programming languages and resolved on separate time scales. Whereas further investigation and development of hybrid concepts will provide a more complete understanding of the joint computational and physical modeling needs, this report highlights areas in which co-simulation capabilities are warranted. The current development status, quality assurance, availability, and maintainability of simulation tools that are currently available for hybrid systems modeling are presented. Existing gaps in the modeling and simulation toolsets and development needs are subsequently discussed. This effort will feed into a broader Roadmap activity for designing, developing, and demonstrating hybrid energy systems.

  9. The Application of a Massively Parallel Computer to the Simulation of Electrical Wave Propagation Phenomena in the Heart Muscle Using Simplified Models

    NASA Technical Reports Server (NTRS)

    Karpoukhin, Mikhii G.; Kogan, Boris Y.; Karplus, Walter J.

    1995-01-01

    The simulation of heart arrhythmia and fibrillation is a very important and challenging task. The solution of these problems using sophisticated mathematical models is beyond the capabilities of modern supercomputers. To overcome these difficulties it is proposed to break the whole simulation problem into two tightly coupled stages: generation of the action potential using sophisticated models, and propagation of the action potential using simplified models. The well-known simplified models are compared and modified to bring the rate of depolarization and action potential duration restitution closer to reality. The modified method of lines is used to parallelize the computational process. The conditions for the appearance of 2D spiral waves after the application of a premature beat and the subsequent traveling of the spiral wave inside the simulated tissue are studied.

  10. Comments on "Use of conditional simulation in nuclear waste site performance assessment" by Carol Gotway

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Downing, D.J.

    1993-10-01

    This paper discusses Carol Gotway's paper, "The Use of Conditional Simulation in Nuclear Waste Site Performance Assessment." The paper centers on the use of conditional simulation and the use of geostatistical methods to simulate an entire field of values for subsequent use in a complex computer model. The issues of sampling designs for geostatistics, semivariogram estimation and anisotropy, the turning bands method for random field generation, and estimation of the cumulative distribution function are brought out.

  11. Hierarchical Boltzmann simulations and model error estimation

    NASA Astrophysics Data System (ADS)

    Torrilhon, Manuel; Sarna, Neeraj

    2017-08-01

    A hierarchical simulation approach for Boltzmann's equation should provide a single numerical framework in which a coarse representation can be used to compute gas flows as accurately and efficiently as in computational fluid dynamics, while subsequent refinement allows the result to be successively improved towards the full Boltzmann solution. We use a Hermite discretization, or moment equations, for the steady linearized Boltzmann equation as a proof of concept of such a framework. All representations of the hierarchy are rotationally invariant, and the numerical method is formulated on fully unstructured triangular and quadrilateral meshes using an implicit discontinuous Galerkin formulation. We demonstrate the performance of the numerical method on model problems which, in particular, highlight the relevance of the stability of boundary conditions on curved domains. The hierarchical nature of the method also allows model error estimates to be obtained by comparing subsequent representations. We present various model errors for a flow through a curved channel with obstacles.
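
    The "model error by comparing subsequent representations" idea can be written down compactly; the notation below is my own shorthand, not taken from the paper:

    ```latex
    % Hedged sketch: an indicator of the model error at hierarchy level M is the
    % distance between the solutions of two successive moment representations,
    % where u_M denotes the solution computed with the M-th member of the
    % Hermite/moment hierarchy; a small \eta_M signals that level M is adequate.
    \[
      \eta_M \;=\; \bigl\| u_{M+1} - u_{M} \bigr\|_{L^2(\Omega)}
    \]
    ```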

  12. Materials by numbers: Computations as tools of discovery

    PubMed Central

    Landman, Uzi

    2005-01-01

    Current issues pertaining to theoretical simulations of materials, with a focus on systems of nanometer-scale dimensions, are discussed. The use of atomistic simulations as high-resolution numerical experiments, enabling and guiding formulation and testing of analytic theoretical descriptions, is demonstrated through studies of the generation and breakup of nanojets, which have led to the derivation of a stochastic hydrodynamic description. Subsequently, I illustrate the use of computations and simulations as tools of discovery, with examples that include the self-organized formation of nanowires, the surprising nanocatalytic activity of small aggregates of gold that, in the bulk form, is notorious for being chemically inert, and the emergence of rotating electron molecules in two-dimensional quantum dots. I conclude with a brief discussion of some key challenges in nanomaterials simulations. PMID:15870210

  13. Computational Challenges in the Analysis of Petrophysics Using Microtomography and Upscaling

    NASA Astrophysics Data System (ADS)

    Liu, J.; Pereira, G.; Freij-Ayoub, R.; Regenauer-Lieb, K.

    2014-12-01

    Microtomography provides detailed 3D internal structures of rocks at micrometer to tens-of-nanometer resolution and is quickly turning into a new technology for studying the petrophysical properties of materials. An important step is the upscaling of these properties, since micron or sub-micron resolution can only be achieved on samples of millimeter scale or smaller. We present here a recently developed computational workflow for the analysis of microstructures, including the upscaling of material properties. Computations of properties are first performed using conventional material science simulations at the micro- to nano-scale. The subsequent upscaling of these properties is done by a novel renormalization procedure based on percolation theory. We have tested the workflow using different rock samples and biological and food science materials. We have also applied the technique to high-resolution time-lapse synchrotron CT scans. In this contribution we focus on the computational challenges that arise from the big-data problem of analyzing petrophysical properties and its subsequent upscaling. We discuss the following challenges: 1) Characterization of microtomography for extremely large data sets - our current capability. 2) Computational fluid dynamics simulations at the pore scale for permeability estimation - methods, computing cost, and accuracy. 3) Solid mechanical computations at the pore scale for estimating elasto-plastic properties - computational stability, cost, and efficiency. 4) Extracting critical exponents from derivative models for scaling laws - models, finite element meshing, and accuracy. Significant progress in each of these challenges is necessary to transform microtomography from its current status as a research problem into a robust computational big-data tool for multi-scale scientific and engineering problems.
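
    A minimal picture of the renormalization-style upscaling step (my own toy example; the actual procedure is percolation-based and more elaborate) is to repeatedly coarse-grain a binary pore map with a block rule:

    ```python
    # Toy sketch of renormalization-style upscaling of a binary pore map.
    import numpy as np

    rng = np.random.default_rng(1)
    pores = (rng.random((64, 64)) < 0.55).astype(int)      # 1 = pore, 0 = solid

    def coarsen(grid):
        """Coarse-grain 2x2 blocks with a majority-style rule."""
        h, w = grid.shape
        blocks = grid.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))
        return (blocks >= 2).astype(int)

    level = pores
    while level.shape[0] > 1:                              # iterate up the scales
        level = coarsen(level)
    print("upscaled pore indicator at the coarsest level:", bool(level[0, 0]))
    ```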

  14. Learning about the Unit Cell and Crystal Lattice with Computerized Simulations and Games: A Pilot Study

    ERIC Educational Resources Information Center

    Luealamai, Sutha; Panijpan, Bhinyo

    2012-01-01

    The authors have developed a computer-based learning module on the unit cell of various types of crystal. The module has two components: the virtual unit cell (VUC) part and the subsequent unit cell hunter part. The VUC is a virtual reality simulation for students to actively arrive at the unit cell from exploring, from a broad view, the crystal…

  15. Computer simulation of a geomagnetic substorm

    NASA Technical Reports Server (NTRS)

    Lyon, J. G.; Brecht, S. H.; Huba, J. D.; Fedder, J. A.; Palmadesso, P. J.

    1981-01-01

    A global two-dimensional simulation of a substormlike process occurring in earth's magnetosphere is presented. The results are consistent with an empirical substorm model - the neutral-line model. Specifically, the introduction of a southward interplanetary magnetic field forms an open magnetosphere. Subsequently, a substorm neutral line forms at about 15 earth radii or closer in the magnetotail, and plasma sheet thinning and plasma acceleration occur. Eventually the substorm neutral line moves tailward toward its presubstorm position.

  16. Modeling of bubble dynamics in relation to medical applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amendt, P.A.; London, R.A.; Strauss, M.

    1997-03-12

    In various pulsed-laser medical applications, strong stress transients can be generated in advance of vapor bubble formation. To better understand the evolution of stress transients and subsequent formation of vapor bubbles, two-dimensional simulations are presented in channel or cylindrical geometry with the LATIS (LAser TISsue) computer code. Differences with one-dimensional modeling are explored, and simulated experimental conditions for vapor bubble generation are presented and compared with data. 22 refs., 8 figs.

  17. HRLSim: a high performance spiking neural network simulator for GPGPU clusters.

    PubMed

    Minkovich, Kirill; Thibeault, Corey M; O'Brien, Michael John; Nogin, Aleksey; Cho, Youngkwan; Srinivasa, Narayan

    2014-02-01

    Modeling of large-scale spiking neural models is an important tool in the quest to understand brain function and subsequently create real-world applications. This paper describes a spiking neural network simulator environment called HRL Spiking Simulator (HRLSim). This simulator is suitable for implementation on a cluster of general purpose graphical processing units (GPGPUs). Novel aspects of HRLSim are described and an analysis of its performance is provided for various configurations of the cluster. With the advent of inexpensive GPGPU cards and compute power, HRLSim offers an affordable and scalable tool for design, real-time simulation, and analysis of large-scale spiking neural networks.

  18. Risk as Feelings in the Effect of Patient Outcomes on Physicians' Subsequent Treatment Decisions: A Randomized Trial and Manipulation Validation

    PubMed Central

    Hemmerich, Joshua A; Elstein, Arthur S; Schwarze, Margaret L; Moliski, Elizabeth G; Dale, William

    2013-01-01

    The present study tested predictions derived from the Risk as Feelings hypothesis about the effects of prior patients' negative treatment outcomes on physicians' subsequent treatment decisions. Two experiments at The University of Chicago, U.S.A., utilized a computer simulation of an abdominal aortic aneurysm (AAA) patient with enhanced realism to present participants with one of three experimental conditions: AAA rupture causing a watchful waiting death (WWD), perioperative death (PD), or a successful operation (SO), as well as the statistical treatment guidelines for AAA. Experiment 1 tested effects of these simulated outcomes on (n=76) laboratory participants' (university student sample) self-reported emotions, and their ratings of valence and arousal of the AAA rupture simulation and other emotion inducing picture stimuli. Experiment 2 tested two hypotheses: 1) that experiencing a patient WWD in the practice trial's experimental condition would lead physicians to choose surgery earlier, and 2) experiencing a patient PD would lead physicians to choose surgery later with the next patient. Experiment 2 presented (n=132) physicians (surgeons and geriatricians) with the same experimental manipulation and a second simulated AAA patient. Physicians then chose to either go to surgery or continue watchful waiting. The results of Experiment 1 demonstrated that the WWD experimental condition significantly increased anxiety, and was rated similarly to other negative and arousing pictures. The results of Experiment 2 demonstrated that, after controlling for demographics, baseline anxiety, intolerance for uncertainty, risk attitudes, and the influence of simulation characteristics, the WWD experimental condition significantly expedited decisions to choose surgery for the next patient. The results support the Risk as Feelings hypothesis on physicians' treatment decisions in a realistic AAA patient computer simulation. Bad outcomes affected emotions and decisions, even with statistical AAA rupture risk guidance present. These results suggest that bad patient outcomes cause physicians to experience anxiety and regret that influences their subsequent treatment decision-making for the next patient. PMID:22571890

  19. Advanced Simulation and Computing Fiscal Year 2016 Implementation Plan, Version 0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCoy, M.; Archer, B.; Hendrickson, B.

    2015-08-27

    The Stockpile Stewardship Program (SSP) is an integrated technical program for maintaining the safety, surety, and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational capabilities to support these programs. The purpose of this IP is to outline key work requirements to be performed and to control individual work activities within the scope of work. Contractors may not deviate from this plan without a revised WA or subsequent IP.

  20. Manned systems utilization analysis (study 2.1). Volume 5: Program listing for the LOVES computer code

    NASA Technical Reports Server (NTRS)

    Wray, S. T., Jr.

    1975-01-01

    The LOVES computer code developed to investigate the concept of space servicing operational satellites as an alternative to replacing expendable satellites or returning satellites to earth for ground refurbishment is presented. In addition to having the capability to simulate the expendable satellite operation and the ground refurbished satellite operation, the program is designed to simulate the logistics of space servicing satellites using an upper stage vehicle and/or the earth to orbit shuttle. The program not only provides for the initial deployment of the satellite but also simulates the random failure and subsequent replacement of various equipment modules comprising the satellite. The program has been used primarily to conduct trade studies and/or parametric studies of various space program operational philosophies.

  1. Efficiency analysis of numerical integrations for finite element substructure in real-time hybrid simulation

    NASA Astrophysics Data System (ADS)

    Wang, Jinting; Lu, Liqiao; Zhu, Fei

    2018-01-01

    The finite element (FE) method is a powerful tool and has been applied by investigators to real-time hybrid simulations (RTHSs). This study focuses on the computational efficiency, including the computational time and accuracy, of numerical integrations used to solve the FE numerical substructure in RTHSs. First, sparse matrix storage schemes are adopted to decrease the computational time of the FE numerical substructure. In this way, the task execution time (TET) decreases, allowing the scale of the numerical substructure model to increase. Subsequently, several commonly used explicit numerical integration algorithms, including the central difference method (CDM), the Newmark explicit method, the Chang method and the Gui-λ method, are comprehensively compared to evaluate their computational time in solving the FE numerical substructure. The CDM is better than the other explicit integration algorithms when the damping matrix is diagonal, while the Gui-λ (λ = 4) method is advantageous when the damping matrix is non-diagonal. Finally, the effect of time delay on the computational accuracy of RTHSs is investigated by simulating structure-foundation systems. Simulation results show that the influence of time delay on the displacement response becomes more pronounced as the mass ratio increases, and delay compensation methods may reduce the relative error of the displacement peak value to less than 5% even with a large time step and a large time delay.
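
    For reference, the central difference method mentioned above takes a particularly simple explicit form; the single-degree-of-freedom sketch below uses illustrative parameter values, not those of the paper's substructures:

    ```python
    # Hedged sketch: explicit central-difference update for m*a'' + c*a' + k*a = f.
    import numpy as np

    m, c, k, dt, steps = 1.0, 0.1, 100.0, 0.001, 5000
    u = np.zeros(steps)
    f = np.zeros(steps); f[0] = 1.0                        # impulse load
    # fictitious displacement at step -1 from the initial conditions (u0 = v0 = 0)
    u_m1 = u[0] - dt * 0.0 + 0.5 * dt**2 * (f[0] - k * u[0]) / m

    for n in range(steps - 1):
        u_nm1 = u_m1 if n == 0 else u[n - 1]
        lhs = m / dt**2 + c / (2 * dt)
        rhs = f[n] - (k - 2 * m / dt**2) * u[n] - (m / dt**2 - c / (2 * dt)) * u_nm1
        u[n + 1] = rhs / lhs                               # explicit update, no factorization per step
    ```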

  2. Improved failure prediction in forming simulations through pre-strain mapping

    NASA Astrophysics Data System (ADS)

    Upadhya, Siddharth; Staupendahl, Daniel; Heuse, Martin; Tekkaya, A. Erman

    2018-05-01

    The sensitivity of sheared edges of advanced high strength steel (AHSS) sheets to cracking during subsequent forming operations, and the difficulty of predicting this failure with any degree of accuracy using conventional FLC-based failure criteria, are major problems for the manufacturing industry. A possible method that allows an accurate prediction of edge cracks is simulation of the shearing operation and carryover of this model into a subsequent forming simulation. But even with an efficient combination of a solid-element shearing operation and a shell-element forming simulation, the need for a fine mesh and the resulting high computation time make this approach unviable from an industry point of view. The crack sensitivity of sheared edges is due to work hardening in the shear-affected zone (SAZ). A method to predict the plastic strains induced by the shearing process is to measure the hardness after shearing and calculate the ultimate tensile strength as well as the flow stress. In combination with the flow curve, the relevant strain data can be obtained. To eliminate the time-intensive shearing simulation otherwise necessary to obtain the strain data in the SAZ, a new pre-strain mapping approach is proposed. The pre-strains to be mapped are, hereby, determined from hardness values obtained in the proximity of the sheared edge. To investigate the performance of this approach, the ISO/TS 16630 hole expansion test was simulated with shell elements for different materials, whereby the pre-strains were mapped onto the edge of the hole. The hole expansion ratios obtained from such pre-strain-mapped simulations are in close agreement with the experimental results. Furthermore, the simulations can be carried out with no increase in computation time, making this an interesting and viable solution for predicting edge failure due to shearing.
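
    The hardness-to-pre-strain mapping can be illustrated with a toy flow curve (the Hollomon constants and the hardness-to-stress factor below are purely illustrative, not material data from the paper):

    ```python
    # Hedged sketch: convert edge hardness to flow stress, invert the flow curve
    # to an equivalent plastic pre-strain, then map it onto the shell-mesh edge.
    import numpy as np

    K_h, n_h = 800.0, 0.18                                 # toy Hollomon flow curve constants (MPa)

    def prestrain_from_hardness(hv):
        sigma = 3.0 * hv                                   # rough Vickers-hardness-to-stress factor
        return (sigma / K_h) ** (1.0 / n_h)                # invert sigma = K_h * eps**n_h

    distance_mm = np.array([0.05, 0.1, 0.2, 0.5, 1.0])     # distance from the sheared edge
    hardness_hv = np.array([260, 240, 215, 190, 180])
    prestrain = prestrain_from_hardness(hardness_hv)
    # 'prestrain' would then be interpolated onto the edge elements of the hole
    # before running the hole expansion (forming) simulation.
    ```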

  3. Solutions of the Taylor-Green Vortex Problem Using High-Resolution Explicit Finite Difference Methods

    NASA Technical Reports Server (NTRS)

    DeBonis, James R.

    2013-01-01

    A computational fluid dynamics code that solves the compressible Navier-Stokes equations was applied to the Taylor-Green vortex problem to examine the code's ability to accurately simulate the vortex decay and subsequent turbulence. The code, WRLES (Wave Resolving Large-Eddy Simulation), uses explicit central differencing to compute the spatial derivatives and explicit Low Dispersion Runge-Kutta methods for the temporal discretization. The flow was first studied and characterized using Bogey & Bailley's 13-point dispersion relation preserving (DRP) scheme. The kinetic energy dissipation rate, computed both directly and from the enstrophy field, vorticity contours, and the energy spectra are examined. Results are in excellent agreement with a reference solution obtained using a spectral method and provide insight into computations of turbulent flows. In addition, the following studies were performed: a comparison of 4th-, 8th-, 12th-order and DRP spatial differencing schemes, the effect of solution filtering on the results, the effect of large-eddy simulation sub-grid scale models, and the effect of high-order discretization of the viscous terms.
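
    For context, the two routes to the dissipation rate mentioned above are the standard incompressible relations used in Taylor-Green studies (written here in my own notation, not copied from the report):

    ```latex
    % Kinetic energy and its dissipation rate, computed directly and via enstrophy:
    \[
      E_k = \frac{1}{V}\int_V \tfrac{1}{2}\,\mathbf{u}\cdot\mathbf{u}\,\mathrm{d}V,
      \qquad
      \varepsilon = -\frac{\mathrm{d}E_k}{\mathrm{d}t},
      \qquad
      \varepsilon \approx 2\,\nu\,\Omega,
      \quad
      \Omega = \frac{1}{V}\int_V \tfrac{1}{2}\,\boldsymbol{\omega}\cdot\boldsymbol{\omega}\,\mathrm{d}V .
    \]
    ```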

  4. On the precision of aero-thermal simulations for TMT

    NASA Astrophysics Data System (ADS)

    Vogiatzis, Konstantinos; Thompson, Hugh

    2016-08-01

    Environmental effects on the Image Quality (IQ) of the Thirty Meter Telescope (TMT) are estimated by aero-thermal numerical simulations. These simulations utilize Computational Fluid Dynamics (CFD) to estimate, among others, thermal (dome and mirror) seeing as well as wind jitter and blur. As the design matures, guidance obtained from these numerical experiments can influence significant cost-performance trade-offs and even component survivability. The stochastic nature of environmental conditions results in the generation of a large computational solution matrix in order to statistically predict Observatory Performance. Moreover, the relative contribution of selected key subcomponents to IQ increases the parameter space and thus computational cost, while dictating a reduced prediction error bar. The current study presents the strategy followed to minimize prediction time and computational resources, the subsequent physical and numerical limitations and finally the approach to mitigate the issues experienced. In particular, the paper describes a mesh-independence study, the effect of interpolation of CFD results on the TMT IQ metric, and an analysis of the sensitivity of IQ to certain important heat sources and geometric features.

  5. An infectious way to teach students about outbreaks.

    PubMed

    Cremin, Íde; Watson, Oliver; Heffernan, Alastair; Imai, Natsuko; Ahmed, Norin; Bivegete, Sandra; Kimani, Teresia; Kyriacou, Demetris; Mahadevan, Preveina; Mustafa, Rima; Pagoni, Panagiota; Sophiea, Marisa; Whittaker, Charlie; Beacroft, Leo; Riley, Steven; Fisher, Matthew C

    2018-06-01

    The study of infectious disease outbreaks is required to train today's epidemiologists. A typical way to introduce and explain key epidemiological concepts is through the analysis of a historical outbreak. There are, however, few training options that explicitly utilise real-time simulated stochastic outbreaks where the participants themselves comprise the dataset they subsequently analyse. In this paper, we present a teaching exercise in which an infectious disease outbreak is simulated over a five-day period and subsequently analysed. We iteratively developed the teaching exercise to offer additional insight into analysing an outbreak. An R package for visualisation, analysis and simulation of the outbreak data was developed to accompany the practical to reinforce learning outcomes. Computer simulations of the outbreak revealed deviations from observed dynamics, highlighting how simplifying assumptions conventionally made in mathematical models often differ from reality. Here we provide a pedagogical tool for others to use and adapt in their own settings. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  6. Feasibility study of automatic control of crew comfort in the shuttle Extravehicular Mobility Unit. [liquid cooled garment regulator

    NASA Technical Reports Server (NTRS)

    Cook, D. W.

    1977-01-01

    Computer simulation is used to demonstrate that crewman comfort can be assured by using automatic control of the inlet temperature of the coolant into the liquid cooled garment when input to the controller consists of measurements of the garment inlet temperature and the garment outlet temperature difference. Subsequent tests using a facsimile of the control logic developed in the computer program confirmed the feasibility of such a design scheme.

  7. A numerical study of incompressible juncture flows

    NASA Technical Reports Server (NTRS)

    Kwak, D.; Rogers, S. E.; Kaul, U. K.; Chang, J. L. C.

    1986-01-01

    The laminar, steady juncture flow around single or multiple posts mounted between two flat plates is simulated using the three dimensional incompressible Navier-Stokes code, INS3D. The three dimensional separation of the boundary layer and subsequent formation and development of the horseshoe vortex is computed. The computed flow compares favorably with the experimental observation. The recent numerical study to understand and quantify the juncture flow relevant to the Space Shuttle main engine power head is summarized.

  8. The flight robotics laboratory

    NASA Technical Reports Server (NTRS)

    Tobbe, Patrick A.; Williamson, Marlin J.; Glaese, John R.

    1988-01-01

    The Flight Robotics Laboratory of the Marshall Space Flight Center is described in detail. This facility, containing an eight degree of freedom manipulator, precision air bearing floor, teleoperated motion base, reconfigurable operator's console, and VAX 11/750 computer system, provides simulation capability to study human/system interactions of remote systems. The facility hardware, software and subsequent integration of these components into a real time man-in-the-loop simulation for the evaluation of spacecraft contact proximity and dynamics are described.

  9. Programs for Testing Processor-in-Memory Computing Systems

    NASA Technical Reports Server (NTRS)

    Katz, Daniel S.

    2006-01-01

    The Multithreaded Microbenchmarks for Processor-In-Memory (PIM) Compilers, Simulators, and Hardware are computer programs arranged in a series for use in testing the performances of PIM computing systems, including compilers, simulators, and hardware. The programs at the beginning of the series test basic functionality; the programs at subsequent positions in the series test increasingly complex functionality. The programs are intended to be used while designing a PIM system, and can be used to verify that compilers, simulators, and hardware work correctly. The programs can also be used to enable designers of these system components to examine tradeoffs in implementation. Finally, these programs can be run on non-PIM hardware (either single-threaded or multithreaded) using the POSIX pthreads standard to verify that the benchmarks themselves operate correctly. [POSIX (Portable Operating System Interface for UNIX) is a set of standards that define how programs and operating systems interact with each other. pthreads is a library of pre-emptive thread routines that comply with one of the POSIX standards.]

  10. Using Adaptive Mesh Refinement to Simulate Storm Surge

    NASA Astrophysics Data System (ADS)

    Mandli, K. T.; Dawson, C.

    2012-12-01

    Coastal hazards related to strong storms such as hurricanes and typhoons are among the most frequently recurring and widespread hazards to coastal communities. Storm surges are among the most devastating effects of these storms, and their prediction and mitigation through numerical simulations is of great interest to coastal communities that need to plan for the subsequent rise in sea level during these storms. Unfortunately, these simulations require a large amount of resolution in regions of interest to capture relevant effects, resulting in a computational cost that may be intractable. This problem is exacerbated in situations where a large number of similar runs is needed, such as in the design of infrastructure or forecasting with ensembles of probable storms. One solution to the problem of computational cost is to employ adaptive mesh refinement (AMR) algorithms. AMR functions by decomposing the computational domain into regions which may vary in resolution as time proceeds. Decomposing the domain as the flow evolves makes this class of methods effective at ensuring that computational effort is spent only where it is needed. AMR also allows for placement of computational resolution independent of user interaction and expectation of the dynamics of the flow, as well as particular regions of interest such as harbors. Simulations of many different applications have only been made possible by AMR-type algorithms, which allow otherwise impractical simulations to be performed at much lower computational expense. Our work involves studying how storm surge simulations can be improved with AMR algorithms. We have implemented relevant storm surge physics in the GeoClaw package and tested how Hurricane Ike's surge into Galveston Bay and up the Houston Ship Channel compares to available tide gauge data. We will also discuss issues dealing with refinement criteria, optimal resolution and refinement ratios, and inundation.
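
    A toy version of the refinement decision at the heart of AMR (illustrative only; GeoClaw's actual flagging criteria are richer) might flag cells by surge gradient and by proximity to a region of interest:

    ```python
    # Hedged sketch: flag cells for refinement from the sea-surface height field.
    import numpy as np

    def flag_cells(eta, dx, grad_tol=0.05, region_of_interest=None):
        """Return a boolean mask of cells to refine on the next AMR step."""
        gx, gy = np.gradient(eta, dx)
        flags = np.hypot(gx, gy) > grad_tol                # steep surge fronts
        if region_of_interest is not None:
            flags |= region_of_interest                    # always resolve e.g. a harbor
        return flags

    eta = np.random.default_rng(2).random((100, 100))      # stand-in surge field
    flags = flag_cells(eta, dx=500.0)
    print("fraction of cells flagged for refinement:", flags.mean())
    ```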

  11. ICME — A Mere Coupling of Models or a Discipline of Its Own?

    NASA Astrophysics Data System (ADS)

    Bambach, Markus; Schmitz, Georg J.; Prahl, Ulrich

    Technically, ICME — Integrated computational materials engineering — is an approach for solving advanced engineering problems related to the design of new materials and processes by combining individual materials and process models. To date, the combination of models is mainly achieved by manual transformation of the output of one simulation to form the input to a subsequent one. This subsequent simulation is either performed at a different length scale or constitutes a subsequent step along the process chain. Is ICME thus just a synonym for the coupling of simulations? In fact, most ICME publications to date are examples of the joint application of selected models and software codes to a specific problem. However, from a systems point of view, the coupling of individual models and/or software codes across length scales and along material processing chains leads to highly complex meta-models. Their viability has to be ensured by joint efforts from science, industry, software developers, and independent organizations. This paper identifies some developments that seem necessary to make future ICME simulations viable, sustainable, and broadly accessible and accepted. The main conclusion is that ICME is not merely a multi-disciplinary subject but a discipline of its own, for which a generic structural framework has to be elaborated and established.

  12. Accelerating Sequential Gaussian Simulation with a constant path

    NASA Astrophysics Data System (ADS)

    Nussbaumer, Raphaël; Mariethoz, Grégoire; Gravey, Mathieu; Gloaguen, Erwan; Holliger, Klaus

    2018-03-01

    Sequential Gaussian Simulation (SGS) is a stochastic simulation technique commonly employed for generating realizations of Gaussian random fields. Arguably, the main limitation of this technique is the high computational cost associated with determining the kriging weights. This problem is compounded by the fact that often many realizations are required to allow for an adequate uncertainty assessment. A seemingly simple way to address this problem is to keep the same simulation path for all realizations. This results in identical neighbourhood configurations and hence the kriging weights only need to be determined once and can then be re-used in all subsequent realizations. This approach is generally not recommended because it is expected to result in correlation between the realizations. Here, we challenge this common preconception and make the case for the use of a constant path approach in SGS by systematically evaluating the associated benefits and limitations. We present a detailed implementation, particularly regarding parallelization and memory requirements. Extensive numerical tests demonstrate that using a constant path allows for substantial computational gains with very limited loss of simulation accuracy. This is especially the case for a constant multi-grid path. The computational savings can be used to increase the neighbourhood size, thus allowing for a better reproduction of the spatial statistics. The outcome of this study is a recommendation for an optimal implementation of SGS that maximizes accurate reproduction of the covariance structure as well as computational efficiency.
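
    The computational saving comes from solving each kriging system only once; the 1-D sketch below (my own illustration, not the authors' implementation) caches the weights along a fixed path and then draws as many realizations as desired:

    ```python
    # Hedged sketch of constant-path SGS on a 1-D grid with simple kriging.
    import numpy as np

    n, corr_len = 50, 10.0
    cov = lambda h: np.exp(-np.abs(h) / corr_len)          # exponential covariance model
    rng = np.random.default_rng(0)

    path = rng.permutation(n)                              # the single, fixed simulation path
    plan = []                                              # cached (node, neighbours, weights, variance)
    for k, node in enumerate(path):
        informed = path[:k]
        nbrs = informed[np.argsort(np.abs(informed - node))][:4]   # nearest informed nodes
        if len(nbrs) == 0:
            plan.append((node, nbrs, np.array([]), 1.0))
            continue
        C = cov(nbrs[:, None] - nbrs[None, :])
        c0 = cov(nbrs - node)
        w = np.linalg.solve(C, c0)                         # kriging weights: solved once, reused below
        plan.append((node, nbrs, w, max(1.0 - w @ c0, 0.0)))

    def realization():
        z = np.empty(n)
        for node, nbrs, w, var in plan:
            mean = w @ z[nbrs] if len(nbrs) else 0.0
            z[node] = mean + np.sqrt(var) * rng.standard_normal()
        return z

    fields = [realization() for _ in range(100)]           # extra realizations are now cheap
    ```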

  13. A real-time simulation evaluation of an advanced detection, isolation and accommodation algorithm for sensor failures in turbine engines

    NASA Technical Reports Server (NTRS)

    Merrill, W. C.; Delaat, J. C.

    1986-01-01

    An advanced sensor failure detection, isolation, and accommodation (ADIA) algorithm has been developed for use with an aircraft turbofan engine control system. In a previous paper the authors described the ADIA algorithm and its real-time implementation. Subsequent improvements made to the algorithm and implementation are discussed here, and the results of an evaluation are presented. The evaluation used a real-time, hybrid computer simulation of an F100 turbofan engine.

  14. Test code for the assessment and improvement of Reynolds stress models

    NASA Technical Reports Server (NTRS)

    Rubesin, M. W.; Viegas, J. R.; Vandromme, D.; Minh, H. HA

    1987-01-01

    An existing two-dimensional, compressible flow, Navier-Stokes computer code, containing a full Reynolds stress turbulence model, was adapted for use as a test bed for assessing and improving turbulence models based on turbulence simulation experiments. To date, the results of using the code in comparison with simulated channel flow and flow over an oscillating flat plate have shown that the turbulence model used in the code needs improvement for these flows. It is also shown that direct simulations of turbulent flows over a range of Reynolds numbers are needed to guide subsequent improvement of turbulence models.

  15. Simulating Vibrations in a Complex Loaded Structure

    NASA Technical Reports Server (NTRS)

    Cao, Tim T.

    2005-01-01

    The Dynamic Response Computation (DIRECT) computer program simulates vibrations induced in a complex structure by applied dynamic loads. Developed to enable rapid analysis of launch- and landing- induced vibrations and stresses in a space shuttle, DIRECT also can be used to analyze dynamic responses of other structures - for example, the response of a building to an earthquake, or the response of an oil-drilling platform and attached tanks to large ocean waves. For a space-shuttle simulation, the required input to DIRECT includes mathematical models of the space shuttle and its payloads, and a set of forcing functions that simulates launch and landing loads. DIRECT can accommodate multiple levels of payload attachment and substructure as well as nonlinear dynamic responses of structural interfaces. DIRECT combines the shuttle and payload models into a single structural model, to which the forcing functions are then applied. The resulting equations of motion are reduced to an optimum set and decoupled into a unique format for simulating dynamics. During the simulation, maximum vibrations, loads, and stresses are monitored and recorded for subsequent analysis to identify structural deficiencies in the shuttle and/or payloads.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lai, Canhai; Xu, Zhijie; Li, Tingwen

    In virtual design and scale-up of pilot-scale carbon capture systems, the coupled reactive multiphase flow problem must be solved to predict the adsorber's performance and capture efficiency under various operating conditions. This paper focuses on detailed computational fluid dynamics (CFD) modeling of a pilot-scale fluidized bed adsorber equipped with vertical cooling tubes. Multiphase Flow with Interphase eXchanges (MFiX), an open-source multiphase flow CFD solver, is used for the simulations, with custom code to simulate the chemical reactions and filtered models to capture the effect of the unresolved details in the coarser mesh, allowing simulations with manageable computational effort. Two previously developed filtered models for horizontal-cylinder drag, heat transfer, and reaction kinetics have been modified to derive the 2D filtered models representing vertical cylinders in the coarse-grid CFD simulations. The effects of the heat exchanger configurations (i.e., horizontal or vertical) on the adsorber's hydrodynamics and CO2 capture performance are then examined. The simulation result is subsequently compared and contrasted with one predicted by a one-dimensional three-region process model.

  17. A method to incorporate the effect of beam quality on image noise in a digitally reconstructed radiograph (DRR) based computer simulation for optimisation of digital radiography

    NASA Astrophysics Data System (ADS)

    Moore, Craig S.; Wood, Tim J.; Saunderson, John R.; Beavis, Andrew W.

    2017-09-01

    The use of computer-simulated digital x-radiographs for optimisation purposes has become widespread in recent years. To make these optimisation investigations effective, it is vital that simulated radiographs contain accurate anatomical and system noise. Computer algorithms that simulate radiographs based solely on the incident detector x-ray intensity ('dose') have been reported extensively in the literature. However, while it has been established for digital mammography that x-ray beam quality is an important factor when modelling noise in simulated images, there are no such studies for diagnostic imaging of the chest, abdomen and pelvis. This study investigates the influence of beam quality on image noise in a digital radiography (DR) imaging system, and incorporates these effects into a digitally reconstructed radiograph (DRR) computer simulator. Image noise was measured on a real DR imaging system as a function of dose (absorbed energy) over a range of clinically relevant beam qualities. Simulated 'absorbed energy' and 'beam quality' DRRs were then created for each patient and tube voltage under investigation. Simulated noise images, corrected for dose and beam quality, were subsequently produced from the absorbed energy and beam quality DRRs, using the measured noise, absorbed energy and beam quality relationships. The noise images were superimposed onto the noiseless absorbed energy DRRs to create the final images. Signal-to-noise measurements in simulated chest, abdomen and spine images were within 10% of the corresponding measurements in real images. This compares favourably to our previous algorithm, in which images corrected for dose only were all within 20%.
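
    A stripped-down version of the noise-injection step might look like this (purely illustrative functional form and values; the actual noise model in the paper was fitted to measurements on the DR system):

    ```python
    # Hedged sketch: add noise scaled by both absorbed energy (dose) and beam quality.
    import numpy as np

    def noise_sigma(absorbed_energy, beam_quality):
        # illustrative fit only: quantum-like 1/sqrt(dose) term times a beam-quality factor
        return (1.0 / np.sqrt(np.maximum(absorbed_energy, 1e-6))) * (1.0 + 0.1 * beam_quality)

    rng = np.random.default_rng(3)
    drr_energy = rng.uniform(0.5, 2.0, (256, 256))         # "absorbed energy" DRR
    drr_quality = rng.uniform(3.0, 7.0, (256, 256))        # "beam quality" DRR (e.g. HVL in mm Al)
    noise = rng.standard_normal((256, 256)) * noise_sigma(drr_energy, drr_quality)
    simulated_image = drr_energy + noise                   # noiseless DRR plus scaled noise image
    ```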

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kwan, T.J.T.; Moir, D.C.; Snell, C.M.

    In high-resolution flash x-ray imaging technology the electric field developed between the electron beam and the converter target is large enough to draw ions from the target surface. The ions provide fractional neutralization and cause the electron beam to focus radially inward, and the focal point subsequently moves upstream due to the expansion of the ion column. A self-bias target concept is proposed, and it is verified via computer simulation that the electron charge deposited on the target can generate an electric potential which effectively limits the ion motion and thereby stabilizes the growth of the spot size. A target chamber using the self-bias target concept was designed and tested in the Integrated Test Stand (ITS). The authors have obtained good agreement between computer simulation and experiment.

  19. Assessment of students' ability to incorporate a computer into increasingly complex simulated patient encounters.

    PubMed

    Ray, Sarah; Valdovinos, Katie

    Pharmacy students should be exposed to and offered opportunities to practice the skill of incorporating a computer into a patient interview in the didactic setting. Faculty sought to improve retention of student ability to incorporate computers into their patient-pharmacist communication. Students were required to utilize a computer to document clinical information gathered during a simulated patient encounter (SPE). Students utilized electronic worksheets and were evaluated by instructors on their ability to effectively incorporate a computer into a SPE using a rubric. Students received specific instruction on effective computer use during patient encounters. Students were then re-evaluated by an instructor during subsequent SPEs of increasing complexity using standardized rubrics blinded from the students. Pre-instruction, 45% of students effectively incorporated a computer into a SPE. After receiving instruction, 67% of students were effective in their use of a computer during a SPE of performing a pharmaceutical care assessment for a patient with chronic obstructive pulmonary disease (COPD) (p < 0.05 compared to pre-instruction), and 58% of students were effective in their use of a computer during a SPE of retrieving a medication list and social history from a simulated alcohol-impaired patient (p = 0.087 compared to pre-instruction). Instruction can improve pharmacy students' ability to incorporate a computer into SPEs, a critical skill in building and maintaining rapport with patients and improving efficiency of patient visits. Complex encounters may affect students' ability to utilize a computer appropriately. Students may benefit from repeated practice with this skill, especially with SPEs of increasing complexity. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. Automation of a Wave-Optics Simulation and Image Post-Processing Package on Riptide

    NASA Astrophysics Data System (ADS)

    Werth, M.; Lucas, J.; Thompson, D.; Abercrombie, M.; Holmes, R.; Roggemann, M.

    Detailed wave-optics simulations and image post-processing algorithms are computationally expensive and benefit from the massively parallel hardware available at supercomputing facilities. We created an automated system that interfaces with the Maui High Performance Computing Center (MHPCC) Distributed MATLAB® Portal interface to submit massively parallel wave-optics simulations to the IBM iDataPlex (Riptide) supercomputer. This system subsequently post-processes the output images with an improved version of physically constrained iterative deconvolution (PCID) and analyzes the results using a series of modular algorithms written in Python. With this architecture, a single person can simulate thousands of unique scenarios and produce analyzed, archived, and briefing-compatible output products with very little effort. This research was developed with funding from the Defense Advanced Research Projects Agency (DARPA). The views, opinions, and/or findings expressed are those of the author(s) and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government.

  1. Computer simulations of nematic drops: Coupling between drop shape and nematic order

    NASA Astrophysics Data System (ADS)

    Rull, L. F.; Romero-Enrique, J. M.; Fernandez-Nieves, A.

    2012-07-01

    We perform Monte Carlo computer simulations of nematic drops in equilibrium with their vapor using a Gay-Berne interaction between the rod-like molecules. To generate the drops, we initially perform NPT simulations close to the nematic-vapor coexistence region, allow the system to equilibrate and subsequently induce a sudden volume expansion, followed with NVT simulations. The resultant drops coexist with their vapor and are generally not spherical but elongated, have the rod-like particles tangentially aligned at the surface and an overall nematic orientation along the main axis of the drop. We find that the drop eccentricity increases with increasing molecular elongation, κ. For small κ the nematic texture in the drop is bipolar with two surface defects, or boojums, maximizing their distance along this same axis. For sufficiently high κ, the shape of the drop becomes singular in the vicinity of the defects, and there is a crossover to an almost homogeneous texture; this reflects a transition from a spheroidal to a spindle-like drop.

  2. Progressive Fracture of Composite Structures

    NASA Technical Reports Server (NTRS)

    Minnetyan, Levon

    2001-01-01

    This report includes the results of research in which the COmposite Durability STRuctural ANalysis (CODSTRAN) computational simulation capabilities were augmented and applied to various structures for demonstration of the new features and for verification. The first chapter of this report provides an introduction to the computational simulation, or virtual laboratory, approach for the assessment of damage and fracture progression characteristics in composite structures. The second chapter outlines the details of the overall methodology used, including the failure criteria and the incremental/iterative loading procedure, with the definitions of damage, fracture, and equilibrium states. The subsequent chapters each contain an augmented feature of the code and/or demonstration examples. All but one of the presented examples contain laminated composite structures with various fiber/matrix constituents. For each structure simulated, damage initiation and progression mechanisms are identified and the structural damage tolerance is quantified at various degradation stages. Many chapters contain simulations of defective and defect-free structures to evaluate the effects of existing defects on structural durability.

  3. Using a computer simulation for teaching communication skills: A blinded multisite mixed methods randomized controlled trial.

    PubMed

    Kron, Frederick W; Fetters, Michael D; Scerbo, Mark W; White, Casey B; Lypson, Monica L; Padilla, Miguel A; Gliva-McConvey, Gayle A; Belfore, Lee A; West, Temple; Wallace, Amelia M; Guetterman, Timothy C; Schleicher, Lauren S; Kennedy, Rebecca A; Mangrulkar, Rajesh S; Cleary, James F; Marsella, Stacy C; Becker, Daniel M

    2017-04-01

    To assess advanced communication skills among second-year medical students exposed either to a computer simulation (MPathic-VR) featuring virtual humans, or to a multimedia computer-based learning module, and to understand each group's experiences and learning preferences. A single-blinded, mixed methods, randomized, multisite trial compared MPathic-VR (N=210) to computer-based learning (N=211). Primary outcomes: communication scores during repeat interactions with MPathic-VR's intercultural and interprofessional communication scenarios and scores on a subsequent advanced communication skills objective structured clinical examination (OSCE). Multivariate analysis of variance was used to compare outcomes. Secondary outcomes: student attitude surveys and qualitative assessments of their experiences with MPathic-VR or computer-based learning. MPathic-VR-trained students improved their intercultural and interprofessional communication performance between their first and second interactions with each scenario. They also achieved significantly higher composite scores on the OSCE than computer-based learning-trained students. Attitudes and experiences were more positive among students trained with MPathic-VR, who valued its providing immediate feedback, teaching nonverbal communication skills, and preparing them for emotion-charged patient encounters. MPathic-VR was effective in training advanced communication skills and in enabling knowledge transfer into a more realistic clinical situation. MPathic-VR's virtual human simulation offers an effective and engaging means of advanced communication training. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  4. Using a computer simulation for teaching communication skills: A blinded multisite mixed methods randomized controlled trial

    PubMed Central

    Kron, Frederick W.; Fetters, Michael D.; Scerbo, Mark W.; White, Casey B.; Lypson, Monica L.; Padilla, Miguel A.; Gliva-McConvey, Gayle A.; Belfore, Lee A.; West, Temple; Wallace, Amelia M.; Guetterman, Timothy C.; Schleicher, Lauren S.; Kennedy, Rebecca A.; Mangrulkar, Rajesh S.; Cleary, James F.; Marsella, Stacy C.; Becker, Daniel M.

    2016-01-01

    Objectives To assess advanced communication skills among second-year medical students exposed either to a computer simulation (MPathic-VR) featuring virtual humans, or to a multimedia computer-based learning module, and to understand each group’s experiences and learning preferences. Methods A single-blinded, mixed methods, randomized, multisite trial compared MPathic-VR (N=210) to computer-based learning (N=211). Primary outcomes: communication scores during repeat interactions with MPathic-VR’s intercultural and interprofessional communication scenarios and scores on a subsequent advanced communication skills objective structured clinical examination (OSCE). Multivariate analysis of variance was used to compare outcomes. Secondary outcomes: student attitude surveys and qualitative assessments of their experiences with MPathic-VR or computer-based learning. Results MPathic-VR-trained students improved their intercultural and interprofessional communication performance between their first and second interactions with each scenario. They also achieved significantly higher composite scores on the OSCE than computer-based learning-trained students. Attitudes and experiences were more positive among students trained with MPathic-VR, who valued its providing immediate feedback, teaching nonverbal communication skills, and preparing them for emotion-charged patient encounters. Conclusions MPathic-VR was effective in training advanced communication skills and in enabling knowledge transfer into a more realistic clinical situation. Practice Implications MPathic-VR’s virtual human simulation offers an effective and engaging means of advanced communication training. PMID:27939846

  5. Coarse kMC-based replica exchange algorithms for the accelerated simulation of protein folding in explicit solvent.

    PubMed

    Peter, Emanuel K; Shea, Joan-Emma; Pivkin, Igor V

    2016-05-14

    In this paper, we present a coarse replica exchange molecular dynamics (REMD) approach based on kinetic Monte Carlo (kMC). The new development can significantly reduce the number of replicas and the computational cost needed to enhance sampling in protein simulations. We introduce two different methods which primarily differ in the exchange scheme between the parallel ensembles. We apply this approach to the folding of two different β-stranded peptides: the C-terminal β-hairpin fragment of GB1 and TrpZip4. Additionally, we use the new simulation technique to study the folding of TrpCage, a small fast-folding α-helical peptide. Subsequently, we apply the new methodology to conformational changes in signaling of the light-oxygen-voltage (LOV) sensitive domain from Avena sativa (AsLOV2). Our results agree well with data reported in the literature. In simulations of dialanine, we compare the statistical sampling of the two techniques with conventional REMD and analyze their performance. The new techniques can reduce the computational cost of REMD significantly and can be used in enhanced sampling simulations of biomolecules.
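    For context, a sketch of the conventional REMD swap criterion that serves as the baseline the kMC-based variants are compared against; this is not the authors' coarse exchange scheme, and the temperature ladder and energies below are toy values.

```python
import numpy as np

rng = np.random.default_rng(0)

def try_swap(beta_i, beta_j, E_i, E_j):
    """Metropolis acceptance for exchanging configurations between two
    replicas at inverse temperatures beta_i and beta_j."""
    delta = (beta_i - beta_j) * (E_i - E_j)
    return delta >= 0.0 or rng.random() < np.exp(delta)

betas = 1.0 / np.linspace(0.5, 2.0, 8)          # toy temperature ladder
energies = rng.normal(-100.0, 5.0, size=8)       # per-replica potential energies
for k in range(len(betas) - 1):                  # attempt neighbour swaps
    if try_swap(betas[k], betas[k + 1], energies[k], energies[k + 1]):
        energies[[k, k + 1]] = energies[[k + 1, k]]
print(energies)
```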

  6. Operations analysis (study 2.1): Program manual and users guide for the LOVES computer code

    NASA Technical Reports Server (NTRS)

    Wray, S. T., Jr.

    1975-01-01

    The information necessary to use the LOVES Computer Program in its existing state, or to modify the program to include studies not properly handled by the basic model, is provided. The Users Guide defines the basic elements assembled together to form the model for servicing satellites in orbit. As the program is a simulation, the method of attack is to disassemble the problem into a sequence of events, each occurring instantaneously and each creating one or more other events in the future. The main driving force of the simulation is the deterministic launch schedule of satellites and the subsequent failure of the various modules which make up the satellites. The LOVES Computer Program uses a random number generator to simulate the failure of module elements and therefore operates over a long span of time, typically 10 to 15 years. The sequence of events is varied by making several runs in succession with different random numbers, resulting in a Monte Carlo technique to determine statistical parameters of minimum value, average value, and maximum value.
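    A toy Monte Carlo in the spirit of the approach described: exponential module lifetimes are drawn over the mission horizon and the run is repeated with different random numbers to extract minimum, average, and maximum statistics. The module count and mean time between failures are illustrative, not values from the report.

```python
import numpy as np

def simulate_failures(n_modules=20, mtbf_years=4.0, horizon_years=15.0,
                      n_runs=100, seed=1):
    """Count module failures over the mission horizon, repeated over many
    runs with different random numbers (illustrative parameters)."""
    rng = np.random.default_rng(seed)
    counts = []
    for _ in range(n_runs):
        lifetimes = rng.exponential(mtbf_years, n_modules)  # first failure times
        failures = 0
        for t in lifetimes:
            while t < horizon_years:                        # replace, then fail again
                failures += 1
                t += rng.exponential(mtbf_years)
        counts.append(failures)
    return min(counts), float(np.mean(counts)), max(counts)

print(simulate_failures())
```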

  7. Multiscale computations with a wavelet-adaptive algorithm

    NASA Astrophysics Data System (ADS)

    Rastigejev, Yevgenii Anatolyevich

    A wavelet-based adaptive multiresolution algorithm for the numerical solution of multiscale problems governed by partial differential equations is introduced. The main features of the method include fast algorithms for the calculation of wavelet coefficients and approximation of derivatives on nonuniform stencils. The connection between the wavelet order and the size of the stencil is established. The algorithm is based on mathematically well-established wavelet theory. This allows us to provide error estimates of the solution, which are used in conjunction with an appropriate thresholding criterion to adapt the collocation grid. The efficient data structures for grid representation as well as related computational algorithms to support the grid rearrangement procedure are developed. The algorithm is applied to the simulation of phenomena described by the Navier-Stokes equations. First, we undertake the study of the ignition and subsequent viscous detonation of a H2 : O2 : Ar mixture in a one-dimensional shock tube. Subsequently, we apply the algorithm to solve the two- and three-dimensional benchmark problem of incompressible flow in a lid-driven cavity at large Reynolds numbers. For these cases we show that solutions of accuracy comparable to the benchmarks are obtained with more than an order of magnitude reduction in degrees of freedom. The simulations show the striking ability of the algorithm to adapt to a solution having different scales at different spatial locations so as to produce accurate results at a relatively low computational cost.
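    A crude stand-in for the wavelet-thresholding idea, assuming a simple interpolating construction: the detail at each odd grid point is the error of predicting it from its even neighbours, and only points whose detail exceeds a threshold are retained. The cited work uses higher-order wavelets and nonuniform stencils; this sketch uses linear prediction on a uniform dyadic grid.

```python
import numpy as np

def adapt_grid(f, x, threshold=1e-3):
    """Retain coarse points plus fine points whose interpolation detail
    exceeds the threshold (linear-prediction stand-in for the wavelet test)."""
    y = f(x)
    detail = np.abs(y[1:-1:2] - 0.5 * (y[:-2:2] + y[2::2]))  # prediction error
    keep = np.zeros(x.size, dtype=bool)
    keep[::2] = True                       # coarse grid always kept
    keep[1:-1:2] = detail > threshold      # refine only where needed
    return x[keep]

x = np.linspace(0.0, 1.0, 257)
fine = adapt_grid(lambda s: np.tanh(50.0 * (s - 0.5)), x)
print(fine.size, "of", x.size, "points retained")
```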

  8. Multiscale Modeling of Damage Processes in Aluminum Alloys: Grain-Scale Mechanisms

    NASA Technical Reports Server (NTRS)

    Hochhalter, J. D.; Veilleux, M. G.; Bozek, J. E.; Glaessgen, E. H.; Ingraffea, A. R.

    2008-01-01

    This paper has two goals related to the development of a physically-grounded methodology for modeling the initial stages of fatigue crack growth in an aluminum alloy. The aluminum alloy, AA 7075-T651, is susceptible to fatigue cracking that nucleates from cracked second-phase iron-bearing particles. Thus, the first goal of the paper is to validate an existing framework for the prediction of the conditions under which the particles crack. The observed statistics of particle cracking (defined as incubation for this alloy) must be accurately predicted to simulate the stochastic nature of microstructurally small fatigue crack (MSFC) formation. Also, only by simulating incubation of damage in a statistically accurate manner can subsequent stages of crack growth be accurately predicted. To maintain fidelity and computational efficiency, a filtering procedure was developed to eliminate particles that were unlikely to crack. The particle filter considers the distributions of particle sizes and shapes, grain texture, and the configuration of the surrounding grains. This filter helps substantially reduce the number of particles that need to be included in the microstructural models and forms the basis of the future work on the subsequent stages of MSFC, crack nucleation and microstructurally small crack propagation. A physics-based approach to simulating fracture should ultimately begin at the nanometer length scale, in which atomistic simulation is used to predict the fundamental damage mechanisms of MSFC. These mechanisms include dislocation formation and interaction, interstitial void formation, and atomic diffusion. However, atomistic simulations quickly become computationally intractable as the system size increases, especially when directly linking to the already large microstructural models. Therefore, the second goal of this paper is to propose a method that will incorporate atomistic simulation and small-scale experimental characterization into the existing multiscale framework. At the microscale, the nanoscale mechanics are represented within cohesive zones where appropriate, i.e. where the mechanics observed at the nanoscale can be represented as occurring on a plane such as at grain boundaries or slip planes at a crack front. Important advancements that are yet to be made include: 1. an increased fidelity in cohesive zone modeling; 2. a means to understand how atomistic simulation scales with time; 3. a new experimental methodology for generating empirical models for CZMs and emerging materials; and 4. a validation of simulations of the damage processes at the nano-micro scale. With ever-increasing computer power, the long-term ability to employ atomistic simulation for the prognosis of structural components will not be limited by computation power, but by our lack of knowledge of how to incorporate atomistic models into multiscale simulations of MSFC.

  9. Alpha absolute power measurement in panic disorder with agoraphobia patients.

    PubMed

    de Carvalho, Marcele Regine; Velasques, Bruna Brandão; Freire, Rafael C; Cagy, Maurício; Marques, Juliana Bittencourt; Teixeira, Silmar; Rangé, Bernard P; Piedade, Roberto; Ribeiro, Pedro; Nardi, Antonio Egidio; Akiskal, Hagop Souren

    2013-10-01

    Panic attacks are thought to result from a dysfunctional coordination of cortical and brainstem sensory information, leading to heightened amygdala activity with subsequent neuroendocrine, autonomic and behavioral activation. Prefrontal areas may be responsible for inhibitory top-down control processes, and alpha synchronization seems to reflect this modulation. The objective of this study was to measure frontal absolute alpha-power with qEEG in 24 subjects with panic disorder and agoraphobia (PDA) compared to 21 healthy controls. qEEG data were acquired while participants watched a computer simulation, consisting of moments classified as "high anxiety" (HAM) and "low anxiety" (LAM). qEEG data were also acquired during two rest conditions, before and after the computer simulation display. We observed higher absolute alpha-power in controls than in the PDA patients while watching the computer simulation. The main finding was an interaction between the moment and group factors on the frontal cortex. Our findings suggest that the decreased alpha-power in the frontal cortex for the PDA group may reflect a state of high excitability. Our results suggest a possible deficiency in top-down control processes of anxiety, reflected by a low absolute alpha-power in the PDA group while watching the computer simulation, and they highlight that prefrontal regions and the frontal region near the temporal area are recruited during exposure to anxiogenic stimuli. © 2013 Elsevier B.V. All rights reserved.

  10. Band Gap Engineering of Titania Systems Purposed for Photocatalytic Activity

    NASA Astrophysics Data System (ADS)

    Thurston, Cameron

    Ab initio computer-aided design drastically increases the candidate population for highly specified material discovery and selection. These simulations, carried out through a first-principles computational approach, accurately extrapolate material properties and behavior. Titanium dioxide (TiO2) is one such material that stands to gain a great deal from the use of these simulations. In its anatase form, titania (TiO2) has been found to exhibit a band gap nearing 3.2 eV. If titania is to become a viable alternative to other contemporary photoactive materials exhibiting band gaps better suited to the solar spectrum, then its band gap must be reduced. To lower the energy needed for electronic excitation, both transition metals and non-metals have been extensively researched and are currently viable candidates for the continued reduction of titania's band gap. Multicomponent atomic doping introduces new energy bands which tend to reduce both the band gap and recombination loss. Ta-N, Nb-N, V-N, Cr-N, Mo-N, and W-N substitutions were studied in titania, and subsequent energy and band gap calculations show a favorable band gap reduction in the case of passivated systems.

  11. Using Molecular Dynamics Simulations as an Aid in the Prediction of Domain Swapping of Computationally Designed Protein Variants.

    PubMed

    Mou, Yun; Huang, Po-Ssu; Thomas, Leonard M; Mayo, Stephen L

    2015-08-14

    In standard implementations of computational protein design, a positive-design approach is used to predict sequences that will be stable on a given backbone structure. Possible competing states are typically not considered, primarily because appropriate structural models are not available. One potential competing state, the domain-swapped dimer, is especially compelling because it is often nearly identical with its monomeric counterpart, differing by just a few mutations in a hinge region. Molecular dynamics (MD) simulations provide a computational method to sample different conformational states of a structure. Here, we tested whether MD simulations could be used as a post-design screening tool to identify sequence mutations leading to domain-swapped dimers. We hypothesized that a successful computationally designed sequence would have backbone structure and dynamics characteristics similar to that of the input structure and that, in contrast, domain-swapped dimers would exhibit increased backbone flexibility and/or altered structure in the hinge-loop region to accommodate the large conformational change required for domain swapping. While attempting to engineer a homodimer from a 51-amino-acid fragment of the monomeric protein engrailed homeodomain (ENH), we had instead generated a domain-swapped dimer (ENH_DsD). MD simulations on these proteins showed increased simulation-derived B-factors in the hinge loop of the ENH_DsD domain-swapped dimer relative to monomeric ENH. Two point mutants of ENH_DsD designed to recover the monomeric fold were then tested with an MD simulation protocol. The MD simulations suggested that one of these mutants would adopt the target monomeric structure, which was subsequently confirmed by X-ray crystallography. Copyright © 2015. Published by Elsevier Ltd.
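    A minimal sketch of the flexibility metric mentioned in the record: per-atom B-factors computed from the root-mean-square fluctuation of an aligned trajectory via B = (8π²/3)·RMSF². It assumes the frames are already superposed on a reference structure; the synthetic trajectory is purely illustrative.

```python
import numpy as np

def bfactors(traj):
    """Per-atom B-factors from an aligned trajectory of shape
    (n_frames, n_atoms, 3)."""
    mean = traj.mean(axis=0)
    rmsf = np.sqrt(((traj - mean) ** 2).sum(axis=2).mean(axis=0))
    return (8.0 * np.pi**2 / 3.0) * rmsf**2

# synthetic example: 100 frames, 51 atoms with random fluctuations
traj = np.random.default_rng(0).normal(scale=0.3, size=(100, 51, 3))
print(bfactors(traj)[:5])
```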

  12. Computational Analysis of Gravitational Effects in Low-Density Gas Jets

    NASA Technical Reports Server (NTRS)

    Satti, Rajani P.; Agrawal, Ajay K.

    2004-01-01

    This study deals with the computational analysis of buoyancy-induced instability in the nearfield of an isothermal helium jet injected into a quiescent ambient air environment. Laminar, axisymmetric, unsteady flow conditions were considered for the analysis. The transport equations of helium mass fraction coupled with the conservation equations of mixture mass and momentum were solved using a staggered-grid finite volume method. Jet Richardson numbers of 1.5 and 0.018 were considered to encompass both buoyant and inertial jet flow regimes. Buoyancy effects were isolated by initiating computations in Earth gravity and subsequently reducing gravity to simulate the microgravity conditions. Computed results concur with experimental observations that the periodic flow oscillations observed in Earth gravity subside in microgravity.
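    A small worked example of the governing parameter named above, the jet Richardson number Ri = g·d·(ρ_ambient − ρ_jet)/(ρ_jet·U²). The jet diameter and exit velocities below are assumed values chosen to land roughly in the buoyant and inertial regimes quoted in the abstract, not conditions taken from the paper.

```python
g = 9.81                         # m/s^2
d = 0.01                         # jet diameter, m (assumed)
rho_air, rho_he = 1.18, 0.166    # kg/m^3 near room temperature
for U in (0.63, 5.8):            # jet exit velocities, m/s (assumed)
    Ri = g * d * (rho_air - rho_he) / (rho_he * U**2)
    print(f"U = {U:.2f} m/s -> Ri = {Ri:.3f}")   # ~1.5 and ~0.018
```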

  13. Simplified energy-balance model for pragmatic multi-dimensional device simulation

    NASA Astrophysics Data System (ADS)

    Chang, Duckhyun; Fossum, Jerry G.

    1997-11-01

    To pragmatically account for non-local carrier heating and hot-carrier effects such as velocity overshoot and impact ionization in multi-dimensional numerical device simulation, a new simplified energy-balance (SEB) model is developed and implemented in FLOODS[16] as a pragmatic option. In the SEB model, the energy-relaxation length is estimated from a pre-process drift-diffusion simulation using the carrier-velocity distribution predicted throughout the device domain, and is used without change in a subsequent simpler hydrodynamic (SHD) simulation. The new SEB model was verified by comparison of two-dimensional SHD and full HD DC simulations of a submicron MOSFET. The SHD simulations yield detailed distributions of carrier temperature, carrier velocity, and impact-ionization rate, which agree well with the full HD simulation results obtained with FLOODS. The most noteworthy feature of the new SEB/SHD model is its computational efficiency, which results from reduced Newton iteration counts caused by the enhanced linearity. Relative to full HD, SHD simulation times can be shorter by as much as an order of magnitude since larger voltage steps for DC sweeps and larger time steps for transient simulations can be used. The improved computational efficiency can enable pragmatic three-dimensional SHD device simulation as well, for which the SEB implementation would be straightforward as it is in FLOODS or any robust HD simulator.

  14. Toward the Computational Representation of Individual Cultural, Cognitive, and Physiological State: The Sensor Shooter Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    RAYBOURN,ELAINE M.; FORSYTHE,JAMES C.

    2001-08-01

    This report documents an exploratory FY 00 LDRD project that sought to demonstrate the first steps toward a realistic computational representation of the variability encountered in individual human behavior. Realism, as conceptualized in this project, required that the human representation address the underlying psychological, cultural, physiological, and environmental stressors. The present report outlines the researchers' approach to representing cognitive, cultural, and physiological variability of an individual in an ambiguous situation while faced with a high-consequence decision that would greatly impact subsequent events. The present project was framed around a sensor-shooter scenario as a soldier interacts with an unexpected target (two young Iraqi girls). A software model of the ''Sensor Shooter'' scenario from Desert Storm was developed in which the framework consisted of a computational instantiation of Recognition Primed Decision Making in the context of a Naturalistic Decision Making model [1]. Recognition Primed Decision Making was augmented with an underlying foundation based on our current understanding of human neurophysiology and its relationship to human cognitive processes. While the Gulf War scenario that constitutes the framework for the Sensor Shooter prototype is highly specific, the human decision architecture and the subsequent simulation are applicable to other problems similar in concept, intensity, and degree of uncertainty. The goal was to provide initial steps toward a computational representation of human variability in cultural, cognitive, and physiological state in order to attain a better understanding of the full depth of human decision-making processes in the context of ambiguity, novelty, and heightened arousal.

  15. Preferred orientation of albumin adsorption on a hydrophilic surface from molecular simulation.

    PubMed

    Hsu, Hao-Jen; Sheu, Sheh-Yi; Tsay, Ruey-Yug

    2008-12-01

    In general, non-specific protein adsorption follows a two-step procedure, i.e. first adsorption onto a surface in native form, and a subsequent conformational change on the surface. In order to predict the subsequent conformational change, it is important to determine the preferred orientation of an adsorbed protein in the first step of the adsorption. In this work, a method based on finding the global minimum of the interaction potential energy of an adsorbed protein has been developed to delineate the preferred orientations for the adsorption of human serum albumin (HSA) on a model surface with a hydrophilic self-assembled monolayer (SAM). For computational efficiency, solvation effects were greatly simplified by only including the dampening of electrostatic effects while neglecting contributions due to the competition of water molecules for the functional groups on the surface. A contour map obtained by systematic rotation of a molecule in conjunction with perpendicular motion to the surface gives the minimum interaction energy of the adsorbed molecule at various adsorption orientations. Simulation results show that for an -OH terminated SAM surface, a "back-on" orientation of HSA is the preferred orientation. The projection area of this adsorption orientation corresponds with the "triangular-side-on" adsorption of a heart-shaped HSA molecule. The method proposed herein is able to provide results which are consistent with those predicted by Monte Carlo (MC) simulations at a substantially lower computing cost. The high computing efficiency of the current method makes it possible to implement it as a design tool for the control of protein adsorption on surfaces; however, before this can be fully realized, these methods must be further developed to enable interaction free energy to be calculated in place of potential energy, along with a more realistic representation of solvation effects.
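    A simplified stand-in for the contour-map procedure described above: a brute-force scan over rigid-body orientations and surface separations that returns the orientation and height with the lowest interaction energy. The random coordinates, charges, and the toy screened surface interaction are all assumptions for illustration; they do not represent the HSA/SAM model of the paper.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def preferred_orientation(coords, charges, energy_fn, z_range, n_angles=12):
    """Scan orientations (two Euler angles) and heights above the surface,
    returning (energy, (angle_a, angle_b), height) at the global minimum found."""
    best = (np.inf, None, None)
    for a in np.linspace(0.0, 360.0, n_angles, endpoint=False):
        for b in np.linspace(0.0, 180.0, n_angles // 2):
            xyz = Rotation.from_euler("zy", [a, b], degrees=True).apply(coords)
            for z in z_range:
                E = energy_fn(xyz + np.array([0.0, 0.0, z]), charges)
                if E < best[0]:
                    best = (E, (a, b), z)
    return best

def toy_energy(xyz, q, k=1.0):
    """Toy screened attraction of charges to a surface plane at z = 0."""
    return float(np.sum(-k * q / (1.0 + xyz[:, 2] ** 2)))

coords = np.random.default_rng(2).normal(size=(30, 3))
charges = np.random.default_rng(3).choice([-1.0, 1.0], size=30)
print(preferred_orientation(coords, charges, toy_energy, np.linspace(1.0, 5.0, 9)))
```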

  16. Transport delay compensation for computer-generated imagery systems

    NASA Technical Reports Server (NTRS)

    Mcfarland, Richard E.

    1988-01-01

    In the problem of pure transport delay in a low-pass system, a trade-off exists with respect to performance within and beyond a frequency bandwidth. When activity beyond the band is attenuated because of other considerations, this trade-off may be used to improve the performance within the band. Specifically, transport delay in computer-generated imagery systems is reduced to a manageable problem by recognizing frequency limits in vehicle activity and manual-control capacity. Based on these limits, a compensation algorithm has been developed for use in aircraft simulation at NASA Ames Research Center. For direct measurement of transport delays, a beam-splitter experiment is presented that accounts for the complete flight simulation environment. Values determined by this experiment are appropriate for use in the compensation algorithm. The algorithm extends the bandwidth of high-frequency flight simulation to well beyond that of normal pilot inputs. Within this bandwidth, the visual scene presentation manifests negligible gain distortion and phase lag. After a year of utilization, two minor exceptions to universal simulation applicability have been identified and subsequently resolved.

  17. Investigation of a Nanowire Electronic Nose by Computer Simulation

    DTIC Science & Technology

    2009-04-14

    R. D. Mileham, and D. W. Galipeau. Gas sensing based on inelastic electron tunneling spectroscopy. IEEE Sensors Journal, 8(6):983-988, 2008. [6] J... explosives in the hold of passenger aircraft. More generally, they can be used to detect the presence of molecules that could be a threat to human health... design suitable for subsequent fabrication and then characterization. SUBJECT TERMS: EOARD, Sensor Technology, electronic...

  18. The calculation of electromagnetic fields in the Fresnel and Fraunhofer regions using numerical integration methods

    NASA Technical Reports Server (NTRS)

    Schmidt, R. F.

    1971-01-01

    Some results obtained with a digital computer program written at Goddard Space Flight Center to obtain electromagnetic fields scattered by perfectly reflecting surfaces are presented. For purposes of illustration a paraboloidal reflector was illuminated at radio frequencies in the simulation for both receiving and transmitting modes of operation. Fields were computed in the Fresnel and Fraunhofer regions. A dual-reflector system (Cassegrain) was also simulated for the transmitting case, and fields were computed in the Fraunhofer region. Appended results include derivations which show that the vector Kirchhoff-Kottler formulation has an equivalent form requiring only incident magnetic fields as a driving function. Satisfaction of the radiation conditions at infinity by the equivalent form is demonstrated by a conversion from Cartesian to spherical vector operators. A subsequent development presents the formulation by which Fresnel or Fraunhofer patterns are obtainable for dual-reflector systems. A discussion of the time-average Poynting vector is also appended.

  19. Determination Gradients of the Earth's Magnetic Field from the Measurements of the Satellites and Inversion of the Kursk Magnetic Anomaly

    NASA Technical Reports Server (NTRS)

    Karoly, Kis; Taylor, Patrick T.; Geza, Wittmann

    2014-01-01

    We computed magnetic field gradients at satellite altitude over Europe, with emphasis on the Kursk Magnetic Anomaly (KMA). They were calculated using the CHAMP satellite total magnetic anomalies. Our computations were done to determine how the magnetic anomaly data from the new ESA/Swarm satellites could be utilized to determine the structure of the magnetization of the Earth's crust, especially in the region of the KMA. The ten years of CHAMP data could be used to simulate the Swarm data. An initial East magnetic anomaly gradient map of Europe was computed, and subsequently the North, East and Vertical magnetic gradients for the KMA region were calculated. The vertical gradient of the KMA was determined using Hilbert transforms. Inversion of the total KMA was performed using Simplex and Simulated Annealing algorithms. Our resulting inversion depth model is a horizontal quadrangle with upper 300-329 km and lower 331-339 km boundaries.
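    A sketch of the Hilbert-transform relation mentioned for the vertical gradient, assuming a two-dimensional (profile) source geometry for which the vertical derivative of a potential-field anomaly is the Hilbert transform of its horizontal derivative; the sign depends on the coordinate convention, and the synthetic profile below is illustrative only.

```python
import numpy as np
from scipy.signal import hilbert

def vertical_gradient(profile, dx):
    """Vertical gradient of a 2-D potential-field profile from the Hilbert
    transform of its horizontal gradient (sign convention dependent)."""
    dT_dx = np.gradient(profile, dx)
    return np.imag(hilbert(dT_dx))

x = np.linspace(-500e3, 500e3, 1024)          # profile coordinate, m
anomaly = 200.0 / (1.0 + (x / 150e3) ** 2)    # synthetic total-field anomaly, nT
print(vertical_gradient(anomaly, x[1] - x[0])[:3])
```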

  20. Computational representation and hemodynamic characterization of in vivo acquired severe stenotic renal artery geometries using turbulence modeling.

    PubMed

    Kagadis, George C; Skouras, Eugene D; Bourantas, George C; Paraskeva, Christakis A; Katsanos, Konstantinos; Karnabatidis, Dimitris; Nikiforidis, George C

    2008-06-01

    The present study reports on computational fluid dynamics in the case of severe renal artery stenosis (RAS). An anatomically realistic model of a renal artery was reconstructed from CT scans, and used to conduct CFD simulations of blood flow across RAS. The recently developed shear stress transport (SST) turbulence model was pivotally applied in the simulation of blood flow in the region of interest. Blood flow was studied in vivo in the presence of RAS and subsequently in simulated cases before the development of RAS, and after endovascular stent implantation. The pressure gradients in the RAS case were many orders of magnitude larger than in the healthy case. The presence of RAS increased flow resistance, which led to considerably lower blood flow rates. A simulated stent in place of the RAS decreased the flow resistance to levels proportional to, and even lower than, the simulated healthy case without the RAS. The wall shear stresses, differential pressure profiles, and net forces exerted on the surface of the atherosclerotic plaque at peak pulse were shown to be sufficiently distinctive to be considered potential indicators of hemodynamically significant RAS.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bakosi, Jozsef; Christon, Mark A.; Francois, Marianne M.

    This report describes the work carried out for completion of the Thermal Hydraulics Methods (THM) Level 3 Milestone THM.CFD.P5.05 for the Consortium for Advanced Simulation of Light Water Reactors (CASL). A series of body-fitted computational meshes have been generated by Numeca's Hexpress/Hybrid, a.k.a. 'Spider', meshing technology for the V5H 3 x 3 and 5 x 5 rod bundle geometries and subsequently used to compute the fluid dynamics of grid-to-rod fretting (GTRF). Spider is easy to use, fast, and automatically generates high-quality meshes for extremely complex geometries, required for the GTRF problem. Hydra-TH has been used to carry out large-eddy simulations on both 3 x 3 and 5 x 5 geometries, using different mesh resolutions. The results analyzed show good agreement with Star-CCM+ simulations and experimental data.

  2. Inlet Development for a Rocket Based Combined Cycle, Single Stage to Orbit Vehicle Using Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    DeBonis, J. R.; Trefny, C. J.; Steffen, C. J., Jr.

    1999-01-01

    Design and analysis of the inlet for a rocket based combined cycle engine is discussed. Computational fluid dynamics was used in both the design and subsequent analysis. Reynolds-averaged Navier-Stokes simulations were performed using both perfect gas and real gas assumptions. An inlet design that operates over the required Mach number range from 0 to 12 was produced. Performance data for cycle analysis were post-processed using a stream-thrust averaging technique. A detailed performance database for cycle analysis is presented. The effect of vehicle forebody compression on air capture is also examined.

  3. A new model to compute the desired steering torque for steer-by-wire vehicles and driving simulators

    NASA Astrophysics Data System (ADS)

    Fankem, Steve; Müller, Steffen

    2014-05-01

    This paper deals with the control of the hand wheel actuator in steer-by-wire (SbW) vehicles and driving simulators (DSs). A novel model for the computation of the desired steering torque is presented. The introduced steering torque computation aims not only to generate a realistic steering feel in every driving situation, meaning that the driver should not miss the basic steering functionality of a modern conventional steering system such as electric power steering (EPS) or hydraulic power steering (HPS). In addition, the modular structure of the steering torque computation, combined with suitably selected tuning parameters, has the objective of offering a high degree of customisability of the steering feel and thus providing each driver with his preferred steering feel in a very intuitive manner. The task and the tuning of each module are first described. Then, the steering torque computation is parameterised such that the steering feel of a series EPS system is reproduced. For this purpose, experiments are conducted in a hardware-in-the-loop environment where a test EPS is mounted on a steering test bench coupled with a vehicle simulator, and parameter identification techniques are applied. Subsequently, how appropriately the steering torque computation mimics the test EPS system is objectively evaluated with respect to criteria concerning the steering torque level and gradient, the feedback behaviour and the steering return ability. Finally, the intuitive tuning of the modular steering torque computation is demonstrated by deriving a sportier steering feel configuration.

  4. A New Computational Technique for the Generation of Optimised Aircraft Trajectories

    NASA Astrophysics Data System (ADS)

    Chircop, Kenneth; Gardi, Alessandro; Zammit-Mangion, David; Sabatini, Roberto

    2017-12-01

    A new computational technique based on Pseudospectral Discretisation (PSD) and adaptive bisection ɛ-constraint methods is proposed to solve multi-objective aircraft trajectory optimisation problems formulated as nonlinear optimal control problems. This technique is applicable to a variety of next-generation avionics and Air Traffic Management (ATM) Decision Support Systems (DSS) for strategic and tactical replanning operations. These include the future Flight Management Systems (FMS) and the 4-Dimensional Trajectory (4DT) planning and intent negotiation/validation tools envisaged by SESAR and NextGen for a global implementation. In particular, after describing the PSD method, the adaptive bisection ɛ-constraint method is presented to allow an efficient solution of problems in which two or more performance indices are to be minimised simultaneously. Initial simulation case studies were performed adopting suitable aircraft dynamics models and addressing a classical vertical trajectory optimisation problem with two objectives simultaneously. Subsequently, a more advanced 4DT simulation case study is presented with a focus on representative ATM optimisation objectives in the Terminal Manoeuvring Area (TMA). The simulation results are analysed in-depth and corroborated by flight performance analysis, supporting the validity of the proposed computational techniques.
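    A minimal sketch of the plain ε-constraint idea underlying the method named above: one objective is minimised subject to a bound ε on the second, and sweeping ε traces a Pareto front. The paper's contribution is an adaptive bisection rule for choosing the ε levels and a pseudospectral transcription of the dynamics; neither is reproduced here, and the two quadratic objectives are toy stand-ins for, e.g., fuel and time costs.

```python
import numpy as np
from scipy.optimize import minimize

def eps_constraint_front(f1, f2, x0, eps_values):
    """Trace a Pareto front by minimising f1 subject to f2(x) <= eps
    for a fixed sweep of eps values."""
    front = []
    for eps in eps_values:
        res = minimize(f1, x0, method="SLSQP",
                       constraints=[{"type": "ineq",
                                     "fun": lambda x, e=eps: e - f2(x)}])
        if res.success:
            front.append((float(f1(res.x)), float(f2(res.x))))
            x0 = res.x          # warm-start the next subproblem
    return front

f1 = lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2
f2 = lambda x: (x[0] + 1.0) ** 2 + x[1] ** 2
print(eps_constraint_front(f1, f2, np.array([0.0, 0.0]), np.linspace(0.2, 4.0, 5)))
```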

  5. Contribution of cosmic ray particles to radiation environment at high mountain altitude: Comparison of Monte Carlo simulations with experimental data.

    PubMed

    Mishev, A L

    2016-03-01

    A numerical model for assessment of the effective dose due to secondary cosmic ray particles of galactic origin at high mountain altitude of about 3000 m above sea level is presented. The model is based on a newly numerically computed effective dose yield function considering realistic propagation of cosmic rays in the Earth's magnetosphere and atmosphere. The yield function is computed using a full Monte Carlo simulation of the atmospheric cascade induced by primary protons and α-particles and subsequent conversion of secondary particle fluence (neutrons, protons, gammas, electrons, positrons, muons and charged pions) to effective dose. A lookup table of the newly computed effective dose yield function is provided. The model is compared with several measurements. The comparison of model simulations with measured spectral energy distributions of secondary cosmic ray neutrons at high mountain altitude shows good consistency. Results from measurements of the radiation environment at the high mountain station, the Basic Environmental Observatory Moussala (42.11 N, 23.35 E, 2925 m a.s.l.), are also shown, specifically the contribution of secondary cosmic ray neutrons. A good agreement with the model is demonstrated. Copyright © 2015 Elsevier Ltd. All rights reserved.
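    A sketch of how a tabulated yield function of the kind described is typically applied: the effective dose rate is the integral over primary energy of the yield function times the primary spectrum above the local cutoff. The power-law yield, spectrum, units, and cutoff below are illustrative stand-ins, not the tabulated values from the paper.

```python
import numpy as np

def effective_dose_rate(energy_GeV, yield_fn, spectrum, e_cut_GeV):
    """Fold an effective-dose yield function with the primary spectrum
    above the cutoff energy: integral of Y(E) * J(E) dE."""
    mask = energy_GeV >= e_cut_GeV
    return np.trapz(yield_fn[mask] * spectrum[mask], energy_GeV[mask])

E = np.logspace(-1, 3, 400)            # primary energy grid, GeV
Y = 1e-9 * E ** 0.9                    # assumed yield function (arbitrary units)
J = 1.8 * E ** -2.7                    # assumed primary proton spectrum
print(effective_dose_rate(E, Y, J, e_cut_GeV=2.0), "(toy numbers)")
```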

  6. Evaluation of standard radiation atmosphere aerosol models for a coastal environment

    NASA Technical Reports Server (NTRS)

    Whitlock, C. H.; Suttles, J. T.; Sebacher, D. I.; Fuller, W. H.; Lecroy, S. R.

    1986-01-01

    Calculations are compared with data from an experiment to evaluate the utility of standard radiation atmosphere (SRA) models for defining aerosol properties in atmospheric radiation computations. Initial calculations with only SRA aerosols in a four-layer atmospheric column simulation allowed a sensitivity study and the detection of spectral trends in optical depth, which differed from measurements. Subsequently, a more detailed analysis provided a revision in the stratospheric layer, which brought calculations in line with both optical depth and skylight radiance data. The simulation procedure allows determination of which atmospheric layers influence both downwelling and upwelling radiation spectra.

  7. Payload maintenance cost model for the space telescope

    NASA Technical Reports Server (NTRS)

    White, W. L.

    1980-01-01

    An optimum maintenance cost model for the space telescope for a fifteen-year mission cycle was developed. Various documents were reviewed, and subsequent updates of failure rates and configurations were made. The reliability of the space telescope at one year, two and one half years, and five years was determined using the failure rates and configurations. The failure rates and configurations were also used in the maintenance simulation computer model, which simulates the failure patterns for the fifteen-year mission life of the space telescope. Cost algorithms associated with the maintenance options as indicated by the failure patterns were developed and integrated into the model.
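    A minimal sketch of the kind of reliability calculation behind the 1-, 2.5- and 5-year figures, assuming a series configuration of components with constant failure rates so that R(t) = exp(-Σλ_i·t). The rates below are illustrative, not the documented space telescope values.

```python
import numpy as np

def series_reliability(failure_rates_per_year, t_years):
    """Reliability of a series system with constant failure rates."""
    lam = np.sum(failure_rates_per_year)
    return np.exp(-lam * np.asarray(t_years, dtype=float))

rates = [0.02, 0.05, 0.01, 0.03]                 # failures per year (assumed)
print(series_reliability(rates, [1.0, 2.5, 5.0]))
```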

  8. Experimentally validated multiphysics computational model of focusing and shock wave formation in an electromagnetic lithotripter.

    PubMed

    Fovargue, Daniel E; Mitran, Sorin; Smith, Nathan B; Sankin, Georgy N; Simmons, Walter N; Zhong, Pei

    2013-08-01

    A multiphysics computational model of the focusing of an acoustic pulse and subsequent shock wave formation that occurs during extracorporeal shock wave lithotripsy is presented. In the electromagnetic lithotripter modeled in this work the focusing is achieved via a polystyrene acoustic lens. The transition of the acoustic pulse through the solid lens is modeled by the linear elasticity equations and the subsequent shock wave formation in water is modeled by the Euler equations with a Tait equation of state. Both sets of equations are solved simultaneously in subsets of a single computational domain within the BEARCLAW framework which uses a finite-volume Riemann solver approach. This model is first validated against experimental measurements with a standard (or original) lens design. The model is then used to successfully predict the effects of a lens modification in the form of an annular ring cut. A second model which includes a kidney stone simulant in the domain is also presented. Within the stone the linear elasticity equations incorporate a simple damage model.
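    A small example of the Tait closure mentioned for the fluid side of the domain, p = B·((ρ/ρ0)^n − 1) + p0; the constants are commonly quoted values for water and may differ from those used in the paper.

```python
def tait_pressure(rho, rho0=1000.0, p0=101325.0, B=3.31e8, n=7.15):
    """Tait equation of state for water (commonly quoted constants)."""
    return B * ((rho / rho0) ** n - 1.0) + p0

print(tait_pressure(1010.0))   # about 2.4e7 Pa for a 1% compression
```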

  9. Experimentally validated multiphysics computational model of focusing and shock wave formation in an electromagnetic lithotripter

    PubMed Central

    Fovargue, Daniel E.; Mitran, Sorin; Smith, Nathan B.; Sankin, Georgy N.; Simmons, Walter N.; Zhong, Pei

    2013-01-01

    A multiphysics computational model of the focusing of an acoustic pulse and subsequent shock wave formation that occurs during extracorporeal shock wave lithotripsy is presented. In the electromagnetic lithotripter modeled in this work the focusing is achieved via a polystyrene acoustic lens. The transition of the acoustic pulse through the solid lens is modeled by the linear elasticity equations and the subsequent shock wave formation in water is modeled by the Euler equations with a Tait equation of state. Both sets of equations are solved simultaneously in subsets of a single computational domain within the BEARCLAW framework which uses a finite-volume Riemann solver approach. This model is first validated against experimental measurements with a standard (or original) lens design. The model is then used to successfully predict the effects of a lens modification in the form of an annular ring cut. A second model which includes a kidney stone simulant in the domain is also presented. Within the stone the linear elasticity equations incorporate a simple damage model. PMID:23927200

  10. Deterministic Local Sensitivity Analysis of Augmented Systems - II: Applications to the QUENCH-04 Experiment Using the RELAP5/MOD3.2 Code System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ionescu-Bujor, Mihaela; Jin Xuezhou; Cacuci, Dan G.

    2005-09-15

    The adjoint sensitivity analysis procedure for augmented systems for application to the RELAP5/MOD3.2 code system is illustrated. Specifically, the adjoint sensitivity model corresponding to the heat structure models in RELAP5/MOD3.2 is derived and subsequently augmented to the two-fluid adjoint sensitivity model (ASM-REL/TF). The end product, called ASM-REL/TFH, comprises the complete adjoint sensitivity model for the coupled fluid dynamics/heat structure packages of the large-scale simulation code RELAP5/MOD3.2. The ASM-REL/TFH model is validated by computing sensitivities to the initial conditions for various time-dependent temperatures in the test bundle of the Quench-04 reactor safety experiment. This experiment simulates the reflooding with water of uncovered, degraded fuel rods, clad with material (Zircaloy-4) that has the same composition and size as that used in typical pressurized water reactors. The most important response for the Quench-04 experiment is the time evolution of the cladding temperature of heated fuel rods. The ASM-REL/TFH model is subsequently used to perform an illustrative sensitivity analysis of this and other time-dependent temperatures within the bundle. The results computed by using the augmented adjoint sensitivity system, ASM-REL/TFH, highlight the reliability, efficiency, and usefulness of the adjoint sensitivity analysis procedure for computing time-dependent sensitivities.

  11. Sensitivity of chemistry-transport model simulations to the duration of chemical and transport operators: a case study with GEOS-Chem v10-01

    NASA Astrophysics Data System (ADS)

    Philip, Sajeev; Martin, Randall V.; Keller, Christoph A.

    2016-05-01

    Chemistry-transport models involve considerable computational expense. Fine temporal resolution offers accuracy at the expense of computation time. Assessment is needed of the sensitivity of simulation accuracy to the duration of chemical and transport operators. We conduct a series of simulations with the GEOS-Chem chemistry-transport model at different temporal and spatial resolutions to examine the sensitivity of simulated atmospheric composition to operator duration. Subsequently, we compare the species simulated with operator durations from 10 to 60 min as typically used by global chemistry-transport models, and identify the operator durations that optimize both computational expense and simulation accuracy. We find that longer continuous transport operator duration increases concentrations of emitted species such as nitrogen oxides and carbon monoxide since a more homogeneous distribution reduces loss through chemical reactions and dry deposition. The increased concentrations of ozone precursors increase ozone production with longer transport operator duration. Longer chemical operator duration decreases sulfate and ammonium but increases nitrate due to feedbacks with in-cloud sulfur dioxide oxidation and aerosol thermodynamics. The simulation duration decreases by up to a factor of 5 from fine (5 min) to coarse (60 min) operator duration. We assess the change in simulation accuracy with resolution by comparing the root mean square difference in ground-level concentrations of nitrogen oxides, secondary inorganic aerosols, ozone and carbon monoxide with a finer temporal or spatial resolution taken as "truth". Relative simulation error for these species increases by more than a factor of 5 from the shortest (5 min) to longest (60 min) operator duration. Chemical operator duration twice that of the transport operator duration offers more simulation accuracy per unit computation. However, the relative simulation error from coarser spatial resolution generally exceeds that from longer operator duration; e.g., degrading from 2° × 2.5° to 4° × 5° increases error by an order of magnitude. We recommend prioritizing fine spatial resolution before considering different operator durations in offline chemistry-transport models. We encourage chemistry-transport model users to specify in publications the durations of operators due to their effects on simulation accuracy.
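    A toy illustration of the operator-duration trade-off discussed above: a one-dimensional upwind advection operator applied every transport step, with a first-order chemical loss operator applied every n_chem_sub transport steps, mirroring the "chemistry twice the transport duration" recommendation when n_chem_sub = 2. This is purely illustrative and bears no relation to the GEOS-Chem operators themselves.

```python
import numpy as np

def split_run(c0, u, dx, k_loss, dt_transport, n_chem_sub, t_end):
    """Operator-split toy model: upwind advection each transport step,
    exponential chemical loss every n_chem_sub transport steps."""
    c, t, step = c0.copy(), 0.0, 0
    cfl = u * dt_transport / dx
    assert cfl <= 1.0, "upwind scheme needs CFL <= 1"
    while t < t_end:
        c[1:] -= cfl * (c[1:] - c[:-1])          # upwind transport
        step += 1
        if step % n_chem_sub == 0:               # chemistry, longer duration
            c *= np.exp(-k_loss * n_chem_sub * dt_transport)
        t += dt_transport
    return c

x = np.linspace(0.0, 1.0, 101)
c0 = np.exp(-((x - 0.2) / 0.05) ** 2)
print(split_run(c0, u=0.5, dx=x[1] - x[0], k_loss=0.3,
                dt_transport=0.01, n_chem_sub=2, t_end=1.0).max())
```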

  12. Sensitivity of chemical transport model simulations to the duration of chemical and transport operators: a case study with GEOS-Chem v10-01

    NASA Astrophysics Data System (ADS)

    Philip, S.; Martin, R. V.; Keller, C. A.

    2015-11-01

    Chemical transport models involve considerable computational expense. Fine temporal resolution offers accuracy at the expense of computation time. Assessment is needed of the sensitivity of simulation accuracy to the duration of chemical and transport operators. We conduct a series of simulations with the GEOS-Chem chemical transport model at different temporal and spatial resolutions to examine the sensitivity of simulated atmospheric composition to temporal resolution. Subsequently, we compare the tracers simulated with operator durations from 10 to 60 min as typically used by global chemical transport models, and identify the timesteps that optimize both computational expense and simulation accuracy. We found that longer transport timesteps increase concentrations of emitted species such as nitrogen oxides and carbon monoxide since a more homogeneous distribution reduces loss through chemical reactions and dry deposition. The increased concentrations of ozone precursors increase ozone production at longer transport timesteps. Longer chemical timesteps decrease sulfate and ammonium but increase nitrate due to feedbacks with in-cloud sulfur dioxide oxidation and aerosol thermodynamics. The simulation duration decreases by an order of magnitude from fine (5 min) to coarse (60 min) temporal resolution. We assess the change in simulation accuracy with resolution by comparing the root mean square difference in ground-level concentrations of nitrogen oxides, ozone, carbon monoxide and secondary inorganic aerosols with a finer temporal or spatial resolution taken as truth. Simulation error for these species increases by more than a factor of 5 from the shortest (5 min) to longest (60 min) temporal resolution. Chemical timesteps twice that of the transport timestep offer more simulation accuracy per unit computation. However, simulation error from coarser spatial resolution generally exceeds that from longer timesteps; e.g. degrading from 2° × 2.5° to 4° × 5° increases error by an order of magnitude. We recommend prioritizing fine spatial resolution before considering different temporal resolutions in offline chemical transport models. We encourage the chemical transport model users to specify in publications the durations of operators due to their effects on simulation accuracy.

  13. Risk as feelings in the effect of patient outcomes on physicians' future treatment decisions: a randomized trial and manipulation validation.

    PubMed

    Hemmerich, Joshua A; Elstein, Arthur S; Schwarze, Margaret L; Moliski, Elizabeth Ghini; Dale, William

    2012-07-01

    The present study tested predictions derived from the Risk as Feelings hypothesis about the effects of prior patients' negative treatment outcomes on physicians' subsequent treatment decisions. Two experiments at The University of Chicago, U.S.A., utilized a computer simulation of an abdominal aortic aneurysm (AAA) patient with enhanced realism to present participants with one of three experimental conditions: AAA rupture causing a watchful waiting death (WWD), perioperative death (PD), or a successful operation (SO), as well as the statistical treatment guidelines for AAA. Experiment 1 tested effects of these simulated outcomes on (n = 76) laboratory participants' (university student sample) self-reported emotions, and their ratings of valence and arousal of the AAA rupture simulation and other emotion-inducing picture stimuli. Experiment 2 tested two hypotheses: 1) that experiencing a patient WWD in the practice trial's experimental condition would lead physicians to choose surgery earlier, and 2) experiencing a patient PD would lead physicians to choose surgery later with the next patient. Experiment 2 presented (n = 132) physicians (surgeons and geriatricians) with the same experimental manipulation and a second simulated AAA patient. Physicians then chose to either go to surgery or continue watchful waiting. The results of Experiment 1 demonstrated that the WWD experimental condition significantly increased anxiety, and was rated similarly to other negative and arousing pictures. The results of Experiment 2 demonstrated that, after controlling for demographics, baseline anxiety, intolerance for uncertainty, risk attitudes, and the influence of simulation characteristics, the WWD experimental condition significantly expedited decisions to choose surgery for the next patient. The results support the Risk as Feelings hypothesis on physicians' treatment decisions in a realistic AAA patient computer simulation. Bad outcomes affected emotions and decisions, even with statistical AAA rupture risk guidance present. These results suggest that bad patient outcomes cause physicians to experience anxiety and regret that influences their subsequent treatment decision-making for the next patient. Copyright © 2012 Elsevier Ltd. All rights reserved.

  14. Aerosol transport simulations in indoor and outdoor environments using computational fluid dynamics (CFD)

    NASA Astrophysics Data System (ADS)

    Landazuri, Andrea C.

    This dissertation focuses on aerosol transport modeling in occupational environments and mining sites in Arizona using computational fluid dynamics (CFD). The impacts of human exposure in both environments are explored with emphasis on turbulence, wind speed, wind direction and particle sizes. Final emissions simulations involved the digitization of available elevation contour plots of one of the mining sites to account for realistic topographical features. The digital elevation map (DEM) of one of the sites was imported to COMSOL MULTIPHYSICS® for subsequent turbulence and particle simulations. Simulation results that include realistic topography show considerable deviations of wind direction. Inter-element correlation results using metal and metalloid size-resolved concentration data from a Micro-Orifice Uniform Deposit Impactor (MOUDI) under given wind speeds and directions provided guidance on groups of metals that coexist throughout mining activities. The groups Fe-Mg, Cr-Fe, Al-Sc, Sc-Fe, and Mg-Al are strongly correlated for unrestricted wind directions and speeds, suggesting that the source may be of soil origin (e.g. ore and tailings); also, groups of elements where Cu is present, in the coarse fraction range, may come from mechanical-action mining activities and the saltation phenomenon. In addition, MOUDI data under low wind speeds (<2 m/s) and at night showed a strong correlation for 1 μm particles between the groups: Sc-Be-Mg, Cr-Al, Cu-Mn, Cd-Pb-Be, Cd-Cr, Cu-Pb, Pb-Cd, As-Cd-Pb. The As-Cd-Pb group correlates strongly in almost all ranges of particle sizes. When restricted low wind speeds were imposed, more groups of elements became evident, which may be explained by the fact that at lower speeds particles are more likely to settle. When linking these results with CFD simulations and Pb-isotope results, it is concluded that the source of elements found in association with Pb in the fine fraction comes from the ore that is subsequently processed in the smelter site, whereas the source of elements associated with Pb in the coarse fraction is of different origin. CFD simulation results not only provide realistic and quantifiable information on potential deleterious effects, but also demonstrate that the application of CFD represents an important contribution to actual dispersion modeling studies; therefore, Computational Fluid Dynamics can be used as a source apportionment tool to identify areas that have an effect on specific sampling points and susceptible regions under certain meteorological conditions, and these conclusions can be supported by inter-element correlation matrices and lead isotope analysis, especially since there is limited access to the mining sites. Additional results concluded that grid adaptation is a powerful tool that allows specific regions requiring detail to be refined and the flow to be better resolved, provides a higher number of locations with monotonic convergence than the manual grids, and requires the least computational effort. CFD simulations were approached using the k-epsilon model, with the aid of computer-aided engineering software: ANSYS® and COMSOL MULTIPHYSICS®. The success of aerosol transport simulations depends on a good simulation of the turbulent flow. Considerable attention was placed on investigating and choosing the best models in terms of convergence, independence and computational effort.
This dissertation also includes preliminary studies of transient discrete-phase, Eulerian, and species transport modeling, the importance of particle saltation, information on CFD methods, and strategies for future directions.

  15. Controlled Studies of Whistler Wave Interactions with Energetic Particles in Radiation Belts

    DTIC Science & Technology

    2009-07-01

    the IGRF geomagnetic field and PIM ionosphere/plasmasphere models. Those simulations demonstrate that on this particular evening 28.5 kHz whistler... a simplified slab model of ionospheric plasmas, we can compute the transmission coefficient and, subsequently, estimate that ~15% of the incident... with inner radiation belts as well as the ionospheric effects caused by precipitated energetic electrons. The whistler waves used in our experiments...

  16. Modeling cell adhesion and proliferation: a cellular-automata based approach.

    PubMed

    Vivas, J; Garzón-Alvarado, D; Cerrolaza, M

    Cell adhesion is a process that involves the interaction between the cell membrane and another surface, either a cell or a substrate. Unlike experimental tests, computer models can simulate processes and study the results of experiments in a shorter time and at lower cost. One of the tools used to simulate biological processes is the cellular automaton, a dynamic system that is discrete in both space and time. This work describes a computer model based on cellular automata for the adhesion process and cell proliferation to predict the behavior of a cell population in suspension and adhered to a substrate. The values of the simulated system were obtained through experimental tests on fibroblast monolayer cultures. The results allow us to estimate the settling time of cells in culture as well as the adhesion and proliferation time. The change in cell morphology as adhesion over the contact surface progresses was also observed. The formation of the initial link between the cell and the substrate was observed after 100 min, with the cell retaining its spherical morphology on the substrate during the simulation. The cellular automata model developed is, however, a simplified representation of the steps in the adhesion process and the subsequent proliferation. A combined framework of experimental and computational simulation based on cellular automata was proposed to represent fibroblast adhesion on substrates and the macro-scale changes observed in the cell during the adhesion process. The approach proved to be simple and efficient.
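    A minimal cellular-automata sketch in the spirit of the model described: suspended cells settle onto empty substrate sites, settled cells adhere (spread) with some probability per step, and adhered cells divide into a random empty neighbour. The lattice size, state encoding, and probabilities are illustrative assumptions, not parameters fitted to the fibroblast data.

```python
import numpy as np

def run_ca(n=50, steps=200, p_settle=0.05, p_adhere=0.3, p_divide=0.02, seed=0):
    """States: 0 = empty substrate, 1 = settled (round) cell, 2 = adhered cell."""
    rng = np.random.default_rng(seed)
    grid = np.zeros((n, n), dtype=np.int8)
    for _ in range(steps):
        # settling from suspension onto random empty sites
        empty = np.argwhere(grid == 0)
        for i, j in empty[rng.random(len(empty)) < p_settle]:
            grid[i, j] = 1
        # adhesion (spreading) of settled cells
        settled = grid == 1
        grid[settled & (rng.random(grid.shape) < p_adhere)] = 2
        # proliferation of adhered cells into a random empty neighbour
        for i, j in np.argwhere(grid == 2):
            di, dj = rng.choice([-1, 0, 1], 2)
            ni, nj = (i + di) % n, (j + dj) % n
            if grid[ni, nj] == 0 and rng.random() < p_divide:
                grid[ni, nj] = 1
    return grid

g = run_ca()
print("settled:", int((g == 1).sum()), "adhered:", int((g == 2).sum()))
```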

  17. Computer simulations of disordering kinetics in irradiated intermetallic compounds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spaczer, M.; Caro, A.; Victoria, M.

    1994-11-01

    Molecular-dynamics computer simulations of collision cascades in intermetallic Cu3Au, Ni3Al, and NiAl have been performed to study the nature of the disordering processes in the collision cascade. The choice of these systems was suggested by the quite accurate description of the thermodynamic properties obtained using embedded-atom-type potentials. Since melting occurs in the core of the cascades, interesting effects appear as a result of the superposition of the loss (and subsequent recovery) of the crystalline order and the evolution of the chemical order, both processes being developed on different time scales. In our previous simulations on Ni3Al and Cu3Au [T. Diaz de la Rubia, A. Caro, and M. Spaczer, Phys. Rev. B 47, 11 483 (1993)] we found a significant difference between the time evolution of the chemical short-range order (SRO) and the crystalline order in the cascade core for both alloys, namely the complete loss of the crystalline structure but only partial chemical disordering. Recent computer simulations in NiAl show the same phenomena. To understand these features we study the liquid phase of these three alloys and present simulation results concerning the dynamical melting of small samples, examining the atomic mobility, the relaxation time, and the saturation value of the chemical short-range order. An analytic model for the time evolution of the SRO is given.

  18. Spiking Phineas Gage: a neurocomputational theory of cognitive-affective integration in decision making.

    PubMed

    Wagar, Brandon M; Thagard, Paul

    2004-01-01

    The authors present a neurological theory of how cognitive information and emotional information are integrated in the nucleus accumbens during effective decision making. They describe how the nucleus accumbens acts as a gateway to integrate cognitive information from the ventromedial prefrontal cortex and the hippocampus with emotional information from the amygdala. The authors have modeled this integration by a network of spiking artificial neurons organized into separate areas and used this computational model to simulate 2 kinds of cognitive-affective integration. The model simulates successful performance by people with normal cognitive-affective integration. The model also simulates the historical case of Phineas Gage as well as subsequent patients whose ability to make decisions became impeded by damage to the ventromedial prefrontal cortex.

  19. Aeroservoelastic and Flight Dynamics Analysis Using Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Arena, Andrew S., Jr.

    1999-01-01

    This document is based in large part on the Master's thesis of Cole Stephens. It encompasses a variety of technical and practical issues involved in using the STARS codes for aeroservoelastic analysis of vehicles, and covers in detail the technical issues and step-by-step procedures involved in the simulation of a system where aerodynamics, structures, and controls are tightly coupled. Comparisons are made to a benchmark experimental program conducted at NASA Langley. One significant advantage of the methodology detailed is that, as a result of the technique used to accelerate the CFD-based simulation, a systems model is produced which is very useful for developing the control law strategy and for subsequent high-speed simulations.

  20. Where next for the reproducibility agenda in computational biology?

    PubMed

    Lewis, Joanna; Breeze, Charles E; Charlesworth, Jane; Maclaren, Oliver J; Cooper, Jonathan

    2016-07-15

    The concept of reproducibility is a foundation of the scientific method. With the arrival of fast and powerful computers over the last few decades, there has been an explosion of results based on complex computational analyses and simulations. The reproducibility of these results has been addressed mainly in terms of exact replicability or numerical equivalence, ignoring the wider issue of the reproducibility of conclusions through equivalent, extended or alternative methods. We use case studies from our own research experience to illustrate how concepts of reproducibility might be applied in computational biology. Several fields have developed 'minimum information' checklists to support the full reporting of computational simulations, analyses and results, and standardised data formats and model description languages can facilitate the use of multiple systems to address the same research question. We note the importance of defining the key features of a result to be reproduced, and the expected agreement between original and subsequent results. Dynamic, updatable tools for publishing methods and results are becoming increasingly common, but sometimes come at the cost of clear communication. In general, the reproducibility of computational research is improving but would benefit from additional resources and incentives. We conclude with a series of linked recommendations for improving reproducibility in computational biology through communication, policy, education and research practice. More reproducible research will lead to higher quality conclusions, deeper understanding and more valuable knowledge.

  1. Using Numerical Modeling to Simulate Space Capsule Ground Landings

    NASA Technical Reports Server (NTRS)

    Heymsfield, Ernie; Fasanella, Edwin L.

    2009-01-01

    Experimental work is being conducted at the National Aeronautics and Space Administration's (NASA) Langley Research Center (LaRC) to investigate ground landing capabilities of the Orion crew exploration vehicle (CEV). The Orion capsule is NASA's replacement for the Space Shuttle. The Orion capsule will service the International Space Station and be used for future space missions to the Moon and to Mars. To evaluate the feasibility of Orion ground landings, a series of capsule impact tests are being performed at the NASA Langley Landing and Impact Research Facility (LandIR). The experimental results derived at LandIR provide a means to validate and calibrate nonlinear dynamic finite element models, which are also being developed during this study. Because of the high cost and time involvement intrinsic to full-scale testing, numerical simulations are favored over experimental work. Once the numerical model has been validated against actual test responses, impact simulations will be conducted to study multiple impact scenarios not practical to test. Twenty-one swing tests using the LandIR gantry were conducted from June 2007 through October 2007 to evaluate the Orion's impact response. Results for two capsule initial pitch angles, 0 deg and -15 deg, along with their computer simulations using LS-DYNA are presented in this article. A soil-vehicle friction coefficient of 0.45 was determined by comparing the test stopping distance with computer simulations. In addition, soil modeling accuracy is presented by comparing vertical penetrometer impact tests with computer simulations for the soil model used during the swing tests.

  2. Simulation Training for Residents Focused on Mechanical Ventilation: A Randomized Trial Using Mannequin-Based Versus Computer-Based Simulation.

    PubMed

    Spadaro, Savino; Karbing, Dan Stieper; Fogagnolo, Alberto; Ragazzi, Riccardo; Mojoli, Francesco; Astolfi, Luca; Gioia, Antonio; Marangoni, Elisabetta; Rees, Stephen Edward; Volta, Carlo Alberto

    2017-12-01

    Advances in knowledge regarding mechanical ventilation (MV), in particular lung-protective ventilation strategies, have been shown to reduce mortality. However, the translation of these advances in knowledge into better therapeutic performance in real-life clinical settings continues to lag. High-fidelity simulation with a mannequin allows students to interact in lifelike situations; this may be a valuable addition to traditional didactic teaching. The purpose of this study is to compare computer-based and mannequin-based approaches for training residents on MV. This prospective randomized single-blind trial involved 50 residents. All participants attended the same didactic lecture on respiratory pathophysiology and were subsequently randomized into two groups: the mannequin group (n = 25) and the computer screen-based simulator group (n = 25). One week later, each underwent a training assessment using five different scenarios of acute respiratory failure of different etiologies. Later, both groups underwent further testing of patient management, using in situ high-fidelity simulation of a patient with acute respiratory distress syndrome. Baseline knowledge was not significantly different between the two groups (P = 0.72). Regarding the training assessment, no significant differences were detected between the groups. In the final assessment, the scores of only the mannequin group significantly improved between the training and final session in terms of either global rating score [3.0 (2.5-4.0) vs. 2.0 (2.0-3.0), P = 0.005] or percentage of key score (82% vs. 71%, P = 0.001). Mannequin-based simulation has the potential to improve skills in managing MV.

  3. Towards Full Aircraft Airframe Noise Prediction: Detached Eddy Simulations

    NASA Technical Reports Server (NTRS)

    Khorrami, Mehdi R.; Mineck, Raymond E.

    2014-01-01

    Results from a computational study on the aeroacoustic characteristics of an 18%-scale, semi-span Gulfstream aircraft model are presented in this paper. NASA's FUN3D unstructured compressible Navier-Stokes solver was used to perform steady and unsteady simulations of the flow field associated with this high-fidelity aircraft model. Solutions were obtained for free-air at a Mach number of 0.2 with the flap deflected at 39 deg, with the main gear off and on (the two baseline configurations). Initially, the study focused on accurately predicting the prominent noise sources at both flap tips for the baseline configuration with deployed flap only. Building upon the experience gained from this initial effort, subsequent work involved the full landing configuration with both flap and main landing gear deployed. For the unsteady computations, we capitalized on the Detached Eddy Simulation capability of FUN3D to capture the complex time-dependent flow features associated with the flap and main gear. To resolve the noise sources over a broad frequency range, the tailored grid was very dense near the flap inboard and outboard tips and the region surrounding the gear. Extensive comparison of the computed steady and unsteady surface pressures with wind tunnel measurements showed good agreement for the global aerodynamic characteristics and the local flow field at the flap inboard tip. However, the computed pressure coefficients indicated that a zone of separated flow that forms in the vicinity of the outboard tip is larger in extent along the flap span and chord than measurements suggest. Computed farfield acoustic characteristics from a FW-H integral approach that used the simulated pressures on the model solid surface were in excellent agreement with corresponding measurements.

  4. Test-Free Fracture Toughness

    NASA Technical Reports Server (NTRS)

    Minnetyan, Levon; Chamis, Christos C. (Technical Monitor)

    2003-01-01

    Computational simulation can predict damage growth and progression and the fracture toughness of composite structures. Experimental data from the literature provide environmental effects on the fracture behavior of metallic or fiber composite structures. However, the traditional experimental methods for analyzing the influence of the imposed conditions are expensive and time consuming. This research used the CODSTRAN code to model the temperature, scaling, and loading effects of fiber/braided composite specimens, with and without fiber-optic sensors, on damage initiation and energy release rates. The load-displacement relationship and fracture toughness assessment approach are compared with test results from the literature, and it is verified that the computational simulation, with the use of established material modeling and finite element modules, adequately tracks the changes in fracture toughness and the subsequent fracture propagation for any fiber/braided composite structure due to changes in fiber orientation, the presence of large-diameter optical fibers, and any loading conditions.

  6. The Multi-Step CADIS method for shutdown dose rate calculations and uncertainty propagation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ibrahim, Ahmad M.; Peplow, Douglas E.; Grove, Robert E.

    2015-12-01

    Shutdown dose rate (SDDR) analysis requires (a) a neutron transport calculation to estimate neutron flux fields, (b) an activation calculation to compute radionuclide inventories and associated photon sources, and (c) a photon transport calculation to estimate final SDDR. In some applications, accurate full-scale Monte Carlo (MC) SDDR simulations are needed for very large systems with massive amounts of shielding materials. However, these simulations are impractical because calculation of space- and energy-dependent neutron fluxes throughout the structural materials is needed to estimate distribution of radioisotopes causing the SDDR. Biasing the neutron MC calculation using an importance function is not simple because it is difficult to explicitly express the response function, which depends on subsequent computational steps. Furthermore, the typical SDDR calculations do not consider how uncertainties in MC neutron calculation impact SDDR uncertainty, even though MC neutron calculation uncertainties usually dominate SDDR uncertainty.

  7. Computational Aerothermodynamics in Aeroassist Applications

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.

    2001-01-01

    Aeroassisted planetary entry uses atmospheric drag to decelerate spacecraft from super-orbital to orbital or suborbital velocities. Numerical simulation of flow fields surrounding these spacecraft during hypersonic atmospheric entry is required to define aerothermal loads. The severe compression in the shock layer in front of the vehicle and subsequent, rapid expansion into the wake are characterized by high temperature, thermo-chemical nonequilibrium processes. Implicit algorithms required for efficient, stable computation of the governing equations involving disparate time scales of convection, diffusion, chemical reactions, and thermal relaxation are discussed. Robust point-implicit strategies are utilized in the initialization phase; less robust but more efficient line-implicit strategies are applied in the endgame. Applications to ballutes (balloon-like decelerators) in the atmospheres of Venus, Mars, Titan, Saturn, and Neptune and a Mars Sample Return Orbiter (MSRO) are featured. Examples are discussed where time-accurate simulation is required to achieve a steady-state solution.

  8. Lattice Boltzmann simulation of antiplane shear loading of a stationary crack

    NASA Astrophysics Data System (ADS)

    Schlüter, Alexander; Kuhn, Charlotte; Müller, Ralf

    2018-01-01

    In this work, the lattice Boltzmann method is applied to study the dynamic behaviour of linear elastic solids under antiplane shear deformation. In this case, the governing set of partial differential equations reduces to a scalar wave equation for the out-of-plane displacement in a two-dimensional domain. The lattice Boltzmann approach developed by Guangwu (J Comput Phys 161(1):61-69, 2000) is used to solve the problem numerically. Some aspects of the scheme are highlighted, including the treatment of the boundary conditions. Subsequently, the performance of the lattice Boltzmann scheme is tested for a stationary crack problem for which an analytic solution exists. The treatment of cracks is new compared to the examples that are discussed in Guangwu's work. Furthermore, the lattice Boltzmann simulations are compared to finite element computations. Finally, the influence of the lattice Boltzmann relaxation parameter on the stability of the scheme is illustrated.
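
    For orientation, the equation being solved in the antiplane case is the scalar wave equation u_tt = c^2 * laplacian(u) for the out-of-plane displacement u. The sketch below is not the lattice Boltzmann scheme of the paper; it is a standard explicit central-difference solver for that same target equation (grid, wave speed, and initial pulse are assumed), of the kind a lattice Boltzmann or finite element result would be checked against.

```python
import numpy as np

# Hypothetical parameters: shear wave speed, grid spacing and size, time step (CFL-stable in 2D: c*dt/dx <= 1/sqrt(2))
c, dx, nx, ny = 1.0, 0.01, 200, 200
dt = 0.5 * dx / c
r2 = (c * dt / dx) ** 2

u_prev = np.zeros((ny, nx))
u = np.zeros((ny, nx))
u[ny // 2, nx // 2] = 1e-3          # small initial out-of-plane displacement pulse (illustrative)

for _ in range(500):
    # Five-point Laplacian and explicit leapfrog update u^{n+1} = 2u^n - u^{n-1} + (c dt/dx)^2 * lap(u^n)
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
    u_next = 2.0 * u - u_prev + r2 * lap
    u_next[0, :] = u_next[-1, :] = u_next[:, 0] = u_next[:, -1] = 0.0  # clamped (fixed) boundary
    u_prev, u = u, u_next

print("max |u| after 500 steps:", float(np.abs(u).max()))
```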

  9. Analog Design for Digital Deployment of a Serious Leadership Game

    NASA Technical Reports Server (NTRS)

    Maxwell, Nicholas; Lang, Tristan; Herman, Jeffrey L.; Phares, Richard

    2012-01-01

    This paper presents the design, development, and user testing of a leadership development simulation. The authors share lessons learned from using a design process for a board game to allow for quick and inexpensive revision cycles during the development of a serious leadership development game. The goal of this leadership simulation is to accelerate the development of leadership capacity in high-potential mid-level managers (GS-15 level) in a federal government agency. Simulation design included a mixed-method needs analysis, using both quantitative and qualitative approaches to determine organizational leadership needs. Eight design iterations were conducted, including three user testing phases. Three re-design iterations followed initial development, enabling game testing as part of comprehensive instructional events. Subsequent design, development and testing processes targeted digital application to a computer- and tablet-based environment. Recommendations include pros and cons of development and learner testing of an initial analog simulation prior to full digital simulation development.

  10. Nonlinear Reduced-Order Simulation Using An Experimentally Guided Modal Basis

    NASA Technical Reports Server (NTRS)

    Rizzi, Stephen A.; Przekop, Adam

    2012-01-01

    A procedure is developed for using nonlinear experimental response data to guide the modal basis selection in a nonlinear reduced-order simulation. The procedure entails using nonlinear acceleration response data to first identify proper orthogonal modes. Special consideration is given to cases in which some of the desired response data is unavailable. Bases consisting of linear normal modes are then selected to best represent the experimentally determined transverse proper orthogonal modes and either experimentally determined in-plane proper orthogonal modes or the special case of numerically computed in-plane companions. The bases are subsequently used in nonlinear modal reduction and dynamic response simulations. The experimental data used in this work is simulated to allow some practical considerations, such as the availability of in-plane response data and non-idealized test conditions, to be explored. Comparisons of the nonlinear reduced-order simulations are made with the surrogate experimental data to demonstrate the effectiveness of the approach.
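
    A minimal sketch of the two steps described above, under assumed data: proper orthogonal modes are extracted from (here synthetic) response snapshots via the singular value decomposition, and a set of linear normal modes is then combined in a least-squares sense to best represent them. Array sizes, the energy threshold, and the placeholder normal modes are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)
n_dof, n_t, n_fe = 120, 2000, 30

# Placeholder "linear normal modes" (columns of phi), standing in for modes from a finite element model
phi = np.linalg.qr(rng.standard_normal((n_dof, n_fe)))[0]

# Synthetic response snapshots: a few dominant modal coordinates plus measurement noise
q = rng.standard_normal((n_fe, n_t)) * np.linspace(1.0, 0.01, n_fe)[:, None]
snapshots = phi @ q + 0.01 * rng.standard_normal((n_dof, n_t))

# Step 1: proper orthogonal modes = left singular vectors of the snapshot matrix
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
n_pod = int(np.searchsorted(energy, 0.99)) + 1      # keep modes capturing 99% of the signal energy
pod_modes = U[:, :n_pod]

# Step 2: least-squares combination of the normal modes that best represents each POD mode
coeffs, *_ = np.linalg.lstsq(phi, pod_modes, rcond=None)
residual = np.linalg.norm(phi @ coeffs - pod_modes) / np.linalg.norm(pod_modes)
print(f"{n_pod} POD modes retained, relative representation error {residual:.2e}")
```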

  11. An improved version of NCOREL: A computer program for 3-D nonlinear supersonic potential flow computations

    NASA Technical Reports Server (NTRS)

    Siclari, Michael J.

    1988-01-01

    A computer code called NCOREL (for Nonconical Relaxation) has been developed to solve for supersonic full potential flows over complex geometries. The method first solves for the conical flow at the apex and then marches downstream in a spherical coordinate system. Implicit relaxation techniques are used to numerically solve the full potential equation at each subsequent crossflow plane. Many improvements have been made to the original code, including more reliable numerics for computing wing-body flows with multiple embedded shocks, inlet flow-through simulation, a wake model, and entropy corrections. Line relaxation or approximate factorization schemes are optionally available. Among the new features are improved internal grid generation using analytic conformal mappings, a simple geometric input in the Harris wave-drag format originally developed for panel methods, and an internal geometry package.

  12. The role of water molecules in computational drug design.

    PubMed

    de Beer, Stephanie B A; Vermeulen, Nico P E; Oostenbrink, Chris

    2010-01-01

    Although water molecules are small and only consist of two different atom types, they play various roles in cellular systems. This review discusses their influence on the binding process between biomacromolecular targets and small molecule ligands and how this influence can be modeled in computational drug design approaches. Both the structure and the thermodynamics of active site waters will be discussed as these influence the binding process significantly. Structurally conserved waters cannot always be determined experimentally and if observed, it is not clear if they will be replaced upon ligand binding, even if sufficient space is available. Methods to predict the presence of water in protein-ligand complexes will be reviewed. Subsequently, we will discuss methods to include water in computational drug research. Either as an additional factor in automated docking experiments, or explicitly in detailed molecular dynamics simulations, the effect of water on the quality of the simulations is significant, but not easily predicted. The most detailed calculations involve estimates of the free energy contribution of water molecules to protein-ligand complexes. These calculations are computationally demanding, but give insight in the versatility and importance of water in ligand binding.

  13. Computational Fluid Dynamics Analysis of the Effect of Plaques in the Left Coronary Artery

    PubMed Central

    Chaichana, Thanapong; Sun, Zhonghua; Jewkes, James

    2012-01-01

    The aim of this study was to investigate the hemodynamic effect of simulated plaques in left coronary artery models, which were generated from a sample patient's data. Plaques were simulated and placed at the left main stem and the left anterior descending (LAD) artery to produce at least 60% coronary stenosis. Computational fluid dynamics analysis was performed to simulate realistic physiological conditions that reflect the in vivo cardiac hemodynamics, and a comparison of wall shear stress (WSS) between Newtonian and non-Newtonian fluid models was performed. The pressure gradient (PSG) and flow velocities in the left coronary artery were measured and compared in the left coronary models with and without the presence of plaques during the cardiac cycle. Our results showed that the highest PSG was observed in stenotic regions caused by the plaques. Low flow velocity areas were found at post-plaque locations in the left circumflex, LAD, and bifurcation. WSS at the stenotic locations was similar between the non-Newtonian and Newtonian models, although more detail was observed with the non-Newtonian model. There is a direct correlation between coronary plaques and subsequent hemodynamic changes, based on the simulation of plaques in the realistic coronary models. PMID:22400051

  14. A Simulation Based Approach for Contingency Planning for Aircraft Turnaround Operation System Activities in Airline Hubs

    NASA Technical Reports Server (NTRS)

    Adeleye, Sanya; Chung, Christopher

    2006-01-01

    Commercial aircraft undergo a significant number of maintenance and logistical activities during the turnaround operation at the departure gate. By analyzing the sequencing of these activities, more effective turnaround contingency plans may be developed for logistical and maintenance disruptions. Turnaround contingency plans are particularly important as any kind of delay in a hub-based system may cascade into further delays with subsequent connections. The contingency sequencing of the maintenance and logistical turnaround activities were analyzed using a combined network and computer simulation modeling approach. Experimental analysis of both current and alternative policies provides a framework to aid in more effective tactical decision making.

  15. Process Integrated Mechanism for Human-Computer Collaboration and Coordination

    DTIC Science & Technology

    2012-09-12

    system we implemented the TAFLib library that provides the communication with TAF. The data received from the TAF server is collected in a data structure... send new commands and flight plans for the UAVs to the TAF server. Test scenarios: Several scenarios have been implemented to test and prove our... areas. Shooting Enemies: The basic scenario proved the successful integration of PIM and the TAF simulation environment. Subsequently we improved the CP

  16. Metabolic flexibility of mitochondrial respiratory chain disorders predicted by computer modelling.

    PubMed

    Zieliński, Łukasz P; Smith, Anthony C; Smith, Alexander G; Robinson, Alan J

    2016-11-01

    Mitochondrial respiratory chain dysfunction causes a variety of life-threatening diseases affecting about 1 in 4300 adults. These diseases are genetically heterogeneous, but have the same outcome: reduced activity of mitochondrial respiratory chain complexes, causing decreased ATP production and potentially toxic accumulation of metabolites. The severity and tissue specificity of these effects vary between patients by unknown mechanisms, and treatment options are limited. So far most research has focused on the complexes themselves, and the impact on overall cellular metabolism is largely unclear. To illustrate how computer modelling can be used to better understand the potential impact of these disorders and inspire new research directions and treatments, we simulated them using a computer model of human cardiomyocyte mitochondrial metabolism containing over 300 characterised reactions and transport steps, with experimental parameters taken from the literature. Overall, simulations were consistent with patient symptoms, supporting their biological and medical significance. These simulations predicted that complex I deficiencies could be compensated for using multiple pathways; complex II deficiencies had less metabolic flexibility because they impact both the TCA cycle and the respiratory chain; and complex III and IV deficiencies caused the greatest decreases in ATP production, with metabolic consequences that parallel hypoxia. Our study demonstrates how results from computer models can be compared to a clinical phenotype and used as a tool for hypothesis generation for subsequent experimental testing. These simulations can enhance understanding of dysfunctional mitochondrial metabolism and suggest new avenues for research into treatment of mitochondrial disease and other areas of mitochondrial dysfunction.

  17. Numerical simulation and validation of SI-CAI hybrid combustion in a CAI/HCCI gasoline engine

    NASA Astrophysics Data System (ADS)

    Wang, Xinyan; Xie, Hui; Xie, Liyan; Zhang, Lianfang; Li, Le; Chen, Tao; Zhao, Hua

    2013-02-01

    SI-CAI hybrid combustion, also known as spark-assisted compression ignition (SACI), is a promising concept to extend the operating range of CAI (Controlled Auto-Ignition) and achieve the smooth transition between spark ignition (SI) and CAI in the gasoline engine. In this study, a SI-CAI hybrid combustion model (HCM) has been constructed on the basis of the 3-Zones Extended Coherent Flame Model (ECFM3Z). An ignition model is included to initiate the ECFM3Z calculation and induce the flame propagation. In order to precisely depict the subsequent auto-ignition process of the unburned fuel and air mixture independently after the initiation of flame propagation, the tabulated chemistry concept is adopted to describe the auto-ignition chemistry. The methodology for extracting tabulated parameters from the chemical kinetics calculations is developed so that both cool flame reactions and main auto-ignition combustion can be well captured under a wider range of thermodynamic conditions. The SI-CAI hybrid combustion model (HCM) is then applied in the three-dimensional computational fluid dynamics (3-D CFD) engine simulation. The simulation results are compared with the experimental data obtained from a single cylinder VVA engine. The detailed analysis of the simulations demonstrates that the SI-CAI hybrid combustion process is characterised by the early flame propagation and subsequent multi-site auto-ignition around the main flame front, which is consistent with the optical results reported by other researchers. Besides, the systematic study of the in-cylinder condition reveals the influence mechanism of the early flame propagation on the subsequent auto-ignition.
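
    The tabulated-chemistry idea mentioned above can be illustrated with a small sketch: an auto-ignition quantity (here an ignition delay filled in with a placeholder correlation rather than real chemical kinetics) is pre-computed on a grid of temperature, pressure, and equivalence ratio, and the CFD solver then interpolates in that table at run time instead of integrating detailed chemistry in every cell. The grid, correlation, and query values below are hypothetical.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Table axes (hypothetical ranges): unburned-gas temperature [K], pressure [bar], equivalence ratio
T = np.linspace(650.0, 1100.0, 10)
p = np.linspace(10.0, 60.0, 6)
phi = np.linspace(0.6, 1.4, 5)

# Placeholder Arrhenius-like correlation standing in for detailed kinetics calculations (illustrative only)
TT, PP, FF = np.meshgrid(T, p, phi, indexing="ij")
tau_ign = 1e-6 * np.exp(15000.0 / TT) * PP**-0.7 * FF**-0.3   # ignition delay [s]

table = RegularGridInterpolator((T, p, phi), tau_ign)

# During the CFD run, each cell queries the table with its local thermodynamic state
local_state = np.array([[820.0, 35.0, 1.0]])
print("interpolated ignition delay [ms]:", float(table(local_state)[0]) * 1e3)
```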

  18. Simulating coupled dynamics of a rigid-flexible multibody system and compressible fluid

    NASA Astrophysics Data System (ADS)

    Hu, Wei; Tian, Qiang; Hu, HaiYan

    2018-04-01

    As a continuation of the authors' previous studies, a new parallel computation approach is proposed to simulate the coupled dynamics of a rigid-flexible multibody system and compressible fluid. In this approach, the smoothed particle hydrodynamics (SPH) method is used to model the compressible fluid, while the natural coordinate formulation (NCF) and absolute nodal coordinate formulation (ANCF) are used to model the rigid and flexible bodies, respectively. In order to model the compressible fluid properly and efficiently via the SPH method, three measures are taken as follows. The first is to use a Riemann solver to cope with the fluid compressibility, the second is to define virtual particles of SPH to model the dynamic interaction between the fluid and the multibody system, and the third is to impose boundary conditions of periodical inflow and outflow to reduce the number of SPH particles involved in the computation process. Afterwards, a parallel computation strategy is proposed based on the graphics processing unit (GPU) to detect the neighboring SPH particles and to solve the dynamic equations of the SPH particles in order to improve the computation efficiency. Meanwhile, the generalized-alpha algorithm is used to solve the dynamic equations of the multibody system. Finally, four case studies are given to validate the proposed parallel computation approach.
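
    The neighbour detection that dominates SPH cost is commonly handled with a uniform background grid (cell list): each particle is hashed to a cubic cell whose edge equals the smoothing length, so candidate neighbours need only be gathered from the 27 surrounding cells. The sketch below is a serial NumPy illustration of that idea, not the GPU implementation of the paper; the particle count and smoothing length are assumed.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(2)
h = 0.05                                  # smoothing length (assumed)
pos = rng.random((5000, 3))               # particle positions in a unit box (assumed)

# Hash every particle into a cubic cell of edge length h
cells = defaultdict(list)
for i, key in enumerate(map(tuple, np.floor(pos / h).astype(int))):
    cells[key].append(i)

def neighbours(i):
    """Indices of particles within distance h of particle i, found via the cell list."""
    cx, cy, cz = np.floor(pos[i] / h).astype(int)
    cand = [j for dx in (-1, 0, 1) for dy in (-1, 0, 1) for dz in (-1, 0, 1)
            for j in cells.get((cx + dx, cy + dy, cz + dz), [])]
    cand = np.array(cand)
    d = np.linalg.norm(pos[cand] - pos[i], axis=1)
    return cand[(d < h) & (cand != i)]

print("neighbours of particle 0:", neighbours(0))
```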

  19. In Vivo Validation of Numerical Prediction for Turbulence Intensity in an Aortic Coarctation

    PubMed Central

    Arzani, Amirhossein; Dyverfeldt, Petter; Ebbers, Tino; Shadden, Shawn C.

    2013-01-01

    This paper compares numerical predictions of turbulence intensity with in vivo measurement. Magnetic resonance imaging (MRI) was carried out on a 60-year-old female with a restenosed aortic coarctation. Time-resolved three-directional phase-contrast (PC) MRI data was acquired to enable turbulence intensity estimation. A contrast-enhanced MR angiography (MRA) and a time-resolved 2D PCMRI measurement were also performed to acquire data needed to perform subsequent image-based computational fluid dynamics (CFD) modeling. A 3D model of the aortic coarctation and surrounding vasculature was constructed from the MRA data, and physiologic boundary conditions were modeled to match 2D PCMRI and pressure pulse measurements. Blood flow velocity data was subsequently obtained by numerical simulation. Turbulent kinetic energy (TKE) was computed from the resulting CFD data. Results indicate relative agreement (error ≈10%) between the in vivo measurements and the CFD predictions of TKE. The discrepancies in modeled vs. measured TKE values were within expectations due to modeling and measurement errors. PMID:22016327
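
    For context, the quantity being compared is the turbulent kinetic energy per unit mass, i.e. half the sum of the velocity-fluctuation variances about the (phase-averaged) mean velocity. The snippet below computes it from synthetic velocity samples; all numbers are illustrative, and in practice the fluctuations are estimated from the PC-MRI signal or from resolved CFD velocity fields.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for repeated velocity samples (u, v, w) at one voxel/cell over many cycles [m/s]
vel = np.array([1.2, 0.1, -0.3]) + 0.15 * rng.standard_normal((400, 3))

# Turbulent kinetic energy per unit mass: half the sum of fluctuation variances about the mean
tke = 0.5 * np.sum(vel.var(axis=0))
print(f"TKE = {tke:.4e} m^2/s^2 (multiply by density for J/m^3)")
```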

  20. Hybrid method to estimate two-layered superficial tissue optical properties from simulated data of diffuse reflectance spectroscopy.

    PubMed

    Hsieh, Hong-Po; Ko, Fan-Hua; Sung, Kung-Bin

    2018-04-20

    An iterative curve fitting method has been applied in both simulation [J. Biomed. Opt. 17, 107003 (2012), doi:10.1117/1.JBO.17.10.107003] and phantom [J. Biomed. Opt. 19, 077002 (2014), doi:10.1117/1.JBO.19.7.077002] studies to accurately extract optical properties and the top layer thickness of a two-layered superficial tissue model from diffuse reflectance spectroscopy (DRS) data. This paper describes a hybrid two-step parameter estimation procedure to address two main issues of the previous method: (1) high computational intensity and (2) convergence to local minima. The parameter estimation procedure contained a novel initial estimation step to obtain an initial guess, which was used by a subsequent iterative fitting step to optimize the parameter estimation. A lookup table was used in both steps to quickly obtain reflectance spectra and reduce computational intensity. On simulated DRS data, the proposed parameter estimation procedure achieved high estimation accuracy and a 95% reduction of computational time compared to previous studies. Furthermore, the proposed initial estimation step led to better convergence of the following fitting step. Strategies used in the proposed procedure could benefit both the modeling and experimental data processing of not only DRS but also related approaches such as near-infrared spectroscopy.
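
    A sketch of the two-step procedure with a toy one-parameter forward model (not the actual diffuse reflectance model, lookup table, or tissue parameters of the paper): a coarse lookup table of pre-computed spectra supplies a fast nearest-match initial guess, and an iterative least-squares fit then refines the estimate starting from that guess.

```python
import numpy as np
from scipy.optimize import least_squares

wavelengths = np.linspace(450.0, 650.0, 60)          # nm (assumed range)

def forward_model(mu_a):
    """Toy stand-in for the reflectance forward model: exponential attenuation with absorption mu_a."""
    return np.exp(-mu_a * (wavelengths / 500.0))

# Step 0: build a coarse lookup table of spectra over the parameter range (done once, offline)
mu_grid = np.linspace(0.1, 5.0, 50)
table = np.array([forward_model(m) for m in mu_grid])

# Synthetic "measurement" with noise (illustrative)
truth = 2.37
measured = forward_model(truth) + 0.005 * np.random.default_rng(4).standard_normal(wavelengths.size)

# Step 1: initial estimation - nearest table entry in a least-squares sense
idx = np.argmin(np.sum((table - measured) ** 2, axis=1))
mu0 = mu_grid[idx]

# Step 2: iterative curve fitting started from the table-based guess
fit = least_squares(lambda m: forward_model(m[0]) - measured, x0=[mu0], bounds=([0.01], [10.0]))
print(f"initial guess {mu0:.2f}, refined estimate {fit.x[0]:.3f}, truth {truth}")
```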

  1. 3D finite element modelling of sheet metal blanking process

    NASA Astrophysics Data System (ADS)

    Bohdal, Lukasz; Kukielka, Leon; Chodor, Jaroslaw; Kulakowska, Agnieszka; Patyk, Radoslaw; Kaldunski, Pawel

    2018-05-01

    The shearing process such as the blanking of sheet metals has often been used to prepare workpieces for subsequent forming operations. The use of FEM simulation for investigating and optimizing the blanking process is increasing. In the current literature, blanking FEM simulations have been largely limited to two-dimensional (2D) plane and axisymmetric problems because of the limited capability and large computational cost of three-dimensional (3D) analysis. However, significant progress in modelling which takes into account the influence of the real material (e.g. its microstructure) and the physical and technological conditions can be obtained by using 3D numerical analysis methods in this area. The objective of this paper is to present a 3D finite element analysis of the ductile fracture, strain distribution and stress in the blanking process under the assumption of geometrical and physical nonlinearities. The physical, mathematical and computer models of the process are elaborated. Dynamic effects, mechanical coupling, a constitutive damage law and contact friction are taken into account. The application is implemented in the ANSYS/LS-DYNA program. The effect of the main process parameter, the blanking clearance, on the deformation of 1018 steel and the quality of the blank's sheared edge is analyzed. The results of the computer simulations can be used to forecast the quality of the final parts and to optimize the process.

  2. Construction of pore network models for Berea and Fontainebleau sandstones using non-linear programing and optimization techniques

    NASA Astrophysics Data System (ADS)

    Sharqawy, Mostafa H.

    2016-12-01

    Pore network models (PNM) of Berea and Fontainebleau sandstones were constructed using nonlinear programming (NLP) and optimization methods. The constructed PNMs are considered a digital representation of the rock samples; they are based on matching the macroscopic properties of the porous media and are used to conduct fluid transport simulations, including single- and two-phase flow. The PNMs consisted of cubic networks of randomly distributed pore and throat sizes with various connectivity levels. The networks were optimized such that the upper and lower bounds of the pore sizes are determined using the capillary tube bundle model and the Nelder-Mead method instead of guessing them, which reduces the optimization computational time significantly. An open-source PNM framework was employed to conduct transport and percolation simulations such as invasion percolation and Darcian flow. The PNM model was subsequently used to compute the macroscopic properties: porosity, absolute permeability, specific surface area, breakthrough capillary pressure, and primary drainage curve. The pore networks were optimized to allow the simulation results for the macroscopic properties to be in excellent agreement with the experimental measurements. This study demonstrates that nonlinear programming and optimization methods provide a promising approach for pore network modeling when computed tomography imaging may not be readily available.
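
    To illustrate how a derivative-free optimizer such as Nelder-Mead can be combined with a capillary tube bundle model, the sketch below adjusts the parameters of a lognormal tube-radius distribution so that the bundle's porosity and permeability match assumed target values. The bundle relations, tube density, and targets are simplifications for illustration, not the paper's network construction procedure.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
z = rng.standard_normal(20000)                 # fixed variates so the objective is deterministic

target_phi, target_k = 0.20, 3.0e-12           # target porosity [-] and permeability [m^2] (assumed)
n_tubes_per_area = 1.0e9                       # tube density [1/m^2] (assumed)

def bundle_properties(params):
    """Porosity and permeability of a parallel capillary-tube bundle with lognormal radii."""
    mu_log, sigma_log = params
    r = np.exp(mu_log + abs(sigma_log) * z)    # tube radii [m]
    phi = n_tubes_per_area * np.pi * np.mean(r**2)
    k = phi * np.mean(r**4) / (8.0 * np.mean(r**2))
    return phi, k

def objective(params):
    phi, k = bundle_properties(params)
    return ((phi - target_phi) / target_phi) ** 2 + ((k - target_k) / target_k) ** 2

res = minimize(objective, x0=[np.log(5.0e-6), 0.3], method="Nelder-Mead",
               options={"xatol": 1e-4, "fatol": 1e-10})
phi, k = bundle_properties(res.x)
print(f"matched porosity {phi:.3f}, permeability {k:.2e} m^2")
```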

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dang, Liem X.; Vo, Quynh N.; Nilsson, Mikael

    We report one of the first simulations using a classical rate theory approach to predict the mechanism of the exchange process between water and aqueous uranyl ions. Using our water and ion-water polarizable force fields and molecular dynamics techniques, we computed the potentials of mean force for the uranyl ion-water pair as a function of pressure at ambient temperature. Subsequently, these simulated potentials of mean force were used to calculate rate constants using transition rate theory; the time-dependent transmission coefficients were also examined using the reactive flux method and Grote-Hynes treatments of the dynamic response of the solvent. The computed activation volumes using transition rate theory and the corrected rate constants are positive, thus the mechanism of this particular water exchange is a dissociative process. We discuss our rate theory results and compare them with previous studies in which non-polarizable force fields were used. This work was supported by the US Department of Energy, Office of Science, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences, and Biosciences. The calculations were carried out using computer resources provided by the Office of Basic Energy Sciences.

  4. Improved first-order uncertainty method for water-quality modeling

    USGS Publications Warehouse

    Melching, C.S.; Anmangandla, S.

    1992-01-01

    Uncertainties are unavoidable in water-quality modeling and subsequent management decisions. Monte Carlo simulation and first-order uncertainty analysis (involving linearization at central values of the uncertain variables) have frequently been used to estimate probability distributions for water-quality model output because of their simplicity. Each method has its drawbacks: for Monte Carlo simulation it is mainly the computational time; for first-order analysis it is mainly questions of accuracy and representativeness, especially for nonlinear systems and extreme conditions. An improved (advanced) first-order method is presented, in which the linearization point varies to match the output level whose exceedance probability is sought. The advanced first-order method is tested on the Streeter-Phelps equation to estimate the probability distribution of the critical dissolved-oxygen deficit and critical dissolved oxygen using two hypothetical examples from the literature. The advanced first-order method provides a close approximation of the exceedance probability for the Streeter-Phelps model output estimated by Monte Carlo simulation, using less computer time - by two orders of magnitude - regardless of the probability distributions assumed for the uncertain model parameters.
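
    For context, the output being propagated is the critical dissolved-oxygen deficit of the Streeter-Phelps model. The sketch below estimates its exceedance probability by plain Monte Carlo under assumed parameter distributions; the advanced first-order method of the paper replaces this sampling with linearizations taken at the sought output level, at a small fraction of the cost. All distributions and the threshold are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100_000

# Assumed (illustrative) parameter distributions: deoxygenation rate kd, reaeration rate ka [1/day],
# initial BOD L0 and initial deficit D0 [mg/L]
kd = rng.normal(0.35, 0.05, n)
ka = rng.normal(0.70, 0.10, n)
L0 = rng.normal(20.0, 2.0, n)
D0 = rng.normal(1.0, 0.2, n)

# Streeter-Phelps critical time and critical dissolved-oxygen deficit
with np.errstate(invalid="ignore", divide="ignore"):
    tc = np.log((ka / kd) * (1.0 - D0 * (ka - kd) / (kd * L0))) / (ka - kd)
    Dc = (kd / ka) * L0 * np.exp(-kd * tc)

threshold = 6.0  # mg/L, assumed water-quality limit on the deficit
valid = np.isfinite(tc) & (tc > 0)
print("P(critical deficit > %.1f mg/L) = %.3f" % (threshold, np.mean(Dc[valid] > threshold)))
```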

  5. Theory, Solution Methods, and Implementation of the HERMES Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reaugh, John E.; White, Bradley W.; Curtis, John P.

    The HERMES (high explosive response to mechanical stimulus) model was developed over the past decade to enable computer simulation of the mechanical and subsequent energetic response of explosives and propellants to mechanical insults such as impacts, perforations, drops, and falls. The model is embedded in computer simulation programs that solve the non-linear, large deformation equations of compressible solid and fluid flow in space and time. It is implemented as a user-defined model, which returns the updated stress tensor and composition that result from the simulation supplied strain tensor change. Although it is multi-phase, in that gas and solid species are present, it is single-velocity, in that the gas does not flow through the porous solid. More than 70 time-dependent variables are made available for additional analyses and plotting. The model encompasses a broad range of possible responses: mechanical damage with no energetic response, and a continuous spectrum of degrees of violence including delayed and prompt detonation. This paper describes the basic workings of the model.

  6. A Computational and Experimental Investigation of Shear Coaxial Jet Atomization

    NASA Technical Reports Server (NTRS)

    Ibrahim, Essam A.; Kenny, R. Jeremy; Walker, Nathan B.

    2006-01-01

    The instability and subsequent atomization of a viscous liquid jet emanating into a high-pressure gaseous surrounding are studied both computationally and experimentally. Liquid water issued into nitrogen gas at elevated pressures is used to simulate the flow conditions in a coaxial shear injector element relevant to liquid propellant rocket engines. The theoretical analysis is based on a simplified mathematical formulation of the continuity and momentum equations in their conservative form. Numerical solutions of the governing equations subject to appropriate initial and boundary conditions are obtained via a robust finite difference scheme. The computations yield the real-time evolution and subsequent breakup characteristics of the liquid jet. The experimental investigation utilizes a digital imaging technique to measure the resultant drop sizes. Data were collected for liquid Reynolds numbers between 2,500 and 25,000, an aerodynamic Weber number range of 50-500, and ambient gas pressures from 150 to 1200 psia. Comparison of the model predictions and experimental data for drop sizes at gas pressures of 150 and 300 psia reveals satisfactory agreement, particularly for lower values of the investigated Weber number. The present model is intended as a component of a practical tool to facilitate design and optimization of coaxial shear atomizers.

  7. Energy Navigation: Simulation Evaluation and Benefit Analysis

    NASA Technical Reports Server (NTRS)

    Williams, David H.; Oseguera-Lohr, Rosa M.; Lewis, Elliot T.

    2011-01-01

    This paper presents results from two simulation studies investigating the use of advanced flight-deck-based energy navigation (ENAV) and conventional transport-category vertical navigation (VNAV) for conducting a descent through a busy terminal area, using Continuous Descent Arrival (CDA) procedures. This research was part of the Low Noise Flight Procedures (LNFP) element within the Quiet Aircraft Technology (QAT) Project, and the subsequent Airspace Super Density Operations (ASDO) research focus area of the Airspace Project. A piloted simulation study addressed development of flight guidance, and supporting pilot and Air Traffic Control (ATC) procedures for high density terminal operations. The procedures and charts were designed to be easy to understand, and to make it easy for the crew to make changes via the Flight Management Computer Control-Display Unit (FMC-CDU) to accommodate changes from ATC.

  8. Momentum Distribution as a Fingerprint of Quantum Delocalization in Enzymatic Reactions: Open-Chain Path-Integral Simulations of Model Systems and the Hydride Transfer in Dihydrofolate Reductase.

    PubMed

    Engel, Hamutal; Doron, Dvir; Kohen, Amnon; Major, Dan Thomas

    2012-04-10

    The inclusion of nuclear quantum effects such as zero-point energy and tunneling is of great importance in studying condensed phase chemical reactions involving the transfer of protons, hydrogen atoms, and hydride ions. In the current work, we derive an efficient quantum simulation approach for the computation of the momentum distribution in condensed phase chemical reactions. The method is based on a quantum-classical approach wherein quantum and classical simulations are performed separately. The classical simulations use standard sampling techniques, whereas the quantum simulations employ an open polymer chain path integral formulation which is computed using an efficient Monte Carlo staging algorithm. The approach is validated by applying it to a one-dimensional harmonic oscillator and symmetric double-well potential. Subsequently, the method is applied to the dihydrofolate reductase (DHFR) catalyzed reduction of 7,8-dihydrofolate by nicotinamide adenine dinucleotide phosphate hydride (NADPH) to yield S-5,6,7,8-tetrahydrofolate and NADP(+). The key chemical step in the catalytic cycle of DHFR involves a stereospecific hydride transfer. In order to estimate the amount of quantum delocalization, we compute the position and momentum distributions for the transferring hydride ion in the reactant state (RS) and transition state (TS) using a recently developed hybrid semiempirical quantum mechanics-molecular mechanics potential energy surface. Additionally, we examine the effect of compression of the donor-acceptor distance (DAD) in the TS on the momentum distribution. The present results suggest differential quantum delocalization in the RS and TS, as well as reduced tunneling upon DAD compression.

  9. Permeability Sensitivity Functions and Rapid Simulation of Hydraulic-Testing Measurements Using Perturbation Theory

    NASA Astrophysics Data System (ADS)

    Escobar Gómez, J. D.; Torres-Verdín, C.

    2018-03-01

    Single-well pressure-diffusion simulators enable improved quantitative understanding of hydraulic-testing measurements in the presence of arbitrary spatial variations of rock properties. Simulators of this type implement robust numerical algorithms which are often computationally expensive, thereby making the solution of the forward modeling problem onerous and inefficient. We introduce a time-domain perturbation theory for anisotropic permeable media to efficiently and accurately approximate the transient pressure response of spatially complex aquifers. Although theoretically valid for any spatially dependent rock/fluid property, our single-phase flow study emphasizes arbitrary spatial variations of permeability and anisotropy, which constitute key objectives of hydraulic-testing operations. Contrary to time-honored techniques, the perturbation method invokes pressure-flow deconvolution to compute the background medium's permeability sensitivity function (PSF) with a single numerical simulation run. Subsequently, the first-order term of the perturbed solution is obtained by solving an integral equation that weighs the spatial variations of permeability with the spatial-dependent and time-dependent PSF. Finally, discrete convolution transforms the constant-flow approximation to arbitrary multirate conditions. Multidimensional numerical simulation studies for a wide range of single-well field conditions indicate that perturbed solutions can be computed in less than a few CPU seconds with relative errors in pressure of <5%, corresponding to perturbations in background permeability of up to two orders of magnitude. Our work confirms that the proposed joint perturbation-convolution (JPC) method is an efficient alternative to analytical and numerical solutions for accurate modeling of pressure-diffusion phenomena induced by Neumann or Dirichlet boundary conditions.

  10. Computer simulation of solder joint failure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burchett, S.N.; Frear, D.R.; Rashid, M.M.

    The thermomechanical fatigue failure of solder joints is increasingly becoming an important reliability issue for electronic packages. The purpose of this Laboratory Directed Research and Development (LDRD) project was to develop computational tools for simulating the behavior of solder joints under strain and temperature cycling, taking into account the microstructural heterogeneities that exist in as-solidified near eutectic Sn-Pb joints, as well as subsequent microstructural evolution. The authors present two computational constitutive models, a two-phase model and a single-phase model, that were developed to predict the behavior of near eutectic Sn-Pb solder joints under fatigue conditions. Unique metallurgical tests provide the fundamental input for the constitutive relations. The two-phase model mathematically predicts the heterogeneous coarsening behavior of near eutectic Sn-Pb solder. The finite element simulations with this model agree qualitatively with experimental thermomechanical fatigue tests. The simulations show that the presence of an initial heterogeneity in the solder microstructure could significantly degrade the fatigue lifetime. The single-phase model was developed to predict solder joint behavior using materials data for constitutive relation constants that could be determined through straightforward metallurgical experiments. Special thermomechanical fatigue tests were developed to give fundamental materials input to the models, and an in situ SEM thermomechanical fatigue test system was developed to characterize microstructural evolution and the mechanical behavior of solder joints during the test. A shear/torsion test sample was developed to impose strain in two different orientations. Materials constants were derived from these tests. The simulation results from the two-phase model showed good fit to the experimental test results.

  11. A Computational and Experimental Study of Slit Resonators

    NASA Technical Reports Server (NTRS)

    Tam, C. K. W.; Ju, H.; Jones, M. G.; Watson, W. R.; Parrott, T. L.

    2003-01-01

    Computational and experimental studies are carried out to offer validation of the results obtained from direct numerical simulation (DNS) of the flow and acoustic fields of slit resonators. The test cases include slits with 90-degree corners and slits with 45-degree bevel angle housed inside an acoustic impedance tube. Three slit widths are used. Six frequencies from 0.5 to 3.0 kHz are chosen. Good agreement is found between computed and measured reflection factors. In addition, incident sound waves having white noise spectrum and a prescribed pseudo-random noise spectrum are used in subsequent series of tests. The computed broadband results are again found to agree well with experimental data. It is believed the present results provide strong support that DNS can eventually be a useful and accurate prediction tool for liner aeroacoustics. The usage of DNS as a design tool is discussed and illustrated by a simple example.

  12. Towards a nonperturbative calculation of weak Hamiltonian Wilson coefficients

    DOE PAGES

    Bruno, Mattia; Lehner, Christoph; Soni, Amarjit

    2018-04-20

    Here, we propose a method to compute the Wilson coefficients of the weak effective Hamiltonian to all orders in the strong coupling constant using Lattice QCD simulations. We perform our calculations adopting an unphysically light weak boson mass of around 2 GeV. We demonstrate that systematic errors for the Wilson coefficients C1 and C2, related to the current-current four-quark operators, can be controlled and present a path towards precise determinations in subsequent works.

  13. Towards a nonperturbative calculation of weak Hamiltonian Wilson coefficients

    NASA Astrophysics Data System (ADS)

    Bruno, Mattia; Lehner, Christoph; Soni, Amarjit; Rbc; Ukqcd Collaborations

    2018-04-01

    We propose a method to compute the Wilson coefficients of the weak effective Hamiltonian to all orders in the strong coupling constant using Lattice QCD simulations. We perform our calculations adopting an unphysically light weak boson mass of around 2 GeV. We demonstrate that systematic errors for the Wilson coefficients C1 and C2, related to the current-current four-quark operators, can be controlled and present a path towards precise determinations in subsequent works.

  15. Computation of Cavitating Flow in a Francis Hydroturbine

    NASA Astrophysics Data System (ADS)

    Leonard, Daniel; Lindau, Jay

    2013-11-01

    In an effort to improve cavitation characteristics at off-design conditions, a steady, periodic, multiphase, RANS CFD study of an actual Francis hydroturbine was conducted and compared to experimental results. It is well-known that operating hydroturbines at off-design conditions usually results in the formation of large-scale vaporous cavities. These cavities, and their subsequent collapse, reduce efficiency and cause damage and wear to surfaces. The conventional hydro community has expressed interest in increasing their turbine's operating ranges, improving their efficiencies, and reducing damage and wear to critical turbine components. In this work, mixing planes were used to couple rotating and stationary stages of the turbine which have non-multiple periodicity, and provide a coupled solution for the stay vanes, wicket gates, runner blades, and draft tube. The mixture approach is used to simulate the multiphase flow dynamics, and cavitation models were employed to govern the mass transfer between liquid and gas phases. The solution is compared with experimental results across a range of cavitation numbers which display all the major cavitation features in the machine. Unsteady computations are necessary to capture inherently unsteady cavitation phenomena, such as the precessing vortex rope, and the shedding of bubbles from the wicket gates and their subsequent impingement upon the leading edge of the runner blades. To display these features, preliminary unsteady simulations of the full machine are also presented.

  16. Simulation of non-linear acoustic field and thermal pattern of phased-array high-intensity focused ultrasound (HIFU).

    PubMed

    Wang, Mingjun; Zhou, Yufeng

    2016-08-01

    HIFU has become an effective and non-invasive modality for solid tumour/cancer ablation. Simulation of the non-linear acoustic wave propagation from a phased-array transducer in multiple layered media using different focusing strategies, and of the consequent lesion formation, is essential in HIFU planning in order to enhance the efficacy and efficiency of treatment. An angular spectrum approach with marching fractional steps was applied to the wave propagation from the phased-array HIFU transducer, and diffraction, attenuation, and non-linearity effects were accounted for by a second-order operator splitting scheme. The simulated distributions of the first three harmonics along and transverse to the transducer axis were compared to hydrophone measurements. The bioheat equation was used to simulate the subsequent temperature elevation from the deposited acoustic energy, and lesion formation was determined by the thermal dose. Better agreement was found between the measured harmonics distribution and simulation using the proposed algorithm than with the Khokhlov-Zabolotskaya-Kuznetsov equation. Variable focusing of the phased-array transducer (geometric focusing, transverse shifting and the generation of multiple foci) can be simulated successfully. Shifting and splitting of the focus were found to result in significantly less temperature elevation at the focus and, subsequently, a smaller lesion size, but a larger grating lobe in the pre-focal region. The proposed algorithm can simulate non-linear wave propagation from a source with arbitrary shape and excitation distribution through multiple tissue layers with high computational accuracy. The performance of phased-array HIFU can be optimised in treatment planning.
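
    The core of the angular spectrum approach is a plane-to-plane transfer of the complex pressure field via the FFT. The sketch below shows a single linear, lossless propagation step between parallel planes (the attenuation and non-linearity sub-steps of the operator-splitting scheme are omitted); the source aperture, frequency, grid, and step size are assumed.

```python
import numpy as np

# Assumed parameters: 1 MHz source in water, 0.15 mm grid, 256x256 plane, 5 mm propagation step
f0, c0 = 1.0e6, 1500.0
k = 2.0 * np.pi * f0 / c0
dx, n, dz = 1.5e-4, 256, 5.0e-3

# Complex pressure on the source plane: a circular aperture with a focusing phase (8 mm radius, 40 mm focus, assumed)
x = (np.arange(n) - n / 2) * dx
X, Y = np.meshgrid(x, x)
aperture = (X**2 + Y**2) <= (8.0e-3) ** 2
focus = 40.0e-3
p0 = aperture * np.exp(-1j * k * (np.sqrt(X**2 + Y**2 + focus**2) - focus))

# One angular-spectrum step: FFT, multiply by the plane-wave propagator, inverse FFT
kx = 2.0 * np.pi * np.fft.fftfreq(n, dx)
KX, KY = np.meshgrid(kx, kx)
kz = np.sqrt((k**2 - KX**2 - KY**2).astype(complex))       # evanescent components decay via imaginary kz
p_dz = np.fft.ifft2(np.fft.fft2(p0) * np.exp(1j * kz * dz))
print("peak |p| at z = 5 mm (source amplitude = 1):", float(np.abs(p_dz).max()))
```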

  17. Introducing nuclei scatterer patterns into histology based intravascular ultrasound simulation framework

    NASA Astrophysics Data System (ADS)

    Kraft, Silvan; Karamalis, Athanasios; Sheet, Debdoot; Drecoll, Enken; Rummeny, Ernst J.; Navab, Nassir; Noël, Peter B.; Katouzian, Amin

    2013-03-01

    Medical ultrasonic grayscale images are formed from acoustic waves following their interactions with distributed scatterers within tissue media. For accurate simulation of acoustic wave propagation, a reliable model describing the unknown parameters associated with tissue scatterers, such as their distribution, size and acoustic properties, is essential. In this work, we introduce a novel approach that defines ultrasonic scatterers by incorporating the distribution of cellular nuclei patterns in biological tissues to simulate the ultrasonic response of atherosclerotic tissues in intravascular ultrasound (IVUS). For this purpose, a virtual phantom is generated through manual labeling of different tissue types (fibrotic, lipidic and calcified) on histology sections. The acoustic properties of each tissue type are defined by assuming that the ultrasound signal is primarily backscattered by the nuclei of the organic cells within the intima and media of the vessel wall. The resulting virtual phantom is subsequently used to simulate ultrasonic wave propagation through the tissue medium, computed using a finite-difference scheme. Subsequently, B-mode images for a specific histological section are reconstructed from the simulated radiofrequency (RF) data and compared with the original IVUS of the same tissue section. Real IVUS RF signals for these histological sections were obtained using a single-element mechanically rotating 40 MHz transducer. Evaluation is performed by trained reviewers subjectively assessing both simulated and real B-mode IVUS images. Our simulation platform provides high image quality with a very promising correlation to the original IVUS images. This will facilitate a better understanding of the progression of this chronic disease at the micro level and its integration into cardiovascular disease-specific models.

  18. CET exSim: mineral exploration experience via simulation

    NASA Astrophysics Data System (ADS)

    Wong, Jason C.; Holden, Eun-Jung; Kovesi, Peter; McCuaig, T. Campbell; Hronsky, Jon

    2013-08-01

    Undercover mineral exploration is a challenging task as it requires understanding of subsurface geology by relying heavily on remotely sensed (i.e. geophysical) data. Cost-effective exploration is essential in order to increase the chance of success using finite budgets. This requires effective decision-making in both the process of selecting the optimum data collection methods and in the process of achieving accuracy during subsequent interpretation. Traditionally, developing the skills, behaviour and practices of exploration decision-making requires many years of experience through working on exploration projects under various geological settings, commodities and levels of available resources. This implies long periods of sub-optimal exploration decision-making, before the necessary experience has been successfully obtained. To address this critical industry issue, our ongoing research focuses on the development of the unique and novel e-learning environment, exSim, which simulates exploration scenarios where users can test their strategies and learn the consequences of their choices. This simulator provides an engaging platform for self-learning and experimentation in exploration decision strategies, providing a means to build experience more effectively. The exSim environment also provides a unique platform on which numerous scenarios and situations (e.g. deposit styles) can be simulated, potentially allowing the user to become virtually familiarised with a broader scope of exploration practices. Harnessing the power of computer simulation, visualisation and an intuitive graphical user interface, the simulator provides a way to assess the user's exploration decisions and subsequent interpretations. In this paper, we present the prototype functionalities in exSim including: simulation of geophysical surveys, follow-up drill testing and interpretation assistive tools.

  19. ESTL tracking and data relay satellite /TDRSS/ simulation system

    NASA Technical Reports Server (NTRS)

    Kapell, M. H.

    1980-01-01

    The Tracking Data Relay Satellite System (TDRSS) provides single access forward and return communication links with the Shuttle/Orbiter via S-band and Ku-band frequency bands. The ESTL (Electronic Systems Test Laboratory) at Lyndon B. Johnson Space Center (JSC) utilizes a TDRS satellite simulator and critical TDRS ground hardware for test operations. To accomplish Orbiter/TDRSS relay communications performance testing in the ESTL, a satellite simulator was developed which met the specification requirements of the TDRSS channels utilized by the Orbiter. Actual TDRSS ground hardware unique to the Orbiter communication interfaces was procured from individual vendors, integrated in the ESTL, and interfaced via a data bus for control and status monitoring. This paper discusses the satellite simulation hardware in terms of early development and subsequent modifications. The TDRS ground hardware configuration and the complex computer interface requirements are reviewed. Also, special test hardware such as a radio frequency interference test generator is discussed.

  20. Optimization of GATE and PHITS Monte Carlo code parameters for spot scanning proton beam based on simulation with FLUKA general-purpose code

    NASA Astrophysics Data System (ADS)

    Kurosu, Keita; Das, Indra J.; Moskvin, Vadim P.

    2016-01-01

    Spot scanning, owing to its superior dose-shaping capability, provides unsurpassed dose conformity, in particular for complex targets. However, the robustness of the delivered dose distribution and prescription has to be verified. Monte Carlo (MC) simulation has the potential to generate significant advantages for high-precision particle therapy, especially for media containing inhomogeneities. However, the computational parameters of the MC simulation codes GATE, PHITS and FLUKA, previously established for uniform scanning proton beams, need to be evaluated for spot scanning. This means that the relationship between the input parameters and the calculation results should be carefully scrutinized. The objective of this study was, therefore, to determine the optimal parameters for the spot scanning proton beam for both the GATE and PHITS codes by using data from a FLUKA simulation as a reference. The proton beam scanning system of the Indiana University Health Proton Therapy Center was modeled in FLUKA, and the geometry was subsequently and identically transferred to GATE and PHITS. Although the beam transport is managed by the spot scanning system, the spot location is always set at the center of a water phantom of 600 × 600 × 300 mm3, which is placed after the treatment nozzle. The percentage depth dose (PDD) is computed along the central axis using 0.5 × 0.5 × 0.5 mm3 voxels in the water phantom. The PDDs and the proton ranges obtained with several computational parameters are then compared to those of FLUKA, and the optimal parameters are determined from the accuracy of the proton range, the suppressed dose deviation, and the minimization of computational time. Our results indicate that the optimized parameters differ from those for uniform scanning, suggesting that a single gold standard for setting computational parameters for any proton therapy application cannot be determined, since the impact of the parameter settings depends on the proton irradiation technique. We therefore conclude that computational parameters must be customized with reference to the optimized parameters of the corresponding irradiation technique in order to achieve artifact-free MC simulation for use in computational experiments and clinical treatments.
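    A sketch of one possible comparison metric — a distal R80 proton range and a simple dose-deviation score against the reference PDD — is given below. The analytic Bragg-like curves and the R80 definition are illustrative assumptions; the paper's actual scoring and optimization criteria may differ.

```python
import numpy as np

def r80_range(depth_mm, pdd):
    """Depth at which the dose on the distal fall-off drops to 80% of the peak dose."""
    i_peak = int(np.argmax(pdd))
    distal_d, distal_pdd = depth_mm[i_peak:], pdd[i_peak:]
    # interpolate the (monotonically decreasing) distal fall-off at the 80% level
    return float(np.interp(0.8 * pdd.max(), distal_pdd[::-1], distal_d[::-1]))

def compare_to_reference(depth_mm, pdd_test, pdd_ref):
    """Range difference [mm] and mean absolute dose deviation [%] against the reference PDD."""
    dr = r80_range(depth_mm, pdd_test) - r80_range(depth_mm, pdd_ref)
    dev = 100.0 * np.mean(np.abs(pdd_test - pdd_ref)) / pdd_ref.max()
    return dr, dev

# Illustrative analytic Bragg-like curves standing in for the scored Monte Carlo PDDs
depth = np.linspace(0.0, 200.0, 401)                               # 0.5 mm scoring steps
plateau = lambda d0: 0.3 / (1.0 + np.exp((depth - d0) / 2.0))
pdd_ref = np.exp(-((depth - 150.0) / 6.0) ** 2) + plateau(145.0)   # "FLUKA" reference
pdd_test = np.exp(-((depth - 151.0) / 6.5) ** 2) + plateau(146.0)  # candidate parameter set

dr80, dev = compare_to_reference(depth, pdd_test, pdd_ref)
print(f"R80 range difference: {dr80:+.2f} mm, mean dose deviation: {dev:.2f}%")
```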

  1. Confirmation of a realistic reactor model for BNCT dosimetry at the TRIGA Mainz

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ziegner, Markus, E-mail: Markus.Ziegner.fl@ait.ac.at; Schmitz, Tobias; Hampel, Gabriele

    2014-11-01

    Purpose: In order to build up a reliable dose monitoring system for boron neutron capture therapy (BNCT) applications at the TRIGA reactor in Mainz, a computer model for the entire reactor was established, simulating the radiation field by means of the Monte Carlo method. The impact of different source definition techniques was compared and the model was validated by experimental fluence and dose determinations. Methods: The depletion calculation code ORIGEN2 was used to compute the burn-up and relevant material composition of each burned fuel element from the day of first reactor operation to its current core. The material composition of the current core was used in a MCNP5 model of the initial core developed earlier. To perform calculations for the region outside the reactor core, the model was expanded to include the thermal column and compared with the previously established ATTILA model. Subsequently, the computational model is simplified in order to reduce the calculation time. Both simulation models are validated by experiments with different setups using alanine dosimetry and gold activation measurements with two different types of phantoms. Results: The MCNP5 simulated neutron spectrum and source strength are found to be in good agreement with the previous ATTILA model whereas the photon production is much lower. Both MCNP5 simulation models predict all experimental dose values with an accuracy of about 5%. The simulations reveal that a Teflon environment favorably reduces the gamma dose component as compared to a polymethyl methacrylate phantom. Conclusions: A computer model for BNCT dosimetry was established, allowing the prediction of dosimetric quantities without further calibration and within a reasonable computation time for clinical applications. The good agreement between the MCNP5 simulations and experiments demonstrates that the ATTILA model overestimates the gamma dose contribution. The detailed model can be used for the planning of structural modifications in the thermal column irradiation channel or the use of different irradiation sites than the thermal column, e.g., the beam tubes.

  2. Coercivity of domain wall motion in thin films of amorphous rare earth-transition metal alloys

    NASA Technical Reports Server (NTRS)

    Mansuripur, M.; Giles, R. C.; Patterson, G.

    1991-01-01

    Computer simulations of a two dimensional lattice of magnetic dipoles are performed on the Connection Machine. The lattice is a discrete model for thin films of amorphous rare-earth transition metal alloys, which have application as the storage media in erasable optical data storage systems. In these simulations, the dipoles follow the dynamic Landau-Lifshitz-Gilbert equation under the influence of an effective field arising from local anisotropy, near-neighbor exchange, classical dipole-dipole interactions, and an externally applied field. Various sources of coercivity, such as defects and/or inhomogeneities in the lattice, are introduced and the subsequent motion of domain walls in response to external fields is investigated.
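    A minimal sketch of the dipole-dynamics step — integrating the Landau-Lifshitz-Gilbert equation for a small 2D lattice with exchange, uniaxial anisotropy, and an applied field — is given below. Long-range dipole-dipole interactions and realistic defect models are omitted, and all constants are illustrative, so this is a structural sketch of the update loop rather than the Connection Machine implementation.

```python
import numpy as np

gamma, alpha = 1.76e11, 0.1                    # gyromagnetic ratio [1/(T s)], Gilbert damping
N, dt, steps = 16, 1e-14, 2000                 # lattice size, time step [s], iterations
J_ex, K_u = 0.05, 0.02                         # exchange and uniaxial anisotropy fields [T]
H_ext = np.array([0.0, 0.0, -0.1])             # externally applied field [T]

rng = np.random.default_rng(1)
m = np.zeros((N, N, 3))
m[..., 2] = 1.0                                # lattice initially magnetised along +z
m += 0.05 * rng.standard_normal(m.shape)       # random perturbation (a crude stand-in for inhomogeneity)
m /= np.linalg.norm(m, axis=-1, keepdims=True)

def effective_field(m):
    """Nearest-neighbour exchange (periodic boundaries), uniaxial anisotropy along z, Zeeman term.
    Long-range dipole-dipole interactions are omitted in this sketch."""
    nbr = np.roll(m, 1, 0) + np.roll(m, -1, 0) + np.roll(m, 1, 1) + np.roll(m, -1, 1)
    h_anis = np.zeros_like(m)
    h_anis[..., 2] = 2.0 * K_u * m[..., 2]
    return J_ex * nbr + h_anis + H_ext

for _ in range(steps):
    h = effective_field(m)
    mxh = np.cross(m, h)
    # Landau-Lifshitz-Gilbert dynamics: dm/dt = -gamma/(1+alpha^2) * (m x H + alpha m x (m x H))
    m += dt * (-gamma / (1.0 + alpha**2)) * (mxh + alpha * np.cross(m, mxh))
    m /= np.linalg.norm(m, axis=-1, keepdims=True)   # keep unit dipoles

print(f"mean z-magnetisation after {steps} steps: {m[..., 2].mean():.4f}")
```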

  3. Probabilistic simulation of the human factor in structural reliability

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.

    1993-01-01

    A formal approach is described in an attempt to computationally simulate the probable ranges of uncertainties of the human factor in structural probabilistic assessments. A multi-factor interaction equation (MFIE) model has been adopted for this purpose. Human factors such as marital status, professional status, home life, job satisfaction, work load and health are considered to demonstrate the concept. Parametric studies in conjunction with judgment are used to select reasonable values for the participating factors (primitive variables). Probabilistic sensitivity studies are subsequently performed to assess the suitability of the MFIE and the validity of the whole approach. Results obtained show that the uncertainties range from five to thirty percent for the most optimistic case, assuming 100 percent for no error.
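    A minimal sketch of the multi-factor interaction equation in a commonly cited product form is shown below; the factor scores, limit values, and exponents are invented placeholders, not the values selected by parametric study in these papers.

```python
import numpy as np

def mfie_ratio(current, reference, limit, exponents):
    """Multi-factor interaction equation in a commonly cited product form:
       P/P0 = prod_i [(A_lim,i - A_i) / (A_lim,i - A_ref,i)]^a_i
    where the A_i are primitive variables (here, human-factor scores)."""
    current, reference, limit, exponents = map(np.asarray, (current, reference, limit, exponents))
    terms = (limit - current) / (limit - reference)
    return float(np.prod(terms ** exponents))

# Hypothetical 0-10 scores for the six factors: marital status, professional status,
# home life, job satisfaction, work load, health (0 = no adverse effect, 10 = limit)
reference = [0, 0, 0, 0, 0, 0]
limit = [10, 10, 10, 10, 10, 10]
exponents = [0.1, 0.2, 0.1, 0.2, 0.3, 0.3]          # invented sensitivities, not the papers' values

cases = {"optimistic": [1, 1, 1, 1, 2, 1], "pessimistic": [5, 4, 5, 4, 6, 5]}
for label, scores in cases.items():
    p = mfie_ratio(scores, reference, limit, exponents)
    print(f"{label:11s}: performance ratio {p:.2f} -> human-factor uncertainty {100 * (1 - p):.0f}%")
```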

  4. Probabilistic simulation of the human factor in structural reliability

    NASA Astrophysics Data System (ADS)

    Chamis, Christos C.; Singhal, Surendra N.

    1994-09-01

    The formal approach described herein computationally simulates the probable ranges of uncertainties for the human factor in probabilistic assessments of structural reliability. Human factors such as marital status, professional status, home life, job satisfaction, work load, and health are studied by using a multifactor interaction equation (MFIE) model to demonstrate the approach. Parametric studies in conjunction with judgment are used to select reasonable values for the participating factors (primitive variables). Subsequently performed probabilistic sensitivity studies assess the suitability of the MFIE as well as the validity of the whole approach. Results show that uncertainties range from 5 to 30 percent for the most optimistic case, assuming 100 percent for no error (perfect performance).

  5. Probabilistic Simulation of the Human Factor in Structural Reliability

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Singhal, Surendra N.

    1994-01-01

    The formal approach described herein computationally simulates the probable ranges of uncertainties for the human factor in probabilistic assessments of structural reliability. Human factors such as marital status, professional status, home life, job satisfaction, work load, and health are studied by using a multifactor interaction equation (MFIE) model to demonstrate the approach. Parametric studies in conjunction with judgment are used to select reasonable values for the participating factors (primitive variables). Subsequently performed probabilistic sensitivity studies assess the suitability of the MFIE as well as the validity of the whole approach. Results show that uncertainties range from 5 to 30 percent for the most optimistic case, assuming 100 percent for no error (perfect performance).

  6. Improvement in precipitation-runoff model simulations by recalibration with basin-specific data, and subsequent model applications, Onondaga Lake Basin, Onondaga County, New York

    USGS Publications Warehouse

    Coon, William F.

    2011-01-01

    Simulation of streamflows in small subbasins was improved by adjusting model parameter values to match base flows, storm peaks, and storm recessions more precisely than had been done with the original model. Simulated recessional and low flows were either increased or decreased as appropriate for a given stream, and simulated peak flows generally were lowered in the revised model. The use of suspended-sediment concentrations rather than concentrations of the surrogate constituent, total suspended solids, resulted in increases in the simulated low-flow sediment concentrations and, in most cases, decreases in the simulated peak-flow sediment concentrations. Simulated orthophosphate concentrations in base flows generally increased but decreased for peak flows in selected headwater subbasins in the revised model. Compared with the original model, phosphorus concentrations simulated by the revised model were comparable in forested subbasins, generally decreased in developed and wetland-dominated subbasins, and increased in agricultural subbasins. A final revision to the model was made by the addition of the simulation of chloride (salt) concentrations in the Onondaga Creek Basin to help water-resource managers better understand the relative contributions of salt from multiple sources in this particular tributary. The calibrated revised model was used to (1) compute loading rates for the various land types that were simulated in the model, (2) conduct a watershed-management analysis that estimated the portion of the total load that was likely to be transported to Onondaga Lake from each of the modeled subbasins, (3) compute and assess chloride loads to Onondaga Lake from the Onondaga Creek Basin, and (4) simulate precolonization (forested) conditions in the basin to estimate the probable minimum phosphorus loads to the lake.

  7. Petascale supercomputing to accelerate the design of high-temperature alloys

    DOE PAGES

    Shin, Dongwon; Lee, Sangkeun; Shyam, Amit; ...

    2017-10-25

    Recent progress in high-performance computing and data informatics has opened up numerous opportunities to aid the design of advanced materials. Herein, we demonstrate a computational workflow that includes rapid population of high-fidelity materials datasets via petascale computing and subsequent analyses with modern data science techniques. We use a first-principles approach based on density functional theory to derive the segregation energies of 34 microalloying elements at the coherent and semi-coherent interfaces between the aluminium matrix and the θ'-Al2Cu precipitate, which requires several hundred supercell calculations. We also perform extensive correlation analyses to identify materials descriptors that affect the segregation behaviour of solutes at the interfaces. Finally, we show an example of leveraging machine learning techniques to predict segregation energies without performing computationally expensive physics-based simulations. As a result, the approach demonstrated in the present work can be applied to any high-temperature alloy system for which key materials data can be obtained using high-performance computing.

  8. Petascale supercomputing to accelerate the design of high-temperature alloys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shin, Dongwon; Lee, Sangkeun; Shyam, Amit

    Recent progress in high-performance computing and data informatics has opened up numerous opportunities to aid the design of advanced materials. Herein, we demonstrate a computational workflow that includes rapid population of high-fidelity materials datasets via petascale computing and subsequent analyses with modern data science techniques. We use a first-principles approach based on density functional theory to derive the segregation energies of 34 microalloying elements at the coherent and semi-coherent interfaces between the aluminium matrix and the θ'-Al2Cu precipitate, which requires several hundred supercell calculations. We also perform extensive correlation analyses to identify materials descriptors that affect the segregation behaviour of solutes at the interfaces. Finally, we show an example of leveraging machine learning techniques to predict segregation energies without performing computationally expensive physics-based simulations. As a result, the approach demonstrated in the present work can be applied to any high-temperature alloy system for which key materials data can be obtained using high-performance computing.

  9. Petascale supercomputing to accelerate the design of high-temperature alloys

    NASA Astrophysics Data System (ADS)

    Shin, Dongwon; Lee, Sangkeun; Shyam, Amit; Haynes, J. Allen

    2017-12-01

    Recent progress in high-performance computing and data informatics has opened up numerous opportunities to aid the design of advanced materials. Herein, we demonstrate a computational workflow that includes rapid population of high-fidelity materials datasets via petascale computing and subsequent analyses with modern data science techniques. We use a first-principles approach based on density functional theory to derive the segregation energies of 34 microalloying elements at the coherent and semi-coherent interfaces between the aluminium matrix and the θ‧-Al2Cu precipitate, which requires several hundred supercell calculations. We also perform extensive correlation analyses to identify materials descriptors that affect the segregation behaviour of solutes at the interfaces. Finally, we show an example of leveraging machine learning techniques to predict segregation energies without performing computationally expensive physics-based simulations. The approach demonstrated in the present work can be applied to any high-temperature alloy system for which key materials data can be obtained using high-performance computing.
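    A sketch of the final step — regressing segregation energies on simple materials descriptors so that new solutes can be screened without further DFT calculations — is shown below. The descriptor set, the synthetic target values, and the random-forest choice are placeholders; the paper's actual descriptors and learning method may differ.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)

# Hypothetical descriptor table for 34 solutes (values are synthetic placeholders):
# atomic-radius mismatch vs Al, electronegativity difference, cohesive energy, valence-electron count
n_solutes = 34
X = np.column_stack([
    rng.normal(0.0, 0.1, n_solutes),
    rng.normal(0.0, 0.5, n_solutes),
    rng.uniform(1.0, 8.0, n_solutes),
    rng.integers(1, 12, n_solutes).astype(float),
])
# Synthetic "DFT" segregation energies [eV] standing in for the petascale first-principles dataset
y = 1.5 * X[:, 0] - 0.3 * X[:, 1] + 0.05 * X[:, 2] + rng.normal(0.0, 0.05, n_solutes)

model = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="neg_mean_absolute_error")
print(f"cross-validated MAE: {-scores.mean():.3f} eV")

model.fit(X, y)
names = ["radius mismatch", "electronegativity diff.", "cohesive energy", "valence electrons"]
for name, imp in zip(names, model.feature_importances_):
    print(f"{name:24s} importance {imp:.2f}")
```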

  10. A numerically efficient damping model for acoustic resonances in microfluidic cavities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hahn, P., E-mail: hahnp@ethz.ch; Dual, J.

    Bulk acoustic wave devices are typically operated in a resonant state to achieve enhanced acoustic amplitudes and high acoustofluidic forces for the manipulation of microparticles. Among other loss mechanisms related to the structural parts of acoustofluidic devices, damping in the fluidic cavity is a crucial factor that limits the attainable acoustic amplitudes. In the analytical part of this study, we quantify all relevant loss mechanisms related to the fluid inside acoustofluidic micro-devices. Subsequently, a numerical analysis of the time-harmonic visco-acoustic and thermo-visco-acoustic equations is carried out to verify the analytical results for 2D and 3D examples. The damping results are fitted into the framework of classical linear acoustics to set up a numerically efficient device model. For this purpose, all damping effects are combined into an acoustofluidic loss factor. Since some components of the acoustofluidic loss factor depend on the acoustic mode shape in the fluid cavity, we propose a two-step simulation procedure. In the first step, the loss factors are deduced from the simulated mode shape. Subsequently, a second simulation is invoked, taking all losses into account. Owing to its computational efficiency, the presented numerical device model is of great relevance for the simulation of acoustofluidic particle manipulation by means of acoustic radiation forces or acoustic streaming. For the first time, accurate 3D simulations of realistic micro-devices for the quantitative prediction of pressure amplitudes and the related acoustofluidic forces become feasible.
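    The two-step idea — deduce loss factors from the mode shape, then rerun with the combined acoustofluidic loss factor — can be illustrated with a lumped one-mode sketch. The water properties, channel height, crude wall-loss estimate, and drive pressure below are rough illustrative numbers, not the device model from the paper.

```python
import numpy as np

# Illustrative water properties and channel height (not the devices modelled in the paper)
rho, c = 998.0, 1481.0                 # density [kg/m^3], sound speed [m/s]
mu, mu_b = 1.0e-3, 2.4e-3              # shear and bulk viscosity [Pa s]
f, h = 2.0e6, 160e-6                   # drive frequency [Hz], channel height [m]
omega = 2.0 * np.pi * f

# Step 1: loss factors estimated from the (here: assumed half-wave) pressure mode shape
eta_bulk = omega * (mu_b + 4.0 * mu / 3.0) / (rho * c**2)   # classical bulk viscous absorption
delta = np.sqrt(2.0 * mu / (rho * omega))                   # viscous boundary-layer thickness
eta_wall = delta / h                                        # crude order-of-magnitude wall-loss estimate
eta_total = eta_bulk + eta_wall                             # combined acoustofluidic loss factor

# Step 2: damped resonant response computed with the combined loss factor
Q = 1.0 / eta_total                                         # quality factor of the fluid resonance
p_drive = 10e3                                              # assumed off-resonance drive pressure [Pa]
p_resonant = Q * p_drive                                    # amplitude magnification at resonance
print(f"eta_bulk = {eta_bulk:.2e}, eta_wall = {eta_wall:.2e}, Q = {Q:.0f}, |p| ~ {p_resonant / 1e3:.0f} kPa")
```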

  11. A Simple Graphical Method for Quantification of Disaster Management Surge Capacity Using Computer Simulation and Process-control Tools.

    PubMed

    Franc, Jeffrey Michael; Ingrassia, Pier Luigi; Verde, Manuela; Colombo, Davide; Della Corte, Francesco

    2015-02-01

    Surge capacity, or the ability to manage an extraordinary volume of patients, is fundamental for hospital management of mass-casualty incidents. However, quantification of surge capacity is difficult and no universal standard for its measurement has emerged, nor has a standardized statistical method been advocated. As mass-casualty incidents are rare, simulation may represent a viable alternative to measure surge capacity. Hypothesis/Problem: The objective of the current study was to develop a statistical method for the quantification of surge capacity using a combination of computer simulation and simple process-control statistical tools. Length-of-stay (LOS) and patient volume (PV) were used as metrics. The use of this method was then demonstrated on a subsequent computer simulation of an emergency department (ED) response to a mass-casualty incident. In the derivation phase, 357 participants in five countries performed 62 computer simulations of an ED response to a mass-casualty incident. Benchmarks for ED response were derived from these simulations, including LOS and PV metrics for triage, bed assignment, physician assessment, and disposition. In the application phase, 13 students of the European Master in Disaster Medicine (EMDM) program completed the same simulation scenario, and the results were compared to the standards obtained in the derivation phase. Patient-volume metrics included number of patients to be triaged, assigned to rooms, assessed by a physician, and disposed. Length-of-stay metrics included median time to triage, room assignment, physician assessment, and disposition. Simple graphical methods were used to compare the application phase group to the derived benchmarks using process-control statistical tools. The group in the application phase failed to meet the indicated standard for LOS from admission to disposition decision. This study demonstrates how simulation software can be used to derive values for objective benchmarks of ED surge capacity using PV and LOS metrics. These objective metrics can then be applied to other simulation groups using simple graphical process-control tools to provide a numeric measure of surge capacity. Repeated use in simulations of actual EDs may represent a potential means of objectively quantifying disaster management surge capacity. It is hoped that the described statistical method, which is simple and reusable, will be useful for investigators in this field to apply to their own research.
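    A sketch of the metric-and-benchmark idea — median length-of-stay and patient-volume metrics computed from simulated event logs, compared against derivation-phase control limits — follows. The event timestamps, the two-sigma upper control limit, and the group sizes are invented for illustration and are not the study's benchmarks.

```python
import numpy as np

def los_metrics(events):
    """Median length of stay [min] from admission to each milestone, plus a patient-volume count.
    `events` maps patient id -> dict of timestamps (minutes from incident start)."""
    out = {}
    for m in ("triage", "room", "physician", "disposition"):
        los = [t[m] - t["admission"] for t in events.values() if m in t]
        out[m] = float(np.median(los)) if los else float("nan")
    out["patients_disposed"] = sum("disposition" in t for t in events.values())
    return out

def control_limits(derivation_runs, milestone):
    """Benchmark median and a simple upper control limit (median + 2 SD) from the derivation phase."""
    vals = np.array([run[milestone] for run in derivation_runs])
    return float(np.median(vals)), float(np.median(vals) + 2.0 * vals.std())

# Hypothetical data standing in for the 62 derivation simulations and one application-phase group
rng = np.random.default_rng(3)
derivation = [{"disposition": rng.normal(180.0, 25.0)} for _ in range(62)]
benchmark, ucl = control_limits(derivation, "disposition")

test_group = {i: {"admission": 0.0, "triage": 8.0, "room": 25.0,
                  "physician": 60.0, "disposition": 250.0} for i in range(20)}
result = los_metrics(test_group)
status = "meets" if result["disposition"] <= ucl else "fails"
print(f"median LOS to disposition {result['disposition']:.0f} min "
      f"(benchmark {benchmark:.0f} min, UCL {ucl:.0f} min) -> {status} the standard")
```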

  12. Hydrodynamics of Fishlike Swimming: Effects of swimming kinematics and Reynolds number

    NASA Astrophysics Data System (ADS)

    Gilmanov, Anvar; Posada, Nicolas; Sotiropoulos, Fotis

    2003-11-01

    We carry out a series of numerical simulations to investigate the effects of swimming kinematics and Reynolds number on the flow past a three-dimensional fishlike body undergoing undulatory motion. The simulated body shape is that of a real mackerel fish. The mackerel was frozen and subsequently sliced in several thin fillets whose dimensions were carefully measured and used to construct the fishlike body shape used in the simulations. The flow induced by the undulating body is simulated by solving the 3D, unsteady, incompressible Navier-Stokes equations with the second-order accurate, hybrid Cartesian/Immersed Boundary formulation of Gilmanov and Sotiropoulos (J. Comp. Physics, under review, 2003). We consider in-line swimming at constant speed and carry out simulations for various types of swimming kinematics, varying the tailbeat amplitude, frequency, and Reynolds number (300

  13. 2D OR NOT 2D: THE EFFECT OF DIMENSIONALITY ON THE DYNAMICS OF FINGERING CONVECTION AT LOW PRANDTL NUMBER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garaud, Pascale; Brummell, Nicholas

    2015-12-10

    Fingering convection (otherwise known as thermohaline convection) is an instability that occurs in stellar radiative interiors in the presence of unstable compositional gradients. Numerical simulations have been used in order to estimate the efficiency of mixing induced by this instability. However, fully three-dimensional (3D) computations in the parameter regime appropriate for stellar astrophysics (i.e., low Prandtl number) are prohibitively expensive. This raises the question of whether two-dimensional (2D) simulations could be used instead to achieve the same goals. In this work, we address this issue by comparing the outcome of 2D and 3D simulations of fingering convection at low Prandtl number. We find that 2D simulations are never appropriate. However, we also find that the required 3D computational domain does not have to be very wide: the third dimension only needs to contain a minimum of two wavelengths of the fastest-growing linearly unstable mode to capture the essentially 3D dynamics of small-scale fingering. Narrow domains, however, should still be used with caution since they could limit the subsequent development of any large-scale dynamics typically associated with fingering convection.

  14. CFD Modelling of a Quadrupole Vortex Inside a Cylindrical Channel for Research into Advanced Hybrid Rocket Designs

    NASA Astrophysics Data System (ADS)

    Godfrey, B.; Majdalani, J.

    2014-11-01

    This study relies on computational fluid dynamics (CFD) tools to analyse a possible method for creating a stable quadrupole vortex within a simulated, circular-port, cylindrical rocket chamber. A model of the vortex generator is created in a SolidWorks CAD program and then the grid is generated using the Pointwise mesh generation software. The non-reactive flowfield is simulated using an open source computational program, Stanford University Unstructured (SU2). Subsequent analysis and visualization are performed using ParaView. The vortex generation approach that we employ consists of four tangentially injected monopole vortex generators that are arranged symmetrically with respect to the center of the chamber in such a way to produce a quadrupole vortex with a common downwash. The present investigation focuses on characterizing the flow dynamics so that future investigations can be undertaken with increasing levels of complexity. Our CFD simulations help to elucidate the onset of vortex filaments within the monopole tubes, and the evolution of quadrupole vortices downstream of the injection faceplate. Our results indicate that the quadrupole vortices produced using the present injection pattern can quickly become unstable, to the extent of dissipating soon after being introduced into the simulated rocket chamber. We conclude that a change in the geometrical configuration will be necessary to produce more stable quadrupoles.

  15. A Wearable Goggle Navigation System for Dual-Mode Optical and Ultrasound Localization of Suspicious Lesions: Validation Studies Using Tissue-Simulating Phantoms and an Ex Vivo Human Breast Tissue Model

    PubMed Central

    Wang, Dong; Gan, Qi; Ye, Jian; Yue, Jian; Wang, Benzhong; Povoski, Stephen P.; Martin, Edward W.; Hitchcock, Charles L.; Yilmaz, Alper; Tweedle, Michael F.; Shao, Pengfei; Xu, Ronald X.

    2016-01-01

    Surgical resection remains the primary curative treatment for many early-stage cancers, including breast cancer. The development of intraoperative guidance systems for identifying all sites of disease and improving the likelihood of complete surgical resection is an area of active ongoing research, as this can lead to a decrease in the need of subsequent additional surgical procedures. We develop a wearable goggle navigation system for dual-mode optical and ultrasound imaging of suspicious lesions. The system consists of a light source module, a monochromatic CCD camera, an ultrasound system, a Google Glass, and a host computer. It is tested in tissue-simulating phantoms and an ex vivo human breast tissue model. Our experiments demonstrate that the surgical navigation system provides useful guidance for localization and core needle biopsy of simulated tumor within the tissue-simulating phantom, as well as a core needle biopsy and subsequent excision of Indocyanine Green (ICG)—fluorescing sentinel lymph nodes. Our experiments support the contention that this wearable goggle navigation system can be potentially very useful and fully integrated by the surgeon for optimizing many aspects of oncologic surgery. Further engineering optimization and additional in vivo clinical validation work is necessary before such a surgical navigation system can be fully realized in the everyday clinical setting. PMID:27367051

  16. A Wearable Goggle Navigation System for Dual-Mode Optical and Ultrasound Localization of Suspicious Lesions: Validation Studies Using Tissue-Simulating Phantoms and an Ex Vivo Human Breast Tissue Model.

    PubMed

    Zhang, Zeshu; Pei, Jing; Wang, Dong; Gan, Qi; Ye, Jian; Yue, Jian; Wang, Benzhong; Povoski, Stephen P; Martin, Edward W; Hitchcock, Charles L; Yilmaz, Alper; Tweedle, Michael F; Shao, Pengfei; Xu, Ronald X

    2016-01-01

    Surgical resection remains the primary curative treatment for many early-stage cancers, including breast cancer. The development of intraoperative guidance systems for identifying all sites of disease and improving the likelihood of complete surgical resection is an area of active ongoing research, as this can lead to a decrease in the need of subsequent additional surgical procedures. We develop a wearable goggle navigation system for dual-mode optical and ultrasound imaging of suspicious lesions. The system consists of a light source module, a monochromatic CCD camera, an ultrasound system, a Google Glass, and a host computer. It is tested in tissue-simulating phantoms and an ex vivo human breast tissue model. Our experiments demonstrate that the surgical navigation system provides useful guidance for localization and core needle biopsy of simulated tumor within the tissue-simulating phantom, as well as a core needle biopsy and subsequent excision of Indocyanine Green (ICG)-fluorescing sentinel lymph nodes. Our experiments support the contention that this wearable goggle navigation system can be potentially very useful and fully integrated by the surgeon for optimizing many aspects of oncologic surgery. Further engineering optimization and additional in vivo clinical validation work is necessary before such a surgical navigation system can be fully realized in the everyday clinical setting.

  17. Altered swelling and ion fluxes in articular cartilage as a biomarker in osteoarthritis and joint immobilization: a computational analysis

    PubMed Central

    Manzano, Sara; Manzano, Raquel; Doblaré, Manuel; Doweidar, Mohamed Hamdy

    2015-01-01

    In healthy cartilage, mechano-electrochemical phenomena act together to maintain tissue homeostasis. Osteoarthritis (OA) and degenerative diseases disrupt this biological equilibrium by causing structural deterioration and subsequent dysfunction of the tissue. Swelling and ion flux alteration as well as abnormal ion distribution are proposed as primary indicators of tissue degradation. In this paper, we present an extension of a previous three-dimensional computational model of the cartilage behaviour developed by the authors to simulate the contribution of the main tissue components in its behaviour. The model considers the mechano-electrochemical events as concurrent phenomena in a three-dimensional environment. This model has been extended here to include the effect of repulsion of negative charges attached to proteoglycans. Moreover, we have studied the fluctuation of these charges owing to proteoglycan variations in healthy and pathological articular cartilage. In this sense, standard patterns of healthy and degraded tissue behaviour can be obtained, which could be a helpful diagnostic tool. By introducing measured properties of unhealthy cartilage into the computational model, the severity of tissue degeneration can be predicted, avoiding complex tissue extraction and subsequent in vitro analysis. In this work, the model has been applied to monitor and analyse cartilage behaviour at different stages of OA and in both short (four, six and eight weeks) and long-term (11 weeks) fully immobilized joints. Simulation results showed marked differences in the corresponding swelling phenomena, in outgoing cation fluxes and in cation distributions. Furthermore, long-term immobilized patients display similar swelling as well as fluxes and distribution of cations to patients in the early stages of OA; thus, preventive treatments are highly recommended to avoid tissue deterioration. PMID:25392400

  18. The computational nature of memory modification.

    PubMed

    Gershman, Samuel J; Monfils, Marie-H; Norman, Kenneth A; Niv, Yael

    2017-03-15

    Retrieving a memory can modify its influence on subsequent behavior. We develop a computational theory of memory modification, according to which modification of a memory trace occurs through classical associative learning, but which memory trace is eligible for modification depends on a structure learning mechanism that discovers the units of association by segmenting the stream of experience into statistically distinct clusters (latent causes). New memories are formed when the structure learning mechanism infers that a new latent cause underlies current sensory observations. By the same token, old memories are modified when old and new sensory observations are inferred to have been generated by the same latent cause. We derive this framework from probabilistic principles, and present a computational implementation. Simulations demonstrate that our model can reproduce the major experimental findings from studies of memory modification in the Pavlovian conditioning literature.
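    A toy sketch of the two ingredients described above — latent-cause inference with a Chinese-restaurant-process-style prior and delta-rule modification of the memory trace assigned to the inferred cause — follows. The Gaussian observation model, parameter values, and trial sequence are invented for illustration and are far simpler than the paper's full model.

```python
import numpy as np

alpha_crp, lr, noise, broad = 1.0, 0.3, 0.5, 5.0   # CRP concentration, learning rate, likelihood widths

def infer_cause(obs, cause_means, cause_counts):
    """Posterior over existing latent causes plus one new cause: CRP-style prior times Gaussian likelihood."""
    priors = np.append(cause_counts, alpha_crp)
    priors = priors / priors.sum()
    lik_old = np.exp(-0.5 * ((obs - cause_means) / noise) ** 2) / noise
    lik_new = np.exp(-0.5 * (obs / broad) ** 2) / broad          # broad predictive density for a new cause
    post = priors * np.append(lik_old, lik_new)
    return post / post.sum()

# One latent cause created during initial conditioning; its CS->US association is fully learned
weights = [1.0]
cause_means, cause_counts = np.array([1.0]), np.array([10.0])

for obs, us in [(1.0, 0.0), (1.0, 0.0), (3.0, 0.0)]:             # two familiar extinction trials, one novel context
    post = infer_cause(obs, cause_means, cause_counts)
    z = int(np.argmax(post))
    if z == len(weights):                                        # observation explained best by a brand-new cause
        weights.append(0.0)
        cause_means = np.append(cause_means, obs)
        cause_counts = np.append(cause_counts, 0.0)
    weights[z] += lr * (us - weights[z])                         # delta-rule modification of that cause's memory
    cause_counts[z] += 1
    print(f"obs={obs:+.1f}  inferred cause={z}  weights={np.round(weights, 2)}")
```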

  19. Performance Analysis, Design Considerations, and Applications of Extreme-Scale In Situ Infrastructures

    DOE PAGES

    Ayachit, Utkarsh; Bauer, Andrew; Duque, Earl P. N.; ...

    2016-11-01

    A key trend facing extreme-scale computational science is the widening gap between computational and I/O rates, and the challenge that follows is how to best gain insight from simulation data when it is increasingly impractical to save it to persistent storage for subsequent visual exploration and analysis. One approach to this challenge is centered around the idea of in situ processing, where visualization and analysis processing is performed while data is still resident in memory. Our paper examines several key design and performance issues related to the idea of in situ processing at extreme scale on modern platforms: Scalability, overhead, performance measurement and analysis, comparison and contrast with a traditional post hoc approach, and interfacing with simulation codes. We illustrate these principles in practice with studies, conducted on large-scale HPC platforms, that include a miniapplication and multiple science application codes, one of which demonstrates in situ methods in use at greater than 1M-way concurrency.

  20. Virtual evaluation of stent graft deployment: a validated modeling and simulation study.

    PubMed

    De Bock, S; Iannaccone, F; De Santis, G; De Beule, M; Van Loo, D; Devos, D; Vermassen, F; Segers, P; Verhegghe, B

    2012-09-01

    The presented study details the virtual deployment of a bifurcated stent graft (Medtronic Talent) in an Abdominal Aortic Aneurysm model, using the finite element method. The entire deployment procedure is modeled, with the stent graft being crimped and bent according to the vessel geometry, and subsequently released. The finite element results are validated in vitro with placement of the device in a silicone mock aneurysm, using high resolution CT scans to evaluate the result. The presented work confirms the capability of finite element computer simulations to predict the deformed configuration after endovascular aneurysm repair (EVAR). These simulations can be used to quantify mechanical parameters, such as neck dilations, radial forces and stresses in the device, that are difficult or impossible to obtain from medical imaging.

  1. Stochastic Effects in Computational Biology of Space Radiation Cancer Risk

    NASA Technical Reports Server (NTRS)

    Cucinotta, Francis A.; Pluth, Janis; Harper, Jane; O'Neill, Peter

    2007-01-01

    Estimating risk from space radiation poses important questions on the radiobiology of protons and heavy ions. We are considering systems biology models to study radiation induced repair foci (RIRF) at low doses, in which less than one track on average traverses the cell, and the subsequent DNA damage processing and signal transduction events. Computational approaches for describing protein regulatory networks coupled to DNA and oxidative damage sites include systems of differential equations, stochastic equations, and Monte-Carlo simulations. We review recent developments in the mathematical description of protein regulatory networks and possible approaches to radiation effects simulation. These include robustness, which states that regulatory networks maintain their functions against external and internal perturbations due to compensating properties of redundancy and molecular feedback controls, and modularity, which leads to general theorems for considering molecules that interact through a regulatory mechanism without exchange of matter, leading to a block diagonal reduction of the connecting pathways. Identifying rate-limiting steps, robustness, and modularity in pathways perturbed by radiation damage are shown to be valid techniques for reducing large molecular systems to realistic computer simulations. Other techniques studied are the use of steady-state analysis, and the introduction of composite molecules or rate-constants to represent small collections of reactants. Applications of these techniques to describe spatial and temporal distributions of RIRF and cell populations following low dose irradiation are described.

  2. SU-E-I-63: Quantitative Evaluation of the Effects of Orthopedic Metal Artifact Reduction (OMAR) Software On CT Images for Radiotherapy Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jani, S

    Purpose: CT simulation for patients with metal implants can often be challenging due to artifacts that obscure tumor/target delineation and normal organ definition. Our objective was to evaluate the effectiveness of Orthopedic Metal Artifact Reduction (OMAR), a commercially available software, in reducing metal-induced artifacts and its effect on computed dose during treatment planning. Methods: CT images of water surrounding metallic cylindrical rods made of aluminum, copper and iron were studied in terms of Hounsfield Units (HU) spread. Metal-induced artifacts were characterized in terms of HU/Volume Histogram (HVH) using the Pinnacle treatment planning system. Effects of OMAR on enhancing our ability to delineate organs on CT and subsequent dose computation were examined in nine (9) patients with hip implants and two (2) patients with breast tissue expanders. Results: Our study characterized water at 1000 HU with a standard deviation (SD) of about 20 HU. The HVHs allowed us to evaluate how the presence of metal changed the HU spread. For example, introducing a 2.54 cm diameter copper rod in water increased the SD in HU of the surrounding water from 20 to 209, representing an increase in artifacts. Subsequent use of OMAR brought the SD down to 78. Aluminum produced the least artifacts whereas iron showed the largest amount of artifacts. In general, an increase in kVp and mA during CT scanning showed better effectiveness of OMAR in reducing artifacts. Our dose analysis showed that some isodose contours shifted by several mm with OMAR, but infrequently and nonsignificantly for the planning process. Computed volumes of various dose levels showed <2% change. Conclusions: In our experience, OMAR software greatly reduced the metal-induced CT artifacts for the majority of patients with implants, thereby improving our ability to delineate tumor and surrounding organs. OMAR had a clinically negligible effect on computed dose within tissues. Partially funded by unrestricted educational grant from Philips.

  3. Influence of regional climate change on meteorological characteristics and their subsequent effect on ozone dispersion in Taiwan

    NASA Astrophysics Data System (ADS)

    Cheng, Fang-Yi; Jian, Shan-Ping; Yang, Zhih-Min; Yen, Ming-Cheng; Tsuang, Ben-Jei

    2015-02-01

    The objective of this study is to understand the influence of regional climate change on local meteorological conditions and their subsequent effect on local ozone (O3) dispersion in Taiwan. The 33-year NCEP-DOE Reanalysis 2 (NNR2) data set (1979-2011) was analyzed to understand the variations in regional-scale atmospheric conditions in East Asia and the western North Pacific. To save computational processing time, two scenarios representative of past (1979-86) and current (2004-11) atmospheric conditions were selected but only targeting the autumn season (September, October and November) when the O3 concentrations were at high levels. Numerical simulations were performed using weather research and forecasting (WRF) model and Community Multiscale Air Quality (CMAQ) model for the past and current scenarios individually but only for the month of October because of limited computational resources. Analysis of NNR2 data exhibited increased air temperature, weakened Asian continental anticyclone, enhanced northeasterly monsoonal flow, and a deepened low-pressure system forming near Taiwan. With enhanced evaporation from oceans along with a deepened low-pressure system, precipitation amounts increased in Taiwan in the current scenario. As demonstrated in the WRF simulation, the land surface physical process responded to the enhanced precipitation resulting in damper soil conditions, and reduced ground temperatures that in turn restricted the development of boundary layer height. The weakened land-sea breeze flow was simulated in the current scenario. With reduced dispersion capability, air pollutants would tend to accumulate near the emission source leading to a degradation of air quality in this region. The conditions would be even worse in southwestern Taiwan due to the fact that stagnant wind fields would occur more frequently in the current scenario. On the other hand, in northern Taiwan, the simulated O3 concentrations are lower during the day in the current scenario due to the enhanced cloud conditions and reduced solar radiation.

  4. Flow and air conditioning simulations of computer turbinectomized nose models.

    PubMed

    Pérez-Mota, J; Solorio-Ordaz, F; Cervantes-de Gortari, J

    2018-04-16

    Air conditioning for the human respiratory system is the most important function of the nose. When obstruction occurs in the nasal airway, turbinectomy is used to correct such pathology. However, mucosal atrophy may occur sometime after this surgery when it is overdone. There is not enough information about long-term recovery of nasal air conditioning performance after partial or total surgery. The purpose of this research was to assess if, based on the flow and temperature/humidity characteristics of the air intake to the choana, partial resection of turbinates is better than total resection. A normal nasal cavity geometry was digitized from tomographic scans and a model was printed in 3D. Dynamic (sinusoidal) laboratory tests and computer simulations of airflow were conducted with full agreement between numerical and experimental results. Computational adaptations were subsequently performed to represent six turbinectomy variations and a swollen nasal cavity case. Streamlines along the nasal cavity and temperature and humidity distributions at the choana indicated that the middle turbinate partial resection is the best alternative. These findings may facilitate the diagnosis of nasal obstruction and can be useful both to plan a turbinectomy and to reduce postoperative discomfort.

  5. Computer simulations of the energy dissipation rate in a fluorescence-activated cell sorter: Implications to cells.

    PubMed

    Mollet, Mike; Godoy-Silva, Ruben; Berdugo, Claudia; Chalmers, Jeffrey J

    2008-06-01

    Fluorescence activated cell sorting, FACS, is a widely used method to sort subpopulations of cells to high purities. To achieve relatively high sorting speeds, FACS instruments operate by forcing suspended cells to flow in a single file line through a laser(s) beam(s). Subsequently, this flow stream breaks up into individual drops which can be charged and deflected into multiple collection streams. Previous work by Ma et al. (2002) and Mollet et al. (2007; Biotechnol Bioeng 98:772-788) indicates that subjecting cells to hydrodynamic forces consisting of both high extensional and shear components in micro-channels results in significant cell damage. Using the fluid dynamics software FLUENT, computer simulations of typical fluid flow through the nozzle of a BD FACSVantage indicate that hydrodynamic forces, quantified using the scalar parameter energy dissipation rate, are similar in the FACS nozzle to levels reported to create significant cell damage in micro-channels. Experimental studies in the FACSVantage, operated under the same conditions as the simulations confirmed significant cell damage in two cell lines, Chinese Hamster Ovary cells (CHO) and THP1, a human acute monocytic leukemia cell line.
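    The scalar used in this line of work to quantify hydrodynamic stress, the energy dissipation rate, can be computed from the velocity-gradient tensor of a CFD cell as sketched below. The viscosity, density, and gradient values are illustrative; note that the sketch reports dissipation per unit mass (multiply by the density for a per-unit-volume value, as some of the cited studies report).

```python
import numpy as np

mu, rho = 1.0e-3, 998.0                  # dynamic viscosity [Pa s] and density [kg/m^3] of the sheath fluid

def energy_dissipation_rate(grad_u):
    """Viscous energy dissipation rate per unit mass, eps = 2*nu*S_ij*S_ij [W/kg], from a
    3x3 velocity-gradient tensor grad_u[i, j] = du_i/dx_j of one CFD cell; multiply by rho for W/m^3."""
    S = 0.5 * (grad_u + grad_u.T)        # strain-rate tensor
    return 2.0 * (mu / rho) * float(np.sum(S * S))

# Hypothetical velocity gradients [1/s]: strong extensional flow in the nozzle contraction
grad_u = np.array([[ 5.0e4,     0.0,     0.0],
                   [   0.0, -2.5e4,     0.0],
                   [   0.0,     0.0, -2.5e4]])
eps = energy_dissipation_rate(grad_u)
print(f"energy dissipation rate ~ {eps:.2e} W/kg ({eps * rho:.2e} W/m^3)")
```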

  6. Geomechanical Analysis of Underground Coal Gasification Reactor Cool Down for Subsequent CO2 Storage

    NASA Astrophysics Data System (ADS)

    Sarhosis, Vasilis; Yang, Dongmin; Kempka, Thomas; Sheng, Yong

    2013-04-01

    Underground coal gasification (UCG) is an efficient method for the conversion of conventionally unmineable coal resources into energy and feedstock. If the UCG process is combined with the subsequent storage of process CO2 in the former UCG reactors, a near-zero carbon emission energy source can be realised. This study aims to present the development of a computational model to simulate the cooling process of UCG reactors in abandonment, decreasing the initial high temperature of more than 400 °C to a level where extensive CO2 volume expansion due to temperature changes can be significantly reduced during the time of CO2 injection. Furthermore, we predict the cool down temperature conditions with and without water flushing. A state of the art coupled thermal-mechanical model was developed using the finite element software ABAQUS to predict the cavity growth and the resulting surface subsidence. In addition, the multi-physics computational software COMSOL was employed to simulate the cavity cool down process, which is of utmost relevance for CO2 storage in the former UCG reactors. For that purpose, we simulated fluid flow, thermal conduction as well as thermal convection processes between the fluid (water and CO2) and the solid represented by coal and surrounding rocks. Material properties for rocks and coal were obtained from extant literature sources and geomechanical tests carried out on samples derived from a prospective demonstration site in Bulgaria. The analysis of results showed that the numerical models developed allowed for the determination of the UCG reactor growth, roof spalling, surface subsidence and heat propagation during the UCG process and the subsequent CO2 storage. It is anticipated that the results of this study can support optimisation of the preparation procedure for CO2 storage in former UCG reactors. The proposed scheme has been discussed previously but not yet validated by a coupled numerical analysis; if proved applicable, it could provide a significant optimisation of the UCG process in terms of CO2 storage efficiency. The proposed coupled UCG-CCS scheme allows for meeting EU targets for greenhouse gas emissions and increases the yield from coal otherwise impossible to exploit.

  7. GFDL's unified regional-global weather-climate modeling system with variable resolution capability for severe weather predictions and regional climate simulations

    NASA Astrophysics Data System (ADS)

    Lin, S. J.

    2015-12-01

    The NOAA/Geophysical Fluid Dynamics Laboratory has been developing a unified regional-global modeling system with variable resolution capabilities that can be used for severe weather predictions (e.g., tornado outbreak events and cat-5 hurricanes) and ultra-high-resolution (1-km) regional climate simulations within a consistent global modeling framework. The foundation of this flexible regional-global modeling system is the non-hydrostatic extension of the vertically Lagrangian dynamical core (Lin 2004, Monthly Weather Review) known in the community as FV3 (finite-volume on the cubed-sphere). Because of its flexibility and computational efficiency, the FV3 is one of the final candidates of NOAA's Next Generation Global Prediction System (NGGPS). We have built into the modeling system a stretched (single) grid capability, a two-way (regional-global) multiple nested grid capability, and the combination of the stretched and two-way nests, so as to make convection-resolving regional climate simulation within a consistent global modeling system feasible using today's High Performance Computing systems. One of our main scientific goals is to enable simulations of high impact weather phenomena (such as tornadoes, thunderstorms, category-5 hurricanes) within an IPCC-class climate modeling system previously regarded as impossible. In this presentation I will demonstrate that it is computationally feasible to simulate not only super-cell thunderstorms, but also the subsequent genesis of tornadoes using a global model that was originally designed for century long climate simulations. As a unified weather-climate modeling system, we evaluated the performance of the model with horizontal resolution ranging from 1 km to as low as 200 km. In particular, for downscaling studies, we have developed various tests to ensure that the large-scale circulation within the global variable resolution system is well simulated while at the same time the small-scale features can be accurately captured within the targeted high resolution region.

  8. Development of a dynamic coupled hydro-geomechanical code and its application to induced seismicity

    NASA Astrophysics Data System (ADS)

    Miah, Md Mamun

    This research describes the importance of hydro-geomechanical coupling in the geologic sub-surface environment arising from fluid injection at geothermal plants, large-scale geological CO2 sequestration for climate mitigation, enhanced oil recovery, and hydraulic fracturing during well construction in the oil and gas industries. A sequential computational code is developed to capture the multiphysics interaction behavior by linking a flow simulation code TOUGH2 and a geomechanics modeling code PyLith. Numerical formulation of each code is discussed to demonstrate their modeling capabilities. The computational framework involves sequential coupling and the solution of two sub-problems: fluid flow through fractured and porous media, and reservoir geomechanics. For each time step of the flow calculation, the pressure field is passed to the geomechanics code to compute the effective stress field and fault slips. A simplified permeability model is implemented in the code that accounts for the permeability of porous and saturated rocks subject to confining stresses. The accuracy of the TOUGH-PyLith coupled simulator is tested by simulating Terzaghi's 1D consolidation problem. The modeling capability of coupled poroelasticity is validated by benchmarking it against Mandel's problem. The code is used to simulate both quasi-static and dynamic earthquake nucleation and slip distribution on a fault from the combined effect of far field tectonic loading and fluid injection, using an appropriate fault constitutive friction model. Results from the quasi-static induced earthquake simulations show a delayed response in earthquake nucleation. This is attributed to the increased total stress in the domain and to not accounting for pressure on the fault. However, this issue is resolved in the final chapter in simulating a single event earthquake dynamic rupture. Simulation results show that fluid pressure has a positive effect on slip nucleation and subsequent crack propagation. This is confirmed by running a sensitivity analysis that shows an increase in injection well distance results in delayed slip nucleation and rupture propagation on the fault.
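    The sequential coupling pattern described above can be sketched as a loop that alternates a flow update with a geomechanics update and a Coulomb failure check on the fault. The toy flow and stress updates below are hypothetical stand-ins; the actual workflow exchanges fields between TOUGH2 and PyLith rather than calling functions like these, and all values are illustrative.

```python
import numpy as np

# Toy stand-ins for the two external simulators; the real workflow exchanges fields between
# TOUGH2 and PyLith through their own interfaces rather than calling functions like these.
def flow_step(p, dp_inj, cell=0, relax=0.3):
    """Crude flow update: a relaxation sweep mimics pressure diffusion along the fault
    (periodic ends for simplicity), then injection raises pressure at the well cell [MPa]."""
    p = p + relax * (np.roll(p, 1) + np.roll(p, -1) - 2.0 * p)
    p[cell] += dp_inj
    return p

def geomechanics_step(p, sigma_n0, tau0, mu_f=0.6, cohesion=0.0):
    """Effective normal stress (Terzaghi) and Coulomb failure function on each fault segment."""
    sigma_eff = sigma_n0 - p
    cff = tau0 - (cohesion + mu_f * sigma_eff)
    return sigma_eff, cff

# Fault discretised into 50 segments; stresses and pressure in MPa, one coupling step per day
n = 50
p = np.full(n, 10.0)
sigma_n0, tau0 = np.full(n, 40.0), np.full(n, 17.5)

for day in range(1, 31):
    p = flow_step(p, dp_inj=0.2)                            # flow sub-problem
    sigma_eff, cff = geomechanics_step(p, sigma_n0, tau0)   # geomechanics sub-problem
    if np.any(cff >= 0.0):
        print(f"slip nucleates on day {day} on {int(np.sum(cff >= 0.0))} segment(s)")
        break
else:
    print("no slip nucleation within the simulated 30 days")
```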

  9. Bootstrap Prediction Intervals in Non-Parametric Regression with Applications to Anomaly Detection

    NASA Technical Reports Server (NTRS)

    Kumar, Sricharan; Srivistava, Ashok N.

    2012-01-01

    Prediction intervals provide a measure of the probable interval in which the outputs of a regression model can be expected to occur. Subsequently, these prediction intervals can be used to determine if the observed output is anomalous or not, conditioned on the input. In this paper, a procedure for determining prediction intervals for outputs of nonparametric regression models using bootstrap methods is proposed. Bootstrap methods allow for a non-parametric approach to computing prediction intervals with no specific assumptions about the sampling distribution of the noise or the data. The asymptotic fidelity of the proposed prediction intervals is theoretically proved. Subsequently, the validity of the bootstrap based prediction intervals is illustrated via simulations. Finally, the bootstrap prediction intervals are applied to the problem of anomaly detection on aviation data.
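    A minimal sketch of the proposed procedure — a residual bootstrap around a nonparametric regression (here, a Nadaraya-Watson kernel smoother), with the resulting prediction interval used to flag anomalous outputs — is given below. The kernel smoother, bandwidth, and synthetic data are illustrative choices, not the exact estimator or aviation data used in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

def kernel_smooth(x_train, y_train, x_eval, bandwidth=0.3):
    """Nadaraya-Watson kernel regression estimate at x_eval."""
    w = np.exp(-0.5 * ((x_eval[:, None] - x_train[None, :]) / bandwidth) ** 2)
    return (w @ y_train) / w.sum(axis=1)

def bootstrap_prediction_interval(x, y, x_eval, n_boot=500, level=0.95):
    """Residual-bootstrap prediction interval for a nonparametric regression fit."""
    fit = kernel_smooth(x, y, x)
    resid = y - fit
    preds = np.empty((n_boot, x_eval.size))
    for b in range(n_boot):
        y_b = fit + rng.choice(resid, size=resid.size, replace=True)          # resample residuals
        fit_b = kernel_smooth(x, y_b, x_eval)                                 # refit on the bootstrap sample
        preds[b] = fit_b + rng.choice(resid, size=x_eval.size, replace=True)  # add noise for *prediction*
    lo, hi = np.percentile(preds, [50 * (1 - level), 50 * (1 + level)], axis=0)
    return lo, hi

# Synthetic "sensor" data with one injected anomaly at x = 4.5
x = np.sort(rng.uniform(0.0, 6.0, 200))
y = np.sin(x) + rng.normal(0.0, 0.15, x.size)
x_new, y_new = np.array([2.0, 4.5]), np.array([0.92, -2.0])

lo, hi = bootstrap_prediction_interval(x, y, x_new)
for xi, yi, l, h in zip(x_new, y_new, lo, hi):
    flag = "anomalous" if (yi < l or yi > h) else "nominal"
    print(f"x={xi:.1f}: observed {yi:+.2f}, 95% PI [{l:+.2f}, {h:+.2f}] -> {flag}")
```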

  10. Ensembler: Enabling High-Throughput Molecular Simulations at the Superfamily Scale.

    PubMed

    Parton, Daniel L; Grinaway, Patrick B; Hanson, Sonya M; Beauchamp, Kyle A; Chodera, John D

    2016-06-01

    The rapidly expanding body of available genomic and protein structural data provides a rich resource for understanding protein dynamics with biomolecular simulation. While computational infrastructure has grown rapidly, simulations on an omics scale are not yet widespread, primarily because software infrastructure to enable simulations at this scale has not kept pace. It should now be possible to study protein dynamics across entire (super)families, exploiting both available structural biology data and conformational similarities across homologous proteins. Here, we present a new tool for enabling high-throughput simulation in the genomics era. Ensembler takes any set of sequences-from a single sequence to an entire superfamily-and shepherds them through various stages of modeling and refinement to produce simulation-ready structures. This includes comparative modeling to all relevant PDB structures (which may span multiple conformational states of interest), reconstruction of missing loops, addition of missing atoms, culling of nearly identical structures, assignment of appropriate protonation states, solvation in explicit solvent, and refinement and filtering with molecular simulation to ensure stable simulation. The output of this pipeline is an ensemble of structures ready for subsequent molecular simulations using computer clusters, supercomputers, or distributed computing projects like Folding@home. Ensembler thus automates much of the time-consuming process of preparing protein models suitable for simulation, while allowing scalability up to entire superfamilies. A particular advantage of this approach can be found in the construction of kinetic models of conformational dynamics-such as Markov state models (MSMs)-which benefit from a diverse array of initial configurations that span the accessible conformational states to aid sampling. We demonstrate the power of this approach by constructing models for all catalytic domains in the human tyrosine kinase family, using all available kinase catalytic domain structures from any organism as structural templates. Ensembler is free and open source software licensed under the GNU General Public License (GPL) v2. It is compatible with Linux and OS X. The latest release can be installed via the conda package manager, and the latest source can be downloaded from https://github.com/choderalab/ensembler.

  11. Low-Visibility Visual Simulation with Real Fog

    NASA Technical Reports Server (NTRS)

    Chase, Wendell D.

    1982-01-01

    An environmental fog simulation (EFS) attachment was developed to aid in the study of natural low-visibility visual cues and subsequently used to examine the realism effect upon the aircraft simulator visual scene. A review of the basic fog equations indicated that two major factors must be accounted for in the simulation of low visibility - one due to atmospheric attenuation and one due to veiling luminance. These factors are compared systematically by (1) comparing actual measurements to those computed from the fog equations, and (2) comparing runway-visual-range-related visual-scene contrast values with the calculated values. These values are also compared with the simulated equivalent equations and with contrast measurements obtained from a current electronic fog synthesizer to help identify areas in which improvements are needed. These differences in technique, the measured values, the features of both systems, a pilot opinion survey of the EFS fog, and improvements (by combining features of both systems) that are expected to significantly increase the potential as well as flexibility for producing a very high-fidelity, low-visibility visual simulation are discussed.

  12. Low-visibility visual simulation with real fog

    NASA Technical Reports Server (NTRS)

    Chase, W. D.

    1981-01-01

    An environmental fog simulation (EFS) attachment was developed to aid in the study of natural low-visibility visual cues and subsequently used to examine the realism effect upon the aircraft simulator visual scene. A review of the basic fog equations indicated that two major factors must be accounted for in the simulation of low visibility - one due to atmospheric attenuation and one due to veiling luminance. These factors are compared systematically by (1) comparing actual measurements to those computed from the fog equations, and (2) comparing runway-visual-range-related visual-scene contrast values with the calculated values. These values are also compared with the simulated equivalent equations and with contrast measurements obtained from a current electronic fog synthesizer to help identify areas in which improvements are needed. These differences in technique, the measured values, the features of both systems, a pilot opinion survey of the EFS fog, and improvements (by combining features of both systems) that are expected to significantly increase the potential as well as flexibility for producing a very high-fidelity low-visibility visual simulation are discussed.

  13. A novel track-before-detect algorithm based on optimal nonlinear filtering for detecting and tracking infrared dim target

    NASA Astrophysics Data System (ADS)

    Tian, Yuexin; Gao, Kun; Liu, Ying; Han, Lu

    2015-08-01

    Aiming at the nonlinear and non-Gaussian features of real infrared scenes, an optimal nonlinear filtering based algorithm for the infrared dim target track-before-detect application is proposed. It uses nonlinear theory to construct the state and observation models and uses the spectral-separation-based Wiener chaos expansion method to solve the stochastic differential equations of the constructed models. To improve computational efficiency, the most time-consuming operations, which are independent of the observation data, are processed in advance of the observation stage; the remaining fast, observation-dependent computations are carried out subsequently. Simulation results show that the algorithm possesses excellent detection performance and is more suitable for real-time processing.

  14. Investigation to realize a computationally efficient implementation of the high-order instantaneous-moments-based fringe analysis method

    NASA Astrophysics Data System (ADS)

    Gorthi, Sai Siva; Rajshekhar, Gannavarpu; Rastogi, Pramod

    2010-06-01

    Recently, a high-order instantaneous moments (HIM)-operator-based method was proposed for accurate phase estimation in digital holographic interferometry. The method relies on piecewise polynomial approximation of phase and subsequent evaluation of the polynomial coefficients from the HIM operator using single-tone frequency estimation. This work presents a comparative analysis of the performance of different single-tone frequency estimation techniques, such as the Fourier transform followed by optimization, estimation of signal parameters via rotational invariance techniques (ESPRIT), multiple signal classification (MUSIC), and iterative frequency estimation by interpolation on Fourier coefficients (IFEIF), within HIM-operator-based phase estimation. Simulation and experimental results demonstrate the potential of the IFEIF technique with respect to computational efficiency and estimation accuracy.
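
    The two-step idea behind such single-tone estimators (a coarse FFT peak search refined by interpolation on neighbouring Fourier coefficients) can be sketched as follows; this uses a Jacobsen-type interpolation rule for illustration, not the exact IFEIF iteration examined in the paper.

    ```python
    # Hedged sketch of single-tone frequency estimation: FFT peak + interpolation
    # on the neighbouring Fourier coefficients (Jacobsen-type rule, illustrative).
    import numpy as np

    def estimate_tone_frequency(x):
        """Estimate the normalized frequency (cycles/sample) of a single complex tone."""
        N = len(x)
        X = np.fft.fft(x)
        k = int(np.argmax(np.abs(X)))                # coarse estimate: FFT peak bin
        Xm, X0, Xp = X[(k - 1) % N], X[k], X[(k + 1) % N]
        # fractional bin offset from one standard interpolation rule on the
        # Fourier coefficients; the result lies roughly in [-0.5, 0.5]
        delta = np.real((Xm - Xp) / (2 * X0 - Xm - Xp))
        return (k + delta) / N

    # Usage: a noisy complex tone at 0.1205 cycles/sample.
    rng = np.random.default_rng(1)
    N = 256
    n = np.arange(N)
    x = np.exp(2j * np.pi * 0.1205 * n) \
        + 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
    print(estimate_tone_frequency(x), "vs true 0.1205")
    ```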

  15. Numerical Analysis of a Pulse Detonation Cross Flow Heat Load Experiment

    NASA Technical Reports Server (NTRS)

    Paxson, Daniel E.; Naples, Andrew; Hoke, John L.; Schauer, Fred

    2011-01-01

    A comparison between experimentally measured and numerically simulated, time-averaged, point heat transfer rates in a pulse detonation engine (PDE) is presented. The comparison includes measurements and calculations for heat transfer to a cylinder in crossflow and to the tube wall itself using a novel spool design. Measurements are obtained at several locations and under several operating conditions. The measured and computed results are shown to be in substantial agreement, thereby validating the modeling approach. The model, which is based on computational fluid dynamics (CFD), is then used to interpret the results. A preheating of the incoming fuel charge is predicted, which results in increased volumetric flow and subsequent overfilling. The effect is validated with additional measurements.

  16. Simulation on Natural Convection of a Nanofluid along an Isothermal Inclined Plate

    NASA Astrophysics Data System (ADS)

    Mitra, Asish

    2017-08-01

    A numerical algorithm is presented for studying laminar natural convection flow of a nanofluid along an isothermal inclined plate. By means of similarity transformation, the original nonlinear partial differential equations of flow are transformed to a set of nonlinear ordinary differential equations. Subsequently, they are reduced to a first-order system and integrated using Newton-Raphson and adaptive Runge-Kutta methods. The computer codes for this numerical analysis are developed in the MATLAB environment. Dimensionless velocity, temperature profiles and nanoparticle concentration for various angles of inclination are illustrated graphically. The effects of Prandtl number, Brownian motion parameter and thermophoresis parameter on Nusselt number are also discussed. The results of the present simulation are then compared with previous results available in the literature, with good agreement.
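
    The numerical strategy described above (reduction to a first-order system, adaptive Runge-Kutta integration, and a root search on the missing initial condition) is illustrated below on the much simpler Blasius boundary-layer problem. This is not the paper's MATLAB code or its nanofluid equations, and Brent's method stands in for a hand-coded Newton-Raphson iteration.

    ```python
    # Hedged sketch: shooting method with an adaptive Runge-Kutta integrator,
    # applied to the Blasius problem f''' + 0.5*f*f'' = 0, f(0)=f'(0)=0, f'(inf)=1.
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import brentq

    def blasius_rhs(eta, y):
        f, fp, fpp = y                      # first-order system y = [f, f', f'']
        return [fp, fpp, -0.5 * f * fpp]

    def shoot(s, eta_max=10.0):
        """Integrate with guessed f''(0) = s; return the far-field residual f'(eta_max) - 1."""
        sol = solve_ivp(blasius_rhs, (0.0, eta_max), [0.0, 0.0, s],
                        method="RK45", rtol=1e-8, atol=1e-10)
        return sol.y[1, -1] - 1.0

    # Root-find on the missing initial slope (Brent's method in place of Newton-Raphson).
    s_star = brentq(shoot, 0.1, 1.0)
    print("f''(0) =", s_star)               # ~0.332 for the Blasius problem
    ```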

  17. Using thermodynamic integration MD simulation to compute relative protein-ligand binding free energy of a GSK3β kinase inhibitor and its analogs.

    PubMed

    Lee, Hsing-Chou; Hsu, Wen-Chi; Liu, An-Lun; Hsu, Chia-Jen; Sun, Ying-Chieh

    2014-06-01

    Thermodynamic integration molecular dynamics simulation was used to investigate how TI-MD simulation performs in reproducing the relative protein-ligand binding free energy of a pair of analogous GSK3β kinase inhibitors with available experimental data (see Fig. 1), and to predict the affinity for other analogs. The computation for the pair gave a ΔΔG of 1.0 kcal/mol, which was in reasonably good agreement with the experimental value of -0.1 kcal/mol. The error bar was estimated at 0.5 kcal/mol. Subsequently, we employed the same protocol to proceed with simulations to find analogous inhibitors with a stronger affinity. Four analogs with a substitution at one site inside the binding pocket were the first to be tried, but no significant enhancement in affinity was found. Subsequent simulations for another 7 analogs were focused on substitutions at the benzene ring of another site, which gave two analogs (analogs 9 and 10) with ΔΔG values of -0.6 and -0.8 kcal/mol, respectively. Both analogs had an OH group at the meta position and another OH group at the ortho position at the other side of the benzene ring, as shown in Table 3. To explore further, another 4 analogs with this characteristic were investigated. Three analogs with ΔΔG values of -2.2, -1.7 and -1.2 kcal/mol, respectively, were found. Hydrogen bond analysis suggested that the additional hydrogen bonds of the added OH groups with Gln185 and/or Asn64, which did not appear in the reference inhibitor or in the analogs with only one substitution among the examined cases, were the main contributors to an enhanced affinity. A prediction for better inhibitors should interest experimentalists of enzyme and/or cell assays. Analysis of the interactions between GSK3β kinase and the investigated analogs will be useful in the design of GSK3β kinase inhibitors for compounds of this class. Copyright © 2014 Elsevier Inc. All rights reserved.
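
    The quantity estimated throughout this record is the thermodynamic-integration free energy, ΔG = ∫₀¹ ⟨∂H/∂λ⟩ dλ. The sketch below shows how the window averages are combined by quadrature and how a relative ΔΔG follows from a thermodynamic cycle; all numbers are placeholders, not data from the paper.

    ```python
    # Hedged sketch of thermodynamic integration: trapezoidal quadrature of
    # <dH/dlambda> over coupling-parameter windows. Values below are made up.
    import numpy as np

    lambdas = np.linspace(0.0, 1.0, 11)                      # lambda windows
    dH_dlambda = np.array([ 4.1, 3.2, 2.5, 1.8, 1.1, 0.4,    # hypothetical window
                           -0.3, -1.0, -1.6, -2.3, -2.9])    # averages, kcal/mol

    # Delta G = integral of <dH/dlambda> over lambda (trapezoidal rule).
    delta_G = np.sum(0.5 * (dH_dlambda[1:] + dH_dlambda[:-1]) * np.diff(lambdas))
    print(f"Delta G = {delta_G:.2f} kcal/mol")

    # A relative binding free energy comes from a thermodynamic cycle, e.g. the
    # difference between the bound-leg and solvent-leg transformations (placeholders).
    delta_G_bound, delta_G_solvent = delta_G, delta_G - 1.0
    print(f"Delta Delta G = {delta_G_bound - delta_G_solvent:.2f} kcal/mol")
    ```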

  18. Non-equilibrium hydrogen ionization in 2D simulations of the solar atmosphere

    NASA Astrophysics Data System (ADS)

    Leenaarts, J.; Carlsson, M.; Hansteen, V.; Rutten, R. J.

    2007-10-01

    Context: The ionization of hydrogen in the solar chromosphere and transition region does not obey LTE or instantaneous statistical equilibrium because the timescale is long compared with important hydrodynamical timescales, especially of magneto-acoustic shocks. Since the pressure, temperature, and electron density depend sensitively on hydrogen ionization, numerical simulation of the solar atmosphere requires non-equilibrium treatment of all pertinent hydrogen transitions. The same holds for any diagnostic application employing hydrogen lines. Aims: To demonstrate the importance and to quantify the effects of non-equilibrium hydrogen ionization, both on the dynamical structure of the solar atmosphere and on hydrogen line formation, in particular Hα. Methods: We implement an algorithm to compute non-equilibrium hydrogen ionization and its coupling into the MHD equations within an existing radiation MHD code, and perform a two-dimensional simulation of the solar atmosphere from the convection zone to the corona. Results: Analysis of the simulation results and comparison to a companion simulation assuming LTE shows that: a) non-equilibrium computation delivers much smaller variations of the chromospheric hydrogen ionization than for LTE. The ionization is smaller within shocks but subsequently remains high in the cool intershock phases. As a result, the chromospheric temperature variations are much larger than for LTE because in non-equilibrium, hydrogen ionization is a less effective internal energy buffer. The actual shock temperatures are therefore higher and the intershock temperatures lower. b) The chromospheric populations of the hydrogen n = 2 level, which governs the opacity of Hα, are coupled to the ion populations. They are set by the high temperature in shocks and subsequently remain high in the cool intershock phases. c) The temperature structure and the hydrogen level populations differ much between the chromosphere above photospheric magnetic elements and above quiet internetwork. d) The hydrogen n = 2 population and column density are persistently high in dynamic fibrils, suggesting that these obtain their visibility from being optically thick in Hα also at low temperature. Movie and Appendix A are only available in electronic form at http://www.aanda.org

  19. The computational nature of memory modification

    PubMed Central

    Gershman, Samuel J; Monfils, Marie-H; Norman, Kenneth A; Niv, Yael

    2017-01-01

    Retrieving a memory can modify its influence on subsequent behavior. We develop a computational theory of memory modification, according to which modification of a memory trace occurs through classical associative learning, but which memory trace is eligible for modification depends on a structure learning mechanism that discovers the units of association by segmenting the stream of experience into statistically distinct clusters (latent causes). New memories are formed when the structure learning mechanism infers that a new latent cause underlies current sensory observations. By the same token, old memories are modified when old and new sensory observations are inferred to have been generated by the same latent cause. We derive this framework from probabilistic principles, and present a computational implementation. Simulations demonstrate that our model can reproduce the major experimental findings from studies of memory modification in the Pavlovian conditioning literature. DOI: http://dx.doi.org/10.7554/eLife.23763.001 PMID:28294944

  20. Physical-geometric optics method for large size faceted particles.

    PubMed

    Sun, Bingqiang; Yang, Ping; Kattawar, George W; Zhang, Xiaodong

    2017-10-02

    A new physical-geometric optics method is developed to compute the single-scattering properties of faceted particles. It incorporates a general absorption vector to accurately account for inhomogeneous wave effects, and subsequently yields analytical formulas that are effective and computationally efficient for absorptive scattering particles. A bundle of rays incident on a certain facet can be traced as a single beam. For a beam incident on multiple facets, a systematic beam-splitting technique based on computer graphics is used to split the original beam into several sub-beams so that each sub-beam is incident only on an individual facet. The new beam-splitting technique significantly reduces the computational burden. The present physical-geometric optics method can be generalized to arbitrary faceted particles with either convex or concave shapes and with a homogeneous or an inhomogeneous (e.g., a particle with a core) composition. The single-scattering properties of irregular convex homogeneous and inhomogeneous hexahedra are simulated and compared to their counterparts from two other methods, including a numerically rigorous method.

  1. Micro-structurally detailed model of a therapeutic hydrogel injectate in a rat biventricular cardiac geometry for computational simulations

    PubMed Central

    Sirry, Mazin S.; Davies, Neil H.; Kadner, Karen; Dubuis, Laura; Saleh, Muhammad G.; Meintjes, Ernesta M.; Spottiswoode, Bruce S.; Zilla, Peter; Franz, Thomas

    2013-01-01

    Biomaterial injection-based therapies have shown cautious success in restoration of cardiac function and prevention of adverse remodelling into heart failure after myocardial infarction (MI). However, the underlying mechanisms are not well understood. Computational studies have utilised simplified representations of the therapeutic myocardial injectates. Wistar rats underwent experimental infarction followed by immediate injection of polyethylene glycol hydrogel in the infarct region. Hearts were explanted, cryo-sectioned and the region with the injectate histologically analysed. Histological micrographs were used to reconstruct the dispersed hydrogel injectate. Cardiac magnetic resonance imaging (CMRI) data from a healthy rat were used to obtain an end-diastolic biventricular geometry, which was subsequently adjusted and combined with the injectate model. The computational geometry of the injectate exhibited microscopic structural details found in situ. The combination of injectate and cardiac geometry provides realistic geometries for multiscale computational studies of intra-myocardial injectate therapies for the rat model that has been widely used for MI research. PMID:23682845

  2. The SCEC/USGS dynamic earthquake rupture code verification exercise

    USGS Publications Warehouse

    Harris, R.A.; Barall, M.; Archuleta, R.; Dunham, E.; Aagaard, Brad T.; Ampuero, J.-P.; Bhat, H.; Cruz-Atienza, Victor M.; Dalguer, L.; Dawson, P.; Day, S.; Duan, B.; Ely, G.; Kaneko, Y.; Kase, Y.; Lapusta, N.; Liu, Yajing; Ma, S.; Oglesby, D.; Olsen, K.; Pitarka, A.; Song, S.; Templeton, E.

    2009-01-01

    Numerical simulations of earthquake rupture dynamics are now common, yet it has been difficult to test the validity of these simulations because there have been few field observations and no analytic solutions with which to compare the results. This paper describes the Southern California Earthquake Center/U.S. Geological Survey (SCEC/USGS) Dynamic Earthquake Rupture Code Verification Exercise, where codes that simulate spontaneous rupture dynamics in three dimensions are evaluated and the results produced by these codes are compared using Web-based tools. This is the first time that a broad and rigorous examination of numerous spontaneous rupture codes has been performed—a significant advance in this science. The automated process developed to attain this achievement provides for a future where testing of codes is easily accomplished. Scientists who use computer simulations to understand earthquakes utilize a range of techniques. Most of these assume that earthquakes are caused by slip at depth on faults in the Earth, but hereafter the strategies vary. Among the methods used in earthquake mechanics studies are kinematic approaches and dynamic approaches. The kinematic approach uses a computer code that prescribes the spatial and temporal evolution of slip on the causative fault (or faults). These types of simulations are very helpful, especially since they can be used in seismic data inversions to relate the ground motions recorded in the field to slip on the fault(s) at depth. However, these kinematic solutions generally provide no insight into the physics driving the fault slip or information about why the involved fault(s) slipped that much (or that little). In other words, these kinematic solutions may lack information about the physical dynamics of earthquake rupture that will be most helpful in forecasting future events. To help address this issue, some researchers use computer codes to numerically simulate earthquakes and construct dynamic, spontaneous rupture (hereafter called “spontaneous rupture”) solutions. For these types of numerical simulations, rather than prescribing the slip function at each location on the fault(s), just the friction constitutive properties and initial stress conditions are prescribed. The subsequent stresses and fault slip spontaneously evolve over time as part of the elasto-dynamic solution. Therefore, spontaneous rupture computer simulations of earthquakes allow us to include everything that we know, or think that we know, about earthquake dynamics and to test these ideas against earthquake observations.

  3. Combining configurational energies and forces for molecular force field optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vlcek, Lukas; Sun, Weiwei; Kent, Paul R. C.

    While quantum chemical simulations have been increasingly used as an invaluable source of information for atomistic model development, the high computational expenses typically associated with these techniques often limit thorough sampling of the systems of interest. It is therefore of great practical importance to use all available information as efficiently as possible, and in a way that allows for consistent addition of constraints that may be provided by macroscopic experiments. We propose a simple approach that combines information from configurational energies and forces generated in a molecular dynamics simulation to increase the effective number of samples. Subsequently, this information is used to optimize a molecular force field by minimizing the statistical distance similarity metric. We also illustrate the methodology on an example of a trajectory of configurations generated in equilibrium molecular dynamics simulations of argon and water and compare the results with those based on the force matching method.

  4. Combining configurational energies and forces for molecular force field optimization

    DOE PAGES

    Vlcek, Lukas; Sun, Weiwei; Kent, Paul R. C.

    2017-07-21

    While quantum chemical simulations have been increasingly used as an invaluable source of information for atomistic model development, the high computational expenses typically associated with these techniques often limit thorough sampling of the systems of interest. It is therefore of great practical importance to use all available information as efficiently as possible, and in a way that allows for consistent addition of constraints that may be provided by macroscopic experiments. We propose a simple approach that combines information from configurational energies and forces generated in a molecular dynamics simulation to increase the effective number of samples. Subsequently, this information is used to optimize a molecular force field by minimizing the statistical distance similarity metric. We also illustrate the methodology on an example of a trajectory of configurations generated in equilibrium molecular dynamics simulations of argon and water and compare the results with those based on the force matching method.
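
    The general idea of combining configurational energies and forces in a single fitting objective can be sketched as follows; this uses a plain weighted least-squares loss on a toy Lennard-Jones pair with synthetic reference data, not the statistical-distance metric of the paper.

    ```python
    # Hedged sketch: fit a toy Lennard-Jones pair potential to synthetic reference
    # energies AND forces; including forces adds information from each configuration.
    import numpy as np
    from scipy.optimize import minimize

    def lj_energy_force(r, eps, sigma):
        sr6 = (sigma / r) ** 6
        energy = 4 * eps * (sr6 ** 2 - sr6)
        force = 24 * eps * (2 * sr6 ** 2 - sr6) / r      # -dU/dr
        return energy, force

    # Synthetic "reference" data generated with eps=0.25, sigma=3.4 plus noise.
    rng = np.random.default_rng(2)
    r_ref = np.linspace(3.2, 8.0, 40)
    E_ref, F_ref = lj_energy_force(r_ref, 0.25, 3.4)
    E_ref = E_ref + 0.005 * rng.standard_normal(r_ref.size)
    F_ref = F_ref + 0.010 * rng.standard_normal(r_ref.size)

    def objective(params, w_E=1.0, w_F=0.1):
        eps, sigma = params
        E, F = lj_energy_force(r_ref, eps, sigma)
        return w_E * np.mean((E - E_ref) ** 2) + w_F * np.mean((F - F_ref) ** 2)

    fit = minimize(objective, x0=[0.1, 3.0], method="Nelder-Mead")
    print("fitted eps, sigma:", fit.x)                   # should recover ~0.25, ~3.4
    ```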

  5. Computer-Assisted Interactive Documentary and Performance Arts in Illimitable Space

    NASA Astrophysics Data System (ADS)

    Sheridan, William Michael

    Winter can bring significant snow storm systems or nor'easters to New England. Understanding each factor that can affect nor'easters will allow forecasters to better predict the subsequent weather conditions. One important parameter is the sea surface temperature (SST) of the Atlantic Ocean, where many of these systems strengthen and gain much of their structure. The Weather Research and Forecasting (WRF) model was used to simulate four different nor'easters (Mar 2007, Dec 2007, Jan 2008, Dec 2010) using both observed and warmed SSTs. For the warmer SST simulations, the SSTs over the model domain were increased by 1°C. This change increased the total surface heat fluxes in all of the storms, and the resulting simulated storms were all more intense. The influence on the amount of snowfall over land was highly variable, depending on how close to the coastline the storms were and temperatures across the region.

  6. Multiscale simulation of red blood cell aggregation

    NASA Astrophysics Data System (ADS)

    Bagchi, P.; Popel, A. S.

    2004-11-01

    In humans and other mammals, aggregation of red blood cells (RBC) is a major determinant of blood viscosity in microcirculation under physiological and pathological conditions. Elevated levels of aggregation are often related to cardiovascular diseases, bacterial infection, diabetes, and obesity. Aggregation is a multiscale phenomenon that is governed by the molecular bond formation between adjacent cells, morphological and rheological properties of the cells, and the motion of the extra-cellular fluid in which the cells circulate. We have developed a simulation technique using front tracking methods for multiple fluids that includes the multiscale characteristics of aggregation. We will report the first-ever direct computer simulation of aggregation of deformable cells in shear flows. We will present results on the effect of shear rate, strength of the cross-bridging bonds, and the cell rheological properties on the rolling motion, deformation and subsequent breakage of an aggregate.

  7. Efficient generation of low-energy folded states of a model protein

    NASA Astrophysics Data System (ADS)

    Gordon, Heather L.; Kwan, Wai Kei; Gong, Chunhang; Larrass, Stefan; Rothstein, Stuart M.

    2003-01-01

    A number of short simulated annealing runs are performed on a highly-frustrated 46-"residue" off-lattice model protein. We perform, in an iterative fashion, a principal component analysis of the 946 nonbonded interbead distances, followed by two varieties of cluster analyses: hierarchical and k-means clustering. We identify several distinct sets of conformations with reasonably consistent cluster membership. Nonbonded distance constraints are derived for each cluster and are employed within a distance geometry approach to generate many new conformations, previously unidentified by the simulated annealing experiments. Subsequent analyses suggest that these new conformations are members of the parent clusters from which they were generated. Furthermore, several novel, previously unobserved structures with low energy were uncovered, augmenting the ensemble of simulated annealing results, and providing a complete distribution of low-energy states. The computational cost of this approach to generating low-energy conformations is small when compared to the expense of further Monte Carlo simulated annealing runs.
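
    The analysis pipeline outlined above (principal component analysis of the nonbonded inter-bead distances followed by clustering, with per-cluster distance bounds then feeding a distance-geometry step) can be sketched as follows; the data are synthetic stand-ins, not conformations of the 46-bead model.

    ```python
    # Hedged sketch: PCA of pairwise-distance descriptors followed by k-means
    # clustering; cluster-wise distance bounds could then serve as constraints.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(3)

    # Each row stands in for one annealed conformation described by its nonbonded
    # inter-bead distances (946 such distances for the 46-bead chain of the study).
    n_conf, n_dist = 200, 946
    distances = rng.standard_normal((n_conf, n_dist))
    distances[:100] += 2.0                     # two artificial conformational families

    pcs = PCA(n_components=5).fit_transform(distances)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pcs)
    print("cluster sizes:", np.bincount(labels))

    # Upper distance bounds per cluster, usable as distance-geometry constraints.
    upper_bounds = np.array([distances[labels == k].max(axis=0) for k in range(2)])
    print(upper_bounds.shape)
    ```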

  8. Application of conventional molecular dynamics simulation in evaluating the stability of apomyoglobin in urea solution

    PubMed Central

    Zhang, Dawei; Lazim, Raudah

    2017-01-01

    In this study, we exploited advances in computer technology to determine the stability of four apomyoglobin variants, namely wild type, E109A, E109G and G65A/G73A, by conducting conventional molecular dynamics simulations in explicit urea solution. Variations in RMSD, native contacts and solvent accessible surface area of the apomyoglobin variants during the simulation were calculated to probe the effect of mutation on the overall conformation of the protein. Subsequently, the mechanism leading to the destabilization of the apoMb variants was studied through the calculation of correlation matrix, principal component analyses, hydrogen bond analyses and RMSF. The results obtained here correlate well with the study conducted by Baldwin and Luo, which showed improved stability of apomyoglobin with E109A mutation and contrariwise for E109G and G65A/G73A mutation. These positive observations showcase the feasibility of exploiting MD simulation in determining protein stability prior to protein expression. PMID:28300210

  9. Ultrafast spectroscopy reveals subnanosecond peptide conformational dynamics and validates molecular dynamics simulation

    NASA Astrophysics Data System (ADS)

    Spörlein, Sebastian; Carstens, Heiko; Satzger, Helmut; Renner, Christian; Behrendt, Raymond; Moroder, Luis; Tavan, Paul; Zinth, Wolfgang; Wachtveitl, Josef

    2002-06-01

    Femtosecond time-resolved spectroscopy on model peptides with built-in light switches combined with computer simulation of light-triggered motions offers an attractive integrated approach toward the understanding of peptide conformational dynamics. It was applied to monitor the light-induced relaxation dynamics occurring on subnanosecond time scales in a peptide that was backbone-cyclized with an azobenzene derivative as optical switch and spectroscopic probe. The femtosecond spectra permit the clear distinguishing and characterization of the subpicosecond photoisomerization of the chromophore, the subsequent dissipation of vibrational energy, and the subnanosecond conformational relaxation of the peptide. The photochemical cis/trans-isomerization of the chromophore and the resulting peptide relaxations have been simulated with molecular dynamics calculations. The calculated reaction kinetics, as monitored by the energy content of the peptide, were found to match the spectroscopic data. Thus we verify that all-atom molecular dynamics simulations can quantitatively describe the subnanosecond conformational dynamics of peptides, strengthening confidence in corresponding predictions for longer time scales.

  10. Application of conventional molecular dynamics simulation in evaluating the stability of apomyoglobin in urea solution

    NASA Astrophysics Data System (ADS)

    Zhang, Dawei; Lazim, Raudah

    2017-03-01

    In this study, we exploited advances in computer technology to determine the stability of four apomyoglobin variants, namely wild type, E109A, E109G and G65A/G73A, by conducting conventional molecular dynamics simulations in explicit urea solution. Variations in RMSD, native contacts and solvent accessible surface area of the apomyoglobin variants during the simulation were calculated to probe the effect of mutation on the overall conformation of the protein. Subsequently, the mechanism leading to the destabilization of the apoMb variants was studied through the calculation of correlation matrix, principal component analyses, hydrogen bond analyses and RMSF. The results obtained here correlate well with the study conducted by Baldwin and Luo, which showed improved stability of apomyoglobin with E109A mutation and contrariwise for E109G and G65A/G73A mutation. These positive observations showcase the feasibility of exploiting MD simulation in determining protein stability prior to protein expression.

  11. Molecular Dynamics Studies of Self-Assembling Biomolecules and DNA-functionalized Gold Nanoparticles

    NASA Astrophysics Data System (ADS)

    Cho, Vince Y.

    This thesis is organized as follows. In Chapter 2, we use fully atomistic MD simulations to study the conformation of DNA molecules that link gold nanoparticles to form nanoparticle superlattice crystals. In Chapter 3, we study the self-assembly of peptide amphiphiles (PAs) into a cylindrical micelle fiber by using CGMD simulations. Compared to fully atomistic MD simulations, CGMD simulations prove to be computationally cost-efficient and reasonably accurate for exploring self-assembly, and are used in all subsequent chapters. In Chapter 4, we apply CGMD methods to study the self-assembly of small molecule-DNA hybrid (SMDH) building blocks into well-defined cage-like dimers, and reveal the role of kinetics and thermodynamics in this process. In Chapter 5, we extend the CGMD model for this system and find that the assembly of SMDHs can be fine-tuned by changing parameters. In Chapter 6, we explore superlattice crystal structures of DNA-functionalized gold nanoparticles (DNA-AuNP) with the CGMD model and compare the hybridization.

  12. Application of conventional molecular dynamics simulation in evaluating the stability of apomyoglobin in urea solution.

    PubMed

    Zhang, Dawei; Lazim, Raudah

    2017-03-16

    In this study, we exploited advances in computer technology to determine the stability of four apomyoglobin variants, namely wild type, E109A, E109G and G65A/G73A, by conducting conventional molecular dynamics simulations in explicit urea solution. Variations in RMSD, native contacts and solvent accessible surface area of the apomyoglobin variants during the simulation were calculated to probe the effect of mutation on the overall conformation of the protein. Subsequently, the mechanism leading to the destabilization of the apoMb variants was studied through the calculation of correlation matrix, principal component analyses, hydrogen bond analyses and RMSF. The results obtained here correlate well with the study conducted by Baldwin and Luo, which showed improved stability of apomyoglobin with E109A mutation and contrariwise for E109G and G65A/G73A mutation. These positive observations showcase the feasibility of exploiting MD simulation in determining protein stability prior to protein expression.

  13. Transperineal ultrasound-guided implantation of electromagnetic transponders in the prostatic fossa for localization and tracking during external beam radiation therapy.

    PubMed

    Garsa, Adam A; Verma, Vivek; Michalski, Jeff M; Gay, Hiram A

    2014-01-01

    To describe a transperineal ultrasound-guided technique for implantation of electromagnetic transponders into the prostatic fossa. Patients were placed in the dorsal lithotomy position, and local anesthetic was administered. On ultrasound, the bladder, urethra, vesicourethral anastomosis, rectum, and the prostatic fossa were carefully identified. Three transponders were implanted into the prostatic fossa under ultrasound guidance in a triangular configuration and implantation was verified by fluoroscopy. Patients underwent computed tomography (CT) simulation approximately 1 week later. All patients in this study were subsequently treated with intensity modulated radiation therapy (IMRT) to the prostatic fossa. From 2008 to 2012, 180 patients received transperineal implantation of electromagnetic transponders into the prostatic fossa and subsequently received IMRT. There were no cases of severe hematuria or rectal bleeding requiring intervention. There were no grade 3 or 4 toxicities. Three patients (1.7%) had a transponder missing on the subsequent CT simulation. Thirteen patients (7.3%) had transponder migration with a geometric residual that exceeded 2 mm for 3 consecutive days (5.6%) or rotation that exceeded 10 degrees for 5 consecutive days (1.7%). These patients underwent a resimulation CT scan to identify the new transponder coordinates. A transperineal technique for implantation of electromagnetic transponders into the prostatic fossa is safe and well tolerated, with no severe toxicity after implantation. There is a low rate of transponder loss or migration.

  14. Computationally Efficient Adaptive Beamformer for Ultrasound Imaging Based on QR Decomposition.

    PubMed

    Park, Jongin; Wi, Seok-Min; Lee, Jin S

    2016-02-01

    Adaptive beamforming methods for ultrasound imaging have been studied to improve image resolution and contrast. The most common approach is the minimum variance (MV) beamformer, which minimizes the power of the beamformed output while maintaining the response from the direction of interest constant. The method achieves higher resolution and better contrast than the delay-and-sum (DAS) beamformer, but it suffers from high computational cost. This cost is mainly due to the computation of the spatial covariance matrix and its inverse, which requires O(L^3) computations, where L denotes the subarray size. In this study, we propose a computationally efficient MV beamformer based on QR decomposition. The idea behind our approach is to transform the spatial covariance matrix into a scalar matrix σI, after which the apodization weights and the beamformed output are obtained without computing the matrix inverse. To do this, a QR decomposition algorithm, which can itself be executed at low cost, is used; the computational complexity is thereby reduced to O(L^2). In addition, our approach is mathematically equivalent to the conventional MV beamformer, thereby showing equivalent performance. The simulation and experimental results support the validity of our approach.
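
    The following sketch shows how a QR factorization of the snapshot matrix can replace the explicit covariance inverse in the minimum-variance weight computation; it illustrates the general idea with triangular solves, not necessarily the exact scalar-matrix transformation of the paper, and the array sizes are arbitrary.

    ```python
    # Hedged sketch: MV apodization weights w = R^{-1}a / (a^H R^{-1} a) computed
    # from the QR factorization of the snapshot matrix, avoiding an explicit inverse.
    import numpy as np
    from scipy.linalg import qr, solve_triangular

    def mv_weights_qr(snapshots, steering):
        """snapshots: (K, L) complex snapshots (K >= L); steering: (L,) constraint vector."""
        K = snapshots.shape[0]
        # Economy QR: snapshots = Q @ R_f, so the covariance R = R_f^H R_f / K.
        _, R_f = qr(snapshots, mode="economic")
        # Two triangular solves give R^{-1} a without forming R or its inverse.
        tmp = solve_triangular(R_f.conj().T, steering, lower=True)
        Rinv_a = K * solve_triangular(R_f, tmp, lower=False)
        return Rinv_a / (steering.conj() @ Rinv_a)

    # Usage on synthetic data: 64 snapshots of an 8-element subarray.
    rng = np.random.default_rng(4)
    X = rng.standard_normal((64, 8)) + 1j * rng.standard_normal((64, 8))
    a = np.ones(8, dtype=complex)
    w = mv_weights_qr(X, a)
    print(np.allclose(w.conj() @ a, 1.0))      # distortionless constraint holds
    ```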

  15. Exploring transmembrane transport through alpha-hemolysin with grid-steered molecular dynamics.

    PubMed

    Wells, David B; Abramkina, Volha; Aksimentiev, Aleksei

    2007-09-28

    The transport of biomolecules across cell boundaries is central to cellular function. While structures of many membrane channels are known, the permeation mechanism is known only for a select few. Molecular dynamics (MD) is a computational method that can provide an accurate description of permeation events at the atomic level, which is required for understanding the transport mechanism. However, due to the relatively short time scales accessible to this method, it is of limited utility. Here, we present a method for all-atom simulation of electric field-driven transport of large solutes through membrane channels, which in tens of nanoseconds can provide a realistic account of a permeation event that would require a millisecond simulation using conventional MD. In this method, the average distribution of the electrostatic potential in a membrane channel under a transmembrane bias of interest is determined first from an all-atom MD simulation. This electrostatic potential, defined on a grid, is subsequently applied to a charged solute to steer its permeation through the membrane channel. We apply this method to investigate permeation of DNA strands, DNA hairpins, and alpha-helical peptides through alpha-hemolysin. To test the accuracy of the method, we computed the relative permeation rates of DNA strands having different sequences and global orientations. The results of the G-SMD simulations were found to be in good agreement with experiment.
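
    The core step of this approach, applying to a charged solute a steering force derived from a pre-computed potential map on a grid, can be sketched as follows; this is not the authors' implementation, and the analytic potential map, grid, and units below are placeholders.

    ```python
    # Hedged sketch: steering force F = -q * grad(phi) from a gridded potential,
    # with the gradient precomputed on the grid and interpolated at the atom position.
    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    # Hypothetical potential map phi(x, y, z) on a regular grid.
    axis = np.linspace(-2.0, 2.0, 41)
    X, Y, Z = np.meshgrid(axis, axis, axis, indexing="ij")
    phi = 0.5 * (X**2 + Y**2) - 1.0 * Z        # placeholder analytic map

    # Precompute grad(phi) on the grid, then interpolate each component.
    dphi = np.gradient(phi, axis, axis, axis)
    grad_interp = [RegularGridInterpolator((axis, axis, axis), g) for g in dphi]

    def steering_force(position, charge):
        """Force on a point charge at `position` from the gridded potential."""
        grad_phi = np.array([g(position)[0] for g in grad_interp])
        return -charge * grad_phi

    print(steering_force(np.array([0.3, -0.1, 0.5]), charge=1.0))
    # ~[-0.3, 0.1, 1.0] for this placeholder map, since grad(phi) = (x, y, -1).
    ```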

  16. New atmospheric sensor analysis study

    NASA Technical Reports Server (NTRS)

    Parker, K. G.

    1989-01-01

    The functional capabilities of the ESAD Research Computing Facility are discussed. The system is used in processing atmospheric measurements which are used in the evaluation of sensor performance, conducting design-concept simulation studies, and also in modeling the physical and dynamical nature of atmospheric processes. The results may then be evaluated to furnish inputs into the final design specifications for new space sensors intended for future Spacelab, Space Station, and free-flying missions. In addition, data gathered from these missions may subsequently be analyzed to provide better understanding of requirements for numerical modeling of atmospheric phenomena.

  17. Application of programmable logic controllers to space simulation

    NASA Technical Reports Server (NTRS)

    Sushon, Janet

    1992-01-01

    Incorporating a state-of-the-art process control and instrumentation system into a complex system for thermal vacuum testing is discussed. The challenge was to connect several independent control systems provided by various vendors to a supervisory computer. This combination will sequentially control and monitor the process, collect the data, and transmit it to a color graphic system for subsequent manipulation. The vacuum system upgrade included: replacement of seventeen diffusion pumps with eight cryogenic pumps and one turbomolecular pump, replacing a relay-based control system, replacing vacuum instrumentation, and upgrading the data acquisition system.

  18. The influence of anesthesia and fluid-structure interaction on simulated shear stress patterns in the carotid bifurcation of mice.

    PubMed

    De Wilde, David; Trachet, Bram; De Meyer, Guido; Segers, Patrick

    2016-09-06

    Low and oscillatory wall shear stresses (WSS) near aortic bifurcations have been linked to the onset of atherosclerosis. In previous work, we calculated detailed WSS patterns in the carotid bifurcation of mice using a Fluid-structure interaction (FSI) approach. We subsequently fed the animals a high-fat diet and linked the results of the FSI simulations to those of atherosclerotic plaque location on a within-subject basis. However, these simulations were based on boundary conditions measured under anesthesia, while active mice might experience different hemodynamics. Moreover, the FSI technique for mouse-specific simulations is both time- and labor-intensive, and might be replaced by simpler and easier Computational Fluid Dynamics (CFD) simulations. The goal of the current work was (i) to compare WSS patterns based on anesthesia conditions to those representing active resting and exercising conditions; and (ii) to compare WSS patterns based on FSI simulations to those based on steady-state and transient CFD simulations. For each of the 3 computational techniques (steady state CFD, transient CFD, FSI) we performed 5 simulations: 1 for anesthesia, 2 for conscious resting conditions and 2 more for conscious active conditions. The inflow, pressure and heart rate were scaled according to representative in vivo measurements obtained from literature. When normalized by the maximal shear stress value, shear stress patterns were similar for the 3 computational techniques. For all activity levels, steady state CFD led to an overestimation of WSS values, while FSI simulations yielded a clear increase in WSS reversal at the outer side of the sinus of the external carotid artery that was not visible in transient CFD-simulations. Furthermore, the FSI simulations in the highest locomotor activity state showed a flow recirculation zone in the external carotid artery that was not present under anesthesia. This recirculation went hand in hand with locally increased WSS reversal. Our data show that FSI simulations are not necessary to obtain normalized WSS patterns, but indispensable to assess the oscillatory behavior of the WSS in mice. Flow recirculation and WSS reversal at the external carotid artery may occur during high locomotor activity while they are not present under anesthesia. These phenomena might thus influence plaque formation to a larger extent than what was previously assumed. Copyright © 2016 Elsevier Ltd. All rights reserved.
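
    One common way to quantify the WSS reversal discussed in this record is the oscillatory shear index, OSI = 0.5 * (1 - |time-averaged WSS vector| / time-averaged |WSS|); the sketch below computes it from a synthetic time series of WSS vectors and is not the post-processing pipeline of the paper.

    ```python
    # Hedged sketch: oscillatory shear index per surface point from a time series of
    # wall-shear-stress vectors (synthetic data; OSI = 0 unidirectional, 0.5 fully reversing).
    import numpy as np

    def oscillatory_shear_index(wss):
        """wss: (n_timesteps, n_points, 3) array of WSS vectors."""
        mean_vec_mag = np.linalg.norm(wss.mean(axis=0), axis=-1)   # |time-averaged vector|
        mean_mag = np.linalg.norm(wss, axis=-1).mean(axis=0)       # time-averaged magnitude
        return 0.5 * (1.0 - mean_vec_mag / mean_mag)

    # Two synthetic points: steady forward WSS versus fully reversing WSS.
    t = np.linspace(0.0, 1.0, 100, endpoint=False)
    zeros = np.zeros_like(t)
    steady = np.stack([1.0 + 0.2 * np.sin(2 * np.pi * t), zeros, zeros], axis=-1)
    reversing = np.stack([np.sin(2 * np.pi * t), zeros, zeros], axis=-1)
    wss = np.stack([steady, reversing], axis=1)                    # shape (100, 2, 3)
    print(oscillatory_shear_index(wss))                            # ~[0.0, 0.5]
    ```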

  19. Force and Stress along Simulated Dissociation Pathways of Cucurbituril-Guest Systems.

    PubMed

    Velez-Vega, Camilo; Gilson, Michael K

    2012-03-13

    The field of host-guest chemistry provides computationally tractable yet informative model systems for biomolecular recognition. We applied molecular dynamics simulations to study the forces and mechanical stresses associated with forced dissociation of aqueous cucurbituril-guest complexes with high binding affinities. First, the unbinding transitions were modeled with constant velocity pulling (steered dynamics) and a soft spring constant, to model atomic force microscopy (AFM) experiments. The computed length-force profiles yield rupture forces in good agreement with available measurements. We also used steered dynamics with high spring constants to generate paths characterized by a tight control over the specified pulling distance; these paths were then equilibrated via umbrella sampling simulations and used to compute time-averaged mechanical stresses along the dissociation pathways. The stress calculations proved to be informative regarding the key interactions determining the length-force profiles and rupture forces. In particular, the unbinding transition of one complex is found to be a stepwise process, which is initially dominated by electrostatic interactions between the guest's ammoniums and the host's carbonyl groups, and subsequently limited by the extraction of the guest's bulky bicyclooctane moiety; the latter step requires some bond stretching at the cucurbituril's extraction portal. Conversely, the dissociation of a second complex with a more slender guest is mainly driven by successive electrostatic interactions between the different guest's ammoniums and the host's carbonyl groups. The calculations also provide information on the origins of thermodynamic irreversibilities in these forced dissociation processes.
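
    The constant-velocity pulling protocol referred to above can be illustrated with a toy one-dimensional Brownian-dynamics model: a harmonic spring connects the pulled particle to an anchor moving at constant speed, and the rupture-like event appears as a peak in the spring-force trace. All parameters below are illustrative, not those of the cucurbituril-guest simulations.

    ```python
    # Hedged sketch: constant-velocity (spring-guided) pulling over a single barrier
    # in overdamped 1D dynamics; the peak spring force mimics an AFM rupture force.
    import numpy as np

    k_spring, v_pull = 5.0, 0.01        # soft spring constant, anchor velocity
    dt, n_steps, gamma = 0.01, 20000, 1.0
    rng = np.random.default_rng(5)

    def barrier_force(x):
        # -dU/dx for a Gaussian barrier U(x) = 2 * exp(-(x - 1)^2 / 0.1)
        return 4.0 * (x - 1.0) * np.exp(-((x - 1.0) ** 2) / 0.1) / 0.1

    x, spring_forces = 0.0, []
    for step in range(n_steps):
        anchor = v_pull * step * dt
        f_spring = k_spring * (anchor - x)          # force from the pulling spring
        x += (f_spring + barrier_force(x)) / gamma * dt \
             + 0.05 * np.sqrt(2 * dt / gamma) * rng.standard_normal()
        spring_forces.append(f_spring)

    print("peak (rupture-like) spring force:", max(spring_forces))
    ```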

  20. The Effect of a Prior Dissection Simulation on Middle School Students' Dissection Performance and Understanding of the Anatomy and Morphology of the Frog

    NASA Astrophysics Data System (ADS)

    Akpan, Joseph Paul; Andre, Thomas

    1999-06-01

    Science teachers, school administrators, educators, and the scientific community are faced with ethical controversies over animal dissection in classrooms. Simulation has been proposed as a way of dealing with this issue. One intriguing previous finding was that use of an interactive videodisc dissection facilitated performance on a subsequent actual dissection. This study examined the prior use of simulation of frog dissection in improving students' actual dissection performance and learning of frog anatomy and morphology. There were three experimental conditions: simulation before dissection (SBD); dissection before simulation (DBS); or dissection-only (DO). Results of the study indicated that students receiving SBD performed significantly better than students receiving DBS or DO on both actual dissection and knowledge of the anatomy and morphology. Students' attitudes toward the use of animals for dissection did not change significantly from pretest to posttest and did not interact with treatment. The genders did not differ in achievement, but males were more favorable towards dissection and computers than were females.

  1. Exploring GPCR-Lipid Interactions by Molecular Dynamics Simulations: Excitements, Challenges, and the Way Forward.

    PubMed

    Sengupta, Durba; Prasanna, Xavier; Mohole, Madhura; Chattopadhyay, Amitabha

    2018-06-07

    G protein-coupled receptors (GPCRs) are seven transmembrane receptors that mediate a large number of cellular responses and are important drug targets. One of the current challenges in GPCR biology is to analyze the molecular signatures of receptor-lipid interactions and their subsequent effects on GPCR structure, organization, and function. Molecular dynamics simulation studies have been successful in predicting molecular determinants of receptor-lipid interactions. In particular, predicted cholesterol interaction sites appear to correspond well with experimentally determined binding sites and estimated time scales of association. In spite of several success stories, the methodologies in molecular dynamics simulations are still emerging. In this Feature Article, we provide a comprehensive overview of coarse-grain and atomistic molecular dynamics simulations of GPCR-lipid interaction in the context of experimental observations. In addition, we discuss the effect of secondary and tertiary structural constraints in coarse-grain simulations in the context of functional dynamics and structural plasticity of GPCRs. We envision that this comprehensive overview will help resolve differences in computational studies and provide a way forward.

  2. Assessing potentially dangerous medical actions with the computer-based case simulation portion of the USMLE step 3 examination.

    PubMed

    Harik, Polina; Cuddy, Monica M; O'Donovan, Seosaimhin; Murray, Constance T; Swanson, David B; Clauser, Brian E

    2009-10-01

    The 2000 Institute of Medicine report on patient safety brought renewed attention to the issue of preventable medical errors, and subsequently specialty boards and the National Board of Medical Examiners were encouraged to play a role in setting expectations around safety education. This paper examines potentially dangerous actions taken by examinees during the portion of the United States Medical Licensing Examination Step 3 that is particularly well suited to evaluating lapses in physician decision making, the Computer-based Case Simulation (CCS). Descriptive statistics and a general linear modeling approach were used to analyze dangerous actions ordered by 25,283 examinees that completed CCS for the first time between November 2006 and January 2008. More than 20% of examinees ordered at least one dangerous action with the potential to cause significant patient harm. The propensity to order dangerous actions may vary across clinical cases. The CCS format may provide a means of collecting important information about patient-care situations in which examinees may be more likely to commit dangerous actions and the propensity of examinees to order dangerous tests and treatments.

  3. Highlights of the Workshop

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.

    1997-01-01

    Economic stresses are forcing many industries to reduce cost and time-to-market, and to insert emerging technologies into their products. Engineers are asked to design faster, ever more complex systems. Hence, there is a need for novel design paradigms and effective design tools to reduce the design and development times. Several computational tools and facilities have been developed to support the design process. Some of these are described in subsequent presentations. The focus of the workshop is on the computational tools and facilities which have high potential for use in future design environment for aerospace systems. The outline for the introductory remarks is given. First, the characteristics and design drivers for future aerospace systems are outlined; second, simulation-based design environment, and some of its key modules are described; third, the vision for the next-generation design environment being planned by NASA, the UVA ACT Center and JPL is presented. The anticipated major benefits of the planned environment are listed; fourth, some of the government-supported programs related to simulation-based design are listed; and fifth, the objectives and format of the workshop are presented.

  4. Application of a Resource Theory for Magic States to Fault-Tolerant Quantum Computing.

    PubMed

    Howard, Mark; Campbell, Earl

    2017-03-03

    Motivated by their necessity for most fault-tolerant quantum computation schemes, we formulate a resource theory for magic states. First, we show that robustness of magic is a well-behaved magic monotone that operationally quantifies the classical simulation overhead for a Gottesman-Knill-type scheme using ancillary magic states. Our framework subsequently finds immediate application in the task of synthesizing non-Clifford gates using magic states. When magic states are interspersed with Clifford gates, Pauli measurements, and stabilizer ancillas (the most general synthesis scenario), the class of synthesizable unitaries is hard to characterize. Our techniques can place nontrivial lower bounds on the number of magic states required for implementing a given target unitary. Guided by these results, we have found new and optimal examples of such synthesis.
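
    For a single qubit, robustness of magic reduces to a small linear program over the six pure stabilizer states; the sketch below (an illustrative single-qubit instance, not the multi-qubit machinery of the paper) computes it with a generic linear-programming solver.

    ```python
    # Hedged sketch: R(rho) = min ||q||_1 subject to rho = sum_i q_i s_i, with s_i
    # the six single-qubit stabilizer states, posed as a linear program.
    import numpy as np
    from scipy.optimize import linprog

    # Bloch vectors of the six single-qubit stabilizer states.
    stab_bloch = np.array([[ 1, 0, 0], [-1, 0, 0],
                           [ 0, 1, 0], [ 0, -1, 0],
                           [ 0, 0, 1], [ 0, 0, -1]], dtype=float)

    def robustness_of_magic(bloch):
        """Minimize sum|q_i| s.t. sum_i q_i = 1 and sum_i q_i * bloch_i = bloch."""
        M = np.vstack([np.ones(6), stab_bloch.T])        # trace row + Bloch rows, (4, 6)
        b = np.concatenate([[1.0], bloch])
        # Split q = q_plus - q_minus so the 1-norm becomes a linear objective.
        A_eq = np.hstack([M, -M])
        res = linprog(np.ones(12), A_eq=A_eq, b_eq=b,
                      bounds=[(0, None)] * 12, method="highs")
        return res.fun

    # A "T-type" magic state (Bloch vector (1,1,1)/sqrt(3)) should give ~1.732,
    # while any stabilizer state gives 1.
    print(robustness_of_magic(np.array([1.0, 1.0, 1.0]) / np.sqrt(3)))
    print(robustness_of_magic(np.array([1.0, 0.0, 0.0])))
    ```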

  5. Combining Modeling and Gaming for Predictive Analytics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riensche, Roderick M.; Whitney, Paul D.

    2012-08-22

    Many of our most significant challenges involve people. While human behavior has long been studied, there are recent advances in computational modeling of human behavior. With advances in computational capabilities come increases in the volume and complexity of data that humans must understand in order to make sense of and capitalize on these modeling advances. Ultimately, models represent an encapsulation of human knowledge. One inherent challenge in modeling is efficient and accurate transfer of knowledge from humans to models, and subsequent retrieval. The simulated real-world environment of games presents one avenue for these knowledge transfers. In this paper we describe our approach of combining modeling and gaming disciplines to develop predictive capabilities, using formal models to inform game development, and using games to provide data for modeling.

  6. Interior thermal insulation systems for historical building envelopes

    NASA Astrophysics Data System (ADS)

    Jerman, Miloš; Solař, Miloš; Černý, Robert

    2017-11-01

    The design specifics of interior thermal insulation systems applied to historical building envelopes are described. The vapor-tight systems and systems based on capillary thermal insulation materials are taken into account as two basic options differing in building-physical considerations. The possibilities of hygrothermal analysis of renovated historical envelopes including laboratory methods, computer simulation techniques, and in-situ tests are discussed. It is concluded that the application of computational models for hygrothermal assessment of interior thermal insulation systems should always be performed with particular care. On the one hand, they present a very effective tool for both service life assessment and possible planning of subsequent reconstructions. On the other hand, the hygrothermal analysis of any historical building can involve quite a few potential uncertainties that may negatively affect the accuracy of the obtained results.

  7. Computational replication of the patient-specific stenting procedure for coronary artery bifurcations: From OCT and CT imaging to structural and hemodynamics analyses.

    PubMed

    Chiastra, Claudio; Wu, Wei; Dickerhoff, Benjamin; Aleiou, Ali; Dubini, Gabriele; Otake, Hiromasa; Migliavacca, Francesco; LaDisa, John F

    2016-07-26

    The optimal stenting technique for coronary artery bifurcations is still debated. With additional advances computational simulations can soon be used to compare stent designs or strategies based on verified structural and hemodynamics results in order to identify the optimal solution for each individual's anatomy. In this study, patient-specific simulations of stent deployment were performed for 2 cases to replicate the complete procedure conducted by interventional cardiologists. Subsequent computational fluid dynamics (CFD) analyses were conducted to quantify hemodynamic quantities linked to restenosis. Patient-specific pre-operative models of coronary bifurcations were reconstructed from CT angiography and optical coherence tomography (OCT). Plaque location and composition were estimated from OCT and assigned to models, and structural simulations were performed in Abaqus. Artery geometries after virtual stent expansion of Xience Prime or Nobori stents created in SolidWorks were compared to post-operative geometry from OCT and CT before being extracted and used for CFD simulations in SimVascular. Inflow boundary conditions based on body surface area, and downstream vascular resistances and capacitances were applied at branches to mimic physiology. Artery geometries obtained after virtual expansion were in good agreement with those reconstructed from patient images. Quantitative comparison of the distance between reconstructed and post-stent geometries revealed a maximum difference in area of 20.4%. Adverse indices of wall shear stress were more pronounced for thicker Nobori stents in both patients. These findings verify structural analyses of stent expansion, introduce a workflow to combine software packages for solid and fluid mechanics analysis, and underscore important stent design features from prior idealized studies. The proposed approach may ultimately be useful in determining an optimal choice of stent and position for each patient. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. Augmentation of thrombin generation in neonates undergoing cardiopulmonary bypass.

    PubMed

    Guzzetta, N A; Szlam, F; Kiser, A S; Fernandez, J D; Szlam, A D; Leong, T; Tanaka, K A

    2014-02-01

    Factor concentrates are currently available and becoming increasingly used off-label for treatment of bleeding. We compared recombinant activated factor VII (rFVIIa) with three-factor prothrombin complex concentrate (3F-PCC) for the ability to augment thrombin generation (TG) in neonatal plasma after cardiopulmonary bypass (CPB). First, we used a computer-simulated coagulation model to assess the impact of rFVIIa and 3F-PCC, and then performed similar measurements ex vivo using plasma from neonates undergoing CPB. Simulated TG was computed according to the coagulation factor levels from umbilical cord plasma and the therapeutic levels of rFVIIa, 3F-PCC, or both. Subsequently, 11 neonates undergoing cardiac surgery were enrolled. Two blood samples were obtained from each neonate: pre-CPB and post-CPB after platelet and cryoprecipitate transfusion. The post-CPB products sample was divided into control (no treatment), control plus rFVIIa (60 nM), and control plus 3F-PCC (0.3 IU/ml) aliquots. Three parameters of TG were measured ex vivo. The computer-simulated post-CPB model demonstrated that rFVIIa failed to substantially improve lag time, TG rate and peak thrombin without supplementing prothrombin. Ex vivo data showed that addition of rFVIIa post-CPB significantly shortened lag time; however, rate and peak were not statistically significantly improved. Conversely, 3F-PCC improved all TG parameters in parallel with increased prothrombin levels in both simulated and ex vivo post-CPB samples. Our data highlight the importance of prothrombin replacement in restoring TG. Despite a low content of FVII, 3F-PCC exerts potent procoagulant activity compared with rFVIIa ex vivo. Further clinical evaluation regarding the efficacy and safety of 3F-PCC is warranted.

  9. Ex Vivo Methods for Informing Computational Models of the Mitral Valve

    PubMed Central

    Bloodworth, Charles H.; Pierce, Eric L.; Easley, Thomas F.; Drach, Andrew; Khalighi, Amir H.; Toma, Milan; Jensen, Morten O.; Sacks, Michael S.; Yoganathan, Ajit P.

    2016-01-01

    Computational modeling of the mitral valve (MV) has potential applications for determining optimal MV repair techniques and risk of recurrent mitral regurgitation. Two key concerns for informing these models are (1) sensitivity of model performance to the accuracy of the input geometry, and, (2) acquisition of comprehensive data sets against which the simulation can be validated across clinically relevant geometries. Addressing the first concern, ex vivo micro-computed tomography (microCT) was used to image MVs at high resolution (~40 micron voxel size). Because MVs distorted substantially during static imaging, glutaraldehyde fixation was used prior to microCT. After fixation, MV leaflet distortions were significantly smaller (p<0.005), and detail of the chordal tree was appreciably greater. Addressing the second concern, a left heart simulator was designed to reproduce MV geometric perturbations seen in vivo in functional mitral regurgitation and after subsequent repair, and maintain compatibility with microCT. By permuting individual excised ovine MVs (n=5) through each state (healthy, diseased and repaired), and imaging with microCT in each state, a comprehensive data set was produced. Using this data set, work is ongoing to construct and validate high-fidelity MV biomechanical models. These models will seek to link MV function across clinically relevant states. PMID:27699507

  10. Code Modernization of VPIC

    NASA Astrophysics Data System (ADS)

    Bird, Robert; Nystrom, David; Albright, Brian

    2017-10-01

    The ability of scientific simulations to effectively deliver performant computation is increasingly being challenged by successive generations of high-performance computing architectures. Code development to support efficient computation on these modern architectures is both expensive and highly complex; if it is approached without due care, it may also not be directly transferable between subsequent hardware generations. Previous works have discussed techniques to support the process of adapting a legacy code for modern hardware generations, but despite the breakthroughs in the areas of mini-app development, portable performance, and cache-oblivious algorithms the problem still remains largely unsolved. In this work we demonstrate how a focus on platform-agnostic modern code development can be applied to Particle-in-Cell (PIC) simulations to facilitate effective scientific delivery. This work builds directly on our previous work optimizing VPIC, in which we replaced intrinsic-based vectorisation with compiler-generated auto-vectorization to improve the performance and portability of VPIC. In this work we present the use of a specialized SIMD queue for processing some particle operations, and also preview a GPU-capable OpenMP variant of VPIC. Finally, we include lessons learnt. Work performed under the auspices of the U.S. Dept. of Energy by the Los Alamos National Security, LLC Los Alamos National Laboratory under contract DE-AC52-06NA25396 and supported by the LANL LDRD program.

  11. A simulation of temperature influence on echolocation click beams of the Indo-Pacific humpback dolphin (Sousa chinensis).

    PubMed

    Song, Zhongchang; Zhang, Yu; Wang, Xianyan; Wei, Chong

    2017-10-01

    A finite element method was used to investigate the temperature influence on sound beams of the Indo-Pacific humpback dolphin. The numerical models of a dolphin, which originated from previous computed tomography (CT) scanning and physical measurement results, were used to investigate sound beam patterns of the dolphin in temperatures from 21 °C to 39 °C, in increments of 2 °C. The -3 dB beam widths across the temperatures ranged from 9.3° to 12.6°, and main beam angle ranged from 4.7° to 7.2° for these temperatures. The subsequent simulation suggested that the dolphin's sound beam patterns, side lobes in particular, were influenced by temperature.

  12. Initial verification and validation of RAZORBACK - A research reactor transient analysis code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Talley, Darren G.

    2015-09-01

    This report describes the work and results of the initial verification and validation (V&V) of the beta release of the Razorback code. Razorback is a computer code designed to simulate the operation of a research reactor (such as the Annular Core Research Reactor (ACRR)) by a coupled numerical solution of the point reactor kinetics equations, the energy conservation equation for fuel element heat transfer, and the mass, momentum, and energy conservation equations for the water cooling of the fuel elements. This initial V&V effort was intended to confirm that the code work to date shows good agreement between simulation and actual ACRR operations, indicating that the subsequent V&V effort for the official release of the code will be successful.
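
    The coupled point reactor kinetics solution that codes of this type are built around can be illustrated with a minimal sketch. The example below solves the standard point kinetics equations with six delayed-neutron groups using SciPy; the delayed-neutron data, neutron generation time, and reactivity step are generic illustrative values rather than Razorback or ACRR data, and the thermal-hydraulic feedback that Razorback couples to is omitted.

        # Minimal sketch of the point reactor kinetics equations with six
        # delayed-neutron groups, solved with SciPy. All parameter values are
        # illustrative only and are not taken from Razorback or the ACRR.
        import numpy as np
        from scipy.integrate import solve_ivp

        beta_i = np.array([2.1e-4, 1.4e-3, 1.3e-3, 2.6e-3, 7.5e-4, 2.7e-4])  # delayed fractions
        lam_i = np.array([0.0124, 0.0305, 0.111, 0.301, 1.14, 3.01])         # decay constants (1/s)
        beta = beta_i.sum()
        LAMBDA = 1.0e-5    # neutron generation time (s), illustrative
        rho = 0.5 * beta   # constant reactivity step, illustrative

        def rhs(t, y):
            n, c = y[0], y[1:]
            dn = (rho - beta) / LAMBDA * n + np.dot(lam_i, c)
            dc = beta_i / LAMBDA * n - lam_i * c
            return np.concatenate(([dn], dc))

        # Start from equilibrium precursor concentrations for unit power.
        n0 = 1.0
        c0 = beta_i / (lam_i * LAMBDA) * n0
        sol = solve_ivp(rhs, (0.0, 5.0), np.concatenate(([n0], c0)), max_step=1e-3)
        print("relative power after 5 s:", sol.y[0, -1])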

  13. Numerical Simulations of Dynamical Mass Transfer in Binaries

    NASA Astrophysics Data System (ADS)

    Motl, P. M.; Frank, J.; Tohline, J. E.

    1999-05-01

    We will present results from our ongoing research project to simulate dynamically unstable mass transfer in near contact binaries with mass ratios different from one. We employ a fully three-dimensional self-consistent field technique to generate synchronously rotating polytropic binaries. With our self-consistent field code we can create equilibrium binaries where one component is, by radius, within about 99% of filling its Roche lobe, for example. These initial configurations are evolved using a three-dimensional, Eulerian hydrodynamics code. We make no assumptions about the symmetry of the subsequent flow and the entire binary system is evolved self-consistently under the influence of its own gravitational potential. For a given mass ratio and polytropic index for the binary components, mass transfer via Roche lobe overflow can be predicted to be stable or unstable through simple theoretical arguments. The validity of the approximations made in the stability calculations is tested against our numerical simulations. We acknowledge support from the U.S. National Science Foundation through grants AST-9720771, AST-9528424, and DGE-9355007. This research has been supported, in part, by grants of high-performance computing time on NPACI facilities at the San Diego Supercomputer Center, the Texas Advanced Computing Center and through the PET program of the NAVOCEANO DoD Major Shared Resource Center in Stennis, MS.

  14. A Twist on the Richtmyer-Meshkov Instability

    NASA Astrophysics Data System (ADS)

    Rollin, Bertrand; Koneru, Rahul; Ouellet, Frederick

    2017-11-01

    The Richtmyer-Meshkov instability is caused by the interaction of a shock wave with a perturbed interface between two fluids of different densities. Typical contexts in which it plays a key role include inertial confinement fusion, supernovae or scramjets. However, little is known of the phenomenology of this instability if one of the interacting media is a dense solid-particle phase. In the context of an explosive dispersal of particles, this gas-particle variant of the Richtmyer-Meshkov instability may play a role in the late time formation of aerodynamically stable particle jets. Thus, this numerical experiment aims at shedding some light on this phenomenon with the help of high fidelity numerical simulations. Using an Eulerian-Lagrangian approach, we track trajectories of computational particles composing an initially corrugated solid particle curtain, in a two-dimensional planar geometry. This study explores the effects of the initial shape (designed using single mode and multimode perturbations) and volume fraction of the particle curtain on its subsequent evolution. Complexities associated with compaction of the curtain of particles to the random close packing limit are avoided by constraining simulations to modest initial volume fractions of particles. This work was supported by the U.S. DoE, NNSA, Advanced Simulation and Computing Program, as a Cooperative Agreement under the Predictive Science Academic Alliance Program, under Contract No. DE-NA0002378.

  15. Hybrid ray-FDTD model for the simulation of the ultrasonic inspection of CFRP parts

    NASA Astrophysics Data System (ADS)

    Jezzine, Karim; Ségur, Damien; Ecault, Romain; Dominguez, Nicolas; Calmon, Pierre

    2017-02-01

    Carbon Fiber Reinforced Polymers (CFRP) are commonly used in structural parts in the aeronautic industry, to reduce the weight of aircraft while maintaining high mechanical performance. Simulation of the ultrasonic inspections of these parts has to cope with the highly heterogeneous and anisotropic characteristics of these materials. To model the propagation of ultrasound in these composite structures, we propose two complementary approaches. The first one is based on a ray model predicting the propagation of the ultrasound in an anisotropic effective medium obtained from a homogenization of the material. The ray model is designed to deal with possibly curved parts and the subsequent continuously varying anisotropic orientations. The second approach is based on the coupling of the ray model with a finite-difference time-domain (FDTD) scheme. The ray model handles the ultrasonic propagation between the transducer and the FDTD computation zone that surrounds the composite part. In this way, the computational efficiency is preserved and the ultrasound scattering by the composite structure can be predicted. Inspections of flat or curved composite panels, as well as stiffeners, can be performed. The models have been implemented in the CIVA software platform and compared to experiments. We also present an application of the simulation to the performance demonstration of the adaptive inspection technique SAUL (Surface Adaptive Ultrasound).

  16. Estimation of the time-dependent radioactive source-term from the Fukushima nuclear power plant accident using atmospheric transport modelling

    NASA Astrophysics Data System (ADS)

    Schoeppner, M.; Plastino, W.; Budano, A.; De Vincenzi, M.; Ruggieri, F.

    2012-04-01

    Several nuclear reactors at the Fukushima Dai-ichi power plant were severely damaged by the Tōhoku earthquake and the subsequent tsunami in March 2011. Due to the extremely difficult on-site situation it has not been possible to directly determine the emissions of radioactive material. However, during the following days and weeks radionuclides of caesium-137 and iodine-131 (amongst others) were detected at monitoring stations throughout the world. Atmospheric transport models are able to simulate the worldwide dispersion of particles according to location, time and meteorological conditions following the release. The Lagrangian atmospheric transport model Flexpart is used by many authorities and has been proven to make valid predictions in this regard. The Flexpart software was first ported to a local cluster computer at the Grid Lab of INFN and Department of Physics of University of Roma Tre (Rome, Italy) and subsequently also to the European Mediterranean Grid (EUMEDGRID). With this computing power available, it has been possible to simulate the transport of particles originating from the Fukushima Dai-ichi plant site. Using the time series of the sampled concentration data and the assumption that the Fukushima accident was the only source of these radionuclides, it has been possible to estimate the time-dependent source-term for fourteen days following the accident using the atmospheric transport model. A reasonable agreement has been obtained between the modelling results and the estimated radionuclide release rates from the Fukushima accident.
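
    The inversion step described above can be sketched generically: if an atmospheric transport model supplies the sensitivity of each observed concentration to a unit release in each time interval, a time-dependent source term follows from a constrained least-squares fit. The sensitivity matrix, observations, and release magnitudes below are synthetic placeholders rather than Flexpart output or monitoring-station data, and non-negative least squares stands in for whatever fitting procedure the authors actually used.

        # Generic sketch of source-term estimation from sampled concentrations.
        # M holds placeholder source-receptor sensitivities (concentration per
        # unit release), y holds noisy synthetic observations; the daily release
        # rates s >= 0 are recovered by non-negative least squares.
        import numpy as np
        from scipy.optimize import nnls

        rng = np.random.default_rng(42)
        n_obs, n_days = 200, 14                      # observations, release intervals

        M = rng.random((n_obs, n_days)) * 1e-12      # placeholder sensitivities
        true_release = rng.random(n_days) * 1e15     # placeholder "true" releases (Bq/day)
        y = M @ true_release * (1.0 + 0.05 * rng.standard_normal(n_obs))  # noisy observations

        s_est, rnorm = nnls(M, y)
        print("true releases (1e15 Bq/day):     ", np.round(true_release / 1e15, 3))
        print("estimated releases (1e15 Bq/day):", np.round(s_est / 1e15, 3))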

  17. X-ray micro computed tomography for the visualization of an atherosclerotic human coronary artery

    NASA Astrophysics Data System (ADS)

    Matviykiv, Sofiya; Buscema, Marzia; Deyhle, Hans; Pfohl, Thomas; Zumbuehl, Andreas; Saxer, Till; Müller, Bert

    2017-06-01

    Atherosclerosis refers to narrowing or blocking of blood vessels that can lead to a heart attack, chest pain or stroke. Constricted segments of diseased arteries exhibit considerably increased wall shear stress, compared to the healthy ones. One of the possibilities to improve patient’s treatment is the application of nano-therapeutic approaches, based on shear stress sensitive nano-containers. In order to tailor the chemical composition and subsequent physical properties of such liposomes, one has to know precisely the morphology of critically stenosed arteries at micrometre resolution. It is often obtained by means of histology, which has the drawback of offering only two-dimensional information. Additionally, it requires the artery to be decalcified before sectioning, which might lead to deformations within the tissue. Micro computed tomography (μCT) enables the three-dimensional (3D) visualization of soft and hard tissues at micrometre level. μCT allows lumen segmentation that is crucial for subsequent flow simulation analysis. In this communication, tomographic images of a human coronary artery before and after decalcification are qualitatively and quantitatively compared. We analyse the cross section of the diseased human coronary artery before and after decalcification, and calculate the lumen area of both samples.

  18. A Model to Simulate Titanium Behavior in the Iron Blast Furnace Hearth

    NASA Astrophysics Data System (ADS)

    Guo, Bao-Yu; Zulli, Paul; Maldonado, Daniel; Yu, Ai-Bing

    2010-08-01

    The erosion of hearth refractory is a major limitation to the campaign life of a blast furnace. Titanium from titania addition in the burden or tuyere injection can react with carbon and nitrogen in molten pig iron to form titanium carbonitride, giving the so-called titanium-rich scaffold or buildup on the hearth surface, to protect the hearth from subsequent erosion. In the current article, a mathematical model based on computational fluid dynamics is proposed to simulate the behavior of solid particles in the liquid iron. The model considers the fluid/solid particle flow through a packed bed, conjugate heat transfer, species transport, and the thermodynamics of key chemical reactions. A region of high solid concentration is predicted at the hearth bottom surface. Regions of solid formation and dissolution can be identified, which depend on the local temperature and chemical equilibrium. The sensitivity to the key model parameters for the solid phase is analyzed. The model provides an insight into the fundamental mechanism of solid particle formation, and it may form a basic model for subsequent development to study the formation of titanium scaffold in the blast furnace hearth.

  19. Two dimensional kinetic analysis of electrostatic harmonic plasma waves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fonseca-Pongutá, E. C.; Ziebell, L. F.; Gaelzer, R.

    2016-06-15

    Electrostatic harmonic Langmuir waves are virtual modes excited in weakly turbulent plasmas, first observed in early laboratory beam-plasma experiments as well as in rocket-borne active experiments in space. However, their unequivocal presence was confirmed through computer simulated experiments and subsequently theoretically explained. The peculiarity of harmonic Langmuir waves is that while their existence requires a nonlinear response, their excitation mechanism and subsequent early time evolution are governed by an essentially linear process. One of the unresolved theoretical issues regards the role of the nonlinear wave-particle interaction process over a longer evolution time period. Another outstanding issue is that existing theories for these modes are limited to one-dimensional space. The present paper carries out a two-dimensional theoretical analysis of fundamental and (first) harmonic Langmuir waves for the first time. The result shows that the harmonic Langmuir wave is essentially governed by (quasi)linear processes and that nonlinear wave-particle interaction plays no significant role in the time evolution of the wave spectrum. The numerical solutions of the two-dimensional wave spectra for fundamental and harmonic Langmuir waves are also found to be consistent with those obtained by the direct particle-in-cell simulation method reported in the literature.

  20. Mab's orbital motion explained

    NASA Astrophysics Data System (ADS)

    Kumar, K.; de Pater, I.; Showalter, M. R.

    2015-07-01

    We explored the hypothesis that Mab's anomalous orbital motion, as deduced from Hubble Space Telescope (HST) data (Showalter, M.R., Lissauer, J.J. [2006]. Science (New York, NY) 311, 973-977), is the result of gravitational interactions with a putative suite of large bodies in the μ-ring. We conducted simulations to compute the gravitational effect of Mab (a recently discovered Uranian moon) on a cloud of test particles. Subsequently, by employing the data extracted from the test particle simulations, we executed random walk simulations to compute the back-reaction of nearby perturbers on Mab. By generating simulated observation metrics, we compared our results to the data retrieved from the HST. Our results indicate that the longitude residual change noted in the HST data (Δλ_r,Mab ≈ 1 deg) is well matched by our simulations. The eccentricity variations (Δe_Mab ≈ 10⁻³) are however typically two orders of magnitude too small. We present a variety of reasons that could account for this discrepancy. The nominal scenario that we investigated assumes a perturber ring mass (m_ring) of 1 m_Mab (Mab's mass) and a perturber ring number density (ρ_n,ring) of 10 perturbers per 3 R_Hill,Mab (Mab's Hill radius). This effectively translates to a few tens of perturbers with radii of approximately 2-3 km, depending on the albedo assumed. The results obtained also include an interesting litmus test: variations of Mab's inclination on the order of the eccentricity changes should be observable. Our work provides clues for further investigation into the tantalizing prospect that the Mab/μ-ring system is undergoing re-accretion after a recent catastrophic disruption.

  1. Rupture Dynamics and Seismic Radiation on Rough Faults for Simulation-Based PSHA

    NASA Astrophysics Data System (ADS)

    Mai, P. M.; Galis, M.; Thingbaijam, K. K. S.; Vyas, J. C.; Dunham, E. M.

    2017-12-01

    Simulation-based ground-motion predictions may augment PSHA studies in data-poor regions or provide additional shaking estimations, including seismic waveforms, for critical facilities. Validation and calibration of such simulation approaches, based on observations and GMPEs, is important for engineering applications, while seismologists push to include the precise physics of the earthquake rupture process and seismic wave propagation in the 3D heterogeneous Earth. Geological faults comprise both large-scale segmentation and small-scale roughness that determine the dynamics of the earthquake rupture process and its radiated seismic wavefield. We investigate how different parameterizations of fractal fault roughness affect the rupture evolution and resulting near-fault ground motions. Rupture incoherence induced by fault roughness generates a realistic ω⁻² decay for high-frequency displacement amplitude spectra. Waveform characteristics and GMPE-based comparisons corroborate that these rough-fault rupture simulations generate realistic synthetic seismograms for subsequent engineering application. Since dynamic rupture simulations are computationally expensive, we develop kinematic approximations that emulate the observed dynamics. Simplifying the rough-fault geometry, we find that perturbations in local moment tensor orientation are important, while perturbations in local source location are not. Thus, a planar fault can be assumed if the local strike, dip, and rake are maintained. The dynamic rake angle variations are anti-correlated with the local dip angles. Based on a dynamically consistent Yoffe source-time function, we show that the seismic wavefield of the approximated kinematic rupture reproduces the seismic radiation of the full dynamic source process well. Our findings provide an innovative pseudo-dynamic source characterization that captures fault roughness effects on rupture dynamics. Including the correlations between kinematic source parameters, we present a new pseudo-dynamic rupture modeling approach for computing broadband ground-motion time-histories for simulation-based PSHA.

  2. Near- and far-field aerodynamics in insect hovering flight: an integrated computational study.

    PubMed

    Aono, Hikaru; Liang, Fuyou; Liu, Hao

    2008-01-01

    We present the first integrative computational fluid dynamics (CFD) study of near- and far-field aerodynamics in insect hovering flight using a biology-inspired, dynamic flight simulator. This simulator, which has been built to encompass multiple mechanisms and principles related to insect flight, is capable of 'flying' an insect on the basis of realistic wing-body morphologies and kinematics. Our CFD study integrates near- and far-field wake dynamics and shows the detailed three-dimensional (3D) near- and far-field vortex flows: a horseshoe-shaped vortex is generated and wraps around the wing in the early down- and upstroke; subsequently, the horseshoe-shaped vortex grows into a doughnut-shaped vortex ring, with an intense jet-stream present in its core, forming the downwash; and eventually, the doughnut-shaped vortex rings of the wing pair break up into two circular vortex rings in the wake. The computed aerodynamic forces show reasonable agreement with experimental results in terms of both the mean force (vertical, horizontal and sideslip forces) and the time course over one stroke cycle (lift and drag forces). A large amount of lift force (approximately 62% of total lift force generated over a full wingbeat cycle) is generated during the upstroke, most likely due to the presence of intensive and stable, leading-edge vortices (LEVs) and wing tip vortices (TVs); and correspondingly, a much stronger downwash is observed compared to the downstroke. We also estimated hovering energetics based on the computed aerodynamic and inertial torques, and powers.

  3. A molecular dynamics study of freezing in a confined geometry

    NASA Technical Reports Server (NTRS)

    Ma, Wen-Jong; Banavar, Jayanth R.; Koplik, Joel

    1992-01-01

    The dynamics of freezing of a Lennard-Jones liquid in narrow channels bounded by molecular walls is studied by computer simulation. The time development of ordering is quantified and a novel freezing mechanism is observed. The liquid forms layers and subsequent in-plane ordering within a layer is accompanied by a sharpening of the layer in the transverse direction. The effects of channel size, the methods of quench, the liquid-wall interaction and the roughness of walls on the freezing mechanism are elucidated. Comparison with recent experiments on freezing in confined geometries is presented.

  4. Initial testing of a variable-stroke Stirling engine

    NASA Technical Reports Server (NTRS)

    Thieme, L. G.

    1985-01-01

    In support of the U.S. Department of Energy's Stirling Engine Highway Vehicle Systems Program, NASA Lewis Research Center is evaluating variable-stroke control for Stirling engines. The engine being tested is the Advenco Stirling engine; this engine was manufactured by Philips Research Laboratories of the Netherlands and uses a variable-angle swash-plate drive to achieve variable-stroke operation. The engine is described, initial steady-state test data taken at Lewis are presented, and a major drive system failure and subsequent modifications are described. Computer simulation results are presented to show potential part-load efficiency gains with variable-stroke control.

  5. Quantifying learning in medical students during a critical care medicine elective: a comparison of three evaluation instruments.

    PubMed

    Rogers, P L; Jacob, H; Rashwan, A S; Pinsky, M R

    2001-06-01

    To compare three different evaluative instruments and determine which is able to measure different aspects of medical student learning. Student learning was evaluated by using written examinations, objective structured clinical examination, and patient simulator that used two clinical scenarios before and after a structured critical care elective, by using a crossover design. Twenty-four 4th-yr students enrolled in the critical care medicine elective. All students took a multiple-choice written examination; evaluated a live simulated critically ill patient, requested data from a nurse, and intervened as appropriate at different stations (objective structured clinical examination); and evaluated the computer-controlled patient simulator and intervened as appropriate. Students' knowledge was assessed by using a multiple-choice examination containing the same data incorporated into the other examinations. Student performance on the objective structured clinical examination was evaluated at five stations. Both objective structured clinical examination and simulator tests were videotaped for subsequent scoring of responses, quality of responses, and response time. The videotapes were reviewed for specific behaviors by faculty masked to time of examination. Students were expected to perform the following: a) assess airway, breathing, and circulation; b) prepare a mannequin for intubation; c) provide appropriate ventilator settings; d) manage hypotension; and e) request, interpret, and provide appropriate intervention for pulmonary artery catheter data. Students were expected to perform identical behaviors during the simulator examination; however, the entire examination was performed on the whole-body computer-controlled mannequin. The primary outcome measure was the difference in examination scores before and after the rotation. The mean preelective scores were 77 ± 16%, 47 ± 15%, and 41 ± 14% for the written examination, objective structured clinical examination, and simulator, respectively, compared with 89 ± 11%, 76 ± 12%, and 62 ± 15% after the elective (p < .0001). Prerotation scores for the written examination were significantly higher than the objective structured clinical examination or the simulator; postrotation scores were highest for the written examination and lowest for the simulator. Written examinations measure acquisition of knowledge but fail to predict if students can apply knowledge to problem solving, whereas both the objective structured clinical examination and the computer-controlled patient simulator can be used as effective performance evaluation tools.

  6. Uncertainty analyses of CO2 plume expansion subsequent to wellbore CO2 leakage into aquifers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hou, Zhangshuan; Bacon, Diana H.; Engel, David W.

    2014-08-01

    In this study, we apply an uncertainty quantification (UQ) framework to CO2 sequestration problems. In one scenario, we look at the risk of wellbore leakage of CO2 into a shallow unconfined aquifer in an urban area; in another scenario, we study the effects of reservoir heterogeneity on CO2 migration. We combine various sampling approaches (quasi-Monte Carlo, probabilistic collocation, and adaptive sampling) in order to reduce the number of forward calculations while trying to fully explore the input parameter space and quantify the input uncertainty. The CO2 migration is simulated using the PNNL-developed simulator STOMP-CO2e (the water-salt-CO2 module). For computationally demanding simulations with 3D heterogeneity fields, we combined the framework with a scalable version, eSTOMP, as the forward modeling simulator. We built response curves and response surfaces of model outputs with respect to input parameters, to look at the individual and combined effects, and identify and rank the significance of the input parameters.
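
    As a rough illustration of the sampling-plus-response-surface workflow described above, the sketch below draws a Sobol quasi-Monte Carlo design over two uncertain inputs and fits a quadratic response surface to the outputs of a placeholder forward model. The input names, ranges, and model are invented for illustration; in the study the forward model would be a STOMP-CO2e or eSTOMP simulation.

        # Quasi-Monte Carlo sampling of a 2D input space and a simple quadratic
        # response-surface fit. The forward model is a stand-in placeholder.
        import numpy as np
        from scipy.stats import qmc

        # Two uncertain inputs, e.g. log-permeability and porosity (ranges illustrative).
        lower, upper = np.array([-14.0, 0.05]), np.array([-12.0, 0.35])

        sampler = qmc.Sobol(d=2, scramble=True, seed=0)
        unit_samples = sampler.random_base2(m=6)          # 2**6 = 64 quasi-random points
        x = qmc.scale(unit_samples, lower, upper)         # map to physical ranges

        def forward_model(params):
            """Placeholder for one forward simulation (returns e.g. plume extent)."""
            k, phi = params
            return 50.0 + 8.0 * (k + 13.0) - 60.0 * (phi - 0.2) ** 2

        y = np.array([forward_model(p) for p in x])

        # Quadratic response surface in the two inputs (least-squares fit).
        A = np.column_stack([np.ones(len(x)), x[:, 0], x[:, 1],
                             x[:, 0] ** 2, x[:, 1] ** 2, x[:, 0] * x[:, 1]])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        print("response-surface coefficients:", coef)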

  7. Simulation-Guided 3D Nanomanufacturing via Focused Electron Beam Induced Deposition

    DOE PAGES

    Fowlkes, Jason D.; Winkler, Robert; Lewis, Brett B.; ...

    2016-06-10

    Focused electron beam induced deposition (FEBID) is one of the few techniques that enables direct-write synthesis of free-standing 3D nanostructures. While the fabrication of simple architectures such as vertical or curving nanowires has been achieved by simple trial and error, processing complex 3D structures is not tractable with this approach. This is due, in part, to the dynamic interplay between electron–solid interactions and the transient spatial distribution of absorbed precursor molecules on the solid surface. Here, we demonstrate the ability to controllably deposit 3D lattice structures at the micro/nanoscale, which have received recent interest owing to superior mechanical and optical properties. Moreover, a hybrid Monte Carlo–continuum simulation is briefly overviewed, and subsequently FEBID experiments and simulations are directly compared. Finally, a 3D computer-aided design (CAD) program is introduced, which generates the beam parameters necessary for FEBID by both simulation and experiment. Using this approach, we demonstrate the fabrication of various 3D lattice structures using Pt-, Au-, and W-based precursors.

  8. Word aligned bitmap compression method, data structure, and apparatus

    DOEpatents

    Wu, Kesheng; Shoshani, Arie; Otoo, Ekow

    2004-12-14

    The Word-Aligned Hybrid (WAH) bitmap compression method and data structure is a relatively efficient method for searching and performing logical, counting, and pattern location operations upon large datasets. The technique is comprised of a data structure and methods that are optimized for computational efficiency by using the WAH compression method, which typically takes advantage of the target computing system's native word length. WAH is particularly apropos to infrequently varying databases, including those found in the on-line analytical processing (OLAP) industry, due to the increased computational efficiency of the WAH compressed bitmap index. Some commercial database products already include some version of a bitmap index, which could possibly be replaced by the WAH bitmap compression techniques for potentially increased operation speed, as well as increased efficiencies in constructing compressed bitmaps. Combined together, this technique may be particularly useful for real-time business intelligence. Additional WAH applications may include scientific modeling, such as climate and combustion simulations, to minimize search time for analysis and subsequent data visualization.
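
    The word-aligned idea can be sketched in a simplified form: the bitmap is cut into groups one bit shorter than the machine word, runs of identical all-zero or all-one groups collapse into counted fill words, and everything else is kept as literal words. The toy below uses 31-bit groups and Python tuples for readability; the actual WAH structure packs the literal/fill flags and run counts into the machine words themselves, which this sketch does not reproduce.

        # Toy sketch of the word-aligned hybrid (WAH) idea: 31-bit groups become
        # either counted "fill" words (all zeros or all ones) or "literal" words.
        GROUP = 31

        def wah_compress(bits):
            """bits: string of '0'/'1'. Returns a list of ('lit', int) / ('fill', bit, count)."""
            words = []
            for i in range(0, len(bits), GROUP):
                group = bits[i:i + GROUP].ljust(GROUP, '0')
                if group == '0' * GROUP or group == '1' * GROUP:
                    bit = group[0]
                    if words and words[-1][0] == 'fill' and words[-1][1] == bit:
                        words[-1] = ('fill', bit, words[-1][2] + 1)   # extend the run
                    else:
                        words.append(('fill', bit, 1))
                else:
                    words.append(('lit', int(group, 2)))
            return words

        def wah_decompress(words):
            out = []
            for w in words:
                if w[0] == 'fill':
                    out.append(w[1] * GROUP * w[2])
                else:
                    out.append(format(w[1], '0{}b'.format(GROUP)))
            return ''.join(out)

        bitmap = '0' * 124 + '1011' + '1' * 62
        assert wah_decompress(wah_compress(bitmap))[:len(bitmap)] == bitmap
        print(wah_compress(bitmap))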

  9. Flight-Time Identification of a UH-60A Helicopter and Slung Load

    NASA Technical Reports Server (NTRS)

    Cicolani, Luigi S.; McCoy, Allen H.; Tischler, Mark B.; Tucker, George E.; Gatenio, Pinhas; Marmar, Dani

    1998-01-01

    This paper describes a flight test demonstration of a system for identification of the stability and handling qualities parameters of a helicopter-slung load configuration simultaneously with flight testing, and the results obtained. Tests were conducted with a UH-60A Black Hawk at speeds from hover to 80 kts. The principal test load was an instrumented 8 x 6 x 6 ft cargo container. The identification used frequency-domain analysis in the frequency range up to 2 Hz, and focussed on the longitudinal and lateral control axes since these are the axes most affected by the load pendulum modes in the frequency range of interest for handling qualities. Results were computed for stability margins, handling qualities parameters and load pendulum stability. The computations took an average of 4 minutes before clearing the aircraft to the next test point. Important reductions in handling qualities were computed in some cases, depending on control axis and load-sling combination. A database, including load dynamics measurements, was accumulated for subsequent simulation development and validation.

  10. Neighbour lists for smoothed particle hydrodynamics on GPUs

    NASA Astrophysics Data System (ADS)

    Winkler, Daniel; Rezavand, Massoud; Rauch, Wolfgang

    2018-04-01

    The efficient iteration of neighbouring particles is a performance critical aspect of any high performance smoothed particle hydrodynamics (SPH) solver. SPH solvers that implement a constant smoothing length generally divide the simulation domain into a uniform grid to reduce the computational complexity of the neighbour search. Based on this method, particle neighbours are stored either per grid cell or per individual particle, the latter denoted as a Verlet list. While the latter approach has significantly higher memory requirements, it has the potential for a significant computational speedup. A theoretical comparison is performed to estimate the potential improvements of the method based on unknown hardware-dependent factors. Subsequently, the computational performance of both approaches is empirically evaluated on graphics processing units. It is shown that the speedup differs significantly for different hardware, dimensionality and floating point precision. The Verlet list algorithm is implemented as an alternative to the cell linked list approach in the open-source SPH solver DualSPHysics and provided as a standalone software package.
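
    The two storage strategies compared above can be sketched on the CPU in a few lines: particles are binned into a uniform grid with cell size equal to the smoothing length (the cell-based view), and per-particle Verlet lists are then built by scanning the neighbouring cells. This 2D NumPy sketch only illustrates the data structures; it is not the GPU implementation in DualSPHysics.

        # CPU illustration of cell linked lists and Verlet lists for a 2D particle set.
        import numpy as np

        rng = np.random.default_rng(1)
        h = 0.1                                    # smoothing length = cell size
        pos = rng.random((2000, 2))                # particle positions in the unit square

        # Cell linked list: map each particle to a cell, then group particles by cell.
        ncell = int(np.ceil(1.0 / h))
        cell_idx = np.minimum((pos / h).astype(int), ncell - 1)
        cells = {}
        for p, (ix, iy) in enumerate(cell_idx):
            cells.setdefault((ix, iy), []).append(p)

        # Verlet list: for every particle, store the indices of all particles closer than h.
        verlet = []
        for p, (ix, iy) in enumerate(cell_idx):
            neigh = []
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    for q in cells.get((ix + dx, iy + dy), []):
                        if q != p and np.sum((pos[p] - pos[q]) ** 2) < h * h:
                            neigh.append(q)
            verlet.append(np.array(neigh))

        print("mean neighbours per particle:", np.mean([len(v) for v in verlet]))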

  11. Students' learning of clinical sonography: use of computer-assisted instruction and practical class.

    PubMed

    Wood, A K; Dadd, M J; Lublin, J R

    1996-08-01

    The application of information technology to teaching radiology will profoundly change the way learning is mediated to students. In this project, the integration of veterinary medical students' knowledge of sonography was promoted by a computer-assisted instruction program and a subsequent practical class. The computer-assisted instruction program emphasized the physical principles of clinical sonography and contained simulations and user-active experiments. In the practical class, the students used an actual sonographic machine for the first time and made images of a tissue-equivalent phantom. Students' responses to questionnaires were analyzed. On completing the overall project, 96% of the students said that they now understood sonographic concepts very or reasonably well, and 98% had become very or moderately interested in clinical sonography. The teaching and learning initiatives enhanced an integrated approach to learning, stimulated student interest and curiosity, improved understanding of sonographic principles, and contributed to an increased confidence and skill in using sonographic equipment.

  12. Modeling Early-Stage Processes of U-10 Wt.%Mo Alloy Using Integrated Computational Materials Engineering Concepts

    NASA Astrophysics Data System (ADS)

    Wang, Xiaowo; Xu, Zhijie; Soulami, Ayoub; Hu, Xiaohua; Lavender, Curt; Joshi, Vineet

    2017-12-01

    Low-enriched uranium alloyed with 10 wt.% molybdenum (U-10Mo) has been identified as a promising alternative to high-enriched uranium. Manufacturing U-10Mo alloy involves multiple complex thermomechanical processes that pose challenges for computational modeling. This paper describes the application of integrated computational materials engineering (ICME) concepts to integrate three individual modeling components, viz. homogenization, microstructure-based finite element method for hot rolling, and carbide particle distribution, to simulate the early-stage processes of U-10Mo alloy manufacture. The resulting integrated model enables information to be passed between different model components and leads to improved understanding of the evolution of the microstructure. This ICME approach is then used to predict the variation in the thickness of the Zircaloy-2 barrier as a function of the degree of homogenization and to analyze the carbide distribution, which can affect the recrystallization, hardness, and fracture properties of U-10Mo in subsequent processes.

  13. Modeling Commercial Turbofan Engine Icing Risk With Ice Crystal Ingestion

    NASA Technical Reports Server (NTRS)

    Jorgenson, Philip C. E.; Veres, Joseph P.

    2013-01-01

    The occurrence of ice accretion within commercial high bypass aircraft turbine engines has been reported under certain atmospheric conditions. Engine anomalies have taken place at high altitudes that have been attributed to ice crystal ingestion, partially melting, and ice accretion on the compression system components. The result was degraded engine performance, and one or more of the following: loss of thrust control (roll back), compressor surge or stall, and flameout of the combustor. As ice crystals are ingested into the fan and low pressure compression system, the increase in air temperature causes a portion of the ice crystals to melt. It is hypothesized that this allows the ice-water mixture to cover the metal surfaces of the compressor stationary components which leads to ice accretion through evaporative cooling. Ice accretion causes a blockage which subsequently results in the deterioration in performance of the compressor and engine. The focus of this research is to apply an engine icing computational tool to simulate the flow through a turbofan engine and assess the risk of ice accretion. The tool is comprised of an engine system thermodynamic cycle code, a compressor flow analysis code, and an ice particle melt code that has the capability of determining the rate of sublimation, melting, and evaporation through the compressor flow path, without modeling the actual ice accretion. A commercial turbofan engine which has previously experienced icing events during operation in a high altitude ice crystal environment has been tested in the Propulsion Systems Laboratory (PSL) altitude test facility at NASA Glenn Research Center. The PSL has the capability to produce a continuous ice cloud which is ingested by the engine during operation over a range of altitude conditions. The PSL test results confirmed that there was ice accretion in the engine due to ice crystal ingestion, at the same simulated altitude operating conditions as experienced previously in flight. The computational tool was utilized to help guide a portion of the PSL testing, and was used to predict that ice accretion could also occur at significantly lower altitudes. The predictions were qualitatively verified by subsequent testing of the engine in the PSL. The PSL test has helped to calibrate the engine icing computational tool to assess the risk of ice accretion. The results from the computer simulation identified prevalent trends in wet bulb temperature, ice particle melt ratio, and engine inlet temperature as a function of altitude for predicting engine icing risk due to ice crystal ingestion.

  14. Accumulation and transport of microbial-size particles in a pressure protected model burn unit: CFD simulations and experimental evidence

    PubMed Central

    2011-01-01

    Background Controlling airborne contamination is of major importance in burn units because of the high susceptibility of burned patients to infections and the unique environmental conditions that can accentuate the infection risk. In particular the required elevated temperatures in the patient room can create thermal convection flows which can transport airborne contaminants throughout the unit. In order to estimate this risk and optimize the design of an intensive care room intended to host severely burned patients, we have relied on a computational fluid dynamic methodology (CFD). Methods The study was carried out in 4 steps: i) patient room design, ii) CFD simulations of patient room design to model air flows throughout the patient room, adjacent anterooms and the corridor, iii) construction of a prototype room and subsequent experimental studies to characterize its performance, and iv) qualitative comparison of the tendencies between CFD prediction and experimental results. The Electricité De France (EDF) open-source software Code_Saturne® (http://www.code-saturne.org) was used and CFD simulations were conducted with an hexahedral mesh containing about 300 000 computational cells. The computational domain included the treatment room and two anterooms including equipment, staff and patient. Experiments with inert aerosol particles followed by time-resolved particle counting were conducted in the prototype room for comparison with the CFD observations. Results We found that thermal convection can create contaminated zones near the ceiling of the room, which can subsequently lead to contaminant transfer in adjacent rooms. Experimental confirmation of these phenomena agreed well with CFD predictions and showed that particles greater than one micron (i.e. bacterial or fungal spore sizes) can be influenced by these thermally induced flows. When the temperature difference between rooms was 7°C, a significant contamination transfer was observed to enter into the positive pressure room when the access door was opened, while 2°C had little effect. Based on these findings the constructed burn unit was outfitted with supplemental air exhaust ducts over the doors to compensate for the thermal convective flows. Conclusions CFD simulations proved to be a particularly useful tool for the design and optimization of a burn unit treatment room. Our results, which have been confirmed qualitatively by experimental investigation, stressed that airborne transfer of microbial size particles via thermal convection flows are able to bypass the protective overpressure in the patient room, which can represent a potential risk of cross contamination between rooms in protected environments. PMID:21371304

  15. Accumulation and transport of microbial-size particles in a pressure protected model burn unit: CFD simulations and experimental evidence.

    PubMed

    Beauchêne, Christian; Laudinet, Nicolas; Choukri, Firas; Rousset, Jean-Luc; Benhamadouche, Sofiane; Larbre, Juliette; Chaouat, Marc; Benbunan, Marc; Mimoun, Maurice; Lajonchère, Jean-Patrick; Bergeron, Vance; Derouin, Francis

    2011-03-03

    Controlling airborne contamination is of major importance in burn units because of the high susceptibility of burned patients to infections and the unique environmental conditions that can accentuate the infection risk. In particular the required elevated temperatures in the patient room can create thermal convection flows which can transport airborne contaminants throughout the unit. In order to estimate this risk and optimize the design of an intensive care room intended to host severely burned patients, we have relied on a computational fluid dynamic methodology (CFD). The study was carried out in 4 steps: i) patient room design, ii) CFD simulations of patient room design to model air flows throughout the patient room, adjacent anterooms and the corridor, iii) construction of a prototype room and subsequent experimental studies to characterize its performance, and iv) qualitative comparison of the tendencies between CFD prediction and experimental results. The Electricité De France (EDF) open-source software Code_Saturne® (http://www.code-saturne.org) was used and CFD simulations were conducted with an hexahedral mesh containing about 300 000 computational cells. The computational domain included the treatment room and two anterooms including equipment, staff and patient. Experiments with inert aerosol particles followed by time-resolved particle counting were conducted in the prototype room for comparison with the CFD observations. We found that thermal convection can create contaminated zones near the ceiling of the room, which can subsequently lead to contaminant transfer in adjacent rooms. Experimental confirmation of these phenomena agreed well with CFD predictions and showed that particles greater than one micron (i.e. bacterial or fungal spore sizes) can be influenced by these thermally induced flows. When the temperature difference between rooms was 7°C, a significant contamination transfer was observed to enter into the positive pressure room when the access door was opened, while 2°C had little effect. Based on these findings the constructed burn unit was outfitted with supplemental air exhaust ducts over the doors to compensate for the thermal convective flows. CFD simulations proved to be a particularly useful tool for the design and optimization of a burn unit treatment room. Our results, which have been confirmed qualitatively by experimental investigation, stressed that airborne transfer of microbial size particles via thermal convection flows are able to bypass the protective overpressure in the patient room, which can represent a potential risk of cross contamination between rooms in protected environments.

  16. Smell Detection Agent Based Optimization Algorithm

    NASA Astrophysics Data System (ADS)

    Vinod Chandra, S. S.

    2016-09-01

    In this paper, a novel nature-inspired optimization algorithm is proposed in which the trained behaviour of dogs in detecting smell trails is adapted into computational agents for problem solving. The algorithm involves the creation of a surface with smell trails and the subsequent iteration of the agents in resolving a path. The algorithm can be applied to computational problems that involve path finding, and its implementation can be treated as a shortest path problem for a variety of datasets. The simulated agents have been used to evolve the shortest path between two nodes in a graph. The algorithm is useful for solving NP-hard problems related to path discovery as well as many practical optimization problems, and it can be extended to general shortest-path problems.
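
    The abstract does not give the smell-surface construction or the agent update rules, so the sketch below substitutes a generic trail-reinforcement (ant-colony-style) heuristic to show how iterating simple agents over a graph with decaying, reinforced trails can converge on a short path between two nodes. The graph, constants, and update scheme are all invented for illustration and are not the paper's algorithm.

        # Generic trail-reinforcement sketch: agents walk a small weighted graph,
        # biased by trail strength; trails on used edges are reinforced and all
        # trails evaporate slightly each iteration. Purely illustrative.
        import random

        graph = {                       # adjacency list with edge lengths (illustrative)
            'A': {'B': 2.0, 'C': 4.5},
            'B': {'A': 2.0, 'C': 1.0, 'D': 7.0},
            'C': {'A': 4.5, 'B': 1.0, 'D': 3.0},
            'D': {'B': 7.0, 'C': 3.0},
        }
        trail = {(u, v): 1.0 for u in graph for v in graph[u]}   # initial "smell" on edges

        def walk(src, dst):
            """One agent walks from src to dst, biased by trail strength / edge length."""
            path, node = [src], src
            while node != dst:
                options = [v for v in graph[node] if v not in path] or list(graph[node])
                weights = [trail[(node, v)] / graph[node][v] for v in options]
                node = random.choices(options, weights=weights)[0]
                path.append(node)
            return path

        def path_length(path):
            return sum(graph[u][v] for u, v in zip(path, path[1:]))

        best = None
        for _ in range(200):
            p = walk('A', 'D')
            if best is None or path_length(p) < path_length(best):
                best = p
            for u, v in zip(p, p[1:]):                 # reinforce used edges
                trail[(u, v)] += 1.0 / path_length(p)
                trail[(v, u)] += 1.0 / path_length(p)
            for e in trail:                            # evaporate all trails slightly
                trail[e] *= 0.95

        print("best path found:", best, "length:", path_length(best))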

  17. Transition to turbulence in plane channel flows

    NASA Technical Reports Server (NTRS)

    Biringen, S.

    1984-01-01

    Results obtained from a numerical simulation of the final stages of transition to turbulence in plane channel flow are described. The three-dimensional, incompressible Navier-Stokes equations are numerically integrated to obtain the time evolution of two- and three-dimensional finite amplitude disturbances. Computations are performed on a CYBER-203 vector processor for a 32×51×32 grid. Results are presented for no-slip boundary conditions at the solid walls as well as for periodic suction-blowing to simulate active control of transition by mass transfer. Solutions indicate that the method is capable of simulating the complex character of vorticity dynamics during the various stages of transition and final breakdown. In particular, evidence points to the formation of a lambda-shaped vortex and the subsequent system of horseshoe vortices inclined to the main flow direction as the main elements of transition. Calculations involving periodic suction-blowing indicate that interference with a wave of suitable phase and amplitude reduces the disturbance growth rates.

  18. Quantum Dynamics in Biological Systems

    NASA Astrophysics Data System (ADS)

    Shim, Sangwoo

    In the first part of this dissertation, recent efforts to understand quantum mechanical effects in biological systems are discussed. In particular, long-lived quantum coherences observed during the electronic energy transfer process in the Fenna-Matthews-Olson complex at physiological conditions are studied extensively using theories of open quantum systems. In addition to the usual master equation based approaches, the effect of the protein structure is investigated in atomistic detail through the combined application of quantum chemistry and molecular dynamics simulations. To evaluate the thermalized reduced density matrix, a path-integral Monte Carlo method with a novel importance sampling approach is developed for excitons coupled to an arbitrary phonon bath at a finite temperature. In the second part of the thesis, simulations of molecular systems and applications to vibrational spectra are discussed. First, the quantum dynamics of a molecule is simulated by combining the semiclassical initial value representation and density functional theory with analytic derivatives. A computationally tractable approximation to the sum-of-states formalism of Raman spectra is subsequently discussed.

  19. Use of hydrogeochemical models in water treatment (Einsatz hydrogeochemischer Modelle in der Wasseraufbereitung)

    NASA Astrophysics Data System (ADS)

    Wisotzky, Frank

    2012-09-01

    As part of a model-data investigation project, results of several water treatment studies were compared with hydrochemical models. The models used hydrogeological reaction types comparable to those that occur within aquifers and in water treatment processes. They were tested on three different examples: water softening, de-nitrification and iron removal. Comparison of simulated and measured water chemical dynamics showed good agreement. In addition to mixing and the formation of complexes, de-acidification and de-carbonisation processes were reproduced in the first example. The second example investigated de-nitrification in a straw filter and in a water plant filter with subsequent aeration. The third example showed iron removal, where reactions with partially calcined dolomite were simulated with a computer model. All simulations showed good agreement with the observed data. The models have the advantage of yielding parameter estimates that are difficult to measure, including nitrogen gas release and the content of reacted and degradable organic substances. These tools may help to provide better insights into water treatment reactions.

  20. Effective mie-scattering and CO2 absorption in the dust-laden Martian atmosphere and its impact on radiative-convective temperature changes in the lower scale heights

    NASA Technical Reports Server (NTRS)

    Pallmann, A. J.

    1976-01-01

    A time-dependent computer model of radiative-convective-conductive heat transfer in the Martian ground-atmosphere system was refined by incorporating an intermediate line strength CO2 band absorption which, together with the strong- and weak-line approximations, closely simulated the radiative transmission through a vertically inhomogeneous stratification. About 33,000 CO2 lines were processed to cover the spectral range of solar and planetary radiation. Absorption by silicate dust particulates was taken into consideration to study its impact on the ground-atmosphere temperature field as a function of time. This model was subsequently tuned to IRIS, IR-radiometric and S-band occultation data. Satisfactory simulations of the measured IRIS spectra were accomplished for the dust-free condition. In the case of variable dust loads, the simulations were sufficiently good that some inferences about the effect of dust on temperature were justified.

  1. On kinetic modelling for solar redox thermochemical H2O and CO2 splitting over NiFe2O4 for H2, CO and syngas production.

    PubMed

    Dimitrakis, Dimitrios A; Syrigou, Maria; Lorentzou, Souzana; Kostoglou, Margaritis; Konstandopoulos, Athanasios G

    2017-10-11

    This study aims at developing a kinetic model that can adequately describe solar thermochemical water and carbon dioxide splitting with nickel ferrite powder as the active redox material. The kinetic parameters of water splitting from a previous study are revised to include transition times, and new kinetic parameters for carbon dioxide splitting are developed. The computational results show satisfactory agreement with experimental data, and continuous multicycle operation under varying operating conditions is simulated. Different test cases are explored in order to improve the product yield. First, a parametric analysis is conducted to investigate the durations of the oxidation and thermal reduction steps that maximize the hydrogen yield. Subsequently, a non-isothermal oxidation step is simulated and shown to be a promising option for increasing the hydrogen production. The kinetic model is also adapted to simulate the production yields in structured solar reactor components, i.e. extruded monolithic structures.

  2. Topologically Guided, Automated Construction of Metal–Organic Frameworks and Their Evaluation for Energy-Related Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Colón, Yamil J.; Gómez-Gualdrón, Diego A.; Snurr, Randall Q.

    Metal-organic frameworks (MOFs) are promising materials for a range of energy and environmental applications. Here we describe in detail a computational algorithm and code to generate MOFs based on edge-transitive topological nets for subsequent evaluation via molecular simulation. This algorithm has been previously used by us to construct and evaluate 13 512 MOFs of 41 different topologies for cryo-adsorbed hydrogen storage. Grand canonical Monte Carlo simulations are used here to evaluate the 13 512 structures for the storage of gaseous fuels such as hydrogen and methane and nondistillative separation of xenon/krypton mixtures at various operating conditions. MOF performance for both gaseous fuel storage and xenon/krypton separation is influenced by topology. Simulation data suggest that gaseous fuel storage performance is topology-dependent due to MOF properties such as void fraction and surface area combining differently in different topologies, whereas xenon/krypton separation performance is topology-dependent due to how topology constrains the pore size distribution.

  3. Simulation of loss mechanisms in organic solar cells: A description of the mesoscopic Monte Carlo technique and an evaluation of the first reaction method.

    PubMed

    Groves, Chris; Kimber, Robin G E; Walker, Alison B

    2010-10-14

    In this letter we evaluate the accuracy of the first reaction method (FRM) as commonly used to reduce the computational complexity of mesoscale Monte Carlo simulations of geminate recombination and the performance of organic photovoltaic devices. A wide range of carrier mobilities, degrees of energetic disorder, and applied electric field are considered. For the ranges of energetic disorder relevant for most polyfluorene, polythiophene, and alkoxy poly(phenylene vinylene) materials used in organic photovoltaics, the geminate separation efficiency predicted by the FRM agrees with the exact model to better than 2%. We additionally comment on the effects of equilibration on low-field geminate separation efficiency, and in doing so emphasize the importance of the energy at which geminate carriers are created upon their subsequent behavior.
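
    For readers unfamiliar with the first reaction method, one Monte Carlo step can be sketched as follows: every currently possible event with rate r_i is assigned a tentative waiting time t_i = -ln(u)/r_i drawn from its exponential distribution, and the event with the smallest waiting time is executed. The event labels and rates below are placeholders rather than the hopping and recombination rates of any organic photovoltaic model.

        # Sketch of one first-reaction-method (FRM) step: draw an exponential
        # waiting time for every candidate event and execute the earliest one.
        import math
        import random

        def frm_step(events):
            """events: list of (label, rate). Returns (label, waiting_time) of the winner."""
            best_label, best_time = None, math.inf
            for label, rate in events:
                t = -math.log(1.0 - random.random()) / rate   # exponential waiting time
                if t < best_time:
                    best_label, best_time = label, t
            return best_label, best_time

        # Example: a charge carrier that can hop to three neighbouring sites or recombine.
        events = [("hop_1", 2.0e9), ("hop_2", 5.0e8), ("hop_3", 1.0e9), ("recombine", 1.0e7)]
        label, dt = frm_step(events)
        print("executed event:", label, "after", dt, "s")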

  4. Numerical Simulations of STOVL Hot Gas Ingestion in Ground Proximity Using a Multigrid Solution Procedure

    NASA Technical Reports Server (NTRS)

    Wang, Gang

    2003-01-01

    A multigrid solution procedure for the numerical simulation of turbulent flows in complex geometries has been developed. A Full Multigrid-Full Approximation Scheme (FMG-FAS) is incorporated into the continuity and momentum equations, while the scalars are decoupled from the multigrid V-cycle. A standard k-ε turbulence model with wall functions has been used to close the governing equations. The numerical solution is accomplished by solving for the Cartesian velocity components either with a traditional grid staggering arrangement or with a multiple velocity grid staggering arrangement. The two solution methodologies are evaluated for relative computational efficiency. The solution procedure with the traditional staggering arrangement is subsequently applied to calculate the flow and temperature fields around a model Short Take-off and Vertical Landing (STOVL) aircraft hovering in ground proximity.
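
    The restrict/correct/prolong pattern underlying such a multigrid procedure can be illustrated with a minimal two-level V-cycle for a 1D Poisson problem, shown below. This is a generic linear-correction sketch for demonstration only; the paper's FMG-FAS scheme is a nonlinear full-approximation variant applied to the coupled continuity and momentum equations, which is not reproduced here.

        # Minimal two-level V-cycle for -u'' = f on (0,1) with u(0)=u(1)=0:
        # smooth on the fine grid, restrict the residual, approximately solve the
        # coarse problem, prolong the correction, and smooth again.
        import numpy as np

        def smooth(u, f, h, sweeps=3):
            """Gauss-Seidel sweeps on -u'' = f with zero Dirichlet boundaries."""
            for _ in range(sweeps):
                for i in range(1, len(u) - 1):
                    u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
            return u

        def residual(u, f, h):
            r = np.zeros_like(u)
            r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
            return r

        def v_cycle(u, f, h):
            u = smooth(u, f, h)
            r = residual(u, f, h)
            rc = r[::2].copy()                        # restriction by injection
            ec = np.zeros_like(rc)
            ec = smooth(ec, rc, 2 * h, sweeps=50)     # approximate coarse solve
            e = np.zeros_like(u)
            e[::2] = ec                               # prolongation: copy coarse values
            e[1:-1:2] = 0.5 * (e[:-2:2] + e[2::2])    # and interpolate in between
            u += e
            return smooth(u, f, h)

        n = 65
        x = np.linspace(0.0, 1.0, n)
        f = np.pi ** 2 * np.sin(np.pi * x)            # exact solution is sin(pi x)
        u = np.zeros(n)
        for _ in range(10):
            u = v_cycle(u, f, 1.0 / (n - 1))
        print("max error:", np.abs(u - np.sin(np.pi * x)).max())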

  5. An evaluation of software tools for the design and development of cockpit displays

    NASA Technical Reports Server (NTRS)

    Ellis, Thomas D., Jr.

    1993-01-01

    The use of all-glass cockpits at the NASA Langley Research Center (LaRC) simulation facility has changed the means of design, development, and maintenance of instrument displays. The human-machine interface has evolved from a physical hardware device to a software-generated electronic display system. This has subsequently caused an increased workload at the facility. As computer processing power increases and the glass cockpit becomes predominant in facilities, software tools used in the design and development of cockpit displays are becoming both feasible and necessary for a more productive simulation environment. This paper defines LaRC requirements of a display software development tool and compares two available applications against these requirements. As a part of the software engineering process, these tools reduce development time, provide a common platform for display development, and produce exceptional real-time results.

  6. The Use of Computer-Mediated Communication To Enhance Subsequent Face-to-Face Discussions.

    ERIC Educational Resources Information Center

    Dietz-Uhler, Beth; Bishop-Clark, Cathy

    2001-01-01

    Describes a study of undergraduate students that assessed the effects of synchronous (Internet chat) and asynchronous (Internet discussion board) computer-mediated communication on subsequent face-to-face discussions. Results showed that face-to-face discussions preceded by computer-mediated communication were perceived to be more enjoyable.…

  7. Mine fire experiments and simulation with MFIRE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laage, L.W.; Yang, Hang

    1995-12-31

    A major concern in mine fires is heat-generated ventilation disturbances, which can move products of combustion (POC) through unexpected passageways. Fire emergency planning requires simulation of the interaction of the fire and ventilation system to predict the state of the ventilation system and the subsequent distribution of temperatures and POC. Several computer models were developed by the U.S. Bureau of Mines (USBM) to perform this simulation. The most recent, MFIRE, simulates a mine's ventilation system and its response to altered ventilation parameters such as the development of new mine workings or changes in ventilation control structures, external influences such as varying outside temperatures, and internal influences such as fires. Extensive output allows quantitative analysis of the effects of the proposed alteration to the ventilation system. This paper describes recent USBM research to validate MFIRE's calculation of temperature distribution in an airway due to a mine fire, as temperatures are the most significant source of ventilation disturbances. Fire tests were conducted at the Waldo Mine near Magdalena, NM. From these experiments, temperature profiles were developed as functions of time and distance from the fire and compared with simulations from MFIRE.

  8. Global brain dynamics during social exclusion predict subsequent behavioral conformity

    PubMed Central

    Wasylyshyn, Nick; Hemenway Falk, Brett; Garcia, Javier O; Cascio, Christopher N; O’Donnell, Matthew Brook; Bingham, C Raymond; Simons-Morton, Bruce; Vettel, Jean M; Falk, Emily B

    2018-01-01

    Individuals react differently to social experiences; for example, people who are more sensitive to negative social experiences, such as being excluded, may be more likely to adapt their behavior to fit in with others. We examined whether functional brain connectivity during social exclusion in the fMRI scanner can be used to predict subsequent conformity to peer norms. Adolescent males (n = 57) completed a two-part study on teen driving risk: a social exclusion task (Cyberball) during an fMRI session and a subsequent driving simulator session in which they drove alone and in the presence of a peer who expressed risk-averse or risk-accepting driving norms. We computed the difference in functional connectivity between social exclusion and social inclusion from each node in the brain to nodes in two brain networks, one previously associated with mentalizing (medial prefrontal cortex, temporoparietal junction, precuneus, temporal poles) and another with social pain (dorsal anterior cingulate cortex, anterior insula). Using predictive modeling, this measure of global connectivity during exclusion predicted the extent of conformity to peer pressure during driving in the subsequent experimental session. These findings extend our understanding of how global neural dynamics guide social behavior, revealing functional network activity that captures individual differences. PMID:29529310
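
    To make the connectivity measure above concrete, the hedged sketch below computes, from node time series in two task conditions, each node's mean Pearson correlation with a small set of target-network nodes and then takes the condition difference. The array sizes, node indices, and random data are placeholders, not the study's parcellation or fMRI data.

        # Difference in node-to-network functional connectivity between two conditions.
        import numpy as np

        rng = np.random.default_rng(0)
        n_nodes, n_trs = 90, 200                     # parcellated nodes x time points
        exclusion = rng.standard_normal((n_trs, n_nodes))   # placeholder time series
        inclusion = rng.standard_normal((n_trs, n_nodes))
        target_network = [10, 23, 47, 61]            # hypothetical network node indices

        def mean_connectivity_to_network(ts, network):
            # Pearson correlation of every node with each network node, averaged.
            corr = np.corrcoef(ts.T)                 # n_nodes x n_nodes matrix
            return corr[:, network].mean(axis=1)

        delta = (mean_connectivity_to_network(exclusion, target_network)
                 - mean_connectivity_to_network(inclusion, target_network))
        print("connectivity change, first 5 nodes:", np.round(delta[:5], 3))
        # In the study, node-wise differences of this kind were fed into a predictive
        # model of subsequent conformity; here they are simply printed.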

  9. Multiscale systems biology of trauma-induced coagulopathy.

    PubMed

    Tsiklidis, Evan; Sims, Carrie; Sinno, Talid; Diamond, Scott L

    2018-07-01

    Trauma with hypovolemic shock is an extreme pathological state that challenges the body to maintain blood pressure and oxygenation in the face of hemorrhagic blood loss. In conjunction with surgical actions and transfusion therapy, survival requires the patient's blood to maintain hemostasis to stop bleeding. The physics of the problem are multiscale: (a) the systemic circulation sets the global blood pressure in response to blood loss and resuscitation therapy, (b) local tissue perfusion is altered by localized vasoregulatory mechanisms and bleeding, and (c) altered blood and vessel biology resulting from the trauma as well as local hemodynamics control the assembly of clotting components at the site of injury. Building upon ongoing modeling efforts to simulate arterial or venous thrombosis in a diseased vasculature, computer simulation of trauma-induced coagulopathy is an emerging approach to understand patient risk and predict response. Despite uncertainties in quantifying the patient's dynamic injury burden, multiscale systems biology may help link blood biochemistry at the molecular level to multiorgan responses in the bleeding patient. As an important goal of systems modeling, establishing early metrics of a patient's high-dimensional trajectory may help guide transfusion therapy or warn of subsequent later stage bleeding or thrombotic risks. This article is categorized under: Analytical and Computational Methods > Computational Methods; Biological Mechanisms > Regulatory Biology; Models of Systems Properties and Processes > Mechanistic Models. © 2018 Wiley Periodicals, Inc.

  10. Computer-based simulation training to improve learning outcomes in mannequin-based simulation exercises.

    PubMed

    Curtin, Lindsay B; Finn, Laura A; Czosnowski, Quinn A; Whitman, Craig B; Cawley, Michael J

    2011-08-10

    To assess the impact of computer-based simulation on the achievement of student learning outcomes during mannequin-based simulation. Participants were randomly assigned to rapid response teams of 5-6 students, and teams were then randomly assigned to a group that completed either the computer-based or the mannequin-based simulation cases first. In both simulations, students used their critical thinking skills and selected interventions independent of facilitator input. A predetermined rubric was used to record and assess students' performance in the mannequin-based simulations. Feedback and student performance scores were generated by the software in the computer-based simulations. More of the teams in the group that completed the computer-based simulation before completing the mannequin-based simulation achieved the primary outcome for the exercise, which was survival of the simulated patient (41.2% vs. 5.6%). The majority of students (>90%) recommended the continuation of simulation exercises in the course. Students in both groups felt the computer-based simulation should be completed prior to the mannequin-based simulation. The use of computer-based simulation prior to mannequin-based simulation improved the achievement of learning goals and outcomes. In addition to improving participants' skills, completing the computer-based simulation first may improve participants' confidence during the more real-life setting achieved in the mannequin-based simulation.

  11. Investigation of cellular detonation structure formation via linear stability theory and 2D and 3D numerical simulations

    NASA Astrophysics Data System (ADS)

    Borisov, S. P.; Kudryavtsev, A. N.

    2017-10-01

    Linear and nonlinear stages of the instability of a plane detonation wave (DW) and the subsequent process of formation of cellular detonation structure are investigated. A simple model with a one-step irreversible chemical reaction is used. The linear analysis is employed to predict the DW front structure at the early stages of its formation. The resulting eigenvalue problem is solved with a global method using a Chebyshev pseudospectral method and the LAPACK software library. A local iterative shooting procedure is used for eigenvalue refinement. Numerical simulations of the propagation of a DW in plane and rectangular channels are performed with a fifth-order shock-capturing WENO scheme. A special method of shifting the computational domain is implemented in order to maintain the DW in the domain. It is shown that the linear analysis gives certain predictions about the DW structure that are in agreement with the numerical simulations of early stages of DW propagation. However, at later stages, a merger of detonation cells occurs so that their number is approximately halved. Computations of DW propagation in a square channel reveal two different types of spatial structure of the DW front: "rectangular" and "diagonal". A spontaneous transition from the rectangular to the diagonal type of structure is observed during propagation of the DW.
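
    The global eigenvalue solve mentioned above can be illustrated on a much simpler model problem. The hedged Python sketch below assembles a Chebyshev pseudospectral differentiation matrix for u'' = lambda*u with homogeneous Dirichlet conditions and passes it to a dense eigensolver (numpy's eigvals, which calls LAPACK); the detonation stability equations themselves are not reproduced.

        # Chebyshev pseudospectral eigenvalue solve for a model problem, not the
        # detonation-wave stability system of the abstract.
        import numpy as np

        def cheb(n):
            # Chebyshev differentiation matrix and points (Trefethen's construction).
            x = np.cos(np.pi * np.arange(n + 1) / n)
            c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
            X = np.tile(x, (n + 1, 1)).T
            dX = X - X.T
            D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
            D -= np.diag(D.sum(axis=1))
            return D, x

        n = 32
        D, x = cheb(n)
        D2 = (D @ D)[1:-1, 1:-1]                     # impose u(-1) = u(1) = 0
        eigvals = np.sort(np.linalg.eigvals(D2).real)[::-1]
        # Exact eigenvalues of u'' = lambda*u on (-1, 1) are -(k*pi/2)**2, k = 1, 2, ...
        print("computed:", np.round(eigvals[:4], 4))
        print("exact:   ", np.round([-(k * np.pi / 2) ** 2 for k in range(1, 5)], 4))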

  12. Atomistic Structure, Strength, and Kinetic Properties of Intergranular Films in Ceramics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garofalini, Stephen H

    2015-01-08

    Intergranular films (IGFs) present in polycrystalline oxide and nitride ceramics provide an excellent example of nanoconfined glasses that occupy only a small volume percentage of the bulk ceramic, but can significantly influence various mechanical, thermal, chemical, and optical properties. By employing molecular dynamics computer simulations, we have been able to predict structures and the locations of atoms at the crystal/IGF interface that were subsequently verified with the newest electron microscopies. Modification of the chemistry of the crystal surface in the simulations provided the necessary mechanism for adsorption of specific rare earth ions from the IGF in the liquid state to the crystal surface. Such results had eluded other computational approaches such as ab-initio calculations because of the need to include not only the modified chemistry of the crystal surfaces but also an accurate description of the adjoining glassy IGF. This segregation of certain ions from the IGF to the crystal caused changes in the local chemistry of the IGF that affected fracture behavior in the simulations. Additional work with the rare earth ions La and Lu in the silicon oxynitride IGFs showed the mechanisms for their different effects on crystal growth, even though both types of ions are seen adhering to a bounding crystal surface that would normally imply equivalent effects on grain growth.

  13. Phase field models for heterogeneous nucleation: Application to inoculation in alpha-solidifying Ti-Al-B alloys

    NASA Astrophysics Data System (ADS)

    Apel, M.; Eiken, J.; Hecht, U.

    2014-02-01

    This paper aims at briefly reviewing phase field models applied to the simulation of heterogeneous nucleation and subsequent growth, with special emphasis on grain refinement by inoculation. The spherical cap and free growth model (e.g. A.L. Greer, et al., Acta Mater. 48, 2823 (2000)) has proven its applicability for different metallic systems, e.g. Al or Mg based alloys, by computing the grain refinement effect achieved by inoculation of the melt with inert seeding particles. However, recent experiments with peritectic Ti-Al-B alloys revealed that the grain refinement by TiB2 is less effective than predicted by the model. Phase field simulations can be applied to validate the approximations of the spherical cap and free growth model, e.g. by computing explicitly the latent heat release associated with different nucleation and growth scenarios. Here, simulation results for point-shaped nucleation, as well as for partially and completely wetted plate-like seed particles will be discussed with respect to recalescence and impact on grain refinement. It will be shown that particularly for large seeding particles (up to 30 μm), the free growth morphology clearly deviates from the assumed spherical cap and the initial growth - until the free growth barrier is reached - significantly contributes to the latent heat release and determines the recalescence temperature.
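
    The free-growth criterion at the heart of the model cited above relates the undercooling needed for unrestricted growth from a seed particle to the particle diameter, dT_fg = 4*sigma/(dS_v*d). The short sketch below evaluates it for a few diameters, including the large 30 um seeds mentioned in the abstract; the material parameters are illustrative aluminium-like values, not data from this paper.

        # Free-growth undercooling versus seed particle diameter (illustrative values).
        sigma = 0.158        # solid-liquid interfacial energy, J/m^2 (illustrative)
        dS_v = 1.112e6       # entropy of fusion per unit volume, J/(K m^3) (illustrative)

        for d_um in (0.5, 1.0, 5.0, 30.0):           # particle diameters in micrometres
            d = d_um * 1.0e-6
            dT_fg = 4.0 * sigma / (dS_v * d)
            print(f"d = {d_um:5.1f} um  ->  free-growth undercooling = {dT_fg:.3f} K")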

  14. Experimental Study and Computational Simulations of Key Pebble Bed Thermo-mechanics Issues for Design and Safety

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tokuhiro, Akira; Potirniche, Gabriel; Cogliati, Joshua

    2014-07-08

    An experimental and computational study, consisting of modeling and simulation (M&S), of key thermal-mechanical issues affecting the design and safety of pebble-bed (PB) reactors was conducted. The objective was to broaden understanding and experimentally validate thermo-mechanical phenomena of nuclear grade graphite, specifically, spheres in frictional contact as anticipated in the bed under reactor relevant pressures and temperatures. The contact generates graphite dust particulates that can subsequently be transported into the flowing gaseous coolant. Under postulated depressurization transients and with the potential for leaked fission products to be adsorbed onto graphite 'dust', there is the potential for fission products to escape from the primary volume. This is a design safety concern. Furthermore, an earlier safety assessment identified the distinct possibility for the dispersed dust to combust in contact with air if sufficient conditions are met. Both of these phenomena were noted as important to the design review and as containing enough uncertainty to warrant study. The team designed and conducted two separate-effects tests to study and benchmark the potential dust-generation rate, as well as to study the conditions under which a dust explosion may occur in a standardized, instrumented explosion chamber.

  15. CFD-CAA Coupled Calculations of a Tandem Cylinder Configuration to Assess Facility Installation Effects

    NASA Technical Reports Server (NTRS)

    Redonnet, Stephane; Lockard, David P.; Khorrami, Mehdi R.; Choudhari, Meelan M.

    2011-01-01

    This paper presents a numerical assessment of acoustic installation effects in the tandem cylinder (TC) experiments conducted in the NASA Langley Quiet Flow Facility (QFF), an open-jet, anechoic wind tunnel. Calculations that couple the Computational Fluid Dynamics (CFD) and Computational Aeroacoustics (CAA) of the TC configuration within the QFF are conducted using the CFD simulation results previously obtained at NASA LaRC. The coupled simulations enable the assessment of installation effects associated with several specific features in the QFF facility that may have impacted the measured acoustic signature during the experiment. The CFD-CAA coupling is based on CFD data along a suitably chosen surface, and employs a technique that was recently improved to account for installed configurations involving acoustic backscatter into the CFD domain. First, a CFD-CAA calculation is conducted for an isolated TC configuration to assess the coupling approach, as well as to generate a reference solution for subsequent assessments of QFF installation effects. Direct comparisons between the CFD-CAA calculations associated with the various installed configurations allow the assessment of the effects of each component (nozzle, collector, etc.) or feature (confined vs. free jet flow, etc.) characterizing the NASA LaRC QFF facility.

  16. Examination of Wave Speed in Rotating Detonation Engines Using Simplified Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Paxson, Daniel E.

    2018-01-01

    A simplified, two-dimensional, computational fluid dynamic (CFD) simulation with a reactive Euler solver is used to examine possible causes for the low detonation wave propagation speeds that are consistently observed in air-breathing rotating detonation engine (RDE) experiments. Intense, small-scale turbulence is proposed as the primary mechanism. While the solver cannot model this turbulence, it can be used to examine the most likely and most profound effect of turbulence: a substantial enlargement of the reaction zone or, equivalently, an effective reduction in the chemical reaction rate. It is demonstrated that in the unique flowfield of the RDE, a reduction in reaction rate leads to a reduction in the detonation speed. A subsequent test of reduced reaction rate in a purely one-dimensional pulsed detonation engine (PDE) flowfield yields no reduction in wave speed. The reasons for this are explained. The impact of reduced wave speed on RDE performance is then examined, and found to be minimal. Two other potential mechanisms, heat transfer and reactive mixture non-uniformity, are briefly examined. In the context of the simulation used for this study, both mechanisms are shown to have negligible effect on either wave speed or performance.

  17. A comparison of FE beam and continuum elements for typical nitinol stent geometries

    NASA Astrophysics Data System (ADS)

    Ballew, Wesley; Seelecke, Stefan

    2009-03-01

    With interest in improved efficiency and a more complete description of the SMA material, this paper compares finite element (FE) simulations of typical stent geometries using two different constitutive models and two different element types. Typically, continuum elements are used for the simulation of stents; for example, the commercial FE software ANSYS offers a continuum element based on Auricchio's SMA model. Almost every stent geometry, however, is made up of long and slender components and can be modeled more efficiently, in the computational sense, with beam elements. Using the ANSYS user programmable material feature, we implement the free-energy-based SMA model developed by Mueller and Seelecke into the ANSYS beam element 188. Convergence behavior for both beam and continuum formulations is studied in terms of element and layer number, respectively. This is systematically illustrated first for the case of a straight cantilever beam under end loading, and subsequently for a section of a z-bend wire, a typical stent sub-geometry. It is shown that the computation times for the beam element are reduced to only one third of those of the continuum element, while both formulations display a comparable force/displacement response.

  18. Generalized EC&LSS computer program configuration control

    NASA Technical Reports Server (NTRS)

    Blakely, R. L.

    1976-01-01

    The generalized environmental control and life support system (ECLSS) computer program (G189A) simulation of the shuttle orbiter ECLSS was upgraded. The G189A component model configuration was changed to represent the current PV102 and subsequent vehicle ECLSS configurations as defined by baseline ARS and ATCS schematics. The diagrammatic output schematics of the gas, water, and freon loops were also revised to agree with the new ECLSS configuration. The accuracy of the transient analysis was enhanced by incorporating the thermal mass effects of the equipment, structure, and fluid in the ARS gas and water loops and in the ATCS freon loops. The sources of the data used to upgrade the simulation are: (1) ATCS freon loop line sizes and lengths; (2) ARS water loop line sizes and lengths; (3) ARS water loop and ATCS freon loop component and equipment weights; and (4) ARS cabin and avionics bay thermal capacitance and conductance values. A single G189A combination master program library tape was generated which contains all of the master program library versions which were previously maintained on separate tapes. A new component subroutine, PIPETL, was developed and incorporated into the G189A master program library.

  19. Experimental investigation of performance and dynamic loading of an axial-flow marine hydrokinetic turbine with comparison to predicted design values from BEM computations

    NASA Astrophysics Data System (ADS)

    van Ness, Katherine; Hill, Craig; Aliseda, Alberto; Polagye, Brian

    2017-11-01

    Experimental measurements of a 0.45-m diameter, variable-pitch marine hydrokinetic (MHK) turbine were collected in a tow tank at different tip speed ratios and blade pitch angles. The coefficients of power and thrust are computed from direct measurements of torque, force and angular speed at the hub level. Loads on individual blades were measured with a six-degree of freedom load cell mounted at the root of one of the turbine blades. This information is used to validate the performance predictions provided by blade element model (BEM) simulations used in the turbine design, specifically the open-source code WTPerf developed by the National Renewable Energy Lab (NREL). Predictions of blade and hub loads by NREL's AeroDyn are also validated for the first time for an axial-flow MHK turbine. The influence of design twist angle, combined with the variable pitch angle, on the flow separation and subsequent blade loading will be analyzed with the complementary information from simulations and experiments. Funding for this research was provided by the United States Naval Facilities Engineering Command.

  20. Ensemble modeling of stochastic unsteady open-channel flow in terms of its time-space evolutionary probability distribution - Part 1: theoretical development

    NASA Astrophysics Data System (ADS)

    Dib, Alain; Kavvas, M. Levent

    2018-03-01

    The Saint-Venant equations are commonly used as the governing equations for modeling spatially varied unsteady flow in open channels. The presence of uncertainties in the channel or flow parameters renders these equations stochastic, thus requiring their solution in a stochastic framework in order to quantify the ensemble behavior and the variability of the process. While the Monte Carlo approach can be used for such a solution, the computational expense of the large number of simulations it requires is a disadvantage. This study proposes, explains, and derives a new methodology for solving the stochastic Saint-Venant equations in only one shot, without the need for a large number of simulations. The proposed methodology is derived by developing the nonlocal Lagrangian-Eulerian Fokker-Planck equation of the characteristic form of the stochastic Saint-Venant equations for an open-channel flow process, with an uncertain roughness coefficient. A numerical method for its solution is subsequently devised. The application and validation of this methodology are provided in a companion paper, in which the statistical results computed by the proposed methodology are compared against the results obtained by the Monte Carlo approach.
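
    The Monte Carlo baseline that the proposed one-shot methodology is compared against can be sketched in a few lines: sample the uncertain roughness coefficient many times and recompute a flow quantity for each realization. The hedged example below does this for the normal depth of a wide rectangular channel under Manning's equation; the channel numbers are illustrative, and the paper's Fokker-Planck solution itself is not reproduced.

        # Monte Carlo ensemble for an uncertain Manning roughness coefficient.
        import numpy as np

        rng = np.random.default_rng(1)
        q = 2.0                                      # discharge per unit width, m^2/s
        slope = 1.0e-3                               # bed slope
        n_manning = rng.normal(0.030, 0.005, 50_000) # uncertain roughness samples
        n_manning = n_manning[n_manning > 0.0]

        # Normal depth of a wide rectangular channel: q = h**(5/3) * sqrt(S) / n
        depth = (q * n_manning / np.sqrt(slope)) ** 0.6

        print(f"mean depth = {depth.mean():.3f} m, std = {depth.std():.3f} m")
        # The methodology of the paper targets this kind of ensemble behaviour (the
        # evolving probability density of the flow variables) in a single solve.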

  1. Detailed Analysis of the Binding Mode of Vanilloids to Transient Receptor Potential Vanilloid Type I (TRPV1) by a Mutational and Computational Study

    PubMed Central

    Mori, Yoshikazu; Ogawa, Kazuo; Warabi, Eiji; Yamamoto, Masahiro; Hirokawa, Takatsugu

    2016-01-01

    Transient receptor potential vanilloid type 1 (TRPV1) is a non-selective cation channel and a multimodal sensor protein. Since the precise structure of TRPV1 was obtained by electron cryo-microscopy, the binding mode of representative agonists such as capsaicin and resiniferatoxin (RTX) has been extensively characterized; however, detailed information on the binding mode of other vanilloids remains lacking. In this study, mutational analysis of human TRPV1 was performed, and four agonists (capsaicin, RTX, [6]-shogaol and [6]-gingerol) were used to identify amino acid residues involved in ligand binding and/or modulation of proton sensitivity. The detailed binding mode of each ligand was then simulated by computational analysis. As a result, three amino acids (L518, F591 and L670) were newly identified as being involved in ligand binding and/or modulation of proton sensitivity. In addition, in silico docking simulation and a subsequent mutational study suggested that [6]-gingerol might bind to and activate TRPV1 in a unique manner. These results provide novel insights into the binding mode of various vanilloids to the channel and will be helpful in developing a TRPV1 modulator. PMID:27606946

  2. Quantum simulator review

    NASA Astrophysics Data System (ADS)

    Bednar, Earl; Drager, Steven L.

    2007-04-01

    The objective of quantum information processing is to harness the paradigm shift offered by quantum computing to solve classically hard and computationally challenging problems. Some of our computationally challenging problems of interest include rapid image processing, rapid optimization of logistics, protecting information, secure distributed simulation, and massively parallel computation. Currently, one important problem with quantum information processing is that quantum computers are difficult to realize due to poor scalability and a high prevalence of errors. Therefore, we have supported the development of Quantum eXpress and QuIDD Pro, two quantum computer simulators running on classical computers for the development and testing of new quantum algorithms and processes. This paper examines the different methods used by these two quantum computing simulators. It reviews both simulators, highlighting each simulator's background, interface, and special features. It also demonstrates the implementation of current quantum algorithms on each simulator. It concludes with summary comments on both simulators.

  3. Signature modelling and radiometric rendering equations in infrared scene simulation systems

    NASA Astrophysics Data System (ADS)

    Willers, Cornelius J.; Willers, Maria S.; Lapierre, Fabian

    2011-11-01

    The development and optimisation of modern infrared systems necessitate the use of simulation systems to create radiometrically realistic representations (e.g. images) of infrared scenes. Such simulation systems are used in signature prediction, the development of surveillance and missile sensors, signal/image processing algorithm development and aircraft self-protection countermeasure system development and evaluation. Even the most cursory investigation reveals a multitude of factors affecting the infrared signatures of real-world objects. Factors such as spectral emissivity, spatial/volumetric radiance distribution, specular reflection, reflected direct sunlight, reflected ambient light, atmospheric degradation and more, all affect the presentation of an object's instantaneous signature. The signature is furthermore dynamically varying as a result of internal and external influences on the object, resulting from the heat balance comprising insolation, internal heat sources, aerodynamic heating (airborne objects), conduction, convection and radiation. In order to accurately render the object's signature in a computer simulation, the rendering equations must therefore account for all the elements of the signature. In this overview paper, the signature models, rendering equations and application frameworks of three infrared simulation systems are reviewed and compared. The paper first considers the problem of infrared scene simulation in a framework for simulation validation. This approach provides concise definitions and a convenient context for considering signature models and subsequent computer implementation. The primary radiometric requirements for an infrared scene simulator are presented next. The signature models and rendering equations implemented in OSMOSIS (Belgian Royal Military Academy), DIRSIG (Rochester Institute of Technology) and OSSIM (CSIR & Denel Dynamics) are reviewed. In spite of these three simulation systems' different application focus areas, their underlying physics-based approach is similar. The commonalities and differences between the different systems are investigated, in the context of their somewhat different application areas. The application of an infrared scene simulation system towards the development of imaging missiles and missile countermeasures is briefly described. Flowing from the review of the available models and equations, recommendations are made to further enhance and improve the signature models and rendering equations in infrared scene simulators.

  4. Micro Blowing Simulations Using a Coupled Finite-Volume Lattice-Boltzmann LES Approach

    NASA Technical Reports Server (NTRS)

    Menon, S.; Feiz, H.

    1990-01-01

    Three-dimensional large-eddy simulations (LES) of single and multiple jet-in-cross-flow (JICF) are conducted using the 19-bit Lattice Boltzmann Equation (LBE) method coupled with a conventional finite-volume (FV) scheme. In this coupled LBE-FV approach, the LBE-LES is employed to simulate the flow inside the jet nozzles while the FV-LES is used to simulate the crossflow. The key application of this technique is the study of the micro blowing technique (MBT) for drag control similar to the recent experiments at NASA/GRC. It is necessary to resolve the flow inside the micro-blowing and suction holes with high resolution without being constrained by the FV time-step restriction. The coupled LBE-FV-LES approach achieves this objective in a computationally efficient manner. A single jet in crossflow case is used for validation purposes and the results are compared with experimental data and a full LBE-LES simulation. Good agreement with data is obtained. Subsequently, MBT over a flat plate with porosity of 25% is simulated using 9 jets in a compressible cross flow at a Mach number of 0.4. It is shown that MBT suppresses the near-wall vortices and reduces the skin friction by up to 50 percent. This is in good agreement with experimental data.
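
    As a hedged illustration of the kind of kernel such a coupled scheme is built on, the sketch below advances a two-dimensional, nine-velocity (D2Q9) lattice Boltzmann BGK model on a periodic domain. It is not the 19-velocity LBE-LES of this record and carries no subgrid model or finite-volume coupling; lattice size, relaxation time, and initial condition are arbitrary.

        # Single-relaxation-time (BGK) lattice Boltzmann update on a D2Q9 lattice.
        import numpy as np

        c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
                      [1, 1], [-1, 1], [-1, -1], [1, -1]])         # discrete velocities
        w = np.array([4/9, 1/9, 1/9, 1/9, 1/9, 1/36, 1/36, 1/36, 1/36])
        tau = 0.8                                                  # relaxation time
        nx, ny = 64, 64

        def equilibrium(rho, ux, uy):
            cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
            usq = ux**2 + uy**2
            return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

        rho = np.ones((nx, ny)) + 0.01 * np.random.default_rng(2).standard_normal((nx, ny))
        f = equilibrium(rho, np.zeros((nx, ny)), np.zeros((nx, ny)))
        mass0 = f.sum()

        for step in range(100):
            rho = f.sum(axis=0)
            ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
            uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
            f += -(f - equilibrium(rho, ux, uy)) / tau             # BGK collision
            for i in range(9):                                     # streaming
                f[i] = np.roll(f[i], shift=(c[i, 0], c[i, 1]), axis=(0, 1))

        print("mass conserved:", np.isclose(f.sum(), mass0))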

  5. CFD and PTV steady flow investigation in an anatomically accurate abdominal aortic aneurysm.

    PubMed

    Boutsianis, Evangelos; Guala, Michele; Olgac, Ufuk; Wildermuth, Simon; Hoyer, Klaus; Ventikos, Yiannis; Poulikakos, Dimos

    2009-01-01

    There is considerable interest in computational and experimental flow investigations within abdominal aortic aneurysms (AAAs). This task requires advanced grid generation techniques and cross-validation because of the anatomical complexity. The purpose of this study is to examine the feasibility of velocity measurements by particle tracking velocimetry (PTV) in realistic AAA models. Computed tomography and rapid prototyping were combined to digitize and construct a silicone replica of a patient-specific AAA. Three-dimensional velocity measurements were acquired using PTV under steady averaged resting boundary conditions. Computational fluid dynamics (CFD) simulations were subsequently carried out with identical boundary conditions. The computational grid was created by splitting the luminal volume into manifold and nonmanifold subsections. They were filled with tetrahedral and hexahedral elements, respectively. Grid independency was tested on three successively refined meshes. Velocity differences of about 1% in all three directions existed mainly within the AAA sac. Pressure revealed similar variations, with the sparser mesh predicting larger values. PTV velocity measurements were taken along the abdominal aorta and showed good agreement with the numerical data. The results within the aneurysm neck and sac showed average velocity variations of about 5% of the mean inlet velocity. The corresponding average differences increased for all velocity components downstream of the iliac bifurcation to as much as 15%. The two domains differed slightly due to flow-induced forces acting on the silicone model. Velocity quantification through narrow branches was problematic due to decreased signal-to-noise ratio at the larger local velocities. Computational wall pressure and shear fields are also presented. The agreement between CFD simulations and the PTV experimental data was confirmed by three-dimensional velocity comparisons at several locations within the investigated AAA anatomy, indicating the feasibility of this approach.

  6. A comprehensive pipeline for multi-resolution modeling of the mitral valve: Validation, computational efficiency, and predictive capability.

    PubMed

    Drach, Andrew; Khalighi, Amir H; Sacks, Michael S

    2018-02-01

    Multiple studies have demonstrated that the pathological geometries unique to each patient can affect the durability of mitral valve (MV) repairs. While computational modeling of the MV is a promising approach to improve the surgical outcomes, the complex MV geometry precludes use of simplified models. Moreover, the lack of complete in vivo geometric information presents significant challenges in the development of patient-specific computational models. There is thus a need to determine the level of detail necessary for predictive MV models. To address this issue, we have developed a novel pipeline for building attribute-rich computational models of MV with varying fidelity directly from the in vitro imaging data. The approach combines high-resolution geometric information from loaded and unloaded states to achieve a high level of anatomic detail, followed by mapping and parametric embedding of tissue attributes to build a high-resolution, attribute-rich computational models. Subsequent lower resolution models were then developed and evaluated by comparing the displacements and surface strains to those extracted from the imaging data. We then identified the critical levels of fidelity for building predictive MV models in the dilated and repaired states. We demonstrated that a model with a feature size of about 5 mm and mesh size of about 1 mm was sufficient to predict the overall MV shape, stress, and strain distributions with high accuracy. However, we also noted that more detailed models were found to be needed to simulate microstructural events. We conclude that the developed pipeline enables sufficiently complex models for biomechanical simulations of MV in normal, dilated, repaired states. Copyright © 2017 John Wiley & Sons, Ltd.

  7. N-BODY SIMULATION OF PLANETESIMAL FORMATION THROUGH GRAVITATIONAL INSTABILITY AND COAGULATION. II. ACCRETION MODEL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michikoshi, Shugo; Kokubo, Eiichiro; Inutsuka, Shu-ichiro, E-mail: michikoshi@cfca.j, E-mail: kokubo@th.nao.ac.j, E-mail: inutsuka@tap.scphys.kyoto-u.ac.j

    2009-10-01

    The gravitational instability of a dust layer is one of the scenarios for planetesimal formation. If the density of a dust layer becomes sufficiently high as a result of the sedimentation of dust grains toward the midplane of a protoplanetary disk, the layer becomes gravitationally unstable and spontaneously fragments into planetesimals. Using a shearing box method, we performed local N-body simulations of gravitational instability of a dust layer and subsequent coagulation without gas and investigated the basic formation process of planetesimals. In this paper, we adopted the accretion model as a collision model. A gravitationally bound pair of particles is replaced by a single particle with the total mass of the pair. This accretion model enables us to perform long-term and large-scale calculations. We confirmed that the formation process of planetesimals is the same as that in the previous paper with the rubble pile models. The formation process is divided into three stages: the formation of nonaxisymmetric structures; the creation of planetesimal seeds; and their collisional growth. We investigated the dependence of the planetesimal mass on the simulation domain size. We found that the mean mass of planetesimals formed in simulations is proportional to L_y^(3/2), where L_y is the size of the computational domain in the direction of rotation. However, the mean mass of planetesimals is independent of L_x, where L_x is the size of the computational domain in the radial direction, provided L_x is sufficiently large. We presented an estimation formula for the planetesimal mass that takes the simulation domain size into account.
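
    The accretion prescription described above (a bound, colliding pair is replaced by one particle carrying the total mass) can be sketched as follows. The energy test and units are schematic (G = 1), and the snippet is not the paper's N-body implementation.

        # Merge a particle pair if it is gravitationally bound, conserving mass and momentum.
        import numpy as np

        G = 1.0

        def try_merge(m1, x1, v1, m2, x2, v2):
            """Return merged (m, x, v) if the pair is bound, else None."""
            mu = m1 * m2 / (m1 + m2)                  # reduced mass
            e_kin = 0.5 * mu * np.linalg.norm(v1 - v2) ** 2
            e_pot = -G * m1 * m2 / np.linalg.norm(x1 - x2)
            if e_kin + e_pot >= 0.0:                  # unbound pair: no accretion
                return None
            m = m1 + m2
            x = (m1 * x1 + m2 * x2) / m               # centre of mass
            v = (m1 * v1 + m2 * v2) / m               # total momentum conserved
            return m, x, v

        merged = try_merge(1.0, np.zeros(3), np.array([0.0, 0.10, 0.0]),
                           2.0, np.array([0.5, 0.0, 0.0]), np.array([0.0, -0.05, 0.0]))
        print("merged particle:", merged)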

  8. Discrete-event computer simulation methods in the optimisation of a physiotherapy clinic.

    PubMed

    Villamizar, J R; Coelli, F C; Pereira, W C A; Almeida, R M V R

    2011-03-01

    To develop a computer model to analyse the performance of a standard physiotherapy clinic in the city of Rio de Janeiro, Brazil. The clinic receives an average of 80 patients/day and offers 10 treatment modalities. Details of patient procedures and treatment routines were obtained from direct interviews with clinic staff. Additional data (e.g. arrival time, treatment duration, length of stay) were obtained for 2000 patients from the clinic's computerised records from November 2005 to February 2006. A discrete-event model was used to simulate the clinic's operational routine. The initial model was built to reproduce the actual configuration of the clinic, and five simulation strategies were subsequently implemented, representing changes in the number of patients, human resources of the clinic and the scheduling of patient arrivals. Findings indicated that the actual clinic configuration could accept up to 89 patients/day, with an average length of stay of 119 minutes and an average patient waiting time of 3 minutes. When the scheduling of patient arrivals was increased to an interval of 6.5 minutes, maximum attendance increased to 114 patients/day. For the actual clinic configuration, optimal staffing consisted of three physiotherapists and 12 students. According to the simulation, the same 89 patients could be attended when the infrastructure was decreased to five kinesiotherapy rooms, two cardiotherapy rooms and three global postural reeducation rooms. The model was able to evaluate the capacity of the actual clinic configuration, and additional simulation strategies indicated how the operation of the clinic depended on the main study variables. Copyright © 2010 Chartered Society of Physiotherapy. Published by Elsevier Ltd. All rights reserved.
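
    The discrete-event idea behind the model above can be sketched with a simple event queue: patients arrive at a fixed scheduling interval, wait for a free treatment station, are treated, and leave. The station count and treatment-time distribution below are illustrative placeholders, not the clinic's actual routines or the published results.

        # Bare-bones discrete-event clinic model with a priority queue of events.
        import heapq, random

        random.seed(0)
        ARRIVAL_INTERVAL = 6.5        # minutes between scheduled arrivals
        N_STATIONS = 12               # illustrative pool of treatment stations
        N_PATIENTS = 89

        events = [(i * ARRIVAL_INTERVAL, "arrival", i) for i in range(N_PATIENTS)]
        heapq.heapify(events)
        free_at = [0.0] * N_STATIONS  # time at which each station becomes free
        waits, stays = [], []

        while events:
            t, kind, pid = heapq.heappop(events)
            if kind == "arrival":
                k = min(range(N_STATIONS), key=lambda j: free_at[j])
                start = max(t, free_at[k])
                service = random.expovariate(1.0 / 60.0)   # mean 60-minute treatment
                free_at[k] = start + service
                waits.append(start - t)
                heapq.heappush(events, (start + service, "departure", pid))
            else:
                stays.append(t - pid * ARRIVAL_INTERVAL)    # departure minus arrival

        print(f"mean wait {sum(waits)/len(waits):.1f} min, "
              f"mean stay {sum(stays)/len(stays):.1f} min")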

  9. A breakthrough for experiencing and understanding simulated physics

    NASA Technical Reports Server (NTRS)

    Watson, Val

    1988-01-01

    The use of computer simulation in physics research is discussed, focusing on improvements to graphic workstations. Simulation capabilities and applications of enhanced visualization tools are outlined. The elements of an ideal computer simulation are presented and the potential for improving various simulation elements is examined. The interface between the human and the computer and simulation models are considered. Recommendations are made for changes in computer simulation practices and applications of simulation technology in education.

  10. Method for decreasing CT simulation time of complex phantoms and systems through separation of material specific projection data

    NASA Astrophysics Data System (ADS)

    Divel, Sarah E.; Christensen, Soren; Wintermark, Max; Lansberg, Maarten G.; Pelc, Norbert J.

    2017-03-01

    Computer simulation is a powerful tool in CT; however, long simulation times of complex phantoms and systems, especially when modeling many physical aspects (e.g., spectrum, finite detector and source size), hinder the ability to realistically and efficiently evaluate and optimize CT techniques. Long simulation times primarily result from the tracing of hundreds of line integrals through each of the hundreds of geometrical shapes defined within the phantom. However, when the goal is to perform dynamic simulations or test many scan protocols using a particular phantom, traditional simulation methods inefficiently and repeatedly calculate line integrals through the same set of structures although only a few parameters change in each new case. In this work, we have developed a new simulation framework that overcomes such inefficiencies by dividing the phantom into material-specific regions with the same time-attenuation profiles, acquiring and storing monoenergetic projections of the regions, and subsequently scaling and combining the projections to create equivalent polyenergetic sinograms. The simulation framework is especially efficient for the validation and optimization of CT perfusion, which requires analysis of many stroke cases and testing hundreds of scan protocols on a realistic and complex numerical brain phantom. Using this updated framework to conduct a 31-time-point simulation with 80 mm of z-coverage of a brain phantom on two 16-core Linux servers, we have reduced the simulation time from 62 hours to under 2.6 hours, a 95% reduction.
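
    The separation idea described above can be sketched directly: per-material line-integral sinograms (path lengths through each material region) are computed once, then combined with any spectrum and energy-dependent attenuation coefficients to form a polyenergetic sinogram without re-tracing rays. All numbers below (spectrum bins, attenuation values, sinogram shapes) are illustrative placeholders, not the brain-phantom data of the record.

        # Combine stored per-material path-length sinograms into a polyenergetic sinogram.
        import numpy as np

        rng = np.random.default_rng(3)
        n_views, n_dets = 360, 512
        materials = ["soft_tissue", "bone", "contrast"]

        # Pre-computed once: path length through each material, per ray (cm)
        path_len = {m: rng.uniform(0.0, 4.0, (n_views, n_dets)) for m in materials}

        spectrum = np.array([0.2, 0.4, 0.3, 0.1])                # relative fluence per energy bin
        mu = {                                                   # 1/cm per bin, illustrative
            "soft_tissue": np.array([0.27, 0.21, 0.18, 0.17]),
            "bone":        np.array([1.00, 0.60, 0.43, 0.36]),
            "contrast":    np.array([2.50, 1.00, 0.55, 0.40]),
        }

        transmitted = np.zeros((n_views, n_dets))
        for k, wk in enumerate(spectrum):
            line_integral = sum(mu[m][k] * path_len[m] for m in materials)
            transmitted += wk * np.exp(-line_integral)
        poly_sinogram = -np.log(transmitted / spectrum.sum())

        print("polyenergetic sinogram shape:", poly_sinogram.shape)
        # Changing the spectrum or a material's time-attenuation curve only repeats
        # this cheap combination step; no new ray tracing is needed.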

  11. Evaluation of lung recruitment maneuvers in acute respiratory distress syndrome using computer simulation.

    PubMed

    Das, Anup; Cole, Oana; Chikhani, Marc; Wang, Wenfei; Ali, Tayyba; Haque, Mainul; Bates, Declan G; Hardman, Jonathan G

    2015-01-12

    Direct comparison of the relative efficacy of different recruitment maneuvers (RMs) for patients with acute respiratory distress syndrome (ARDS) via clinical trials is difficult, due to the heterogeneity of patient populations and disease states, as well as a variety of practical issues. There is also significant uncertainty regarding the minimum values of positive end-expiratory pressure (PEEP) required to ensure maintenance of effective lung recruitment using RMs. We used patient-specific computational simulation to analyze how three different RMs act to improve physiological responses, and investigate how different levels of PEEP contribute to maintaining effective lung recruitment. We conducted experiments on five 'virtual' ARDS patients using a computational simulator that reproduces static and dynamic features of a multivariable clinical dataset on the responses of individual ARDS patients to a range of ventilator inputs. Three recruitment maneuvers (sustained inflation (SI), maximal recruitment strategy (MRS) followed by a titrated PEEP, and prolonged recruitment maneuver (PRM)) were implemented and evaluated for a range of different pressure settings. All maneuvers demonstrated improvements in gas exchange, but the extent and duration of improvement varied significantly, as did the observed mechanism of operation. Maintaining adequate post-RM levels of PEEP was seen to be crucial in avoiding cliff-edge type re-collapse of alveolar units for all maneuvers. For all five patients, the MRS exhibited the most prolonged improvement in oxygenation, and we found that a PEEP setting of 35 cm H2O with a fixed driving pressure of 15 cm H2O (above PEEP) was sufficient to achieve 95% recruitment. Subsequently, we found that PEEP titrated to a value of 16 cm H2O was able to maintain 95% recruitment in all five patients. There appears to be significant scope for reducing the peak levels of PEEP originally specified in the MRS and hence to avoid exposing the lung to unnecessarily high pressures. More generally, our study highlights the huge potential of computer simulation to assist in evaluating the efficacy of different recruitment maneuvers, in understanding their modes of operation, in optimizing RMs for individual patients, and in supporting clinicians in the rational design of improved treatment strategies.

  12. Population array and agricultural data arrays for the Los Alamos National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jacobson, K.W.; Duffy, S.; Kowalewsky, K.

    1998-07-01

    To quantify or estimate the environmental and radiological impacts from man-made sources of radioactive effluents, certain dose assessment procedures were developed by various government and regulatory agencies. Some of these procedures encourage the use of computer simulations (models) to calculate air dispersion, environmental transport, and subsequent human exposure to radioactivity. Such assessment procedures are frequently used to demonstrate compliance with Department of Energy (DOE) and US Environmental Protection Agency (USEPA) regulations. Knowledge of the density and distribution of the population surrounding a source is an essential component in assessing the impacts from radioactive effluents. Also, as an aid to calculating the dose to a given population, agricultural data relevant to the dose assessment procedure (or computer model) are often required. This report provides such population and agricultural data for the area surrounding Los Alamos National Laboratory.

  13. Computational Reduction of Specimen Noise to Enable Improved Thermography Characterization of Flaws in Graphite Polymer Composites

    NASA Technical Reports Server (NTRS)

    Winfree, William P.; Howell, Patricia A.; Zalameda, Joseph N.

    2014-01-01

    Flaw detection and characterization with thermographic techniques in graphite polymer composites are often limited by localized variations in the thermographic response. Variations in properties such as acceptable porosity, fiber volume content and surface polymer thickness result in variations in the thermal response that in general cause significant variations in the initial thermal response. These result in a "noise" floor that increases the difficulty of detecting and characterizing deeper flaws. A method is presented for computationally removing a significant amount of the "noise" from near surface porosity by diffusing the early time response, then subtracting it from subsequent responses. Simulations of the thermal response of a composite are utilized in defining the limitations of the technique. This method for reducing the data is shown to give considerable improvement characterizing both the size and depth of damage. Examples are shown for data acquired on specimens with fabricated delaminations and impact damage.
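
    The reduction described above can be mimicked on synthetic data: a near-surface pattern that spreads laterally over time is estimated by diffusing an early-time frame (here a Gaussian blur stands in for the diffusion step) and subtracting it from a later frame, leaving the deeper flaw signal. Frame counts, blur widths, and amplitudes below are synthetic placeholders, not the method's actual processing parameters.

        # Diffuse an early-time frame and subtract it from a later frame.
        import numpy as np
        from scipy.ndimage import gaussian_filter

        rng = np.random.default_rng(4)
        n_frames, ny, nx = 50, 128, 128
        pattern = gaussian_filter(rng.standard_normal((ny, nx)), 3.0)
        pattern *= 0.2 / pattern.std()                   # fixed near-surface "noise"

        frames = np.empty((n_frames, ny, nx))
        for t in range(n_frames):
            decay = 1.0 / np.sqrt(t + 1.0)               # surface cooling
            frames[t] = decay * (1.0 + gaussian_filter(pattern, np.sqrt(t + 1.0)))
        frames[30:, 50:70, 50:70] += 0.02                # deep flaw: late-time excess

        t_early, t_late = 2, 35
        extra = np.sqrt((t_late + 1.0) - (t_early + 1.0))  # additional lateral diffusion
        reference = gaussian_filter(frames[t_early], extra)
        reference *= frames[t_late].mean() / reference.mean()
        corrected = frames[t_late] - reference

        print(f"background clutter std: {frames[t_late][:40, :40].std():.4f} "
              f"-> {corrected[:40, :40].std():.4f}")
        print(f"flaw contrast retained: {corrected[55:65, 55:65].mean():.4f}")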

  14. Computational modeling of the neural representation of object shape in the primate ventral visual system

    PubMed Central

    Eguchi, Akihiro; Mender, Bedeho M. W.; Evans, Benjamin D.; Humphreys, Glyn W.; Stringer, Simon M.

    2015-01-01

    Neurons in successive stages of the primate ventral visual pathway encode the spatial structure of visual objects. In this paper, we investigate through computer simulation how these cell firing properties may develop through unsupervised visually-guided learning. Individual neurons in the model are shown to exploit statistical regularity and temporal continuity of the visual inputs during training to learn firing properties that are similar to those of neurons in V4 and TEO. Neurons in V4 encode the conformation of boundary contour elements at a particular position within an object regardless of the location of the object on the retina, while neurons in TEO integrate information from multiple boundary contour elements. This representation goes beyond mere object recognition, in which neurons simply respond to the presence of a whole object, and provides an essential foundation from which the brain is subsequently able to recognize the whole object. PMID:26300766

  15. Continuous development of current sheets near and away from magnetic nulls

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumar, Sanjay; Bhattacharyya, R.

    2016-04-15

    The presented computations compare the strength of current sheets which develop near and away from the magnetic nulls. To ensure the spontaneous generation of current sheets, the computations are performed congruently with Parker's magnetostatic theorem. The simulations evince current sheets near two dimensional and three dimensional magnetic nulls as well as away from them. An important finding of this work is in the demonstration of comparative scaling of peak current density with numerical resolution, for these different types of current sheets. The results document current sheets near two dimensional magnetic nulls to have larger strength while exhibiting a stronger scaling than the current sheets close to three dimensional magnetic nulls or away from any magnetic null. The comparative scaling points to a scenario where the magnetic topology near a developing current sheet is important for energetics of the subsequent reconnection.

  16. Modulation of the error-related negativity by response conflict.

    PubMed

    Danielmeier, Claudia; Wessel, Jan R; Steinhauser, Marco; Ullsperger, Markus

    2009-11-01

    An arrow version of the Eriksen flanker task was employed to investigate the influence of conflict on the error-related negativity (ERN). The degree of conflict was modulated by varying the distance between flankers and the target arrow (CLOSE and FAR conditions). Error rates and reaction time data from a behavioral experiment were used to adapt a connectionist model of this task. This model was based on the conflict monitoring theory and simulated behavioral and event-related potential data. The computational model predicted an increased ERN amplitude in FAR incompatible (the low-conflict condition) compared to CLOSE incompatible errors (the high-conflict condition). A subsequent ERP experiment confirmed the model predictions. The computational model explains this finding with larger post-response conflict in far trials. In addition, data and model predictions of the N2 and the LRP support the conflict interpretation of the ERN.

  17. Computational reduction of specimen noise to enable improved thermography characterization of flaws in graphite polymer composites

    NASA Astrophysics Data System (ADS)

    Winfree, William P.; Howell, Patricia A.; Zalameda, Joseph N.

    2014-05-01

    Flaw detection and characterization with thermographic techniques in graphite polymer composites are often limited by localized variations in the thermographic response. Variations in properties such as acceptable porosity, fiber volume content and surface polymer thickness result in variations in the thermal response that in general cause significant variations in the initial thermal response. These result in a "noise" floor that increases the difficulty of detecting and characterizing deeper flaws. A method is presented for computationally removing a significant amount of the "noise" from near surface porosity by diffusing the early time response, then subtracting it from subsequent responses. Simulations of the thermal response of a composite are utilized in defining the limitations of the technique. This method for reducing the data is shown to give considerable improvement characterizing both the size and depth of damage. Examples are shown for data acquired on specimens with fabricated delaminations and impact damage.

  18. Computational and experimental analysis of DNA shuffling

    PubMed Central

    Maheshri, Narendra; Schaffer, David V.

    2003-01-01

    We describe a computational model of DNA shuffling based on the thermodynamics and kinetics of this process. The model independently tracks a representative ensemble of DNA molecules and records their states at every stage of a shuffling reaction. These data can subsequently be analyzed to yield information on any relevant metric, including reassembly efficiency, crossover number, type and distribution, and DNA sequence length distributions. The predictive ability of the model was validated by comparison to three independent sets of experimental data, and analysis of the simulation results led to several unique insights into the DNA shuffling process. We examine a tradeoff between crossover frequency and reassembly efficiency and illustrate the effects of experimental parameters on this relationship. Furthermore, we discuss conditions that promote the formation of useless “junk” DNA sequences or multimeric sequences containing multiple copies of the reassembled product. This model will therefore aid in the design of optimal shuffling reaction conditions. PMID:12626764

  19. A Comprehensive Study on Energy Efficiency and Performance of Flash-based SSD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Seon-Yeon; Kim, Youngjae; Urgaonkar, Bhuvan

    2011-01-01

    Use of flash memory as a storage medium is becoming popular in diverse computing environments. However, because of differences in interface, flash memory requires a hard-disk-emulation layer, called FTL (flash translation layer). Although the FTL enables flash memory storages to replace conventional hard disks, it induces significant computational and space overhead. Despite the low power consumption of flash memory, this overhead leads to significant power consumption in an overall storage system. In this paper, we analyze the characteristics of flash-based storage devices from the viewpoint of power consumption and energy efficiency by using various methodologies. First, we utilize simulation to investigate the internal operation of flash-based storages. Subsequently, we measure the performance and energy efficiency of commodity flash-based SSDs by using microbenchmarks to identify the block-device-level characteristics and macrobenchmarks to reveal their filesystem-level characteristics.

  20. Cognitive diagnosis modelling incorporating item response times.

    PubMed

    Zhan, Peida; Jiao, Hong; Liao, Dandan

    2018-05-01

    To provide more refined diagnostic feedback with collateral information in item response times (RTs), this study proposed joint modelling of attributes and response speed using item responses and RTs simultaneously for cognitive diagnosis. For illustration, an extended deterministic input, noisy 'and' gate (DINA) model was proposed for joint modelling of responses and RTs. Model parameter estimation was explored using the Bayesian Markov chain Monte Carlo (MCMC) method. The PISA 2012 computer-based mathematics data were analysed first. These real data estimates were treated as true values in a subsequent simulation study. A follow-up simulation study with ideal testing conditions was conducted as well to further evaluate model parameter recovery. The results indicated that model parameters could be well recovered using the MCMC approach. Further, incorporating RTs into the DINA model would improve attribute and profile correct classification rates and result in more accurate and precise estimation of the model parameters. © 2017 The British Psychological Society.
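
    The base model being extended can be written down compactly: in the DINA model an examinee answers an item correctly with probability (1 - slip) when all attributes required by the item's Q-matrix row are mastered, and with the guessing probability otherwise; a lognormal component supplies the response times. The hedged sketch below generates one examinee's responses and RTs with made-up parameter values; it is not the paper's joint estimation procedure.

        # DINA response probabilities plus a lognormal response-time component.
        import numpy as np

        rng = np.random.default_rng(5)

        def dina_prob(alpha, q, slip, guess):
            """P(correct) per item for one examinee under the DINA model."""
            eta = np.all(alpha >= q, axis=1).astype(float)  # 1 if all required attributes mastered
            return (1.0 - slip) ** eta * guess ** (1.0 - eta)

        q_matrix = np.array([[1, 0, 0],      # item 1 requires attribute 1
                             [1, 1, 0],      # item 2 requires attributes 1 and 2
                             [0, 1, 1]])     # item 3 requires attributes 2 and 3
        slip = np.array([0.10, 0.15, 0.20])
        guess = np.array([0.20, 0.10, 0.25])
        alpha = np.array([1, 1, 0])          # examinee masters attributes 1 and 2

        p_correct = dina_prob(alpha, q_matrix, slip, guess)
        responses = rng.binomial(1, p_correct)

        beta = np.array([4.0, 4.3, 4.6])     # item time intensities (log-seconds)
        tau = 0.3                            # examinee speed
        rt = np.exp(rng.normal(beta - tau, 0.4))   # log T ~ Normal(beta - tau, 0.4)

        print("P(correct):", np.round(p_correct, 2), "responses:", responses)
        print("response times (s):", np.round(rt, 1))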

  1. Computer simulations of sympatric speciation in a simple food web

    NASA Astrophysics Data System (ADS)

    Luz-Burgoa, K.; Dell, Tony; de Oliveira, S. Moss

    2005-07-01

    Galapagos finches have motivated much theoretical research aimed at understanding the processes associated with the formation of new species. Inspired by them, in this paper we investigate the process of sympatric speciation in a simple food web model. To that end, we modify the individual-based Penna model that has been widely used to study aging as well as other evolutionary processes. Initially, our web consists of a primary food source and a single herbivore species that feeds on this resource. Subsequently, we introduce a predator that feeds on the herbivore. In both instances we directly manipulate the basal resource distribution and monitor the changes in the populations. Sympatric speciation is obtained for the top species in both cases, and our results suggest that the speciation velocity depends on how far up the food chain the focal population feeds. Simulations are done with three different sexual imprinting-like mechanisms, in order to discuss adaptation by natural selection.

  2. Characterizing Hypervelocity Impact Plasma Through Experiments and Simulations

    NASA Astrophysics Data System (ADS)

    Close, Sigrid; Lee, Nicolas; Fletcher, Alex; Nuttall, Andrew; Hew, Monica; Tarantino, Paul

    2017-10-01

    Hypervelocity micro particles, including meteoroids and space debris with masses <1 ng, routinely impact spacecraft and create dense plasma that expands at the isothermal sound speed. This plasma, with a charge separation commensurate with different species mobilities, can produce a strong electromagnetic pulse (EMP) with a broad frequency spectrum. Subsequent plasma oscillations resulting from instabilities can also emit significant power and may be responsible for many reported satellite anomalies. We present theory and recent results from ground-based impact tests aimed at characterizing hypervelocity impact plasma. We also show results from particle-in-cell (PIC) and computational fluid dynamics (CFD) simulations that allow us to extend to regimes not currently possible with ground-based technology. We show that significant impact-produced radio frequency (RF) emissions occurred in frequencies ranging from VHF through L-band and that these emissions were highly correlated with fast (>20 km/s) impacts that produced a fully ionized plasma.

  3. Geostatistical borehole image-based mapping of karst-carbonate aquifer pores

    USGS Publications Warehouse

    Michael Sukop,; Cunningham, Kevin J.

    2016-01-01

    Quantification of the character and spatial distribution of porosity in carbonate aquifers is important as input into computer models used in the calculation of intrinsic permeability and for next-generation, high-resolution groundwater flow simulations. Digital, optical, borehole-wall image data from three closely spaced boreholes in the karst-carbonate Biscayne aquifer in southeastern Florida are used in geostatistical experiments to assess the capabilities of various methods to create realistic two-dimensional models of vuggy megaporosity and matrix-porosity distribution in the limestone that composes the aquifer. When the borehole image data alone were used as the model training image, multiple-point geostatistics failed to detect the known spatial autocorrelation of vuggy megaporosity and matrix porosity among the three boreholes, which were only 10 m apart. Variogram analysis and subsequent Gaussian simulation produced results that showed a realistic conceptualization of horizontal continuity of strata dominated by vuggy megaporosity and matrix porosity among the three boreholes.
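
    The variogram analysis mentioned above can be illustrated with a hedged sketch: compute an empirical semivariogram of an indicator variable (e.g. vuggy megaporosity coded 1/0) sampled along depth, the quantity that a subsequent Gaussian simulation would be conditioned on. The depth profile below is synthetic, not borehole-image data.

        # Empirical (omnidirectional) semivariogram of a synthetic indicator profile.
        import numpy as np

        rng = np.random.default_rng(6)
        depth = np.arange(0.0, 30.0, 0.1)                  # metres
        indicator = ((np.sin(2 * np.pi * depth / 4.0)      # metre-scale strata + noise
                      + 0.5 * rng.standard_normal(depth.size)) > 0).astype(float)

        def empirical_variogram(z, x, lags, tol):
            d = np.abs(x[:, None] - x[None, :])            # pairwise separations
            sq = (z[:, None] - z[None, :]) ** 2
            return np.array([0.5 * sq[np.abs(d - h) <= tol].mean() for h in lags])

        lags = np.arange(0.2, 5.0, 0.4)
        gamma = empirical_variogram(indicator, depth, lags, tol=0.1)
        for h, g in zip(lags, gamma):
            print(f"lag {h:3.1f} m   semivariance {g:.3f}")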

  4. Multi-Instance Learning Models for Automated Support of Analysts in Simulated Surveillance Environments

    NASA Technical Reports Server (NTRS)

    Birisan, Mihnea; Beling, Peter

    2011-01-01

    New generations of surveillance drones are being outfitted with numerous high definition cameras. The rapid proliferation of fielded sensors and supporting capacity for processing and displaying data will translate into ever more capable platforms, but with increased capability comes increased complexity and scale that may diminish the usefulness of such platforms to human operators. We investigate methods for alleviating strain on analysts by automatically retrieving content specific to their current task using a machine learning technique known as Multi-Instance Learning (MIL). We use MIL to create a real time model of the analysts' task and subsequently use the model to dynamically retrieve relevant content. This paper presents results from a pilot experiment in which a computer agent is assigned analyst tasks such as identifying caravanning vehicles in a simulated vehicle traffic environment. We compare agent performance between MIL aided trials and unaided trials.

  5. Development of Monte Carlo simulations to provide scanner-specific organ dose coefficients for contemporary CT

    NASA Astrophysics Data System (ADS)

    Jansen, Jan T. M.; Shrimpton, Paul C.

    2016-07-01

    The ImPACT (imaging performance assessment of CT scanners) CT patient dosimetry calculator is still used world-wide to estimate organ and effective doses (E) for computed tomography (CT) examinations, although the tool is based on Monte Carlo calculations reflecting practice in the early 1990s. Subsequent developments in CT scanners, definitions of E, anthropomorphic phantoms, computers and radiation transport codes have all fuelled an urgent need for updated organ dose conversion factors for contemporary CT. A new system for such simulations has been developed and satisfactorily tested. Benchmark comparisons of normalised organ doses presently derived for three old scanners (General Electric 9800, Philips Tomoscan LX and Siemens Somatom DRH) are within 5% of published values. Moreover, calculated normalised values of CT Dose Index for these scanners are in reasonable agreement (within measurement and computational uncertainties of ±6% and ±1%, respectively) with reported standard measurements. Organ dose coefficients calculated for a contemporary CT scanner (Siemens Somatom Sensation 16) demonstrate potential deviations by up to around 30% from the surrogate values presently assumed (through a scanner matching process) when using the ImPACT CT Dosimetry tool for newer scanners. Also, illustrative estimates of E for some typical examinations and a range of anthropomorphic phantoms demonstrate the significant differences (by some tens of percent) that can arise when changing from the previously adopted stylised mathematical phantom to the voxel phantoms presently recommended by the International Commission on Radiological Protection (ICRP), and when following the 2007 ICRP recommendations (updated from 1990) concerning tissue weighting factors. Further simulations with the validated dosimetry system will provide updated series of dose coefficients for a wide range of contemporary scanners.

  6. Constant-pH Hybrid Nonequilibrium Molecular Dynamics–Monte Carlo Simulation Method

    PubMed Central

    2016-01-01

    A computational method is developed to carry out explicit solvent simulations of complex molecular systems under conditions of constant pH. In constant-pH simulations, preidentified ionizable sites are allowed to spontaneously protonate and deprotonate as a function of time in response to the environment and the imposed pH. The method, based on a hybrid scheme originally proposed by H. A. Stern (J. Chem. Phys.2007, 126, 164112), consists of carrying out short nonequilibrium molecular dynamics (neMD) switching trajectories to generate physically plausible configurations with changed protonation states that are subsequently accepted or rejected according to a Metropolis Monte Carlo (MC) criterion. To ensure microscopic detailed balance arising from such nonequilibrium switches, the atomic momenta are altered according to the symmetric two-ends momentum reversal prescription. To achieve higher efficiency, the original neMD–MC scheme is separated into two steps, reducing the need for generating a large number of unproductive and costly nonequilibrium trajectories. In the first step, the protonation state of a site is randomly attributed via a Metropolis MC process on the basis of an intrinsic pKa; an attempted nonequilibrium switch is generated only if this change in protonation state is accepted. This hybrid two-step inherent pKa neMD–MC simulation method is tested with single amino acids in solution (Asp, Glu, and His) and then applied to turkey ovomucoid third domain and hen egg-white lysozyme. Because of the simple linear increase in the computational cost relative to the number of titratable sites, the present method is naturally able to treat extremely large systems. PMID:26300709
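
    A schematic of the two-step acceptance logic described above, not the authors' implementation: a protonation change is first proposed from the intrinsic pKa via a cheap Metropolis test, and only if accepted is a costly nonequilibrium switch attempted, whose work enters a second Metropolis test. The free-energy expression, the placeholder switching work, and the routine that would supply it are illustrative assumptions.

    ```python
    import math, random

    KT_LN10 = 2.303  # ln(10); pH terms enter the free energy as kT*ln10*(pH - pKa)

    def intrinsic_mc_step(protonated, pKa, pH):
        """Step 1: cheap Metropolis test on the intrinsic pKa alone.
        Deprotonation is favoured when pH > pKa, protonation when pH < pKa."""
        sign = +1.0 if protonated else -1.0           # +1: attempt to deprotonate
        d_g = -sign * KT_LN10 * (pH - pKa)            # free-energy change in units of kT
        return random.random() < min(1.0, math.exp(-d_g))

    def nemd_mc_step(work_kT):
        """Step 2: accept the nonequilibrium switching trajectory with
        probability min(1, exp(-W/kT)), W being the switching work."""
        return random.random() < min(1.0, math.exp(-work_kT))

    # illustrative use with placeholder numbers
    protonated = True
    if intrinsic_mc_step(protonated, pKa=4.0, pH=7.0):
        work_kT = 1.5                                  # would come from an neMD switching run
        if nemd_mc_step(work_kT):
            protonated = not protonated
    print("protonated:", protonated)
    ```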

  7. Symplectic molecular dynamics simulations on specially designed parallel computers.

    PubMed

    Borstnik, Urban; Janezic, Dusanka

    2005-01-01

    We have developed a computer program for molecular dynamics (MD) simulation that implements the Split Integration Symplectic Method (SISM) and is designed to run on specialized parallel computers. The MD integration is performed by the SISM, which analytically treats high-frequency vibrational motion and thus enables the use of longer simulation time steps. The low-frequency motion is treated numerically on specially designed parallel computers, which decreases the computational time of each simulation time step. The combination of these approaches, which requires fewer time steps and less time per step, enables fast MD simulations. We study the computational performance of MD simulation of molecular systems on specialized computers and provide a comparison to standard personal computers. The combination of the SISM with two specialized parallel computers is an effective way to increase the speed of MD simulations up to 16-fold over a single PC processor.

  8. Large space structures controls research and development at Marshall Space Flight Center: Status and future plans

    NASA Technical Reports Server (NTRS)

    Buchanan, H. J.

    1983-01-01

    Work performed in Large Space Structures Controls research and development program at Marshall Space Flight Center is described. Studies to develop a multilevel control approach which supports a modular or building block approach to the buildup of space platforms are discussed. A concept has been developed and tested in three-axis computer simulation utilizing a five-body model of a basic space platform module. Analytical efforts have continued to focus on extension of the basic theory and subsequent application. Consideration is also given to specifications to evaluate several algorithms for controlling the shape of Large Space Structures.

  9. Revisiting the Boeing B-47 and the Avro Vulcan with implications on aircraft design today

    NASA Astrophysics Data System (ADS)

    van Seeters, Philip A.

    This project compares the cruise mission performance of the historic Boeing B-47 and Avro Vulcan. The author aims to demonstrate that, despite superficial similarities, these aircraft perform quite differently away from their intended design points. The investigation uses computer-aided design software and an aircraft sizing program to generate digital models of both airplanes. Subsequent simulations of various missions quantify the performance mainly in terms of fuel efficiency and productivity. Based on this comparison, the effort concludes that these aircraft do indeed perform differently and that a performance comparison based on a design mission alone is insufficient.

  10. Direct numerical simulation of auto-ignition of a hydrogen vortex ring reacting with hot air

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doom, Jeff; Mahesh, Krishnan

    2009-04-15

    Direct numerical simulation (DNS) is used to study chemically reacting, laminar vortex rings. A novel, all-Mach number algorithm developed by Doom et al. [J. Doom, Y. Hou, K. Mahesh, J. Comput. Phys. 226 (2007) 1136-1151] is used. The chemical mechanism is a nine species, nineteen reaction mechanism for H2/air combustion proposed by Mueller et al. [M.A. Mueller, T.J. Kim, R.A. Yetter, F.L. Dryer, Int. J. Chem. Kinet. 31 (1999) 113-125]. Diluted H2 at ambient temperature (300 K) is injected into hot air. The simulations study the effect of fuel/air ratios, oxidizer temperature, Lewis number and stroke ratio (ratio of piston stroke length to diameter). Results show that auto-ignition occurs in fuel lean, high temperature regions with low scalar dissipation at a 'most reactive' mixture fraction, ζ_MR (Mastorakos et al. [E. Mastorakos, T.A. Baritaud, T.J. Poinsot, Combust. Flame 109 (1997) 198-223]). Subsequent evolution of the flame is not predicted by ζ_MR; a most reactive temperature T_MR is defined and shown to predict both the initial auto-ignition as well as subsequent evolution. For stroke ratios less than the formation number, ignition in general occurs behind the vortex ring and propagates into the core. At higher oxidizer temperatures, ignition is almost instantaneous and occurs along the entire interface between fuel and oxidizer. For stroke ratios greater than the formation number, ignition initially occurs behind the leading vortex ring, then occurs along the length of the trailing column and propagates toward the ring. Lewis number is seen to affect both the initial ignition as well as subsequent flame evolution significantly. Non-uniform Lewis number simulations provide faster ignition and burnout time but a lower maximum temperature. The fuel rich reacting vortex ring provides the highest maximum temperature and the higher oxidizer temperature provides the fastest ignition time. The fuel lean reacting vortex ring has little effect on the flow and behaves similar to a non-reacting vortex ring.

  11. Computationally efficient analysis of particle transport and deposition in a human whole-lung-airway model. Part I: Theory and model validation.

    PubMed

    Kolanjiyil, Arun V; Kleinstreuer, Clement

    2016-12-01

    Computational predictions of aerosol transport and deposition in the human respiratory tract can assist in evaluating detrimental or therapeutic health effects when inhaling toxic particles or administering drugs. However, the sheer complexity of the human lung, featuring a total of 16 million tubular airways, prohibits detailed computer simulations of the fluid-particle dynamics for the entire respiratory system. Thus, in order to obtain useful and efficient particle deposition results, an alternative modeling approach is necessary where the whole-lung geometry is approximated and physiological boundary conditions are implemented to simulate breathing. In Part I, the present new whole-lung-airway model (WLAM) represents the actual lung geometry via a basic 3-D mouth-to-trachea configuration while all subsequent airways are lumped together, i.e., reduced to an exponentially expanding 1-D conduit. The diameter for each generation of the 1-D extension can be obtained on a subject-specific basis from the calculated total volume which represents each generation of the individual. The alveolar volume was added based on the approximate number of alveoli per generation. A wall-displacement boundary condition was applied at the bottom surface of the first-generation WLAM, so that any breathing pattern due to the negative alveolar pressure can be reproduced. Specifically, different inhalation/exhalation scenarios (rest, exercise, etc.) were implemented by controlling the wall/mesh displacements to simulate realistic breathing cycles in the WLAM. Total and regional particle deposition results agree with experimental lung deposition results. The outcomes provide critical insight to and quantitative results of aerosol deposition in human whole-lung airways with modest computational resources. Hence, the WLAM can be used in analyzing human exposure to toxic particulate matter or it can assist in estimating pharmacological effects of administered drug-aerosols. As a practical WLAM application, the transport and deposition of asthma drugs from a commercial dry-powder inhaler is discussed in Part II. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Role of Wind Tunnels and Computer Codes in the Certification and Qualification of Rotorcraft for Flight in Forecast Icing

    NASA Technical Reports Server (NTRS)

    Flemming, Robert J.; Britton, Randall K.; Bond, Thomas H.

    1994-01-01

    The cost and time to certify or qualify a rotorcraft for flight in forecast icing has been a major impediment to the development of ice protection systems for helicopter rotors. Development and flight test programs for those aircraft that have achieved certification or qualification for flight in icing conditions have taken many years, and the costs have been very high. NASA, Sikorsky, and others have been conducting research into alternative means for providing information for the development of ice protection systems, and subsequent flight testing to substantiate the airworthiness of a rotor ice protection system. Model rotor icing tests conducted in 1989 and 1993 have provided a database for correlation of codes, and for the validation of wind tunnel icing test techniques. This paper summarizes this research, showing test and correlation trends as functions of cloud liquid water content, rotor lift, flight speed, and ambient temperature. Molds were made of several of the ice formations on the rotor blades. These molds were used to form simulated ice on the rotor blades, and the blades were then tested in a wind tunnel to determine flight performance characteristics. These simulated-ice rotor performance tests are discussed in the paper. The levels of correlation achieved and the role of these tools (codes and wind tunnel tests) in flight test planning, testing, and extension of flight data to the limits of the icing envelope are discussed. The potential application of simulated ice, the NASA LEWICE computer code, the Sikorsky Generalized Rotor Performance aerodynamic computer code, and NASA Icing Research Tunnel rotor tests in a rotorcraft certification or qualification program are also discussed. The correlation of these computer codes with tunnel test data is presented, and a procedure or process to use these methods as part of a certification or qualification program is introduced.

  13. Consequence modeling using the fire dynamics simulator.

    PubMed

    Ryder, Noah L; Sutula, Jason A; Schemel, Christopher F; Hamer, Andrew J; Van Brunt, Vincent

    2004-11-11

    The use of Computational Fluid Dynamics (CFD) and in particular Large Eddy Simulation (LES) codes to model fires provides an efficient tool for the prediction of large-scale effects that include plume characteristics, combustion product dispersion, and heat effects to adjacent objects. This paper illustrates the strengths of the Fire Dynamics Simulator (FDS), an LES code developed by the National Institute of Standards and Technology (NIST), through several small and large-scale validation runs and process safety applications. The paper presents two fire experiments--a small room fire and a large (15 m diameter) pool fire. The model results are compared to experimental data and demonstrate good agreement between the models and data. The validation work is then extended to demonstrate applicability to process safety concerns by detailing a model of a tank farm fire and a model of the ignition of a gaseous fuel in a confined space. In this simulation, a room was filled with propane, given time to disperse, and was then ignited. The model yields accurate results of the dispersion of the gas throughout the space. This information can be used to determine flammability and explosive limits in a space and can be used in subsequent models to determine the pressure and temperature waves that would result from an explosion. The model dispersion results were compared to an experiment performed by Factory Mutual. Using the above examples, this paper will demonstrate that FDS is ideally suited to build realistic models of process geometries in which large scale explosion and fire failure risks can be evaluated with several distinct advantages over more traditional CFD codes. Namely, transient solutions to fire and explosion growth can be produced with less sophisticated hardware (lower cost) than needed for traditional CFD codes (PC-type computer versus UNIX workstation) and can be solved for longer time histories (on the order of hundreds of seconds of computed time) with minimal computer resources and length of model run. Additionally, results that are produced can be analyzed, viewed, and tabulated during and following a model run within a PC environment. There are some tradeoffs, however, as rapid computations in PCs may require a sacrifice in the grid resolution or in the sub-grid modeling, depending on the size of the geometry modeled.

  14. Hybrid deterministic/stochastic simulation of complex biochemical systems.

    PubMed

    Lecca, Paola; Bagagiolo, Fabio; Scarpa, Marina

    2017-11-21

    In a biological cell, cellular functions and the genetic regulatory apparatus are implemented and controlled by complex networks of chemical reactions involving genes, proteins, and enzymes. Accurate computational models are indispensable means for understanding the mechanisms behind the evolution of a complex system, not always explored with wet lab experiments. To serve their purpose, computational models, however, should be able to describe and simulate the complexity of a biological system in many of its aspects. Moreover, they should be implemented by efficient algorithms requiring the shortest possible execution time, to avoid excessively enlarging the time that elapses between data analysis and any subsequent experiment. Besides the features of their topological structure, the complexity of biological networks also refers to their dynamics, which is often non-linear and stiff. The stiffness is due to the presence of molecular species whose abundance fluctuates by many orders of magnitude. A fully stochastic simulation of a stiff system is computationally expensive. On the other hand, continuous models are less costly, but they fail to capture the stochastic behaviour of small populations of molecular species. We introduce a new efficient hybrid stochastic-deterministic computational model and the software tool MoBioS (MOlecular Biology Simulator) implementing it. The mathematical model of MoBioS uses continuous differential equations to describe the deterministic reactions and a Gillespie-like algorithm to describe the stochastic ones. Unlike the majority of current hybrid methods, the MoBioS algorithm divides the set of reactions into fast reactions, moderate reactions, and slow reactions and implements a hysteresis switching between the stochastic model and the deterministic model. Fast reactions are approximated as continuous-deterministic processes and modelled by deterministic rate equations. Moderate reactions are those whose reaction waiting time is greater than the fast reaction waiting time but smaller than the slow reaction waiting time. A moderate reaction is approximated as a stochastic (deterministic) process if it was classified as a stochastic (deterministic) process at the time at which it crosses the threshold of low (high) waiting time. A Gillespie First Reaction Method is implemented to select and execute the slow reactions. The performance of MoBioS was tested on a typical example of hybrid dynamics, namely DNA transcription regulation. The simulated dynamic profile of the reagents' abundance and the estimate of the error introduced by the fully deterministic approach were used to evaluate the consistency of the computational model and that of the software tool.
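
    The following is a minimal sketch of the reaction-partitioning idea and of the Gillespie First Reaction Method applied to the slow reactions; the propensities, thresholds, and species are hypothetical, and the deterministic part of the hybrid scheme is only indicated in a comment.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def first_reaction_method(propensities):
        """Gillespie First Reaction Method: draw a tentative firing time for every
        reaction and execute the one that fires earliest."""
        a = np.asarray(propensities, dtype=float)
        taus = np.where(a > 0, rng.exponential(1.0 / np.where(a > 0, a, 1.0)), np.inf)
        j = int(np.argmin(taus))
        return j, taus[j]

    def partition(propensities, fast_thr, slow_thr):
        """Classify reactions by expected waiting time 1/a into fast, moderate, slow."""
        wait = 1.0 / np.maximum(np.asarray(propensities, float), 1e-30)
        labels = np.full(len(wait), "moderate", dtype=object)
        labels[wait < fast_thr] = "fast"
        labels[wait > slow_thr] = "slow"
        return labels

    # hypothetical propensities (1/s) for five reactions
    a = np.array([1e4, 5e3, 10.0, 0.1, 0.01])
    labels = partition(a, fast_thr=1e-3, slow_thr=1.0)
    print(labels)

    # fast reactions: would be advanced by deterministic rate equations (not shown);
    # slow reactions: stochastic firing via the First Reaction Method
    slow_idx = np.flatnonzero(labels == "slow")
    j, tau = first_reaction_method(a[slow_idx])
    print("slow reaction", slow_idx[j], "fires after", tau, "s")
    ```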

  15. COMPUTATIONAL TOXICOLOGY - OBJECTIVE 2: DEVELOPING APPROACHES FOR PRIORITIZING CHEMICALS FOR SUBSEQUENT SCREENING AND TESTING

    EPA Science Inventory

    One of the strategic objectives of the Computational Toxicology Program is to develop approaches for prioritizing chemicals for subsequent screening and testing. Approaches currently available for this process require extensive resources. Therefore, less costly and time-extensi...

  16. Climate Modeling with a Million CPUs

    NASA Astrophysics Data System (ADS)

    Tobis, M.; Jackson, C. S.

    2010-12-01

    Meteorological, oceanographic, and climatological applications have been at the forefront of scientific computing since its inception. The trend toward ever larger and more capable computing installations is unabated. However, much of the increase in capacity is accompanied by an increase in parallelism and a concomitant increase in complexity. An increase of at least four additional orders of magnitude in the computational power of scientific platforms is anticipated. It is unclear how individual climate simulations can continue to make effective use of the largest platforms. Conversion of existing community codes to higher resolution, or to more complex phenomenology, or both, presents daunting design and validation challenges. Our alternative approach is to use the expected resources to run very large ensembles of simulations of modest size, rather than to await the emergence of very large simulations. We are already doing this in exploring the parameter space of existing models using the Multiple Very Fast Simulated Annealing algorithm, which was developed for seismic imaging. Our experiments have the dual intentions of tuning the model and identifying ranges of parameter uncertainty. Our approach is less strongly constrained by the dimensionality of the parameter space than are competing methods. Nevertheless, scaling up remains costly. Much could be achieved by increasing the dimensionality of the search and adding complexity to the search algorithms. Such ensemble approaches scale naturally to very large platforms. Extensions of the approach are anticipated. For example, structurally different models can be tuned to comparable effectiveness. This can provide an objective test for which there is no realistic precedent with smaller computations. We find ourselves inventing new code to manage our ensembles. Component computations involve tens to hundreds of CPUs and tens to hundreds of hours. The results of these moderately large parallel jobs influence the scheduling of subsequent jobs, and complex algorithms may be easily contemplated for this. The operating system concept of a "thread" re-emerges at a very coarse level, where each thread manages atomic computations of thousands of CPU-hours. That is, rather than multiple threads operating on a processor, at this level, multiple processors operate within a single thread. In collaboration with the Texas Advanced Computing Center, we are developing a software library at the system level, which should facilitate the development of computations involving complex strategies which invoke large numbers of moderately large multi-processor jobs. While this may have applications in other sciences, our key intent is to better characterize the coupled behavior of a very large set of climate model configurations.
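
    The following is a minimal sketch of the coarse "one thread per multi-processor job" pattern described above: a thread pool whose workers each submit one ensemble member, wait for it, and return a score that the search algorithm can use to schedule subsequent jobs. The sleep call is a placeholder for a real scheduler submission and polling loop, and all parameter names are hypothetical.

    ```python
    from concurrent.futures import ThreadPoolExecutor
    import time

    def run_member(member_id, params):
        """One coarse 'thread' manages one multi-processor ensemble member:
        submit the job, block until it finishes, and return a score used to
        schedule subsequent members."""
        # placeholder for a batch-scheduler submission and wait (e.g., submit + poll)
        time.sleep(0.1)
        return member_id, hash(str(sorted(params.items()))) % 100  # stand-in misfit

    # a small batch of ensemble members explored concurrently
    param_sets = [{"entrainment": 0.1 * i} for i in range(4)]
    with ThreadPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(lambda a: run_member(*a), enumerate(param_sets)))
    print(sorted(results))
    ```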

  17. Analysis and Modeling of Realistic Compound Channels in Transparent Relay Transmissions

    PubMed Central

    Kanjirathumkal, Cibile K.; Mohammed, Sameer S.

    2014-01-01

    Analytical approaches for the characterisation of the compound channels in transparent multihop relay transmissions over independent fading channels are considered in this paper. Compound channels with homogeneous links are considered first. Using Mellin transform technique, exact expressions are derived for the moments of cascaded Weibull distributions. Subsequently, two performance metrics, namely, coefficient of variation and amount of fade, are derived using the computed moments. These metrics quantify the possible variations in the channel gain and signal to noise ratio from their respective average values and can be used to characterise the achievable receiver performance. This approach is suitable for analysing more realistic compound channel models for scattering density variations of the environment, experienced in multihop relay transmissions. The performance metrics for such heterogeneous compound channels having distinct distribution in each hop are computed and compared with those having identical constituent component distributions. The moments and the coefficient of variation computed are then used to develop computationally efficient estimators for the distribution parameters and the optimal hop count. The metrics and estimators proposed are complemented with numerical and simulation results to demonstrate the impact of the accuracy of the approaches. PMID:24701175
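
    A short sketch of the moment-based metrics discussed above for a cascaded (product) channel of independent Weibull links, using the closed-form Weibull moments E[X^n] = λ^n Γ(1 + n/k); the per-hop parameters are illustrative, and the amount-of-fade definition used here is the common second-order ratio, assumed rather than taken from the paper.

    ```python
    from math import gamma, sqrt

    def cascaded_weibull_moment(n, scales, shapes):
        """n-th moment of the product of independent Weibull variables:
        E[(X1*...*XN)^n] = prod_i scale_i^n * Gamma(1 + n/shape_i)."""
        m = 1.0
        for lam, k in zip(scales, shapes):
            m *= lam ** n * gamma(1.0 + n / k)
        return m

    def coefficient_of_variation(scales, shapes):
        m1 = cascaded_weibull_moment(1, scales, shapes)
        m2 = cascaded_weibull_moment(2, scales, shapes)
        return sqrt(m2 / m1 ** 2 - 1.0)

    def amount_of_fade(scales, shapes):
        # assumed definition: AF = E[X^2]/E[X]^2 - 1 applied to the channel gain
        m1 = cascaded_weibull_moment(1, scales, shapes)
        m2 = cascaded_weibull_moment(2, scales, shapes)
        return m2 / m1 ** 2 - 1.0

    # illustrative two-hop channel with distinct per-hop fading severity
    print(coefficient_of_variation(scales=[1.0, 1.0], shapes=[2.0, 3.5]))
    print(amount_of_fade(scales=[1.0, 1.0], shapes=[2.0, 3.5]))
    ```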

  18. Transient Three-Dimensional Analysis of Side Load in Liquid Rocket Engine Nozzles

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See

    2004-01-01

    Three-dimensional numerical investigations on the nozzle start-up side load physics were performed. The objective of this study is to identify the three-dimensional side load physics and to compute the associated aerodynamic side load using an anchored computational methodology. The computational methodology is based on an unstructured-grid, pressure-based computational fluid dynamics formulation and a simulated inlet condition based on a system calculation. Finite-rate chemistry was used throughout the study so that the combustion effect is always included, and the effect of wall cooling on side load physics is studied. The side load physics captured include the afterburning wave, transition from free-shock to restricted-shock separation, and lip Lambda shock oscillation. With the adiabatic nozzle, free-shock separation reappears after the transition from free-shock separation to restricted-shock separation, and the subsequent flow pattern of the simultaneous free-shock and restricted-shock separations creates a very asymmetric Mach disk flow. With the cooled nozzle, the more symmetric restricted-shock separation persisted throughout the start-up transient after the transition, leading to an overall lower side load than that of the adiabatic nozzle. The tepee structures corresponding to the maximum side load were addressed.

  19. Iterative image reconstruction in elastic inhomogenous media with application to transcranial photoacoustic tomography

    NASA Astrophysics Data System (ADS)

    Poudel, Joemini; Matthews, Thomas P.; Mitsuhashi, Kenji; Garcia-Uribe, Alejandro; Wang, Lihong V.; Anastasio, Mark A.

    2017-03-01

    Photoacoustic computed tomography (PACT) is an emerging computed imaging modality that exploits optical contrast and ultrasonic detection principles to form images of the photoacoustically induced initial pressure distribution within tissue. The PACT reconstruction problem corresponds to a time-domain inverse source problem, where the initial pressure distribution is recovered from the measurements recorded on an aperture outside the support of the source. A major challenge in transcranial PACT brain imaging is to compensate for aberrations in the measured data due to the propagation of the photoacoustic wavefields through the skull. To properly account for these effects, a wave equation-based inversion method should be employed that can model the heterogeneous elastic properties of the medium. In this study, an iterative image reconstruction method for 3D transcranial PACT is developed based on the elastic wave equation. To accomplish this, a forward model based on a finite-difference time-domain discretization of the elastic wave equation is established. Subsequently, gradient-based methods are employed for computing penalized least squares estimates of the initial source distribution that produced the measured photoacoustic data. The developed reconstruction algorithm is validated and investigated through computer-simulation studies.
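
    The following is a minimal sketch of the gradient-based penalized least-squares step described above, with a generic linear forward operator H standing in for the elastic-wave finite-difference model; the operator, data, regularization weight, and step-size rule are placeholders.

    ```python
    import numpy as np

    def penalized_lsq(H, y, lam, n_iter=500, step=None):
        """Minimize 0.5*||H f - y||^2 + 0.5*lam*||f||^2 by gradient descent.
        Gradient: H^T (H f - y) + lam * f."""
        if step is None:
            # a safe step size from the Lipschitz constant of the gradient
            step = 1.0 / (np.linalg.norm(H, 2) ** 2 + lam)
        f = np.zeros(H.shape[1])
        for _ in range(n_iter):
            grad = H.T @ (H @ f - y) + lam * f
            f -= step * grad
        return f

    # toy problem standing in for the discretized wave-equation forward model
    rng = np.random.default_rng(3)
    H = rng.normal(size=(80, 40))
    f_true = rng.normal(size=40)
    y = H @ f_true + 0.01 * rng.normal(size=80)
    f_hat = penalized_lsq(H, y, lam=0.1)
    print("relative error:", np.linalg.norm(f_hat - f_true) / np.linalg.norm(f_true))
    ```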

  20. Predicting debris-flow initiation and run-out with a depth-averaged two-phase model and adaptive numerical methods

    NASA Astrophysics Data System (ADS)

    George, D. L.; Iverson, R. M.

    2012-12-01

    Numerically simulating debris-flow motion presents many challenges due to the complicated physics of flowing granular-fluid mixtures, the diversity of spatial scales (ranging from a characteristic particle size to the extent of the debris flow deposit), and the unpredictability of the flow domain prior to a simulation. Accurately predicting debris-flows requires models that are complex enough to represent the dominant effects of granular-fluid interaction, while remaining mathematically and computationally tractable. We have developed a two-phase depth-averaged mathematical model for debris-flow initiation and subsequent motion. Additionally, we have developed software that numerically solves the model equations efficiently on large domains. A unique feature of the mathematical model is that it includes the feedback between pore-fluid pressure and the evolution of the solid grain volume fraction, a process that regulates flow resistance. This feature endows the model with the ability to represent the transition from a stationary mass to a dynamic flow. With traditional approaches, slope stability analysis and flow simulation are treated separately, and the latter models are often initialized with force balances that are unrealistically far from equilibrium. Additionally, our new model relies on relatively few dimensionless parameters that are functions of well-known material properties constrained by physical data (eg. hydraulic permeability, pore-fluid viscosity, debris compressibility, Coulomb friction coefficient, etc.). We have developed numerical methods and software for accurately solving the model equations. By employing adaptive mesh refinement (AMR), the software can efficiently resolve an evolving debris flow as it advances through irregular topography, without needing terrain-fit computational meshes. The AMR algorithms utilize multiple levels of grid resolutions, so that computationally inexpensive coarse grids can be used where the flow is absent, and much higher resolution grids evolve with the flow. The reduction in computational cost, due to AMR, makes very large-scale problems tractable on personal computers. Model accuracy can be tested by comparison of numerical predictions and empirical data. These comparisons utilize controlled experiments conducted at the USGS debris-flow flume, which provide detailed data about flow mobilization and dynamics. Additionally, we have simulated historical large-scale debris flows, such as the (≈50 million m^3) debris flow that originated on Mt. Meager, British Columbia in 2010. This flow took a very complex route through highly variable topography and provides a valuable benchmark for testing. Maps of the debris flow deposit and data from seismic stations provide evidence regarding flow initiation, transit times and deposition. Our simulations reproduce many of the complex patterns of the event, such as run-out geometry and extent, and the large-scale nature of the flow and the complex topographical features demonstrate the utility of AMR in flow simulations.

  1. Quantum analogue computing.

    PubMed

    Kendon, Vivien M; Nemoto, Kae; Munro, William J

    2010-08-13

    We briefly review what a quantum computer is, what it promises to do for us and why it is so hard to build one. Among the first applications anticipated to bear fruit is the quantum simulation of quantum systems. While most quantum computation is an extension of classical digital computation, quantum simulation differs fundamentally in how the data are encoded in the quantum computer. To perform a quantum simulation, the Hilbert space of the system to be simulated is mapped directly onto the Hilbert space of the (logical) qubits in the quantum computer. This type of direct correspondence is how data are encoded in a classical analogue computer. There is no binary encoding, and increasing precision becomes exponentially costly: an extra bit of precision doubles the size of the computer. This has important consequences for both the precision and error-correction requirements of quantum simulation, and significant open questions remain about its practicality. It also means that the quantum version of analogue computers, continuous-variable quantum computers, becomes an equally efficient architecture for quantum simulation. Lessons from past use of classical analogue computers can help us to build better quantum simulators in future.

  2. Student Ability, Confidence, and Attitudes Toward Incorporating a Computer into a Patient Interview.

    PubMed

    Ray, Sarah; Valdovinos, Katie

    2015-05-25

    The objectives were to improve pharmacy students' ability to effectively incorporate a computer into a simulated patient encounter, and to improve their awareness of barriers to, attitudes towards, and confidence in using a computer during such encounters. Students completed a survey that assessed their awareness of, confidence in, and attitudes towards computer use during simulated patient encounters. Students were evaluated with a rubric on their ability to incorporate a computer into a simulated patient encounter. Students were resurveyed and reevaluated after instruction. Students improved in their ability to effectively incorporate computer usage into a simulated patient encounter. They also became more aware of barriers to such usage, improved their attitudes toward it, and gained more confidence in their ability to use a computer during simulated patient encounters. Instruction can improve pharmacy students' ability to incorporate a computer into simulated patient encounters. This skill is critical to developing efficiency while maintaining rapport with patients.

  3. (Un)Folding Mechanisms of the FBP28 WW Domain in Explicit Solvent Revealed by Multiple Rare Event Simulation Methods

    PubMed Central

    Juraszek, Jarek; Bolhuis, Peter G.

    2010-01-01

    We report a numerical study of the (un)folding routes of the truncated FBP28 WW domain at ambient conditions using a combination of four advanced rare event molecular simulation techniques. We explore the free energy landscape of the native state, the unfolded state, and possible intermediates, with replica exchange molecular dynamics. Subsequent application of bias-exchange metadynamics yields three tentative unfolding pathways at room temperature. Using these paths to initiate a transition path sampling simulation reveals the existence of two major folding routes, differing in the formation order of the two main hairpins, and in hydrophobic side-chain interactions. Having established that the hairpin strand separation distances can act as reasonable reaction coordinates, we employ metadynamics to compute the unfolding barriers and find that the barrier with the lowest free energy corresponds with the most likely pathway found by transition path sampling. The unfolding barrier at 300 K is ∼17 kBT ≈ 42 kJ/mol, in agreement with the experimental unfolding rate constant. This work shows that combining several powerful simulation techniques provides a more complete understanding of the kinetic mechanism of protein folding. PMID:20159161
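
    As a quick unit check on the quoted barrier (this conversion is ours, not part of the original study): per mole,

    $$17\,k_\mathrm{B}T\,N_A = 17\,RT = 17 \times 8.314\ \mathrm{J\,mol^{-1}\,K^{-1}} \times 300\ \mathrm{K} \approx 42.4\ \mathrm{kJ\,mol^{-1}},$$

    consistent with the ≈42 kJ/mol figure given above.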

  4. Collision detection and modeling of rigid and deformable objects in laparoscopic simulator

    NASA Astrophysics Data System (ADS)

    Dy, Mary-Clare; Tagawa, Kazuyoshi; Tanaka, Hiromi T.; Komori, Masaru

    2015-03-01

    Laparoscopic simulators are viable alternatives for surgical training and rehearsal. Haptic devices can also be incorporated with virtual reality simulators to provide additional cues to the users. However, to provide realistic feedback, the haptic device must be updated at 1 kHz. On the other hand, realistic visual cues, that is, the collision detection and deformation between interacting objects, must be rendered at a minimum of 30 fps. Our current laparoscopic simulator detects collisions between a point on the tool tip and the organ surfaces, in which haptic devices are attached to actual tool tips for realistic tool manipulation. The triangular-mesh organ model is rendered using a mass-spring deformation model, or finite element method-based models. In this paper, we investigated multi-point-based collision detection on the rigid tool rods. Based on the preliminary results, we propose a method to improve the collision detection scheme and speed up the organ deformation reaction. We discuss our proposal for an efficient method to compute simultaneous multiple collisions between rigid (laparoscopic tools) and deformable (organs) objects, and perform the subsequent collision response, with haptic feedback, in real-time.

  5. Transitional Flow in an Arteriovenous Fistula: Effect of Wall Distensibility

    NASA Astrophysics Data System (ADS)

    McGah, Patrick; Leotta, Daniel; Beach, Kirk; Aliseda, Alberto

    2012-11-01

    Arteriovenous fistulae are created surgically to provide adequate access for dialysis in patients with end-stage renal disease. Transitional flow and the subsequent pressure and shear stress fluctuations are thought to be causative in the fistula failure. Since 50% of fistulae require surgical intervention before year one, understanding the altered hemodynamic stresses is an important step toward improving clinical outcomes. We perform numerical simulations of a patient-specific model of a functioning fistula reconstructed from 3D ultrasound scans. Rigid wall simulations and fluid-structure interaction simulations using an in-house finite element solver for the wall deformations were performed and compared. In both the rigid and distensible wall cases, transitional flow is computed in fistula as evidenced by aperiodic high frequency velocity and pressure fluctuations. The spectrum of the fluctuations is much more narrow-banded in the distensible case, however, suggesting a partial stabilizing effect by the vessel elasticity. As a result, the distensible wall simulations predict shear stresses that are systematically 10-30% lower than the rigid cases. We propose a possible mechanism for stabilization involving the phase lag in the fluid work needed to deform the vessel wall. Support from an NIDDK R21 - DK08-1823.

  6. Probabilistic parameter estimation in a 2-step chemical kinetics model for n-dodecane jet autoignition

    NASA Astrophysics Data System (ADS)

    Hakim, Layal; Lacaze, Guilhem; Khalil, Mohammad; Sargsyan, Khachik; Najm, Habib; Oefelein, Joseph

    2018-05-01

    This paper demonstrates the development of a simple chemical kinetics model designed for autoignition of n-dodecane in air using Bayesian inference with a model-error representation. The model error, i.e. intrinsic discrepancy from a high-fidelity benchmark model, is represented by allowing additional variability in selected parameters. Subsequently, we quantify predictive uncertainties in the results of autoignition simulations of homogeneous reactors at realistic diesel engine conditions. We demonstrate that these predictive error bars capture model error as well. The uncertainty propagation is performed using non-intrusive spectral projection that can also be used in principle with larger scale computations, such as large eddy simulation. While the present calibration is performed to match a skeletal mechanism, it can be done with equal success using experimental data only (e.g. shock-tube measurements). Since our method captures the error associated with structural model simplifications, we believe that the optimised model could then lead to better qualified predictions of autoignition delay time in high-fidelity large eddy simulations than the existing detailed mechanisms. This methodology provides a way to reduce the cost of reaction kinetics in simulations systematically, while quantifying the accuracy of predictions of important target quantities.
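
    A compact sketch of non-intrusive spectral projection for a single Gaussian-distributed uncertain parameter: the model is evaluated at Gauss-Hermite quadrature nodes and projected onto probabilists' Hermite polynomials. The "model" below is an analytically checkable stand-in, not the n-dodecane mechanism, and the polynomial order and quadrature level are arbitrary choices.

    ```python
    import numpy as np
    from numpy.polynomial import hermite_e as H
    from math import factorial, sqrt

    def nisp_coefficients(model, order, n_quad):
        """Non-intrusive spectral projection onto probabilists' Hermite polynomials:
        c_k = E[model(xi) He_k(xi)] / k!  with xi ~ N(0, 1),
        the expectation evaluated by Gauss-Hermite_e quadrature."""
        nodes, weights = H.hermegauss(n_quad)          # weight function exp(-x^2/2)
        weights = weights / np.sqrt(2.0 * np.pi)       # normalize to a probability measure
        f = np.array([model(x) for x in nodes])
        coeffs = []
        for k in range(order + 1):
            ek = np.zeros(k + 1); ek[k] = 1.0          # coefficient vector selecting He_k
            he_k = H.hermeval(nodes, ek)
            coeffs.append(np.sum(weights * f * he_k) / factorial(k))
        return np.array(coeffs)

    # stand-in response with a single uncertain (standard-normal) parameter
    model = lambda xi: np.exp(0.3 * xi)                # log-normal response with known expansion
    c = nisp_coefficients(model, order=4, n_quad=20)
    mean = c[0]
    var = sum(factorial(k) * c[k] ** 2 for k in range(1, 5))
    print("PCE mean %.4f  (exact %.4f)" % (mean, np.exp(0.3 ** 2 / 2)))
    print("PCE std  %.4f" % sqrt(var))
    ```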

  7. Security Analysis of Smart Grid Cyber Physical Infrastructures Using Modeling and Game Theoretic Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abercrombie, Robert K; Sheldon, Frederick T.

    Cyber physical computing infrastructures typically consist of a number of interconnected sites. Their operation critically depends on both cyber components and physical components. Both types of components are subject to attacks of different kinds and frequencies, which must be accounted for in the initial provisioning and subsequent operation of the infrastructure via information security analysis. Information security analysis can be performed using game theory implemented in dynamic Agent Based Game Theoretic (ABGT) simulations. Such simulations can be verified with the results from game theory analysis and further used to explore larger scale, real world scenarios involving multiple attackers, defenders, and information assets. We concentrated our analysis on the electric sector failure scenarios and impact analyses from the NESCOR Working Group study. From the Section 5 electric sector representative failure scenarios, we extracted four generic failure scenarios and grouped them into three specific threat categories (confidentiality, integrity, and availability) to the system. These specific failure scenarios serve as a demonstration of our simulation. The analysis using our ABGT simulation demonstrates how to model the electric sector functional domain using a set of rationalized game theoretic rules decomposed from the failure scenarios in terms of how those scenarios might impact the cyber physical infrastructure network with respect to confidentiality, integrity, and availability (CIA).

  8. New Modeling Approaches to Study DNA Damage by the Direct and Indirect Effects of Ionizing Radiation

    NASA Technical Reports Server (NTRS)

    Plante, Ianik; Cucinotta, Francis A.

    2012-01-01

    DNA is damaged by both the direct and indirect effects of radiation. In the direct effect, the DNA itself is ionized, whereas the indirect effect involves the radiolysis of the water molecules surrounding the DNA and the subsequent reaction of the DNA with radical products. While this problem has been studied for many years, many unknowns still exist. To study this problem, we have developed the computer code RITRACKS [1], which simulates the radiation track structure for heavy ions and electrons, calculating all energy deposition events and the coordinates of all species produced by the water radiolysis. In this work, we plan to simulate DNA damage by using the crystal structure of a nucleosome and calculations performed by RITRACKS. The energy deposition events are used to calculate the dose deposited in nanovolumes [2] and therefore can be used to simulate the direct effect of the radiation. Using the positions of the radiolytic species with a radiation chemistry code [3], it will be possible to simulate DNA damage by the indirect effect. The simulation results can be compared with results from previous calculations such as the frequencies of simple and complex strand breaks [4] and with newer experimental data using surrogate markers of DNA double-strand breaks such as γ-H2AX foci [5].

  9. Stability of nanocrystalline Ni-based alloys: coupling Monte Carlo and molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Waseda, O.; Goldenstein, H.; Silva, G. F. B. Lenz e.; Neiva, A.; Chantrenne, P.; Morthomas, J.; Perez, M.; Becquart, C. S.; Veiga, R. G. A.

    2017-10-01

    The thermal stability of nanocrystalline Ni due to small additions of Mo or W (up to 1 at%) was investigated in computer simulations by means of a combined Monte Carlo (MC)/molecular dynamics (MD) two-step approach. In the first step, energy-biased on-lattice MC revealed segregation of the alloying elements to grain boundaries. However, the condition for the thermodynamic stability of these nanocrystalline Ni alloys (zero grain boundary energy) was not fulfilled. Subsequently, MD simulations were carried out for up to 0.5 μs at 1000 K. At this temperature, grain growth was hindered for minimum global concentrations of 0.5 at% W and 0.7 at% Mo, thus preserving most of the nanocrystalline structure. This is in clear contrast to a pure Ni model system, for which the transformation into a monocrystal was observed in MD simulations within 0.2 μs at the same temperature. These results suggest that grain boundary segregation of low-soluble alloying elements in low-alloyed systems can produce high-temperature metastable nanocrystalline materials. MD simulations carried out at 1200 K for 1 at% Mo/W showed significant grain boundary migration accompanied by some degree of solute diffusion, thus providing additional evidence that solute drag mostly contributed to the nanostructure stability observed at lower temperature.

  10. AORTIC COARCTATION: RECENT DEVELOPMENTS IN EXPERIMENTAL AND COMPUTATIONAL METHODS TO ASSESS TREATMENTS FOR THIS SIMPLE CONDITION

    PubMed Central

    LaDisa, John F.; Taylor, Charles A.; Feinstein, Jeffrey A.

    2010-01-01

    Coarctation of the aorta (CoA) is often considered a relatively simple disease, but long-term outcomes suggest otherwise as life expectancies are decades less than in the average population and substantial morbidity often exists. What follows is an expanded version of collective work conducted by the authors and numerous collaborators that was presented at the 1st International Conference on Computational Simulation in Congenital Heart Disease pertaining to recent advances for CoA. The work begins by focusing on what is known about blood flow, pressure and indices of wall shear stress (WSS) in patients with normal vascular anatomy from both clinical imaging and the use of computational fluid dynamics (CFD) techniques. Hemodynamic alterations observed in CFD studies from untreated CoA patients and those undergoing surgical or interventional treatment are subsequently discussed. The impact of surgical approach, stent design and valve morphology is also presented for these patient populations. Finally, recent work from a representative experimental animal model of CoA that may offer insight into proposed mechanisms of long-term morbidity in CoA is presented. PMID:21152106

  11. Gas Near a Wall: Shortened Mean Free Path, Reduced Viscosity, and the Manifestation of the Knudsen Layer in the Navier-Stokes Solution of a Shear Flow

    NASA Astrophysics Data System (ADS)

    Abramov, Rafail V.

    2018-06-01

    For the gas near a solid planar wall, we propose a scaling formula for the mean free path of a molecule as a function of the distance from the wall, under the assumption of a uniform distribution of the incident directions of the molecular free flight. We subsequently impose the same scaling onto the viscosity of the gas near the wall and compute the Navier-Stokes solution of the velocity of a shear flow parallel to the wall. Under the simplifying assumption of constant temperature of the gas, the velocity profile becomes an explicit nonlinear function of the distance from the wall and exhibits a Knudsen boundary layer near the wall. To verify the validity of the obtained formula, we perform the Direct Simulation Monte Carlo computations for the shear flow of argon and nitrogen at normal density and temperature. We find excellent agreement between our velocity approximation and the computed DSMC velocity profiles both within the Knudsen boundary layer and away from it.
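
    The governing balance implied by this description can be sketched as follows; this is a generic reconstruction under the stated assumptions, and the specific near-wall scaling of the mean free path and viscosity is the paper's result and is not reproduced here. For a steady shear flow parallel to the wall with a position-dependent viscosity $\mu(y)$, the momentum balance reduces to

    $$\frac{d}{dy}\!\left[\mu(y)\,\frac{du}{dy}\right] = 0 \;\;\Longrightarrow\;\; \mu(y)\,\frac{du}{dy} = \tau = \text{const}, \qquad u(y) = u(0) + \tau \int_0^{y} \frac{dy'}{\mu(y')},$$

    so any reduction of $\mu$ near the wall steepens the velocity profile there, which is the Knudsen-layer behaviour described above.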

  12. Parameter estimation and sensitivity analysis in an agent-based model of Leishmania major infection

    PubMed Central

    Jones, Douglas E.; Dorman, Karin S.

    2009-01-01

    Computer models of disease take a systems biology approach toward understanding host-pathogen interactions. In particular, data driven computer model calibration is the basis for inference of immunological and pathogen parameters, assessment of model validity, and comparison between alternative models of immune or pathogen behavior. In this paper we describe the calibration and analysis of an agent-based model of Leishmania major infection. A model of macrophage loss following uptake of necrotic tissue is proposed to explain macrophage depletion following peak infection. Using Gaussian processes to approximate the computer code, we perform a sensitivity analysis to identify important parameters and to characterize their influence on the simulated infection. The analysis indicates that increasing growth rate can favor or suppress pathogen loads, depending on the infection stage and the pathogen’s ability to avoid detection. Subsequent calibration of the model against previously published biological observations suggests that L. major has a relatively slow growth rate and can replicate for an extended period of time before damaging the host cell. PMID:19837088
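
    The following is a minimal sketch of the emulator-based sensitivity workflow described above, using a Gaussian-process surrogate of a stand-in simulator and a simple one-at-a-time sweep; the parameter names, ranges, and toy response are hypothetical and are not those of the L. major model.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(4)

    # stand-in for the agent-based simulator: a scalar output as a function of
    # (growth_rate, detection_avoidance), both scaled to [0, 1]
    def simulator(theta):
        g, a = theta
        return np.sin(3 * g) * (0.5 + a) + 0.05 * rng.normal()

    # design of computer experiments and GP emulation of the code
    X = rng.uniform(size=(60, 2))
    y = np.array([simulator(x) for x in X])
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=[0.3, 0.3]),
                                  normalize_y=True).fit(X, y)

    # one-at-a-time sensitivity: sweep each input with the other held at 0.5
    grid = np.linspace(0, 1, 50)
    for i, name in enumerate(["growth_rate", "detection_avoidance"]):
        Xs = np.full((50, 2), 0.5)
        Xs[:, i] = grid
        mu = gp.predict(Xs)
        print(f"{name}: emulated output range {mu.max() - mu.min():.3f} over its sweep")
    ```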

  13. Development of simulation computer complex specification

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The Training Simulation Computer Complex Study was one of three studies contracted in support of preparations for procurement of a shuttle mission simulator for shuttle crew training. The subject study was concerned with definition of the software loads to be imposed on the computer complex to be associated with the shuttle mission simulator and the development of procurement specifications based on the resulting computer requirements. These procurement specifications cover the computer hardware and system software as well as the data conversion equipment required to interface the computer to the simulator hardware. The development of the necessary hardware and software specifications required the execution of a number of related tasks which included: (1) simulation software sizing, (2) computer requirements definition, (3) data conversion equipment requirements definition, (4) system software requirements definition, (5) a simulation management plan, (6) a background survey, and (7) preparation of the specifications.

  14. Computational simulation of concurrent engineering for aerospace propulsion systems

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.; Singhal, S. N.

    1992-01-01

    Results are summarized of an investigation to assess the infrastructure available and the technology readiness in order to develop computational simulation methods/software for concurrent engineering. These results demonstrate that development of computational simulations methods for concurrent engineering is timely. Extensive infrastructure, in terms of multi-discipline simulation, component-specific simulation, system simulators, fabrication process simulation, and simulation of uncertainties - fundamental in developing such methods, is available. An approach is recommended which can be used to develop computational simulation methods for concurrent engineering for propulsion systems and systems in general. Benefits and facets needing early attention in the development are outlined.

  15. Computational simulation for concurrent engineering of aerospace propulsion systems

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.; Singhal, S. N.

    1993-01-01

    Results are summarized for an investigation to assess the infrastructure available and the technology readiness in order to develop computational simulation methods/software for concurrent engineering. These results demonstrate that development of computational simulation methods for concurrent engineering is timely. Extensive infrastructure, in terms of multi-discipline simulation, component-specific simulation, system simulators, fabrication process simulation, and simulation of uncertainties--fundamental to develop such methods, is available. An approach is recommended which can be used to develop computational simulation methods for concurrent engineering of propulsion systems and systems in general. Benefits and issues needing early attention in the development are outlined.

  16. Computational simulation for concurrent engineering of aerospace propulsion systems

    NASA Astrophysics Data System (ADS)

    Chamis, C. C.; Singhal, S. N.

    1993-02-01

    Results are summarized for an investigation to assess the infrastructure available and the technology readiness in order to develop computational simulation methods/software for concurrent engineering. These results demonstrate that development of computational simulation methods for concurrent engineering is timely. Extensive infrastructure, in terms of multi-discipline simulation, component-specific simulation, system simulators, fabrication process simulation, and simulation of uncertainties--fundamental to develop such methods, is available. An approach is recommended which can be used to develop computational simulation methods for concurrent engineering of propulsion systems and systems in general. Benefits and issues needing early attention in the development are outlined.

  17. Femtosecond excitation tuning and site energy memory of population transfer in poly(p-phenylenevinylene): Gated luminescence experiments and simulation

    NASA Astrophysics Data System (ADS)

    Sperling, J.; Milota, F.; Tortschanoff, A.; Warmuth, Ch.; Mollay, B.; Bässler, H.; Kauffmann, H. F.

    2002-12-01

    We present a comprehensive experimental and computational study on fs-relaxational dynamics of optical excitations in the conjugated polymer poly(p-phenylenevinylene) (PPV) under selective excitation tuning conditions into the long-wavelength, low-vibrational S1(ν=0) density of states (DOS). The dependence of single-wavelength luminescence kinetics and time-windowed spectral transients on distinct, initial excitation boundaries at 1.4 K and at room temperature was measured by applying the luminescence up-conversion technique. The typical energy-dispersive intra-DOS energy transfer was simulated by a combination of a static Monte Carlo method with a dynamical algorithm for solving the energy-space transport Master-Equation in population-space. For various, selective excitations that give rise to specific S1-population distributions in distinct spatial and energetic subspaces inside the DOS, simulations confirm the experimental results and show that the subsequent, energy-dissipative, multilevel relaxation is hierarchically constrained, and reveals a pronounced site-energy memory effect with a migration-threshold, characteristic of the (dressed) excitation dynamics in the disordered PPV many-body system.
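
    The following is a minimal sketch of a rate-matrix (Pauli master equation) propagation of site populations of the kind referred to above; the number of sites and the downhill hopping rates are small hypothetical examples, not the PPV transfer rates used in the study.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    rng = np.random.default_rng(5)

    # hypothetical hopping rates k[j, i]: rate from site i to site j (1/ps);
    # sites are ordered by increasing energy, and hops only go downhill (j < i)
    n = 5
    k = np.triu(rng.uniform(0.1, 1.0, size=(n, n)), 1)
    W = k - np.diag(k.sum(axis=0))          # master-equation generator (columns sum to 0)

    def rhs(t, p):
        # dP/dt = W P : gain from other sites minus total loss from each site
        return W @ p

    p0 = np.zeros(n); p0[-1] = 1.0          # excitation starts on the highest-energy site
    sol = solve_ivp(rhs, (0.0, 20.0), p0, t_eval=np.linspace(0, 20, 5))
    print(np.round(sol.y.T, 3))             # populations relax toward the lowest-energy site
    ```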

  18. DNA Packaging in Bacteriophage: Is Twist Important?

    PubMed Central

    Spakowitz, Andrew James; Wang, Zhen-Gang

    2005-01-01

    We study the packaging of DNA into a bacteriophage capsid using computer simulation, specifically focusing on the potential impact of twist on the final packaged conformation. We perform two dynamic simulations of packaging a polymer chain into a spherical confinement: one where the chain end is rotated as it is fed, and one where the chain is fed without end rotation. The final packaged conformation exhibits distinct differences in these two cases: the packaged conformation from feeding with rotation exhibits a spool-like character that is consistent with experimental and previous theoretical work, whereas feeding without rotation results in a folded conformation inconsistent with a spool conformation. The chain segment density shows a layered structure, which is more pronounced for packaging with rotation. However, in both cases, the conformation is marked by frequent jumps of the polymer chain from layer to layer, potentially influencing the ability to disentangle during subsequent ejection. Ejection simulations with and without Brownian forces show that Brownian forces are necessary to achieve complete ejection of the polymer chain in the absence of external forces. PMID:15805174

  19. DNA packaging in bacteriophage: is twist important?

    PubMed

    Spakowitz, Andrew James; Wang, Zhen-Gang

    2005-06-01

    We study the packaging of DNA into a bacteriophage capsid using computer simulation, specifically focusing on the potential impact of twist on the final packaged conformation. We perform two dynamic simulations of packaging a polymer chain into a spherical confinement: one where the chain end is rotated as it is fed, and one where the chain is fed without end rotation. The final packaged conformation exhibits distinct differences in these two cases: the packaged conformation from feeding with rotation exhibits a spool-like character that is consistent with experimental and previous theoretical work, whereas feeding without rotation results in a folded conformation inconsistent with a spool conformation. The chain segment density shows a layered structure, which is more pronounced for packaging with rotation. However, in both cases, the conformation is marked by frequent jumps of the polymer chain from layer to layer, potentially influencing the ability to disentangle during subsequent ejection. Ejection simulations with and without Brownian forces show that Brownian forces are necessary to achieve complete ejection of the polymer chain in the absence of external forces.

  20. SU-E-J-141: Comparison of Dose Calculation On Automatically Generated MR-Based ED Maps and Corresponding Patient CT for Clinical Prostate EBRT Plans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schadewaldt, N; Schulz, H; Helle, M

    2014-06-01

    Purpose: To analyze the effect of computing radiation dose on automatically generated MR-based simulated CT images compared to true patient CTs. Methods: Six prostate cancer patients received a regular planning CT for RT planning as well as a conventional 3D fast-field dual-echo scan on a Philips 3.0T Achieva, adding approximately 2 min of scan time to the clinical protocol. Simulated CTs (simCT) were synthesized by assigning known average CT values to the tissue classes air, water, fat, cortical and cancellous bone. For this, Dixon reconstruction of the nearly out-of-phase (echo 1) and in-phase images (echo 2) allowed for water and fat classification. Model-based bone segmentation was performed on a combination of the Dixon images, and a subsequent automatic threshold divided the bone into cortical and cancellous classes. For validation, the simCT was registered to the true CT and clinical treatment plans were re-computed on the simCT in Pinnacle³. To differentiate effects related to the 5 tissue classes from changes in the patient anatomy not compensated by rigid registration, we also calculated the dose on a stratified CT, where HU values are sorted into the same 5 tissue classes as the simCT. Results: Dose and volume parameters on the PTV and risk organs, as used for clinical approval, were compared. All deviations are below 1.1%, except the anal sphincter mean dose, which is at most 2.2% but well below the clinical acceptance threshold. Average deviations are below 0.4% for the PTV and risk organs and 1.3% for the anal sphincter. The deviations of the stratified CT are in the same range as for the simCT. All plans would have passed clinical acceptance thresholds on the simulated CT images. Conclusion: This study demonstrated the clinical usability of MR-based dose calculation with the presented Dixon acquisition and subsequent fully automatic image processing. N. Schadewaldt, H. Schulz, M. Helle and S. Renisch are employed by Philips Technologie Innovative Technologies, a subsidiary of Royal Philips NV.
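
    The tissue-class assignment step can be sketched as follows; the class masks, threshold and bulk HU values are illustrative assumptions, not those used in the study.

      # Conceptual sketch: build a simulated CT (simCT) by assigning bulk HU values
      # to a handful of tissue classes. Masks and HU values are illustrative only.
      import numpy as np

      def simulated_ct(water_img, fat_img, bone_mask, body_mask, cortical_threshold=0.6):
          """water_img/fat_img: Dixon-style water and fat fractions in [0, 1];
          bone_mask: boolean mask from a model-based bone segmentation;
          body_mask: boolean mask separating patient from surrounding air."""
          hu = {"air": -1000.0, "water": 0.0, "fat": -100.0,
                "cancellous": 300.0, "cortical": 1200.0}   # bulk CT numbers (illustrative)
          simct = np.full(water_img.shape, hu["air"])
          soft = body_mask & ~bone_mask
          simct[soft] = np.where(fat_img[soft] > water_img[soft], hu["fat"], hu["water"])
          # split bone into cortical vs. cancellous with a simple (illustrative) threshold
          cortical = bone_mask & (water_img < cortical_threshold)
          simct[bone_mask] = hu["cancellous"]
          simct[cortical] = hu["cortical"]
          return simct

      # toy example
      w = np.random.default_rng(1).random((64, 64))
      f = 1.0 - w
      body = np.ones_like(w, dtype=bool)
      bone = w > 0.9
      print(simulated_ct(w, f, bone, body).mean())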

  1. CFD Simulation of Liquid Rocket Engine Injectors

    NASA Technical Reports Server (NTRS)

    Farmer, Richard; Cheng, Gary; Chen, Yen-Sen; Garcia, Roberto (Technical Monitor)

    2001-01-01

    Detailed design issues associated with liquid rocket engine injectors and combustion chamber operation require CFD methodology which simulates highly three-dimensional, turbulent, vaporizing, and combusting flows. The primary utility of such simulations involves predicting multi-dimensional effects caused by specific injector configurations. SECA, Inc. and Engineering Sciences, Inc. have been developing appropriate computational methodology for NASA/MSFC for the past decade. CFD tools and computers have improved dramatically during this time period; however, the physical submodels used in these analyses must still remain relatively simple in order to produce useful results. Simulations of clustered coaxial and impinger injector elements for hydrogen and hydrocarbon fuels, which account for real fluid properties, are the immediate goal of this research. The spray combustion codes are based on the FDNS CFD code and are structured to represent homogeneous and heterogeneous spray combustion. The homogeneous spray model treats the flow as a continuum of multi-phase, multicomponent fluids which move without thermal or velocity lags between the phases. Two heterogeneous models were developed: (1) a volume-of-fluid (VOF) model which represents the liquid core of coaxial or impinger jets and their atomization and vaporization, and (2) a Blob model which represents the injected streams as a cloud of droplets the size of the injector orifice which subsequently exhibit particle interaction, vaporization, and combustion. All of these spray models are computationally intensive, but this is unavoidable to accurately account for the complex physics and combustion which are to be predicted. Work is currently in progress to parallelize these codes to improve their computational efficiency. These spray combustion codes were used to simulate the three test cases which are the subject of the 2nd International Workshop on Rocket Combustion Modeling. Such test cases are considered by these investigators to be very valuable for code validation because combustion kinetics, turbulence models and atomization models based on low-pressure experiments of hydrogen-air combustion do not adequately verify analytical or CFD submodels which are necessary to simulate rocket engine combustion. We wish to emphasize that the simulations which we prepared for this meeting are meant to test the accuracy of the approximations used in our general purpose spray combustion models, rather than represent a definitive analysis of each of the experiments which were conducted. Our goal is to accurately predict local temperatures and mixture ratios in rocket engines; hence predicting individual experiments is used only for code validation. To replace the conventional JANNAF standard axisymmetric finite-rate (TDK) computer code for performance prediction with CFD codes, such codes must possess two features. Firstly, they must be as easy to use and of comparable run times for conventional performance predictions. Secondly, they must provide more detailed predictions of the flowfields near the injector face. Specifically, they must accurately predict the convective mixing of injected liquid propellants in terms of the injector element configurations.

  2. Towards real-time communication between in vivo neurophysiological data sources and simulator-based brain biomimetic models.

    PubMed

    Lee, Giljae; Matsunaga, Andréa; Dura-Bernal, Salvador; Zhang, Wenjie; Lytton, William W; Francis, Joseph T; Fortes, José Ab

    2014-11-01

    Development of more sophisticated implantable brain-machine interfaces (BMIs) will require both interpretation of the neurophysiological data being measured and subsequent determination of signals to be delivered back to the brain. Computational models are at the heart of such BMI machinery and therefore an essential tool in both of these processes. One approach is to utilize brain biomimetic models (BMMs) to develop and instantiate these algorithms. These then must be connected as hybrid systems in order to interface the BMM with in vivo data acquisition devices and prosthetic devices. The combined system then provides a test bed for neuroprosthetic rehabilitative solutions and medical devices for the repair and enhancement of the damaged brain. We propose here a computer network-based design for this purpose, detailing its internal modules and data flows. We describe a prototype implementation of the design, enabling interaction between the Plexon Multichannel Acquisition Processor (MAP) server, a commercial tool that collects signals from microelectrodes implanted in a live subject, and a BMM, a NEURON-based model of sensorimotor cortex capable of controlling a virtual arm. The prototype implementation supports an online mode for real-time simulations, as well as an offline mode for data analysis and simulations without real-time constraints, and provides binning operations to discretize continuous input to the BMM and filtering operations for dealing with noise. Evaluation demonstrated that the implementation successfully delivered monkey spiking activity to the BMM through LAN environments, respecting real-time constraints.
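
    The binning operation mentioned above can be illustrated with a small sketch; the function name, bin width and channel layout are assumptions for illustration, not the prototype's actual interface.

      # Minimal sketch of a binning step: discretize continuous spike-time streams
      # into fixed-width count bins so a rate-based model can consume them.
      import numpy as np

      def bin_spikes(spike_times_per_channel, t_start, t_stop, bin_ms=50.0):
          """Return an array of shape (n_channels, n_bins) with spike counts."""
          edges = np.arange(t_start, t_stop + bin_ms / 1000.0, bin_ms / 1000.0)
          return np.vstack([np.histogram(np.asarray(ts), bins=edges)[0]
                            for ts in spike_times_per_channel])

      # two channels of spike times (seconds), binned into 50 ms windows
      counts = bin_spikes([[0.01, 0.02, 0.30], [0.12, 0.125, 0.40]], 0.0, 0.5)
      print(counts)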

  3. Reproducible Hydrogeophysical Inversions through the Open-Source Library pyGIMLi

    NASA Astrophysics Data System (ADS)

    Wagner, F. M.; Rücker, C.; Günther, T.

    2017-12-01

    Many tasks in applied geosciences cannot be solved by a single measurement method and require the integration of geophysical, geotechnical and hydrological methods. In the emerging field of hydrogeophysics, researchers strive to gain quantitative information on process-relevant subsurface parameters by means of multi-physical models, which simulate the dynamic process of interest as well as its geophysical response. However, such endeavors are associated with considerable technical challenges, since they require coupling of different numerical models. This represents an obstacle for many practitioners and students. Even technically versatile users tend to build individually tailored solutions by coupling different (and potentially proprietary) forward simulators at the cost of scientific reproducibility. We argue that the reproducibility of studies in computational hydrogeophysics, and therefore the advancement of the field itself, requires versatile open-source software. To this end, we present pyGIMLi - a flexible and computationally efficient framework for modeling and inversion in geophysics. The object-oriented library provides management for structured and unstructured meshes in 2D and 3D, finite-element and finite-volume solvers, various geophysical forward operators, as well as Gauss-Newton based frameworks for constrained, joint and fully-coupled inversions with flexible regularization. In a step-by-step demonstration, it is shown how the hydrogeophysical response of a saline tracer migration can be simulated. Tracer concentration data from boreholes and measured voltages at the surface are subsequently used to estimate the hydraulic conductivity distribution of the aquifer within a single reproducible Python script.
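
    As a rough illustration of the Gauss-Newton machinery mentioned above (and deliberately not pyGIMLi's actual API), the following library-free sketch applies a regularized Gauss-Newton update to a toy nonlinear forward model; the functions f and jac and all parameter values are invented for the example.

      # Generic sketch of a regularized Gauss-Newton update for a nonlinear
      # forward model. This illustrates the principle only; it is not pyGIMLi code.
      import numpy as np

      def gauss_newton(f, jac, d_obs, m0, lam=1.0, n_iter=10):
          """Minimize ||f(m) - d_obs||^2 + lam * ||m - m0||^2 starting from m0."""
          m = m0.copy()
          for _ in range(n_iter):
              r = f(m) - d_obs
              J = jac(m)
              A = J.T @ J + lam * np.eye(m.size)
              b = J.T @ r + lam * (m - m0)
              m -= np.linalg.solve(A, b)        # Gauss-Newton step
          return m

      # toy forward model: data are exponentials of a two-parameter model
      t = np.linspace(0.0, 1.0, 20)
      f = lambda m: m[0] * np.exp(-m[1] * t)
      jac = lambda m: np.column_stack([np.exp(-m[1] * t), -m[0] * t * np.exp(-m[1] * t)])
      d_obs = f(np.array([2.0, 3.0]))
      print(gauss_newton(f, jac, d_obs, np.array([1.0, 1.0]), lam=1e-6))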

  4. A discrete element and ray framework for rapid simulation of acoustical dispersion of microscale particulate agglomerations

    NASA Astrophysics Data System (ADS)

    Zohdi, T. I.

    2016-03-01

    In industry, particle-laden fluids, such as particle-functionalized inks, are constructed by adding fine-scale particles to a liquid solution, in order to achieve desired overall properties in both liquid and (cured) solid states. However, oftentimes undesirable particulate agglomerations arise due to some form of mutual-attraction stemming from near-field forces, stray electrostatic charges, process ionization and mechanical adhesion. For proper operation of industrial processes involving particle-laden fluids, it is important to carefully breakup and disperse these agglomerations. One approach is to target high-frequency acoustical pressure-pulses to breakup such agglomerations. The objective of this paper is to develop a computational model and corresponding solution algorithm to enable rapid simulation of the effect of acoustical pulses on an agglomeration composed of a collection of discrete particles. Because of the complex agglomeration microstructure, containing gaps and interfaces, this type of system is extremely difficult to mesh and simulate using continuum-based methods, such as the finite difference time domain or the finite element method. Accordingly, a computationally-amenable discrete element/discrete ray model is developed which captures the primary physical events in this process, such as the reflection and absorption of acoustical energy, and the induced forces on the particulate microstructure. The approach utilizes a staggered, iterative solution scheme to calculate the power transfer from the acoustical pulse to the particles and the subsequent changes (breakup) of the pulse due to the particles. Three-dimensional examples are provided to illustrate the approach.

  5. Influence of speckle image reconstruction on photometric precision for large solar telescopes

    NASA Astrophysics Data System (ADS)

    Peck, C. L.; Wöger, F.; Marino, J.

    2017-11-01

    Context. High-resolution observations from large solar telescopes require adaptive optics (AO) systems to overcome image degradation caused by Earth's turbulent atmosphere. AO corrections are, however, only partial. Achieving near-diffraction limited resolution over a large field of view typically requires post-facto image reconstruction techniques to reconstruct the source image. Aims: This study aims to examine the expected photometric precision of amplitude reconstructed solar images calibrated using models for the on-axis speckle transfer functions and input parameters derived from AO control data. We perform a sensitivity analysis of the photometric precision under variations in the model input parameters for high-resolution solar images consistent with four-meter class solar telescopes. Methods: Using simulations of both atmospheric turbulence and partial compensation by an AO system, we computed the speckle transfer function under variations in the input parameters. We then convolved high-resolution numerical simulations of the solar photosphere with the simulated atmospheric transfer function, and subsequently deconvolved them with the model speckle transfer function to obtain a reconstructed image. To compute the resulting photometric precision, we compared the intensity of the original image with the reconstructed image. Results: The analysis demonstrates that high photometric precision can be obtained for speckle amplitude reconstruction using speckle transfer function models combined with AO-derived input parameters. Additionally, it shows that the reconstruction is most sensitive to the input parameter that characterizes the atmospheric distortion, and sub-2% photometric precision is readily obtained when it is well estimated.
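
    A conceptual sketch of the blur/deblur pipeline described above is given below, with Gaussian stand-ins for the atmospheric and model speckle transfer functions and a simple regularized Fourier division; none of these choices are taken from the study.

      # Conceptual sketch: blur a "true" image with a simulated atmospheric transfer
      # function, then deconvolve with a (slightly mis-estimated) model STF in the
      # Fourier domain. Transfer functions and regularization are illustrative.
      import numpy as np

      def gaussian_mtf(shape, width):
          fy = np.fft.fftfreq(shape[0])[:, None]
          fx = np.fft.fftfreq(shape[1])[None, :]
          return np.exp(-(fx**2 + fy**2) / (2.0 * width**2))

      rng = np.random.default_rng(0)
      truth = rng.random((128, 128))                 # stand-in for a photosphere image
      atm_tf = gaussian_mtf(truth.shape, 0.05)       # "true" atmospheric transfer fn
      model_stf = gaussian_mtf(truth.shape, 0.055)   # mis-estimated model STF

      observed = np.fft.ifft2(np.fft.fft2(truth) * atm_tf).real
      eps = 1e-3                                     # regularize the Fourier division
      reconstructed = np.fft.ifft2(np.fft.fft2(observed) * model_stf
                                   / (np.abs(model_stf)**2 + eps)).real

      # photometric precision: relative intensity error of the reconstruction
      print(np.std((reconstructed - truth) / truth.mean()))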

  6. Jet-torus connection in radio galaxies. Relativistic hydrodynamics and synthetic emission

    NASA Astrophysics Data System (ADS)

    Fromm, C. M.; Perucho, M.; Porth, O.; Younsi, Z.; Ros, E.; Mizuno, Y.; Zensus, J. A.; Rezzolla, L.

    2018-01-01

    Context. High resolution very long baseline interferometry observations of active galactic nuclei have revealed asymmetric structures in the jets of radio galaxies. These asymmetric structures may be due to internal asymmetries in the jets or they may be induced by the different conditions in the surrounding ambient medium, including the obscuring torus, or a combination of the two. Aims: In this paper we investigate the influence of the ambient medium, including the obscuring torus, on the observed properties of jets from radio galaxies. Methods: We performed special-relativistic hydrodynamic (SRHD) simulations of over-pressured and pressure-matched jets using the special-relativistic hydrodynamics code Ratpenat, which is based on a second-order accurate finite-volume method and an approximate Riemann solver. Using a newly developed radiative transfer code to compute the electromagnetic radiation, we modelled several jets embedded in various ambient medium and torus configurations and subsequently computed the non-thermal emission produced by the jet and thermal absorption from the torus. To better compare the emission simulations with observations we produced synthetic radio maps, taking into account the properties of the observatory. Results: The detailed analysis of our simulations shows that the observed properties such as core shift could be used to distinguish between over-pressured and pressure matched jets. In addition to the properties of the jets, insights into the extent and density of the obscuring torus can be obtained from analyses of the single-dish spectrum and spectral index maps.

  7. Understanding Emergency Care Delivery Through Computer Simulation Modeling.

    PubMed

    Laker, Lauren F; Torabi, Elham; France, Daniel J; Froehle, Craig M; Goldlust, Eric J; Hoot, Nathan R; Kasaie, Parastu; Lyons, Michael S; Barg-Walkow, Laura H; Ward, Michael J; Wears, Robert L

    2018-02-01

    In 2017, Academic Emergency Medicine convened a consensus conference entitled, "Catalyzing System Change through Health Care Simulation: Systems, Competency, and Outcomes." This article, a product of the breakout session on "understanding complex interactions through systems modeling," explores the role that computer simulation modeling can and should play in research and development of emergency care delivery systems. This article discusses areas central to the use of computer simulation modeling in emergency care research. The four central approaches to computer simulation modeling are described (Monte Carlo simulation, system dynamics modeling, discrete-event simulation, and agent-based simulation), along with problems amenable to their use and relevant examples to emergency care. Also discussed is an introduction to available software modeling platforms and how to explore their use for research, along with a research agenda for computer simulation modeling. Through this article, our goal is to enhance adoption of computer simulation, a set of methods that hold great promise in addressing emergency care organization and design challenges. © 2017 by the Society for Academic Emergency Medicine.
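
    As a flavor of what a discrete-event simulation of emergency care can look like, here is a minimal, self-contained sketch of a multi-bed queue with Poisson arrivals; the rates, bed count and structure are illustrative, not drawn from the article.

      # Minimal discrete-event simulation sketch: patients arrive at an ED with
      # exponential interarrival times and are treated by a fixed number of beds.
      import heapq, random

      def simulate_ed(arrival_rate=6.0, service_rate=1.0, n_beds=8, horizon=1000.0, seed=1):
          rng = random.Random(seed)
          events = [(rng.expovariate(arrival_rate), "arrival")]   # (time, kind)
          busy, queue, waits = 0, [], []
          while events:
              t, kind = heapq.heappop(events)
              if t > horizon:
                  break
              if kind == "arrival":
                  heapq.heappush(events, (t + rng.expovariate(arrival_rate), "arrival"))
                  if busy < n_beds:
                      busy += 1
                      waits.append(0.0)
                      heapq.heappush(events, (t + rng.expovariate(service_rate), "departure"))
                  else:
                      queue.append(t)
              else:  # departure
                  if queue:
                      arrived = queue.pop(0)
                      waits.append(t - arrived)
                      heapq.heappush(events, (t + rng.expovariate(service_rate), "departure"))
                  else:
                      busy -= 1
          return sum(waits) / len(waits)

      print(f"mean wait: {simulate_ed():.2f} hours")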

  8. The Simultaneous Production Model; A Model for the Construction, Testing, Implementation and Revision of Educational Computer Simulation Environments.

    ERIC Educational Resources Information Center

    Zillesen, Pieter G. van Schaick

    This paper introduces a hardware and software independent model for producing educational computer simulation environments. The model, which is based on the results of 32 studies of educational computer simulations program production, implies that educational computer simulation environments are specified, constructed, tested, implemented, and…

  9. The Learning Effects of Computer Simulations in Science Education

    ERIC Educational Resources Information Center

    Rutten, Nico; van Joolingen, Wouter R.; van der Veen, Jan T.

    2012-01-01

    This article reviews the (quasi)experimental research of the past decade on the learning effects of computer simulations in science education. The focus is on two questions: how use of computer simulations can enhance traditional education, and how computer simulations are best used in order to improve learning processes and outcomes. We report on…

  10. Soft-error tolerance and energy consumption evaluation of embedded computer with magnetic random access memory in practical systems using computer simulations

    NASA Astrophysics Data System (ADS)

    Nebashi, Ryusuke; Sakimura, Noboru; Sugibayashi, Tadahiko

    2017-08-01

    We evaluated the soft-error tolerance and energy consumption of an embedded computer with magnetic random access memory (MRAM) using two computer simulators. One is a central processing unit (CPU) simulator of a typical embedded computer system. We simulated the radiation-induced single-event-upset (SEU) probability in a spin-transfer-torque MRAM cell and also the failure rate of a typical embedded computer due to its main memory SEU error. The other is a delay tolerant network (DTN) system simulator. It simulates the power dissipation of wireless sensor network nodes of the system using a revised CPU simulator and a network simulator. We demonstrated that the SEU effect on the embedded computer with 1 Gbit MRAM-based working memory is less than 1 failure in time (FIT). We also demonstrated that the energy consumption of the DTN sensor node with MRAM-based working memory can be reduced to 1/11. These results indicate that MRAM-based working memory enhances the disaster tolerance of embedded computers.

  11. Boundary acquisition for setup of numerical simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Diegert, C.

    1997-12-31

    The author presents a work flow diagram that includes a path that begins with taking experimental measurements and ends with obtaining insight from results produced by numerical simulation. Two examples illustrate this path: (1) Three-dimensional imaging measurement at micron scale, using X-ray tomography, provides information on the boundaries of irregularly-shaped alumina oxide particles held in an epoxy matrix. A subsequent numerical simulation predicts the electrical field concentrations that would occur in the observed particle configurations. (2) Three-dimensional imaging measurement at meter scale, again using X-ray tomography, provides information on the boundaries of fossilized bone fragments in a Parasaurolophus crest recently discovered in New Mexico. A subsequent numerical simulation predicts the acoustic response of the elaborate internal structure of nasal passageways defined by the fossil record. The author must both add value to, and change the format of, the three-dimensional imaging measurements before they define the geometric boundary initial conditions for automatic mesh generation and the subsequent numerical simulation. The author applies a variety of filters and statistical classification algorithms to estimate the extents of the structures relevant to the subsequent numerical simulation, and captures these extents as faceted geometries. The author will describe the particular combination of manual and automatic methods used in the above two examples.
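
    A greatly simplified version of the filtering and classification step described above might look like the following sketch (synthetic data, a single Gaussian filter and a single threshold; the actual pipeline combines several filters and classifiers).

      # Illustrative sketch: smooth a 3-D image, threshold it, and label connected
      # components so each structure's extent can be passed on to mesh generation.
      import numpy as np
      from scipy import ndimage

      rng = np.random.default_rng(0)
      volume = rng.normal(size=(64, 64, 64))
      volume[20:40, 20:40, 20:40] += 3.0            # synthetic "particle" embedded in noise

      smoothed = ndimage.gaussian_filter(volume, sigma=2.0)
      mask = smoothed > smoothed.mean() + 2.0 * smoothed.std()
      labels, n = ndimage.label(mask)               # connected-component classification
      print(n, "structures found")
      for sl in ndimage.find_objects(labels):       # bounding boxes define the extents
          print([(s.start, s.stop) for s in sl])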

  12. Benefits of computer screen-based simulation in learning cardiac arrest procedures.

    PubMed

    Bonnetain, Elodie; Boucheix, Jean-Michel; Hamet, Maël; Freysz, Marc

    2010-07-01

    What is the best way to train medical students early so that they acquire basic skills in cardiopulmonary resuscitation as effectively as possible? Studies have shown the benefits of high-fidelity patient simulators, but have also demonstrated their limits. New computer screen-based multimedia simulators have fewer constraints than high-fidelity patient simulators. In this area, as yet, there has been no research on the effectiveness of transfer of learning from a computer screen-based simulator to more realistic situations such as those encountered with high-fidelity patient simulators. We tested the benefits of learning cardiac arrest procedures using a multimedia computer screen-based simulator in 28 Year 2 medical students. Just before the end of the traditional resuscitation course, we compared two groups. An experiment group (EG) was first asked to learn to perform the appropriate procedures in a cardiac arrest scenario (CA1) in the computer screen-based learning environment and was then tested on a high-fidelity patient simulator in another cardiac arrest simulation (CA2). While the EG was learning to perform CA1 procedures in the computer screen-based learning environment, a control group (CG) actively continued to learn cardiac arrest procedures using practical exercises in a traditional class environment. Both groups were given the same amount of practice, exercises and trials. The CG was then also tested on the high-fidelity patient simulator for CA2, after which it was asked to perform CA1 using the computer screen-based simulator. Performances with both simulators were scored on a precise 23-point scale. On the test on a high-fidelity patient simulator, the EG trained with a multimedia computer screen-based simulator performed significantly better than the CG trained with traditional exercises and practice (16.21 versus 11.13 of 23 possible points, respectively; p<0.001). Computer screen-based simulation appears to be effective in preparing learners to use high-fidelity patient simulators, which present simulations that are closer to real-life situations.

  13. Phase change energy storage for solar dynamic power systems

    NASA Technical Reports Server (NTRS)

    Chiaramonte, F. P.; Taylor, J. D.

    1992-01-01

    This paper presents the results of a transient computer simulation that was developed to study phase change energy storage techniques for Space Station Freedom (SSF) solar dynamic (SD) power systems. Such SD systems may be used in future growth SSF configurations. Two solar dynamic options are considered in this paper: Brayton and Rankine. Model elements consist of a single-node receiver and concentrator, and take into account overall heat engine efficiency and power distribution characteristics. The simulation not only computes the energy stored in the receiver phase change material (PCM), but also the amount of the PCM required for various combinations of load demands and power system mission constraints. For a solar dynamic power system in low earth orbit, the amount of stored PCM energy is calculated by balancing the solar energy input and the energy consumed by the loads corrected by an overall system efficiency. The model assumes an average 75 kW SD power system load profile which is connected to user loads via dedicated power distribution channels. The model then calculates the stored energy in the receiver and subsequently estimates the quantity of PCM necessary to meet peaking and contingency requirements. The model can also be used to conduct trade studies on the performance of SD power systems using different storage materials.
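
    The receiver energy balance and PCM sizing described above can be illustrated with a back-of-the-envelope sketch; the 75 kW load comes from the abstract, while the efficiency, orbit split and latent heat below are placeholder values.

      # Back-of-the-envelope sketch of the receiver energy balance described above.
      # Numbers other than the 75 kW load are illustrative placeholders.
      eclipse_min = 35.0                            # low-Earth orbit: roughly 59 min sun, 35 min shade
      load_kw = 75.0                                # average load on the SD channel
      efficiency = 0.30                             # overall heat-engine + distribution efficiency

      # thermal energy that must come out of storage while the receiver is shaded
      eclipse_energy_kj = load_kw / efficiency * eclipse_min * 60.0

      latent_heat_kj_per_kg = 800.0                 # hypothetical PCM latent heat of fusion
      pcm_mass_kg = eclipse_energy_kj / latent_heat_kj_per_kg
      print(f"eclipse thermal energy: {eclipse_energy_kj/1e3:.0f} MJ, "
            f"PCM mass: {pcm_mass_kg:.0f} kg")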

  14. Phase change energy storage for solar dynamic power systems

    NASA Astrophysics Data System (ADS)

    Chiaramonte, F. P.; Taylor, J. D.

    This paper presents the results of a transient computer simulation that was developed to study phase change energy storage techniques for Space Station Freedom (SSF) solar dynamic (SD) power systems. Such SD systems may be used in future growth SSF configurations. Two solar dynamic options are considered in this paper: Brayton and Rankine. Model elements consist of a single-node receiver and concentrator, and take into account overall heat engine efficiency and power distribution characteristics. The simulation not only computes the energy stored in the receiver phase change material (PCM), but also the amount of the PCM required for various combinations of load demands and power system mission constraints. For a solar dynamic power system in low earth orbit, the amount of stored PCM energy is calculated by balancing the solar energy input and the energy consumed by the loads corrected by an overall system efficiency. The model assumes an average 75 kW SD power system load profile which is connected to user loads via dedicated power distribution channels. The model then calculates the stored energy in the receiver and subsequently estimates the quantity of PCM necessary to meet peaking and contingency requirements. The model can also be used to conduct trade studies on the performance of SD power systems using different storage materials.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoehler, M; McCallen, D; Noble, C

    The analysis, and subsequent retrofit, of concrete arch bridges during recent years has relied heavily on the use of computational simulation. For seismic analysis in particular, computer simulation, typically utilizing linear approximations of structural behavior, has become standard practice. This report presents the results of a comprehensive study of the significance of model sophistication (i.e. linear vs. nonlinear) and pertinent modeling assumptions on the dynamic response of concrete arch bridges. The study uses the Bixby Creek Bridge, located in California, as a case study. In addition to presenting general recommendations for analysis of this class of structures, this report provides an independent evaluation of the proposed seismic retrofit for the Bixby Creek Bridge. Results from the study clearly illustrate a reduction of displacement drifts and redistribution of member forces brought on by the inclusion of material nonlinearity. The analyses demonstrate that accurate modeling of expansion joints, for the Bixby Creek Bridge in particular, is critical to achieve representative modal and transient behavior. The inclusion of near-field displacement pulses in ground motion records was shown to significantly increase demand on the relatively softer, longer period Bixby Creek Bridge arch. Stiffer, shorter period arches, however, are more likely susceptible to variable support motions arising from the canyon topography typical for this class of bridges.

  16. Active Piezoelectric Structures for Tip Clearance Management Assessed

    NASA Technical Reports Server (NTRS)

    1995-01-01

    Managing blade tip clearance in turbomachinery stages is critical to developing advanced subsonic propulsion systems. Active casing structures with embedded piezoelectric actuators appear to be a promising solution. They can control static and dynamic tip clearance, compensate for uneven deflections, and accomplish electromechanical coupling at the material level. In addition, they have a compact design. To assess the feasibility of this concept and assist the development of these novel structures, the NASA Lewis Research Center developed in-house computational capabilities for composite structures with piezoelectric actuators and sensors, and subsequently used them to simulate candidate active casing structures. The simulations indicated the potential of active casings to modify the blade tip clearance enough to improve stage efficiency. They also provided valuable design information, such as preliminary actuator configurations (number and location) and the corresponding voltage patterns required to compensate for uneven casing deformations. An active ovalization of a casing with four discrete piezoceramic actuators attached on the outer surface is shown. The center figure shows the predicted radial displacements along the hoop direction that are induced when electrostatic voltage is applied at the piezoceramic actuators. This work, which has demonstrated the capabilities of in-house computational models to analyze and design active casing structures, is expected to contribute toward the development of advanced subsonic engines.

  17. Axisymmetric bluff-body flow: A vortex solver for thin shells

    NASA Astrophysics Data System (ADS)

    Strickland, J. H.

    1992-05-01

    A method which is capable of solving the axisymmetric flow field over bluff bodies consisting of thin shells such as disks, partial spheres, rings, and other such shapes is presented in this report. The body may be made up of several shells whose edges are separated by gaps. The body may be moved axially according to arbitrary velocity time histories. In addition, the surfaces may possess axial and radial degrees of flexibility such that points on the surfaces may be allowed to move relative to each other according to some specified function of time. The surfaces may be either porous or impervious. The present solution technique is based on the axisymmetric vorticity transport equation. Physically, this technique simulates the generation of vorticity at body surfaces in the form of discrete ring vortices which are subsequently diffused and convected into the boundary layers and wake of the body. Relatively large numbers of vortices (1000 or more) are required to obtain good simulations. Since the direct calculation of perturbations from large numbers of ring vortices is computationally intensive, a fast multipole method was used to greatly reduce computer processing time. Several example calculations are presented for disks, disks with holes, hemispheres, and vented hemispheres. These results are compared with steady and unsteady experimental data.
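
    As a simplified two-dimensional analogue of the vortex interaction sum whose O(N²) cost motivates the fast multipole treatment mentioned above (the actual solver uses axisymmetric ring vortices, which involve elliptic integrals), the following sketch evaluates the velocity induced at each of roughly 1000 point vortices by all the others.

      # Simplified 2-D analogue of the direct O(N^2) vortex interaction sum.
      # Circulation is taken counterclockwise-positive; a small core radius
      # regularizes the singularity. Illustrative only.
      import numpy as np

      def induced_velocity(pos, gamma, delta=1e-3):
          """Velocity induced at each 2-D point vortex by all the others."""
          dx = pos[:, 0][:, None] - pos[:, 0][None, :]   # dx[i, j] = x_i - x_j
          dy = pos[:, 1][:, None] - pos[:, 1][None, :]
          r2 = dx**2 + dy**2 + delta**2                  # small core avoids singularity
          u = np.sum(gamma[None, :] * (-dy) / (2 * np.pi * r2), axis=1)
          v = np.sum(gamma[None, :] * ( dx) / (2 * np.pi * r2), axis=1)
          return np.column_stack([u, v])

      rng = np.random.default_rng(0)
      pos = rng.random((1000, 2))            # ~1000 vortices, as quoted above
      gamma = rng.normal(size=1000)
      print(induced_velocity(pos, gamma)[:3])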

  18. Homology Modeling of Dopamine D2 and D3 Receptors: Molecular Dynamics Refinement and Docking Evaluation

    PubMed Central

    Platania, Chiara Bianca Maria; Salomone, Salvatore; Leggio, Gian Marco; Drago, Filippo; Bucolo, Claudio

    2012-01-01

    Dopamine (DA) receptors, a class of G-protein coupled receptors (GPCRs), have been targeted for drug development for the treatment of neurological, psychiatric and ocular disorders. The lack of structural information about GPCRs and their ligand complexes has prompted the development of homology models of these proteins aimed at structure-based drug design. Crystal structure of human dopamine D3 (hD3) receptor has been recently solved. Based on the hD3 receptor crystal structure we generated dopamine D2 and D3 receptor models and refined them with molecular dynamics (MD) protocol. Refined structures, obtained from the MD simulations in membrane environment, were subsequently used in molecular docking studies in order to investigate potential sites of interaction. The structure of hD3 and hD2L receptors was differentiated by means of MD simulations and D3 selective ligands were discriminated, in terms of binding energy, by docking calculation. Robust correlation of computed and experimental Ki was obtained for hD3 and hD2L receptor ligands. In conclusion, the present computational approach seems suitable to build and refine structure models of homologous dopamine receptors that may be of value for structure-based drug discovery of selective dopaminergic ligands. PMID:22970199

  19. Exact and efficient simulation of concordant computation

    NASA Astrophysics Data System (ADS)

    Cable, Hugo; Browne, Daniel E.

    2015-11-01

    Concordant computation is a circuit-based model of quantum computation for mixed states, that assumes that all correlations within the register are discord-free (i.e. the correlations are essentially classical) at every step of the computation. The question of whether concordant computation always admits efficient simulation by a classical computer was first considered by Eastin in arXiv:quant-ph/1006.4402v1, where an answer in the affirmative was given for circuits consisting only of one- and two-qubit gates. Building on this work, we develop the theory of classical simulation of concordant computation. We present a new framework for understanding such computations, argue that a larger class of concordant computations admit efficient simulation, and provide alternative proofs for the main results of arXiv:quant-ph/1006.4402v1 with an emphasis on the exactness of simulation which is crucial for this model. We include detailed analysis of the arithmetic complexity for solving equations in the simulation, as well as extensions to larger gates and qudits. We explore the limitations of our approach, and discuss the challenges faced in developing efficient classical simulation algorithms for all concordant computations.

  20. Oxidative changes and signalling pathways are pivotal in initiating age-related changes in articular cartilage

    PubMed Central

    Hui, Wang; Young, David A; Rowan, Andrew D; Xu, Xin; Cawston, Tim E; Proctor, Carole J

    2016-01-01

    Objective To use a computational approach to investigate the cellular and extracellular matrix changes that occur with age in the knee joints of mice. Methods Knee joints from an inbred C57/BL1/6 (ICRFa) mouse colony were harvested at 3–30 months of age. Sections were stained with H&E, Safranin-O, Picro-sirius red and antibodies to matrix metalloproteinase-13 (MMP-13), nitrotyrosine, LC-3B, Bcl-2, and cleaved type II collagen used for immunohistochemistry. Based on this and other data from the literature, a computer simulation model was built using the Systems Biology Markup Language using an iterative approach of data analysis and modelling. Individual parameters were subsequently altered to assess their effect on the model. Results A progressive loss of cartilage matrix occurred with age. Nitrotyrosine, MMP-13 and activin receptor-like kinase-1 (ALK1) staining in cartilage increased with age with a concomitant decrease in LC-3B and Bcl-2. Stochastic simulations from the computational model showed a good agreement with these data, once transforming growth factor-β signalling via ALK1/ALK5 receptors was included. Oxidative stress and the interleukin 1 pathway were identified as key factors in driving the cartilage breakdown associated with ageing. Conclusions A progressive loss of cartilage matrix and cellularity occurs with age. This is accompanied with increased levels of oxidative stress, apoptosis and MMP-13 and a decrease in chondrocyte autophagy. These changes explain the marked predisposition of joints to develop osteoarthritis with age. Computational modelling provides useful insights into the underlying mechanisms involved in age-related changes in musculoskeletal tissues. PMID:25475114
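
    The stochastic simulations referred to above can be illustrated with a generic Gillespie direct-method sketch for a toy synthesis/degradation reaction; it is not the published cartilage model, and the rates are arbitrary.

      # Generic Gillespie (direct-method) sketch for a toy synthesis/degradation pair.
      import numpy as np

      def gillespie(k_syn=5.0, k_deg=0.1, x0=0, t_end=100.0, seed=0):
          rng = np.random.default_rng(seed)
          t, x, times, counts = 0.0, x0, [0.0], [x0]
          while t < t_end:
              rates = np.array([k_syn, k_deg * x])     # propensities: synthesis, degradation
              total = rates.sum()
              if total == 0.0:
                  break
              t += rng.exponential(1.0 / total)        # waiting time to next reaction
              if rng.random() < rates[0] / total:
                  x += 1                               # synthesis event
              else:
                  x -= 1                               # degradation event
              times.append(t)
              counts.append(x)
          return np.array(times), np.array(counts)

      t, x = gillespie()
      print("late-time mean ~", x[len(x)//2:].mean(), "(expected k_syn/k_deg = 50)")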

  1. Predictive uncertainty analysis of plume distribution for geological carbon sequestration using sparse-grid Bayesian method

    NASA Astrophysics Data System (ADS)

    Shi, X.; Zhang, G.

    2013-12-01

    Because of the extensive computational burden, parametric uncertainty analyses are rarely conducted for geological carbon sequestration (GCS) process-based multi-phase models. The difficulty of predictive uncertainty analysis for the CO2 plume migration in realistic GCS models stems not only from the spatial distribution of the caprock and reservoir (i.e. heterogeneous model parameters), but also from the fact that the GCS optimization estimation problem has multiple local minima due to the complex nonlinear multi-phase (gas and aqueous) and multi-component (water, CO2, salt) transport equations. The geological model built by Doughty and Pruess (2004) for the Frio pilot site (Texas) was selected and assumed to represent the 'true' system, which was composed of seven different facies (geological units) distributed among 10 layers. We chose to calibrate the permeabilities of these facies. Pressure and gas saturation values from this true model were then extracted and used as observations for subsequent model calibration. Random noise was added to the observations to approximate realistic field conditions. Each simulation of the model lasts about 2 hours. In this study, we develop a new approach that improves the computational efficiency of Bayesian inference by constructing a surrogate system based on an adaptive sparse-grid stochastic collocation method. This surrogate response-surface global optimization algorithm is first used to calibrate the model parameters; the prediction uncertainty of the CO2 plume position arising from the propagation of parametric uncertainty is then quantified in numerical experiments and compared to the actual plume from the 'true' model. Results show that the approach is computationally efficient for multi-modal optimization and prediction uncertainty quantification for computationally expensive simulation models. Both our inverse methodology and our findings are broadly applicable to GCS in heterogeneous storage formations.

  2. Conservative tightly-coupled simulations of stochastic multiscale systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taverniers, Søren; Pigarov, Alexander Y.; Tartakovsky, Daniel M., E-mail: dmt@ucsd.edu

    2016-05-15

    Multiphysics problems often involve components whose macroscopic dynamics is driven by microscopic random fluctuations. The fidelity of simulations of such systems depends on their ability to propagate these random fluctuations throughout a computational domain, including subdomains represented by deterministic solvers. When the constituent processes take place in nonoverlapping subdomains, system behavior can be modeled via a domain-decomposition approach that couples separate components at the interfaces between these subdomains. Its coupling algorithm has to maintain a stable and efficient numerical time integration even at high noise strength. We propose a conservative domain-decomposition algorithm in which tight coupling is achieved by employing either Picard's or Newton's iterative method. Coupled diffusion equations, one of which has a Gaussian white-noise source term, provide a computational testbed for analysis of these two coupling strategies. Fully-converged (“implicit”) coupling with Newton's method typically outperforms its Picard counterpart, especially at high noise levels. This is because the number of Newton iterations scales linearly with the amplitude of the Gaussian noise, while the number of Picard iterations can scale superlinearly. At large time intervals between two subsequent inter-solver communications, the solution error for single-iteration (“explicit”) Picard's coupling can be several orders of magnitude higher than that for implicit coupling. Increasing the explicit coupling's communication frequency reduces this difference, but the resulting increase in computational cost can make it less efficient than implicit coupling at similar levels of solution error, depending on the communication frequency of the latter and the noise strength. This trend carries over into higher dimensions, although at high noise strength explicit coupling may be the only computationally viable option.
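
    The tight (iterated) coupling strategy can be illustrated with a minimal sketch in which two scalar sub-models, one carrying a white-noise source, are advanced with implicit Euler and reconciled by a Picard fixed-point iteration at each step; this is an illustration of the idea, not the paper's diffusion solvers.

      # Minimal sketch of tight ("implicit") Picard coupling between two scalar
      # sub-models: du/dt = -a*u + c*v and dv/dt = -b*v + c*u + noise.
      import numpy as np

      rng = np.random.default_rng(0)
      dt, steps, noise_amp = 0.01, 500, 0.5
      a, b, c = 1.0, 2.0, 0.8          # decay rates and coupling coefficient
      u, v = 1.0, 0.0

      for _ in range(steps):
          xi = noise_amp * rng.normal() / np.sqrt(dt)        # white-noise source for v
          u_new, v_new = u, v
          for _ in range(50):                                # Picard fixed-point iteration
              u_next = (u + dt * c * v_new) / (1.0 + dt * a)         # implicit Euler for u
              v_next = (v + dt * (c * u_new + xi)) / (1.0 + dt * b)  # implicit Euler for v
              converged = abs(u_next - u_new) + abs(v_next - v_new) < 1e-12
              u_new, v_new = u_next, v_next
              if converged:
                  break                                      # single pass = "explicit" coupling
          u, v = u_new, v_new

      print(u, v)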

  3. A novel model for DNA sequence similarity analysis based on graph theory.

    PubMed

    Qi, Xingqin; Wu, Qin; Zhang, Yusen; Fuller, Eddie; Zhang, Cun-Quan

    2011-01-01

    Determination of sequence similarity is one of the major steps in computational phylogenetic studies. As we know, during evolutionary history, not only DNA mutations of individual nucleotides but also subsequent rearrangements occurred. It has been one of the major tasks of computational biologists to develop novel mathematical descriptors for similarity analysis such that various kinds of mutation information are involved simultaneously. In this paper, different from traditional methods (e.g., nucleotide frequency, geometric representations) as bases for construction of mathematical descriptors, we construct novel mathematical descriptors based on graph theory. In particular, for each DNA sequence we set up a weighted directed graph. The adjacency matrix of the directed graph is used to induce a representative vector for the DNA sequence. This new approach measures similarity based on both ordering and frequency of nucleotides, so that much more information is involved. As an application, the method is tested on a set of 0.9-kb mtDNA sequences of twelve different primate species. All output phylogenetic trees with various distance estimations have the same topology, and are generally consistent with the reported results from early studies, which supports the new method's efficiency; we also test the new method on a simulated data set, which shows that our new method performs better than the traditional global alignment method when subsequent rearrangements happen frequently during evolutionary history.
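
    In the spirit of the descriptor described above (though differing in detail from the paper's construction), a sketch that encodes both ordering and frequency information is a weighted directed graph whose adjacency matrix counts nucleotide transitions; sequences are then compared through the flattened adjacency matrices.

      # Illustrative sketch: transition-count adjacency matrix as a sequence descriptor.
      import numpy as np

      BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

      def transition_descriptor(seq):
          adj = np.zeros((4, 4))
          for x, y in zip(seq[:-1], seq[1:]):
              adj[BASES[x], BASES[y]] += 1.0           # ordering AND frequency information
          return adj.flatten() / max(len(seq) - 1, 1)  # normalize by number of transitions

      def distance(s1, s2):
          return np.linalg.norm(transition_descriptor(s1) - transition_descriptor(s2))

      print(distance("ACGTACGTAC", "ACGTTCGTAC"))   # similar sequences: small distance
      print(distance("ACGTACGTAC", "GGGGGGCCCC"))   # dissimilar sequences: larger distance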

  4. Three-dimensional interactive Molecular Dynamics program for the study of defect dynamics in crystals

    NASA Astrophysics Data System (ADS)

    Patriarca, M.; Kuronen, A.; Robles, M.; Kaski, K.

    2007-01-01

    The study of crystal defects and the complex processes underlying their formation and time evolution has motivated the development of the program ALINE for interactive molecular dynamics experiments. This program couples a molecular dynamics code to a Graphical User Interface and runs on a UNIX-X11 Window System platform with the MOTIF library, which is contained in many standard Linux releases. ALINE is written in C, thus giving the user the possibility to modify the source code, and, at the same time, provides an effective and user-friendly framework for numerical experiments, in which the main parameters can be interactively varied and the system visualized in various ways. We illustrate the main features of the program through some examples of detection and dynamical tracking of point defects, linear defects, and planar defects, such as stacking faults in lattice-mismatched heterostructures. Program summary: Title of program: ALINE. Catalogue identifier: ADYJ_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADYJ_v1_0. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Computers for which the program is designed and others on which it has been tested: DEC ALPHA 300, Intel i386-compatible computers, G4 Apple computers. Installations: Laboratory of Computational Engineering, Helsinki University of Technology, Helsinki, Finland. Operating systems under which the program has been tested: Tru64 UNIX, Linux-i386, Mac OS X 10.3 and 10.4. Programming language used: Standard C and MOTIF libraries. Memory required to execute with typical data: 6 Mbytes, but may be larger depending on the system size. No. of lines in distributed program, including test data, etc.: 16 901. No. of bytes in distributed program, including test data, etc.: 449 559. Distribution format: tar.gz. Nature of physical problem: Some phenomena involving defects take place inside three-dimensional crystals at times which can hardly be predicted. For this reason they are difficult to detect and track even within numerical experiments, especially when one is interested in studying their dynamical properties and time evolution. Furthermore, traditional simulation methods require the storage of a huge amount of data, which in turn may imply lengthy analysis work. Method of solution: Simplifications of the simulation work described above also depend strongly on computer performance. It has now become possible to realize some such simplifications thanks to the real possibility of using interactive programs. The solution proposed here is based on the development of an interactive graphical simulation program, both for avoiding large storage of data and the subsequent elaboration and analysis, and for visualizing and tracking many phenomena inside three-dimensional samples. However, the full computational power of traditional simulation programs may not be available in general in programs with graphical user interfaces, due to their interactive nature. Nevertheless, interactive programs can still be very useful for detecting processes that are difficult to visualize, restricting the range or making a fine tuning of the parameters, and tailoring the faster programs toward precise targets. Restrictions on the complexity of the problem: The restrictions on the applicability of the program are related to the computer resources available. The graphical interface and interactivity demand computational resources that depend on the particular numerical simulation to be performed.
To preserve a balance between speed and resources, the choice of the number of atoms to be simulated is critical. With an average current computer, simulations of systems with more than 10^5 atoms may not be easily feasible in an interactive scheme. Another restriction is related to the fact that the program was originally designed to simulate systems in the solid phase, so problems in the simulation may occur if some particular physical quantities are computed beyond the melting point. Typical running time: It depends on the machine architecture, system size, and user needs. Unusual features of the program: In the program, besides the window in which the system is represented in real space, an additional graphical window presents the real-time distribution histogram for different physical variables (such as kinetic or potential energy). Such a tool is very useful for demonstrative numerical experiments for teaching purposes as well as for research, e.g., for detecting and tracking crystal defects. The program includes: an initial condition builder, an interactive display of the simulation, and a set of tools which allow the user to filter the information through different physical quantities (either displayed in real time or printed in the output files) and to perform an efficient search of the interesting regions of parameter space.

  5. SiSeRHMap v1.0: a simulator for mapped seismic response using a hybrid model

    NASA Astrophysics Data System (ADS)

    Grelle, Gerardo; Bonito, Laura; Lampasi, Alessandro; Revellino, Paola; Guerriero, Luigi; Sappa, Giuseppe; Guadagno, Francesco Maria

    2016-04-01

    The SiSeRHMap (simulator for mapped seismic response using a hybrid model) is a computerized methodology that generates prediction maps of seismic response in terms of acceleration spectra. It is based on a hybrid model which combines different approaches and models in a new and non-conventional way. These approaches and models are organized in a code architecture composed of five interdependent modules. A GIS (geographic information system) cubic model (GCM), which is a layered computational structure based on the concept of lithodynamic units and zones, aims at reproducing a parameterized layered subsoil model. A meta-modelling process confers a hybrid nature to the methodology. In this process, the one-dimensional (1-D) linear equivalent analysis produces acceleration response spectra for a specified number of site profiles using one or more input motions. The shear wave velocity-thickness profiles, defined as trainers, are randomly selected in each zone. Subsequently, a numerical adaptive simulation model (Emul-spectra) is optimized on the above trainer acceleration response spectra by means of a dedicated evolutionary algorithm (EA) and the Levenberg-Marquardt algorithm (LMA) as the final optimizer. In the final step, the GCM maps executor module produces a serial map set of the stratigraphic seismic response at different periods by solving the calibrated Emul-spectra model on a grid. In addition, the topographic amplification of the spectra is also computed by means of a 3-D validated numerical prediction model. This model is built to match the results of numerical simulations related to isolated reliefs using GIS morphometric data. In this way, different sets of seismic response maps are developed, on which maps of design acceleration response spectra are also defined by means of an enveloping technique.

  6. Inhibitor design strategy based on an enzyme structural flexibility: a case of bacterial MurD ligase.

    PubMed

    Perdih, Andrej; Hrast, Martina; Barreteau, Hélène; Gobec, Stanislav; Wolber, Gerhard; Solmajer, Tom

    2014-05-27

    Increasing bacterial resistance to available antibiotics has stimulated the discovery of novel efficacious antibacterial agents. The biosynthesis of the bacterial peptidoglycan, where the MurD enzyme is involved in the intracellular phase of the UDP-MurNAc-pentapeptide formation, represents a collection of highly selective targets for novel antibacterial drug design. In our previous computational studies, the C-terminal domain motion of the MurD ligase was investigated using Targeted Molecular Dynamics (TMD) simulation and the Off-Path Simulation (OPS) technique. In this study, we present a drug design strategy using multiple protein structures for the identification of novel MurD ligase inhibitors. Our main focus was the ATP-binding site of the MurD enzyme. In the first stage, three MurD protein conformations were selected based on the obtained OPS/TMD data as the initial criterion. Subsequently, a two-stage virtual screening approach was utilized, combining derived structure-based pharmacophores with molecular docking calculations. Selected compounds were then assayed in the established enzyme binding assays, and compound 3 from the aminothiazole class was discovered to act as a dual MurC/MurD inhibitor in the micromolar range. A steady-state kinetic study was performed on the MurD enzyme to provide further information about the mechanistic aspects of its inhibition. In the final stage, all used conformations of the MurD enzyme with compound 3 were simulated in classical molecular dynamics (MD) simulations, providing atomistic insight into the experimental results. Overall, the study depicts several challenges that need to be addressed when trying to hit a flexible moving target such as the presently studied bacterial MurD enzyme, and shows how computational tools can be used proficiently at all stages of the drug discovery process.

  7. Descriptive and Criterion-Referenced Self-Assessment with L2 Readers

    ERIC Educational Resources Information Center

    Brantmeier, Cindy; Vanderplank, Robert

    2008-01-01

    Brantmeier [Brantmeier, C., 2006. "Advanced L2 learners and reading placement: self-assessment, computer-based testing, and subsequent performance." System 34(1), 15-35] found that self-assessment (SA) of second language (L2) reading ability is not an accurate predictor for computer-based testing or subsequent classroom performance. With 359…

  8. Protostar formation in the early universe.

    PubMed

    Yoshida, Naoki; Omukai, Kazuyuki; Hernquist, Lars

    2008-08-01

    The nature of the first generation of stars in the universe remains largely unknown. Observations imply the existence of massive primordial stars early in the history of the universe, and the standard theory for the growth of cosmic structure predicts that structures grow hierarchically through gravitational instability. We have developed an ab initio computer simulation of the formation of primordial stars that follows the relevant atomic and molecular processes in a primordial gas in an expanding universe. The results show that primeval density fluctuations left over from the Big Bang can drive the formation of a tiny protostar with a mass 1% that of the Sun. The protostar is a seed for the subsequent formation of a massive primordial star.

  9. The Analysis of Fluorescence Decay by a Method of Moments

    PubMed Central

    Isenberg, Irvin; Dyson, Robert D.

    1969-01-01

    The fluorescence decay of the excited state of most biopolymers, and biopolymer conjugates and complexes, is not, in general, a simple exponential. The method of moments is used to establish a means of analyzing such multi-exponential decays. The method is tested by the use of computer simulated data, assuming that the limiting error is determined by noise generated by a pseudorandom number generator. Multi-exponential systems with relatively closely spaced decay constants may be successfully analyzed. The analyses show the requirements, in terms of precision, that data must meet. The results may be used both as an aid in the design of equipment and in the analysis of data subsequently obtained. PMID:5353139
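
    For a two-component decay F(t) = a1*exp(-t/tau1) + a2*exp(-t/tau2), the moment relations follow from G_k = m_k/k! = a1*tau1^(k+1) + a2*tau2^(k+1), where m_k is the k-th moment of the decay curve; the lifetimes are then the roots of x^2 - p*x - q with G_{k+2} = p*G_{k+1} + q*G_k. The following sketch recovers the lifetimes from clean synthetic data; the paper additionally treats noise, data precision, and instrument effects.

      # Worked sketch of the moment approach for a two-component decay.
      import numpy as np
      from math import factorial

      a1, tau1, a2, tau2 = 2.0, 1.0, 1.0, 4.0
      t = np.linspace(0.0, 60.0, 60001)
      dt = t[1] - t[0]
      decay = a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

      # reduced moments G_k = (1/k!) * integral of t^k F(t) dt
      G = [float((t**k * decay).sum() * dt) / factorial(k) for k in range(4)]

      # solve [[G1, G0], [G2, G1]] @ [p, q] = [G2, G3]; lifetimes are roots of x^2 - p*x - q
      p, q = np.linalg.solve([[G[1], G[0]], [G[2], G[1]]], [G[2], G[3]])
      taus = np.roots([1.0, -p, -q])
      amps = np.linalg.solve([[taus[0], taus[1]], [taus[0]**2, taus[1]**2]], [G[0], G[1]])
      print("recovered lifetimes:", np.sort(taus), "amplitudes:", amps)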

  10. On firework blasts and qualitative parameter dependency.

    PubMed

    Zohdi, T I

    2016-01-01

    In this paper, a mathematical model is developed to qualitatively simulate the progressive time-evolution of a blast from a simple firework. Estimates are made for the blast radius that one can expect for a given amount of detonation energy and pyrotechnic display material. The model balances the released energy from the initial blast pulse with the subsequent kinetic energy and then computes the trajectory of the material under the influence of the drag from the surrounding air, gravity and possible buoyancy. Under certain simplifying assumptions, the model can be solved for analytically. The solution serves as a guide to identifying key parameters that control the evolving blast envelope. Three-dimensional examples are given.
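
    A minimal sketch in the spirit of the model described above: particles receive an initial speed from the released blast energy and are then integrated under quadratic air drag and gravity (buoyancy omitted); all parameter values are illustrative.

      # Qualitative sketch: energy balance sets the launch speed, then explicit Euler
      # integration of the particle cloud under drag and gravity.
      import numpy as np

      rng = np.random.default_rng(0)
      N, m, radius = 200, 5e-4, 2e-3                 # particles, mass (kg), radius (m)
      E_blast = 5e3                                  # energy given to the material (J)
      rho_air, Cd, g = 1.2, 0.47, 9.81
      area = np.pi * radius**2

      speed0 = np.sqrt(2.0 * E_blast / (N * m))      # energy balance -> launch speed
      dirs = rng.normal(size=(N, 3))
      dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
      vel = speed0 * dirs
      pos = np.zeros((N, 3))

      dt = 1e-3
      for _ in range(5000):                          # 5 s of flight
          speed = np.linalg.norm(vel, axis=1, keepdims=True)
          drag = -0.5 * rho_air * Cd * area * speed * vel
          vel += (drag / m + np.array([0.0, 0.0, -g])) * dt
          pos += vel * dt

      print("approximate blast radius (m):", np.linalg.norm(pos[:, :2], axis=1).max())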

  11. On firework blasts and qualitative parameter dependency

    PubMed Central

    Zohdi, T. I.

    2016-01-01

    In this paper, a mathematical model is developed to qualitatively simulate the progressive time-evolution of a blast from a simple firework. Estimates are made for the blast radius that one can expect for a given amount of detonation energy and pyrotechnic display material. The model balances the released energy from the initial blast pulse with the subsequent kinetic energy and then computes the trajectory of the material under the influence of the drag from the surrounding air, gravity and possible buoyancy. Under certain simplifying assumptions, the model can be solved for analytically. The solution serves as a guide to identifying key parameters that control the evolving blast envelope. Three-dimensional examples are given. PMID:26997903

  12. Real-time simulation of the TF30-P-3 turbofan engine using a hybrid computer

    NASA Technical Reports Server (NTRS)

    Szuch, J. R.; Bruton, W. M.

    1974-01-01

    A real-time, hybrid-computer simulation of the TF30-P-3 turbofan engine was developed. The simulation was primarily analog in nature but used the digital portion of the hybrid computer to perform bivariate function generation associated with the performance of the engine's rotating components. FORTRAN listings and analog patching diagrams are provided. The hybrid simulation was controlled by a digital computer programmed to simulate the engine's standard hydromechanical control. Both steady-state and dynamic data obtained from the digitally controlled engine simulation are presented. Hybrid simulation data are compared with data obtained from a digital simulation provided by the engine manufacturer. The comparisons indicate that the real-time hybrid simulation adequately matches the baseline digital simulation.

  13. Displaying Computer Simulations Of Physical Phenomena

    NASA Technical Reports Server (NTRS)

    Watson, Val

    1991-01-01

    Paper discusses computer simulation as a means of experiencing and learning to understand physical phenomena. Covers both present simulation capabilities and major advances expected in the near future. Visual, aural, tactile, and kinesthetic effects are used to teach such physical sciences as the dynamics of fluids. Recommends that classrooms in universities, government, and industry be linked to advanced computing centers so that computer simulations can be integrated into the education process.

  14. Detailed qualitative dynamic knowledge representation using a BioNetGen model of TLR-4 signaling and preconditioning.

    PubMed

    An, Gary C; Faeder, James R

    2009-01-01

    Intracellular signaling/synthetic pathways are being increasingly extensively characterized. However, while these pathways can be displayed in static diagrams, in reality they exist with a degree of dynamic complexity that is responsible for heterogeneous cellular behavior. Multiple parallel pathways exist and interact concurrently, limiting the ability to integrate the various identified mechanisms into a cohesive whole. Computational methods have been suggested as a means of concatenating this knowledge to aid in the understanding of overall system dynamics. Since the eventual goal of biomedical research is the identification and development of therapeutic modalities, computational representation must have sufficient detail to facilitate this 'engineering' process. Adding to the challenge, this type of representation must occur in a perpetual state of incomplete knowledge. We present a modeling approach to address this challenge that is both detailed and qualitative. This approach is termed 'dynamic knowledge representation,' and is intended to be an integrated component of the iterative cycle of scientific discovery. BioNetGen (BNG), a software platform for modeling intracellular signaling pathways, was used to model the toll-like receptor 4 (TLR-4) signal transduction cascade. The informational basis of the model was a series of reference papers on modulation of (TLR-4) signaling, and some specific primary research papers to aid in the characterization of specific mechanistic steps in the pathway. This model was detailed with respect to the components of the pathway represented, but qualitative with respect to the specific reaction coefficients utilized to execute the reactions. Responsiveness to simulated lipopolysaccharide (LPS) administration was measured by tumor necrosis factor (TNF) production. Simulation runs included evaluation of initial dose-dependent response to LPS administration at 10, 100, 1000 and 10,000, and a subsequent examination of preconditioning behavior with increasing LPS at 10, 100, 1000 and 10,000 and a secondary dose of LPS at 10,000 administered at approximately 27h of simulated time. Simulations of 'knockout' versions of the model allowed further examination of the interactions within the signaling cascade. The model demonstrated a dose-dependent TNF response curve to increasing stimulus by LPS. Preconditioning simulations demonstrated a similar dose-dependency of preconditioning doses leading to attenuation of response to subsequent LPS challenge - a 'tolerance' dynamic. These responses match dynamics reported in the literature. Furthermore, the simulated 'knockout' results suggested the existence and need for dual negative feedback control mechanisms, represented by the zinc ring-finger protein A20 and inhibitor kappa B proteins (IkappaB), in order for both effective attenuation of the initial stimulus signal and subsequent preconditioned 'tolerant' behavior. We present an example of detailed, qualitative dynamic knowledge representation using the TLR-4 signaling pathway, its control mechanisms and overall behavior with respect to preconditioning. The intent of this approach is to demonstrate a method of translating the extensive mechanistic knowledge being generated at the basic science level into an executable framework that can provide a means of 'conceptual model verification.' 
This allows for both the 'checking' of the dynamic consequences of a mechanistic hypothesis and the creation of a modular component of an overall model directed at the engineering goal of biomedical research. It is hoped that this paper will increase the use of knowledge representation and communication in this fashion, and facilitate the concatenation and integration of community-wide knowledge.

  15. The DYNAMO Simulation Language--An Alternate Approach to Computer Science Education.

    ERIC Educational Resources Information Center

    Bronson, Richard

    1986-01-01

    Suggests the use of computer simulation of continuous systems as a problem solving approach to computer languages. Outlines the procedures that the system dynamics approach employs in computer simulations. Explains the advantages of the special purpose language, DYNAMO. (ML)

  16. Launch Site Computer Simulation and its Application to Processes

    NASA Technical Reports Server (NTRS)

    Sham, Michael D.

    1995-01-01

    This paper provides an overview of computer simulation, the Lockheed developed STS Processing Model, and the application of computer simulation to a wide range of processes. The STS Processing Model is an icon driven model that uses commercial off the shelf software and a Macintosh personal computer. While it usually takes one year to process and launch 8 space shuttles, with the STS Processing Model this process is computer simulated in about 5 minutes. Facilities, orbiters, or ground support equipment can be added or deleted and the impact on launch rate, facility utilization, or other factors measured as desired. This same computer simulation technology can be used to simulate manufacturing, engineering, commercial, or business processes. The technology does not require an 'army' of software engineers to develop and operate, but instead can be used by the layman with only a minimal amount of training. Instead of making changes to a process and realizing the results after the fact, with computer simulation, changes can be made and processes perfected before they are implemented.

  17. Protocols for Handling Messages Between Simulation Computers

    NASA Technical Reports Server (NTRS)

    Balcerowski, John P.; Dunnam, Milton

    2006-01-01

    Practical Simulator Network (PSimNet) is a set of data-communication protocols designed especially for use in handling messages between computers that are engaging cooperatively in real-time or nearly-real-time training simulations. In a typical application, computers that provide individualized training at widely dispersed locations would communicate, by use of PSimNet, with a central host computer that would provide a common computational- simulation environment and common data. Originally intended for use in supporting interfaces between training computers and computers that simulate the responses of spacecraft scientific payloads, PSimNet could be especially well suited for a variety of other applications -- for example, group automobile-driver training in a classroom. Another potential application might lie in networking of automobile-diagnostic computers at repair facilities to a central computer that would compile the expertise of numerous technicians and engineers and act as an expert consulting technician.

  18. Reversible simulation of irreversible computation

    NASA Astrophysics Data System (ADS)

    Li, Ming; Tromp, John; Vitányi, Paul

    1998-09-01

    Computer computations are generally irreversible while the laws of physics are reversible. This mismatch is penalized by, among other things, generating excess thermal entropy in the computation. Computing performance has improved to the extent that efficiency degrades unless all algorithms are executed reversibly, for example by a universal reversible simulation of irreversible computations. All known reversible simulations are either space hungry or time hungry. The leanest method was proposed by Bennett and can be analyzed using a simple ‘reversible’ pebble game. The reachable reversible simulation instantaneous descriptions (pebble configurations) of such pebble games are characterized completely. As a corollary we obtain the reversible simulation by Bennett and, moreover, show that it is a space-optimal pebble game. We also introduce irreversible steps and give a theorem on the tradeoff between the number of allowed irreversible steps and the memory gain in the pebble game. In this resource-bounded setting the limited erasing needs to be performed at precise instants during the simulation. The reversible simulation can be modified so that it is applicable also when the simulated computation time is unknown.
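
    A short sketch (my own illustration, not code from the paper) of the recursive checkpointing strategy behind Bennett's reversible simulation, phrased as the pebble game the abstract analyzes: advancing 2^n irreversible steps costs roughly 3^n reversible moves while keeping only O(n) pebbles (checkpoints) on the board at once.

      def pebble(s, n, moves):
          """Emit moves that place a pebble on node s + 2**n, given a pebble on node s."""
          if n == 0:
              moves.append(("put", s + 1))
              return
          half = 2 ** (n - 1)
          pebble(s, n - 1, moves)            # reach the midpoint checkpoint
          pebble(s + half, n - 1, moves)     # reach the endpoint from the midpoint
          unpebble(s, n - 1, moves)          # reversibly erase the midpoint checkpoint

      def unpebble(s, n, moves):
          """Exact reverse of pebble(): remove the pebble on node s + 2**n."""
          if n == 0:
              moves.append(("remove", s + 1))
              return
          half = 2 ** (n - 1)
          pebble(s, n - 1, moves)
          unpebble(s + half, n - 1, moves)
          unpebble(s, n - 1, moves)

      moves = []
      pebble(0, 4, moves)                    # simulate 2**4 = 16 irreversible steps reversibly
      on_board, peak = {0}, 1
      for op, node in moves:
          if op == "put":
              on_board.add(node)
          else:
              on_board.discard(node)
          peak = max(peak, len(on_board))
      print(len(moves), "reversible moves,", peak, "pebbles at peak")   # 81 moves, 6 pebbles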

  19. Massively parallel quantum computer simulator

    NASA Astrophysics Data System (ADS)

    De Raedt, K.; Michielsen, K.; De Raedt, H.; Trieu, B.; Arnold, G.; Richter, M.; Lippert, Th.; Watanabe, H.; Ito, N.

    2007-01-01

    We describe portable software to simulate universal quantum computers on massively parallel computers. We illustrate the use of the simulation software by running various quantum algorithms on different computer architectures, such as an IBM BlueGene/L, an IBM Regatta p690+, a Hitachi SR11000/J1, a Cray X1E, an SGI Altix 3700 and clusters of PCs running Windows XP. We study the performance of the software by simulating quantum computers containing up to 36 qubits, using up to 4096 processors and up to 1 TB of memory. Our results demonstrate that the simulator exhibits nearly ideal scaling as a function of the number of processors and suggest that the simulation software described in this paper may also serve as a benchmark for testing high-end parallel computers.
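
    A toy, single-process counterpart to the simulator described above (an illustrative sketch, not the authors' code): the full 2^n-amplitude state vector is stored densely and gates are applied by tensor contraction.

      import numpy as np

      def apply_gate(state, gate, target, n_qubits):
          """Apply a 2x2 gate to one qubit of a dense n-qubit state vector."""
          psi = state.reshape([2] * n_qubits)
          psi = np.moveaxis(psi, target, 0)             # expose the target qubit as axis 0
          psi = np.tensordot(gate, psi, axes=([1], [0]))
          return np.moveaxis(psi, 0, target).reshape(-1)

      n = 3
      state = np.zeros(2 ** n, dtype=complex)
      state[0] = 1.0                                    # start in |000>
      H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
      for q in range(n):                                # Hadamard on every qubit
          state = apply_gate(state, H, q, n)
      print(np.round(np.abs(state) ** 2, 3))            # uniform probabilities over 8 basis states

    At 16 bytes per complex amplitude, a 36-qubit state alone occupies 2^36 × 16 B, which is about 1 TB; this arithmetic is consistent with the memory and processor counts quoted in the abstract.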

  20. Space-filling designs for computer experiments: A review

    DOE PAGES

    Joseph, V. Roshan

    2016-01-29

    Improving the quality of a product/process using a computer simulator is a much less expensive option than real physical testing. However, simulation using computationally intensive computer models can be time consuming and, therefore, directly performing the optimization on the computer simulator can be infeasible. Experimental design and statistical modeling techniques can be used to overcome this problem. This article reviews experimental designs known as space-filling designs that are suitable for computer simulations. In the review, special emphasis is given to a recently developed space-filling design called the maximum projection design. Furthermore, its advantages are illustrated using a simulation conducted for optimizing a milling process.
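
    As a concrete (and much simpler) relative of the designs reviewed here, the sketch below builds random Latin hypercube designs and keeps the candidate with the largest minimum pairwise distance, a basic maximin space-filling criterion; it is not the maximum projection design itself.

      import numpy as np

      def latin_hypercube(n, d, rng):
          """One random Latin hypercube design on [0, 1]^d: one point per stratum per variable."""
          pts = (np.arange(n)[:, None] + rng.random((n, d))) / n
          for j in range(d):
              pts[:, j] = pts[rng.permutation(n), j]    # shuffle strata independently per column
          return pts

      def min_pairwise_distance(X):
          diff = X[:, None, :] - X[None, :, :]
          dist = np.sqrt((diff ** 2).sum(-1))
          np.fill_diagonal(dist, np.inf)
          return dist.min()

      rng = np.random.default_rng(1)
      candidates = [latin_hypercube(20, 3, rng) for _ in range(200)]
      best = max(candidates, key=min_pairwise_distance)            # crude maximin selection
      print(f"best minimum inter-point distance: {min_pairwise_distance(best):.3f}")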

  1. Space-filling designs for computer experiments: A review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joseph, V. Roshan

    Improving the quality of a product/process using a computer simulator is a much less expensive option than real physical testing. However, simulation using computationally intensive computer models can be time consuming and, therefore, directly performing the optimization on the computer simulator can be infeasible. Experimental design and statistical modeling techniques can be used to overcome this problem. This article reviews experimental designs known as space-filling designs that are suitable for computer simulations. In the review, special emphasis is given to a recently developed space-filling design called the maximum projection design. Furthermore, its advantages are illustrated using a simulation conducted for optimizing a milling process.

  2. Computer Support of Operator Training: Constructing and Testing a Prototype of a CAL (Computer Aided Learning) Supported Simulation Environment.

    ERIC Educational Resources Information Center

    Zillesen, P. G. van Schaick; And Others

    Instructional feedback given to the learners during computer simulation sessions may be greatly improved by integrating educational computer simulation programs with hypermedia-based computer-assisted learning (CAL) materials. A prototype of a learning environment of this type called BRINE PURIFICATION was developed for use in corporate training…

  3. Computer Simulation in Mass Emergency and Disaster Response: An Evaluation of Its Effectiveness as a Tool for Demonstrating Strategic Competency in Emergency Department Medical Responders

    ERIC Educational Resources Information Center

    O'Reilly, Daniel J.

    2011-01-01

    This study examined the capability of computer simulation as a tool for assessing the strategic competency of emergency department nurses as they responded to authentically computer simulated biohazard-exposed patient case studies. Thirty registered nurses from a large, urban hospital completed a series of computer-simulated case studies of…

  4. Space Ultrareliable Modular Computer (SUMC) instruction simulator

    NASA Technical Reports Server (NTRS)

    Curran, R. T.

    1972-01-01

    The design principles, description, functional operation, and recommended expansion and enhancements are presented for the Space Ultrareliable Modular Computer interpretive simulator. Included as appendices are the user's manual, program module descriptions, target instruction descriptions, simulator source program listing, and a sample program printout. In discussing the design and operation of the simulator, the key problems involving host computer independence and target computer architectural scope are brought into focus.

  5. Computer-aided Instructional System for Transmission Line Simulation.

    ERIC Educational Resources Information Center

    Reinhard, Erwin A.; Roth, Charles H., Jr.

    A computer-aided instructional system has been developed which utilizes dynamic computer-controlled graphic displays and which requires student interaction with a computer simulation in an instructional mode. A numerical scheme has been developed for digital simulation of a uniform, distortionless transmission line with resistive terminations and…

  6. Verifying the Simulation Hypothesis via Infinite Nested Universe Simulacrum Loops

    NASA Astrophysics Data System (ADS)

    Sharma, Vikrant

    2017-01-01

    The simulation hypothesis proposes that local reality exists as a simulacrum within a hypothetical computer's dimension. More specifically, Bostrom's trilemma proposes that the number of simulations an advanced 'posthuman' civilization could produce makes the proposition very likely. In this paper, a hypothetical method to verify the simulation hypothesis is discussed using infinite regression applied to a new type of infinite loop. Assign dimension n to any computer in our present reality, where dimension signifies the hierarchical level in nested simulations our reality exists in. A computer simulating known reality would be dimension (n-1), and likewise a computer simulating an artificial reality, such as a video game, would be dimension (n+1). In this method, among others, four key assumptions are made about the nature of the original computer dimension n. Summations show that regressing such a reality infinitely will create convergence, implying that whether local reality is a grand simulation is feasible to detect with adequate compute capability. The action of reaching said convergence point halts the simulation of local reality. Sensitivities to the four assumptions and implications are discussed.

  7. Numerical simulation code for self-gravitating Bose-Einstein condensates

    NASA Astrophysics Data System (ADS)

    Madarassy, Enikő J. M.; Toth, Viktor T.

    2013-04-01

    We completed the development of simulation code that is designed to study the behavior of a conjectured dark matter galactic halo that is in the form of a Bose-Einstein Condensate (BEC). The BEC is described by the Gross-Pitaevskii equation, which can be solved numerically using the Crank-Nicholson method. The gravitational potential, in turn, is described by Poisson’s equation, which can be solved using the relaxation method. Our code combines these two methods to study the time evolution of a self-gravitating BEC. The inefficiency of the relaxation method is balanced by the fact that in subsequent time iterations, previously computed values of the gravitational field serve as very good initial estimates. The code is robust (as evidenced by its stability on coarse grids) and efficient enough to simulate the evolution of a system over the course of 10⁹ years using a finer (100×100×100) spatial grid, in less than a day of processor time on a contemporary desktop computer. Catalogue identifier: AEOR_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOR_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 5248 No. of bytes in distributed program, including test data, etc.: 715402 Distribution format: tar.gz Programming language: C++ or FORTRAN. Computer: PCs or workstations. Operating system: Linux or Windows. Classification: 1.5. Nature of problem: Simulation of a self-gravitating Bose-Einstein condensate by simultaneous solution of the Gross-Pitaevskii and Poisson equations in three dimensions. Solution method: The Gross-Pitaevskii equation is solved numerically using the Crank-Nicholson method; Poisson’s equation is solved using the relaxation method. The time evolution of the system is governed by the Gross-Pitaevskii equation; the solution of Poisson’s equation at each time step is used as an initial estimate for the next time step, which dramatically increases the efficiency of the relaxation method. Running time: Depends on the chosen size of the problem. On a typical personal computer, a 100×100×100 grid can be solved with a time span of 10 Gyr in approx. a day of running time.
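
    The warm-start idea described above is easy to demonstrate in isolation. The sketch below (my own illustration, not part of the CPC package) solves a Poisson problem by Jacobi relaxation twice: once from a zero initial guess and once starting from the potential computed for a slightly different source, mimicking consecutive time steps.

      import numpy as np

      def relax_poisson(phi, rho, h, G=1.0, tol=1e-6, max_sweeps=50000):
          """Jacobi relaxation for laplacian(phi) = 4*pi*G*rho on a cubic grid, phi = 0 on the boundary."""
          src = 4.0 * np.pi * G * rho * h * h
          for sweep in range(1, max_sweeps + 1):
              new = phi.copy()
              new[1:-1, 1:-1, 1:-1] = (phi[2:, 1:-1, 1:-1] + phi[:-2, 1:-1, 1:-1] +
                                       phi[1:-1, 2:, 1:-1] + phi[1:-1, :-2, 1:-1] +
                                       phi[1:-1, 1:-1, 2:] + phi[1:-1, 1:-1, :-2] -
                                       src[1:-1, 1:-1, 1:-1]) / 6.0
              if np.max(np.abs(new - phi)) < tol:
                  return new, sweep
              phi = new
          return phi, max_sweeps

      n = 32
      h = 1.0 / n
      rho = np.zeros((n, n, n))
      rho[n // 2, n // 2, n // 2] = 1.0                    # point-like source
      phi, cold_sweeps = relax_poisson(np.zeros_like(rho), rho, h)
      _, warm_sweeps = relax_poisson(phi, 1.02 * rho, h)   # next "time step": reuse previous potential
      print(cold_sweeps, "sweeps from scratch vs", warm_sweeps, "sweeps warm-started")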

  8. Current capabilities for simulating the extreme distortion of thin structures subjected to severe impacts

    NASA Technical Reports Server (NTRS)

    Key, Samuel W.

    1993-01-01

    The explicit transient dynamics technology in use today for simulating the impact and subsequent transient dynamic response of a structure has its origins in the 'hydrocodes' dating back to the late 1940's. The growth in capability in explicit transient dynamics technology parallels the growth in speed and size of digital computers. Computer software for simulating the explicit transient dynamic response of a structure is characterized by algorithms that use a large number of small steps. In explicit transient dynamics software there is a significant emphasis on speed and simplicity. The finite element technology used to generate the spatial discretization of a structure is based on a compromise between completeness of the representation for the physical processes modelled and speed in execution. That is, since it is expected in every calculation that the deformation will be finite and the material will be strained beyond the elastic range, the geometry and the associated gradient operators must be reconstructed, as well as complex stress-strain models evaluated at every time step. As a result, finite elements derived for explicit transient dynamics software use the simplest and barest constructions possible for computational efficiency while retaining an essential representation of the physical behavior. The best example of this technology is the four-node bending quadrilateral derived by Belytschko, Lin and Tsay. Today, the speed, memory capacity and availability of computer hardware allows a number of the previously used algorithms to be 'improved.' That is, it is possible with today's computing hardware to modify many of the standard algorithms to improve their representation of the physical process at the expense of added complexity and computational effort. The purpose is to review a number of these algorithms and identify the improvements possible. In many instances, both the older, faster version of the algorithm and the improved and somewhat slower version of the algorithm are found implemented together in software. Specifically, the following seven algorithmic items are examined: the invariant time derivatives of stress used in material models expressed in rate form; incremental objectivity and strain used in the numerical integration of the material models; the use of one-point element integration versus mean quadrature; shell elements used to represent the behavior of thin structural components; beam elements based on stress-resultant plasticity versus cross-section integration; the fidelity of elastic-plastic material models in their representation of ductile metals; and the use of Courant subcycling to reduce computational effort.

  9. Simultaneous Measurements of Temperature and Major Species Concentration in a Hydrocarbon-Fueled Dual Mode Scramjet Using WIDECARS

    NASA Astrophysics Data System (ADS)

    Gallo, Emanuela Carolina Angela

    Width-increased dual-pump enhanced coherent anti-Stokes Raman spectroscopy (WIDECARS) measurements were conducted in a McKenna air-ethylene premixed burner, at nominal equivalence ratios between 0.55 and 2.50, to provide quantitative measurements of six major combustion species (C2H4, N2, O2, H2, CO, CO2) concentrations and temperature simultaneously. The purpose of this test was to investigate the uncertainties in the experimental and spectral modeling methods in preparation for a subsequent scramjet C2H4/air combustion test at the University of Virginia-Aerospace Research Laboratory. A broadband Pyrromethene (PM) PM597 and PM650 dye laser mixture and optical cavity were studied and optimized to excite the Raman shift of all the target species. Two hundred single-shot recorded spectra were processed, theoretically fitted and then compared to computational models, to verify where chemical equilibrium or adiabatic conditions occurred, providing experimental flame location and formation, species concentrations, temperature, and heat loss inputs to computational kinetic models. The Stark effect, temperature, and concentration errors are discussed. Subsequently, WIDECARS measurements of a premixed air-ethylene flame were successfully acquired in a direct-connect small-scale dual-mode scramjet combustor at the University of Virginia Supersonic Combustion Facility (UVaSCF). A nominal Mach 5 flight condition was simulated (stagnation pressure p0 = 300 kPa, temperature T0 = 1200 K, equivalence ratio range ER = 0.3-0.4). The purpose of this test was to provide quantitative measurements of the six major combustion species concentrations and temperature. Point-wise measurements were taken by mapping four two-dimensional orthogonal planes (before, within, and two planes after the cavity flame holder) with respect to the combustor freestream direction. Two hundred single-shot recorded spectra were processed and theoretically fitted. Mean flow and standard deviation are provided for each investigated case. Within the flame limits tested, WIDECARS data were analyzed and compared with CFD simulations and OH-PLIF measurements.

  10. Kinematic Measurement of Knee Prosthesis from Single-Plane Projection Images

    NASA Astrophysics Data System (ADS)

    Hirokawa, Shunji; Ariyoshi, Shogo; Takahashi, Kenji; Maruyama, Koichi

    In this paper, the measurement of 3D motion from 2D perspective projections of a knee prosthesis is described. The technique reported by Banks and Hodge was further developed in this study. The estimation was performed in two steps. The first-step estimation was performed on the assumption of orthogonal projection. The second-step estimation was then carried out based upon the perspective projection to achieve a more accurate estimate. The simulation results demonstrated that the technique achieved sufficient accuracy in position/orientation estimation for prosthetic kinematics. We then applied our algorithm to CCD images, thereby examining the influences of various artifacts, possibly incorporated through the imaging process, on the estimation accuracy. We found that accuracy in the experiment was influenced mainly by the geometric discrepancies between the prosthesis component and the computer-generated model and by the spatial inconsistencies between the coordinate axes of the positioner and those of the computer model. However, we verified that our algorithm could achieve proper and consistent estimation even for the CCD images.

  11. Image reconstruction from few-view CT data by gradient-domain dictionary learning.

    PubMed

    Hu, Zhanli; Liu, Qiegen; Zhang, Na; Zhang, Yunwan; Peng, Xi; Wu, Peter Z; Zheng, Hairong; Liang, Dong

    2016-05-21

    Decreasing the number of projections is an effective way to reduce the radiation dose delivered to patients in medical computed tomography (CT) imaging. However, incomplete projection data for CT reconstruction will result in artifacts and distortions. In this paper, a novel dictionary learning algorithm operating in the gradient domain (Grad-DL) is proposed for few-view CT reconstruction. Specifically, the dictionaries are trained from the horizontal and vertical gradient images, respectively, and the desired image is subsequently reconstructed from the sparse representations of both gradients by solving a least-squares problem. Since the gradient images are sparser than the image itself, the proposed approach could lead to sparser representations than conventional DL methods in the image domain, and thus a better reconstruction quality is achieved. To evaluate the proposed Grad-DL algorithm, both qualitative and quantitative studies were employed through computer simulations as well as real data experiments on fan-beam and cone-beam geometry. The results show that the proposed algorithm can yield better images than the existing algorithms.

  12. Design optimization of hydraulic turbine draft tube based on CFD and DOE method

    NASA Astrophysics Data System (ADS)

    Nam, Mun chol; Dechun, Ba; Xiangji, Yue; Mingri, Jin

    2018-03-01

    In order to improve the performance of a hydraulic turbine draft tube during its design process, the draft tube is optimized on a multi-disciplinary collaborative design optimization platform that combines computational fluid dynamics (CFD) and design of experiments (DOE). The geometrical design variables are the median section of the draft tube and the cross section of its exit diffuser, and the objective function is to maximize the pressure recovery factor (Cp). Sample matrices required for the shape optimization of the draft tube are generated by the optimal Latin hypercube (OLH) method of the DOE technique, and their performance is evaluated through CFD numerical simulation. Subsequently, main effect and sensitivity analyses of the geometrical parameters of the draft tube are carried out. The optimal values of the geometrical design variables are then determined using the response surface method. The optimized draft tube shows a marked performance improvement over the original.
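
    The DOE-plus-surrogate loop used here can be illustrated compactly. In the sketch below (illustrative assumptions throughout; a toy analytic function stands in for the CFD evaluation of Cp), a small set of design points is evaluated, a quadratic response surface is fitted by least squares, and the surrogate is then searched for the design variables that maximize Cp.

      import numpy as np

      rng = np.random.default_rng(3)
      X = rng.random((30, 2))                           # 30 DOE runs of 2 normalized shape variables

      def fake_cfd(x):                                  # stand-in for the expensive CFD solve of Cp
          return 0.8 - 2.0 * (x[:, 0] - 0.6) ** 2 - 1.5 * (x[:, 1] - 0.4) ** 2

      y = fake_cfd(X)

      def quad_features(x1, x2):                        # full quadratic response-surface basis
          return np.column_stack([np.ones_like(x1), x1, x2, x1 ** 2, x2 ** 2, x1 * x2])

      coef, *_ = np.linalg.lstsq(quad_features(X[:, 0], X[:, 1]), y, rcond=None)

      g = np.linspace(0.0, 1.0, 201)                    # search the cheap surrogate on a fine grid
      G1, G2 = np.meshgrid(g, g, indexing="ij")
      pred = quad_features(G1.ravel(), G2.ravel()) @ coef
      k = np.argmax(pred)
      print(f"surrogate optimum near x1 = {G1.ravel()[k]:.2f}, x2 = {G2.ravel()[k]:.2f}")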

  13. Monte Carlo simulations guided by imaging to predict the in vitro ranking of radiosensitizing nanoparticles

    PubMed Central

    Retif, Paul; Reinhard, Aurélie; Paquot, Héna; Jouan-Hureaux, Valérie; Chateau, Alicia; Sancey, Lucie; Barberi-Heyob, Muriel; Pinel, Sophie; Bastogne, Thierry

    2016-01-01

    This article addresses the in silico–in vitro prediction issue of organometallic nanoparticles (NPs)-based radiosensitization enhancement. The goal was to carry out computational experiments to quickly identify efficient nanostructures and then to preferentially select the most promising ones for the subsequent in vivo studies. To this aim, this interdisciplinary article introduces a new theoretical Monte Carlo computational ranking method and tests it using 3 different organometallic NPs in terms of size and composition. While the ranking predicted in a classical theoretical scenario did not fit the reference results at all, in contrast, we showed for the first time how our accelerated in silico virtual screening method, based on basic in vitro experimental data (which takes into account the NPs cell biodistribution), was able to predict a relevant ranking in accordance with in vitro clonogenic efficiency. This corroborates the pertinence of such a prior ranking method that could speed up the preclinical development of NPs in radiation therapy. PMID:27920524

  14. Monte Carlo simulations guided by imaging to predict the in vitro ranking of radiosensitizing nanoparticles.

    PubMed

    Retif, Paul; Reinhard, Aurélie; Paquot, Héna; Jouan-Hureaux, Valérie; Chateau, Alicia; Sancey, Lucie; Barberi-Heyob, Muriel; Pinel, Sophie; Bastogne, Thierry

    This article addresses the in silico-in vitro prediction issue of organometallic nanoparticles (NPs)-based radiosensitization enhancement. The goal was to carry out computational experiments to quickly identify efficient nanostructures and then to preferentially select the most promising ones for the subsequent in vivo studies. To this aim, this interdisciplinary article introduces a new theoretical Monte Carlo computational ranking method and tests it using 3 different organometallic NPs in terms of size and composition. While the ranking predicted in a classical theoretical scenario did not fit the reference results at all, in contrast, we showed for the first time how our accelerated in silico virtual screening method, based on basic in vitro experimental data (which takes into account the NPs cell biodistribution), was able to predict a relevant ranking in accordance with in vitro clonogenic efficiency. This corroborates the pertinence of such a prior ranking method that could speed up the preclinical development of NPs in radiation therapy.

  15. Uses of Computer Simulation Models in Ag-Research and Everyday Life

    USDA-ARS?s Scientific Manuscript database

    When the news media talks about models they could be talking about role models, fashion models, conceptual models like the auto industry uses, or computer simulation models. A computer simulation model is a computer code that attempts to imitate the processes and functions of certain systems. There ...

  16. A meta-analysis of outcomes from the use of computer-simulated experiments in science education

    NASA Astrophysics Data System (ADS)

    Lejeune, John Van

    The purpose of this study was to synthesize the findings from existing research on the effects of computer-simulated experiments on students in science education. Results from 40 reports were integrated by the process of meta-analysis to examine the effect of computer-simulated experiments and interactive videodisc simulations on student achievement and attitudes. Findings indicated significant positive differences in both low-level and high-level achievement of students who used computer-simulated experiments and interactive videodisc simulations as compared to students who used more traditional learning activities. No significant differences in retention, student attitudes toward the subject, or toward the educational method were found. Based on the findings of this study, computer-simulated experiments and interactive videodisc simulations should be used to enhance students' learning in science, especially in cases where the use of traditional laboratory activities is expensive, dangerous, or impractical.
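
    For readers unfamiliar with the mechanics of such a synthesis, the sketch below shows how individual study results are typically converted to a standardized mean difference (Hedges' g) and pooled with inverse-variance weights. The numbers are made up purely to exercise the formulas; they are not data from the 40 reports.

      import numpy as np

      def hedges_g(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
          """Standardized mean difference with small-sample correction, plus its variance."""
          sp = np.sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2) / (n_t + n_c - 2))
          d = (mean_t - mean_c) / sp
          g = d * (1.0 - 3.0 / (4.0 * (n_t + n_c) - 9.0))          # Hedges' correction
          var = (n_t + n_c) / (n_t * n_c) + g ** 2 / (2.0 * (n_t + n_c))
          return g, var

      # (mean_sim, mean_trad, sd_sim, sd_trad, n_sim, n_trad) -- invented illustrative values
      studies = [(78, 72, 10, 11, 30, 32), (81, 75, 9, 10, 25, 25), (70, 69, 12, 12, 40, 38)]
      effects = [hedges_g(*s) for s in studies]
      weights = [1.0 / var for _, var in effects]
      pooled = np.average([g for g, _ in effects], weights=weights)  # fixed-effect pooled estimate
      print(f"pooled effect size g = {pooled:.3f}")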

  17. Extension of coarse-grained UNRES force field to treat carbon nanotubes.

    PubMed

    Sieradzan, Adam K; Mozolewska, Magdalena A

    2018-04-26

    Carbon nanotubes (CNTs) have recently received considerable attention because of their possible applications in various branches of nanotechnology. For their cogent application, knowledge of their interactions with biological macromolecules, especially proteins, is essential and computer simulations are very useful for such studies. Classical all-atom force fields limit simulation time scale and size of the systems significantly. Therefore, in this work, we implemented CNTs into the coarse-grained UNited RESidue (UNRES) force field. A CNT is represented as a rigid infinite-length cylinder which interacts with a protein through the Kihara potential. Energy conservation in microcanonical coarse-grained molecular dynamics simulations and temperature conservation in canonical simulations with UNRES containing the CNT component have been verified. Subsequently, studies of three proteins, bovine serum albumin (BSA), soybean peroxidase (SBP), and α-chymotrypsin (CT), with and without CNTs, were performed to examine the influence of CNTs on the structure and dynamics of these proteins. It was found that nanotubes bind to these proteins and influence their structure. Our results show that the UNRES force field can be used for further studies of CNT-protein systems on timescales 3-4 orders of magnitude larger than with regular all-atom force fields. Graphical abstract: Bovine serum albumin (BSA), soybean peroxidase (SBP), and α-chymotrypsin (CT), with and without CNTs.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reedlunn, Benjamin

    Room D was an in-situ, isothermal, underground experiment conducted at the Waste Isolation Pilot Plant between 1984 and 1991. The room was carefully instrumented to measure the horizontal and vertical closure immediately upon excavation and for several years thereafter. Early finite element simulations of salt creep around Room D under-predicted the vertical closure by 4.5×, causing investigators to explore a series of changes to the way Room D was modeled. Discrepancies between simulations and measurements were resolved through a series of adjustments to model parameters, which were openly acknowledged in published reports. Interest in Room D has been rekindled recently by the U.S./German Joint Project III and Project WEIMOS, which seek to improve the predictions of rock salt constitutive models. Joint Project participants calibrate their models solely against laboratory tests, and benchmark the models against underground experiments, such as Room D. This report describes updating legacy Room D simulations to today’s computational standards by rectifying several numerical issues. Subsequently, the constitutive model used in previous modeling is recalibrated two different ways against a suite of new laboratory creep experiments on salt extracted from the repository horizon of the Waste Isolation Pilot Plant. Simulations with the new, laboratory-based, calibrations under-predict Room D vertical closure by 3.1×. A list of potential improvements is discussed.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reedlunn, Benjamin

    Room D was an in-situ, isothermal, underground experiment conducted at the Waste Isolation Pilot Plant between 1984 and 1991. The room was carefully instrumented to measure the horizontal and vertical closure immediately upon excavation and for several years thereafter. Early finite element simulations of salt creep around Room D under-predicted the vertical closure by 4.5×, causing investigators to explore a series of changes to the way Room D was modeled. Discrepancies between simulations and measurements were resolved through a series of adjustments to model parameters, which were openly acknowledged in published reports. Interest in Room D has been rekindled recently by the U.S./German Joint Project III and Project WEIMOS, which seek to improve the predictions of rock salt constitutive models. Joint Project participants calibrate their models solely against laboratory tests, and benchmark the models against underground experiments, such as Room D. This report describes updating legacy Room D simulations to today’s computational standards by rectifying several numerical issues. Subsequently, the constitutive model used in previous modeling is recalibrated two different ways against a suite of new laboratory creep experiments on salt extracted from the repository horizon of the Waste Isolation Pilot Plant. Simulations with the new, laboratory-based, calibrations under-predict Room D vertical closure by 3.1×. A list of potential improvements is discussed.

  20. A search for a heavy Majorana neutrino and a radiation damage simulation for the HF detector

    NASA Astrophysics Data System (ADS)

    Wetzel, James William

    A search for heavy Majorana neutrinos is performed using an event signature defined by two same-sign muons accompanied by two jets. This search is an extension of previous searches (L3, DELPHI, CMS, ATLAS), using 19.7 fb⁻¹ of data from the 2012 Large Hadron Collider experimental run collected by the Compact Muon Solenoid experiment. A mass window of 40-500 GeV/c² is explored. No excess of events above Standard Model backgrounds is observed, and limits are set on the mixing element squared, |VμN|², as a function of Majorana neutrino mass. The Hadronic Forward (HF) Detector's performance will degrade as a function of the number of particles delivered to the detector over time, a quantity referred to as integrated luminosity and measured in inverse femtobarns (fb⁻¹). In order to better plan detector upgrades, the CMS Forward Calorimetry Task Force (FCAL) group and the CMS Hadronic Calorimeter (HCAL) group have requested that radiation damage be simulated and the subsequent performance of the HF subdetector be studied. The simulation was implemented into both the CMS FastSim and CMS FullSim simulation packages. Standard calorimetry performance metrics were computed and are reported. The HF detector can be expected to perform well through the planned delivery of 3000 fb⁻¹.

  1. Network Reduction Algorithm for Developing Distribution Feeders for Real-Time Simulators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nagarajan, Adarsh; Nelson, Austin A; Prabakar, Kumaraguru

    As advanced grid-support functions (AGF) become more widely used in grid-connected photovoltaic (PV) inverters, utilities are increasingly interested in their impacts when implemented in the field. These effects can be understood by modeling feeders in real-time simulators and testing PV inverters using power hardware-in-the-loop (PHIL) techniques. This paper presents a novel feeder model reduction algorithm using a ruin & reconstruct methodology that enables large feeders to be solved and operated on real-time computing platforms. Two Hawaiian Electric feeder models in Synergi Electric's load flow software were converted to reduced-order models in OpenDSS, and subsequently implemented in the OPAL-RT real-time digital testing platform. Smart PV inverters were added to the real-time model with AGF responses modeled after characterizing commercially available hardware inverters. Finally, hardware inverters were tested in conjunction with the real-time model using PHIL techniques so that the effects of AGFs on the feeders could be analyzed.

  2. A Continuum Method for Determining Membrane Protein Insertion Energies and the Problem of Charged Residues

    PubMed Central

    Choe, Seungho; Hecht, Karen A.; Grabe, Michael

    2008-01-01

    Continuum electrostatic approaches have been extremely successful at describing the charged nature of soluble proteins and how they interact with binding partners. However, it is unclear whether continuum methods can be used to quantitatively understand the energetics of membrane protein insertion and stability. Recent translation experiments suggest that the energy required to insert charged peptides into membranes is much smaller than predicted by present continuum theories. Atomistic simulations have pointed to bilayer inhomogeneity and membrane deformation around buried charged groups as two critical features that are neglected in simpler models. Here, we develop a fully continuum method that circumvents both of these shortcomings by using elasticity theory to determine the shape of the deformed membrane and then subsequently uses this shape to carry out continuum electrostatics calculations. Our method does an excellent job of quantitatively matching results from detailed molecular dynamics simulations at a tiny fraction of the computational cost. We expect that this method will be ideal for studying large membrane protein complexes. PMID:18474636

  3. Development of an Active Flow Control Technique for an Airplane High-Lift Configuration

    NASA Technical Reports Server (NTRS)

    Shmilovich, Arvin; Yadlin, Yoram; Dickey, Eric D.; Hartwich, Peter M.; Khodadoust, Abdi

    2017-01-01

    This study focuses on Active Flow Control methods used in conjunction with airplane high-lift systems. The project is motivated by the simplified high-lift system, which offers enhanced airplane performance compared to conventional high-lift systems. Computational simulations are used to guide the implementation of preferred flow control methods, which require a fluidic supply. It is first demonstrated that flow control applied to a high-lift configuration that consists of simple hinge flaps is capable of attaining the performance of the conventional high-lift counterpart. A set of flow control techniques has been subsequently considered to identify promising candidates, where the central requirement is that the mass flow for actuation has to be within available resources onboard. The flow control methods are based on constant blowing, fluidic oscillators, and traverse actuation. The simulations indicate that the traverse actuation offers a substantial reduction in required mass flow, and it is especially effective when the frequency of actuation is consistent with the characteristic time scale of the flow.

  4. Mechanics of kinetochore microtubules and their interactions with chromosomes during cell division

    NASA Astrophysics Data System (ADS)

    Nazockdast, Ehssan; Fürthauer, Sebastian; Redemann, Stephanie; Baumgart, Johannes; Lindow, Norbert; Kratz, Andrea; Prohaska, Steffen; Müller-Reichert, Thomas; Shelley, Michael

    2016-11-01

    The accurate segregation of chromosomes, and subsequent cell division, in Eukaryotic cells is achieved by the interactions of an assembly of microtubules (MTs) and motor-proteins, known as the mitotic spindle. We use a combination of our computational platform for simulating cytoskeletal assemblies and our structural data from high-resolution electron tomography of the mitotic spindle, to study the kinetics and mechanics of MTs in the spindle, and their interactions with chromosomes during chromosome segregation in the first cell division in C.elegans embryo. We focus on kinetochore MTs, or KMTs, which have one end attached to a chromosome. KMTs are thought to be a key mechanical component in chromosome segregation. Using exploratory simulations of MT growth, bending, hydrodynamic interactions, and attachment to chromosomes, we propose a mechanical model for KMT-chromosome interactions that reproduces observed KMT length and shape distributions from electron tomography. We find that including detailed hydrodynamic interactions between KMTs is essential for agreement with the experimental observations.

  5. Fe/starch nanoparticle - Pseudomonas aeruginosa: Bio-physiochemical and MD studies.

    PubMed

    Mofradnia, Soheil Rezazadeh; Tavakoli, Zahra; Yazdian, Fatemeh; Rashedi, Hamid; Rasekh, Behnam

    2018-05-03

    In this research, we study biosurfactant production by Pseudomonas aeruginosa in the presence of Fe/starch nanoparticles. Fe/starch showed no bacterial toxicity at 1 mg/ml and increased the growth rate and biosurfactant production by up to 23.21% and 20.73%, respectively. Surface tension, cell dry weight, and the emulsification index (E24) were measured. Biosurfactant production was also examined via computational techniques and molecular dynamics (MD) simulation under flexible and periodic conditions (using Materials Studio software). The software predictions are presented as radial distribution function (RDF), density, energy, and temperature graphs. The experimental results show an approximately 30% increase in bacterial growth and subsequent biosurfactant production. The difference between the experimental results and the simulation data was at most 0.17 g/cm³, which supports the software predictions, corresponding to a deviation of <14.5% (against an acceptable error of 20%). Copyright © 2017. Published by Elsevier B.V.

  6. Hydrology of the Bonneville Salt Flats, northwestern Utah, and simulation of ground-water flow and solute transport in the shallow-brine aquifer

    USGS Publications Warehouse

    Mason, James L.; Kipp, Kenneth L.

    1998-01-01

    This report describes the hydrologic system of the Bonneville Salt Flats with emphasis on the mechanisms of solute transport. Variable-density, three-dimensional computer simulations of the near-surface part of the ground-water system were done to quantify both the transport of salt dissolved in subsurface brine that leaves the salt-crust area and the salt dissolved and precipitated on the land surface. The study was designed to define the hydrology of the brine ground-water system and the natural and anthropogenic processes causing salt loss, and where feasible, to quantify these processes. Specific areas of study include the transport of salt in solution by ground-water flow and the transport of salt in solution by wind-driven ponds and the subsequent salt precipitation on the surface of the playa upon evaporation or seepage into the subsurface. In addition, hydraulic and chemical changes in the hydrologic system since previous studies were documented.

  7. Amplified effect of Brownian motion in bacterial near-surface swimming

    PubMed Central

    Li, Guanglai; Tam, Lick-Kong; Tang, Jay X.

    2008-01-01

    Brownian motion influences bacterial swimming by randomizing displacement and direction. Here, we report that the influence of Brownian motion is amplified when it is coupled to hydrodynamic interaction. We examine swimming trajectories of the singly flagellated bacterium Caulobacter crescentus near a glass surface with total internal reflection fluorescence microscopy and observe large fluctuations over time in the distance of the cell from the solid surface caused by Brownian motion. The observation is compared with computer simulation based on analysis of relevant physical factors, including electrostatics, van der Waals force, hydrodynamics, and Brownian motion. The simulation reproduces the experimental findings and reveals contribution from fluctuations of the cell orientation beyond the resolution of present observation. Coupled with hydrodynamic interaction between the bacterium and the boundary surface, the fluctuations in distance and orientation subsequently lead to variation of the swimming speed and local radius of curvature of swimming trajectory. These results shed light on the fundamental roles of Brownian motion in microbial motility, nutrient uptake, and adhesion. PMID:19015518

  8. Introduction and Highlights of the Workshop

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Venneri, Samuel L.

    1997-01-01

    Four generations of CAD/CAM systems can be identified, corresponding to changes in both modeling functionality and software architecture. The systems evolved from 2D and wireframes to solid modeling, to parametric/variational modelers to the current simulation-embedded systems. Recent developments have enabled design engineers to perform many of the complex analysis tasks, typically performed by analysis experts. Some of the characteristics of the current and emerging CAD/CAM/CAE systems are described in subsequent presentations. The focus of the workshop is on the potential of CAD/CAM/CAE systems for use in simulating the entire mission and life-cycle of future aerospace systems, and the needed development to realize this potential. First, the major features of the emerging computing, communication and networking environment are outlined; second, the characteristics and design drivers of future aerospace systems are identified; third, the concept of intelligent synthesis environment being planned by NASA, the UVA ACT Center and JPL is presented; and fourth, the objectives and format of the workshop are outlined.

  9. NBodyLab Simulation Experiments with GRAPE-6a AND MD-GRAPE2 Acceleration

    NASA Astrophysics Data System (ADS)

    Johnson, V.; Ates, A.

    2005-12-01

    NbodyLab is an astrophysical N-body simulation testbed for student research. It is accessible via a web interface and runs as a backend framework under Linux. NbodyLab can generate data models or perform star catalog lookups, transform input data sets, perform direct summation gravitational force calculations using a variety of integration schemes, and produce analysis and visualization output products. NEMO (Teuben 1994), a popular stellar dynamics toolbox, is used for some functions. NbodyLab integrators can optionally utilize two types of low-cost desktop supercomputer accelerators, the newly available GRAPE-6a (125 Gflops peak) and the MD-GRAPE2 (64-128 Gflops peak). The initial version of NBodyLab was presented at ADASS 2002. This paper summarizes software enhancements developed subsequently, focusing on GRAPE-6a related enhancements, and gives examples of computational experiments and astrophysical research, including star cluster and solar system studies, that can be conducted with the new testbed functionality.

  10. A new modelling and identification scheme for time-delay systems with experimental investigation: a relay feedback approach

    NASA Astrophysics Data System (ADS)

    Pandey, Saurabh; Majhi, Somanath; Ghorai, Prasenjit

    2017-07-01

    In this paper, the conventional relay feedback test has been modified for modelling and identification of a class of real-time dynamical systems in terms of linear transfer function models with time-delay. An ideal relay and the unknown system are connected through a negative feedback loop to bring the sustained oscillatory output around the non-zero setpoint. Thereafter, the obtained limit cycle information is substituted into the derived mathematical equations for accurate identification of unknown plants in terms of overdamped, underdamped, critically damped second-order plus dead time and stable first-order plus dead time transfer function models. Typical examples from the literature are included for the validation of the proposed identification scheme through computer simulations. Subsequently, comparisons between the estimated model and the true system are drawn using the integral absolute error criterion and frequency response plots. Finally, the output responses obtained through simulations are verified experimentally on a real-time liquid level control system using a Yokogawa Distributed Control System CENTUM CS3000 setup.
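
    As a rough illustration of how limit-cycle data map to model parameters (using the classical describing-function relations rather than the exact equations derived in the paper), the sketch below recovers a first-order-plus-dead-time model from the relay amplitude, the oscillation amplitude and period, and a separately known steady-state gain. All numbers are invented.

      import numpy as np

      def fopdt_from_relay(h, a, Pu, K):
          """Fit K*exp(-L*s)/(T*s + 1) to relay limit-cycle data.
          h: relay amplitude, a: output oscillation amplitude, Pu: oscillation period,
          K: steady-state gain (assumed known from a separate measurement)."""
          Ku = 4.0 * h / (np.pi * a)                        # ultimate gain, describing-function estimate
          wu = 2.0 * np.pi / Pu                             # oscillation (ultimate) frequency
          T = np.sqrt(max((K * Ku) ** 2 - 1.0, 0.0)) / wu   # match the gain condition |G(jwu)| = 1/Ku
          L = (np.pi - np.arctan(wu * T)) / wu              # match the phase condition arg G(jwu) = -pi
          return T, L

      T, L = fopdt_from_relay(h=1.0, a=0.5, Pu=8.0, K=2.0)
      print(f"estimated time constant T = {T:.2f} s, dead time L = {L:.2f} s")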

  11. Geostatistical Borehole Image-Based Mapping of Karst-Carbonate Aquifer Pores.

    PubMed

    Sukop, Michael C; Cunningham, Kevin J

    2016-03-01

    Quantification of the character and spatial distribution of porosity in carbonate aquifers is important as input into computer models used in the calculation of intrinsic permeability and for next-generation, high-resolution groundwater flow simulations. Digital, optical, borehole-wall image data from three closely spaced boreholes in the karst-carbonate Biscayne aquifer in southeastern Florida are used in geostatistical experiments to assess the capabilities of various methods to create realistic two-dimensional models of vuggy megaporosity and matrix-porosity distribution in the limestone that composes the aquifer. When the borehole image data alone were used as the model training image, multiple-point geostatistics failed to detect the known spatial autocorrelation of vuggy megaporosity and matrix porosity among the three boreholes, which were only 10 m apart. Variogram analysis and subsequent Gaussian simulation produced results that showed a realistic conceptualization of horizontal continuity of strata dominated by vuggy megaporosity and matrix porosity among the three boreholes. © 2015, National Ground Water Association.
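
    The variogram step mentioned above is straightforward to compute from scattered data. Below is a minimal sketch (with synthetic data standing in for the borehole porosity logs) of the experimental semivariogram that would precede Gaussian simulation.

      import numpy as np

      def experimental_variogram(coords, values, bin_edges):
          """Isotropic semivariogram: gamma(h) = 0.5 * mean[(z_i - z_j)^2] over pairs in each lag bin."""
          i, j = np.triu_indices(len(values), k=1)
          lags = np.linalg.norm(coords[i] - coords[j], axis=1)
          half_sq = 0.5 * (values[i] - values[j]) ** 2
          gamma = np.full(len(bin_edges) - 1, np.nan)
          for b in range(len(gamma)):
              in_bin = (lags >= bin_edges[b]) & (lags < bin_edges[b + 1])
              if in_bin.any():
                  gamma[b] = half_sq[in_bin].mean()
          return gamma

      rng = np.random.default_rng(7)
      coords = rng.random((200, 2)) * 10.0                        # synthetic sample locations
      values = np.sin(coords[:, 0]) + 0.1 * rng.normal(size=200)  # spatially correlated toy property
      print(np.round(experimental_variogram(coords, values, np.linspace(0.0, 5.0, 6)), 3))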

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoo, Soohaeng; Xantheas, Sotiris S.

    Water's function as a universal solvent and its role in mediating several biological functions that are responsible for sustaining life has created tremendous interest in the understanding of its structure at the molecular level.1 Due to the size of the simulation cells and the sampling time needed to compute many macroscopic properties, most of the initial simulations are performed using a classical force field whereas several processes that involve chemistry are subsequently probed with electronic structure based methods. A significant effort has therefore been devoted towards the development of classical force fields for water.2 Clusters of water molecules are useful in probing the intermolecular interactions at the microscopic level as well as providing information about the subtle energy differences that are associated with different bonding arrangements within a hydrogen bonded network. They moreover render a quantitative picture of the nature and magnitude of the various components of the intermolecular interactions such as exchange, dispersion, induction etc. They can finally serve as a vehicle for the study of the convergence of properties with increasing size.

  13. Near real-time traffic routing

    NASA Technical Reports Server (NTRS)

    Yang, Chaowei (Inventor); Xie, Jibo (Inventor); Zhou, Bin (Inventor); Cao, Ying (Inventor)

    2012-01-01

    A near real-time physical transportation network routing system comprising: a traffic simulation computing grid and a dynamic traffic routing service computing grid. The traffic simulator produces traffic network travel time predictions for a physical transportation network using a traffic simulation model and common input data. The physical transportation network is divided into multiple sections. Each section has a primary zone and a buffer zone. The traffic simulation computing grid includes multiple traffic simulation computing nodes. The common input data includes static network characteristics, an origin-destination data table, dynamic traffic information data and historical traffic data. The dynamic traffic routing service computing grid includes multiple dynamic traffic routing computing nodes and generates traffic route(s) using the traffic network travel time predictions.

  14. Radiotherapy Monte Carlo simulation using cloud computing technology.

    PubMed

    Poole, C M; Cornelius, I; Trapp, J V; Langton, C M

    2012-12-01

    Cloud computing allows for vast computational resources to be leveraged quickly and easily in bursts as and when required. Here we describe a technique that allows for Monte Carlo radiotherapy dose calculations to be performed using GEANT4 and executed in the cloud, with relative simulation cost and completion time evaluated as a function of machine count. As expected, simulation completion time decreases as 1/n for n parallel machines, and relative simulation cost is found to be optimal where n is a factor of the total simulation time in hours. Using the technique, we demonstrate the potential usefulness of cloud computing as a solution for rapid Monte Carlo simulation for radiotherapy dose calculation without the need for dedicated local computer hardware as a proof of principle.
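
    The cost behavior summarized above follows directly from hourly billing. A small sketch of the arithmetic (with an invented hourly rate and a simplified billing model) shows why splitting a 12-hour Monte Carlo job across a machine count that divides the runtime is cheapest, while other counts pay for partially used billing hours.

      import math

      def wall_time_and_cost(total_hours, n_machines, rate_per_hour=0.10):
          """Perfectly parallel job split over n machines, billed per whole machine-hour."""
          wall = total_hours / n_machines
          cost = n_machines * math.ceil(wall) * rate_per_hour
          return wall, cost

      for n in (1, 2, 3, 4, 5, 6, 8, 12):
          wall, cost = wall_time_and_cost(12.0, n)
          print(f"n = {n:2d}: wall time {wall:5.2f} h, cost ${cost:.2f}")
      # the cost is lowest whenever n is a factor of the 12-hour total, as reported above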

  15. On efficiency of fire simulation realization: parallelization with greater number of computational meshes

    NASA Astrophysics Data System (ADS)

    Valasek, Lukas; Glasa, Jan

    2017-12-01

    Current fire simulation systems are capable of exploiting the advantages of available high-performance computing (HPC) platforms and of modeling fires efficiently in parallel. In this paper, the efficiency of a corridor fire simulation on an HPC cluster is discussed. The parallel MPI version of the Fire Dynamics Simulator is used to test the efficiency of selected strategies for allocating the computational resources of the cluster when a greater number of computational cores is used. Simulation results indicate that if the number of cores used is not a multiple of the number of cores per cluster node, there are allocation strategies that provide more efficient calculations.

  16. Simulating complex intracellular processes using object-oriented computational modelling.

    PubMed

    Johnson, Colin G; Goldman, Jacki P; Gullick, William J

    2004-11-01

    The aim of this paper is to give an overview of computer modelling and simulation in cellular biology, in particular as applied to complex biochemical processes within the cell. This is illustrated by the use of the techniques of object-oriented modelling, where the computer is used to construct abstractions of objects in the domain being modelled, and these objects then interact within the computer to simulate the system and allow emergent properties to be observed. The paper also discusses the role of computer simulation in understanding complexity in biological systems, and the kinds of information which can be obtained about biology via simulation.
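
    A minimal example of the object-oriented style described here (my own toy model, not from the paper): species and reactions are represented as interacting objects, and a depletion curve emerges from repeated local updates rather than from an explicit global equation.

      import random

      class Species:
          """A molecular species tracked by copy number."""
          def __init__(self, name, count):
              self.name, self.count = name, count

      class BindingReaction:
          """A + B -> C, fired stochastically over small time slices (illustrative only)."""
          def __init__(self, a, b, c, rate):
              self.a, self.b, self.c, self.rate = a, b, c, rate
          def step(self, dt):
              propensity = self.rate * self.a.count * self.b.count
              if self.a.count > 0 and self.b.count > 0 and random.random() < propensity * dt:
                  self.a.count -= 1
                  self.b.count -= 1
                  self.c.count += 1

      random.seed(1)
      ligand, receptor, bound = Species("L", 100), Species("R", 50), Species("LR", 0)
      reaction = BindingReaction(ligand, receptor, bound, rate=1e-3)
      for _ in range(20000):
          reaction.step(dt=0.01)
      print(ligand.count, receptor.count, bound.count)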

  17. Quantum chemistry simulation on quantum computers: theories and experiments.

    PubMed

    Lu, Dawei; Xu, Boruo; Xu, Nanyang; Li, Zhaokai; Chen, Hongwei; Peng, Xinhua; Xu, Ruixue; Du, Jiangfeng

    2012-07-14

    It has been claimed that quantum computers can mimic quantum systems efficiently, with only polynomial resources. Traditionally, such simulations are carried out numerically on classical computers, which are inevitably confronted with an exponential growth of required resources as the size of the quantum system increases. Quantum computers avoid this problem, and thus provide a possible solution for large quantum systems. In this paper, we first discuss the ideas of quantum simulation, the background of quantum simulators, their categories, and the development in both theory and experiment. We then present a brief introduction to quantum chemistry evaluated via classical computers, followed by typical procedures of quantum simulation towards quantum chemistry. Reviewed are not only theoretical proposals but also proof-of-principle experimental implementations on a small quantum computer, which include the evaluation of static molecular eigenenergies and the simulation of chemical reaction dynamics. Although the experimental development is still behind the theory, we give prospects and suggestions for future experiments. We anticipate that in the near future quantum simulation will become a powerful tool for quantum chemistry over classical computations.

  18. An analysis of the 70-meter antenna hydrostatic bearing by means of computer simulation

    NASA Technical Reports Server (NTRS)

    Bartos, R. D.

    1993-01-01

    Recently, the computer program 'A Computer Solution for Hydrostatic Bearings with Variable Film Thickness,' used to design the hydrostatic bearing of the 70-meter antennas, was modified to improve the accuracy with which the program predicts the film height profile and oil pressure distribution between the hydrostatic bearing pad and the runner. This article presents a description of the modified computer program, the theory upon which the computer program computations are based, computer simulation results, and a discussion of the computer simulation results.

  19. Comparative Implementation of High Performance Computing for Power System Dynamic Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Shuangshuang; Huang, Zhenyu; Diao, Ruisheng

    Dynamic simulation for transient stability assessment is one of the most important, but intensive, computations for power system planning and operation. Present commercial software is mainly designed for sequential computation to run a single simulation, which is very time consuming with a single processor. The application of High Performance Computing (HPC) to dynamic simulations is very promising in accelerating the computing process by parallelizing its kernel algorithms while maintaining the same level of computation accuracy. This paper describes the comparative implementation of four parallel dynamic simulation schemes in two state-of-the-art HPC environments: Message Passing Interface (MPI) and Open Multi-Processing (OpenMP). These implementations serve to match the application with dedicated multi-processor computing hardware and maximize the utilization and benefits of HPC during the development process.
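
    For readers unfamiliar with the two programming models, a minimal MPI-style domain decomposition can be sketched in Python with mpi4py; the toy generator dynamics and parameter values below are hypothetical stand-ins for the kernel algorithms of a real dynamic simulation, and only illustrate how work is split across ranks and states are exchanged.

        # Sketch of an MPI-parallel dynamic simulation step using mpi4py.
        # Run with, e.g.: mpiexec -n 4 python dyn_sim_sketch.py
        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        n_gen = 16                                      # hypothetical number of generators
        local = np.array_split(np.arange(n_gen), size)[rank]
        angle = np.zeros(len(local))                    # states of this rank's generators
        speed = np.zeros(len(local))
        dt = 0.01

        for step in range(100):
            accel = -0.5 * angle - 0.1 * speed + 1.0    # toy dynamics, not the swing equations
            speed += dt * accel
            angle += dt * speed
            all_angles = np.concatenate(comm.allgather(angle))  # share full state with every rank

        if rank == 0:
            print("final mean angle:", float(all_angles.mean()))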

  20. The effectiveness of interactive computer simulations on college engineering student conceptual understanding and problem-solving ability related to circular motion

    NASA Astrophysics Data System (ADS)

    Chien, Cheng-Chih

    In the past thirty years, the effectiveness of computer-assisted learning has been found to vary across individual studies. Today, with dramatic technical improvements, computers are widespread in schools and used in a variety of ways. In this study, a design model involving educational technology, pedagogy, and content domain is proposed for the effective use of computers in learning. Computer simulation, constructivist and Vygotskian perspectives, and circular motion are the three elements of the specific Chain Model for instructional design. The goal of the physics course is to help students discard ideas that are not consistent with those of the physics community and rebuild new knowledge. To achieve the learning goal, the strategies of using conceptual conflicts and of using language to internalize specific tasks into mental functions were included. Computer simulations and accompanying worksheets were used to help students explore their own ideas and to generate questions for discussion. Using animated images to describe the dynamic processes involved in circular motion may reduce the complexity and possible miscommunications resulting from verbal explanations. The effectiveness of the instructional material on student learning is evaluated. The results of the problem-solving activities show that students using computer simulations had significantly higher scores than students not using computer simulations. For conceptual understanding, on the pretest students in the non-simulation group had significantly higher scores than students in the simulation group; there was no significant difference observed between the two groups on the posttest. The relations of gender, prior physics experience, and frequency of computer use outside the course to student achievement were also studied. There were fewer female students than male students, and fewer students using computer simulations than students not using them; these characteristics affect the statistical power for detecting differences. For future research, further simulation interventions may be introduced to explore the potential of computer simulations in helping students learn. A test of conceptual understanding with more problems and an appropriate difficulty level may be needed.

  1. Future trends in computer waste generation in India.

    PubMed

    Dwivedy, Maheshwar; Mittal, R K

    2010-11-01

    The objective of this paper is to estimate the future projection of computer waste in India and to subsequently analyze its flow at the end of the useful phase. For this purpose, the study utilizes the logistic model-based approach proposed by Yang and Williams to forecast future trends in computer waste. The model estimates the future projection of the computer penetration rate utilizing the first lifespan distribution and historical sales data. A bounding analysis of the future carrying capacity was simulated using the three-parameter logistic curve. The obsolete generation quantities observed from the extrapolated penetration rates are then used to model the disposal phase. The results of the bounding analysis indicate that in the year 2020, around 41-152 million units of computers will become obsolete. The obsolete computer generation quantities are then used to estimate the End-of-Life outflows by utilizing a time-series multiple lifespan model. Even a conservative estimate of the future recycling capacity of PCs reaches upwards of 30 million units during 2025; apparently, more than 150 million units could potentially be recycled in the upper bound case. However, considering significant future investment in the e-waste recycling sector from all stakeholders in India, we propose a logistic growth in the recycling rate and estimate the required recycling capacity at between 60 and 400 million units for the lower and upper bound cases during 2025. Finally, we compare the future obsolete PC generation amounts of the US and India. Copyright © 2010 Elsevier Ltd. All rights reserved.
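
    The forecasting approach rests on a three-parameter logistic curve for the penetration rate. A minimal sketch of such a curve is given below; the carrying capacity K, growth rate r, and midpoint year t0 are hypothetical values chosen for illustration, not the parameters fitted in the study.

        # Hypothetical three-parameter logistic penetration curve.
        import numpy as np

        def logistic(t, K, r, t0):
            """P(t) = K / (1 + exp(-r * (t - t0)))."""
            return K / (1.0 + np.exp(-r * (t - t0)))

        years = np.arange(2000, 2031)
        penetration = logistic(years, K=150.0, r=0.25, t0=2015.0)  # assumed units: million units

        for year, value in zip(years[::5], penetration[::5]):
            print(year, round(float(value), 1))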

  2. SU-E-T-25: Real Time Simulator for Designing Electron Dual Scattering Foil Systems.

    PubMed

    Carver, R; Hogstrom, K; Price, M; Leblanc, J; Harris, G

    2012-06-01

    To create a user-friendly, accurate, real-time computer simulator to facilitate the design of dual foil scattering systems for electron beams on radiotherapy accelerators. The simulator should allow for a relatively quick initial design that can be refined and verified with subsequent Monte Carlo (MC) calculations and measurements. The simulator consists of an analytical algorithm for calculating electron fluence and a graphical user interface (GUI) C++ program. The algorithm predicts electron fluence using Fermi-Eyges multiple Coulomb scattering theory with a refined Moliere formalism for scattering powers. The simulator also estimates central-axis x-ray dose contamination from the dual foil system. Once the geometry of the beamline is specified, the simulator allows the user to continuously vary the primary scattering foil material and thickness, the secondary scattering foil material and Gaussian shape (thickness and sigma), and the beam energy. The beam profile and x-ray contamination are displayed in real time. The simulator was tuned by comparison of off-axis electron fluence profiles with those calculated using EGSnrc MC. Over the energy range 7-20 MeV and using present foils on the Elekta radiotherapy accelerator, the simulator profiles agreed to within 2% of MC profiles within 20 cm of the central axis. The x-ray contamination predictions matched measured data to within 0.6%. The calculation time was approximately 100 ms using a single processor, which allows for real-time variation of foil parameters using sliding bars. A real-time dual scattering foil system simulator has been developed. The tool has been useful in a project to redesign an electron dual scattering foil system for one of our radiotherapy accelerators. The simulator has also been useful as an instructional tool for our medical physics graduate students. © 2012 American Association of Physicists in Medicine.
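
    As a very rough illustration of the kind of calculation such a simulator performs, a single flat foil produces, to first approximation, a Gaussian off-axis fluence whose width is set by the characteristic scattering angle and the drift distance. The sketch below evaluates that profile for hypothetical values; it is far simpler than the refined Moliere/Fermi-Eyges treatment used by the simulator and ignores the shaped secondary foil entirely.

        # First-order sketch: Gaussian off-axis fluence downstream of a single
        # scattering foil (small-angle approximation). All values are assumed.
        import numpy as np

        theta0 = 0.05         # characteristic scattering angle in radians (assumed)
        z = 100.0             # foil-to-measurement-plane distance in cm (assumed)
        sigma_r = theta0 * z  # lateral spread of the fluence at the plane

        r = np.linspace(-30.0, 30.0, 13)              # off-axis distance, cm
        fluence = np.exp(-r**2 / (2.0 * sigma_r**2))  # relative fluence, peak normalized to 1

        for ri, fi in zip(r, fluence):
            print(f"r = {ri:6.1f} cm   relative fluence = {fi:.3f}")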

  3. Subresolution Displacements in Finite Difference Simulations of Ultrasound Propagation and Imaging.

    PubMed

    Pinton, Gianmarco F

    2017-03-01

    Time domain finite difference simulations are used extensively to simulate wave propagation. They approximate the wave field on a discrete domain with a grid spacing that is typically on the order of a tenth of a wavelength. The smallest displacements that can be modeled by this type of simulation are thus limited to discrete values that are integer multiples of the grid spacing. This paper presents a method to represent continuous and subresolution displacements by varying the impedance of individual elements in a multielement scatterer. It is demonstrated that this method removes the limitations imposed by the discrete grid spacing by generating a continuum of displacements as measured by the backscattered signal. The method is first validated on an ideal perfect correlation case with a single scatterer. It is subsequently applied to a more complex case with a field of scatterers that model an acoustic radiation force-induced displacement used in ultrasound elasticity imaging. A custom finite difference simulation tool is used to simulate propagation from ultrasound imaging pulses in the scatterer field. These simulated transmit-receive events are then beamformed into images, which are tracked with a correlation-based algorithm to determine the displacement. A linear predictive model is developed to analytically describe the relationship between element impedance and backscattered phase shift. The error between model and simulation is λ/1364, where λ is the acoustical wavelength. An iterative method is also presented that reduces the simulation error to λ/5556 over one iteration. The proposed technique therefore offers a computationally efficient method to model continuous subresolution displacements of a scattering medium in ultrasound imaging. This method has applications that include ultrasound elastography, blood flow, and motion tracking. This method also extends generally to finite difference simulations of wave propagation, such as electromagnetic or seismic waves.

  4. An investigation of automatic exposure control calibration for chest imaging with a computed radiography system.

    PubMed

    Moore, C S; Wood, T J; Avery, G; Balcam, S; Needler, L; Beavis, A W; Saunderson, J R

    2014-05-07

    The purpose of this study was to examine the use of three physical image quality metrics in the calibration of an automatic exposure control (AEC) device for chest radiography with a computed radiography (CR) imaging system. The metrics assessed were signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and mean effective noise equivalent quanta (eNEQm), all measured using a uniform chest phantom. Subsequent calibration curves were derived to ensure each metric was held constant across the tube voltage range. Each curve was assessed for its clinical appropriateness by generating computer simulated chest images with correct detector air kermas for each tube voltage, and grading these against reference images which were reconstructed at detector air kermas correct for the constant detector dose indicator (DDI) curve currently programmed into the AEC device. All simulated chest images contained clinically realistic projected anatomy and anatomical noise and were scored by experienced image evaluators. Constant DDI and CNR curves do not appear to provide optimized performance across the diagnostic energy range. Conversely, constant eNEQm and SNR do appear to provide optimized performance, with the latter being the preferred calibration metric given that it is easier to measure in practice. Medical physicists may use the SNR image quality metric described here when setting up and optimizing AEC devices for chest radiography CR systems with a degree of confidence that the resulting clinical image quality will be adequate for the required clinical task. However, this must be done with close cooperation of expert image evaluators, to ensure appropriate levels of detector air kerma.
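
    For readers unfamiliar with the first two metrics, SNR and CNR can be estimated from regions of interest (ROIs) in a phantom image. The sketch below uses a synthetic image with an added contrast insert; the ROI placement, noise level, and insert contrast are assumptions for illustration, not details of the study.

        # Minimal sketch: estimating SNR and CNR from ROIs in a synthetic image.
        import numpy as np

        rng = np.random.default_rng(0)
        image = rng.normal(loc=1000.0, scale=20.0, size=(256, 256))  # uniform background
        image[100:140, 100:140] += 50.0                              # hypothetical contrast insert

        background = image[10:50, 10:50]
        insert = image[105:135, 105:135]

        snr = background.mean() / background.std()
        cnr = (insert.mean() - background.mean()) / background.std()
        print(f"SNR = {snr:.1f}, CNR = {cnr:.1f}")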

  5. An investigation of automatic exposure control calibration for chest imaging with a computed radiography system

    NASA Astrophysics Data System (ADS)

    Moore, C. S.; Wood, T. J.; Avery, G.; Balcam, S.; Needler, L.; Beavis, A. W.; Saunderson, J. R.

    2014-05-01

    The purpose of this study was to examine the use of three physical image quality metrics in the calibration of an automatic exposure control (AEC) device for chest radiography with a computed radiography (CR) imaging system. The metrics assessed were signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and mean effective noise equivalent quanta (eNEQm), all measured using a uniform chest phantom. Subsequent calibration curves were derived to ensure each metric was held constant across the tube voltage range. Each curve was assessed for its clinical appropriateness by generating computer simulated chest images with correct detector air kermas for each tube voltage, and grading these against reference images which were reconstructed at detector air kermas correct for the constant detector dose indicator (DDI) curve currently programmed into the AEC device. All simulated chest images contained clinically realistic projected anatomy and anatomical noise and were scored by experienced image evaluators. Constant DDI and CNR curves do not appear to provide optimized performance across the diagnostic energy range. Conversely, constant eNEQm and SNR do appear to provide optimized performance, with the latter being the preferred calibration metric given that it is easier to measure in practice. Medical physicists may use the SNR image quality metric described here when setting up and optimizing AEC devices for chest radiography CR systems with a degree of confidence that the resulting clinical image quality will be adequate for the required clinical task. However, this must be done with close cooperation of expert image evaluators, to ensure appropriate levels of detector air kerma.

  6. Paper simulation techniques in user requirements analysis for interactive computer systems

    NASA Technical Reports Server (NTRS)

    Ramsey, H. R.; Atwood, M. E.; Willoughby, J. K.

    1979-01-01

    This paper describes the use of a technique called 'paper simulation' in the analysis of user requirements for interactive computer systems. In a paper simulation, the user solves problems with the aid of a 'computer', as in normal man-in-the-loop simulation. In this procedure, though, the computer does not exist but is simulated by the experimenters. This allows simulated problem solving early in the design effort, and allows the properties and degree of structure of the system and its dialogue to be varied. The technique, and a method of analyzing the results, are illustrated with examples from a recent paper simulation exercise involving a Space Shuttle flight design task.

  7. CFD on hypersonic flow geometries with aeroheating

    NASA Astrophysics Data System (ADS)

    Sohail, Muhammad Amjad; Chao, Yan; Hui, Zhang Hui; Ullah, Rizwan

    2012-11-01

    The hypersonic flowfield around a blunted cone and cone-flare exhibits some of the major features of the flows around space vehicles, e.g. a detached bow shock in the stagnation region and the oblique shock wave/boundary layer interaction at the cone-flare junction. The shock wave/boundary layer interaction can produce a region of separated flow. This phenomenon may occur, for example, at the upstream-facing corner formed by a deflected control surface on a hypersonic entry vehicle, where the length of separation has implications for control effectiveness. Computational fluid dynamics results are presented to show the flowfield around blunted cone and cone-flare configurations in hypersonic flow with separation. This problem is of particular interest since it features most of the aspects of the hypersonic flow around planetary entry vehicles. The region between the cone and the flare is particularly critical with respect to the evaluation of the surface pressure and heat flux with aeroheating. Indeed, flow separation is induced by the shock wave/boundary layer interaction, with subsequent flow reattachment, which can dramatically enhance the surface heat transfer. The exact determination of the extent of the recirculation zone is a particularly delicate task for numerical codes. Laminar and turbulent flow computations have been carried out using a full Navier-Stokes solver, with freestream conditions provided by experimental data obtained in wind tunnel tests at Mach 6, 8, and 16.34. The numerical results are compared with the pressure and surface heat flux distributions measured in the wind tunnel, and good agreement is found, especially for the length of the recirculation region and the location of the shock waves. The critical physics of the entropy layer, boundary layers, shock wave/boundary layer interaction, and the flow behind the shock are properly captured and elaborated. Hypersonic flows are characterized by high Mach number and high total enthalpy. An elevated temperature often results in thermo-chemical reactions in the gas, which play a major role in the aerothermodynamic characterization of high-speed aerospace vehicles. Computational simulation of such flows therefore needs to account for a range of physical phenomena. Further, the numerical challenges involved in resolving strong gradients and discontinuities add to the complexity of computational fluid dynamics (CFD) simulation. In this article, physical modeling and numerical methodology-related issues involved in hypersonic flow simulation are highlighted. State-of-the-art CFD challenges are discussed in the context of many prominent applications of hypersonic flows. In the first part of the paper, the hypersonic flow is simulated and aerodynamic characteristics are calculated. Then aeroheating with chemical reactions is added to the simulations, and in the final part heat transfer with turbulence modeling is simulated. Results are compared with available data.

  8. SiSeRHMap v1.0: a simulator for mapped seismic response using a hybrid model

    NASA Astrophysics Data System (ADS)

    Grelle, G.; Bonito, L.; Lampasi, A.; Revellino, P.; Guerriero, L.; Sappa, G.; Guadagno, F. M.

    2015-06-01

    SiSeRHMap is a computerized methodology capable of drawing up prediction maps of seismic response. It was realized on the basis of a hybrid model which combines different approaches and models in a new and non-conventional way. These approaches and models are organized in a code architecture composed of five interdependent modules. A GIS (Geographic Information System) Cubic Model (GCM), which is a layered computational structure based on the concept of lithodynamic units and zones, aims at reproducing a parameterized layered subsoil model. A metamodeling process confers a hybrid nature on the methodology. In this process, one-dimensional linear equivalent analysis produces acceleration response spectra for shear wave velocity-thickness profiles, defined as trainers, which are randomly selected in each zone. Subsequently, a numerical adaptive simulation model (Spectra) is optimized on the above trainer acceleration response spectra by means of a dedicated Evolutionary Algorithm (EA) and the Levenberg-Marquardt Algorithm (LMA) as the final optimizer. In the final step, the GCM Maps Executor module produces a serial map-set of stratigraphic seismic response at different periods, grid-solving the calibrated Spectra model. In addition, the spectral topographic amplification is also computed by means of a numerical prediction model. The latter is built to match the results of numerical simulations related to isolated reliefs, using GIS topographic attributes. In this way, different sets of seismic response maps are developed, on which maps of seismic design response spectra are also defined by means of an enveloping technique.

  9. Advanced computational techniques for incompressible/compressible fluid-structure interactions

    NASA Astrophysics Data System (ADS)

    Kumar, Vinod

    2005-07-01

    Fluid-Structure Interaction (FSI) problems are of great importance to many fields of engineering and pose tremendous challenges to numerical analysts. This thesis addresses some of the hurdles faced for both 2D and 3D real-life time-dependent FSI problems, with particular emphasis on parachute systems. The techniques developed here would help improve the design of parachutes and are of direct relevance to several other FSI problems. The fluid system is solved using the Deforming-Spatial-Domain/Stabilized Space-Time (DSD/SST) finite element formulation for the Navier-Stokes equations of incompressible and compressible flows. The structural dynamics solver is based on a total Lagrangian finite element formulation. The Newton-Raphson method is employed to linearize the otherwise nonlinear system resulting from the fluid and structure formulations. The fluid and structural systems are solved in a decoupled fashion at each nonlinear iteration. While rigorous coupling methods are desirable for FSI simulations, the decoupled solution techniques provide sufficient convergence in the time-dependent problems considered here. In this thesis, common problems in the FSI simulation of parachutes are discussed and possible remedies for a few of them are presented. Further, the effects of the porosity model on the aerodynamic forces of round parachutes are analyzed. Techniques for solving compressible FSI problems are also discussed. Subsequently, a better stabilization technique is proposed to efficiently capture and accurately predict the shocks in supersonic flows. The numerical examples simulated here require high-performance computing. Therefore, numerical tools using distributed memory supercomputers with message passing interface (MPI) libraries were developed.
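
    The decoupled strategy described above alternates between the fluid and structural solvers within each nonlinear iteration. The sketch below shows that control flow only; the single-variable "solvers" are placeholders for the DSD/SST fluid solver and the total Lagrangian structural solver, and the coefficients are arbitrary.

        # Control-flow sketch of a partitioned (decoupled) FSI iteration.
        # The one-variable "solvers" are placeholders, not the thesis's solvers.

        def solve_fluid(displacement, t):
            # Placeholder: fluid load on the structure given its current displacement.
            return 1.0 + 0.1 * t - 0.3 * displacement

        def solve_structure(load):
            # Placeholder: structural displacement produced by the fluid load.
            return 0.5 * load

        displacement = 0.0
        for time_step in range(5):
            for iteration in range(50):                 # nonlinear iterations per time step
                load = solve_fluid(displacement, time_step)
                new_displacement = solve_structure(load)
                if abs(new_displacement - displacement) < 1e-10:
                    displacement = new_displacement
                    break
                displacement = new_displacement
            print(f"step {time_step}: displacement = {displacement:.6f}")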

  10. Spectral-element simulations of wave propagation in complex exploration-industry models: Mesh generation and forward simulations

    NASA Astrophysics Data System (ADS)

    Nissen-Meyer, T.; Luo, Y.; Morency, C.; Tromp, J.

    2008-12-01

    Seismic-wave propagation in exploration-industry settings has seen major research and development efforts for decades, yet large-scale applications have often been limited to 2D or 3D finite-difference, (visco-)acoustic wave propagation due to computational limitations. We explore the possibility of including all relevant physical signatures in the wavefield using the spectral-element method (SPECFEM3D, SPECFEM2D), thereby accounting for acoustic, (visco-)elastic, poroelastic, anisotropic wave propagation in meshes which honor all crucial discontinuities. Mesh design is the crux of the problem, and we use CUBIT (Sandia Laboratories) to generate unstructured quadrilateral 2D and hexahedral 3D meshes for these complex background models. While general hexahedral mesh generation is an unresolved problem, we are able to accommodate most of the relevant settings (e.g., layer-cake models, salt bodies, overthrusting faults, and strong topography) with respectively tailored workflows. 2D simulations show localized, characteristic wave effects due to these features that shall be helpful in designing survey acquisition geometries in a relatively economic fashion. We address some of the fundamental issues this comprehensive modeling approach faces regarding its feasibility: assessing geological structures in terms of the necessity to honor the major structural units, appropriate velocity model interpolation, quality control of the resultant mesh, and computational cost for realistic settings up to frequencies of 40 Hz. The solution to this forward problem forms the basis for subsequent 2D and 3D adjoint tomography within this context, which is the subject of a companion paper.

  11. Molecular determinants for the thermodynamic and functional divergence of uniporter GLUT1 and proton symporter XylE

    PubMed Central

    Ke, Meng; Jiang, Xin; Yan, Nieng

    2017-01-01

    GLUT1 facilitates the down-gradient translocation of D-glucose across cell membrane in mammals. XylE, an Escherichia coli homolog of GLUT1, utilizes proton gradient as an energy source to drive uphill D-xylose transport. Previous studies of XylE and GLUT1 suggest that the variation between an acidic residue (Asp27 in XylE) and a neutral one (Asn29 in GLUT1) is a key element for their mechanistic divergence. In this work, we combined computational and biochemical approaches to investigate the mechanism of proton coupling by XylE and the functional divergence between GLUT1 and XylE. Using molecular dynamics simulations, we evaluated the free energy profiles of the transition between inward- and outward-facing conformations for the apo proteins. Our results revealed the correlation between the protonation state and conformational preference in XylE, which is supported by the crystal structures. In addition, our simulations suggested a thermodynamic difference between XylE and GLUT1 that cannot be explained by the single residue variation at the protonation site. To understand the molecular basis, we applied Bayesian network models to analyze the alteration in the architecture of the hydrogen bond networks during conformational transition. The models and subsequent experimental validation suggest that multiple residue substitutions are required to produce the thermodynamic and functional distinction between XylE and GLUT1. Despite the lack of simulation studies with substrates, these computational and biochemical characterizations provide unprecedented insight into the mechanistic difference between proton symporters and uniporters. PMID:28617850

  12. Acceleration of the matrix multiplication of Radiance three phase daylighting simulations with parallel computing on heterogeneous hardware of personal computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zuo, Wangda; McNeil, Andrew; Wetter, Michael

    2013-05-23

    Building designers are increasingly relying on complex fenestration systems to reduce energy consumed for lighting and HVAC in low energy buildings. Radiance, a lighting simulation program, has been used to conduct daylighting simulations for complex fenestration systems. Depending on the configuration, the simulation can take hours or even days using a personal computer. This paper describes how to accelerate the matrix multiplication portion of a Radiance three-phase daylight simulation by conducting parallel computing on the heterogeneous hardware of a personal computer. The algorithm was optimized and the computational part was implemented in parallel using OpenCL. The speed of the new approach was evaluated using various daylighting simulation cases on a multicore central processing unit and a graphics processing unit. Based on the measurements and analysis of the time usage for the Radiance daylighting simulation, further speedups can be achieved by using fast I/O devices and storing the data in a binary format.
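
    In the three-phase method the result is obtained by chaining matrix products: a view matrix relating sensor points to the fenestration, a transmission (BSDF) matrix for the fenestration, a daylight matrix relating the fenestration to sky patches, and a sky vector per time step. The sketch below uses random matrices with hypothetical dimensions purely to illustrate the matrix-multiplication workload that the paper parallelizes with OpenCL; it is not the Radiance implementation.

        # Illustration of the three-phase matrix-multiplication workload:
        # result = V @ T @ D @ S. All matrix dimensions below are assumed.
        import numpy as np

        rng = np.random.default_rng(1)
        n_sensors, n_patches, n_sky, n_steps = 1000, 145, 2305, 100

        V = rng.random((n_sensors, n_patches))  # view matrix: sensors <- fenestration
        T = rng.random((n_patches, n_patches))  # transmission (BSDF) matrix of the fenestration
        D = rng.random((n_patches, n_sky))      # daylight matrix: fenestration <- sky patches
        S = rng.random((n_sky, n_steps))        # sky vectors, one column per time step

        illuminance = V @ (T @ (D @ S))         # grouping keeps intermediate products small
        print(illuminance.shape)                # (n_sensors, n_steps)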

  13. Computational composite mechanics for aerospace propulsion structures

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.

    1986-01-01

    Specialty methods are presented for the computational simulation of specific composite behavior. These methods encompass all aspects of composite mechanics, impact, progressive fracture and component specific simulation. Some of these methods are structured to computationally simulate, in parallel, the composite behavior and history from the initial fabrication through several missions and even to fracture. Select methods and typical results obtained from such simulations are described in detail in order to demonstrate the effectiveness of computationally simulating (1) complex composite structural behavior in general and (2) specific aerospace propulsion structural components in particular.

  14. Computational composite mechanics for aerospace propulsion structures

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    1987-01-01

    Specialty methods are presented for the computational simulation of specific composite behavior. These methods encompass all aspects of composite mechanics, impact, progressive fracture and component specific simulation. Some of these methods are structured to computationally simulate, in parallel, the composite behavior and history from the initial fabrication through several missions and even to fracture. Select methods and typical results obtained from such simulations are described in detail in order to demonstrate the effectiveness of computationally simulating: (1) complex composite structural behavior in general, and (2) specific aerospace propulsion structural components in particular.

  15. Functional requirements for design of the Space Ultrareliable Modular Computer (SUMC) system simulator

    NASA Technical Reports Server (NTRS)

    Curran, R. T.; Hornfeck, W. A.

    1972-01-01

    The functional requirements for the design of an interpretive simulator for the space ultrareliable modular computer (SUMC) are presented. A review of applicable existing computer simulations is included along with constraints on the SUMC simulator functional design. Input requirements, output requirements, and language requirements for the simulator are discussed in terms of a SUMC configuration which may vary according to the application.

  16. MAGIC Computer Simulation. Volume 2: Analyst Manual, Part 1

    DTIC Science & Technology

    1971-05-01

    A review of the subject MAGIC Computer Simulation User and Analyst Manuals has been conducted based upon a request received from the US Army. The MAGIC computer simulation generates target description data consisting of item-by-item listings of the target's components and air…

  17. Computational simulation of progressive fracture in fiber composites

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.

    1986-01-01

    Computational methods for simulating and predicting progressive fracture in fiber composite structures are presented. These methods are integrated into a computer code of modular form. The modules include composite mechanics, finite element analysis, and fracture criteria. The code is used to computationally simulate progressive fracture in composite laminates with and without defects. The simulation tracks the fracture progression in terms of modes initiating fracture, damage growth, and imminent global (catastrophic) laminate fracture.

  18. Polarizable molecular mechanics studies of Cu(I)/Zn(II) superoxide dismutase: bimetallic binding site and structured waters.

    PubMed

    Gresh, Nohad; El Hage, Krystel; Perahia, David; Piquemal, Jean-Philip; Berthomieu, Catherine; Berthomieu, Dorothée

    2014-11-05

    The existence of a network of structured waters in the vicinity of the bimetallic site of Cu/Zn-superoxide dismutase (SOD) has been inferred from high-resolution X-ray crystallography. Long-duration molecular dynamics (MD) simulations could enable quantification of the lifetimes and possible interchanges of these waters between themselves as well as with a ligand diffusing toward the bimetallic site. The presence of several charged or polar ligands makes it necessary to resort to second-generation polarizable potentials. As a first step toward such simulations, we benchmark in this article the accuracy of one such potential, Sum of Interactions Between Fragments Ab initio computed (SIBFA), by comparisons with quantum mechanics (QM) computations. We first consider the bimetallic binding site of a Cu/Zn-SOD, in which three histidines and a water molecule are bound to Cu(I) and three histidines and one aspartate are bound to Zn(II). The comparisons are made for different His6 complexes with either one or both cations, and either with or without Asp and water. The total net charges vary from zero to three. We subsequently perform preliminary short-duration MD simulations of 296 waters solvating Cu/Zn-SOD. Six representative geometries are selected and energy-minimized. Single-point SIBFA and QM computations are then performed in parallel on model binding sites extracted from these six structures, each of which totals 301 atoms including the 28 waters closest to the Cu metal site. The ranking of their relative stabilities as given by SIBFA is identical to the QM one, and the relative energy differences from both approaches are fully consistent. In addition, the lowest-energy structure, from both SIBFA and QM, has a close overlap with the crystallographic one. The SIBFA calculations enable quantification of the impact of polarization and charge transfer on the ranking of the six structures. Five structural waters, which connect Arg141 and Glu131, are endowed with very high dipole moments (2.7-3.0 Debye), equal to or larger than the one computed by SIBFA in ice-like arrangements (2.7 D). Copyright © 2014 Wiley Periodicals, Inc.

  19. Using Microcomputers Simulations in the Classroom: Examples from Undergraduate and Faculty Computer Literacy Courses.

    ERIC Educational Resources Information Center

    Hart, Jeffrey A.

    1985-01-01

    Presents a discussion of how computer simulations are used in two undergraduate social science courses and a faculty computer literacy course on simulations and artificial intelligence. Includes a list of 60 simulations for use on mainframes and microcomputers. Entries include type of hardware required, publisher's address, and cost. Sample…

  20. Learning Oceanography from a Computer Simulation Compared with Direct Experience at Sea

    ERIC Educational Resources Information Center

    Winn, William; Stahr, Frederick; Sarason, Christian; Fruland, Ruth; Oppenheimer, Peter; Lee, Yen-Ling

    2006-01-01

    Considerable research has compared how students learn science from computer simulations with how they learn from "traditional" classes. Little research has compared how students learn science from computer simulations with how they learn from direct experience in the real environment on which the simulations are based. This study compared two…

  1. Effect of Computer Simulations at the Particulate and Macroscopic Levels on Students' Understanding of the Particulate Nature of Matter

    ERIC Educational Resources Information Center

    Tang, Hui; Abraham, Michael R.

    2016-01-01

    Computer-based simulations can help students visualize chemical representations and understand chemistry concepts, but simulations at different levels of representation may vary in effectiveness on student learning. This study investigated the influence of computer activities that simulate chemical reactions at different levels of representation…

  2. Fluid Aspects of Solar Wind Disturbances Driven by Coronal Mass Ejections. Appendix 3

    NASA Technical Reports Server (NTRS)

    Gosling, J. T.; Riley, Pete

    2001-01-01

    Transient disturbances in the solar wind initiated by coronal eruptions have been modeled for many years, beginning with the self-similar analytical models of Parker and Simon and Axford. The first numerical computer code (one-dimensional, gas dynamic) to study disturbance propagation in the solar wind was developed in the late 1960s, and a variety of other codes ranging from simple one-dimensional gas dynamic codes through three-dimensional gas dynamic and magnetohydrodynamic codes have been developed in subsequent years. For the most part, these codes have been applied to the problem of disturbances driven by fast CMEs propagating into a structureless solar wind. Pizzo provided an excellent summary of the level of understanding achieved from such simulation studies through about 1984, and other reviews have subsequently become available. More recently, some attention has been focused on disturbances generated by slow CMEs, on disturbances driven by CMEs having high internal pressures, and disturbance propagation effects associated with a structured ambient solar wind. Our purpose here is to provide a brief tutorial on fluid aspects of solar wind disturbances derived from numerical gas dynamic simulations. For the most part we illustrate disturbance evolution by propagating idealized perturbations, mimicking different types of CMEs, into a structureless solar wind using a simple one-dimensional, adiabatic (except at shocks), gas dynamic code. The simulations begin outside the critical point where the solar wind becomes supersonic and thus do not address questions of how the CMEs themselves are initiated. Limited to one dimension (the radial direction), the simulation code predicts too strong an interaction between newly ejected solar material and the ambient wind because it neglects azimuthal and meridional motions of the plasma that help relieve pressure stresses. Moreover, the code ignores magnetic forces and thus also underestimates the speed with which pressure disturbances propagate in the wind.

  3. Numerical simulation of water flow and Nitrate transport through variably saturated porous media in laboratory condition using HYDRUS 2D

    NASA Astrophysics Data System (ADS)

    Jahangeer, F.; Gupta, P. K.; Yadav, B. K.

    2017-12-01

    Due to the reducing availability of water resources and the growing competition for water between residential, industrial, and agricultural users, increasing irrigation efficiency, for example by drip irrigation, is a pressing concern for agricultural experts. An understanding of water and contaminant flow through the subsurface is needed for sustainable irrigation water management, pollution assessment, polluted site remediation and groundwater recharge. In this study, the Windows-based computer software package HYDRUS-2D, which numerically simulates water and solute movement in two-dimensional, variably saturated porous media, was used to evaluate the distribution of water and nitrate in a sand tank. Laboratory and simulation experiments were conducted to evaluate the role of drainage, recharge flux, and infiltration on subsurface flow conditions and, subsequently, on nitrate movement in the subsurface. Water flow in the unsaturated zone was modeled by the Richards equation, which is highly nonlinear and whose parameters depend strongly on the moisture content and pressure head of the partially saturated zone. Three cases were considered: (a) applying drainage and recharge fluxes to the study domains, (b) transient infiltration in a vertical soil column, and (c) nitrate transport in the 2D sand tank setup. A single-porosity model was used for the simulation of water and nitrate flow in the study domain. The results indicate that the transient water table position falls significantly over time when a drainage flux is applied at the bottom; similarly, the water table position in the study domains rises when a recharge flux is applied. Likewise, the water flow profile shows decreasing water table elevation with increasing water content in the vertical domain. Moreover, nitrate movement was dominated by advective flux and was strongly affected by the recharge flux in the vertical direction. The findings of the study help to enhance the understanding needed for sustainable soil-water resources management and agricultural practices.
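
    For reference, the governing equation mentioned above can be written, in the mixed form commonly used for variably saturated flow in a vertical profile with a sink term (generic notation, not copied from the study), as

        \frac{\partial \theta(h)}{\partial t} = \frac{\partial}{\partial z}\left[ K(h)\left( \frac{\partial h}{\partial z} + 1 \right) \right] - S(h)

    where θ is the volumetric water content, h the pressure head, K(h) the unsaturated hydraulic conductivity, z the vertical coordinate taken positive upward, and S a sink term such as root water uptake.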

  4. Computer-based simulation training in emergency medicine designed in the light of malpractice cases.

    PubMed

    Karakuş, Akan; Duran, Latif; Yavuz, Yücel; Altintop, Levent; Calişkan, Fatih

    2014-07-27

    Using computer-based simulation systems in medical education is becoming more and more common. Although the benefits of practicing with these systems in medical education have been demonstrated, the advantages of using computer-based simulation in emergency medicine education are less well validated. The aim of the present study was to assess the success rates of final-year medical students in delivering emergency medical treatment and to evaluate the effectiveness of computer-based simulation training in improving final-year medical students' knowledge. Twenty-four students trained with computer-based simulation and completed at least 4 hours of simulation-based education between Feb 1, 2010 and May 1, 2010. A control group (traditionally trained, n = 24) was also chosen. After the end of training, students completed an examination on 5 randomized medical simulation cases. Across the 5 cases, students trained with computer-based simulation carried out an average of 3.9 correct medical approaches, compared with an average of 2.8 for the traditionally trained group (t = 3.90, p < 0.005). We found that the success of students trained with simulation training in cases which required a complicated medical approach was statistically higher than that of students who did not take simulation training (p ≤ 0.05). Computer-based simulation training would be significantly effective in the learning of medical treatment algorithms. We think that these programs can improve the success rate of students, especially in applying an adequate medical approach to complex emergency cases.

  5. Modeling of Commercial Turbofan Engine With Ice Crystal Ingestion: Follow-On

    NASA Technical Reports Server (NTRS)

    Jorgenson, Philip C. E.; Veres, Joseph P.; Coennen, Ryan

    2014-01-01

    The occurrence of ice accretion within commercial high bypass aircraft turbine engines has been reported under certain atmospheric conditions. Engine anomalies have taken place at high altitudes that have been attributed to ice crystal ingestion, partially melting, and ice accretion on the compression system components. The result was degraded engine performance, and one or more of the following: loss of thrust control (roll back), compressor surge or stall, and flameout of the combustor. As ice crystals are ingested into the fan and low pressure compression system, the increase in air temperature causes a portion of the ice crystals to melt. It is hypothesized that this allows the ice-water mixture to cover the metal surfaces of the compressor stationary components which leads to ice accretion through evaporative cooling. Ice accretion causes a blockage which subsequently results in the deterioration in performance of the compressor and engine. The focus of this research is to apply an engine icing computational tool to simulate the flow through a turbofan engine and assess the risk of ice accretion. The tool is comprised of an engine system thermodynamic cycle code, a compressor flow analysis code, and an ice particle melt code that has the capability of determining the rate of sublimation, melting, and evaporation through the compressor flow path, without modeling the actual ice accretion. A commercial turbofan engine which has previously experienced icing events during operation in a high altitude ice crystal environment has been tested in the Propulsion Systems Laboratory (PSL) altitude test facility at NASA Glenn Research Center. The PSL has the capability to produce a continuous ice cloud which is ingested by the engine during operation over a range of altitude conditions. The PSL test results confirmed that there was ice accretion in the engine due to ice crystal ingestion, at the same simulated altitude operating conditions as experienced previously in flight. The computational tool was utilized to help guide a portion of the PSL testing, and was used to predict ice accretion could also occur at significantly lower altitudes. The predictions were qualitatively verified by subsequent testing of the engine in the PSL. In a previous study, analysis of select PSL test data points helped to calibrate the engine icing computational tool to assess the risk of ice accretion. This current study is a continuation of that data analysis effort. The study focused on tracking the variations in wet bulb temperature and ice particle melt ratio through the engine core flow path. The results from this study have identified trends, while also identifying gaps in understanding as to how the local wet bulb temperature and melt ratio affects the risk of ice accretion and subsequent engine behavior.

  6. Modeling of Commercial Turbofan Engine with Ice Crystal Ingestion; Follow-On

    NASA Technical Reports Server (NTRS)

    Jorgenson, Philip C. E.; Veres, Joseph P.; Coennen, Ryan

    2014-01-01

    The occurrence of ice accretion within commercial high bypass aircraft turbine engines has been reported under certain atmospheric conditions. Engine anomalies have taken place at high altitudes that have been attributed to ice crystal ingestion, partially melting, and ice accretion on the compression system components. The result was degraded engine performance, and one or more of the following: loss of thrust control (roll back), compressor surge or stall, and flameout of the combustor. As ice crystals are ingested into the fan and low pressure compression system, the increase in air temperature causes a portion of the ice crystals to melt. It is hypothesized that this allows the ice-water mixture to cover the metal surfaces of the compressor stationary components which leads to ice accretion through evaporative cooling. Ice accretion causes a blockage which subsequently results in the deterioration in performance of the compressor and engine. The focus of this research is to apply an engine icing computational tool to simulate the flow through a turbofan engine and assess the risk of ice accretion. The tool is comprised of an engine system thermodynamic cycle code, a compressor flow analysis code, and an ice particle melt code that has the capability of determining the rate of sublimation, melting, and evaporation through the compressor flow path, without modeling the actual ice accretion. A commercial turbofan engine which has previously experienced icing events during operation in a high altitude ice crystal environment has been tested in the Propulsion Systems Laboratory (PSL) altitude test facility at NASA Glenn Research Center. The PSL has the capability to produce a continuous ice cloud which is ingested by the engine during operation over a range of altitude conditions. The PSL test results confirmed that there was ice accretion in the engine due to ice crystal ingestion, at the same simulated altitude operating conditions as experienced previously in flight. The computational tool was utilized to help guide a portion of the PSL testing, and was used to predict ice accretion could also occur at significantly lower altitudes. The predictions were qualitatively verified by subsequent testing of the engine in the PSL. In a previous study, analysis of select PSL test data points helped to calibrate the engine icing computational tool to assess the risk of ice accretion. This current study is a continuation of that data analysis effort. The study focused on tracking the variations in wet bulb temperature and ice particle melt ratio through the engine core flow path. The results from this study have identified trends, while also identifying gaps in understanding as to how the local wet bulb temperature and melt ratio affects the risk of ice accretion and subsequent engine behavior.

  7. Entropic multirelaxation-time lattice Boltzmann method for moving and deforming geometries in three dimensions

    NASA Astrophysics Data System (ADS)

    Dorschner, B.; Chikatamarla, S. S.; Karlin, I. V.

    2017-06-01

    Entropic lattice Boltzmann methods have been developed to alleviate intrinsic stability issues of lattice Boltzmann models for under-resolved simulations. Their reliability in combination with moving objects was established for various laminar benchmark flows in two dimensions in our previous work [B. Dorschner, S. Chikatamarla, F. Bösch, and I. Karlin, J. Comput. Phys. 295, 340 (2015), 10.1016/j.jcp.2015.04.017], as well as for three-dimensional one-way coupled simulations of engine-type geometries with flat moving walls in B. Dorschner, F. Bösch, S. Chikatamarla, K. Boulouchos, and I. Karlin [J. Fluid Mech. 801, 623 (2016), 10.1017/jfm.2016.448]. The present contribution aims to fully exploit the advantages of entropic lattice Boltzmann models in terms of stability and accuracy, and extends the methodology to three-dimensional cases including two-way coupling between fluid and structure, turbulence, and deforming geometries. To cover this wide range of applications, the classical benchmark of a sedimenting sphere is chosen first to validate the general two-way coupling algorithm. Increasing the complexity, we subsequently consider the simulation of a plunging SD7003 airfoil in the transitional regime at a Reynolds number of Re = 40 000 and, finally, to assess the model's performance for deforming geometries, we conduct a two-way coupled simulation of a self-propelled anguilliform swimmer. These simulations confirm the viability of the new fluid-structure interaction lattice Boltzmann algorithm for simulating flows of engineering relevance.

  8. Simulation approach for the evaluation of tracking accuracy in radiotherapy: a preliminary study.

    PubMed

    Tanaka, Rie; Ichikawa, Katsuhiro; Mori, Shinichiro; Sanada, Sigeru

    2013-01-01

    Real-time tumor tracking in external radiotherapy can be achieved by diagnostic (kV) X-ray imaging with a dynamic flat-panel detector (FPD). It is important to keep the patient dose as low as possible while maintaining tracking accuracy, and a simulation approach would be helpful to optimize the imaging conditions. This study was performed to develop a computer simulation platform, based on the noise properties of the imaging system, for the evaluation of tracking accuracy at any noise level. Flat-field images were obtained using a direct-type dynamic FPD, and noise power spectrum (NPS) analysis was performed. The relationship between incident quantum number and pixel value was determined, and a conversion function was created. The pixel values were converted into a map of quantum numbers using the conversion function, and the map was then input into a random number generator to simulate image noise. Simulation images were produced at different noise levels by changing the incident quantum numbers. Subsequently, an implanted marker was tracked automatically and the maximum tracking errors were calculated at different noise levels. The results indicated that the maximum tracking error increased with decreasing incident quantum number in flat-field images with an implanted marker. In addition, the range of errors increased with decreasing incident quantum number. The present method can be used to determine the relationship between image noise and tracking accuracy, and the results indicate that the simulation approach would aid in determining exposure dose conditions according to the necessary tracking accuracy.
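
    The noise-injection step described above (converting pixel values to expected incident quanta, drawing random counts, and converting back) can be sketched as follows. The linear gain used as the conversion function, the Poisson statistics, and the dose-reduction factor are assumptions for illustration, not the calibrated conversion derived from the NPS analysis in the study.

        # Sketch: simulating a noisier (lower-dose) image by converting pixel
        # values to quanta, resampling with Poisson noise, and converting back.
        import numpy as np

        rng = np.random.default_rng(42)
        gain = 0.1           # assumed quanta per pixel-value unit (hypothetical)
        dose_factor = 0.25   # simulate an image at 25% of the original exposure

        original = rng.normal(loc=500.0, scale=10.0, size=(128, 128)).clip(min=0)

        expected_quanta = original * gain * dose_factor   # expected quanta per pixel
        noisy_quanta = rng.poisson(expected_quanta)       # random quantum counts
        simulated = noisy_quanta / (gain * dose_factor)   # back to pixel values

        print("original std :", round(float(original.std()), 1))
        print("simulated std:", round(float(simulated.std()), 1))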

  9. Integrating neuroinformatics tools in TheVirtualBrain.

    PubMed

    Woodman, M Marmaduke; Pezard, Laurent; Domide, Lia; Knock, Stuart A; Sanz-Leon, Paula; Mersmann, Jochen; McIntosh, Anthony R; Jirsa, Viktor

    2014-01-01

    TheVirtualBrain (TVB) is a neuroinformatics Python package representing the convergence of clinical, systems, and theoretical neuroscience in the analysis, visualization and modeling of neural and neuroimaging dynamics. TVB is composed of a flexible simulator for neural dynamics measured across scales from local populations to large-scale dynamics measured by electroencephalography (EEG), magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI), and core analytic and visualization functions, all accessible through a web browser user interface. A datatype system modeling neuroscientific data ties together these pieces with persistent data storage, based on a combination of SQL and HDF5. These datatypes combine with adapters allowing TVB to integrate other algorithms or computational systems. TVB provides infrastructure for multiple projects and multiple users, possibly participating under multiple roles. For example, a clinician might import patient data to identify several potential lesion points in the patient's connectome. A modeler, working on the same project, tests these points for viability through whole brain simulation, based on the patient's connectome, and subsequent analysis of dynamical features. TVB also drives research forward: the simulator itself represents the culmination of several simulation frameworks in the modeling literature. The availability of the numerical methods, set of neural mass models and forward solutions allows for the construction of a wide range of brain-scale simulation scenarios. This paper briefly outlines the history and motivation for TVB, describing the framework and simulator, giving usage examples in the web UI and Python scripting.

  10. Integrating neuroinformatics tools in TheVirtualBrain

    PubMed Central

    Woodman, M. Marmaduke; Pezard, Laurent; Domide, Lia; Knock, Stuart A.; Sanz-Leon, Paula; Mersmann, Jochen; McIntosh, Anthony R.; Jirsa, Viktor

    2014-01-01

    TheVirtualBrain (TVB) is a neuroinformatics Python package representing the convergence of clinical, systems, and theoretical neuroscience in the analysis, visualization and modeling of neural and neuroimaging dynamics. TVB is composed of a flexible simulator for neural dynamics measured across scales from local populations to large-scale dynamics measured by electroencephalography (EEG), magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI), and core analytic and visualization functions, all accessible through a web browser user interface. A datatype system modeling neuroscientific data ties together these pieces with persistent data storage, based on a combination of SQL and HDF5. These datatypes combine with adapters allowing TVB to integrate other algorithms or computational systems. TVB provides infrastructure for multiple projects and multiple users, possibly participating under multiple roles. For example, a clinician might import patient data to identify several potential lesion points in the patient's connectome. A modeler, working on the same project, tests these points for viability through whole brain simulation, based on the patient's connectome, and subsequent analysis of dynamical features. TVB also drives research forward: the simulator itself represents the culmination of several simulation frameworks in the modeling literature. The availability of the numerical methods, set of neural mass models and forward solutions allows for the construction of a wide range of brain-scale simulation scenarios. This paper briefly outlines the history and motivation for TVB, describing the framework and simulator, giving usage examples in the web UI and Python scripting. PMID:24795617

  11. Empirical constrained Bayes predictors accounting for non-detects among repeated measures.

    PubMed

    Moore, Reneé H; Lyles, Robert H; Manatunga, Amita K

    2010-11-10

    When the prediction of subject-specific random effects is of interest, constrained Bayes predictors (CB) have been shown to reduce the shrinkage of the widely accepted Bayes predictor while still maintaining desirable properties, such as optimizing mean-square error subsequent to matching the first two moments of the random effects of interest. However, occupational exposure and other epidemiologic (e.g. HIV) studies often present a further challenge because data may fall below the measuring instrument's limit of detection. Although methodology exists in the literature to compute Bayes estimates in the presence of non-detects (Bayes(ND)), CB methodology has not been proposed in this setting. By combining methodologies for computing CBs and Bayes(ND), we introduce two novel CBs that accommodate an arbitrary number of observable and non-detectable measurements per subject. Based on application to real data sets (e.g. occupational exposure, HIV RNA) and simulation studies, these CB predictors are markedly superior to the Bayes predictor and to alternative predictors computed using ad hoc methods in terms of meeting the goal of matching the first two moments of the true random effects distribution. Copyright © 2010 John Wiley & Sons, Ltd.
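
    The snippet below sketches the basic constrained Bayes idea for a random-intercept model with complete data: the usual shrinkage (Bayes) predictor is rescaled so that the spread of the predictors matches the assumed random-effects variance. It is illustrative only; the variance components and data are invented, and the paper's actual predictors additionally integrate over observations below the limit of detection, which is not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_subj, n_rep = 50, 4
    sigma_b, sigma_e, mu = 1.0, 0.8, 2.0          # assumed variance components and mean

    b_true = rng.normal(0.0, sigma_b, n_subj)     # subject-specific random effects
    y = mu + b_true[:, None] + rng.normal(0.0, sigma_e, (n_subj, n_rep))

    # Bayes (shrinkage) predictor of each subject's random effect
    lam = sigma_b**2 / (sigma_b**2 + sigma_e**2 / n_rep)
    bayes = lam * (y.mean(axis=1) - mu)

    # Constrained Bayes: rescale so the predictors' variance matches sigma_b^2,
    # countering the over-shrinkage of the Bayes predictor
    c = np.sqrt(sigma_b**2 / bayes.var())
    cb = bayes.mean() + c * (bayes - bayes.mean())

    print("var(Bayes):", round(bayes.var(), 3), " var(CB):", round(cb.var(), 3))
    ```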

  12. NeuroManager: a workflow analysis based simulation management engine for computational neuroscience

    PubMed Central

    Stockton, David B.; Santamaria, Fidel

    2015-01-01

    We developed NeuroManager, an object-oriented simulation management software engine for computational neuroscience. NeuroManager automates the workflow of simulation job submissions when using heterogeneous computational resources, simulators, and simulation tasks. The object-oriented approach (1) provides flexibility to adapt to a variety of neuroscience simulators, (2) simplifies the use of heterogeneous computational resources, from desktops to supercomputer clusters, and (3) improves tracking of simulator/simulation evolution. We implemented NeuroManager in MATLAB, a widely used engineering and scientific language, for its signal and image processing tools, prevalence in electrophysiology analysis, and increasing use in college biology education. To design and develop NeuroManager we analyzed the workflow of simulation submission for a variety of simulators, operating systems, and computational resources, including the handling of input parameters, data, models, results, and analyses. This analysis resulted in a 22-stage simulation submission workflow. The software incorporates progress notification, automatic organization, labeling, and time-stamping of data and results, and integrated access to MATLAB's analysis and visualization tools. NeuroManager provides users with the tools to automate daily tasks, and assists principal investigators in tracking and recreating the evolution of research projects performed by multiple people. Overall, NeuroManager provides the infrastructure needed to improve workflow, manage multiple simultaneous simulations, and maintain provenance of the potentially large amounts of data produced during the course of a research project. PMID:26528175
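
    The following is a hypothetical sketch (in Python, not NeuroManager's MATLAB API) of the flavor of automation described above: labeling, time-stamping, and launching a batch of simulation jobs, with parameters recorded alongside each run. The function and directory names are invented for illustration.

    ```python
    import json
    import subprocess
    from datetime import datetime
    from pathlib import Path

    def submit_simulation(simulator_cmd, params, workdir="runs"):
        """Create a labeled, time-stamped run directory and launch one simulation."""
        stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
        run_dir = Path(workdir) / f"{params['label']}_{stamp}"
        run_dir.mkdir(parents=True, exist_ok=True)
        (run_dir / "params.json").write_text(json.dumps(params, indent=2))
        # A real engine would dispatch this to a remote or cluster resource.
        return subprocess.Popen(simulator_cmd + [str(run_dir)], cwd=run_dir)

    jobs = [submit_simulation(["echo", "simulating"], {"label": f"cell{i}", "gbar": 0.1 * i})
            for i in range(3)]
    for job in jobs:
        job.wait()   # progress notification and result aggregation would go here
    ```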

  13. NeuroManager: a workflow analysis based simulation management engine for computational neuroscience.

    PubMed

    Stockton, David B; Santamaria, Fidel

    2015-01-01

    We developed NeuroManager, an object-oriented simulation management software engine for computational neuroscience. NeuroManager automates the workflow of simulation job submissions when using heterogeneous computational resources, simulators, and simulation tasks. The object-oriented approach (1) provides flexibility to adapt to a variety of neuroscience simulators, (2) simplifies the use of heterogeneous computational resources, from desktops to supercomputer clusters, and (3) improves tracking of simulator/simulation evolution. We implemented NeuroManager in MATLAB, a widely used engineering and scientific language, for its signal and image processing tools, prevalence in electrophysiology analysis, and increasing use in college biology education. To design and develop NeuroManager we analyzed the workflow of simulation submission for a variety of simulators, operating systems, and computational resources, including the handling of input parameters, data, models, results, and analyses. This analysis resulted in a 22-stage simulation submission workflow. The software incorporates progress notification, automatic organization, labeling, and time-stamping of data and results, and integrated access to MATLAB's analysis and visualization tools. NeuroManager provides users with the tools to automate daily tasks, and assists principal investigators in tracking and recreating the evolution of research projects performed by multiple people. Overall, NeuroManager provides the infrastructure needed to improve workflow, manage multiple simultaneous simulations, and maintain provenance of the potentially large amounts of data produced during the course of a research project.

  14. Methodology of modeling and measuring computer architectures for plasma simulations

    NASA Technical Reports Server (NTRS)

    Wang, L. P. T.

    1977-01-01

    A brief introduction to plasma simulation using computers and the difficulties on currently available computers is given. Through the use of an analyzing and measuring methodology - SARA, the control flow and data flow of a particle simulation model REM2-1/2D are exemplified. After recursive refinements the total execution time may be greatly shortened and a fully parallel data flow can be obtained. From this data flow, a matched computer architecture or organization could be configured to achieve the computation bound of an application problem. A sequential type simulation model, an array/pipeline type simulation model, and a fully parallel simulation model of a code REM2-1/2D are proposed and analyzed. This methodology can be applied to other application problems which have implicitly parallel nature.

  15. A Computer-Based Simulation of an Acid-Base Titration

    ERIC Educational Resources Information Center

    Boblick, John M.

    1971-01-01

    Reviews the advantages of computer simulated environments for experiments, referring in particular to acid-base titrations. Includes pre-lab instructions and a sample computer printout of a student's use of an acid-base simulation. Ten references. (PR)

  16. Software for Brain Network Simulations: A Comparative Study

    PubMed Central

    Tikidji-Hamburyan, Ruben A.; Narayana, Vikram; Bozkus, Zeki; El-Ghazawi, Tarek A.

    2017-01-01

    Numerical simulations of brain networks are a critical part of our efforts in understanding brain functions under pathological and normal conditions. For several decades, the community has developed many software packages and simulators to accelerate research in computational neuroscience. In this article, we select the three most popular simulators, as determined by the number of models in the ModelDB database, namely NEURON, GENESIS, and BRIAN, and perform an independent evaluation of these simulators. In addition, we study NEST, one of the leading simulators of the Human Brain Project. First, we study them based on one of the most important characteristics, the range of supported models. Our investigation reveals that brain network simulators may be biased toward supporting a specific set of models. However, all simulators tend to expand the supported range of models by providing a universal environment for the computational study of individual neurons and brain networks. Next, our investigations on the characteristics of computational architecture and efficiency indicate that all simulators compile the most computationally intensive procedures into binary code, with the aim of maximizing their computational performance. However, not all simulators provide the simplest method for module development and/or guarantee efficient binary code. Third, a study of their amenability for high-performance computing reveals that NEST can almost transparently map an existing model on a cluster or multicore computer, while NEURON requires code modification if the model developed for a single computer has to be mapped on a computational cluster. Interestingly, parallelization is the weakest characteristic of BRIAN, which provides no support for cluster computations and limited support for multicore computers. Fourth, we identify the level of user support and frequency of usage for all simulators. Finally, we carry out an evaluation using two case studies: a large network with simplified neural and synaptic models and a small network with detailed models. These two case studies allow us to avoid any bias toward a particular software package. The results indicate that BRIAN provides the most concise language for both cases considered. Furthermore, as expected, NEST mostly favors large network models, while NEURON is better suited for detailed models. Overall, the case studies reinforce our general observation that simulators have a bias in computational performance toward specific types of brain network models. PMID:28775687

  17. Computer simulation on the cooperation of functional molecules during the early stages of evolution.

    PubMed

    Ma, Wentao; Hu, Jiming

    2012-01-01

    It is very likely that life began with some RNA (or RNA-like) molecules, self-replicating by base-pairing and exhibiting enzyme-like functions that favored the self-replication. Different functional molecules may have emerged by favoring their own self-replication at different aspects. Then, a direct route towards complexity/efficiency may have been through the coexistence/cooperation of these molecules. However, the likelihood of this route remains quite unclear, especially because the molecules would be competing for limited common resources. By computer simulation using a Monte-Carlo model (with "micro-resolution" at the level of nucleotides and membrane components), we show that the coexistence/cooperation of these molecules can occur naturally, both in a naked form and in a protocell form. The results of the computer simulation also lead to quite a few deductions concerning the environment and history in the scenario. First, a naked stage (with functional molecules catalyzing template-replication and metabolism) may have occurred early in evolution but required high concentration and limited dispersal of the system (e.g., on some mineral surface); the emergence of protocells enabled a "habitat-shift" into bulk water. Second, the protocell stage started with a substage of "pseudo-protocells", with functional molecules catalyzing template-replication and metabolism, but still missing the function involved in the synthesis of membrane components, the emergence of which would lead to a subsequent "true-protocell" substage. Third, the initial unstable membrane, composed of prebiotically available fatty acids, should have been superseded quite early by a more stable membrane (e.g., composed of phospholipids, like modern cells). Additionally, the membrane-takeover probably occurred at the transition of the two substages of the protocells. The scenario described in the present study should correspond to an episode in early evolution, after the emergence of single "genes", but before the appearance of a "chromosome" with linked genes.

  18. Evaluation of the transport matrix method for simulation of ocean biogeochemical tracers

    NASA Astrophysics Data System (ADS)

    Kvale, Karin F.; Khatiwala, Samar; Dietze, Heiner; Kriest, Iris; Oschlies, Andreas

    2017-06-01

    Conventional integration of Earth system and ocean models can accrue considerable computational expenses, particularly for marine biogeochemical applications. Offline numerical schemes in which only the biogeochemical tracers are time stepped and transported using a pre-computed circulation field can substantially reduce the burden and are thus an attractive alternative. One such scheme is the transport matrix method (TMM), which represents tracer transport as a sequence of sparse matrix-vector products that can be performed efficiently on distributed-memory computers. While the TMM has been used for a variety of geochemical and biogeochemical studies, to date the resulting solutions have not been comprehensively assessed against their online counterparts. Here, we present a detailed comparison of the two. It is based on simulations of the state-of-the-art biogeochemical sub-model embedded within the widely used coarse-resolution University of Victoria Earth System Climate Model (UVic ESCM). The default, non-linear advection scheme was first replaced with a linear, third-order upwind-biased advection scheme to satisfy the linearity requirement of the TMM. Transport matrices were extracted from an equilibrium run of the physical model and subsequently used to integrate the biogeochemical model offline to equilibrium. The identical biogeochemical model was also run online. Our simulations show that offline integration introduces some bias to biogeochemical quantities through the omission of the polar filtering used in UVic ESCM and in the offline application of time-dependent forcing fields, with high latitudes showing the largest differences with respect to the online model. Differences in other regions and in the seasonality of nutrients and phytoplankton distributions are found to be relatively minor, giving confidence that the TMM is a reliable tool for offline integration of complex biogeochemical models. Moreover, while UVic ESCM is a serial code, the TMM can be run on a parallel machine with no change to the underlying biogeochemical code, thus providing orders of magnitude speed-up over the online model.
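
    The sketch below shows the offline idea behind the TMM in miniature: once a transport matrix is available, a biogeochemical tracer can be stepped forward as a sequence of sparse matrix-vector products without re-running the circulation model. The matrix here is a toy 1-D mixing operator and the source term is a placeholder, not output from UVic ESCM.

    ```python
    import numpy as np
    import scipy.sparse as sp

    n = 200                                    # number of grid boxes (toy 1-D column)
    kappa = 0.1                                # diffusive exchange coefficient
    # Stand-in "transport matrix": simple explicit 1-D mixing, in place of the
    # matrices extracted from an equilibrium run of the physical model.
    main = np.full(n, 1.0 - 2.0 * kappa)
    off = np.full(n - 1, kappa)
    A = sp.diags([off, main, off], offsets=[-1, 0, 1], format="csr")

    c = np.zeros(n)
    c[n // 2] = 1.0                            # tracer pulse (e.g., a phosphate anomaly)

    def biogeochem_source(c, dt):              # placeholder biogeochemical sub-model
        return -0.001 * c * dt

    dt = 1.0
    for _ in range(1000):
        c = A @ c + biogeochem_source(c, dt)   # transport, then local sources/sinks
    print("tracer total after offline stepping:", round(c.sum(), 4))
    ```

    Because each step is a sparse matrix-vector product, the same loop parallelizes naturally across distributed-memory nodes, which is the source of the speed-up reported above.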

  19. Impacts of the Convective Transport Algorithm on Atmospheric Composition and Ozone-Climate Feedbacks in GEOS-CCM

    NASA Technical Reports Server (NTRS)

    Pawson, S.; Nielsen, Jon E.; Oman, L.; Douglass, A. R.; Duncan, B. N.; Zhu, Z.

    2012-01-01

    Convective transport is one of the dominant factors in determining the composition of the troposphere. It is the main mechanism for lofting constituents from near-surface source regions to the middle and upper troposphere, where they can subsequently be advected over large distances. Gases reaching the upper troposphere can also be injected through the tropopause and play a subsequent role in the lower stratospheric ozone balance. Convection codes in climate models remain a great source of uncertainty for both the energy balance of the general circulation and the transport of constituents. This study uses the Goddard Earth Observing System Chemistry-Climate Model (GEOS CCM) to perform a controlled experiment that isolates the impact of convective transport of constituents from the direct changes on the atmospheric energy balance. Two multi-year simulations are conducted. In the first, the thermodynamic variable, moisture, and all trace gases are transported using the multi-plume Relaxed-Arakawa-Schubert (RAS) convective parameterization. In the second simulation, RAS impacts the thermodynamic energy and moisture in this standard manner, but all other constituents are transported differently. The accumulated convective mass fluxes (including entrainment and detrainment) computed at each time step of the GCM are used with a diffusive (bulk) algorithm for the vertical transport, which above all is less efficient at transporting constituents from the lower to the upper troposphere. Initial results show the expected differences in vertical structure of trace gases such as carbon monoxide, but also show differences in lower stratospheric ozone, in a region where it can potentially impact the climate state of the model. This work will investigate in more detail the impact of convective transport changes by comparing the two simulations over many years (1996-2010), focusing on comparisons with observed constituent distributions and similarities and differences of patterns of inter-annual variability caused by the convective transport algorithm. In particular, the impact on lower stratospheric composition will be isolated and the subsequent feedbacks of ozone on the climate forcing and tropopause structure will be assessed.

  20. From chalkboard, slides, and paper to e-learning: How computing technologies have transformed anatomical sciences education.

    PubMed

    Trelease, Robert B

    2016-11-01

    Until the late-twentieth century, primary anatomical sciences education was relatively unenhanced by advanced technology and dependent on the mainstays of printed textbooks, chalkboard- and photographic projection-based classroom lectures, and cadaver dissection laboratories. But over the past three decades, diffusion of innovations in computer technology transformed the practices of anatomical education and research, along with other aspects of work and daily life. Increasing adoption of first-generation personal computers (PCs) in the 1980s paved the way for the first practical educational applications, and visionary anatomists foresaw the usefulness of computers for teaching. While early computers lacked high-resolution graphics capabilities and interactive user interfaces, applications with video discs demonstrated the practicality of programming digital multimedia linking descriptive text with anatomical imaging. Desktop publishing established that computers could be used for producing enhanced lecture notes, and commercial presentation software made it possible to give lectures using anatomical and medical imaging, as well as animations. Concurrently, computer processing supported the deployment of medical imaging modalities, including computed tomography, magnetic resonance imaging, and ultrasound, that were subsequently integrated into anatomy instruction. Following its public birth in the mid-1990s, the World Wide Web became the ubiquitous multimedia networking technology underlying the conduct of contemporary education and research. Digital video, structural simulations, and mobile devices have been more recently applied to education. Progressive implementation of computer-based learning methods interacted with waves of ongoing curricular change, and such technologies have been deemed crucial for continuing medical education reforms, providing new challenges and opportunities for anatomical sciences educators. Anat Sci Educ 9: 583-602. © 2016 American Association of Anatomists.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Galarraga, Haize; Warren, Robert J.; Lados, Diana A.

    Electron beam melting (EBM) is a metal powder bed fusion additive manufacturing (AM) technology that is used to fabricate three-dimensional near-net-shaped parts directly from computer models. Ti-6Al-4V is the most widely used and studied alloy for this technology and is the focus of this work in its ELI (Extra Low Interstitial) variation. The mechanisms of microstructure formation and evolution, and their subsequent influence on mechanical properties of the alloy in the as-fabricated condition, have been documented by various researchers. In the present work, the thermal history resulting in the formation of the as-fabricated microstructure was analyzed and studied by thermal simulation. Subsequently, different heat treatments were performed based on three approaches in order to study the effects of heat treatments on the distinctive microstructure formed during the EBM fabrication process. In the first approach, the effect of cooling rate after the solutionizing process was studied. In the second approach, the variation of α lath thickness during annealing and its correlation with mechanical properties were established. In the last approach, several solutionizing and aging experiments were conducted.

  2. Neuronal correlates of decisions to speak and act: Spontaneous emergence and dynamic topographies in a computational model of frontal and temporal areas

    PubMed Central

    Garagnani, Max; Pulvermüller, Friedemann

    2013-01-01

    The neural mechanisms underlying the spontaneous, stimulus-independent emergence of intentions and decisions to act are poorly understood. Using a neurobiologically realistic model of frontal and temporal areas of the brain, we simulated the learning of perception–action circuits for speech and hand-related actions and subsequently observed their spontaneous behaviour. Noise-driven accumulation of reverberant activity in these circuits leads to their spontaneous ignition and partial-to-full activation, which we interpret, respectively, as model correlates of action intention emergence and action decision-and-execution. Importantly, activity emerged first in higher-association prefrontal and temporal cortices, subsequently spreading to secondary and finally primary sensorimotor model-areas, hence reproducing the dynamics of cortical correlates of voluntary action revealed by readiness-potential and verb-generation experiments. This model for the first time explains the cortical origins and topography of endogenous action decisions, and the natural emergence of functional specialisation in the cortex, as mechanistic consequences of neurobiological principles, anatomical structure and sensorimotor experience. PMID:23489583
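
    As a purely qualitative illustration of the mechanism described above (not the authors' multi-area cortical model), the toy script below shows noise-driven accumulation in a single recurrent unit that "ignites" once activity crosses a threshold, in the absence of any external stimulus. All parameters are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    dt, tau = 1.0, 20.0            # ms
    w_rec, theta, sigma = 1.2, 0.8, 0.05   # recurrent gain, ignition threshold, noise level
    x, t, ignition_time = 0.0, 0.0, None

    while t < 5000.0:
        # leaky recurrent accumulation driven only by noise (no external input)
        x += dt / tau * (-x + w_rec * max(x, 0.0)) + sigma * np.sqrt(dt) * rng.standard_normal()
        x = max(x, 0.0)
        if x > theta:
            ignition_time = t      # toy correlate of a spontaneous decision to act
            break
        t += dt

    print("spontaneous ignition at t =", ignition_time, "ms")
    ```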

  3. Computational Particle Dynamic Simulations on Multicore Processors (CPDMu) Final Report Phase I

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmalz, Mark S

    2011-07-24

    Statement of Problem - Department of Energy has many legacy codes for simulation of computational particle dynamics and computational fluid dynamics applications that are designed to run on sequential processors and are not easily parallelized. Emerging high-performance computing architectures employ massively parallel multicore architectures (e.g., graphics processing units) to increase throughput. Parallelization of legacy simulation codes is a high priority, to achieve compatibility, efficiency, accuracy, and extensibility. General Statement of Solution - A legacy simulation application designed for implementation on mainly-sequential processors has been represented as a graph G. Mathematical transformations, applied to G, produce a graph representation G̲ for a high-performance architecture. Key computational and data movement kernels of the application were analyzed/optimized for parallel execution using the mapping G → G̲, which can be performed semi-automatically. This approach is widely applicable to many types of high-performance computing systems, such as graphics processing units or clusters comprised of nodes that contain one or more such units. Phase I Accomplishments - Phase I research decomposed/profiled computational particle dynamics simulation code for rocket fuel combustion into low and high computational cost regions (respectively, mainly sequential and mainly parallel kernels), with analysis of space and time complexity. Using the research team's expertise in algorithm-to-architecture mappings, the high-cost kernels were transformed, parallelized, and implemented on Nvidia Fermi GPUs. Measured speedups (GPU with respect to single-core CPU) were approximately 20-32X for realistic model parameters, without final optimization. Error analysis showed no loss of computational accuracy. Commercial Applications and Other Benefits - The proposed research will constitute a breakthrough in solution of problems related to efficient parallel computation of particle and fluid dynamics simulations. These problems occur throughout DOE, military and commercial sectors: the potential payoff is high. We plan to license or sell the solution to contractors for military and domestic applications such as disaster simulation (aerodynamic and hydrodynamic), Government agencies (hydrological and environmental simulations), and medical applications (e.g., in tomographic image reconstruction). Keywords - High-performance Computing, Graphic Processing Unit, Fluid/Particle Simulation. Summary for Members of Congress - Department of Energy has many simulation codes that must compute faster, to be effective. The Phase I research parallelized particle/fluid simulations for rocket combustion, for high-performance computing systems.

  4. In silico analysis of antibiotic-induced Clostridium difficile infection: Remediation techniques and biological adaptations

    PubMed Central

    Jones, Eric W.; Carlson, Jean M.

    2018-01-01

    In this paper we study antibiotic-induced C. difficile infection (CDI), caused by the toxin-producing C. difficile (CD), and implement clinically-inspired simulated treatments in a computational framework that synthesizes a generalized Lotka-Volterra (gLV) model with SIR modeling techniques. The gLV model uses parameters derived from an experimental mouse model, in which the mice are administered antibiotics and subsequently dosed with CD. We numerically identify which of the experimentally measured initial conditions are vulnerable to CD colonization, then formalize the notion of CD susceptibility analytically. We simulate fecal transplantation, a clinically successful treatment for CDI, and discover that both the transplant timing and transplant donor are relevant to the efficacy of the treatment, a result which has clinical implications. We incorporate two nongeneric yet dangerous attributes of CD into the gLV model, sporulation and antibiotic-resistant mutation, and for each identify relevant SIR techniques that describe the desired attribute. Finally, we rely on the results of our framework to analyze an experimental study of fecal transplants in mice, and are able to explain observed experimental results, validate our simulated results, and suggest model-motivated experiments. PMID:29451873
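
    A schematic sketch of the modeling ingredients described above is given below: a small generalized Lotka-Volterra community is integrated, perturbed by an antibiotic knock-down, and then challenged with a C. difficile-like invader. The growth rates, interaction matrix, and dosing values are invented for illustration and are not the mouse-derived parameters used in the paper.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    rng = np.random.default_rng(4)
    n = 4                                             # 3 commensals + 1 invader (toy)
    r = np.array([0.5, 0.4, 0.6, 0.7])                # growth rates
    A = -0.5 * np.eye(n) + 0.05 * rng.standard_normal((n, n))   # interaction matrix

    def glv(t, x):
        """Generalized Lotka-Volterra right-hand side: dx/dt = x * (r + A x)."""
        return x * (r + A @ x)

    x0 = np.array([1.0, 1.0, 1.0, 0.0])               # invader absent initially
    pre = solve_ivp(glv, (0, 20), x0)                 # healthy community equilibrates

    x_ab = pre.y[:, -1] * np.array([0.05, 0.05, 0.05, 1.0])   # antibiotic knock-down
    x_ab[3] = 0.01                                            # C. difficile challenge
    post = solve_ivp(glv, (0, 60), x_ab)

    print("final abundances (last species = invader):", post.y[:, -1].round(3))
    ```

    A fecal-transplant-like intervention could be sketched in the same framework by resetting the commensal abundances part-way through the post-antibiotic integration.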

  5. In silico analysis of antibiotic-induced Clostridium difficile infection: Remediation techniques and biological adaptations.

    PubMed

    Jones, Eric W; Carlson, Jean M

    2018-02-01

    In this paper we study antibiotic-induced C. difficile infection (CDI), caused by the toxin-producing C. difficile (CD), and implement clinically-inspired simulated treatments in a computational framework that synthesizes a generalized Lotka-Volterra (gLV) model with SIR modeling techniques. The gLV model uses parameters derived from an experimental mouse model, in which the mice are administered antibiotics and subsequently dosed with CD. We numerically identify which of the experimentally measured initial conditions are vulnerable to CD colonization, then formalize the notion of CD susceptibility analytically. We simulate fecal transplantation, a clinically successful treatment for CDI, and discover that both the transplant timing and transplant donor are relevant to the efficacy of the treatment, a result which has clinical implications. We incorporate two nongeneric yet dangerous attributes of CD into the gLV model, sporulation and antibiotic-resistant mutation, and for each identify relevant SIR techniques that describe the desired attribute. Finally, we rely on the results of our framework to analyze an experimental study of fecal transplants in mice, and are able to explain observed experimental results, validate our simulated results, and suggest model-motivated experiments.

  6. The Hydrodynamics and Odorant Transport Phenomena of Olfaction in the Hammerhead Shark

    NASA Astrophysics Data System (ADS)

    Rygg, Alex; Craven, Brent

    2013-11-01

    The hammerhead shark possesses a unique head morphology that is thought to facilitate enhanced olfactory performance. The olfactory organs, located at the distal ends of the cephalofoil, contain numerous lamellae that increase the surface area for olfaction. Functionally, for the shark to detect chemical stimuli, water-borne odors must reach the olfactory sensory epithelium that lines these lamellae. Thus, odorant transport from the aquatic environment to the sensory epithelium is the first critical step in olfaction. Here we investigate the hydrodynamics and odorant transport phenomena of olfaction in the hammerhead shark based on an anatomically-accurate reconstruction of the head and olfactory chamber from high-resolution micro-CT and MRI scans of a cadaver specimen. Computational fluid dynamics (CFD) simulations of water flow in the reconstructed model reveal the external and internal hydrodynamics of olfaction during swimming. Odorant transport in the olfactory organ is investigated using a multi-scale approach, whereby molecular dynamics (MD) simulations are used to calculate odorant partition coefficients that are subsequently utilized in macro-scale CFD simulations of odorant deposition. The hydrodynamic and odorant transport results are used to elucidate several important features of olfactory function in the hammerhead shark.

  7. Implementation of Parallel Dynamic Simulation on Shared-Memory vs. Distributed-Memory Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Shuangshuang; Chen, Yousu; Wu, Di

    2015-12-09

    Power system dynamic simulation computes the system response to a sequence of large disturbances, such as sudden changes in generation or load, or a network short circuit followed by a protective branch switching operation. It consists of a large set of differential and algebraic equations, which is computationally intensive and challenging to solve using a single-processor-based dynamic simulation solution. High-performance computing (HPC) based parallel computing is a very promising technology to speed up the computation and facilitate the simulation process. This paper presents two different parallel implementations of power grid dynamic simulation using Open Multi-processing (OpenMP) on a shared-memory platform, and Message Passing Interface (MPI) on distributed-memory clusters, respectively. The differences between the parallel simulation algorithms and architectures of the two HPC technologies are illustrated, and their performances for running parallel dynamic simulation are compared and demonstrated.
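
    The structure of such a simulation is sketched below for a toy four-machine system: at each time step the algebraic network solution is evaluated, then the per-machine swing-equation derivatives are computed in parallel. Python's multiprocessing pool stands in for the compiled OpenMP/MPI implementations discussed in the abstract, and all machine and network parameters are invented.

    ```python
    import numpy as np
    from multiprocessing import Pool

    H = np.array([3.0, 4.0, 2.5, 5.0])        # inertia constants (s), invented
    Pm = np.array([0.8, 0.7, 0.9, 0.6])       # mechanical power (p.u.), invented
    B = 0.5 * (np.ones((4, 4)) - np.eye(4))   # reduced transfer susceptances (p.u.)
    ws = 2 * np.pi * 60

    def derivs(args):
        """Swing-equation right-hand side for one machine; evaluated in a worker."""
        i, delta_i, omega_i, Pe_i = args
        return omega_i, ws / (2.0 * H[i]) * (Pm[i] - Pe_i)

    if __name__ == "__main__":
        delta = np.zeros(4)                   # rotor angles (rad)
        omega = np.zeros(4)                   # speed deviations (rad/s)
        dt = 0.01
        with Pool(4) as pool:
            for _ in range(200):
                # algebraic network solution, kept serial in this sketch
                Pe = np.array([sum(B[i, j] * np.sin(delta[i] - delta[j]) for j in range(4))
                               for i in range(4)])
                out = pool.map(derivs, [(i, delta[i], omega[i], Pe[i]) for i in range(4)])
                ddelta, domega = map(np.array, zip(*out))
                delta += dt * ddelta
                omega += dt * domega
        print("rotor angles after 2 s (rad):", delta.round(2))
    ```

    The toy system has no damping or governors, so the angles drift; the point is only the split between a serial algebraic step and a parallelizable differential step.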

  8. Toward real-time Monte Carlo simulation using a commercial cloud computing infrastructure.

    PubMed

    Wang, Henry; Ma, Yunzhi; Pratx, Guillem; Xing, Lei

    2011-09-07

    Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in a heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach for high performance computing and is implemented to perform ultra-fast MC calculation in radiation therapy. We deployed the EGS5 MC package in a commercial cloud environment. Launched from a single local computer with Internet access, a Python script allocates a remote virtual cluster. A handshaking protocol designates master and worker nodes. The EGS5 binaries and the simulation data are initially loaded onto the master node. The simulation is then distributed among independent worker nodes via the message passing interface, and the results aggregated on the local computer for display and data analysis. The described approach is evaluated for pencil beams and broad beams of high-energy electrons and photons. The output of cloud-based MC simulation is identical to that produced by single-threaded implementation. For 1 million electrons, a simulation that takes 2.58 h on a local computer can be executed in 3.3 min on the cloud with 100 nodes, a 47× speed-up. Simulation time scales inversely with the number of parallel nodes. The parallelization overhead is also negligible for large simulations. Cloud computing represents one of the most important recent advances in supercomputing technology and provides a promising platform for substantially improved MC simulation. In addition to the significant speed up, cloud computing builds a layer of abstraction for high performance parallel computing, which may change the way dose calculations are performed and radiation treatment plans are completed.
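
    The master/worker pattern described above is sketched below, with local processes standing in for cloud worker nodes and a trivial photon free-path tally standing in for EGS5 (a compiled Monte Carlo package that is not reproduced here). Batch sizes and seeds are illustrative.

    ```python
    import numpy as np
    from multiprocessing import Pool

    def worker(args):
        """Simulate a batch of particle histories and return a partial tally."""
        n_histories, seed = args
        rng = np.random.default_rng(seed)
        depths = rng.exponential(scale=1.0, size=n_histories)   # toy free paths
        return np.histogram(depths, bins=50, range=(0, 5))[0]

    if __name__ == "__main__":
        total, n_nodes = 1_000_000, 8
        batches = [(total // n_nodes, seed) for seed in range(n_nodes)]
        with Pool(n_nodes) as pool:
            tallies = pool.map(worker, batches)    # distribute, then aggregate
        dose_like = np.sum(tallies, axis=0)
        print("aggregated tally, total histories:", dose_like.sum())
    ```

    Because the histories are independent, the wall-clock time scales roughly inversely with the number of workers, which is the behavior reported for the 100-node cloud runs above.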

  9. A Fast Synthetic Aperture Radar Raw Data Simulation Using Cloud Computing.

    PubMed

    Li, Zhixin; Su, Dandan; Zhu, Haijiang; Li, Wei; Zhang, Fan; Li, Ruirui

    2017-01-08

    Synthetic Aperture Radar (SAR) raw data simulation is a fundamental problem in radar system design and imaging algorithm research. The growth of surveying swath and resolution results in a significant increase in data volume and simulation period, which can be considered a combined data-intensive and computing-intensive issue. Although several high performance computing (HPC) methods have demonstrated their potential for accelerating simulation, the input/output (I/O) bottleneck of huge raw data has not been eased. In this paper, we propose a cloud computing based SAR raw data simulation algorithm, which employs the MapReduce model to accelerate the raw data computing and the Hadoop distributed file system (HDFS) for fast I/O access. The MapReduce model is designed for the irregular parallel accumulation of raw data simulation, which otherwise greatly reduces the parallel efficiency of graphics processing unit (GPU) based simulation methods. In addition, three kinds of optimization strategies are put forward from the aspects of programming model, HDFS configuration and scheduling. The experimental results show that the cloud computing based algorithm achieves a 4× speedup over the baseline serial approach in an 8-node cloud environment, and each optimization strategy improves performance by about 20%. This work proves that the proposed cloud algorithm is capable of solving the computing intensive and data intensive issues in SAR raw data simulation, and is easily extended to large scale computing to achieve higher acceleration.
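
    The map/reduce split is sketched conceptually below in pure Python (no Hadoop): the "map" phase emits each point scatterer's echo contribution keyed by range bin, and the "reduce" phase performs the irregular accumulation of contributions that land in the same bin. The wavelength, sampling rate, and scatterer positions are invented toy values.

    ```python
    import numpy as np
    from collections import defaultdict

    rng = np.random.default_rng(5)
    c, fs, wavelength = 3e8, 1e8, 0.03            # toy constants
    scatterers = rng.uniform(5e3, 6e3, 100)       # slant ranges of point targets (m)

    def map_phase(r):
        """Emit a (range_bin, complex contribution) pair for one scatterer."""
        delay = 2 * r / c
        bin_idx = int(delay * fs)
        return bin_idx, np.exp(-4j * np.pi * r / wavelength)

    def reduce_phase(pairs):
        """Accumulate contributions per range bin (the irregular accumulation step)."""
        raw = defaultdict(complex)
        for bin_idx, contrib in pairs:
            raw[bin_idx] += contrib
        return raw

    raw_data = reduce_phase(map_phase(r) for r in scatterers)
    print("non-empty range bins:", len(raw_data))
    ```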

  10. Computational Fluid Dynamics Demonstration of Rigid Bodies in Motion

    NASA Technical Reports Server (NTRS)

    Camarena, Ernesto; Vu, Bruce T.

    2011-01-01

    The Design Analysis Branch (NE-Ml) at the Kennedy Space Center has not had the ability to accurately couple Rigid Body Dynamics (RBD) and Computational Fluid Dynamics (CFD). OVERFLOW-D is a flow solver that has been developed by NASA to have the capability to analyze and simulate dynamic motions with up to six degrees of freedom (6-DOF). Two simulations were prepared over the course of the internship to demonstrate 6-DOF motion of rigid bodies under aerodynamic loading. The geometries in the simulations were based on a conceptual Space Launch System (SLS). The first simulation that was prepared and computed was the motion of a Solid Rocket Booster (SRB) as it separates from its core stage. To reduce computational time during the development of the simulation, only half of the physical domain with respect to the symmetry plane was simulated. Then a full solution was prepared and computed. The second simulation was a model of the SLS as it departs from a launch pad under a 20 knot crosswind. This simulation was reduced to two dimensions (2D) to reduce both preparation and computation time. By allowing 2-DOF for translations and 1-DOF for rotation, the simulation predicted unrealistic rotation. The simulation was then constrained to only allow translations.

  11. Technology, Pedagogy, and Epistemology: Opportunities and Challenges of Using Computer Modeling and Simulation Tools in Elementary Science Methods

    ERIC Educational Resources Information Center

    Schwarz, Christina V.; Meyer, Jason; Sharma, Ajay

    2007-01-01

    This study infused computer modeling and simulation tools in a 1-semester undergraduate elementary science methods course to advance preservice teachers' understandings of computer software use in science teaching and to help them learn important aspects of pedagogy and epistemology. Preservice teachers used computer modeling and simulation tools…

  12. A Vectorial Model to Compute Terrain Parameters, Local and Remote Sheltering, Scattering and Albedo using TIN Domains for Hydrologic Modeling.

    NASA Astrophysics Data System (ADS)

    Moreno, H. A.; Ogden, F. L.; Steinke, R. C.; Alvarez, L. V.

    2015-12-01

    Triangulated Irregular Networks (TINs) are increasingly popular for terrain representation in high performance surface and hydrologic modeling by their skill to capture significant changes in surface forms such as topographical summits, slope breaks, ridges, valley floors, pits and cols. This work presents a methodology for estimating slope, aspect and the components of the incoming solar radiation by using a vectorial approach within a topocentric coordinate system by establishing geometric relations between groups of TIN elements and the sun position. A normal vector to the surface of each TIN element describes slope and aspect while spherical trigonometry allows computing a unit vector defining the position of the sun at each hour and DOY. Thus, a dot product determines the radiation flux at each TIN element. Remote shading is computed by scanning the projection of groups of TIN elements in the direction of the closest perpendicular plane to the sun vector. Sky view fractions are computed by a simplified scanning algorithm in prescribed directions and are useful to determine diffuse radiation. Finally, remote radiation scattering is computed from the sky view factor complementary functions for prescribed albedo values of the surrounding terrain only for significant angles above the horizon. This methodology represents an improvement on the current algorithms to compute terrain and radiation parameters on TINs in an efficient manner. All terrain features (e.g. slope, aspect, sky view factors and remote sheltering) can be pre-computed and stored for easy access for a subsequent ground surface or hydrologic simulation.
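
    A minimal sketch of the vectorial approach described above is given below: slope and aspect follow from a TIN facet's unit normal, and the direct-beam flux from a dot product with the sun's unit position vector. The vertex coordinates, solar angles, and incident flux are illustrative placeholders.

    ```python
    import numpy as np

    # One TIN element: three vertices in a local east-north-up frame (metres)
    v0 = np.array([0.0, 0.0, 100.0])
    v1 = np.array([30.0, 0.0, 95.0])
    v2 = np.array([0.0, 30.0, 110.0])
    normal = np.cross(v1 - v0, v2 - v0)
    normal /= np.linalg.norm(normal)
    if normal[2] < 0:                          # make the normal point upward
        normal = -normal

    slope = np.degrees(np.arccos(normal[2]))                        # angle from horizontal
    aspect = np.degrees(np.arctan2(normal[0], normal[1])) % 360.0   # clockwise from north

    # Sun unit vector from (illustrative) solar elevation and azimuth
    elev, azim = np.radians(35.0), np.radians(150.0)
    sun = np.array([np.cos(elev) * np.sin(azim), np.cos(elev) * np.cos(azim), np.sin(elev)])

    S0 = 1000.0                                # direct-beam flux on a normal surface (W/m^2)
    flux = S0 * max(np.dot(normal, sun), 0.0)  # zero when the facet faces away from the sun
    print(f"slope {slope:.1f} deg, aspect {aspect:.1f} deg, direct flux {flux:.0f} W/m^2")
    ```

    Remote sheltering and sky view fractions would then be layered on top of this per-facet computation, as the abstract describes, by scanning neighboring facets along the sun vector and prescribed sky directions.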

  13. Simulation Accelerator

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Under a NASA SBIR (Small Business Innovative Research) contract, (NAS5-30905), EAI Simulation Associates, Inc., developed a new digital simulation computer, Starlight(tm). With an architecture based on the analog model of computation, Starlight(tm) outperforms all other computers on a wide range of continuous system simulation. This system is used in a variety of applications, including aerospace, automotive, electric power and chemical reactors.

  14. Using Computational Simulations to Confront Students' Mental Models

    ERIC Educational Resources Information Center

    Rodrigues, R.; Carvalho, P. Simeão

    2014-01-01

    In this paper we show an example of how to use a computational simulation to obtain visual feedback for students' mental models, and compare their predictions with the simulated system's behaviour. Additionally, we use the computational simulation to incrementally modify the students' mental models in order to accommodate new data,…

  15. The Role of Computer Simulation in an Inquiry-Based Learning Environment: Reconstructing Geological Events as Geologists

    ERIC Educational Resources Information Center

    Lin, Li-Fen; Hsu, Ying-Shao; Yeh, Yi-Fen

    2012-01-01

    Several researchers have investigated the effects of computer simulations on students' learning. However, few have focused on how simulations with authentic contexts influences students' inquiry skills. Therefore, for the purposes of this study, we developed a computer simulation (FossilSim) embedded in an authentic inquiry lesson. FossilSim…

  16. Computer-generated forces in distributed interactive simulation

    NASA Astrophysics Data System (ADS)

    Petty, Mikel D.

    1995-04-01

    Distributed Interactive Simulation (DIS) is an architecture for building large-scale simulation models from a set of independent simulator nodes communicating via a common network protocol. DIS is most often used to create a simulated battlefield for military training. Computer Generated Forces (CGF) systems control large numbers of autonomous battlefield entities in a DIS simulation using computer equipment and software rather than humans in simulators. CGF entities serve as both enemy forces and supplemental friendly forces in a DIS exercise. Research into various aspects of CGF systems is ongoing. Several CGF systems have been implemented.

  17. SMURC: High-Dimension Small-Sample Multivariate Regression With Covariance Estimation.

    PubMed

    Bayar, Belhassen; Bouaynaya, Nidhal; Shterenberg, Roman

    2017-03-01

    We consider a high-dimension low sample-size multivariate regression problem that accounts for correlation of the response variables. The system is underdetermined as there are more parameters than samples. We show that the maximum likelihood approach with covariance estimation is senseless because the likelihood diverges. We subsequently propose a normalization of the likelihood function that guarantees convergence. We call this method small-sample multivariate regression with covariance (SMURC) estimation. We derive an optimization problem and its convex approximation to compute SMURC. Simulation results show that the proposed algorithm outperforms the regularized likelihood estimator with known covariance matrix and the sparse conditional Gaussian graphical model. We also apply SMURC to the inference of the wing-muscle gene network of the Drosophila melanogaster (fruit fly).

  18. The evolution of structural and chemical heterogeneity during rapid solidification at gas atomization

    NASA Astrophysics Data System (ADS)

    Golod, V. M.; Sufiiarov, V. Sh

    2017-04-01

    Gas atomization is a high-performance process for manufacturing superfine metal powders. Formation of the powder particles takes place primarily through the fragmentation of alloy melt flow with high-pressure inert gas, which leads to the formation of non-uniform sized micron-scale particles and subsequent their rapid solidification due to heat exchange with gas environment. The article presents results of computer modeling of crystallization process, simulation and experimental studies of the cellular-dendrite structure formation and microsegregation in different size particles. It presents results of adaptation of the approach for local nonequilibrium solidification to conditions of crystallization at gas atomization, detected border values of the particle size at which it is possible a manifestation of diffusionless crystallization.

  19. Improvements, testing and development of the ADM-τ sub-grid surface tension model for two-phase LES

    NASA Astrophysics Data System (ADS)

    Aniszewski, Wojciech

    2016-12-01

    In this paper, a specific subgrid term occurring in Large Eddy Simulation (LES) of two-phase flows is investigated. This and other subgrid terms are presented, we subsequently elaborate on the existing models for those and re-formulate the ADM-τ model for sub-grid surface tension previously published by these authors. This paper presents a substantial, conceptual simplification over the original model version, accompanied by a decrease in its computational cost. At the same time, it addresses the issues the original model version faced, e.g. introduces non-isotropic applicability criteria based on resolved interface's principal curvature radii. Additionally, this paper introduces more throughout testing of the ADM-τ, in both simple and complex flows.

  20. Computer Simulations to Support Science Instruction and Learning: A critical review of the literature

    NASA Astrophysics Data System (ADS)

    Smetana, Lara Kathleen; Bell, Randy L.

    2012-06-01

    Researchers have explored the effectiveness of computer simulations for supporting science teaching and learning during the past four decades. The purpose of this paper is to provide a comprehensive, critical review of the literature on the impact of computer simulations on science teaching and learning, with the goal of summarizing what is currently known and providing guidance for future research. We report on the outcomes of 61 empirical studies dealing with the efficacy of, and implications for, computer simulations in science instruction. The overall findings suggest that simulations can be as effective, and in many ways more effective, than traditional (i.e. lecture-based, textbook-based and/or physical hands-on) instructional practices in promoting science content knowledge, developing process skills, and facilitating conceptual change. As with any other educational tool, the effectiveness of computer simulations is dependent upon the ways in which they are used. Thus, we outline specific research-based guidelines for best practice. Computer simulations are most effective when they (a) are used as supplements; (b) incorporate high-quality support structures; (c) encourage student reflection; and (d) promote cognitive dissonance. Used appropriately, computer simulations involve students in inquiry-based, authentic science explorations. Additionally, as educational technologies continue to evolve, advantages such as flexibility, safety, and efficiency deserve attention.

  1. A computer-controlled scintiscanning system and associated computer graphic techniques for study of regional distribution of blood flow.

    NASA Technical Reports Server (NTRS)

    Coulam, C. M.; Dunnette, W. H.; Wood, E. H.

    1970-01-01

    Two methods whereby a digital computer may be used to regulate a scintiscanning process are discussed from the viewpoint of computer input-output software. The computer's function, in this case, is to govern the data acquisition and storage, and to display the results to the investigator in a meaningful manner, both during and subsequent to the scanning process. Several methods (such as three-dimensional maps, contour plots, and wall-reflection maps) have been developed by means of which the computer can graphically display the data on-line, for real-time monitoring purposes, during the scanning procedure and subsequently for detailed analysis of the data obtained. A computer-governed method for converting scintiscan data recorded over the dorsal or ventral surfaces of the thorax into fractions of pulmonary blood flow traversing the right and left lungs is presented.

  2. All Roads Lead to Computing: Making, Participatory Simulations, and Social Computing as Pathways to Computer Science

    ERIC Educational Resources Information Center

    Brady, Corey; Orton, Kai; Weintrop, David; Anton, Gabriella; Rodriguez, Sebastian; Wilensky, Uri

    2017-01-01

    Computer science (CS) is becoming an increasingly diverse domain. This paper reports on an initiative designed to introduce underrepresented populations to computing using an eclectic, multifaceted approach. As part of a yearlong computing course, students engage in Maker activities, participatory simulations, and computing projects that…

  3. The effect of distributed virtual reality simulation training on cognitive load during subsequent dissection training.

    PubMed

    Andersen, Steven Arild Wuyts; Konge, Lars; Sørensen, Mads Sølvsten

    2018-05-07

    Complex tasks such as surgical procedures can induce excessive cognitive load (CL), which can have a negative effect on learning, especially for novices. This study investigated whether repeated and distributed virtual reality (VR) simulation practice induces a lower CL and higher performance in subsequent cadaveric dissection training. In a prospective, controlled cohort study, 37 residents in otorhinolaryngology received VR simulation training either as additional distributed practice prior to course participation (intervention) (9 participants) or as standard practice during the course (control) (28 participants). Cognitive load was estimated as the relative change in secondary-task reaction time during VR simulation and cadaveric procedures. Structured distributed VR simulation practice resulted in a lower mean relative increase in reaction time (32% vs. 47% for the intervention and control groups, respectively, p < 0.01) as well as a superior final-product performance during subsequent cadaveric dissection training. Repeated and distributed VR simulation thus induces a lower CL when the complexity of the learning situation increases. A suggested mechanism is the formation of mental schemas and reduction of the intrinsic CL. This has potential implications for surgical skills training and suggests that structured, distributed training be systematically implemented in surgical training curricula.
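
    For concreteness, the cognitive-load measure used here is just the relative change in secondary-task reaction time, as in the tiny illustration below; the baseline and dual-task times are hypothetical, chosen to reproduce the 32% group mean reported above.

    ```python
    baseline_rt = 0.450   # s, secondary task alone (hypothetical)
    dual_task_rt = 0.594  # s, secondary task during the procedure (hypothetical)
    relative_change = (dual_task_rt - baseline_rt) / baseline_rt
    print(f"estimated cognitive load: {relative_change:.0%}")   # -> 32%
    ```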

  4. Characterizing the role of the hippocampus during episodic simulation and encoding.

    PubMed

    Thakral, Preston P; Benoit, Roland G; Schacter, Daniel L

    2017-12-01

    The hippocampus has been consistently associated with episodic simulation (i.e., the mental construction of a possible future episode). In a recent study, we identified an anterior-posterior temporal dissociation within the hippocampus during simulation. Specifically, transient simulation-related activity occurred in relatively posterior portions of the hippocampus and sustained activity occurred in anterior portions. In line with previous theoretical proposals of hippocampal function during simulation, the posterior hippocampal activity was interpreted as reflecting a transient retrieval process for the episodic details necessary to construct an episode. In contrast, the sustained anterior hippocampal activity was interpreted as reflecting the continual recruitment of encoding and/or relational processing associated with a simulation. In the present study, we provide a direct test of these interpretations by conducting a subsequent memory analysis of our previously published data to assess whether successful encoding during episodic simulation is associated with the anterior hippocampus. Analyses revealed a subsequent memory effect (i.e., later remembered > later forgotten simulations) in the anterior hippocampus. The subsequent memory effect was transient and not sustained. Taken together, the current findings provide further support for a component process model of hippocampal function during simulation. That is, unique regions of the hippocampus support dissociable processes during simulation, which include the transient retrieval of episodic information, the sustained binding of such information into a coherent episode, and the transient encoding of that episode for later retrieval. © 2017 Wiley Periodicals, Inc.

  5. Self-consistent semi-analytic models of the first stars

    NASA Astrophysics Data System (ADS)

    Visbal, Eli; Haiman, Zoltán; Bryan, Greg L.

    2018-04-01

    We have developed a semi-analytic framework to model the large-scale evolution of the first Population III (Pop III) stars and the transition to metal-enriched star formation. Our model follows dark matter haloes from cosmological N-body simulations, utilizing their individual merger histories and three-dimensional positions, and applies physically motivated prescriptions for star formation and feedback from Lyman-Werner (LW) radiation, hydrogen ionizing radiation, and external metal enrichment due to supernovae winds. This method is intended to complement analytic studies, which do not include clustering or individual merger histories, and hydrodynamical cosmological simulations, which include detailed physics, but are computationally expensive and have limited dynamic range. Utilizing this technique, we compute the cumulative Pop III and metal-enriched star formation rate density (SFRD) as a function of redshift at z ≥ 20. We find that varying the model parameters leads to significant qualitative changes in the global star formation history. The Pop III star formation efficiency and the delay time between Pop III and subsequent metal-enriched star formation are found to have the largest impact. The effect of clustering (i.e. including the three-dimensional positions of individual haloes) on various feedback mechanisms is also investigated. The impact of clustering on LW and ionization feedback is found to be relatively mild in our fiducial model, but can be larger if external metal enrichment can promote metal-enriched star formation over large distances.

  6. Studies of Flame Structure in Microgravity

    NASA Technical Reports Server (NTRS)

    Law, C. K.; Sung, C. J.; Zhu, D. L.

    1997-01-01

    The present research endeavor is concerned with gaining fundamental understanding of the configuration, structure, and dynamics of laminar premixed and diffusion flames under conditions of negligible effects of gravity. Of particular interest is the potential to establish and hence study the properties of spherically- and cylindrically-symmetric flames and their response to external forces not related to gravity. For example, in an earlier experimental study of the burner-stabilized cylindrical premixed flames, the possibility of flame stabilization through flow divergence was established, while the resulting one-dimensional, adiabatic, stretchless flame also allowed an accurate means of determining the laminar flame speeds of combustible mixtures. We have recently extended our studies of the flame structure in microgravity along the following directions: (1) Analysis of the dynamics of spherical premixed flames; (2) Analysis of the spreading of cylindrical diffusion flames; (3) Experimental observation of an interesting dual luminous zone structure of a steady-state, microbuoyancy, spherical diffusion flame of air burning in a hydrogen/methane mixture environment, and its subsequent quantification through computational simulation with detailed chemistry and transport; (4) Experimental quantification of the unsteady growth of a spherical diffusion flame; and (5) Computational simulation of stretched, diffusionally-imbalanced premixed flames near and beyond the conventional limits of flammability, and the substantiation of the concept of extended limits of flammability. Motivation and results of these investigations are individually discussed.

  7. Computational fluid dynamic modeling of a medium-sized surface mine blasthole drill shroud

    PubMed Central

    Zheng, Y.; Reed, W.R.; Zhou, L.; Rider, J.P.

    2016-01-01

    The Pittsburgh Mining Research Division of the U.S. National Institute for Occupational Safety and Health (NIOSH) recently developed a series of models using computational fluid dynamics (CFD) to study airflows and respirable dust distribution associated with a medium-sized surface blasthole drill shroud with a dry dust collector system. Previously run experiments conducted in NIOSH’s full-scale drill shroud laboratory were used to validate the models. The setup values in the CFD models were calculated from experimental data obtained from the drill shroud laboratory and measurements of test material particle size. Subsequent simulation results were compared with the experimental data for several test scenarios, including 0.14 m3/s (300 cfm) and 0.24 m3/s (500 cfm) bailing airflow with 2:1, 3:1 and 4:1 dust collector-to-bailing airflow ratios. For the 2:1 and 3:1 ratios, the calculated dust concentrations from the CFD models were within the 95 percent confidence intervals of the experimental data. This paper describes the methodology used to develop the CFD models, to calculate the model input and to validate the models based on the experimental data. Problem regions were identified and revealed by the study. The simulation results could be used for future development of dust control methods for a surface mine blasthole drill shroud. PMID:27932851

  8. Pore-Scale X-ray Micro-CT Imaging and Analysis of Oil Shales

    NASA Astrophysics Data System (ADS)

    Saif, T.

    2015-12-01

    The pore structure and the connectivity of the pore space during the pyrolysis of oil shales are important characteristics which determine hydrocarbon flow behaviour and ultimate recovery. We study the effect of temperature on the evolution of pore space and subsequent permeability in five oil shale samples: (1) Vernal, Utah, United States; (2) El Lajjun, Al Karak, Jordan; (3) Gladstone, Queensland, Australia; (4) Fushun, China; and (5) Kimmeridge, United Kingdom. Oil shale cores 5 mm in diameter were pyrolyzed at 300, 400 and 500 °C. 3D imaging of the 5 mm diameter core samples was performed at 1 μm voxel resolution using X-ray micro computed tomography (CT), and the evolution of the pore structures was characterized. The experimental results indicate that the thermal decomposition of kerogen at high temperatures is a major factor causing micro-scale changes in the internal structure of oil shales. At the early stage of pyrolysis, micron-scale heterogeneous pores were formed and, with a further increase in temperature, the pores expanded and became interconnected by fractures. Permeability for each oil shale sample at each temperature was computed by simulation directly on the image voxels and by pore network extraction and simulation. Future work will investigate different samples and pursue in situ micro-CT imaging of oil shale pyrolysis to characterize the time evolution of the pore space.

  9. An Algorithm and R Program for Fitting and Simulation of Pharmacokinetic and Pharmacodynamic Data.

    PubMed

    Li, Jijie; Yan, Kewei; Hou, Lisha; Du, Xudong; Zhu, Ping; Zheng, Li; Zhu, Cairong

    2017-06-01

    Pharmacokinetic/pharmacodynamic link models are widely used in dose-finding studies. By applying such models, the results of initial pharmacokinetic/pharmacodynamic studies can be used to predict the potential therapeutic dose range. This knowledge can improve the design of later comparative large-scale clinical trials by reducing the number of participants and saving time and resources. However, the modeling process can be challenging, time consuming, and costly, even when using cutting-edge, powerful pharmacological software. Here, we provide a freely available R program for expediently analyzing pharmacokinetic/pharmacodynamic data, including data importation, parameter estimation, simulation, and model diagnostics. First, we explain the theory related to the establishment of the pharmacokinetic/pharmacodynamic link model. Subsequently, we present the algorithms used for parameter estimation and potential therapeutic dose computation. The implementation of the R program is illustrated by a clinical example. The software package is then validated by comparing the model parameters and the goodness-of-fit statistics generated by our R package with those generated by the widely used pharmacological software WinNonlin. The pharmacokinetic and pharmacodynamic parameters as well as the potential recommended therapeutic dose can be acquired with the R package. The validation process shows that the parameters estimated using our package are satisfactory. The R program developed and presented here provides pharmacokinetic researchers with a simple and easy-to-access tool for pharmacokinetic/pharmacodynamic analysis on personal computers.
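
    To make the modeling steps concrete, the sketch below fits a one-compartment oral-absorption pharmacokinetic model and an Emax pharmacodynamic link to made-up data in Python; it illustrates the general idea of PK/PD link modeling only and is not the authors' R package, algorithm, or data.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def conc_1cpt_oral(t, ka, ke, v):
        """One-compartment PK model with first-order absorption (dose 100, F = 1 assumed)."""
        dose = 100.0
        return dose * ka / (v * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

    def emax(c, e0, em, ec50):
        """Simple Emax pharmacodynamic link model."""
        return e0 + em * c / (ec50 + c)

    # Hypothetical observed concentration-time and effect data
    t_obs = np.array([0.5, 1, 2, 4, 6, 8, 12, 24], dtype=float)
    c_obs = np.array([3.2, 5.1, 6.0, 5.2, 4.1, 3.2, 1.9, 0.5])
    e_obs = np.array([12, 18, 21, 19, 16, 13, 9, 4], dtype=float)

    # Estimate PK parameters first, then PD parameters from the model-predicted concentrations
    (ka, ke, v), _ = curve_fit(conc_1cpt_oral, t_obs, c_obs, p0=(1.0, 0.2, 10.0))
    c_pred = conc_1cpt_oral(t_obs, ka, ke, v)
    (e0, em_, ec50), _ = curve_fit(emax, c_pred, e_obs, p0=(2.0, 25.0, 3.0))

    print(f"PK: ka={ka:.2f}/h  ke={ke:.2f}/h  V={v:.1f} L")
    print(f"PD: E0={e0:.1f}  Emax={em_:.1f}  EC50={ec50:.2f}")
    ```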

  10. Evaluation of Computer Simulations for Teaching Apparel Merchandising Concepts.

    ERIC Educational Resources Information Center

    Jolly, Laura D.; Sisler, Grovalynn

    1988-01-01

    The study developed and evaluated computer simulations for teaching apparel merchandising concepts. Evaluation results indicated that teaching method (computer simulation versus case study) does not significantly affect cognitive learning. Student attitudes varied, however, according to topic (profitable merchandising analysis versus retailing…

  11. Encapsulating model complexity and landscape-scale analyses of state-and-transition simulation models: an application of ecoinformatics and juniper encroachment in sagebrush steppe ecosystems

    USGS Publications Warehouse

    O'Donnell, Michael

    2015-01-01

    State-and-transition simulation modeling relies on knowledge of vegetation composition and structure (states) that describe community conditions, mechanistic feedbacks such as fire that can affect vegetation establishment, and ecological processes that drive community conditions as well as the transitions between these states. However, as the need for modeling larger and more complex landscapes increases, a more advanced awareness of computing resources becomes essential. The objectives of this study include identifying challenges of executing state-and-transition simulation models, identifying common bottlenecks of computing resources, developing a workflow and software that enable parallel processing of Monte Carlo simulations, and identifying the advantages and disadvantages of different computing resources. To address these objectives, this study used the ApexRMS® SyncroSim software and embarrassingly parallel tasks of Monte Carlo simulations on a single multicore computer and on distributed computing systems. The results demonstrated that state-and-transition simulation models scale best in distributed computing environments, such as high-throughput and high-performance computing, because these environments disseminate the workloads across many compute nodes, thereby supporting analysis of larger landscapes, higher spatial resolution vegetation products, and more complex models. Using a case study and five different computing environments, the top result (high-throughput computing versus serial computations) indicated a decrease of approximately 96.6% in computing time. With a single, multicore compute node (bottom result), computing time decreased by 81.8% relative to serial computations. These results provide insight into the tradeoffs of using different computing resources when research necessitates advanced integration of ecoinformatics incorporating large and complicated data inputs and models.
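
    Because each Monte Carlo replicate is independent, the parallelization pattern is easy to sketch on a single multicore machine. The toy state-and-transition model, transition probabilities, and replicate count below are hypothetical stand-ins for a SyncroSim run, not the actual software or workflow.

    ```python
    import multiprocessing as mp
    import random

    STATES = ["sagebrush", "juniper_encroached", "burned"]

    def run_replicate(seed, n_years=50):
        """One Monte Carlo replicate of a toy state-and-transition model for a single cell."""
        rng = random.Random(seed)
        state = "sagebrush"
        for _ in range(n_years):
            if state == "sagebrush" and rng.random() < 0.04:   # hypothetical encroachment rate
                state = "juniper_encroached"
            elif state != "burned" and rng.random() < 0.01:    # hypothetical fire probability
                state = "burned"
        return state

    if __name__ == "__main__":
        seeds = range(1000)                      # 1000 independent replicates
        with mp.Pool() as pool:                  # one worker per available core
            finals = pool.map(run_replicate, seeds)
        for s in STATES:
            print(s, finals.count(s) / len(finals))
    ```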

  12. Methods for Computationally Efficient Structured CFD Simulations of Complex Turbomachinery Flows

    NASA Technical Reports Server (NTRS)

    Herrick, Gregory P.; Chen, Jen-Ping

    2012-01-01

    This research presents more efficient computational methods by which to perform multi-block structured Computational Fluid Dynamics (CFD) simulations of turbomachinery, thus facilitating higher-fidelity solutions of complicated geometries and their associated flows. This computational framework offers flexibility in allocating resources to balance process count and wall-clock computation time, while facilitating research interests of simulating axial compressor stall inception with more complete gridding of the flow passages and rotor tip clearance regions than is typically practiced with structured codes. The paradigm presented herein facilitates CFD simulation of previously impractical geometries and flows. These methods are validated and demonstrate improved computational efficiency when applied to complicated geometries and flows.

  13. Monte Carlo simulation of chemistry following radiolysis with TOPAS-nBio

    NASA Astrophysics Data System (ADS)

    Ramos-Méndez, J.; Perl, J.; Schuemann, J.; McNamara, A.; Paganetti, H.; Faddegon, B.

    2018-05-01

    Simulation of water radiolysis and the subsequent chemistry provides important information on the effect of ionizing radiation on biological material. The Geant4 Monte Carlo toolkit has added chemical processes via the Geant4-DNA project. The TOPAS tool simplifies the modeling of complex radiotherapy applications with Geant4 without requiring advanced computational skills, extending the pool of users. Thus, a new extension to TOPAS, TOPAS-nBio, is under development to facilitate the configuration of track-structure simulations as well as water radiolysis simulations with Geant4-DNA for radiobiological studies. In this work, radiolysis simulations were implemented in TOPAS-nBio. Users may now easily add chemical species and their reactions, and set parameters including branching ratios, dissociation schemes, diffusion coefficients, and reaction rates. In addition, parameters for the chemical stage were re-evaluated and updated from those used by default in Geant4-DNA to improve the accuracy of chemical yields. Simulation results of time-dependent and LET-dependent primary yields Gx (chemical species per 100 eV deposited) produced at neutral pH and 25 °C by short track-segments of charged particles were compared to published measurements. The LET range was 0.05–230 keV/µm. The calculated Gx values for electrons satisfied the material balance equation within 0.3%, and similarly for protons, albeit with a long calculation time. A smaller geometry was used to speed up proton and alpha simulations, with an acceptable difference in the balance equation of 1.3%. Available experimental data of time-dependent G-values agreed with simulated results to within 7% ± 8% over the entire time range for one of the tracked species and to within 3% ± 4% over the full time range for another; for H2O2, the difference ranged from 49% ± 7% at the earliest stages to 3% ± 12% at saturation. For the LET-dependent Gx of the individual species (among them H2 and H2O2), the mean ratios to the experimental data were 1.11 ± 0.98, 1.21 ± 1.11, 1.05 ± 0.52, 1.23 ± 0.59 and 1.49 ± 0.63 (1 standard deviation). In conclusion, radiolysis and the subsequent chemistry with Geant4-DNA have been successfully incorporated into TOPAS-nBio. Results are in reasonable agreement with published measured and simulated data.
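
    As a point of reference for the quantity being compared, the primary yield Gx is simply the number of molecules of species x produced per 100 eV of deposited energy, so converting a simulation tally into a G-value is a one-line calculation; the counts and deposited energy below are made-up numbers, not results from the paper.

    ```python
    def g_value(n_species, energy_deposited_ev):
        """Primary yield Gx: molecules of species x produced per 100 eV of deposited energy."""
        return 100.0 * n_species / energy_deposited_ev

    # Hypothetical tally: 2.7e4 molecules of a species scored for 1 MeV of deposited energy
    print(g_value(2.7e4, 1.0e6))   # -> 2.7 molecules per 100 eV
    ```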

  14. Longitudinal train dynamics: an overview

    NASA Astrophysics Data System (ADS)

    Wu, Qing; Spiryagin, Maksym; Cole, Colin

    2016-12-01

    This paper discusses the evolution of longitudinal train dynamics (LTD) simulations, which covers numerical solvers, vehicle connection systems, air brake systems, wagon dumper systems and locomotives, resistance forces and gravitational components, vehicle in-train instabilities, and computing schemes. A number of potential research topics are suggested, such as modelling of friction, polymer, and transition characteristics for vehicle connection simulations, studies of wagon dumping operations, proper modelling of vehicle in-train instabilities, and computing schemes for LTD simulations. Evidence shows that LTD simulations have evolved with computing capabilities. Currently, advanced component models that directly describe the working principles of the operation of air brake systems, vehicle connection systems, and traction systems are available. Parallel computing is a good solution to combine and simulate all these advanced models. Parallel computing can also be used to conduct three-dimensional long train dynamics simulations.

  15. Simulation Framework for Intelligent Transportation Systems

    DOT National Transportation Integrated Search

    1996-10-01

    A simulation framework has been developed for a large-scale, comprehensive, scalable simulation of an Intelligent Transportation System. The simulator is designed for running on parallel computers and distributed (networked) computer systems, but ca...

  16. Thermodynamic and transport properties of nitrogen fluid: Molecular theory and computer simulations

    NASA Astrophysics Data System (ADS)

    Eskandari Nasrabad, A.; Laghaei, R.

    2018-04-01

    Computer simulations and various theories are applied to compute the thermodynamic and transport properties of nitrogen fluid. To model the nitrogen interaction, an existing potential in the literature is modified to obtain a close agreement between the simulation results and experimental data for the orthobaric densities. We use the Generic van der Waals theory to calculate the mean free volume and apply the results within the modified Cohen-Turnbull relation to obtain the self-diffusion coefficient. Compared to experimental data, excellent results are obtained via computer simulations for the orthobaric densities, the vapor pressure, the equation of state, and the shear viscosity. We analyze the results of the theory and computer simulations for the various thermophysical properties.

  17. Development of an Output-based Adaptive Method for Multi-Dimensional Euler and Navier-Stokes Simulations

    NASA Technical Reports Server (NTRS)

    Darmofal, David L.

    2003-01-01

    The use of computational simulations in the prediction of complex aerodynamic flows is becoming increasingly prevalent in the design process within the aerospace industry. Continuing advancements in both computing technology and algorithmic development are ultimately leading to attempts at simulating ever-larger, more complex problems. However, by increasing the reliance on computational simulations in the design cycle, we must also increase the accuracy of these simulations in order to maintain or improve the reliability and safety of the resulting aircraft. At the same time, large-scale computational simulations must be made more affordable so that their potential benefits can be fully realized within the design cycle. Thus, a continuing need exists for increasing the accuracy and efficiency of computational algorithms such that computational fluid dynamics can become a viable tool in the design of more reliable, safer aircraft. The objective of this research was the development of an error estimation and grid adaptive strategy for reducing simulation errors in integral outputs (functionals) such as lift or drag from multi-dimensional Euler and Navier-Stokes simulations. In this final report, we summarize our work during this grant.

  18. GATE Monte Carlo simulation in a cloud computing environment

    NASA Astrophysics Data System (ADS)

    Rowedder, Blake Austin

    The GEANT4-based GATE is a unique and powerful Monte Carlo (MC) platform, which provides a single code library allowing the simulation of specific medical physics applications, e.g. PET, SPECT, CT, radiotherapy, and hadron therapy. However, this rigorous yet flexible platform is used only sparingly in the clinic due to its lengthy calculation time. By accessing the powerful computational resources of a cloud computing environment, GATE's runtime can be significantly reduced to clinically feasible levels without the sizable investment of a local high performance cluster. This study investigated a reliable and efficient execution of GATE MC simulations using a commercial cloud computing service. Amazon's Elastic Compute Cloud was used to launch several nodes equipped with GATE. Job data was initially broken up on the local computer, then uploaded to the worker nodes on the cloud. The results were automatically downloaded and aggregated on the local computer for display and analysis. Five simulations were repeated for every cluster size between 1 and 20 nodes. Ultimately, increasing cluster size resulted in a decrease in calculation time that could be expressed with an inverse power model. Comparing the benchmark results to the published values and error margins indicated that the simulation results were not affected by the cluster size and thus that the integrity of a calculation is preserved in a cloud computing environment. The runtime of a 53-minute-long simulation was decreased to 3.11 minutes when run on a 20-node cluster. The ability to improve the speed of simulation suggests that fast MC simulations are viable for imaging and radiotherapy applications. With high-performance computing continuing to fall in price and grow in accessibility, implementing Monte Carlo techniques with cloud computing for clinical applications will continue to become more attractive.
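
    The reported timings are enough to check the inverse power model directly. The short calculation below treats the 53-minute local runtime as the single-node baseline and assumes the model form t(N) = a·N^(-b); it recovers a scaling exponent close to the ideal value of 1.

    ```python
    import math

    t1, t20 = 53.0, 3.11    # minutes: local baseline vs. 20-node cluster (reported values)
    n = 20

    speedup = t1 / t20                      # about 17x
    b = math.log(t1 / t20) / math.log(n)    # exponent in t(N) = a * N**(-b)

    print(f"speedup = {speedup:.1f}x, scaling exponent b = {b:.2f}")   # b ~ 0.95 (ideal = 1)
    ```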

  19. Simulating Laboratory Procedures.

    ERIC Educational Resources Information Center

    Baker, J. E.; And Others

    1986-01-01

    Describes the use of computer assisted instruction in a medical microbiology course. Presents examples of how computer assisted instruction can present case histories in which the laboratory procedures are simulated. Discusses an authoring system used to prepare computer simulations and provides one example of a case history dealing with fractured…

  20. Real-time simulation of a spiking neural network model of the basal ganglia circuitry using general purpose computing on graphics processing units.

    PubMed

    Igarashi, Jun; Shouno, Osamu; Fukai, Tomoki; Tsujino, Hiroshi

    2011-11-01

    Real-time simulation of a biologically realistic spiking neural network is necessary for evaluation of its capacity to interact with real environments. However, the real-time simulation of such a neural network is difficult due to its high computational costs that arise from two factors: (1) vast network size and (2) the complicated dynamics of biologically realistic neurons. In order to address these problems, mainly the latter, we chose to use general purpose computing on graphics processing units (GPGPUs) for simulation of such a neural network, taking advantage of the powerful computational capability of a graphics processing unit (GPU). As a target for real-time simulation, we used a model of the basal ganglia that has been developed according to electrophysiological and anatomical knowledge. The model consists of heterogeneous populations of 370 spiking model neurons, including computationally heavy conductance-based models, connected by 11,002 synapses. Simulation of the model has not yet been performed in real-time using a general computing server. By parallelization of the model on the NVIDIA Geforce GTX 280 GPU in data-parallel and task-parallel fashion, faster-than-real-time simulation was robustly realized with only one-third of the GPU's total computational resources. Furthermore, we used the GPU's full computational resources to perform faster-than-real-time simulation of three instances of the basal ganglia model; these instances consisted of 1100 neurons and 33,006 synapses and were synchronized at each calculation step. Finally, we developed software for simultaneous visualization of faster-than-real-time simulation output. These results suggest the potential power of GPGPU techniques in real-time simulation of realistic neural networks. Copyright © 2011 Elsevier Ltd. All rights reserved.

  1. Inflight IFR procedures simulator

    NASA Technical Reports Server (NTRS)

    Parker, L. C. (Inventor)

    1984-01-01

    An inflight IFR procedures simulator for generating signals and commands to conventional instruments provided in an airplane is described. The simulator includes a signal synthesizer which generates predetermined simulated signals corresponding to signals normally received from remote sources upon being activated. A computer is connected to the signal synthesizer and causes the signal synthesizer to produce simulated signals responsive to programs fed into the computer. A switching network is connected to the signal synthesizer, the antenna of the aircraft, and navigational instruments and communication devices for selectively connecting instruments and devices to the synthesizer and disconnecting the antenna from the navigational instruments and communication devices. Pressure transducers are connected to the altimeter and speed indicator for supplying electrical signals to the computer indicating the altitude and speed of the aircraft. A compass is connected to supply electrical signals to the computer indicating the heading of the airplane. The computer, upon receiving signals from the pressure transducers and compass, computes the signals that are fed to the signal synthesizer which, in turn, generates simulated navigational signals.

  2. Fast polyenergetic forward projection for image formation using OpenCL on a heterogeneous parallel computing platform.

    PubMed

    Zhou, Lili; Clifford Chao, K S; Chang, Jenghwa

    2012-11-01

    Simulated projection images of digital phantoms constructed from CT scans have been widely used for clinical and research applications, but their quality and computation speed are not optimal for real-time comparison with radiographs acquired with x-ray sources of different energies. In this paper, the authors performed polyenergetic forward projections using the open computing language (OpenCL) in a parallel computing ecosystem consisting of a CPU and a general purpose graphics processing unit (GPGPU) for fast and realistic image formation. The proposed polyenergetic forward projection uses a lookup table containing the NIST-published mass attenuation coefficients (μ/ρ) for different tissue types and photon energies ranging from 1 keV to 20 MeV. The CT images of the sites of interest are first segmented into different tissue types based on the CT numbers and converted to a three-dimensional attenuation phantom by linking each voxel to the corresponding tissue type in the lookup table. The x-ray source can be a radioisotope or an x-ray generator with a known spectrum described as weight w(n) for energy bin E(n). The Siddon method is used to compute the x-ray transmission line integral for E(n), and the x-ray fluence is the weighted sum of the exponentials of the line integrals for all energy bins, with added Poisson noise. To validate this method, a digital head and neck phantom constructed from the CT scan of a Rando head phantom was segmented into three regions (air, gray/white matter, and bone) for calculating the polyenergetic projection images for the Mohan 4 MV energy spectrum. To accelerate the calculation, the authors partitioned the workloads using task parallelism and data parallelism and scheduled them in a parallel computing ecosystem consisting of a CPU and a GPGPU (NVIDIA Tesla C2050) using OpenCL only. The authors explored a task-overlapping strategy and a sequential method for generating the first and subsequent digitally reconstructed radiographs (DRRs). A dispatcher was designed to drive the high degree of parallelism of the task-overlapping strategy. Numerical experiments were conducted to compare the performance of the OpenCL/GPGPU-based implementation with the CPU-based implementation. The projection images were similar to typical portal images obtained with a 4 or 6 MV x-ray source. For a phantom size of 512 × 512 × 223, the time for calculating the line integrals for a 512 × 512 image panel was 16.2 ms on the GPGPU for one energy bin, compared with 8.83 s on the CPU. The total computation time for generating one polyenergetic projection image of 512 × 512 was 0.3 s (141 s for the CPU). The relative difference between the projection images obtained with the CPU-based and OpenCL/GPGPU-based implementations was on the order of 10^-6, and the images were virtually indistinguishable. The task-overlapping strategy was 5.84 and 1.16 times faster than the sequential method for the first and subsequent DRRs, respectively. The authors have successfully built digital phantoms using anatomic CT images and NIST μ/ρ tables for simulating realistic polyenergetic projection images and optimized the processing speed with parallel computing using a GPGPU/OpenCL-based implementation. The computation time (0.3 s per projection image) is fast enough for real-time IGRT (image-guided radiotherapy) applications.
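
    The core accumulation described above, a weighted sum of exponentials of per-energy-bin line integrals followed by Poisson counting noise, is compact enough to sketch. The snippet below assumes the Siddon line integrals have already been computed and uses hypothetical spectrum weights and photon counts; it illustrates the formula, not the authors' OpenCL kernels.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical inputs: line_integrals[n, i, j] = sum over the ray of mu(E_n) * path length
    # for each energy bin n and detector pixel (i, j), e.g. produced by a Siddon ray trace.
    n_bins, ny, nx = 4, 64, 64
    line_integrals = rng.uniform(0.5, 4.0, size=(n_bins, ny, nx))
    weights = np.array([0.4, 0.3, 0.2, 0.1])    # spectrum weights w(n), assumed to sum to 1
    incident_photons = 1e4                      # photons per detector pixel (assumed)

    # Polyenergetic transmission: I = N0 * sum_n w(n) * exp(-L_n), then Poisson counting noise
    transmission = np.tensordot(weights, np.exp(-line_integrals), axes=1)
    noisy_image = rng.poisson(incident_photons * transmission)

    print(noisy_image.shape, noisy_image.mean())
    ```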

  3. Processing data communications events by awakening threads in parallel active messaging interface of a parallel computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.

    Processing data communications events in a parallel active messaging interface (`PAMI`) of a parallel computer that includes compute nodes that execute a parallel application, with the PAMI including data communications endpoints, and the endpoints are coupled for data communications through the PAMI and through other data communications resources, including determining by an advance function that there are no actionable data communications events pending for its context, placing by the advance function its thread of execution into a wait state, waiting for a subsequent data communications event for the context; responsive to occurrence of a subsequent data communications event for the context, awakening by the thread from the wait state; and processing by the advance function the subsequent data communications event now pending for the context.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bailey, David H.

    The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as they once were for comparing high-end system performance, they continue to be studied and analyzed a great deal in the high-performance computing community. The acronym 'NAS' originally stood for the Numerical Aerodynamic Simulation Program at NASA Ames. The name of this organization was subsequently changed to the Numerical Aerospace Simulation Program, and more recently to the NASA Advanced Supercomputing Center, although the acronym remains 'NAS.' The developers of the original NPB suite were David H. Bailey, Eric Barszcz, John Barton, David Browning, Russell Carter, Leo Dagum, Rod Fatoohi, Samuel Fineberg, Paul Frederickson, Thomas Lasinski, Rob Schreiber, Horst Simon, V. Venkatakrishnan and Sisira Weeratunga. The original NAS Parallel Benchmarks consisted of eight individual benchmark problems, each of which focused on some aspect of scientific computing. The principal focus was on computational aerophysics, although most of these benchmarks have much broader relevance, since in a much larger sense they are typical of many real-world scientific computing applications. The NPB suite grew out of the need for a more rational procedure to select new supercomputers for acquisition by NASA. The emergence of commercially available highly parallel computer systems in the late 1980s offered an attractive alternative to parallel vector supercomputers that had been the mainstay of high-end scientific computing. However, the introduction of highly parallel systems was accompanied by a regrettable level of hype, not only on the part of the commercial vendors but even, in some cases, by scientists using the systems. As a result, it was difficult to discern whether the new systems offered any fundamental performance advantage over vector supercomputers, and, if so, which of the parallel offerings would be most useful in real-world scientific computation. In part to draw attention to some of the performance reporting abuses prevalent at the time, the present author wrote a humorous essay 'Twelve Ways to Fool the Masses,' which described in a light-hearted way a number of the questionable ways in which both vendor marketing people and scientists were inflating and distorting their performance results. All of this underscored the need for an objective and scientifically defensible measure to compare performance on these systems.

  5. ReaxFF Grand Canonical Monte Carlo simulation of adsorption and dissociation of oxygen on platinum (111)

    NASA Astrophysics Data System (ADS)

    Valentini, Paolo; Schwartzentruber, Thomas E.; Cozmuta, Ioana

    2011-12-01

    Atomic-level Grand Canonical Monte Carlo (GCMC) simulations equipped with a reactive force field (ReaxFF) are used to study atomic oxygen adsorption on a Pt(111) surface. The off-lattice GCMC calculations presented here rely solely on the interatomic potential and do not necessitate the pre-computation of surface adlayer structures and their interpolation. As such, they provide a predictive description of adsorbate phases. In this study, validation is obtained with experimental evidence (isosteric heats of adsorption and isotherms) as well as DFT-based state diagrams available in the literature. The ReaxFF-computed isosteric heats of adsorption agree well with experimental data, and this study clearly shows that indirect dissociative adsorption of O2 on Pt(111) is an activated process at non-zero coverages, with an activation energy that monotonically increases with coverage. At a coverage of 0.25 ML, a highly ordered p(2 × 2) adlayer is found, in agreement with several low-energy electron diffraction observations. Isotherms obtained from the GCMC simulations compare qualitatively and quantitatively well with previous DFT-based state diagrams, but are in disagreement with the experimental data sets available. ReaxFF GCMC simulations at very high coverages show that O atoms prefer to bind in fcc hollow sites, at least up to the 0.8 ML considered in the present work. At moderate coverages, little to no disorder appears in the Pt lattice. At high coverages, some Pt atoms markedly protrude out of the surface plane. This observation is in qualitative agreement with recent STM images of an oxygen covered Pt surface. The use of the GCMC technique based on a transferable potential is particularly valuable to produce more realistic systems (adsorbent and adsorbate) to be used in subsequent dynamical simulations (Molecular Dynamics) to address recombination reactions (via either Eley-Rideal or Langmuir-Hinshelwood mechanisms) on variously covered surfaces. By using GCMC and Molecular Dynamics simulations, the ReaxFF force field can be a valuable tool for understanding heterogeneous catalysis on a solid surface. Finally, the use of a reactive potential is a necessary requirement to investigate problems where dissociative adsorption occurs, as is typical of many important catalytic processes.
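
    For readers unfamiliar with the Grand Canonical Monte Carlo moves underlying this study, the standard textbook insertion and deletion acceptance rules are sketched below; the thermodynamic values are placeholders, and nothing here reflects the ReaxFF-specific energy evaluation.

    ```python
    import math

    def acc_insert(beta, mu, thermal_wl, volume, n_atoms, delta_u):
        """Acceptance probability for inserting one particle (N -> N+1); delta_u = U(N+1) - U(N)."""
        pref = volume / (thermal_wl**3 * (n_atoms + 1))
        return min(1.0, pref * math.exp(beta * mu - beta * delta_u))

    def acc_delete(beta, mu, thermal_wl, volume, n_atoms, delta_u):
        """Acceptance probability for deleting one particle (N -> N-1); delta_u = U(N-1) - U(N)."""
        pref = thermal_wl**3 * n_atoms / volume
        return min(1.0, pref * math.exp(-beta * mu - beta * delta_u))

    # Placeholder values in reduced units, purely illustrative
    beta, mu, thermal_wl, volume, n_atoms = 1.0, -2.0, 0.3, 1000.0, 50
    print(acc_insert(beta, mu, thermal_wl, volume, n_atoms, delta_u=-0.5))
    print(acc_delete(beta, mu, thermal_wl, volume, n_atoms, delta_u=0.2))
    ```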

  6. Computational structural mechanics engine structures computational simulator

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.

    1989-01-01

    The Computational Structural Mechanics (CSM) program at Lewis encompasses: (1) fundamental aspects for formulating and solving structural mechanics problems, and (2) development of integrated software systems to computationally simulate the performance/durability/life of engine structures.

  7. A Fast Synthetic Aperture Radar Raw Data Simulation Using Cloud Computing

    PubMed Central

    Li, Zhixin; Su, Dandan; Zhu, Haijiang; Li, Wei; Zhang, Fan; Li, Ruirui

    2017-01-01

    Synthetic Aperture Radar (SAR) raw data simulation is a fundamental problem in radar system design and imaging algorithm research. The growth of surveying swath and resolution results in a significant increase in data volume and simulation period, which can be considered to be a comprehensive data-intensive and computing-intensive issue. Although several high performance computing (HPC) methods have demonstrated their potential for accelerating simulation, the input/output (I/O) bottleneck of huge raw data has not been eased. In this paper, we propose a cloud computing based SAR raw data simulation algorithm, which employs the MapReduce model to accelerate the raw data computing and the Hadoop distributed file system (HDFS) for fast I/O access. The MapReduce model is designed for the irregular parallel accumulation in raw data simulation, the very step that greatly reduces the parallel efficiency of graphics processing unit (GPU) based simulation methods. In addition, three kinds of optimization strategies are put forward from the aspects of programming model, HDFS configuration and scheduling. The experimental results show that the cloud computing based algorithm achieves 4× speedup over the baseline serial approach in an 8-node cloud environment, and each optimization strategy improves performance by about 20%. This work proves that the proposed cloud algorithm is capable of solving the computing-intensive and data-intensive issues in SAR raw data simulation, and is easily extended to large scale computing to achieve higher acceleration. PMID:28075343
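
    The map/reduce pattern applied to raw-data accumulation can be sketched on a single machine as shown below; the point targets, keys, and echo model are hypothetical stand-ins used only to show the irregular keyed accumulation, not the paper's Hadoop implementation.

    ```python
    import cmath
    from collections import defaultdict

    # Hypothetical point targets: (azimuth line, range bin, amplitude, phase)
    targets = [(0, 10, 1.0, 0.3), (0, 12, 0.5, 1.1), (1, 10, 0.8, 2.0), (2, 5, 1.2, 0.7)]

    def map_target(t):
        """Map step: each target emits a keyed partial echo contribution."""
        az, rng_bin, amp, phase = t
        return (az, rng_bin), amp * cmath.exp(1j * phase)

    # Reduce step: sum all contributions that share the same (azimuth, range) cell;
    # the keys repeat unevenly, which is the irregular accumulation referred to above.
    raw_data = defaultdict(complex)
    for key, contribution in map(map_target, targets):
        raw_data[key] += contribution

    for key in sorted(raw_data):
        print(key, raw_data[key])
    ```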

  8. How Effective Is Instructional Support for Learning with Computer Simulations?

    ERIC Educational Resources Information Center

    Eckhardt, Marc; Urhahne, Detlef; Conrad, Olaf; Harms, Ute

    2013-01-01

    The study examined the effects of two different instructional interventions as support for scientific discovery learning using computer simulations. In two well-known categories of difficulty, data interpretation and self-regulation, instructional interventions for learning with computer simulations on the topic "ecosystem water" were developed…

  9. SIGMA--A Graphical Approach to Teaching Simulation.

    ERIC Educational Resources Information Center

    Schruben, Lee W.

    1992-01-01

    SIGMA (Simulation Graphical Modeling and Analysis) is a computer graphics environment for building, testing, and experimenting with discrete event simulation models on personal computers. It uses symbolic representations (computer animation) to depict the logic of large, complex discrete event systems for easier understanding and has proven itself…

  10. Application of technology developed for flight simulation at NASA. Langley Research Center

    NASA Technical Reports Server (NTRS)

    Cleveland, Jeff I., II

    1991-01-01

    In order to meet the stringent time-critical requirements for real-time man-in-the-loop flight simulation, computer processing operations including mathematical model computation and data input/output to the simulators must be deterministic and be completed in as short a time as possible. Personnel at NASA's Langley Research Center are currently developing the use of supercomputers for simulation mathematical model computation for real-time simulation. This, coupled with the use of an open systems software architecture, will advance the state-of-the-art in real-time flight simulation.

  11. Computers for real time flight simulation: A market survey

    NASA Technical Reports Server (NTRS)

    Bekey, G. A.; Karplus, W. J.

    1977-01-01

    An extensive computer market survey was made to determine those available systems suitable for current and future flight simulation studies at Ames Research Center. The primary requirement is for the computation of relatively high frequency content (5 Hz) math models representing powered lift flight vehicles. The Rotor Systems Research Aircraft (RSRA) was used as a benchmark vehicle for computation comparison studies. The general nature of helicopter simulations and a description of the benchmark model are presented, and some of the sources of simulation difficulties are examined. A description of various applicable computer architectures is presented, along with detailed discussions of leading candidate systems and comparisons between them.

  12. Turbulent Flow Simulation at the Exascale: Opportunities and Challenges Workshop: August 4-5, 2015, Washington, D.C.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sprague, Michael A.; Boldyrev, Stanislav; Fischer, Paul

    This report details the impact exascale will bring to turbulent-flow simulations in applied science and technology. The need for accurate simulation of turbulent flows is evident across the DOE applied-science and engineering portfolios, including combustion, plasma physics, nuclear-reactor physics, wind energy, and atmospheric science. The workshop brought together experts in turbulent-flow simulation, computational mathematics, and high-performance computing. Building upon previous ASCR workshops on exascale computing, participants defined a research agenda and path forward that will enable scientists and engineers to continually leverage, engage, and direct advances in computational systems on the path to exascale computing.

  13. Computer Based Simulation of Laboratory Experiments.

    ERIC Educational Resources Information Center

    Edward, Norrie S.

    1997-01-01

    Examines computer based simulations of practical laboratory experiments in engineering. Discusses the aims and achievements of lab work (cognitive, process, psychomotor, and affective); types of simulations (model building and behavioral); and the strengths and weaknesses of simulations. Describes the development of a centrifugal pump simulation,…

  14. CPU SIM: A Computer Simulator for Use in an Introductory Computer Organization-Architecture Class.

    ERIC Educational Resources Information Center

    Skrein, Dale

    1994-01-01

    CPU SIM, an interactive low-level computer simulation package that runs on the Macintosh computer, is described. The program is designed for instructional use in the first or second year of undergraduate computer science, to teach various features of typical computer organization through hands-on exercises. (MSE)

  15. Development of an E-Prime Based Computer Simulation of an Interactive Human Rights Violation Negotiation Script (Developpement d’un Programme de Simulation par Ordinateur Fonde sur le Logiciel E Prime pour la Negociation Interactive en cas de Violation des Droits de la Personne)

    DTIC Science & Technology

    2010-12-01

    DRDC Toronto No. CR 2010-055. This report describes the method of developing an E-Prime computer simulation of an interactive Human Rights Violation (HRV) negotiation script. The computer simulation developed in this project is intended to be used for future research and as a possible training platform at Canadian Forces Base (CFB) Kingston.

  16. Outcomes from the DOE Workshop on Turbulent Flow Simulation at the Exascale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sprague, Michael; Boldyrev, Stanislav; Chang, Choong-Seock

    This paper summarizes the outcomes from the Turbulent Flow Simulation at the Exascale: Opportunities and Challenges Workshop, which was held 4-5 August 2015, and was sponsored by the U.S. Department of Energy Office of Advanced Scientific Computing Research. The workshop objective was to define and describe the challenges and opportunities that computing at the exascale will bring to turbulent-flow simulations in applied science and technology. The need for accurate simulation of turbulent flows is evident across the U.S. Department of Energy applied-science and engineering portfolios, including combustion, plasma physics, nuclear-reactor physics, wind energy, and atmospheric science. The workshop brought together experts in turbulent-flow simulation, computational mathematics, and high-performance computing. Building upon previous ASCR workshops on exascale computing, participants defined a research agenda and path forward that will enable scientists and engineers to continually leverage, engage, and direct advances in computational systems on the path to exascale computing.

  17. DISCRETE EVENT SIMULATION OF OPTICAL SWITCH MATRIX PERFORMANCE IN COMPUTER NETWORKS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Imam, Neena; Poole, Stephen W

    2013-01-01

    In this paper, we present an application of a Discrete Event Simulator (DES) for performance modeling of optical switching devices in computer networks. Network simulators are valuable tools in situations where one cannot investigate the system directly. This situation may arise if the system under study does not exist yet or the cost of studying the system directly is prohibitive. Most available network simulators are based on the paradigm of discrete-event-based simulation. As computer networks become increasingly larger and more complex, sophisticated DES tool chains have become available for both commercial and academic research. Some well-known simulators are NS2, NS3, OPNET, and OMNEST. For this research, we have applied OMNEST for the purpose of simulating multi-wavelength performance of optical switch matrices in computer interconnection networks. Our results suggest that the application of DES to computer interconnection networks provides valuable insight into device performance and aids in topology and system optimization.
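
    The discrete-event paradigm shared by these simulators reduces to a time-ordered event queue plus handlers that schedule future events. The sketch below models a single switch output port as an illustrative toy; it is not an OMNEST or NS model.

    ```python
    import heapq
    import random

    def simulate(n_packets=1000, service_time=0.5, mean_interarrival=1.0, seed=0):
        """Toy discrete-event simulation of one switch output port with deterministic service."""
        rng = random.Random(seed)
        events, t = [], 0.0
        for _ in range(n_packets):                 # pre-schedule Poisson arrivals
            t += rng.expovariate(1.0 / mean_interarrival)
            heapq.heappush(events, (t, "arrival"))

        busy_until, delays = 0.0, []
        while events:
            now, kind = heapq.heappop(events)
            if kind == "arrival":
                start = max(now, busy_until)       # wait if the port is still busy
                busy_until = start + service_time
                delays.append(busy_until - now)
                heapq.heappush(events, (busy_until, "departure"))
            # departures need no handling in this minimal model
        return sum(delays) / len(delays)

    print("mean packet delay:", simulate())
    ```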

  18. Advanced manned space flight simulation and training: An investigation of simulation host computer system concepts

    NASA Technical Reports Server (NTRS)

    Montag, Bruce C.; Bishop, Alfred M.; Redfield, Joe B.

    1989-01-01

    The findings of a preliminary investigation by Southwest Research Institute (SwRI) in simulation host computer concepts is presented. It is designed to aid NASA in evaluating simulation technologies for use in spaceflight training. The focus of the investigation is on the next generation of space simulation systems that will be utilized in training personnel for Space Station Freedom operations. SwRI concludes that NASA should pursue a distributed simulation host computer system architecture for the Space Station Training Facility (SSTF) rather than a centralized mainframe based arrangement. A distributed system offers many advantages and is seen by SwRI as the only architecture that will allow NASA to achieve established functional goals and operational objectives over the life of the Space Station Freedom program. Several distributed, parallel computing systems are available today that offer real-time capabilities for time critical, man-in-the-loop simulation. These systems are flexible in terms of connectivity and configurability, and are easily scaled to meet increasing demands for more computing power.

  19. Outcomes and challenges of global high-resolution non-hydrostatic atmospheric simulations using the K computer

    NASA Astrophysics Data System (ADS)

    Satoh, Masaki; Tomita, Hirofumi; Yashiro, Hisashi; Kajikawa, Yoshiyuki; Miyamoto, Yoshiaki; Yamaura, Tsuyoshi; Miyakawa, Tomoki; Nakano, Masuo; Kodama, Chihiro; Noda, Akira T.; Nasuno, Tomoe; Yamada, Yohei; Fukutomi, Yoshiki

    2017-12-01

    This article reviews the major outcomes of a 5-year (2011-2016) project using the K computer to perform global numerical atmospheric simulations based on the non-hydrostatic icosahedral atmospheric model (NICAM). The K computer was made available to the public in September 2012 and was used as a primary resource for Japan's Strategic Programs for Innovative Research (SPIRE), an initiative to investigate five strategic research areas; the NICAM project fell under the research area of climate and weather simulation sciences. Combining NICAM with high-performance computing has created new opportunities in three areas of research: (1) higher resolution global simulations that produce more realistic representations of convective systems, (2) multi-member ensemble simulations that are able to perform extended-range forecasts 10-30 days in advance, and (3) multi-decadal simulations for climatology and variability. Before the K computer era, NICAM was used to demonstrate realistic simulations of intra-seasonal oscillations including the Madden-Julian oscillation (MJO), though only as a case-study approach. Thanks to the big leap in computational performance of the K computer, we could greatly increase the number of cases of MJO events for numerical simulations, in addition to extending the integration time and refining the horizontal resolution. We conclude that the high-resolution global non-hydrostatic model, as used in this five-year project, improves the ability to forecast intra-seasonal oscillations and associated tropical cyclogenesis compared with that of the relatively coarser operational models currently in use. The impacts of the sub-kilometer resolution simulation and the multi-decadal simulations using NICAM are also reviewed.

  20. A study of workstation computational performance for real-time flight simulation

    NASA Technical Reports Server (NTRS)

    Maddalon, Jeffrey M.; Cleveland, Jeff I., II

    1995-01-01

    With recent advances in microprocessor technology, some have suggested that modern workstations provide enough computational power to properly operate a real-time simulation. This paper presents the results of a computational benchmark, based on actual real-time flight simulation code used at Langley Research Center, which was executed on various workstation-class machines. The benchmark was executed on different machines from several companies including: CONVEX Computer Corporation, Cray Research, Digital Equipment Corporation, Hewlett-Packard, Intel, International Business Machines, Silicon Graphics, and Sun Microsystems. The machines are compared by their execution speed, computational accuracy, and porting effort. The results of this study show that the raw computational power needed for real-time simulation is now offered by workstations.

  1. MOLNs: A CLOUD PLATFORM FOR INTERACTIVE, REPRODUCIBLE, AND SCALABLE SPATIAL STOCHASTIC COMPUTATIONAL EXPERIMENTS IN SYSTEMS BIOLOGY USING PyURDME.

    PubMed

    Drawert, Brian; Trogdon, Michael; Toor, Salman; Petzold, Linda; Hellander, Andreas

    2016-01-01

    Computational experiments using spatial stochastic simulations have led to important new biological insights, but they require specialized tools and a complex software stack, as well as large and scalable compute and data analysis resources due to the large computational cost associated with Monte Carlo computational workflows. The complexity of setting up and managing a large-scale distributed computation environment to support productive and reproducible modeling can be prohibitive for practitioners in systems biology. This results in a barrier to the adoption of spatial stochastic simulation tools, effectively limiting the type of biological questions addressed by quantitative modeling. In this paper, we present PyURDME, a new, user-friendly spatial modeling and simulation package, and MOLNs, a cloud computing appliance for distributed simulation of stochastic reaction-diffusion models. MOLNs is based on IPython and provides an interactive programming platform for development of sharable and reproducible distributed parallel computational experiments.

  2. Parallelized computation for computer simulation of electrocardiograms using personal computers with multi-core CPU and general-purpose GPU.

    PubMed

    Shen, Wenfeng; Wei, Daming; Xu, Weimin; Zhu, Xin; Yuan, Shizhong

    2010-10-01

    Biological computations like electrocardiological modelling and simulation usually require high-performance computing environments. This paper introduces an implementation of parallel computation for computer simulation of electrocardiograms (ECGs) in a personal computer environment with an Intel CPU of Core (TM) 2 Quad Q6600 and a GPU of Geforce 8800GT, with software support by OpenMP and CUDA. It was tested in three parallelization device setups: (a) a four-core CPU without a general-purpose GPU, (b) a general-purpose GPU plus 1 core of CPU, and (c) a four-core CPU plus a general-purpose GPU. To effectively take advantage of a multi-core CPU and a general-purpose GPU, an algorithm based on load-prediction dynamic scheduling was developed and applied to setting (c). In the simulation with 1600 time steps, the speedup of the parallel computation as compared to the serial computation was 3.9 in setting (a), 16.8 in setting (b), and 20.0 in setting (c). This study demonstrates that a current PC with a multi-core CPU and a general-purpose GPU provides a good environment for parallel computations in biological modelling and simulation studies. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.

  3. Computer animation challenges for computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Vines, Mauricio; Lee, Won-Sook; Mavriplis, Catherine

    2012-07-01

    Computer animation requirements differ from those of traditional computational fluid dynamics (CFD) investigations in that visual plausibility and rapid frame update rates trump physical accuracy. We present an overview of the main techniques for fluid simulation in computer animation, starting with Eulerian grid approaches, the Lattice Boltzmann method, Fourier transform techniques and Lagrangian particle introduction. Adaptive grid methods, precomputation of results for model reduction, parallelisation and computation on graphical processing units (GPUs) are reviewed in the context of accelerating simulation computations for animation. A survey of current specific approaches for the application of these techniques to the simulation of smoke, fire, water, bubbles, mixing, phase change and solid-fluid coupling is also included. Adding plausibility to results through particle introduction, turbulence detail and concentration on regions of interest by level set techniques has elevated the degree of accuracy and realism of recent animations. Basic approaches are described here. Techniques to control the simulation to produce a desired visual effect are also discussed. Finally, some references to rendering techniques and haptic applications are mentioned to provide the reader with a complete picture of the challenges of simulating fluids in computer animation.

  4. Computer Simulation in Undergraduate Instruction: A Symposium.

    ERIC Educational Resources Information Center

    Street, Warren R.; And Others

    These symposium papers discuss the instructional use of computers in psychology, with emphasis on computer-produced simulations. The first, by Rich Edwards, briefly outlines LABSIM, a general purpose system of FORTRAN programs which simulate data collection in more than a dozen experimental models in psychology and are designed to train students…

  5. Overview of Computer Simulation Modeling Approaches and Methods

    Treesearch

    Robert E. Manning; Robert M. Itami; David N. Cole; Randy Gimblett

    2005-01-01

    The field of simulation modeling has grown greatly with recent advances in computer hardware and software. Much of this work has involved large scientific and industrial applications for which substantial financial resources are available. However, advances in object-oriented programming and simulation methodology, concurrent with dramatic increases in computer...

  6. New Pedagogies on Teaching Science with Computer Simulations

    ERIC Educational Resources Information Center

    Khan, Samia

    2011-01-01

    Teaching science with computer simulations is a complex undertaking. This case study examines how an experienced science teacher taught chemistry using computer simulations and the impact of his teaching on his students. Classroom observations over 3 semesters, teacher interviews, and student surveys were collected. The data was analyzed for (1)…

  7. Computer modeling and simulation of human movement. Applications in sport and rehabilitation.

    PubMed

    Neptune, R R

    2000-05-01

    Computer modeling and simulation of human movement plays an increasingly important role in sport and rehabilitation, with applications ranging from sport equipment design to understanding pathologic gait. The complex dynamic interactions within the musculoskeletal and neuromuscular systems make analyzing human movement with existing experimental techniques difficult but computer modeling and simulation allows for the identification of these complex interactions and causal relationships between input and output variables. This article provides an overview of computer modeling and simulation and presents an example application in the field of rehabilitation.

  8. Fiber Composite Sandwich Thermostructural Behavior: Computational Simulation

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.; Aiello, R. A.; Murthy, P. L. N.

    1986-01-01

    Several computational levels of progressive sophistication/simplification are described to computationally simulate composite sandwich hygral, thermal, and structural behavior. The computational levels of sophistication include: (1) three-dimensional detailed finite element modeling of the honeycomb, the adhesive and the composite faces; (2) three-dimensional finite element modeling of the honeycomb assumed to be an equivalent continuous, homogeneous medium, the adhesive and the composite faces; (3) laminate theory simulation where the honeycomb (metal or composite) is assumed to consist of plies with equivalent properties; and (4) derivations of approximate, simplified equations for thermal and mechanical properties by simulating the honeycomb as an equivalent homogeneous medium. The approximate equations are combined with composite hygrothermomechanical and laminate theories to provide a simple and effective computational procedure for simulating the thermomechanical/thermostructural behavior of fiber composite sandwich structures.

  9. Generalized dynamic engine simulation techniques for the digital computer

    NASA Technical Reports Server (NTRS)

    Sellers, J.; Teren, F.

    1974-01-01

    Recently advanced simulation techniques have been developed for the digital computer and used as the basis for development of a generalized dynamic engine simulation computer program, called DYNGEN. This computer program can analyze the steady state and dynamic performance of many kinds of aircraft gas turbine engines. Without changes to the basic program, DYNGEN can analyze one- or two-spool turbofan engines. The user must supply appropriate component performance maps and design-point information. Examples are presented to illustrate the capabilities of DYNGEN in the steady state and dynamic modes of operation. The analytical techniques used in DYNGEN are briefly discussed, and its accuracy is compared with a comparable simulation using the hybrid computer. The impact of DYNGEN and similar all-digital programs on future engine simulation philosophy is also discussed.

  10. Generalized dynamic engine simulation techniques for the digital computer

    NASA Technical Reports Server (NTRS)

    Sellers, J.; Teren, F.

    1974-01-01

    Recently advanced simulation techniques have been developed for the digital computer and used as the basis for development of a generalized dynamic engine simulation computer program, called DYNGEN. This computer program can analyze the steady state and dynamic performance of many kinds of aircraft gas turbine engines. Without changes to the basic program DYNGEN can analyze one- or two-spool turbofan engines. The user must supply appropriate component performance maps and design-point information. Examples are presented to illustrate the capabilities of DYNGEN in the steady state and dynamic modes of operation. The analytical techniques used in DYNGEN are briefly discussed, and its accuracy is compared with a comparable simulation using the hybrid computer. The impact of DYNGEN and similar all-digital programs on future engine simulation philosophy is also discussed.

  11. Generalized dynamic engine simulation techniques for the digital computers

    NASA Technical Reports Server (NTRS)

    Sellers, J.; Teren, F.

    1975-01-01

    Recently advanced simulation techniques have been developed for the digital computer and used as the basis for development of a generalized dynamic engine simulation computer program, called DYNGEN. This computer program can analyze the steady state and dynamic performance of many kinds of aircraft gas turbine engines. Without changes to the basic program, DYNGEN can analyze one- or two-spool turbofan engines. The user must supply appropriate component performance maps and design point information. Examples are presented to illustrate the capabilities of DYNGEN in the steady state and dynamic modes of operation. The analytical techniques used in DYNGEN are briefly discussed, and its accuracy is compared with a comparable simulation using the hybrid computer. The impact of DYNGEN and similar digital programs on future engine simulation philosophy is also discussed.

  12. Fast Learning for Immersive Engagement in Energy Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bush, Brian W; Bugbee, Bruce; Gruchalla, Kenny M

    Fast computation is critical for immersive engagement with and learning from energy simulations, and it would be furthered by a general method for creating rapidly computed, simplified versions of NREL's computation-intensive energy simulations. Created using machine learning techniques, these 'reduced form' simulations can provide statistically sound estimates of the results of the full simulations at a fraction of the computational cost with response times - typically less than one minute of wall-clock time - suitable for real-time human-in-the-loop design and analysis. Additionally, uncertainty quantification techniques can document the accuracy of the approximate models and their domain of validity. Approximation methods are applicable to a wide range of computational models, including supply-chain models, electric power grid simulations, and building models. These reduced-form representations cannot replace or re-implement existing simulations, but instead supplement them by enabling rapid scenario design and quality assurance for large sets of simulations. We present an overview of the framework and methods we have implemented for developing these reduced-form representations.
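
    A stripped-down version of the reduced-form idea is sketched below: sample an expensive simulation at a modest number of design points, fit a cheap approximation, and quantify its error on held-out points. The placeholder "full simulation" and the polynomial surrogate are illustrative assumptions, not NREL's models or the machine-learning methods actually used.

    ```python
    import numpy as np

    def full_simulation(x):
        """Placeholder for a computation-intensive energy simulation with scalar input/output."""
        return np.sin(3.0 * x) + 0.2 * x**2

    # Design points sampled from the (expensive) simulation
    x_train = np.linspace(0.0, 3.0, 25)
    y_train = full_simulation(x_train)

    # Reduced-form representation: a low-order polynomial fitted to the samples
    surrogate = np.poly1d(np.polyfit(x_train, y_train, deg=6))

    # Crude uncertainty quantification: surrogate error on held-out points
    x_test = np.linspace(0.05, 2.95, 200)
    err = np.max(np.abs(surrogate(x_test) - full_simulation(x_test)))
    print(f"max abs error of the reduced-form model on the test grid: {err:.3f}")
    ```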

  13. A Primer on Simulation and Gaming.

    ERIC Educational Resources Information Center

    Barton, Richard F.

    In a primer intended for the administrative professions, for the behavioral sciences, and for education, simulation and its various aspects are defined, illustrated, and explained. Man-model simulation, man-computer simulation, all-computer simulation, and analysis are discussed as techniques for studying object systems (parts of the "real…

  14. Toward real-time Monte Carlo simulation using a commercial cloud computing infrastructure

    NASA Astrophysics Data System (ADS)

    Wang, Henry; Ma, Yunzhi; Pratx, Guillem; Xing, Lei

    2011-09-01

    Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in a heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach for high performance computing and is implemented to perform ultra-fast MC calculation in radiation therapy. We deployed the EGS5 MC package in a commercial cloud environment. Launched from a single local computer with Internet access, a Python script allocates a remote virtual cluster. A handshaking protocol designates master and worker nodes. The EGS5 binaries and the simulation data are initially loaded onto the master node. The simulation is then distributed among independent worker nodes via the message passing interface, and the results aggregated on the local computer for display and data analysis. The described approach is evaluated for pencil beams and broad beams of high-energy electrons and photons. The output of cloud-based MC simulation is identical to that produced by single-threaded implementation. For 1 million electrons, a simulation that takes 2.58 h on a local computer can be executed in 3.3 min on the cloud with 100 nodes, a 47× speed-up. Simulation time scales inversely with the number of parallel nodes. The parallelization overhead is also negligible for large simulations. Cloud computing represents one of the most important recent advances in supercomputing technology and provides a promising platform for substantially improved MC simulation. In addition to the significant speed up, cloud computing builds a layer of abstraction for high performance parallel computing, which may change the way dose calculations are performed and radiation treatment plans are completed. This work was presented in part at the 2010 Annual Meeting of the American Association of Physicists in Medicine (AAPM), Philadelphia, PA.

  15. A scalable parallel black oil simulator on distributed memory parallel computers

    NASA Astrophysics Data System (ADS)

    Wang, Kun; Liu, Hui; Chen, Zhangxin

    2015-11-01

    This paper presents our work on developing a parallel black oil simulator for distributed memory computers based on our in-house parallel platform. The parallel simulator is designed to overcome the performance issues of common simulators that are implemented for personal computers and workstations. The finite difference method is applied to discretize the black oil model. In addition, some advanced techniques are employed to strengthen the robustness and parallel scalability of the simulator, including an inexact Newton method, matrix decoupling methods, and algebraic multigrid methods. A new multi-stage preconditioner is proposed to accelerate the solution of linear systems from the Newton methods. Numerical experiments show that our simulator is scalable and efficient, and is capable of simulating extremely large-scale black oil problems with tens of millions of grid blocks using thousands of MPI processes on parallel computers.
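
    The inexact Newton idea mentioned above (solving each Newton correction only approximately, to a loose "forcing" tolerance) can be illustrated with a small sketch; the toy nonlinear system and the plain conjugate-gradient inner solver below are stand-ins, not the simulator's preconditioned solver or the discretized black oil equations.

```python
# Sketch: inexact Newton iteration -- each Newton system J dx = -F is
# solved only to a loose relative tolerance eta by an iterative method.
# The toy system (SPD Jacobian) stands in for the discretized reservoir model.
import numpy as np


def cg_solve(A, b, rel_tol, max_iter=200):
    """Plain conjugate gradients, stopped once ||A x - b|| <= rel_tol * ||b||."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    b_norm = np.linalg.norm(b)
    for _ in range(max_iter):
        if np.linalg.norm(r) <= rel_tol * b_norm:
            break
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return x


def F(x, A, b):
    return A @ x + x ** 3 - b            # toy nonlinear residual


def J(x, A):
    return A + np.diag(3.0 * x ** 2)     # its Jacobian (SPD here)


n = 50
A = (np.diag(4.0 * np.ones(n))
     + np.diag(-np.ones(n - 1), 1)
     + np.diag(-np.ones(n - 1), -1))
b = np.ones(n)
x = np.zeros(n)

for k in range(30):
    r = F(x, A, b)
    res = np.linalg.norm(r)
    if res < 1e-10:
        break
    # Inexact inner solve: looser early on, tighter as the residual shrinks.
    eta = min(0.1, np.sqrt(res))
    dx = cg_solve(J(x, A), -r, rel_tol=eta)
    x += dx

print(f"Stopped after {k} Newton steps, residual {np.linalg.norm(F(x, A, b)):.2e}")
```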

  16. Effect of computer game playing on baseline laparoscopic simulator skills.

    PubMed

    Halvorsen, Fredrik H; Cvancarova, Milada; Fosse, Erik; Mjåland, Odd

    2013-08-01

    Studies examining the possible association between computer game playing and laparoscopic performance in general have yielded conflicting results, and no relationship between computer game playing and baseline performance on laparoscopic simulators has been established. The aim of this study was to examine the possible association between previous and present computer game playing and baseline performance on a virtual reality laparoscopic simulator in a sample of potential future medical students. The participating students completed a questionnaire covering the weekly amount and type of computer game playing activity during the previous year and 3 years earlier. They then performed 2 repetitions of 2 tasks ("gallbladder dissection" and "traverse tube") on a virtual reality laparoscopic simulator. Performance on the simulator was then analyzed for association with their computer game experience. The study was conducted at a local high school in Norway; forty-eight students from 2 high school classes volunteered to participate. No association between prior and present computer game playing and baseline performance was found. The results were similar both for prior and present action game playing and for prior and present computer game playing in general. Our results indicate that prior and present computer game playing may not affect baseline performance in a virtual reality simulator.

  17. A Computer Simulation of Bacterial Growth During Food-Processing

    DTIC Science & Technology

    1974-11-01

    Technical report: A Computer Simulation of Bacterial Growth During Food-Processing, by Edward W. Ross, Jr., Army Natick Laboratories, Natick, Massachusetts, November 1974. Approved for public release.

  18. Using spatial principles to optimize distributed computing for enabling the physical science discoveries

    PubMed Central

    Yang, Chaowei; Wu, Huayi; Huang, Qunying; Li, Zhenlong; Li, Jing

    2011-01-01

    Contemporary physical science studies rely on the effective analyses of geographically dispersed spatial data and simulations of physical phenomena. Single computers and generic high-end computing are not sufficient to process the data for complex physical science analysis and simulations, which can be successfully supported only through distributed computing, best optimized through the application of spatial principles. Spatial computing, the computing aspect of a spatial cyberinfrastructure, refers to a computing paradigm that utilizes spatial principles to optimize distributed computers to catalyze advancements in the physical sciences. Spatial principles govern the interactions between scientific parameters across space and time by providing the spatial connections and constraints to drive the progression of the phenomena. Therefore, spatial computing studies could better position us to leverage spatial principles in simulating physical phenomena and, by extension, advance the physical sciences. Using geospatial science as an example, this paper illustrates through three research examples how spatial computing could (i) enable data intensive science with efficient data/services search, access, and utilization, (ii) facilitate physical science studies with enabling high-performance computing capabilities, and (iii) empower scientists with multidimensional visualization tools to understand observations and simulations. The research examples demonstrate that spatial computing is of critical importance to design computing methods to catalyze physical science studies with better data access, phenomena simulation, and analytical visualization. We envision that spatial computing will become a core technology that drives fundamental physical science advancements in the 21st century. PMID:21444779
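
    One concrete spatial principle, locality, can be shown in a short sketch: ordering grid cells along a Morton (Z-order) curve before dividing them among workers keeps each worker's cells spatially contiguous, which tends to reduce cross-node communication in distributed simulations. This is a generic illustration under that assumption, not the specific methods used in the paper.

```python
# Sketch: assigning grid cells to distributed workers along a Morton
# (Z-order) curve so that each worker receives a spatially compact block,
# reducing neighbor communication. Generic illustration only.
import numpy as np


def morton_key(ix, iy, bits=16):
    """Interleave the bits of integer cell coordinates (ix, iy)."""
    key = 0
    for b in range(bits):
        key |= ((ix >> b) & 1) << (2 * b)
        key |= ((iy >> b) & 1) << (2 * b + 1)
    return key


def partition_cells(nx, ny, n_workers):
    """Return an (nx, ny) array mapping each cell to a worker id."""
    cells = [(ix, iy) for ix in range(nx) for iy in range(ny)]
    cells.sort(key=lambda c: morton_key(*c))          # spatial-locality order
    assignment = np.empty((nx, ny), dtype=int)
    per_worker = int(np.ceil(len(cells) / n_workers))
    for i, (ix, iy) in enumerate(cells):
        assignment[ix, iy] = i // per_worker
    return assignment


if __name__ == "__main__":
    layout = partition_cells(nx=8, ny=8, n_workers=4)
    print(layout)   # compact quadrants rather than scattered cells
```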

  19. Using spatial principles to optimize distributed computing for enabling the physical science discoveries.

    PubMed

    Yang, Chaowei; Wu, Huayi; Huang, Qunying; Li, Zhenlong; Li, Jing

    2011-04-05

    Contemporary physical science studies rely on the effective analyses of geographically dispersed spatial data and simulations of physical phenomena. Single computers and generic high-end computing are not sufficient to process the data for complex physical science analysis and simulations, which can be successfully supported only through distributed computing, best optimized through the application of spatial principles. Spatial computing, the computing aspect of a spatial cyberinfrastructure, refers to a computing paradigm that utilizes spatial principles to optimize distributed computers to catalyze advancements in the physical sciences. Spatial principles govern the interactions between scientific parameters across space and time by providing the spatial connections and constraints to drive the progression of the phenomena. Therefore, spatial computing studies could better position us to leverage spatial principles in simulating physical phenomena and, by extension, advance the physical sciences. Using geospatial science as an example, this paper illustrates through three research examples how spatial computing could (i) enable data intensive science with efficient data/services search, access, and utilization, (ii) facilitate physical science studies with enabling high-performance computing capabilities, and (iii) empower scientists with multidimensional visualization tools to understand observations and simulations. The research examples demonstrate that spatial computing is of critical importance to design computing methods to catalyze physical science studies with better data access, phenomena simulation, and analytical visualization. We envision that spatial computing will become a core technology that drives fundamental physical science advancements in the 21st century.

  20. Filters for Improvement of Multiscale Data from Atomistic Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gardner, David J.; Reynolds, Daniel R.

    Multiscale computational models strive to produce accurate and efficient numerical simulations of systems involving interactions across multiple spatial and temporal scales that typically differ by several orders of magnitude. Some such models utilize a hybrid continuum-atomistic approach combining continuum approximations with first-principles-based atomistic models to capture multiscale behavior. By following the heterogeneous multiscale method framework for developing multiscale computational models, unknown continuum scale data can be computed from an atomistic model. Concurrently coupling the two models requires performing numerous atomistic simulations, which can dominate the computational cost of the method. Furthermore, when the resulting continuum data is noisy due to sampling error, stochasticity in the model, or randomness in the initial conditions, filtering can result in significant accuracy gains in the computed multiscale data without increasing the size or duration of the atomistic simulations. In this work, we demonstrate the effectiveness of spectral filtering for increasing the accuracy of noisy multiscale data obtained from atomistic simulations. Moreover, we present a robust and automatic method for closely approximating the optimum level of filtering in the case of additive white noise. Improving the accuracy of the filtered simulation data leads to dramatic computational savings by allowing shorter and smaller atomistic simulations to achieve the same desired multiscale simulation precision.
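
    A minimal sketch of spectral filtering for additive white noise follows; it uses a generic threshold on the Fourier coefficients rather than the authors' automatic cutoff-selection method, and the synthetic signal and noise level are stand-ins for data sampled from atomistic simulations.

```python
# Sketch: spectral (Fourier) filtering of noisy multiscale data
# contaminated with additive white noise. Signal, noise level, and the
# 5x threshold are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n = 1024
t = np.linspace(0.0, 1.0, n, endpoint=False)

signal = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 7 * t)
noisy = signal + rng.normal(scale=0.3, size=n)      # additive white noise

coeffs = np.fft.rfft(noisy)

# White noise spreads evenly over all frequencies, so estimate the noise
# floor from the upper half of the spectrum, where the smooth signal is weak.
noise_floor = np.median(np.abs(coeffs[len(coeffs) // 2:]))

# Keep only modes that rise clearly above the noise floor.
filtered_coeffs = np.where(np.abs(coeffs) > 5.0 * noise_floor, coeffs, 0.0)
filtered = np.fft.irfft(filtered_coeffs, n=n)

rmse_noisy = np.sqrt(np.mean((noisy - signal) ** 2))
rmse_filtered = np.sqrt(np.mean((filtered - signal) ** 2))
print(f"RMSE before filtering: {rmse_noisy:.3f}, after: {rmse_filtered:.3f}")
```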
