Sample records for core simulation system

  1. Testing of an Integrated Reactor Core Simulator and Power Conversion System with Simulated Reactivity Feedback

    NASA Technical Reports Server (NTRS)

    Bragg-Sitton, Shannon M.; Hervol, David S.; Godfroy, Thomas J.

    2009-01-01

    A Direct Drive Gas-Cooled (DDG) reactor core simulator has been coupled to a Brayton Power Conversion Unit (BPCU) for integrated system testing at NASA Glenn Research Center (GRC) in Cleveland, OH. This is a closed-cycle system that incorporates an electrically heated reactor core module, turbo alternator, recuperator, and gas cooler. Nuclear fuel elements in the gas-cooled reactor design are replaced with electric resistance heaters to simulate the heat from nuclear fuel in the corresponding fast spectrum nuclear reactor. The thermodynamic transient behavior of the integrated system was the focus of this test series. In order to better mimic the integrated response of the nuclear-fueled system, a simulated reactivity feedback control loop was implemented. Core power was controlled by a point kinetics model in which the reactivity feedback was based on core temperature measurements; the neutron generation time and the temperature feedback coefficient are provided as model inputs. These dynamic system response tests demonstrate the overall capability of a non-nuclear test facility in assessing system integration issues and characterizing integrated system response times and response characteristics.
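    The simulated reactivity feedback loop described above can be sketched in a few lines: one-group point kinetics with a lumped core thermal model, integrated with explicit Euler. Every parameter value below is an illustrative assumption, not a value from the GRC test series.

    ```python
    # One-group point kinetics with a lumped thermal model and temperature
    # reactivity feedback; explicit Euler, illustrative parameter values only.
    GEN_TIME = 1.0e-4          # neutron generation time [s]
    BETA, LAM = 0.0065, 0.08   # delayed-neutron fraction, precursor decay [1/s]
    ALPHA_T = -2.0e-5          # temperature feedback coefficient [dk/k per K]
    C_TH, H_LOSS = 500.0, 5.0  # lumped heat capacity, heat-loss coefficient
    T_REF = 300.0              # reference core temperature [K]

    def simulate(rho_ext, t_end=50.0, dt=1.0e-3):
        """Return (relative power, core temperature) after t_end seconds."""
        power = 1.0
        precursors = BETA * power / (LAM * GEN_TIME)  # start at equilibrium
        temp = T_REF
        for _ in range(int(t_end / dt)):
            rho = rho_ext + ALPHA_T * (temp - T_REF)  # net reactivity
            d_power = ((rho - BETA) / GEN_TIME) * power + LAM * precursors
            d_prec = (BETA / GEN_TIME) * power - LAM * precursors
            d_temp = (power - H_LOSS * (temp - T_REF)) / C_TH
            power += d_power * dt
            precursors += d_prec * dt
            temp += d_temp * dt
        return power, temp

    P_final, T_final = simulate(rho_ext=0.001)  # small positive insertion
    ```

    As core temperature rises, the negative feedback coefficient cancels the external reactivity insertion, which is the stabilizing behavior the feedback control loop was designed to mimic.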

  2. Testing of an Integrated Reactor Core Simulator and Power Conversion System with Simulated Reactivity Feedback

    NASA Technical Reports Server (NTRS)

    Bragg-Sitton, Shannon M.; Hervol, David S.; Godfroy, Thomas J.

    2010-01-01

    A Direct Drive Gas-Cooled (DDG) reactor core simulator has been coupled to a Brayton Power Conversion Unit (BPCU) for integrated system testing at NASA Glenn Research Center (GRC) in Cleveland, Ohio. This is a closed-cycle system that incorporates an electrically heated reactor core module, turboalternator, recuperator, and gas cooler. Nuclear fuel elements in the gas-cooled reactor design are replaced with electric resistance heaters to simulate the heat from nuclear fuel in the corresponding fast spectrum nuclear reactor. The thermodynamic transient behavior of the integrated system was the focus of this test series. In order to better mimic the integrated response of the nuclear-fueled system, a simulated reactivity feedback control loop was implemented. Core power was controlled by a point kinetics model in which the reactivity feedback was based on core temperature measurements; the neutron generation time and the temperature feedback coefficient are provided as model inputs. These dynamic system response tests demonstrate the overall capability of a non-nuclear test facility in assessing system integration issues and characterizing integrated system response times and response characteristics.

  3. SLS Core Stage Simulator

    NASA Image and Video Library

    2015-02-02

    Christopher Crumbly, manager of the Spacecraft Payload Integration and Evolution Office, gave visitors an insider's perspective on the core stage simulator at Marshall and its importance to development of the Space Launch System.

  4. A hybrid algorithm for parallel molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Mangiardi, Chris M.; Meyer, R.

    2017-10-01

    This article describes algorithms for the hybrid parallelization and SIMD vectorization of molecular dynamics simulations with short-range forces. The parallelization method combines domain decomposition with a thread-based parallelization approach. The goal of the work is to enable efficient simulations of very large (tens of millions of atoms) and inhomogeneous systems on many-core processors with hundreds or thousands of cores and SIMD units with large vector sizes. In order to test the efficiency of the method, simulations of a variety of configurations with up to 74 million atoms have been performed. Results are shown that were obtained on multi-core systems with Sandy Bridge and Haswell processors as well as systems with Xeon Phi many-core processors.
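    The SIMD-vectorization idea can be illustrated with an array-programming toy: evaluating all short-range Lennard-Jones forces at once via NumPy broadcasting stands in for the vectorized inner loop, while the MPI domain decomposition and threading layers of the actual hybrid algorithm are omitted. System size, box, and lattice setup are invented for the sketch.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    box, r_cut = 10.0, 2.5
    grid = 1.25 + 2.5 * np.arange(4)                  # 4x4x4 lattice in the box
    pos = np.array(np.meshgrid(grid, grid, grid, indexing='ij')).reshape(3, -1).T
    pos = pos + 0.1 * rng.normal(size=pos.shape)      # small jitter off lattice

    def lj_forces(pos):
        # All pairwise separation vectors at once (the "SIMD" part of the toy)
        d = pos[:, None, :] - pos[None, :, :]
        d -= box * np.round(d / box)                  # minimum-image convention
        r2 = np.einsum('ijk,ijk->ij', d, d)
        np.fill_diagonal(r2, np.inf)                  # exclude self-interaction
        inv_r2 = np.where(r2 < r_cut**2, 1.0 / r2, 0.0)  # short-range cutoff
        inv_r6 = inv_r2**3
        # Lennard-Jones, epsilon = sigma = 1: f(r)/r = 24*(2 r^-12 - r^-6)/r^2
        f_over_r = 24.0 * (2.0 * inv_r6**2 - inv_r6) * inv_r2
        return np.einsum('ij,ijk->ik', f_over_r, d)

    F = lj_forces(pos)
    ```

    By Newton's third law the pairwise contributions cancel, so the total force on the system sums to zero; a production code would of course use cell lists rather than the O(N²) pair matrix shown here.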

  5. Monte Carlo simulation of dynamic phase transitions and frequency dispersions of hysteresis curves in core/shell ferrimagnetic cubic nanoparticle

    NASA Astrophysics Data System (ADS)

    Vatansever, Erol

    2017-05-01

    By means of the Monte Carlo simulation method with the Metropolis algorithm, we elucidate the thermal and magnetic phase transition behaviors of a ferrimagnetic core/shell nanocubic system driven by a time-dependent magnetic field. The particle core is composed of ferromagnetic spins, and it is surrounded by an antiferromagnetic shell. At the interface of the core/shell particle, we use antiferromagnetic spin-spin coupling. We simulate the nanoparticle using classical Heisenberg spins. After a detailed analysis, our Monte Carlo simulation results suggest that the present system exhibits unusual and interesting magnetic behaviors. For example, in the relatively low temperature regions, an increment in the amplitude of the external field destroys the antiferromagnetism in the shell part of the nanoparticle, leading to a ground state with ferromagnetic character. Moreover, particular attention has been dedicated to the hysteresis behaviors of the system. For the first time, we show that frequency dispersions can be categorized into three groups for a fixed temperature for finite core/shell systems, as in the case of conventional bulk systems under the influence of an oscillating magnetic field.
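    A compact sketch of the kind of Metropolis update the study describes: classical Heisenberg spins on a small cube, ferromagnetic coupling in the core, antiferromagnetic coupling in the shell and at the interface, and a sinusoidal external field. The lattice size, couplings, temperature, and field amplitude are illustrative assumptions, not the paper's parameters.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    L = 6
    spins = rng.normal(size=(L, L, L, 3))
    spins /= np.linalg.norm(spins, axis=-1, keepdims=True)

    core = np.zeros((L, L, L), dtype=bool)
    core[2:4, 2:4, 2:4] = True               # small ferromagnetic core block

    def coupling(i_core, j_core):
        if i_core and j_core:
            return 1.0                        # FM inside the core
        if i_core != j_core:
            return -0.5                       # AFM core/shell interface
        return -1.0                           # AFM inside the shell

    NEIGHBORS = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]

    def sweep(s, T, h_ext):
        """One Metropolis sweep: L**3 single-spin trial moves."""
        for _ in range(L**3):
            i, j, k = rng.integers(0, L, size=3)
            trial = rng.normal(size=3)
            trial /= np.linalg.norm(trial)
            h_eff = h_ext.copy()              # effective field on this spin
            for di, dj, dk in NEIGHBORS:
                ni, nj, nk = (i+di) % L, (j+dj) % L, (k+dk) % L
                h_eff += coupling(core[i,j,k], core[ni,nj,nk]) * s[ni,nj,nk]
            dE = -(trial - s[i,j,k]) @ h_eff  # local energy E = -s . h_eff
            if dE <= 0.0 or rng.random() < np.exp(-dE / T):
                s[i,j,k] = trial
        return s

    # Drive with one period of a sinusoidal field along z
    for step in range(20):
        h = np.array([0.0, 0.0, 2.0 * np.sin(2.0 * np.pi * step / 20)])
        spins = sweep(spins, T=0.5, h_ext=h)
    magnetization = spins.mean(axis=(0, 1, 2))
    ```

    Tracking the magnetization over many field cycles, rather than the single period shown, is what yields the dynamic hysteresis loops the paper analyzes.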

  6. Adaptive Core Simulation Employing Discrete Inverse Theory - Part II: Numerical Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abdel-Khalik, Hany S.; Turinsky, Paul J.

    2005-07-15

    Use of adaptive simulation is intended to improve the fidelity and robustness of important core attribute predictions such as core power distribution, thermal margins, and core reactivity. Adaptive simulation utilizes a selected set of past and current reactor measurements of reactor observables, i.e., in-core instrumentation readings, to adapt the simulation in a meaningful way. The companion paper, "Adaptive Core Simulation Employing Discrete Inverse Theory - Part I: Theory," describes in detail the theoretical background of the proposed adaptive techniques. This paper, Part II, demonstrates several computational experiments conducted to assess the fidelity and robustness of the proposed techniques. The intent is to check the ability of the adapted core simulator model to predict future core observables that are not included in the adaption or core observables that are recorded at core conditions that differ from those at which adaption is completed. Also, this paper demonstrates successful utilization of an efficient sensitivity analysis approach to calculate the sensitivity information required to perform the adaption for millions of input core parameters. Finally, this paper illustrates a useful application for adaptive simulation - reducing the inconsistencies between two different core simulator code systems, where the multitudes of input data to one code are adjusted to enhance the agreement between both codes for important core attributes, i.e., core reactivity and power distribution. Also demonstrated is the robustness of such an application.
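    The flavor of the adaption step can be sketched with discrete inverse theory in a few lines: given a sensitivity matrix mapping input parameters to observables, a regularized minimum-norm least-squares update pulls the simulator toward the measurements. The dimensions and data below are synthetic, and this is not the authors' exact formulation.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_params, n_obs = 200, 20                 # many inputs, few observables
    S = rng.normal(size=(n_obs, n_params))    # sensitivities d(obs)/d(param)
    p_true = rng.normal(size=n_params)        # "true" parameter perturbations
    p0 = np.zeros(n_params)                   # prior (unadapted) parameters

    y_meas = S @ p_true                       # synthetic "measurements"
    residual = y_meas - S @ p0                # measured minus predicted

    lam = 1.0e-3                              # Tikhonov regularization
    # Minimum-norm regularized update: dp = S^T (S S^T + lam I)^{-1} residual
    dp = S.T @ np.linalg.solve(S @ S.T + lam * np.eye(n_obs), residual)
    p_adapted = p0 + dp

    # Relative misfit of the adapted model against the measurements
    misfit = np.linalg.norm(S @ p_adapted - y_meas) / np.linalg.norm(y_meas)
    ```

    Because the problem is underdetermined (far more parameters than observables), the minimum-norm form keeps the parameter adjustments small while closing the observable misfit, which is the sense in which the adaption is "meaningful."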

  7. Testing Numerical Models of Cool Core Galaxy Cluster Formation with X-Ray Observations

    NASA Astrophysics Data System (ADS)

    Henning, Jason W.; Gantner, Brennan; Burns, Jack O.; Hallman, Eric J.

    2009-12-01

    Using archival Chandra and ROSAT data along with numerical simulations, we compare the properties of cool core and non-cool core galaxy clusters, paying particular attention to the region beyond the cluster cores. With the use of single and double β-models, we demonstrate a statistically significant difference in the slopes of observed cluster surface brightness profiles while the cluster cores remain indistinguishable between the two cluster types. Additionally, through the use of hardness ratio profiles, we find evidence suggesting cool core clusters are cooler beyond their cores than non-cool core clusters of comparable mass and temperature, both in observed and simulated clusters. The similarities between real and simulated clusters support a model presented in earlier work by the authors describing differing merger histories between cool core and non-cool core clusters. Discrepancies between real and simulated clusters will inform upcoming numerical models and simulations as to new ways to incorporate feedback in these systems.
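    The single β-model used in such surface-brightness comparisons has the standard form S(r) = S0 [1 + (r/rc)²]^(−3β+1/2). A sketch of fitting it to a synthetic profile (all numbers illustrative, not drawn from the Chandra/ROSAT data):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def beta_model(r, s0, r_c, beta):
        """Single beta-model surface-brightness profile."""
        return s0 * (1.0 + (r / r_c) ** 2) ** (-3.0 * beta + 0.5)

    rng = np.random.default_rng(3)
    r = np.linspace(0.05, 2.0, 60)        # projected radius (arbitrary units)
    true_params = (1.0, 0.2, 0.7)         # s0, r_c, beta (made up)
    S = beta_model(r, *true_params) * (1.0 + 0.02 * rng.normal(size=r.size))

    popt, _ = curve_fit(beta_model, r, S, p0=(0.5, 0.1, 0.5))
    s0_fit, rc_fit, beta_fit = popt
    ```

    The fitted β controls the outer slope of the profile, which is the quantity the authors compare between cool core and non-cool core clusters.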

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salko, Robert K; Sung, Yixing; Kucukboyaci, Vefa

    The Virtual Environment for Reactor Applications core simulator (VERA-CS) being developed by the Consortium for the Advanced Simulation of Light Water Reactors (CASL) includes coupled neutronics, thermal-hydraulics, and fuel temperature components with an isotopic depletion capability. The neutronics capability employed is based on MPACT, a three-dimensional (3-D) whole core transport code. The thermal-hydraulics and fuel temperature models are provided by the COBRA-TF (CTF) subchannel code. As part of the CASL development program, the VERA-CS (MPACT/CTF) code system was applied to model and simulate reactor core response with respect to departure from nucleate boiling ratio (DNBR) at the limiting time step of a postulated pressurized water reactor (PWR) main steamline break (MSLB) event initiated at hot zero power (HZP), either with offsite power available and the reactor coolant pumps in operation (high-flow case) or without offsite power where the reactor core is cooled through natural circulation (low-flow case). The VERA-CS simulation was based on core boundary conditions from the RETRAN-02 system transient calculations and STAR-CCM+ computational fluid dynamics (CFD) core inlet distribution calculations. The evaluation indicated that the VERA-CS code system is capable of modeling and simulating quasi-steady state reactor core response under the steamline break (SLB) accident condition, that the results are insensitive to uncertainties in the inlet flow distributions from the CFD simulations, and that the high-flow case is more DNB limiting than the low-flow case.

  9. Multi-Sensor Systems and Data Fusion for Telecommunications, Remote Sensing and Radar (les Systemes multi-senseurs et le fusionnement des donnees pour les telecommunications, la teledetection et les radars)

    DTIC Science & Technology

    1998-04-01

    The result of the project is a demonstration of the fusion process, the sensors management and the real-time capabilities using simulated sensors...demonstrator (TAD) is a system that demonstrates the core ele- ment of a battlefield ground surveillance system by simulation in near real-time. The core...Management and Sensor/Platform simulation . The surveillance system observes the real world through a non-collocated heterogene- ous multisensory system

  10. Laser anemometry measurements of natural circulation flow in a scale model PWR reactor system. [Pressurized Water Reactor

    NASA Technical Reports Server (NTRS)

    Kadambi, J. R.; Schneider, S. J.; Stewart, W. A.

    1986-01-01

    The natural circulation of a single phase fluid in a scale model of a pressurized water reactor system during a postulated degraded-core accident is analyzed. The fluids utilized were water and SF6. The design of the reactor model and the similitude requirements are described. Four LDA tests were conducted: water with 28 kW of heat in the simulated core, with and without the participation of simulated steam generators; water with 28 kW of heat in the simulated core, with the participation of simulated steam generators and with cold upflow of 12 lbm/min from the lower plenum; and SF6 with 0.9 kW of heat in the simulated core and without the participation of the simulated steam generators. For the water tests, the velocity of the water in the center of the core increases with vertical height and continues to increase in the upper plenum. For SF6, it is observed that the velocities are an order of magnitude higher than those of water; however, the velocity patterns are similar.

  11. Evolution dynamics modeling and simulation of logistics enterprise's core competence based on service innovation

    NASA Astrophysics Data System (ADS)

    Yang, Bo; Tong, Yuting

    2017-04-01

    With the rapid development of the economy, logistics enterprises in China face a major challenge: they generally lack core competitiveness, and their awareness of service innovation is weak. Scholars studying the core competence of logistics enterprises have mainly taken a static perspective rather than exploring its dynamic evolution. The authors therefore analyze the influencing factors and the evolution process of the core competence of logistics enterprises, use system dynamics to study the causal structure of that evolution, and construct a system dynamics model of the evolution of logistics enterprises' core competence that can be simulated in Vensim PLE. Analysis of the model's effectiveness and sensitivity indicates that it fits the evolution process of the core competence of logistics enterprises, reveals the process and mechanism of that evolution, and yields management strategies for improving core competence. The construction and operation of the computer simulation model offers an effective method for studying the evolution of logistics enterprises' core competence.
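    As a loose illustration of the stock-and-flow approach behind such models, here is a one-stock toy in which core competence is raised by service-innovation investment and eroded by decay, integrated with Euler steps as Vensim-style tools do. The structure and all rates are invented for the sketch and are not the authors' model.

    ```python
    import numpy as np

    def simulate(investment_rate=0.3, decay_rate=0.1, dt=0.25, t_end=40.0):
        """Euler-integrate one stock (core competence) with inflow and outflow."""
        steps = int(t_end / dt)
        competence = np.empty(steps + 1)
        competence[0] = 1.0                                    # initial stock
        for k in range(steps):
            inflow = investment_rate * (10.0 - competence[k])  # diminishing returns
            outflow = decay_rate * competence[k]               # competence erosion
            competence[k + 1] = competence[k] + dt * (inflow - outflow)
        return competence

    base = simulate()
    high_innovation = simulate(investment_rate=0.6)  # stronger service innovation
    ```

    Comparing trajectories under different investment rates is the kind of sensitivity experiment the abstract describes running in Vensim PLE.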

  12. Efficiently Scheduling Multi-core Guest Virtual Machines on Multi-core Hosts in Network Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoginath, Srikanth B; Perumalla, Kalyan S

    2011-01-01

    Virtual machine (VM)-based simulation is a method used by network simulators to incorporate realistic application behaviors by executing actual VMs as high-fidelity surrogates for simulated end-hosts. A critical requirement in such a method is the simulation time-ordered scheduling and execution of the VMs. Prior approaches such as time dilation are less efficient due to the high degree of multiplexing possible when multiple multi-core VMs are simulated on multi-core host systems. We present a new simulation time-ordered scheduler to efficiently schedule multi-core VMs on multi-core real hosts, with a virtual clock realized on each virtual core. The distinguishing features of our approach are: (1) customizable granularity of the VM scheduling time unit on the simulation time axis, (2) ability to take arbitrary leaps in virtual time by VMs to maximize the utilization of host (real) cores when guest virtual cores idle, and (3) empirically determinable optimality in the tradeoff between total execution (real) time and time-ordering accuracy levels. Experiments show that it is possible to get nearly perfect time-ordered execution, with a slight cost in total run time, relative to optimized non-simulation VM schedulers. Interestingly, with our time-ordered scheduler, it is also possible to reduce the time-ordering error from over 50% with a non-simulation scheduler to less than 1%, with almost the same run time efficiency as that of the highly efficient non-simulation VM schedulers.
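    The core idea, always dispatching the virtual core with the smallest virtual clock and advancing it by a configurable quantum, can be sketched with a priority queue. The counts and quantum below are illustrative; the real scheduler additionally handles idle-time leaps and mapping to host cores.

    ```python
    import heapq

    def time_ordered_schedule(n_vcores, quantum, t_end):
        """Dispatch virtual cores strictly in virtual-time order."""
        heap = [(0.0, v) for v in range(n_vcores)]  # (virtual clock, vcore id)
        heapq.heapify(heap)
        trace = []
        while True:
            vt, vcore = heapq.heappop(heap)         # least virtual time first
            if vt >= t_end:
                break
            trace.append((vt, vcore))               # run this vcore for a quantum
            heapq.heappush(heap, (vt + quantum, vcore))
        return trace

    trace = time_ordered_schedule(n_vcores=4, quantum=0.5, t_end=2.0)
    dispatch_times = [vt for vt, _ in trace]
    ```

    The dispatch trace is non-decreasing in virtual time by construction, which is exactly the time-ordering property that a non-simulation VM scheduler does not guarantee; shrinking the quantum tightens ordering accuracy at the cost of more scheduling overhead.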

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harrison, Cyrus; Larsen, Matt; Brugger, Eric

    Strawman is a system designed to explore the in situ visualization and analysis needs of simulation code teams running multi-physics calculations on many-core HPC architectures. It provides rendering pipelines that can leverage both many-core CPUs and GPUs to render images of simulation meshes.

  14. Isotope heat source simulator for testing of space power systems

    NASA Technical Reports Server (NTRS)

    Prok, G. M.; Smith, R. B.

    1973-01-01

    A reliable isotope heat source simulator was designed for use in a Brayton power system. This simulator is composed of an electrically heated tungsten wire which is wound around a boron nitride core and enclosed in a graphite jacket. Simulator testing was performed at the expected operating temperature of the Brayton power system. Endurance testing for 5012 hours was followed by cycling the simulator temperature. The integrity of the simulator was maintained throughout testing. Alumina beads served as a diffusion barrier to prevent interaction between the tungsten heater and boron nitride core. The simulator was designed to maintain a surface temperature of 1311 to 1366 K (1900 to 2000 F) with a power input of approximately 400 watts. The design concept and the materials used in the simulator make many different geometries possible. This flexibility increases its potential use.

  15. Theoretical Models of Protostellar Binary and Multiple Systems with AMR Simulations

    NASA Astrophysics Data System (ADS)

    Matsumoto, Tomoaki; Tokuda, Kazuki; Onishi, Toshikazu; Inutsuka, Shu-ichiro; Saigo, Kazuya; Takakuwa, Shigehisa

    2017-05-01

    We present theoretical models for protostellar binary and multiple systems based on the high-resolution numerical simulation with an adaptive mesh refinement (AMR) code, SFUMATO. The recent ALMA observations have revealed early phases of the binary and multiple star formation with high spatial resolutions. These observations should be compared with theoretical models with high spatial resolutions. We present two theoretical models for (1) a high density molecular cloud core, MC27/L1521F, and (2) a protobinary system, L1551 NE. For the model for MC27, we performed numerical simulations for gravitational collapse of a turbulent cloud core. The cloud core exhibits fragmentation during the collapse, and dynamical interaction between the fragments produces an arc-like structure, which is one of the prominent structures observed by ALMA. For the model for L1551 NE, we performed numerical simulations of gas accretion onto protobinary. The simulations exhibit asymmetry of a circumbinary disk. Such asymmetry has been also observed by ALMA in the circumbinary disk of L1551 NE.

  16. On efficiency of fire simulation realization: parallelization with greater number of computational meshes

    NASA Astrophysics Data System (ADS)

    Valasek, Lukas; Glasa, Jan

    2017-12-01

    Current fire simulation systems are capable of exploiting the advantages of available high-performance computing (HPC) platforms to model fires efficiently in parallel. In this paper, the efficiency of a corridor fire simulation on an HPC computer cluster is discussed. The parallel MPI version of Fire Dynamics Simulator is used for testing the efficiency of selected strategies for allocating the cluster's computational resources across a greater number of computational cores. Simulation results indicate that if the number of cores used is not a multiple of the total number of cores per cluster node, there are allocation strategies that provide more efficient calculations.
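    The allocation point can be made concrete with a small utility: when the requested core count is not a multiple of the node size, one node is left partially idle, which is one reason the allocation strategy matters. The node size here is an assumed example value, not the cluster from the paper.

    ```python
    def allocation(n_cores, cores_per_node=16):
        """Nodes needed for n_cores and the resulting node utilization."""
        full, remainder = divmod(n_cores, cores_per_node)
        nodes = full + (1 if remainder else 0)
        return nodes, n_cores / (nodes * cores_per_node)

    even = allocation(64)     # exact multiple of the node size: 4 full nodes
    uneven = allocation(72)   # 5 nodes, the last one half idle
    ```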

  17. COOL CORE CLUSTERS FROM COSMOLOGICAL SIMULATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rasia, E.; Borgani, S.; Murante, G.

    2015-11-01

    We present results obtained from a set of cosmological hydrodynamic simulations of galaxy clusters, aimed at comparing predictions with observational data on the diversity between cool-core (CC) and non-cool-core (NCC) clusters. Our simulations include the effects of stellar and active galactic nucleus (AGN) feedback and are based on an improved version of the smoothed particle hydrodynamics code GADGET-3, which ameliorates gas mixing and better captures gas-dynamical instabilities by including a suitable artificial thermal diffusion. In this Letter, we focus our analysis on the entropy profiles, the primary diagnostic we used to classify the degree of cool-coreness of clusters, and the iron profiles. In keeping with observations, our simulated clusters display a variety of behaviors in entropy profiles: they range from steadily decreasing profiles at small radii, characteristic of CC systems, to nearly flat core isentropic profiles, characteristic of NCC systems. Using observational criteria to distinguish between the two classes of objects, we find that they occur in similar proportions in both simulations and observations. Furthermore, we also find that simulated CC clusters have profiles of iron abundance that are steeper than those of NCC clusters, which is also in agreement with observational results. We show that the capability of our simulations to generate a realistic CC structure in the cluster population is due to AGN feedback and artificial thermal diffusion: their combined action allows us to naturally distribute the energy extracted from super-massive black holes and to compensate for the radiative losses of low-entropy gas with short cooling time residing in the cluster core.

  18. Cool Core Clusters from Cosmological Simulations

    NASA Astrophysics Data System (ADS)

    Rasia, E.; Borgani, S.; Murante, G.; Planelles, S.; Beck, A. M.; Biffi, V.; Ragone-Figueroa, C.; Granato, G. L.; Steinborn, L. K.; Dolag, K.

    2015-11-01

    We present results obtained from a set of cosmological hydrodynamic simulations of galaxy clusters, aimed at comparing predictions with observational data on the diversity between cool-core (CC) and non-cool-core (NCC) clusters. Our simulations include the effects of stellar and active galactic nucleus (AGN) feedback and are based on an improved version of the smoothed particle hydrodynamics code GADGET-3, which ameliorates gas mixing and better captures gas-dynamical instabilities by including a suitable artificial thermal diffusion. In this Letter, we focus our analysis on the entropy profiles, the primary diagnostic we used to classify the degree of cool-coreness of clusters, and the iron profiles. In keeping with observations, our simulated clusters display a variety of behaviors in entropy profiles: they range from steadily decreasing profiles at small radii, characteristic of CC systems, to nearly flat core isentropic profiles, characteristic of NCC systems. Using observational criteria to distinguish between the two classes of objects, we find that they occur in similar proportions in both simulations and observations. Furthermore, we also find that simulated CC clusters have profiles of iron abundance that are steeper than those of NCC clusters, which is also in agreement with observational results. We show that the capability of our simulations to generate a realistic CC structure in the cluster population is due to AGN feedback and artificial thermal diffusion: their combined action allows us to naturally distribute the energy extracted from super-massive black holes and to compensate for the radiative losses of low-entropy gas with short cooling time residing in the cluster core.

  19. Numerical Simulations of Close and Contact Binary Systems Having Bipolytropic Equation of State

    NASA Astrophysics Data System (ADS)

    Kadam, Kundan; Clayton, Geoffrey C.; Motl, Patrick M.; Marcello, Dominic; Frank, Juhan

    2017-01-01

    I present the results of the numerical simulations of the mass transfer in close and contact binary systems with both stars having a bipolytropic (composite polytropic) equation of state. The initial binary systems are obtained by modifying Hachisu's self-consistent field technique. Both stars have fully resolved cores with a molecular weight jump at the core-envelope interface. The initial properties of these simulations are chosen such that they satisfy the mass-radius relation, composition, and period of a late W-type contact binary system. The simulations are carried out using two different Eulerian hydrocodes: Flow-ER, with a fixed cylindrical grid, and Octo-tiger, with an AMR-capable cartesian grid. The detailed comparison of the simulations suggests agreement between the results obtained from the two codes at different resolutions. The set of simulations can be treated as a benchmark, enabling us to reliably simulate mass transfer and merger scenarios of binary systems involving bipolytropic components.

  20. Mechatronic Materials and Systems. Design and Demonstration of High Authority Shape Morphing Structures

    DTIC Science & Technology

    2005-09-01

    thermal expansion of these truss elements. One side of the structure is fully clamped, while the other is free to displace. As in prior assessments [6...levels, by using the finite element package ABAQUS . To simulate the complete system, the core and the Kagome face members are modeled using linear...code ABAQUS . To simulate the complete actuation system, the core and Kagome members are modeled using linear Timoshenko-type beams, while the solid

  1. The systems biology simulation core algorithm

    PubMed Central

    2013-01-01

    Background With the increasing availability of high dimensional time course data for metabolites, genes, and fluxes, the mathematical description of dynamical systems has become an essential aspect of research in systems biology. Models are often encoded in formats such as SBML, whose structure is very complex and difficult to evaluate due to many special cases. Results This article describes an efficient algorithm to solve SBML models that are interpreted in terms of ordinary differential equations. We begin our consideration with a formal representation of the mathematical form of the models and explain all parts of the algorithm in detail, including several preprocessing steps. We provide a flexible reference implementation as part of the Systems Biology Simulation Core Library, a community-driven project providing a large collection of numerical solvers and a sophisticated interface hierarchy for the definition of custom differential equation systems. To demonstrate the capabilities of the new algorithm, it has been tested with the entire SBML Test Suite and all models of BioModels Database. Conclusions The formal description of the mathematics behind the SBML format facilitates the implementation of the algorithm within specifically tailored programs. The reference implementation can be used as a simulation backend for Java™-based programs. Source code, binaries, and documentation can be freely obtained under the terms of the LGPL version 3 from http://simulation-core.sourceforge.net. Feature requests, bug reports, contributions, or any further discussion can be directed to the mailing list simulation-core-development@lists.sourceforge.net. PMID:23826941
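    What such a backend ultimately reduces an SBML model to is an ordinary differential equation right-hand side assembled from reaction rates and stoichiometry, handed to a numerical solver. A minimal hand-written illustration with a made-up two-reaction chain A → B → C (not a model from BioModels Database, and not the Simulation Core Library's API):

    ```python
    from scipy.integrate import solve_ivp

    k1, k2 = 0.5, 0.2                 # mass-action rate constants (assumed)

    def rhs(t, y):
        a, b, c = y
        v1, v2 = k1 * a, k2 * b       # reaction fluxes for A -> B -> C
        return [-v1, v1 - v2, v2]     # stoichiometry applied to the fluxes

    sol = solve_ivp(rhs, (0.0, 40.0), [1.0, 0.0, 0.0], rtol=1e-8, atol=1e-10)
    a_end, b_end, c_end = sol.y[:, -1]
    ```

    Total mass is conserved along the trajectory, and nearly all of species A has been converted to C by the end of the integration; the library's contribution is generating such right-hand sides automatically from SBML, including events, rules, and the many special cases the abstract mentions.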

  2. Driven and decaying turbulence simulations of low–mass star formation: From clumps to cores to protostars

    DOE PAGES

    Offner, Stella S. R.; Klein, Richard I.; McKee, Christopher F.

    2008-10-20

    Molecular clouds are observed to be turbulent, but the origin of this turbulence is not well understood. As a result, there are two different approaches to simulating molecular clouds, one in which the turbulence is allowed to decay after it is initialized, and one in which it is driven. We use the adaptive mesh refinement (AMR) code, Orion, to perform high-resolution simulations of molecular cloud cores and protostars in environments with both driven and decaying turbulence. We include self-gravity, use a barotropic equation of state, and represent regions exceeding the maximum grid resolution with sink particles. We analyze the properties of bound cores such as size, shape, line width, and rotational energy, and we find reasonable agreement with observation. At high resolution the different rates of core accretion in the two cases have a significant effect on protostellar system development. Clumps forming in a decaying turbulence environment produce high-multiplicity protostellar systems with Toomre Q unstable disks that exhibit characteristics of the competitive accretion model for star formation. In contrast, cores forming in the context of continuously driven turbulence and virial equilibrium form smaller protostellar systems with fewer low-mass members. Furthermore, our simulations of driven and decaying turbulence show some statistically significant differences, particularly in the production of brown dwarfs and core rotation, but the uncertainties are large enough that we are not able to conclude whether observations favor one or the other.

  3. Best estimate plus uncertainty analysis of departure from nucleate boiling limiting case with CASL core simulator VERA-CS in response to PWR main steam line break event

    DOE PAGES

    Brown, Cameron S.; Zhang, Hongbin; Kucukboyaci, Vefa; ...

    2016-09-07

    VERA-CS (Virtual Environment for Reactor Applications, Core Simulator) is a coupled neutron transport and thermal-hydraulics subchannel code under development by the Consortium for Advanced Simulation of Light Water Reactors (CASL). VERA-CS was used to simulate a typical pressurized water reactor (PWR) full core response with 17x17 fuel assemblies for a main steam line break (MSLB) accident scenario with the most reactive rod cluster control assembly stuck out of the core. The accident scenario was initiated at the hot zero power (HZP) at the end of the first fuel cycle with return to power state points that were determined by a system analysis code, and the most limiting state point was chosen for core analysis. The best estimate plus uncertainty (BEPU) analysis method was applied using Wilks' nonparametric statistical approach. In this way, 59 full core simulations were performed to provide the minimum departure from nucleate boiling ratio (MDNBR) at the 95/95 (95% probability with 95% confidence level) tolerance limit. The results show that this typical PWR core remains within MDNBR safety limits for the MSLB accident.
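    The choice of 59 runs follows from first-order Wilks' formula: the worst of n independent runs bounds the 95th percentile with confidence 1 − 0.95^n, and n = 59 is the smallest sample size for which this confidence reaches 95%.

    ```python
    def wilks_confidence(n, coverage=0.95):
        """Confidence that the worst of n runs bounds the `coverage` quantile."""
        return 1.0 - coverage ** n

    # Smallest sample size giving 95% confidence for 95% coverage
    n_runs = next(n for n in range(1, 200) if wilks_confidence(n) >= 0.95)
    ```

    Taking the minimum MDNBR over those 59 simulations therefore yields the 95/95 one-sided tolerance limit quoted in the abstract.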

  4. Best estimate plus uncertainty analysis of departure from nucleate boiling limiting case with CASL core simulator VERA-CS in response to PWR main steam line break event

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Cameron S.; Zhang, Hongbin; Kucukboyaci, Vefa

    VERA-CS (Virtual Environment for Reactor Applications, Core Simulator) is a coupled neutron transport and thermal-hydraulics subchannel code under development by the Consortium for Advanced Simulation of Light Water Reactors (CASL). VERA-CS was used to simulate a typical pressurized water reactor (PWR) full core response with 17x17 fuel assemblies for a main steam line break (MSLB) accident scenario with the most reactive rod cluster control assembly stuck out of the core. The accident scenario was initiated at the hot zero power (HZP) at the end of the first fuel cycle with return to power state points that were determined by a system analysis code, and the most limiting state point was chosen for core analysis. The best estimate plus uncertainty (BEPU) analysis method was applied using Wilks' nonparametric statistical approach. In this way, 59 full core simulations were performed to provide the minimum departure from nucleate boiling ratio (MDNBR) at the 95/95 (95% probability with 95% confidence level) tolerance limit. The results show that this typical PWR core remains within MDNBR safety limits for the MSLB accident.

  5. Kinetic turbulence simulations at extreme scale on leadership-class systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Bei; Ethier, Stephane; Tang, William

    2013-01-01

    Reliable predictive simulation capability addressing confinement properties in magnetically confined fusion plasmas is critically important for ITER, a 20 billion dollar international burning plasma device under construction in France. The complex study of kinetic turbulence, which can severely limit the energy confinement and impact the economic viability of fusion systems, requires simulations at extreme scale for such an unprecedented device size. Our newly optimized, global, ab initio particle-in-cell code solving the nonlinear equations underlying gyrokinetic theory achieves excellent performance with respect to "time to solution" at the full capacity of the IBM Blue Gene/Q, on the 786,432 cores of Mira at ALCF and recently on the 1,572,864 cores of Sequoia at LLNL. Recent multithreading and domain decomposition optimizations in the new GTC-P code represent critically important software advances for modern, low-memory-per-core systems by enabling routine simulations at unprecedented size (130 million grid points, ITER scale) and resolution (65 billion particles).

  6. Head-on collision of multistate ultralight BEC dark matter configurations

    NASA Astrophysics Data System (ADS)

    Guzmán, F. S.; Avilez, Ana A.

    2018-06-01

    Density profiles of ultralight Bose-condensate dark matter inferred from numerical simulations of structure formation, ruled by the Gross-Pitaevskii-Poisson (GPP) system of equations, have a core-tail structure. Multistate equilibrium configurations of the GPP system, on the other hand, have a similar core-tail density profile. We now submit these multistate configurations to highly dynamical scenarios and show that they can provide appropriate density profiles for structures. We present simulations of head-on collisions between two equilibrium configurations of the GPP system of equations, including the collision of ground-state with multistate configurations. We study the regimes of solitonic and merger behavior and show generic properties of the dynamics of the system, including the relaxation process and attractor density profiles. We show that the merger of multistate configurations has the potential to produce core-tail density profiles, with the core dominated by the ground state and the halo dominated by an additional state.

  7. Cosmological simulations of dwarf galaxies with cosmic ray feedback

    NASA Astrophysics Data System (ADS)

    Chen, Jingjing; Bryan, Greg L.; Salem, Munier

    2016-08-01

    We perform zoom-in cosmological simulations of a suite of dwarf galaxies, examining the impact of cosmic rays (CRs) generated by supernovae, including the effect of diffusion. We first look at the effect of varying the uncertain CR parameters by repeatedly simulating a single galaxy. Then we fix the cosmic ray model and simulate five dwarf systems with virial masses ranging from 8 to 30 × 10^10 M⊙. We find that including CR feedback (with diffusion) consistently leads to disc-dominated systems with relatively flat rotation curves and constant star formation rates. In contrast, our purely thermal feedback case results in a hot stellar system and bursty star formation. The CR simulations match the observed baryonic Tully-Fisher relation very well, but have a lower gas fraction than real systems. We also find that the dark matter cores of the CR feedback galaxies are cuspy, while the purely thermal feedback case results in a substantial core.

  8. Towards Reconfigurable, Separable and Hard Real-Time Hybrid Simulation and Test Systems

    NASA Astrophysics Data System (ADS)

    Quartier, F.; Delatte, B.; Joubert, M.

    2009-05-01

    Formation flight needs several new technologies, new disciplines, new approaches and, above all, more concurrent engineering by more players. One of the problems to be addressed is the need for more complex simulation and test systems that are easy to reconfigure to include parts of the target hardware and that can provide sufficient power to handle simulation cores requiring one to two orders of magnitude more processing power than current technology provides. Critical technologies already addressed by CNES and Spacebel are study model reuse and simulator reconfigurability (Basiles), model portability (SMP2) and the federation of several simulators using HLA. Two more critical issues, addressed in ongoing R&D work by CNES and Spacebel and covered by this paper, concern time engineering and management. The first issue concerns separability (characterisation, identification and handling of separable subsystems) and its consequences for practical systems. Experiments on the Pleiades operational simulator have shown that precise simulation of instruments such as Doris and the Star Tracker can be added without significantly impacting overall performance. Improved time analysis leads to better system understanding and testability. The second issue concerns architectures for distributed hybrid simulator systems that provide hard real-time capabilities and can react with a relative time precision and jitter in the 10 to 50 µs range using mainstream PCs and mainstream operating systems. This opens a way to make smaller, economical hardware test systems that can be reconfigured into large hardware test systems without restarting development. Although such systems were considered next to impossible until now, distributed hard real-time systems come within reach when modern but mainstream electronics are used and when processor cores can be isolated and reserved for real-time use. This requires a complete rethinking of the overall system, but needs very few overall changes. Automated identification of potential parallel simulation capability might become possible in the not-so-distant future.
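The reserved-core idea can be illustrated with a short sketch: pin the process to one core (where the OS supports it) and measure the worst-case deviation of a busy-waiting periodic loop, which is the jitter figure the authors target. The `pin_to_core` helper and the 1 ms period are illustrative choices, not from the paper:

```python
import os, time

def pin_to_core(core_id):
    # Reserve this process for one core (Linux). Combined with kernel
    # isolation (e.g. isolcpus), this approximates the "processor cores
    # isolated and reserved for real-time" idea in the text.
    try:
        os.sched_setaffinity(0, {core_id})
    except (AttributeError, OSError):
        pass  # not available on this platform; the sketch degrades gracefully

def worst_case_jitter(period_s=0.001, cycles=200):
    # Busy-wait a periodic loop and report the worst deviation from the
    # nominal release time -- the 10-50 microsecond figure in the text.
    deadline = time.perf_counter()
    worst = 0.0
    for _ in range(cycles):
        deadline += period_s
        while time.perf_counter() < deadline:
            pass                          # spin: no sleep-induced latency
        worst = max(worst, time.perf_counter() - deadline)
    return worst

pin_to_core(0)
print(f"worst-case jitter: {worst_case_jitter() * 1e6:.1f} us")
```

On a general-purpose OS without core isolation the worst case is dominated by preemptions, which is exactly why the paper argues for reserving cores.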

  9. Study on efficiency of time computation in x-ray imaging simulation based on Monte Carlo algorithm using graphics processing unit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Setiani, Tia Dwi, E-mail: tiadwisetiani@gmail.com; Suprijadi; Nuclear Physics and Biophysics Research Division, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung, Jalan Ganesha 10, Bandung 40132

    Monte Carlo (MC) is one of the most powerful techniques for simulation in x-ray imaging. The MC method can simulate radiation transport within matter with high accuracy and provides a natural way to simulate radiation transport in complex systems. One of the MC-based codes widely used for radiographic image simulation is MC-GPU, a code developed by Andreu Badal. This study investigates the computation time of x-ray imaging simulation on a GPU (Graphics Processing Unit) compared to a standard CPU (Central Processing Unit). Furthermore, the effect of physical parameters on the quality of radiographic images and a comparison of the image quality resulting from simulation on the GPU and CPU are evaluated in this paper. The simulations were run serially on a CPU and on two GPUs with 384 cores and 2304 cores. In the GPU simulations, each core calculates one photon, so a large number of photons are calculated simultaneously. Results show that the simulations on the GPUs were significantly faster than on the CPU. The simulations on the 2304-core GPU ran about 64-114 times faster than on the CPU, while the simulations on the 384-core GPU ran about 20-31 times faster than on a single CPU core. Another result shows that optimum image quality was obtained with histories starting from 10^8 and energies from 60 keV to 90 keV. A statistical analysis shows that the quality of the GPU and CPU images is essentially the same.
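The photon-level parallelism described here is easy to picture in serial form: each history samples an exponential free path and tests whether the photon crosses the object. A toy attenuation-only sketch; a real MC-GPU run also scatters photons and scores a detector, and the 0.2 /cm coefficient is an assumed, water-like value:

```python
import math, random

def transmitted_fraction(mu_per_cm, thickness_cm, n_photons=100_000, seed=0):
    # Each history samples an exponential free path s = -ln(xi)/mu; the
    # photon "survives" if it crosses the slab without interacting.
    rng = random.Random(seed)
    survived = sum(
        1 for _ in range(n_photons)
        if -math.log(1.0 - rng.random()) / mu_per_cm > thickness_cm
    )
    return survived / n_photons

# Assumed water-like attenuation (0.2 /cm) through a 5 cm slab; the
# estimate should approach the Beer-Lambert value exp(-1) ~ 0.368.
f = transmitted_fraction(mu_per_cm=0.2, thickness_cm=5.0)
```

Because every history is independent, mapping one history per GPU core (as the abstract describes) parallelizes trivially; only the final tally needs a reduction.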

  10. Design and implementation of a simple nuclear power plant simulator

    NASA Astrophysics Data System (ADS)

    Miller, William H.

    1983-02-01

    A simple PWR nuclear power plant simulator has been designed and implemented on a minicomputer system. The system is intended for student use in understanding the power operation of a nuclear power plant. A PDP-11 minicomputer calculates reactor parameters in real time, displays the results on a graphics terminal, and accepts control inputs from a keyboard and joystick. Plant parameters calculated by the model include the core reactivity (based upon control rod positions, soluble boron concentration and reactivity feedback effects), the total core power, the axial core power distribution, the temperature and pressure in the primary and secondary coolant loops, etc.
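The reactivity-feedback loop such simulators compute each frame can be sketched as a one-group point-kinetics model with a temperature coefficient. All coefficients below are illustrative assumptions, not the simulator's actual constants, and delayed neutrons are omitted:

```python
def point_kinetics_step(P, T, rho_ext, dt,
                        Lambda=1e-4, alpha=-2e-5, T_ref=300.0,
                        heat_capacity=500.0, cooling=0.02):
    # One explicit-Euler step of a one-group point-kinetics model with a
    # negative temperature reactivity coefficient. All coefficients are
    # illustrative assumptions; delayed neutrons are omitted.
    rho = rho_ext + alpha * (T - T_ref)        # net reactivity
    P_new = P + dt * (rho / Lambda) * P        # point kinetics
    T_new = T + dt * (P / heat_capacity - cooling * (T - T_ref))
    return P_new, T_new

# A small external reactivity insertion: power rises until the core
# heats up and the negative coefficient pulls net reactivity toward zero.
P, T = 1.0, 300.0
for _ in range(20_000):                        # 20 s at dt = 1 ms
    P, T = point_kinetics_step(P, T, rho_ext=1e-4, dt=1e-3)
```

Stepping this loop at the display rate, with `rho_ext` derived from rod positions and boron concentration, is the kind of real-time calculation the PDP-11 performs.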

  11. Toward a more efficient and scalable checkpoint/restart mechanism in the Community Atmosphere Model

    NASA Astrophysics Data System (ADS)

    Anantharaj, Valentine

    2015-04-01

    The number of cores (both CPU and accelerator) in large-scale systems has been increasing rapidly over the past several years. In 2008, there were only 5 systems in the Top500 list that had over 100,000 total cores (including accelerator cores), whereas the number of systems with such capability had jumped to 31 by Nov 2014. This growth, however, has also increased hardware failure rates, necessitating the implementation of fault tolerance mechanisms in applications. The checkpoint and restart (C/R) approach is commonly used to save the state of the application and restart at a later time, either after failure or to continue execution of experiments. The implementation of an efficient C/R mechanism will make it more affordable to output the necessary C/R files more frequently. The availability of larger systems (more nodes, memory and cores) has also facilitated the scaling of applications. Nowadays, it is more common to conduct coupled global climate simulation experiments at 1 deg horizontal resolution (atmosphere), often requiring about 10^3 cores. At the same time, a few climate modeling teams that have access to a dedicated cluster and/or large-scale systems are involved in modeling experiments at 0.25 deg horizontal resolution (atmosphere) and 0.1 deg resolution for the ocean. These ultrascale configurations require on the order of 10^4 to 10^5 cores. It is not only necessary for the numerical algorithms to scale efficiently, but the input/output (IO) mechanism must also scale accordingly. An ongoing series of ultrascale climate simulations, using the Titan supercomputer at the Oak Ridge Leadership Computing Facility (ORNL), is based on the spectral element dynamical core of the Community Atmosphere Model (CAM-SE), which is a component of the Community Earth System Model and the DOE Accelerated Climate Model for Energy (ACME). The CAM-SE dynamical core for a 0.25 deg configuration has been shown to scale efficiently across 100,000 CPU cores. At this scale, there is an increased risk that the simulation could be terminated due to hardware failures, resulting in a loss that could be as high as 10^5 to 10^6 Titan core-hours. Increasing the frequency of the output of C/R files could mitigate this loss, but at the cost of additional C/R overhead. We are testing a more efficient C/R mechanism in CAM-SE. Our early implementation has demonstrated a nearly 3X performance improvement for a 1 deg CAM-SE (with CAM5 physics and MOZART chemistry) configuration using nearly 10^3 cores. We are in the process of scaling our implementation to 10^5 cores. This would allow us to run ultrascale simulations with more sophisticated physics and chemistry options while making better utilization of resources.
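A minimal version of the C/R pattern discussed (periodic snapshots, atomic writes, resume-if-present) can be sketched in a few lines; the dictionary state and step counter stand in for the model's actual restart files:

```python
import os, pickle, tempfile

def write_checkpoint(path, state):
    # Write to a temp file, then atomically rename: a crash mid-write
    # can never corrupt the last good checkpoint file.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, path)

def run(steps, ckpt="model.ckpt", interval=100):
    # Resume from the newest checkpoint if one exists, else cold-start.
    if os.path.exists(ckpt):
        with open(ckpt, "rb") as f:
            state = pickle.load(f)
    else:
        state = {"step": 0, "field": 0.0}
    while state["step"] < steps:
        state["field"] += 1.0            # stand-in for one model step
        state["step"] += 1
        if state["step"] % interval == 0:
            write_checkpoint(ckpt, state)
    return state
```

After a failure, rerunning `run` repeats at most `interval - 1` steps; raising `interval` trades lost recomputation against the C/R output overhead the abstract describes.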

  12. Melting and solidification behavior of Cu/Al and Ti/Al bimetallic core/shell nanoparticles during additive manufacturing by molecular dynamics simulation

    NASA Astrophysics Data System (ADS)

    Rahmani, Farzin; Jeon, Jungmin; Jiang, Shan; Nouranian, Sasan

    2018-05-01

    Molecular dynamics (MD) simulations were performed to investigate the role of core volume fraction and number of fusing nanoparticles (NPs) on the melting and solidification of Cu/Al and Ti/Al bimetallic core/shell NPs during a superfast heating and slow cooling process, roughly mimicking the conditions of selective laser melting (SLM). One recent trend in the SLM process is the rapid prototyping of nanoscopically heterogeneous alloys, wherein the precious core metal maintains its particulate nature in the final manufactured part. With this potential application in focus, the current work reveals the fundamental role of the interface in the two-stage melting of the core/shell alloy NPs. For a two-NP system, the melting zone gets broader as the core volume fraction increases. This effect is more pronounced for the Ti/Al system than the Cu/Al system because of a larger difference between the melting temperatures of the shell and core metals in the former than the latter. In a larger six-NP system (more nanoscopically heterogeneous), the melting and solidification temperatures of the shell Al roughly coincide, irrespective of the heating or cooling rate, implying that in the SLM process, the part manufacturing time can be reduced due to solidification taking place at higher temperatures. The nanostructure evolution during the cooling of six-NP systems is further investigated.

  13. A method of Modelling and Simulating the Back-to-Back Modular Multilevel Converter HVDC Transmission System

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Fan, Youping; Zhang, Dai; Ge, Mengxin; Zou, Xianbin; Li, Jingjiao

    2017-09-01

    This paper proposes a method to simulate a back-to-back modular multilevel converter (MMC) HVDC transmission system. We utilize an equivalent network to simulate the dynamic power system. Moreover, to account for the performance of the converter station, a model of its core components provides a basic simulation model. The proposed method is applied to an equivalent of a real power system.

  14. Study and simulation of the TTEthernet protocol on a flight management subsystem, and adaptation of task scheduling for simulation purposes

    NASA Astrophysics Data System (ADS)

    Abidi, Dhafer

    TTEthernet is a deterministic network technology that makes enhancements to Layer 2 Quality-of-Service (QoS) for Ethernet. The components that implement its services enrich Ethernet functionality with distributed fault-tolerant synchronization, robust temporal partitioning of bandwidth, and synchronous communication with fixed latency and low jitter. TTEthernet services can facilitate the design of scalable, robust, less complex distributed systems and architectures tolerant to faults. Simulation is nowadays an essential step in the critical systems design process and represents a valuable support for validation and performance evaluation. CoRE4INET is a project bringing together all TTEthernet simulation models currently available. It is based on the extension of models in the OMNeT++ INET framework. Our objective is to study and simulate the TTEthernet protocol on a flight management subsystem (FMS). The idea is to use CoRE4INET to design the simulation model of the target system. The problem is that CoRE4INET does not offer a task scheduling tool for TTEthernet networks. To overcome this problem, we propose an adaptation, for simulation purposes, of a task scheduling approach based on formal specification of network constraints. The use of the Yices solver allowed the translation of the formal specification into an executable program to generate the desired transmission plan. A case study finally allowed us to assess the impact of the arrangement of Time-Triggered frame offsets on the performance of each type of traffic in the system.
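The offset-scheduling problem handed to Yices can be pictured with a much smaller stand-in: assign each periodic time-triggered frame an offset so that no two frames ever occupy the same slot of the hyperperiod. A greedy sketch, where the frame names and periods are hypothetical and the thesis uses an SMT formulation instead:

```python
from math import gcd

def schedule_offsets(frames, slot_us=100):
    # Greedy assignment of transmission offsets for time-triggered
    # frames so that no two frames ever share a slot (a simplified
    # stand-in for the SMT/Yices formulation in the thesis).
    # frames: list of (name, period_in_slots) pairs.
    hyper = 1
    for _, p in frames:
        hyper = hyper * p // gcd(hyper, p)      # hyperperiod = lcm of periods
    busy = [None] * hyper
    plan = {}
    for name, period in sorted(frames, key=lambda f: f[1]):
        for offset in range(period):
            slots = range(offset, hyper, period)
            if all(busy[s] is None for s in slots):
                for s in slots:
                    busy[s] = name
                plan[name] = offset * slot_us   # offset in microseconds
                break
        else:
            raise ValueError(f"no feasible offset for {name}")
    return plan

plan = schedule_offsets([("nav", 4), ("fms", 2), ("radio", 8)])
```

An SMT solver replaces the greedy search with constraints of the form "offsets of any two frames never coincide modulo gcd of their periods", which also proves infeasibility when no plan exists.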

  15. VLBI-resolution radio-map algorithms: Performance analysis of different levels of data-sharing on multi-socket, multi-core architectures

    NASA Astrophysics Data System (ADS)

    Tabik, S.; Romero, L. F.; Mimica, P.; Plata, O.; Zapata, E. L.

    2012-09-01

    A broad area in astronomy focuses on simulating extragalactic objects based on Very Long Baseline Interferometry (VLBI) radio-maps. Several algorithms in this scope simulate what the observed radio-maps would be if emitted from a predefined extragalactic object. This work analyzes the performance and scaling of this kind of algorithm on multi-socket, multi-core architectures. In particular, we evaluate a sharing approach, a privatizing approach and a hybrid approach on systems with a complex memory hierarchy that includes a shared Last Level Cache (LLC). In addition, we investigate which manual processes can be systematized and then automated in future work. The experiments show that the data-privatizing model scales efficiently on medium-scale multi-socket, multi-core systems (up to 48 cores), while regardless of algorithmic and scheduling optimizations, the sharing approach is unable to reach acceptable scalability on more than one socket. However, the hybrid model with a specific level of data-sharing provides the best scalability over all of the multi-socket, multi-core systems used.
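The privatizing approach the experiments favor can be sketched with a thread-per-worker histogram: each worker writes only to its own private buffer, so no locks or cross-socket cache traffic are needed until a single merge at the end. The histogram task is illustrative; the paper's kernels are radio-map computations:

```python
from concurrent.futures import ThreadPoolExecutor

def privatized_histogram(samples, bins, workers=4):
    # Data-privatizing pattern: each worker fills its own private
    # histogram (no sharing, no locks), and the partial results are
    # merged exactly once at the end.
    def worker(chunk):
        local = [0] * bins                  # private to this worker
        for s in chunk:
            local[s % bins] += 1
        return local

    chunks = [samples[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(worker, chunks)
    merged = [0] * bins
    for part in partials:
        for i, c in enumerate(part):
            merged[i] += c
    return merged

hist = privatized_histogram(list(range(1000)), bins=8)
```

The sharing approach would have all workers increment one shared array under synchronization, which is exactly the cross-socket traffic the paper finds does not scale.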

  16. Scaling up Planetary Dynamo Modeling to Massively Parallel Computing Systems: The Rayleigh Code at ALCF

    NASA Astrophysics Data System (ADS)

    Featherstone, N. A.; Aurnou, J. M.; Yadav, R. K.; Heimpel, M. H.; Soderlund, K. M.; Matsui, H.; Stanley, S.; Brown, B. P.; Glatzmaier, G.; Olson, P.; Buffett, B. A.; Hwang, L.; Kellogg, L. H.

    2017-12-01

    In the past three years, CIG's Dynamo Working Group has successfully ported the Rayleigh code to the Argonne Leadership Computing Facility's Mira BG/Q device. In this poster, we present some of our first results, showing simulations of 1) convection in the solar convection zone; 2) dynamo action in Earth's core; and 3) convection in the Jovian deep atmosphere. These simulations have made efficient use of 131 thousand cores, 131 thousand cores and 232 thousand cores, respectively, on Mira. In addition to our novel results, the joys and logistical challenges of carrying out such large runs will also be discussed.

  17. The Core Avionics System for the DLR Compact-Satellite Series

    NASA Astrophysics Data System (ADS)

    Montenegro, S.; Dittrich, L.

    2008-08-01

    The Standard Satellite Bus's core avionics system is a further step in the development line of the software and hardware architecture first used in the bispectral infrared detector mission (BIRD). The next step improves the dependability, flexibility and simplicity of the whole core avionics system. Important aspects of this concept were already implemented, simulated and tested in other ESA and industrial projects, so the basic concept can be considered proven. This paper deals with different aspects of core avionics development and proposes an extension to the existing core avionics system of BIRD to meet current and future requirements regarding the flexibility, availability and reliability of small satellites and the continuously increasing demand for mass memory and computational power.

  18. An End-To-End Test of A Simulated Nuclear Electric Propulsion System

    NASA Technical Reports Server (NTRS)

    VanDyke, Melissa; Hrbud, Ivana; Goddfellow, Keith; Rodgers, Stephen L. (Technical Monitor)

    2002-01-01

    The Safe Affordable Fission Engine (SAFE) test series addresses Phase I Space Fission Systems issues, in particular non-nuclear testing and system integration issues, leading to the testing and non-nuclear demonstration of a 400-kW fully integrated flight unit. The first part of the SAFE 30 test series demonstrated operation of the simulated nuclear core and heat pipe system. Experimental data acquired in a number of different test scenarios will validate existing computational models, demonstrate system flexibility (fast start-ups, multiple start-ups/shut-downs), and simulate predictable failure modes and operating environments. The objective of the second part is to demonstrate an integrated propulsion system consisting of a core, a conversion system and a thruster, where the system converts thermal energy into jet power. This end-to-end system demonstration sets a precedent for ground testing of nuclear electric propulsion systems. The paper describes the SAFE 30 end-to-end system demonstration and its subsystems.

  19. A review of training research and virtual reality simulators for the da Vinci surgical system.

    PubMed

    Liu, May; Curet, Myriam

    2015-01-01

    PHENOMENON: Virtual reality simulators are the subject of several recent studies of skills training for robot-assisted surgery. Yet no consensus exists regarding what a core skill set comprises or how to measure skill performance. Defining a core skill set and relevant metrics would help surgical educators evaluate different simulators. This review draws from published research to propose a core technical skill set for using the da Vinci surgeon console. Publications on three commercial simulators were used to evaluate the simulators' content addressing these skills and associated metrics. An analysis of published research suggests that a core technical skill set for operating the surgeon console includes bimanual wristed manipulation, camera control, master clutching to manage hand position, use of the third instrument arm, activating energy sources, appropriate depth perception, and awareness of forces applied by instruments. Validity studies of three commercial virtual reality simulators for robot-assisted surgery suggest that all three have comparable content and metrics. However, none has comprehensive content and metrics for all core skills. INSIGHTS: Virtual reality simulation remains a promising tool to support skill training for robot-assisted surgery, yet existing commercial simulator content is inadequate for performing and assessing a comprehensive basic skill set. The results of this evaluation help identify opportunities and challenges that exist for future developments in virtual reality simulation for robot-assisted surgery. Specifically, the inclusion of educational experts in the development cycle alongside clinical and technological experts is recommended.

  20. Design of elliptical-core mode-selective photonic lanterns with six modes for MIMO-free mode division multiplexing systems.

    PubMed

    Sai, Xiaowei; Li, Yan; Yang, Chen; Li, Wei; Qiu, Jifang; Hong, Xiaobin; Zuo, Yong; Guo, Hongxiang; Tong, Weijun; Wu, Jian

    2017-11-01

    Elliptical-core few mode fiber (EC-FMF) is used in a mode division multiplexing (MDM) transmission system to dispense with multiple-input-multiple-output (MIMO) digital signal processing, which reduces the cost and complexity of the receiver. However, EC-FMF does not match conventional multiplexers/de-multiplexers (MUXs/DeMUXs) such as photonic lanterns, leading to extra mode coupling loss and crosstalk. We design elliptical-core mode-selective photonic lanterns (EC-MSPLs) with six modes, which can match well with EC-FMF in MIMO-free MDM systems. Simulation of the EC-MSPL using the beam propagation method was demonstrated employing a combination of either step-index or graded-index fibers with six different core sizes, and a taper transition length of 8 cm or 4 cm. Through numerical simulations and optimizations, both types of photonic lanterns can realize low-loss transmission and low crosstalk below -20.0 dB for all modes.

  1. The TAVERNS emulator: An Ada simulation of the space station data communications network and software development environment

    NASA Technical Reports Server (NTRS)

    Howes, Norman R.

    1986-01-01

    The Space Station DMS (Data Management System) is the onboard component of the Space Station Information System (SSIS) that includes the computers, networks and software that support the various core and payload subsystems of the Space Station. TAVERNS (Test And Validation Environment for Remote Networked Systems) is a distributed approach for development and validation of application software for Space Station. The TAVERNS concept assumes that the different subsystems will be developed by different contractors who may be geographically separated. The TAVERNS Emulator is an Ada simulation of a TAVERNS on the ASD VAX. The software services described in the DMS Test Bed User's Manual are being emulated on the VAX together with simulations of some of the core subsystems and a simulation of the DCN. The TAVERNS Emulator will be accessible remotely from any VAX that can communicate with the ASD VAX.

  2. Running climate model on a commercial cloud computing environment: A case study using Community Earth System Model (CESM) on Amazon AWS

    NASA Astrophysics Data System (ADS)

    Chen, Xiuhong; Huang, Xianglei; Jiao, Chaoyi; Flanner, Mark G.; Raeker, Todd; Palen, Brock

    2017-01-01

    The suites of numerical models used for simulating the climate of our planet are usually run on dedicated high-performance computing (HPC) resources. This study investigates an alternative to the usual approach, i.e. carrying out climate model simulations in a commercially available cloud computing environment. We test the performance and reliability of running the CESM (Community Earth System Model), a flagship climate model in the United States developed by the National Center for Atmospheric Research (NCAR), on Amazon Web Services (AWS) EC2, the cloud computing environment by Amazon.com, Inc. StarCluster is used to create a virtual computing cluster on AWS EC2 for the CESM simulations. The wall-clock time for one year of CESM simulation on the AWS EC2 virtual cluster is comparable to the time spent on the same simulation on a local dedicated high-performance computing cluster with InfiniBand connections. The CESM simulation can be efficiently scaled with the number of CPU cores on the AWS EC2 virtual cluster environment up to 64 cores. For the standard configuration of the CESM at a spatial resolution of 1.9° latitude by 2.5° longitude, increasing the number of cores from 16 to 64 reduces the wall-clock running time by more than 50% and the scaling is nearly linear. Beyond 64 cores, the communication latency starts to outweigh the benefit of distributed computing and the parallel speedup becomes nearly unchanged.
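Scaling numbers like these are usually reduced to speedup and parallel efficiency relative to the smallest run. A small helper, with made-up wall-clock times standing in for the paper's measurements:

```python
def scaling_table(timings):
    # timings: {core_count: wall_clock_seconds}. Speedup and parallel
    # efficiency are computed relative to the smallest core count, as in
    # the 16-to-64-core CESM comparison above.
    base_cores = min(timings)
    base_time = timings[base_cores]
    table = {}
    for cores, t in sorted(timings.items()):
        speedup = base_time / t
        efficiency = speedup / (cores / base_cores)
        table[cores] = (speedup, efficiency)
    return table

# Hypothetical wall-clock times for one model-year (not the paper's data):
t = scaling_table({16: 100.0, 32: 55.0, 64: 30.0})
# 64 cores: a >50% runtime reduction vs 16 cores, efficiency ~0.83
```

An efficiency that collapses beyond some core count, as reported here beyond 64 cores, is the usual signature of communication latency overtaking the per-core work.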

  3. DYNAMICO, an atmospheric dynamical core for high-performance climate modeling

    NASA Astrophysics Data System (ADS)

    Dubos, Thomas; Meurdesoif, Yann; Spiga, Aymeric; Millour, Ehouarn; Fita, Lluis; Hourdin, Frédéric; Kageyama, Masa; Traore, Abdoul-Khadre; Guerlet, Sandrine; Polcher, Jan

    2017-04-01

    Institut Pierre Simon Laplace has developed a very scalable atmospheric dynamical core, DYNAMICO, based on energy-conserving finite-difference/finite-volume numerics on a quasi-uniform icosahedral-hexagonal mesh. Scalability is achieved by combining hybrid MPI/OpenMP parallelism with asynchronous I/O. This dynamical core has been coupled to radiative transfer physics tailored to the atmosphere of Saturn, allowing unprecedented simulations of the climate of this giant planet. For terrestrial climate studies DYNAMICO is being integrated into the IPSL Earth System Model IPSL-CM. Preliminary aquaplanet and AMIP-style simulations yield reasonable results when compared to outputs from IPSL-CM5. The observed performance suggests that an order of magnitude may be gained with respect to IPSL-CM CMIP5 simulations, either in the duration of simulations or in their resolution. Longer simulations would be of interest for the study of paleoclimate, while higher resolution could improve certain aspects of the modeled climate, such as extreme events, as will be explored in the HighResMIP project. Following IPSL's strategic vision of building a unified global-regional modelling system, a fully compressible, non-hydrostatic prototype of DYNAMICO has been developed, enabling future convection-resolving simulations. Work supported by ANR project "HEAT", grant number CE23_2014_HEAT. Dubos, T., Dubey, S., Tort, M., Mittal, R., Meurdesoif, Y., and Hourdin, F.: DYNAMICO-1.0, an icosahedral hydrostatic dynamical core designed for consistency and versatility, Geosci. Model Dev., 8, 3131-3150, doi:10.5194/gmd-8-3131-2015, 2015.

  4. An MPI-based MoSST core dynamics model

    NASA Astrophysics Data System (ADS)

    Jiang, Weiyuan; Kuang, Weijia

    2008-09-01

    Distributed systems are among the most cost-effective and expandable platforms for high-end scientific computing. Therefore scalable numerical models are important for the effective use of such systems. In this paper, we present an MPI-based numerical core dynamics model for the simulation of geodynamo and planetary dynamos, and for the simulation of core-mantle interactions. The model is developed with MPI libraries. Two algorithms are used for node-node communication: a "master-slave" architecture and a "divide-and-conquer" architecture. The former is easy to implement but not scalable in communication. The latter is scalable in both computation and communication. The model's scalability is tested on Linux PC clusters with up to 128 nodes. The model is also benchmarked against a published numerical dynamo model solution.
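The communication difference between the two architectures is the point: a master-slave gather costs n - 1 sequential receives at the master, while a divide-and-conquer reduction finishes in ceil(log2 n) rounds. A serial sketch of the recursive-halving pattern (node i absorbs the partial sum of node i + step each round):

```python
def recursive_halving_sum(values):
    # Divide-and-conquer reduction: in round k, node i accumulates the
    # partial sum held by node i + 2**k, so n nodes finish in
    # ceil(log2(n)) communication rounds rather than the n - 1
    # sequential receives of a master-slave gather.
    data = list(values)
    n = len(data)
    step, rounds = 1, 0
    while step < n:
        for i in range(0, n - step, 2 * step):
            data[i] += data[i + step]      # "receive" from the partner node
        step *= 2
        rounds += 1
    return data[0], rounds

total, rounds = recursive_halving_sum(range(128))   # 8128 in 7 rounds
```

In a real MPI code each round is one `MPI_Recv` per surviving node, which is why this pattern, unlike master-slave, keeps communication cost logarithmic as the cluster grows.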

  5. Proton core-beam system in the expanding solar wind: Hybrid simulations

    NASA Astrophysics Data System (ADS)

    Hellinger, Petr; Trávníček, Pavel M.

    2011-11-01

    Results of a two-dimensional hybrid expanding box simulation of a proton beam-core system in the solar wind are presented. The expansion with a strictly radial magnetic field leads to a decrease of the ratio between the proton perpendicular and parallel temperatures as well as to an increase of the ratio between the beam-core differential velocity and the local Alfvén velocity creating a free energy for many different instabilities. The system is indeed most of the time marginally stable with respect to the parallel magnetosonic, oblique Alfvén, proton cyclotron and parallel fire hose instabilities which determine the system evolution counteracting some effects of the expansion and interacting with each other. Nonlinear evolution of these instabilities leads to large modifications of the proton velocity distribution function. The beam and core protons are slowed with respect to each other and heated, and at later stages of the evolution the two populations are not clearly distinguishable. On the macroscopic level the instabilities cause large departures from the double adiabatic prediction leading to an efficient isotropization of effective proton temperatures in agreement with Helios observations.

  6. Prospect of Using Numerical Dynamo Model for Prediction of Geomagnetic Secular Variation

    NASA Technical Reports Server (NTRS)

    Kuang, Weijia; Tangborn, Andrew

    2003-01-01

    Modeling of the Earth's core has reached a level of maturity where the incorporation of observations into the simulations through data assimilation has become feasible. Data assimilation is a method by which observations of a system are combined with a model output (or forecast) to obtain a best guess of the state of the system, called the analysis. The analysis is then used as an initial condition for the next forecast. By doing assimilation, not only shall we be able to partially predict the secular variation of the core field, we could also use observations to further our understanding of dynamical states in the Earth's core. One of the first steps in the development of an assimilation system is a comparison between the observations and the model solution. The highly turbulent nature of core dynamics, along with the absence of any regular external forcing and constraint (which occur in atmospheric dynamics, for example), means that short-time comparisons (approx. 1000 years) cannot be made between model and observations. In order to make sensible comparisons, a direct insertion assimilation method has been implemented. In this approach, magnetic field observations at the Earth's surface are substituted into the numerical model, such that the ratio of the multipole components to the dipole component from observation is adjusted at the core-mantle boundary and extended to the interior of the core, while the total magnetic energy remains unchanged. This adjusted magnetic field is then used as the initial field for a new simulation. In this way, a time-tagged simulation is created which can then be compared directly with observations. We present numerical solutions with and without data insertion and discuss their implications for the development of a more rigorous assimilation system.
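The direct-insertion adjustment described, imposing the observed non-dipole-to-dipole ratios while holding the total magnetic energy fixed, can be sketched on a toy coefficient vector. A sum of squares stands in for magnetic energy; the real adjustment acts on spherical-harmonic Gauss coefficients at the core-mantle boundary:

```python
import math

def direct_insertion(model, observed):
    # Direct-insertion sketch: impose the observed ratios of the
    # non-dipole coefficients to the dipole, then rescale uniformly so
    # the total "magnetic energy" (sum of squares here) is unchanged,
    # mirroring the adjustment described in the abstract.
    dipole = model[0]
    adjusted = [dipole] + [dipole * g / observed[0] for g in observed[1:]]
    scale = math.sqrt(sum(g * g for g in model) /
                      sum(g * g for g in adjusted))
    return [scale * g for g in adjusted]

# Toy Gauss-coefficient-like vectors, dipole term first (values invented).
model = [30.0, 2.0, 1.0]
obs = [29.0, 4.0, 0.5]
analysis = direct_insertion(model, obs)
```

Because the rescaling is uniform, the observed ratios survive while the energy constraint is met exactly; the adjusted field then seeds the next forecast, as in the abstract.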

  7. Reversible Parallel Discrete-Event Execution of Large-scale Epidemic Outbreak Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perumalla, Kalyan S; Seal, Sudip K

    2010-01-01

    The spatial scale, runtime speed and behavioral detail of epidemic outbreak simulations together require the use of large-scale parallel processing. In this paper, an optimistic parallel discrete event execution of a reaction-diffusion simulation model of epidemic outbreaks is presented, with an implementation over the µsik simulator. Rollback support is achieved with the development of a novel reversible model that combines reverse computation with a small amount of incremental state saving. Parallel speedup and other runtime performance metrics of the simulation are tested on a small (8,192-core) Blue Gene/P system, while scalability is demonstrated on 65,536 cores of a large Cray XT5 system. Scenarios representing large population sizes (up to several hundred million individuals in the largest case) are exercised.
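    The rollback technique named above, reverse computation combined with a small amount of incremental state saving, can be illustrated with a toy event handler. The state fields and the clamp rule here are hypothetical, not the paper's actual reaction-diffusion model: the increment is undone by computation, while only the value destroyed by the irreversible clamp is saved.

```python
def forward(state, delta):
    """Forward event: a reversible increment plus an irreversible clamp.
    The clamp destroys information, so the overwritten value is saved
    (incremental state saving); the increment needs no saved state."""
    saved = None
    state["infected"] += delta          # reversible: undone by subtracting
    if state["infected"] > state["cap"]:
        saved = state["infected"]       # save before the destructive write
        state["infected"] = state["cap"]
    return saved

def reverse(state, delta, saved):
    """Rollback: restore the clamped value if any, then reverse-compute."""
    if saved is not None:
        state["infected"] = saved
    state["infected"] -= delta
```

A rollback of `forward` via `reverse` restores the state bit-for-bit, which is what an optimistic simulator needs after a causality violation.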

  8. Towards asteroseismology of core-collapse supernovae with gravitational-wave observations - I. Cowling approximation

    NASA Astrophysics Data System (ADS)

    Torres-Forné, Alejandro; Cerdá-Durán, Pablo; Passamonti, Andrea; Font, José A.

    2018-03-01

    Gravitational waves from core-collapse supernovae are produced by the excitation of different oscillation modes in the protoneutron star (PNS) and its surroundings, including the shock. In this work we study the relationship between the post-bounce oscillation spectrum of the PNS-shock system and the characteristic frequencies observed in gravitational-wave signals from core-collapse simulations. This is a fundamental first step in order to develop a procedure to infer astrophysical parameters of the PNS formed in core-collapse supernovae. Our method combines information from the oscillation spectrum of the PNS, obtained through linear perturbation analysis in general relativity of a background physical system, with information from the gravitational-wave spectrum of the corresponding non-linear, core-collapse simulation. Using results from the simulation of the collapse of a 35 M⊙ pre-supernova progenitor we show that both types of spectra are indeed related and we are able to identify the modes of oscillation of the PNS, namely g-modes, p-modes, hybrid modes, and standing accretion shock instability (SASI) modes, obtaining a remarkably close correspondence with the time-frequency distribution of the gravitational-wave modes. The analysis presented in this paper provides a proof of concept that asteroseismology is indeed possible in the core-collapse scenario, and it may serve as a basis for future work on PNS parameter inference based on gravitational-wave observations.

  9. Assessment of the Neutronic and Fuel Cycle Performance of the Transatomic Power Molten Salt Reactor Design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robertson, Sean; Dewan, Leslie; Massie, Mark

    This report presents results from a collaboration between Transatomic Power Corporation (TAP) and Oak Ridge National Laboratory (ORNL) to provide neutronic and fuel cycle analysis of the TAP core design through the Department of Energy Gateway for Accelerated Innovation in Nuclear (GAIN) Nuclear Energy Voucher program. The TAP concept is a molten salt reactor using configurable zirconium hydride moderator rod assemblies to shift the neutron spectrum in the core from mostly epithermal at beginning of life to thermal at end of life. Additional developments in the ChemTriton modeling and simulation tool provide the critical moderator-to-fuel ratio searches and time-dependent parameters necessary to simulate the continuously changing physics in this complex system. The implementation of continuous-energy Monte Carlo transport and depletion tools in ChemTriton provides for full-core three-dimensional modeling and simulation. Results from simulations with these tools show agreement with TAP-calculated performance metrics for core lifetime, discharge burnup, and salt volume fraction, verifying the viability of reducing actinide waste production with this concept. Additional analyses of mass feed rates and enrichments, isotopic removals, tritium generation, core power distribution, core vessel helium generation, moderator rod heat deposition, and reactivity coefficients provide additional information to make informed design decisions. This work demonstrates capabilities of ORNL modeling and simulation tools for neutronic and fuel cycle analysis of molten salt reactor concepts.

  10. Highly Efficient Parallel Multigrid Solver For Large-Scale Simulation of Grain Growth Using the Structural Phase Field Crystal Model

    NASA Astrophysics Data System (ADS)

    Guan, Zhen; Pekurovsky, Dmitry; Luce, Jason; Thornton, Katsuyo; Lowengrub, John

    The structural phase field crystal (XPFC) model can be used to model grain growth in polycrystalline materials at diffusive timescales while maintaining atomic-scale resolution. However, the governing equation of the XPFC model is an integro-partial differential equation (IPDE), which poses challenges for implementation on high-performance computing (HPC) platforms. In collaboration with the XSEDE Extended Collaborative Support Service, we developed a distributed-memory HPC solver for the XPFC model, which combines parallel multigrid and P3DFFT. Performance benchmarking on the Stampede supercomputer indicates near-linear strong and weak scaling, for both the multigrid solve and the transfer time between the multigrid and FFT modules, up to 1024 cores. Scalability of the FFT module begins to decline at 128 cores, but it is sufficient for the type of problem we will be examining. We have demonstrated simulations using 1024 cores, and we expect to scale to 4096 cores and beyond. Ongoing work involves optimization of MPI/OpenMP-based codes for the Intel KNL Many-Core Architecture. This optimizes the code for coming pre-exascale systems, in particular many-core systems such as Stampede 2.0 and Cori 2 at NERSC, without sacrificing efficiency on other general HPC systems.

  11. Deflection Measurements of a Thermally Simulated Nuclear Core Using a High-Resolution CCD-Camera

    NASA Technical Reports Server (NTRS)

    Stanojev, B. J.; Houts, M.

    2004-01-01

    Space fission systems under consideration for near-term missions all use compact, fast-spectrum reactor cores. Reactor dimensional change with increasing temperature, which affects neutron leakage, is the dominant source of reactivity feedback in these systems. Accurately measuring core dimensional changes during realistic non-nuclear testing is therefore necessary for predicting the nuclear-equivalent behavior of the system. This paper discusses one key technique being evaluated for measuring such changes. The proposed technique is to use a Charge-Coupled Device (CCD) sensor to obtain deformation readings of an electrically heated, prototypic reactor core geometry. This paper introduces a technique by which a single high-spatial-resolution CCD camera is used to measure core deformation in Real Time (RT). Initial system checkout results are presented, along with a discussion of how additional cameras could be used to achieve a three-dimensional deformation profile of the core during test.

  12. Toward GEOS-6, A Global Cloud System Resolving Atmospheric Model

    NASA Technical Reports Server (NTRS)

    Putman, William M.

    2010-01-01

    NASA is committed to observing and understanding the weather and climate of our home planet through the use of multi-scale modeling systems and space-based observations. Global climate models have evolved to take advantage of the influx of multi- and many-core computing technologies and the availability of large clusters of multi-core microprocessors. GEOS-6 is a next-generation cloud-system-resolving atmospheric model that will place NASA at the forefront of scientific exploration of our atmosphere and climate. Model simulations with GEOS-6 will produce a realistic representation of our atmosphere on the scale of typical satellite observations, bringing a visual comprehension of model results to a new level among climate enthusiasts. In preparation for GEOS-6, the agency's flagship Earth System Modeling Framework has been enhanced to support cutting-edge high-resolution global climate and weather simulations. Improvements include a cubed-sphere grid that exposes parallelism, a non-hydrostatic finite-volume dynamical core, and algorithms designed for co-processor technologies, among others. GEOS-6 represents a fundamental advancement in the capability of global Earth system models. The ability to directly compare global simulations at the resolution of spaceborne satellite images will lead to algorithm improvements and better utilization of space-based observations within the GEOS data assimilation system.

  13. Multicore Education through Simulation

    ERIC Educational Resources Information Center

    Ozturk, O.

    2011-01-01

    A project-oriented course for advanced undergraduate and graduate students is described for simulating multiple processor cores. Simics, a free simulator for academia, was utilized to enable students to explore computer architecture, operating systems, and hardware/software cosimulation. Motivation for including this course in the curriculum is…

  14. Tailoring the response of Autonomous Reactivity Control (ARC) systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qvist, Staffan A.; Hellesen, Carl; Gradecka, Malwina

    The Autonomous Reactivity Control (ARC) system was developed to ensure the inherent safety of Generation IV reactors while having a minimal impact on reactor performance and economic viability. In this study we present the transient response of fast reactor cores to postulated accident scenarios with and without ARC systems installed. Using a combination of analytical methods and numerical simulation, the principles of ARC system design that assure stability and avoid oscillatory behavior have been identified. A comprehensive transient analysis study for ARC-equipped cores, including a series of Unprotected Loss of Flow (ULOF) and Unprotected Loss of Heat Sink (ULOHS) simulations, was performed for Argonne National Laboratory (ANL) Advanced Burner Reactor (ABR) designs. With carefully designed ARC systems installed in the fuel assemblies, the cores exhibit a smooth, non-oscillatory transition to stabilization at acceptable temperatures following all postulated transients. To avoid oscillations in power and temperature, the reactivity introduced per degree of temperature change in the ARC system needs to be kept below a certain threshold, the value of which is system-dependent, and the temperature span of actuation needs to be as large as possible.
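    The existence of a gain threshold for oscillation can be illustrated with a toy linearized model, not the ARC design equations: a normalized heat balance `T' = P - T` coupled to reactivity feedback `P' = -k*T`, where `k` stands in for the reactivity inserted per degree. In this sketch the eigenvalues solve `s^2 + s + k = 0`, so the response turns oscillatory once `k > 1/4`; all names and values are illustrative.

```python
def simulate(k, dt=0.01, t_end=30.0):
    """Euler-integrate the toy system T' = P - T, P' = -k*T and count
    zero crossings of T: a smooth return gives at most one crossing,
    while an oscillatory (ringing) return gives several."""
    T, P = 1.0, 0.0                 # initial temperature perturbation
    crossings, prev = 0, T
    for _ in range(int(t_end / dt)):
        T, P = T + dt * (P - T), P + dt * (-k * T)
        if prev * T < 0:
            crossings += 1
        prev = T
    return crossings

# Small feedback gain: smooth, non-oscillatory return to equilibrium.
# Large feedback gain: ringing, i.e. repeated sign changes of T.
smooth, ringing = simulate(0.1), simulate(1.0)
```

The design message mirrors the abstract: keeping the feedback per degree below the (system-dependent) threshold avoids the ringing branch entirely.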

  15. Scaling of Multimillion-Atom Biological Molecular Dynamics Simulation on a Petascale Supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schulz, Roland; Lindner, Benjamin; Petridis, Loukas

    2009-01-01

    A strategy is described for a fast all-atom molecular dynamics simulation of multimillion-atom biological systems on massively parallel supercomputers. The strategy is developed using benchmark systems of particular interest to bioenergy research, comprising models of cellulose and lignocellulosic biomass in an aqueous solution. The approach involves using the reaction field (RF) method for the computation of long-range electrostatic interactions, which permits efficient scaling on many thousands of cores. Although the range of applicability of the RF method for biomolecular systems remains to be demonstrated, for the benchmark systems the use of the RF produces molecular dipole moments, Kirkwood G factors, other structural properties, and mean-square fluctuations in excellent agreement with those obtained with the commonly used Particle Mesh Ewald method. With RF, three million- and five million-atom biological systems scale well up to 30k cores, producing 30 ns/day. Atomistic simulations of very large systems for time scales approaching the microsecond would, therefore, appear now to be within reach.

  16. Scaling of Multimillion-Atom Biological Molecular Dynamics Simulation on a Petascale Supercomputer.

    PubMed

    Schulz, Roland; Lindner, Benjamin; Petridis, Loukas; Smith, Jeremy C

    2009-10-13

    A strategy is described for a fast all-atom molecular dynamics simulation of multimillion-atom biological systems on massively parallel supercomputers. The strategy is developed using benchmark systems of particular interest to bioenergy research, comprising models of cellulose and lignocellulosic biomass in an aqueous solution. The approach involves using the reaction field (RF) method for the computation of long-range electrostatic interactions, which permits efficient scaling on many thousands of cores. Although the range of applicability of the RF method for biomolecular systems remains to be demonstrated, for the benchmark systems the use of the RF produces molecular dipole moments, Kirkwood G factors, other structural properties, and mean-square fluctuations in excellent agreement with those obtained with the commonly used Particle Mesh Ewald method. With RF, three million- and five million-atom biological systems scale well up to ∼30k cores, producing ∼30 ns/day. Atomistic simulations of very large systems for time scales approaching the microsecond would, therefore, appear now to be within reach.
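    The reaction field (RF) method used in this work replaces the long-range lattice sum with a cutoff-based pair potential. Below is a sketch in reduced units following the standard RF form, in which a quadratic term and a constant shift make the potential vanish at the cutoff; the function name and parameter values are illustrative, not taken from the paper.

```python
def reaction_field_energy(qq, r, r_cut, eps_rf):
    """Pairwise RF electrostatic energy in reduced units, where
    qq = q_i*q_j/(4*pi*eps0).  Standard form:
        V(r) = qq*(1/r + k_rf*r**2 - c_rf)  for r < r_cut, else 0,
    with k_rf and c_rf chosen so that V(r_cut) = 0."""
    if r >= r_cut:
        return 0.0
    k_rf = (eps_rf - 1.0) / ((2.0 * eps_rf + 1.0) * r_cut ** 3)
    c_rf = 1.0 / r_cut + k_rf * r_cut ** 2
    return qq * (1.0 / r + k_rf * r ** 2 - c_rf)
```

Because every pair interaction vanishes smoothly at `r_cut`, the sum is strictly local, which is what lets the method scale to many thousands of cores without the global communication a Particle Mesh Ewald step requires.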

  17. Necroplanetology: Disrupted Planetary Material Transiting WD 1145+017

    NASA Astrophysics Data System (ADS)

    Manideep Duvvuri, Girish; Redfield, Seth; Veras, Dimitri

    2018-06-01

    The WD 1145+017 system shows irregular transit features that are consistent with the tidal disruption of differentiated asteroids with bulk densities < 4 g cm^-3 and bulk masses < 10^21 kg. We use the open-source N-body code REBOUND to simulate this disruption with different internal structures: varying the core volume fraction, mantle/core density ratio, and the presence/absence of a thin low-density crust. We show that these parameters have observationally distinguishable effects on the transit light curve as the asteroid is disrupted, and fit the simulation-generated light curves to data. We find that an asteroid with a low core fraction, low mantle/core density ratio, and no crust is most consistent with the A1 feature present for multiple weeks circa April 2017. This combination of observations and simulations to study the interior structure and chemistry of exoplanetary bodies via their destruction in action is an early example of necroplanetology, a field that will hopefully grow with the discovery of other systems like WD 1145+017.

  18. Core analysis of heterogeneous rocks using experimental observations and digital whole core simulation

    NASA Astrophysics Data System (ADS)

    Jackson, S. J.; Krevor, S. C.; Agada, S.

    2017-12-01

    A number of studies have demonstrated the prevalent impact that small-scale rock heterogeneity can have on larger scale flow in multiphase flow systems, including petroleum production and CO2 sequestration. Larger scale modeling has shown that this has a significant impact on fluid flow and is possibly a significant source of inaccuracy in reservoir simulation. Yet no core analysis protocol has been developed that faithfully represents the impact of these heterogeneities on the flow functions used in modeling. Relative permeability is derived from core floods performed at conditions with high flow potential, in which the impact of capillary heterogeneity is voided. A more accurate representation would be obtained if measurements were made at flow conditions where the impact of capillary heterogeneity on flow is scaled to be representative of the reservoir system. This, however, is generally impractical due to laboratory constraints and the role of the orientation of the rock heterogeneity. We demonstrate a workflow of combined observations and simulations, in which the impact of capillary heterogeneity may be faithfully represented in the derivation of upscaled flow properties. Laboratory measurements that are a variation of conventional protocols are used for the parameterization of an accurate digital rock model for simulation. The relative permeability at the range of capillary numbers relevant to flow in the reservoir is derived primarily from numerical simulations of core floods that include capillary pressure heterogeneity. This allows flexibility in the orientation of the heterogeneity and in the range of flow rates considered.
We demonstrate the approach in which digital rock models have been developed alongside core flood observations for three applications: (1) A Bentheimer sandstone with a simple axial heterogeneity to demonstrate the validity and limitations of the approach, (2) a set of reservoir rocks from the Captain sandstone in the UK North Sea targeted for CO2 storage, and for which the use of capillary pressure hysteresis is necessary, and (3) a secondary CO2-EOR production of residual oil from a Berea sandstone with layered heterogeneities. In all cases the incorporation of heterogeneity is shown to be key to the ultimate derivation of flow properties representative of the reservoir system.

  19. Postcollapse Evolution of Globular Clusters

    NASA Astrophysics Data System (ADS)

    Makino, Junichiro

    1996-11-01

    A number of globular clusters appear to have undergone core collapse, in the sense that their predicted collapse times are much shorter than their current ages. Simulations with gas models and the Fokker-Planck approximation have shown that the central density of a globular cluster after the collapse undergoes nonlinear oscillation with a large amplitude (gravothermal oscillation). However, the question of whether such an oscillation actually takes place in real N-body systems has remained unsolved, because an N-body simulation with sufficiently high resolution would have required computing resources of the order of several GFLOPS-yr. In the present paper, we report the results of such a simulation performed on a dedicated special-purpose computer, GRAPE-4. We have simulated the evolution of isolated point-mass systems with up to 32,768 particles. The largest number of particles reported previously is 10,000. We confirm that gravothermal oscillation takes place in an N-body system. The expansion phase shows all the signatures that are considered to be evidence of the gravothermal nature of the oscillation. At the maximum expansion, the core radius is ~1% of the half-mass radius for the run with 32,768 particles. The maximum core size, r_c, depends on N as r_c ∝ N^(-1/3).

  20. Analysis of C/E results of fission rate ratio measurements in several fast lead VENUS-F cores

    NASA Astrophysics Data System (ADS)

    Kochetkov, Anatoly; Krása, Antonín; Baeten, Peter; Vittiglio, Guido; Wagemans, Jan; Bécares, Vicente; Bianchini, Giancarlo; Fabrizio, Valentina; Carta, Mario; Firpo, Gabriele; Fridman, Emil; Sarotto, Massimo

    2017-09-01

    During the GUINEVERE FP6 European project (2006-2011), the zero-power VENUS water-moderated reactor was modified into VENUS-F, a mock-up of a lead-cooled fast-spectrum system with solid components that can be operated in both critical and subcritical mode. The Fast Reactor Experiments for hybrid Applications (FREYA) FP7 project was launched in 2011 to support the designs of the MYRRHA Accelerator Driven System (ADS) and the ALFRED Lead Fast Reactor (LFR). Three VENUS-F critical core configurations simulating the complex MYRRHA core design, and one configuration devoted to the LFR ALFRED core conditions, were investigated in 2015. The MYRRHA-related cores simulated, step by step, design peculiarities such as the BeO reflector and in-pile sections. For all of these cores the fuel assemblies were of a simple design consisting of 30% enriched metallic uranium, lead rodlets to simulate the coolant, and Al2O3 rodlets to simulate the oxide fuel. Fission rate ratios of minor actinides such as Np-237 and Am-241, as well as of Pu-239, Pu-240, Pu-242 and U-238, to U-235 were measured in these VENUS-F critical assemblies with small fission chambers in specially designed locations, to determine the spectral indices under the different neutron spectrum conditions. The measurements have been analyzed using advanced computational tools, including deterministic and stochastic codes, and different nuclear data sets such as JEFF-3.1, JEFF-3.2, ENDF/B-VII.1 and JENDL-4.0. The analysis of the C/E discrepancies will help to improve the nuclear data in the specific energy region of fast neutron reactor spectra.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Epiney, A.; Canepa, S.; Zerkak, O.

    The STARS project at the Paul Scherrer Institut (PSI) has adopted the TRACE thermal-hydraulic (T-H) code for best-estimate system transient simulations of the Swiss Light Water Reactors (LWRs). For analyses involving interactions between system and core, a coupling of TRACE with the SIMULATE-3K (S3K) LWR core simulator has also been developed. In this configuration, the TRACE code and associated nuclear power reactor simulation models play a central role in achieving a comprehensive safety analysis capability. Thus, efforts have now been undertaken to consolidate the validation strategy by implementing a more rigorous and structured assessment approach for TRACE applications involving either only system T-H evaluations or requiring interfaces to e.g. detailed core or fuel behavior models. The first part of this paper presents the preliminary concepts of this validation strategy. The principle is to systematically track the evolution of a given set of predicted physical Quantities of Interest (QoIs) over a multidimensional parametric space where each of the dimensions represents the evolution of a specific analysis aspect, including e.g. code version, transient-specific simulation methodology and model "nodalisation". If properly set up, such an environment should provide code developers and code users with persistent (less affected by user effect) and quantified information (sensitivity of QoIs) on the applicability of a simulation scheme (codes, input models, methodology) for steady-state and transient analysis of full LWR systems. Through this, for each given transient/accident, critical paths of the validation process can be identified that could then translate into defining reference schemes to be applied for downstream predictive simulations. In order to illustrate this approach, the second part of this paper presents a first application of this validation strategy to an inadvertent blowdown event that occurred in a Swiss BWR/6.
    The transient was initiated by the spurious actuation of the Automatic Depressurization System (ADS). The validation approach progresses through a number of dimensions here: first, the same BWR system simulation model is assessed for different versions of the TRACE code, up to the most recent one. The second dimension is the "nodalisation" dimension, where changes to the input model are assessed. The third dimension is the "methodology" dimension; in this case, imposed power and an updated TRACE core model are investigated. For each step in each validation dimension, a common set of QoIs is investigated. For the steady-state results, these include fuel temperature distributions. For the transient part of the present study, the evaluated QoIs include the system pressure evolution and water carry-over into the steam line.

  2. Core-to-core uniformity improvement in multi-core fiber Bragg gratings

    NASA Astrophysics Data System (ADS)

    Lindley, Emma; Min, Seong-Sik; Leon-Saval, Sergio; Cvetojevic, Nick; Jovanovic, Nemanja; Bland-Hawthorn, Joss; Lawrence, Jon; Gris-Sanchez, Itandehui; Birks, Tim; Haynes, Roger; Haynes, Dionne

    2014-07-01

    Multi-core fiber Bragg gratings (MCFBGs) will be a valuable tool not only in communications but also in various astronomical, sensing and industrial applications. In this paper we address some of the technical challenges of fabricating effective multi-core gratings by simulating improvements to the writing method. These methods allow a system designed for inscribing single-core fibers to cope with MCFBG fabrication with only minor, passive changes to the writing process. Using a capillary tube that was polished on one side, the field entering the fiber was flattened, which improved the coverage and uniformity of all cores.

  3. Taming Wild Horses: The Need for Virtual Time-based Scheduling of VMs in Network Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoginath, Srikanth B; Perumalla, Kalyan S; Henz, Brian J

    2012-01-01

    The next generation of scalable network simulators employ virtual machines (VMs) to act as high-fidelity models of traffic producer/consumer nodes in simulated networks. However, network simulations can be inaccurate if VMs are not scheduled according to virtual time, especially when many VMs are hosted per simulator core in a multi-core simulator environment. Since VMs are by default free-running, at the outset it is not clear if, and to what extent, their untamed execution affects the results in simulated scenarios. Here, we provide the first quantitative basis for establishing the need for generalized virtual-time scheduling of VMs in network simulators, based on an actual prototyped implementation. To exercise breadth, our system is tested with multiple disparate applications: (a) a set of message-passing parallel programs, (b) a computer worm propagation phenomenon, and (c) a mobile ad-hoc wireless network simulation. We define and use error metrics and benchmarks in scaled tests to empirically demonstrate the poor match of traditional, fairness-based VM scheduling to VM-based network simulation, and to clearly show the better performance of our simulation-specific scheduler, with up to 64 VMs hosted on a 12-core simulator node.
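    The virtual-time scheduling argued for above can be sketched as a least-virtual-time dispatch loop: instead of OS fairness-based time slices, the scheduler always runs the VM whose virtual clock is furthest behind, mirroring discrete-event causality. The data structures and names below are hypothetical, not the prototype's API.

```python
import heapq

def run_virtual_time(vms, events):
    """Dispatch VMs in least-virtual-time order.  `events` is a sequence of
    per-event virtual durations; each dispatch charges the chosen VM's
    virtual clock, so no VM can run ahead of the others in virtual time."""
    heap = [(0.0, name) for name in vms]      # (virtual_time, vm) pairs
    heapq.heapify(heap)
    order = []
    for cost in events:
        vt, name = heapq.heappop(heap)        # VM with the lowest clock
        order.append(name)
        heapq.heappush(heap, (vt + cost, name))
    return order
```

With equal event costs this degenerates to round-robin, but when one VM's events are expensive it is automatically held back, which is exactly the behavior a fairness-based scheduler fails to provide.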

  4. Moment analysis method as applied to the 2S --> 2P transition in cryogenic alkali metal/rare gas matrices.

    PubMed

    Terrill Vosbein, Heidi A; Boatz, Jerry A; Kenney, John W

    2005-12-22

    The moment analysis method (MA) has been tested for the case of 2S --> 2P ([core]ns1 --> [core]np1) transitions of alkali metal atoms (M) doped into cryogenic rare gas (Rg) matrices using theoretically validated simulations. Theoretical/computational M/Rg system models are constructed with precisely defined parameters that closely mimic known M/Rg systems. Monte Carlo (MC) techniques are then employed to generate simulated absorption and magnetic circular dichroism (MCD) spectra of the 2S --> 2P M/Rg transition to which the MA method can be applied with the goal of seeing how effective the MA method is in re-extracting the M/Rg system parameters from these known simulated systems. The MA method is summarized in general, and an assessment is made of the use of the MA method in the rigid shift approximation typically used to evaluate M/Rg systems. The MC-MCD simulation technique is summarized, and validating evidence is presented. The simulation results and the assumptions used in applying MA to M/Rg systems are evaluated. The simulation results on Na/Ar demonstrate that the MA method does successfully re-extract the 2P spin-orbit coupling constant and Landé g-factor values initially used to build the simulations. However, assigning physical significance to the cubic and noncubic Jahn-Teller (JT) vibrational mode parameters in cryogenic M/Rg systems is not supported.

  5. Biomechanical Evaluation of a Tooth Restored with High Performance Polymer PEKK Post-Core System: A 3D Finite Element Analysis.

    PubMed

    Lee, Ki-Sun; Shin, Joo-Hee; Kim, Jong-Eun; Kim, Jee-Hwan; Lee, Won-Chang; Shin, Sang-Wan; Lee, Jeong-Yol

    2017-01-01

    The aim of this study was to evaluate the biomechanical behavior and long-term safety of high performance polymer PEKK as an intraradicular dental post-core material through comparative finite element analysis (FEA) with other conventional post-core materials. A 3D FEA model of a maxillary central incisor was constructed. A cyclic loading force of 50 N was applied at an angle of 45° to the longitudinal axis of the tooth at the palatal surface of the crown. For comparison with traditionally used post-core materials, three materials (gold, fiberglass, and PEKK) were simulated to determine their post-core properties. PEKK, with a lower elastic modulus than root dentin, showed comparably high failure resistance and a more favorable stress distribution than conventional post-core material. However, the PEKK post-core system showed a higher probability of debonding and crown failure under long-term cyclic loading than the metal or fiberglass post-core systems.

  6. Biomechanical Evaluation of a Tooth Restored with High Performance Polymer PEKK Post-Core System: A 3D Finite Element Analysis

    PubMed Central

    Shin, Joo-Hee; Kim, Jong-Eun; Kim, Jee-Hwan; Lee, Won-Chang; Shin, Sang-Wan

    2017-01-01

    The aim of this study was to evaluate the biomechanical behavior and long-term safety of high performance polymer PEKK as an intraradicular dental post-core material through comparative finite element analysis (FEA) with other conventional post-core materials. A 3D FEA model of a maxillary central incisor was constructed. A cyclic loading force of 50 N was applied at an angle of 45° to the longitudinal axis of the tooth at the palatal surface of the crown. For comparison with traditionally used post-core materials, three materials (gold, fiberglass, and PEKK) were simulated to determine their post-core properties. PEKK, with a lower elastic modulus than root dentin, showed comparably high failure resistance and a more favorable stress distribution than conventional post-core material. However, the PEKK post-core system showed a higher probability of debonding and crown failure under long-term cyclic loading than the metal or fiberglass post-core systems. PMID:28386547

  7. Adaptive control method for core power control in TRIGA Mark II reactor

    NASA Astrophysics Data System (ADS)

    Sabri Minhat, Mohd; Selamat, Hazlina; Subha, Nurul Adilla Mohd

    2018-01-01

    The 1 MWth TRIGA PUSPATI Reactor (RTP), a Mark II type, has undergone more than 35 years of operation. The existing core power control uses a feedback control algorithm (FCA). It is challenging to keep the core power stable at the desired value within acceptable error bands to meet the safety demands of RTP, owing to the sensitivity of nuclear research reactor operation. The current system's power-tracking performance is unsatisfactory and can be improved. A new core power controller design is therefore important to improve on the current tracking performance and to regulate reactor power by controlling the movement of the control rods. In this paper, adaptive controllers, focusing on Model Reference Adaptive Control (MRAC) and Self-Tuning Control (STC), were applied to the control of the core power. The model for core power control was based on mathematical models of the reactor core, an adaptive controller model, and control rod selection programming. The mathematical models of the reactor core were based on a point kinetics model, thermal-hydraulic models, and reactivity models. The MRAC scheme was designed using the Lyapunov method to ensure a stable closed-loop system, while the STC Generalised Minimum Variance (GMV) controller does not require exact knowledge of the plant transfer function when designing the core power control. The performance of the proposed adaptive controllers and the FCA is compared via computer simulation, and the simulation results demonstrate the effectiveness and good performance of the proposed control methods for core power control.
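    The point kinetics model with temperature-reactivity feedback that underlies such core models can be sketched with one delayed-neutron group. All coefficients below are illustrative placeholders, not RTP parameters; with negative feedback, a small positive reactivity step drives power to a feedback-limited equilibrium rather than an unbounded excursion.

```python
def point_kinetics(rho_ext, t_end=400.0, dt=1e-3):
    """One-delayed-group point kinetics with temperature feedback:
        dn/dt = ((rho - beta)/Lam) * n + lam * C
        dC/dt = (beta/Lam) * n - lam * C
        rho   = rho_ext + alpha * (T - T0)   (alpha < 0: negative feedback)
        dT/dt = a*n - b*(T - T0)
    Integrated with forward Euler; returns final power and temperature."""
    beta, Lam, lam = 0.007, 1e-3, 0.08       # illustrative kinetics data
    alpha, T0, a, b = -5e-5, 300.0, 0.05, 0.1
    n = 1.0
    C = beta / (Lam * lam)                   # precursor equilibrium at n = 1
    T = T0 + a / b                           # thermal equilibrium at n = 1
    for _ in range(round(t_end / dt)):
        rho = rho_ext + alpha * (T - T0)
        dn = ((rho - beta) / Lam) * n + lam * C
        dC = (beta / Lam) * n - lam * C
        dT = a * n - b * (T - T0)
        n, C, T = n + dt * dn, C + dt * dC, T + dt * dT
    return n, T

# A sub-dollar step insertion: power rises, heats the core, and the
# negative temperature coefficient cancels the inserted reactivity.
n_final, T_final = point_kinetics(rho_ext=0.0015)
```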

  8. Qualification of CASMO5 / SIMULATE-3K against the SPERT-III E-core cold start-up experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grandi, G.; Moberg, L.

    SIMULATE-3K (S3K) is a three-dimensional kinetics code applicable to LWR Reactivity Initiated Accidents. S3K has been used to calculate several internationally recognized benchmarks. However, the feedback models in the benchmark exercises differ from the feedback models that SIMULATE-3K uses for LWRs. For this reason, it is worth comparing the SIMULATE-3K capabilities for Reactivity Initiated Accidents against kinetics experiments. The Special Power Excursion Reactor Test III (SPERT III) was a pressurized-water nuclear-research facility constructed to analyze reactor kinetic behavior under initial conditions similar to those of commercial LWRs. The SPERT III E-core resembles a PWR in terms of fuel type, moderator, coolant flow rate, and system pressure. The initial test conditions (power, core flow, system pressure, core inlet temperature) are representative of cold start-up, hot start-up, hot standby, and hot full power. The qualification of S3K against the SPERT III E-core measurements is ongoing work at Studsvik. In this paper, the results for the 30 cold start-up tests are presented. The results show good agreement with the experiments for the main reactivity-initiated-accident parameters: peak power, energy release, and compensated reactivity. Predicted and measured peak powers differ by at most 13%. Measured and predicted reactivity compensations at the time of peak power differ by less than 0.01 $. Predicted and measured energy releases differ by at most 13%. All differences are within the experimental uncertainty. (authors)

  9. PyNEST: A Convenient Interface to the NEST Simulator.

    PubMed

    Eppler, Jochen Martin; Helias, Moritz; Muller, Eilif; Diesmann, Markus; Gewaltig, Marc-Oliver

    2008-01-01

    The neural simulation tool NEST (http://www.nest-initiative.org) is a simulator for heterogeneous networks of point neurons or neurons with a small number of compartments. It aims at simulations of large neural systems with more than 10^4 neurons and 10^7 to 10^9 synapses. NEST is implemented in C++ and can be used on a large range of architectures from single-core laptops over multi-core desktop computers to super-computers with thousands of processor cores. Python (http://www.python.org) is a modern programming language that has recently received considerable attention in Computational Neuroscience. Python is easy to learn and has many extension modules for scientific computing (e.g. http://www.scipy.org). In this contribution we describe PyNEST, the new user interface to NEST. PyNEST combines NEST's efficient simulation kernel with the simplicity and flexibility of Python. Compared to NEST's native simulation language SLI, PyNEST makes it easier to set up simulations, generate stimuli, and analyze simulation results. We describe how PyNEST connects NEST and Python and how it is implemented. With a number of examples, we illustrate how it is used.
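
    As an illustration of what NEST integrates at scale, here is a single leaky integrate-and-fire point neuron in plain Python (parameter values are illustrative, not NEST defaults):

```python
# Leaky integrate-and-fire point neuron, explicit Euler integration.
# Illustrative parameters only; not NEST's default model constants.

def lif_spikes(i_ext=2.0, t_sim=100.0, dt=0.1):
    tau_m = 10.0       # membrane time constant (ms)
    v_rest = -70.0     # resting potential (mV)
    v_th = -55.0       # spike threshold (mV)
    v_reset = -70.0    # reset potential (mV)
    r_m = 10.0         # membrane resistance (MOhm)
    v, spikes, t = v_rest, [], 0.0
    while t < t_sim:
        v += (-(v - v_rest) + r_m * i_ext) / tau_m * dt
        if v >= v_th:          # threshold crossing: record spike, reset
            spikes.append(t)
            v = v_reset
        t += dt
    return spikes
```

    With a constant suprathreshold drive this toy neuron fires regularly; in PyNEST the equivalent experiment is a few calls to nest.Create, nest.Connect, and nest.Simulate.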

  10. PyNEST: A Convenient Interface to the NEST Simulator

    PubMed Central

    Eppler, Jochen Martin; Helias, Moritz; Muller, Eilif; Diesmann, Markus; Gewaltig, Marc-Oliver

    2008-01-01

    The neural simulation tool NEST (http://www.nest-initiative.org) is a simulator for heterogeneous networks of point neurons or neurons with a small number of compartments. It aims at simulations of large neural systems with more than 10^4 neurons and 10^7 to 10^9 synapses. NEST is implemented in C++ and can be used on a large range of architectures from single-core laptops over multi-core desktop computers to super-computers with thousands of processor cores. Python (http://www.python.org) is a modern programming language that has recently received considerable attention in Computational Neuroscience. Python is easy to learn and has many extension modules for scientific computing (e.g. http://www.scipy.org). In this contribution we describe PyNEST, the new user interface to NEST. PyNEST combines NEST's efficient simulation kernel with the simplicity and flexibility of Python. Compared to NEST's native simulation language SLI, PyNEST makes it easier to set up simulations, generate stimuli, and analyze simulation results. We describe how PyNEST connects NEST and Python and how it is implemented. With a number of examples, we illustrate how it is used. PMID:19198667

  11. Fast and Accurate Simulation of the Cray XMT Multithreaded Supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Villa, Oreste; Tumeo, Antonino; Secchi, Simone

    Irregular applications, such as data mining and analysis or graph-based computations, show unpredictable memory/network access patterns and control structures. Highly multithreaded architectures with large processor counts, like the Cray MTA-1, MTA-2 and XMT, appear to address their requirements better than commodity clusters. However, research on highly multithreaded systems is currently limited by the lack of adequate architectural simulation infrastructure, due to issues such as the size of the machines, memory footprint, simulation speed, accuracy, and customization. At the same time, Shared-memory MultiProcessors (SMPs) with multi-core processors have become an attractive platform for simulating large scale machines. In this paper, we introduce a cycle-level simulator of the highly multithreaded Cray XMT supercomputer. The simulator runs unmodified XMT applications. We discuss how we tackled the challenges posed by its development, detailing the techniques introduced to make the simulation as fast as possible while maintaining high accuracy. By mapping XMT processors (ThreadStorm with 128 hardware threads) to host computing cores, the simulation speed remains constant as the number of simulated processors increases, up to the number of available host cores. The simulator supports zero-overhead switching among different accuracy levels at run-time and includes a network model that takes contention into account. On a modern 48-core SMP host, our infrastructure simulates a large set of irregular applications 500 to 2000 times slower than real time when compared to a 128-processor XMT, while remaining within 10% accuracy. Emulation is only 25 to 200 times slower than real time.

  12. Preparing for Exascale: Towards convection-permitting, global atmospheric simulations with the Model for Prediction Across Scales (MPAS)

    NASA Astrophysics Data System (ADS)

    Heinzeller, Dominikus; Duda, Michael G.; Kunstmann, Harald

    2017-04-01

    With strong financial and political support from national and international initiatives, exascale computing is projected for the end of this decade. Energy requirements and physical limitations imply the use of accelerators and scaling out to orders of magnitude more cores than today to achieve this milestone. In order to fully exploit the capabilities of these exascale computing systems, existing applications need to undergo significant development. The Model for Prediction Across Scales (MPAS) is a novel set of Earth system simulation components and consists of an atmospheric core, an ocean core, a land-ice core and a sea-ice core. Its distinct features are the use of unstructured Voronoi meshes and C-grid discretisation to address, with respect to parallel scalability, numerical accuracy and physical consistency, the shortcomings of global models on regular grids and of limited-area models nested in a forcing data set. Here, we present work towards the application of the atmospheric core (MPAS-A) on current and future high performance computing systems for problems at extreme scale. In particular, we address the issue of massively parallel I/O by extending the model to support the highly scalable SIONlib library. Using global uniform meshes with a convection-permitting resolution of 2-3 km, we demonstrate the ability of MPAS-A to scale out to half a million cores while maintaining high parallel efficiency. We also demonstrate the potential benefit of a hybrid parallelisation of the code (MPI/OpenMP) on the latest generation of Intel's Many Integrated Core architecture, the Intel Xeon Phi Knights Landing.

  13. MDGRAPE-4: a special-purpose computer system for molecular dynamics simulations.

    PubMed

    Ohmura, Itta; Morimoto, Gentaro; Ohno, Yousuke; Hasegawa, Aki; Taiji, Makoto

    2014-08-06

    We are developing the MDGRAPE-4, a special-purpose computer system for molecular dynamics (MD) simulations. MDGRAPE-4 is designed to achieve strong scalability for protein MD simulations through the integration of general-purpose cores, dedicated pipelines, memory banks and network interfaces (NIFs) to create a system on chip (SoC). Each SoC has 64 dedicated pipelines that are used for non-bonded force calculations and run at 0.8 GHz. Additionally, it has 65 Tensilica Xtensa LX cores with single-precision floating-point units that are used for other calculations and run at 0.6 GHz. At peak performance levels, each SoC can evaluate 51.2 G interactions per second. It also has 1.8 MB of embedded shared memory banks and six network units with a peak bandwidth of 7.2 GB/s for the three-dimensional torus network. The system consists of 512 (8×8×8) SoCs in total, which are mounted on 64 node modules with eight SoCs each. Optical transmitters/receivers are used for internode communication. The expected maximum power consumption is 50 kW. While the MDGRAPE-4 software is still being improved, we plan to run MD simulations on MDGRAPE-4 in 2014. The MDGRAPE-4 system will enable long-time molecular dynamics simulations of small systems. It is also useful for multiscale molecular simulations where the particle simulation parts often become bottlenecks.
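
    The quoted peak rate follows directly from the pipeline count and clock, assuming each pipeline evaluates one non-bonded interaction per cycle:

```python
# Peak-rate arithmetic from the figures above: 64 pipelines per SoC
# at 0.8 GHz, one interaction per pipeline per cycle (assumed), and
# 512 SoCs in the full system.
pipelines_per_soc = 64
clock_ghz = 0.8
soc_peak = pipelines_per_soc * clock_ghz   # G interactions/s per SoC -> 51.2
system_peak = soc_peak * 512               # G interactions/s for the full system
```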

  14. MDGRAPE-4: a special-purpose computer system for molecular dynamics simulations

    PubMed Central

    Ohmura, Itta; Morimoto, Gentaro; Ohno, Yousuke; Hasegawa, Aki; Taiji, Makoto

    2014-01-01

    We are developing the MDGRAPE-4, a special-purpose computer system for molecular dynamics (MD) simulations. MDGRAPE-4 is designed to achieve strong scalability for protein MD simulations through the integration of general-purpose cores, dedicated pipelines, memory banks and network interfaces (NIFs) to create a system on chip (SoC). Each SoC has 64 dedicated pipelines that are used for non-bonded force calculations and run at 0.8 GHz. Additionally, it has 65 Tensilica Xtensa LX cores with single-precision floating-point units that are used for other calculations and run at 0.6 GHz. At peak performance levels, each SoC can evaluate 51.2 G interactions per second. It also has 1.8 MB of embedded shared memory banks and six network units with a peak bandwidth of 7.2 GB s−1 for the three-dimensional torus network. The system consists of 512 (8×8×8) SoCs in total, which are mounted on 64 node modules with eight SoCs each. Optical transmitters/receivers are used for internode communication. The expected maximum power consumption is 50 kW. While the MDGRAPE-4 software is still being improved, we plan to run MD simulations on MDGRAPE-4 in 2014. The MDGRAPE-4 system will enable long-time molecular dynamics simulations of small systems. It is also useful for multiscale molecular simulations where the particle simulation parts often become bottlenecks. PMID:24982255

  15. Electrically Heated Testing of the Kilowatt Reactor Using Stirling Technology (KRUSTY) Experiment Using a Depleted Uranium Core

    NASA Technical Reports Server (NTRS)

    Briggs, Maxwell H.; Gibson, Marc A.; Sanzi, James

    2017-01-01

    The Kilopower project aims to develop and demonstrate scalable fission-based power technology for systems capable of delivering 1 to 10 kW of electric power with a specific power ranging from 2.5 to 6.5 W/kg. This technology could enable high power science missions or could be used to provide surface power for manned missions to the Moon or Mars. NASA has partnered with the Department of Energy's National Nuclear Security Administration, Los Alamos National Laboratory, and the Y-12 National Security Complex to develop and test a prototypic reactor and power system using existing facilities and infrastructure. This technology demonstration, referred to as the Kilowatt Reactor Using Stirling TechnologY (KRUSTY), will undergo nuclear ground testing in the summer of 2017 at the Nevada Test Site. The 1 kWe variation of the Kilopower system was chosen for the KRUSTY demonstration. The concept for the 1 kWe flight system consists of a 4 kWt highly enriched uranium-molybdenum reactor operating at 800 degrees Celsius coupled to sodium heat pipes. The heat pipes deliver heat to the hot ends of eight 125 W Stirling convertors producing a net electrical output of 1 kW. Waste heat is rejected using titanium-water heat pipes coupled to carbon composite radiator panels. The KRUSTY test, based on this design, uses a prototypic highly enriched uranium-molybdenum core coupled to prototypic sodium heat pipes. The heat pipes transfer heat to two Advanced Stirling Convertors (ASC-E2s) and six thermal simulators, which simulate the thermal draw of full-scale power conversion units. Thermal simulators and Stirling engines are gas cooled. The most recent project milestone was the completion of non-nuclear system-level testing using an electrically heated depleted uranium (non-fissioning) reactor core simulator. System-level testing at the Glenn Research Center (GRC) has validated performance predictions and has demonstrated system-level operation and control in a test configuration that replicates the one to be used at the Device Assembly Facility (DAF) at the Nevada National Security Site. Fabrication, assembly, and testing of the depleted uranium core has allowed for higher fidelity system-level testing at GRC, and has validated the fabrication methods to be used on the highly enriched uranium core that will supply heat for the DAF KRUSTY demonstration.

  16. Adaptive Core Simulation Employing Discrete Inverse Theory - Part I: Theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abdel-Khalik, Hany S.; Turinsky, Paul J.

    2005-07-15

    Use of adaptive simulation is intended to improve the fidelity and robustness of important core attribute predictions such as core power distribution, thermal margins, and core reactivity. Adaptive simulation utilizes a selected set of past and current reactor measurements of reactor observables, i.e., in-core instrumentation readings, to adapt the simulation in a meaningful way. A meaningful adaption will result in high-fidelity and robust adapted core simulator models. To perform adaption, we propose an inverse theory approach in which the multitudes of input data to core simulators, i.e., reactor physics and thermal-hydraulic data, are to be adjusted to improve agreement with measured observables while keeping core simulator models unadapted. At first glance, devising such adaption for typical core simulators with millions of input and observables data would spawn not only several prohibitive challenges but also numerous disparaging concerns. The challenges include the computational burdens of the sensitivity-type calculations required to construct Jacobian operators for the core simulator models. Also, the computational burdens of the uncertainty-type calculations required to estimate the uncertainty information of core simulator input data present a demanding challenge. The concerns however are mainly related to the reliability of the adjusted input data. The methodologies of adaptive simulation are well established in the literature of data adjustment. We adopt the same general framework for data adjustment; however, we refrain from solving the fundamental adjustment equations in a conventional manner. We demonstrate the use of our so-called Efficient Subspace Methods (ESMs) to overcome the computational and storage burdens associated with the core adaption problem. We illustrate the successful use of ESM-based adaptive techniques for a typical boiling water reactor core simulator adaption problem.
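
    The data-adjustment idea, tuning simulator input data so that predicted observables better match measurements, can be sketched as one regularized least-squares step. The 2x2 system below is purely illustrative; real core adaption involves millions of inputs and observables, which is precisely what motivates the Efficient Subspace Methods.

```python
# Toy data adjustment: nudge simulator inputs x so that predicted
# observables (J applied to x) better match measurements y, via one
# regularized least-squares step. 2x2 for illustration only.

def adjust(J, x0, y_meas, reg=1e-3):
    n = len(x0)
    # residual between measurements and current predictions
    r = [y_meas[i] - sum(J[i][k] * x0[k] for k in range(n)) for i in range(n)]
    # normal equations (J^T J + reg*I) dx = J^T r, solved directly for n = 2
    A = [[sum(J[k][i] * J[k][j] for k in range(n)) + (reg if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    b = [sum(J[k][i] * r[k] for k in range(n)) for i in range(n)]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    dx = [(A[1][1] * b[0] - A[0][1] * b[1]) / det,
          (A[0][0] * b[1] - A[1][0] * b[0]) / det]
    return [x0[i] + dx[i] for i in range(n)]
```

    The regularization term plays the role of the input-data uncertainty information discussed above: it keeps the adjusted inputs from straying arbitrarily far just to fit the measurements.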

  17. Aqueous poly(amidoamine) dendrimer G3 and G4 generations with several interior cores at pHs 5 and 7: a molecular dynamics simulation study.

    PubMed

    Kavyani, Sajjad; Amjad-Iranagh, Sepideh; Modarress, Hamid

    2014-03-27

    Poly(amidoamine) (PAMAM) dendrimers play an important role in drug delivery systems, because dendrimers can gain unique features through modification of their structure, such as changing their terminals or improving their interior core. To investigate core improvement and the effect of core nature on PAMAM dendrimers, we studied generation G3 and G4 PAMAM dendrimers with interior cores of the commonly used ethylenediamine (EDA), 1,5-diaminohexane (DAH), and bis(3-aminopropyl) ether (BAPE), solvated in water as an aqueous dendrimer system, using molecular dynamics simulation with a coarse-grained (CG) dendrimer force field. To consider the electrostatic interactions, the simulations were performed at two protonation states, pH 5 and pH 7. The results indicated that core improvement of PAMAM dendrimers with DAH produces the largest size for G3 and G4 dendrimers at both pH 5 and pH 7. An increase in size was also observed for the BAPE core, but it was not as significant as that for the DAH core. Considering the internal structure of the dendrimers, it was found that the PAMAM dendrimer shell with the DAH core had more cavities than with the BAPE core at both pH 5 and pH 7. Moment of inertia calculations showed that generation G3 is more open-shaped and has higher structural asymmetry than generation G4. These properties of G3, especially its structural asymmetry, make penetration of water beads into the dendrimer feasible. For the higher generation G4, with its relative structural symmetry, the encapsulation efficiency for water molecules can be enhanced by changing its core to DAH or BAPE. It is also observed that the effect of core modification is more profound for G4 than for G3, because core modification promotes the development of structural asymmetry in G4 more significantly. Comparing the number of water beads that penetrate into the PAMAM dendrimers for the EDA, DAH, and BAPE cores indicates a significant increase when the core is modified to DAH or BAPE, and substantiates the effective influence of core nature on dendrimer encapsulation efficiency.

  18. A Cryogenic Fluid System Simulation in Support of Integrated Systems Health Management

    NASA Technical Reports Server (NTRS)

    Barber, John P.; Johnston, Kyle B.; Daigle, Matthew

    2013-01-01

    Simulations serve as important tools throughout the design and operation of engineering systems. In the context of systems health management, simulations serve many uses. For one, the underlying physical models can be used by model-based health management tools to develop diagnostic and prognostic models. These simulations should incorporate both nominal and faulty behavior with the ability to inject various faults into the system. Such simulations can therefore be used for operator training, for both nominal and faulty situations, as well as for developing and prototyping health management algorithms. In this paper, we describe a methodology for building such simulations. We discuss the design decisions and tools used to build a simulation of a cryogenic fluid test bed, and how it serves as a core technology for systems health management development and maturation.
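
    The nominal-plus-injected-fault simulation pattern described here can be sketched with a toy tank model; the leak fault, rates, and names below are invented for illustration and are not from the NASA test bed.

```python
# Toy nominal-plus-fault simulation: a tank whose level integrates
# inflow minus outflow, with an injectable leak fault. All names and
# rates are illustrative; nothing here is from the cryogenic test bed.

def simulate_tank(t_end=100.0, dt=1.0, leak_at=None, leak_rate=0.5):
    level, inflow, outflow = 100.0, 1.0, 0.8
    history = []
    t = 0.0
    while t < t_end:
        # the fault becomes active at the injection time, if any
        leak = leak_rate if (leak_at is not None and t >= leak_at) else 0.0
        level += (inflow - outflow - leak) * dt
        history.append((t, level))
        t += dt
    return history
```

    Running the model with and without `leak_at` gives matched nominal and faulty trajectories, the raw material for training operators and prototyping diagnostic and prognostic algorithms.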

  19. Operational Focused Simulation

    DTIC Science & Technology

    2009-12-01

    selected technologies. In order to build the scenario to fit the vignette, the Theater Battle Management Core System (TBMCS) databases were adjusted... TBMCS program provided an automated and integrated capability to plan and execute the air battle plan for the modeling and simulation efforts. TBMCS ... is the operational system of record for the Air and Space Operations Center Weapons System (AOC WS). TBMCS provides the Joint/Combined Forces Air

  20. Convection- and SASI-driven flows in parametrized models of core-collapse supernova explosions

    DOE PAGES

    Endeve, E.; Cardall, C. Y.; Budiardja, R. D.; ...

    2016-01-21

    We present initial results from three-dimensional simulations of parametrized core-collapse supernova (CCSN) explosions obtained with our astrophysical simulation code General Astrophysical Simulation System (GenASiS). We are interested in nonlinear flows resulting from neutrino-driven convection and the standing accretion shock instability (SASI) in the CCSN environment prior to and during the explosion. By varying parameters in our model that control neutrino heating and shock dissociation, our simulations result in convection-dominated and SASI-dominated evolution. We describe this initial set of simulation results in some detail. To characterize the turbulent flows in the simulations, we compute and compare velocity power spectra from convection-dominated and SASI-dominated (both non-exploding and exploding) models. When compared to SASI-dominated models, convection-dominated models exhibit significantly more power on small spatial scales.

  1. Integrating a human thermoregulatory model with a clothing model to predict core and skin temperatures.

    PubMed

    Yang, Jie; Weng, Wenguo; Wang, Faming; Song, Guowen

    2017-05-01

    This paper aims to integrate a human thermoregulatory model with a clothing model to predict core and skin temperatures. The human thermoregulatory model, consisting of an active system and a passive system, was used to determine the thermoregulation and heat exchanges within the body. The clothing model simulated heat and moisture transfer from the human skin to the environment through the microenvironment and fabric. In this clothing model, the air gap between skin and clothing, as well as clothing properties such as thickness, thermal conductivity, density, porosity, and tortuosity, were taken into consideration. The simulated core and mean skin temperatures were compared to published experimental results of subject tests at three ambient temperatures: 20 °C, 30 °C, and 40 °C. Although a lower signal-to-noise ratio was observed, the developed model performed well at predicting core temperatures, with a maximum difference between simulations and measurements of no more than 0.43 °C. Generally, the current model predicted the mean skin temperatures with reasonable accuracy. It could be applied to predict human physiological responses and assess thermal comfort and heat stress. Copyright © 2017 Elsevier Ltd. All rights reserved.
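
    The coupling of a body model to a clothing resistance can be caricatured as a two-node (core/skin) heat balance. The sketch below is not the paper's model; every parameter value is an illustrative assumption.

```python
# Two-node (core/skin) heat balance with a lumped clothing conductance.
# A caricature of coupled body-clothing models; all values are
# illustrative assumptions, not the paper's parameters.

def body_temps(t_amb, t_sim=3600.0, dt=1.0):
    c_core, c_skin = 3.5e5, 1.2e5   # node heat capacities (J/K)
    k_cs = 30.0                     # core-to-skin conductance (W/K)
    k_cl = 10.0                     # skin-to-environment conductance through clothing (W/K)
    q_met = 100.0                   # metabolic heat generation (W)
    t_core, t_skin = 37.0, 34.0     # initial temperatures (deg C)
    t = 0.0
    while t < t_sim:
        q_core_skin = k_cs * (t_core - t_skin)
        q_skin_env = k_cl * (t_skin - t_amb)
        t_core += (q_met - q_core_skin) / c_core * dt
        t_skin += (q_core_skin - q_skin_env) / c_skin * dt
        t += dt
    return t_core, t_skin
```

    This toy has no sweating or vasomotor regulation, so it only reproduces the qualitative trend that hotter environments raise both skin and core temperatures; the active system in the paper's model is what keeps real core temperatures near 37 °C.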

  2. APPLYING THE PATUXENT LANDSCAPE UNIT MODEL TO HUMAN DOMINATED ECOSYSTEMS: THE CASE OF AGRICULTURE. (R827169)

    EPA Science Inventory

    Non-spatial dynamics are core to landscape simulations. Unit models simulate system interactions aggregated within one space unit of resolution used within a spatial model. For unit models to be applicable to spatial simulations they have to be formulated in a general enough w...

  3. Real-Time Agent-Based Modeling Simulation with in-situ Visualization of Complex Biological Systems: A Case Study on Vocal Fold Inflammation and Healing.

    PubMed

    Seekhao, Nuttiiya; Shung, Caroline; JaJa, Joseph; Mongeau, Luc; Li-Jessen, Nicole Y K

    2016-05-01

    We present an efficient and scalable scheme for implementing agent-based modeling (ABM) simulation with In Situ visualization of large complex systems on heterogeneous computing platforms. The scheme is designed to make optimal use of the resources available on a heterogeneous platform consisting of a multicore CPU and a GPU, resulting in minimal to no resource idle time. Furthermore, the scheme was implemented under a client-server paradigm that enables remote users to visualize and analyze simulation data as it is being generated at each time step of the model. Performance of a simulation case study of vocal fold inflammation and wound healing with 3.8 million agents shows 35× and 7× speedup in execution time over single-core and multi-core CPU respectively. Each iteration of the model took less than 200 ms to simulate, visualize and send the results to the client. This enables users to monitor the simulation in real-time and modify its course as needed.

  4. A monitoring system based on electric vehicle three-stage wireless charging

    NASA Astrophysics Data System (ADS)

    Hei, T.; Liu, Z. Z.; Yang, Y.; Hongxing, CHEN; Zhou, B.; Zeng, H.

    2016-08-01

    A monitoring system for three-stage wireless charging of electric vehicles was designed. The vehicle terminal contained a core board used for battery information collection and charging control, and a power measurement and charging control core board was provided at the transmitting terminal, which communicated with the receiver by Bluetooth. A touch-screen display unit based on MCGS (Monitor and Control Generated System) was designed to simulate charging behavior and to make debugging the system convenient. Practical application showed that the system was stable and reliable and has favorable application prospects.

  5. Towards a Consolidated Approach for the Assessment of Evaluation Models of Nuclear Power Reactors

    DOE PAGES

    Epiney, A.; Canepa, S.; Zerkak, O.; ...

    2016-11-02

    The STARS project at the Paul Scherrer Institut (PSI) has adopted the TRACE thermal-hydraulic (T-H) code for best-estimate system transient simulations of the Swiss Light Water Reactors (LWRs). For analyses involving interactions between system and core, a coupling of TRACE with the SIMULATE-3K (S3K) LWR core simulator has also been developed. In this configuration, the TRACE code and associated nuclear power reactor simulation models play a central role in achieving a comprehensive safety analysis capability. Thus, efforts have now been undertaken to consolidate the validation strategy by implementing a more rigorous and structured assessment approach for TRACE applications involving either only system T-H evaluations or requiring interfaces to e.g. detailed core or fuel behavior models. The first part of this paper presents the preliminary concepts of this validation strategy. The principle is to systematically track the evolution of a given set of predicted physical Quantities of Interest (QoIs) over a multidimensional parametric space where each of the dimensions represents the evolution of a specific analysis aspect, including e.g. code version, transient-specific simulation methodology and model "nodalisation". If properly set up, such an environment should provide code developers and code users with persistent (less affected by user effects) and quantified information (sensitivity of QoIs) on the applicability of a simulation scheme (codes, input models, methodology) for steady state and transient analysis of full LWR systems. Through this, for each given transient/accident, critical paths of the validation process can be identified that could then translate into defining reference schemes to be applied for downstream predictive simulations. In order to illustrate this approach, the second part of this paper presents a first application of this validation strategy to an inadvertent blowdown event that occurred in a Swiss BWR/6. The transient was initiated by the spurious actuation of the Automatic Depressurization System (ADS). The validation approach progresses through a number of dimensions here: first, the same BWR system simulation model is assessed for different versions of the TRACE code, up to the most recent one. The second dimension is the "nodalisation" dimension, where changes to the input model are assessed. The third dimension is the "methodology" dimension; in this case, imposed power and an updated TRACE core model are investigated. For each step in each validation dimension, a common set of QoIs is investigated. For the steady-state results, these include fuel temperature distributions. For the transient part of the present study, the evaluated QoIs include the system pressure evolution and water carry-over into the steam line.

  6. Melting Penetration Simulation of Fe-U System at High Temperature Using MPS_LER

    NASA Astrophysics Data System (ADS)

    Mustari, A. P. A.; Yamaji, A.; Irwanto, Dwi

    2016-08-01

    Melting penetration information for the Fe-U system is necessary for simulating molten core behavior during severe accidents in nuclear power plants. For the Fe-U system, this information is mainly obtained from experiments, i.e. the TREAT experiment. However, there are no reported data on SS304 at temperatures above 1350°C. The MPS_LER code has been developed and validated to simulate melting penetration in the Fe-U system. MPS_LER models the eutectic phenomenon by solving the diffusion process and applying the binary phase diagram criteria. This study simulates the melting penetration of the system at higher temperatures using MPS_LER. Simulations were conducted on SS304 at 1400, 1450 and 1500°C. The simulation results show a rapid increase of the melting penetration rate.
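
    The "diffusion plus phase-diagram criterion" idea behind MPS_LER can be sketched as an explicit 1-D diffusion step with a concentration threshold for liquefaction; the grid, step ratio, and eutectic threshold below are illustrative, not MPS_LER values.

```python
# Explicit 1-D diffusion with fixed boundaries plus a concentration
# threshold for liquefaction -- a sketch of "solve diffusion, then
# apply a binary phase diagram criterion". Illustrative numbers only.

def diffuse(c, d=0.1, steps=200):
    # d = D*dt/dx^2 must stay <= 0.5 for stability of the explicit scheme
    for _ in range(steps):
        c = [c[0]] + [c[i] + d * (c[i - 1] - 2 * c[i] + c[i + 1])
                      for i in range(1, len(c) - 1)] + [c[-1]]
    return c

def molten_depth(c, c_eutectic=0.1):
    # cells whose solute concentration exceeds the criterion count as liquefied
    return sum(1 for ci in c if ci > c_eutectic)
```

    Starting from a fixed high concentration at one boundary (the melt/steel interface), the above-threshold region deepens as diffusion proceeds, mimicking a penetration front.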

  7. Umbra (core)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bradley, Jon David; Oppel III, Fred J.; Hart, Brian E.

    Umbra is a flexible simulation framework for complex systems that can be used by itself for modeling, simulation, and analysis, or to create specific applications. It has been applied to many operations, primarily dealing with robotics and system-of-systems simulations. This version, from 4.8 to 4.8.3b, incorporates bug fixes, refactored code, and new managed C++ wrapper code that can be used to bridge new applications written in C# to the C++ libraries. The new managed C++ wrapper code includes the following projects/directories: BasicSimulation, CSharpUmbraInterpreter, LogFileView, UmbraAboutBox, UmbraControls, UmbraMonitor and UmbraWrapper.

  8. Parallel Discrete Molecular Dynamics Simulation With Speculation and In-Order Commitment*†

    PubMed Central

    Khan, Md. Ashfaquzzaman; Herbordt, Martin C.

    2011-01-01

    Discrete molecular dynamics simulation (DMD) uses simplified and discretized models enabling simulations to advance by event rather than by timestep. DMD is an instance of discrete event simulation and so is difficult to scale: even in this multi-core era, all reported DMD codes are serial. In this paper we discuss the inherent difficulties of scaling DMD and present our method of parallelizing DMD through event-based decomposition. Our method is microarchitecture inspired: speculative processing of events exposes parallelism, while in-order commitment ensures correctness. We analyze the potential of this parallelization method for shared-memory multiprocessors. Achieving scalability required extensive experimentation with scheduling and synchronization methods to mitigate serialization. The speed-up achieved for a variety of system sizes and complexities is nearly 6× on an 8-core and over 9× on a 12-core processor. We present and verify analytical models that account for the achieved performance as a function of available concurrency and architectural limitations. PMID:21822327
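
    The event-driven core of DMD, a priority queue of predicted collisions processed strictly in time order with stale events invalidated by collision counts, can be shown for a 1-D equal-mass gas (a serial toy, not the paper's speculative parallel scheme):

```python
# Miniature discrete event MD: a priority queue of predicted collision
# times processed strictly in time order, with per-particle collision
# counters invalidating stale events. 1-D equal-mass elastic particles.
import heapq

def dmd_1d(x, v, t_end):
    n = len(x)
    cc = [0] * n        # collision counters used to detect stale events
    pq = []

    def predict(i, t_now):
        # schedule the next collision of neighbour pair (i, i+1), if approaching
        if 0 <= i < n - 1 and v[i] > v[i + 1]:
            dt = (x[i + 1] - x[i]) / (v[i] - v[i + 1])
            heapq.heappush(pq, (t_now + dt, i, cc[i], cc[i + 1]))

    for i in range(n - 1):
        predict(i, 0.0)
    t, n_events = 0.0, 0
    while pq:
        t_ev, i, ci, cj = heapq.heappop(pq)
        if t_ev > t_end:
            break
        if ci != cc[i] or cj != cc[i + 1]:
            continue                      # stale: a partner collided since prediction
        for k in range(n):                # advance all particles to the event time
            x[k] += v[k] * (t_ev - t)
        t = t_ev
        v[i], v[i + 1] = v[i + 1], v[i]   # equal masses: elastic collision swaps velocities
        cc[i] += 1
        cc[i + 1] += 1
        n_events += 1
        predict(i - 1, t)                 # only neighbouring pairs need re-prediction
        predict(i + 1, t)
    return n_events
```

    The strict in-order commitment of events in this loop is precisely what makes DMD hard to parallelize, and what the paper's speculative processing with in-order commitment works around.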

  9. Parallel Discrete Molecular Dynamics Simulation With Speculation and In-Order Commitment.

    PubMed

    Khan, Md Ashfaquzzaman; Herbordt, Martin C

    2011-07-20

    Discrete molecular dynamics simulation (DMD) uses simplified and discretized models enabling simulations to advance by event rather than by timestep. DMD is an instance of discrete event simulation and so is difficult to scale: even in this multi-core era, all reported DMD codes are serial. In this paper we discuss the inherent difficulties of scaling DMD and present our method of parallelizing DMD through event-based decomposition. Our method is microarchitecture inspired: speculative processing of events exposes parallelism, while in-order commitment ensures correctness. We analyze the potential of this parallelization method for shared-memory multiprocessors. Achieving scalability required extensive experimentation with scheduling and synchronization methods to mitigate serialization. The speed-up achieved for a variety of system sizes and complexities is nearly 6× on an 8-core and over 9× on a 12-core processor. We present and verify analytical models that account for the achieved performance as a function of available concurrency and architectural limitations.

  10. Fast simulation of the NICER instrument

    NASA Astrophysics Data System (ADS)

    Doty, John P.; Wampler-Doty, Matthew P.; Prigozhin, Gregory Y.; Okajima, Takashi; Arzoumanian, Zaven; Gendreau, Keith

    2016-07-01

    The NICER1 mission uses a complicated physical system to collect information from objects that are, by x-ray timing science standards, rather faint. To get the most out of the data we will need a rigorous understanding of all instrumental effects. We are in the process of constructing a very fast, high fidelity simulator that will help us to assess instrument performance, support simulation-based data reduction, and improve our estimates of measurement error. We will combine and extend existing optics, detector, and electronics simulations. We will employ the Compute Unified Device Architecture (CUDA2) to parallelize these calculations. The price of suitable CUDA-compatible multi-giga op cores is about $0.20/core, so this approach will be very cost-effective.

  11. DKIST Adaptive Optics System: Simulation Results

    NASA Astrophysics Data System (ADS)

    Marino, Jose; Schmidt, Dirk

    2016-05-01

The 4 m class Daniel K. Inouye Solar Telescope (DKIST), currently under construction, will be equipped with an ultra high order solar adaptive optics (AO) system. The requirements and capabilities of such a solar AO system are beyond those of any other solar AO system currently in operation. We must rely on solar AO simulations to estimate and quantify its performance. We present performance estimation results of the DKIST AO system obtained with a new solar AO simulation tool. This simulation tool is a flexible and fast end-to-end solar AO simulator which produces accurate solar AO simulations while taking advantage of current multi-core computer technology. It relies on full imaging simulations of the extended field Shack-Hartmann wavefront sensor (WFS), which directly includes important secondary effects such as field dependent distortions and varying contrast of the WFS sub-aperture images.

  12. Simulating storage part of application with Simgrid

    NASA Astrophysics Data System (ADS)

    Wang, Cong

    2017-10-01

We describe the design of a file system simulation and visualization system that uses the SimGrid API and visualization techniques to help users understand and improve the file system portion of their applications. The core of the simulator is the API provided by SimGrid; cluefs traces and captures the application's I/O operations. Running the simulator on this trace produces an output visualization file showing the proportion of I/O actions and their time series. Users can change the parameters of the storage system in the configuration file, such as read and write bandwidth, adjust the storage strategy, and test performance, making it much easier to optimize the storage system. We have tested all aspects of the simulator, and the results suggest that its performance is believable.
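The parameter-driven replay idea can be sketched as a back-of-the-envelope model (not SimGrid's actual API; `replay` and the trace format are illustrative):

```python
def replay(trace, read_bw, write_bw):
    """Estimate total I/O time by replaying a trace of (op, bytes)
    records against configurable bandwidths (bytes/second)."""
    total = 0.0
    for op, nbytes in trace:
        bw = read_bw if op == "read" else write_bw
        total += nbytes / bw
    return total
```

Changing `read_bw`/`write_bw` in a configuration file and re-running the replay is the essence of the "adjust parameters, test performance" workflow the abstract describes.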

  13. End-to-End Demonstrator of the Safe Affordable Fission Engine (SAFE) 30: Power Conversion and Ion Engine Operation

    NASA Technical Reports Server (NTRS)

    Hrbud, Ivana; VanDyke, Melissa; Houts, Mike; Goodfellow, Keith; Schafer, Charles (Technical Monitor)

    2001-01-01

The Safe Affordable Fission Engine (SAFE) test series addresses Phase 1 Space Fission Systems issues, in particular non-nuclear testing and system integration issues, leading to the testing and non-nuclear demonstration of a 400-kW fully integrated flight unit. The first part of the SAFE 30 test series demonstrated operation of the simulated nuclear core and heat pipe system. Experimental data acquired in a number of different test scenarios will validate existing computational models, demonstrate system flexibility (fast start-ups, multiple start-ups/shut-downs), and simulate predictable failure modes and operating environments. The objective of the second part is to demonstrate an integrated propulsion system consisting of a core, a conversion system, and a thruster, where the system converts thermal heat into jet power. This end-to-end system demonstration sets a precedent for ground testing of nuclear electric propulsion systems. The paper describes the SAFE 30 end-to-end system demonstration and its subsystems.

  14. Methods of information geometry in computational system biology (consistency between chemical and biological evolution).

    PubMed

    Astakhov, Vadim

    2009-01-01

Interest in simulation of large-scale metabolic networks, species development, and genesis of various diseases requires new simulation techniques to accommodate the high complexity of realistic biological networks. Information geometry and topological formalisms are proposed to analyze information processes. We analyze the complexity of large-scale biological networks as well as the transition of system functionality due to modification in the system architecture, system environment, and system components. The dynamic core model is developed. The term dynamic core is used to define a set of causally related network functions. Delocalization of the dynamic core model provides a mathematical formalism to analyze the migration of specific functions in biosystems which undergo structural transition induced by the environment. The term delocalization is used to describe these processes of migration. We constructed a holographic model with self-poietic dynamic cores which preserves functional properties under those transitions. Topological constraints such as Ricci flow and Pfaff dimension were found for statistical manifolds which represent biological networks. These constraints can provide insight into the processes of degeneration and recovery which take place in large-scale networks. We suggest that therapies which are able to effectively implement the estimated constraints will successfully adjust biological systems and recover altered functionality. Also, we mathematically formulate the hypothesis that there is a direct consistency between biological and chemical evolution. Any set of causal relations within a biological network has its dual reimplementation in the chemistry of the system environment.

  15. Mapping, Awareness, and Virtualization Network Administrator Training Tool (MAVNATT) Architecture and Framework

    DTIC Science & Technology

    2015-06-01

    unit may setup and teardown the entire tactical infrastructure multiple times per day. This tactical network administrator training is a critical...language and runs on Linux and Unix based systems. All provisioning is based around the Nagios Core application, a powerful backend solution for network...start up a large number of virtual machines quickly. CORE supports the simulation of fixed and mobile networks. CORE is open-source, written in Python

  16. Generalized environmental control and life support system computer program (G189A) configuration control, phase 2

    NASA Technical Reports Server (NTRS)

    Mcenulty, R. E.

    1977-01-01

The G189A simulation of the Shuttle Orbiter ECLSS was upgraded. All simulation library versions and simulation models were converted from the EXEC2 to the EXEC8 computer system, and a new program, G189PL, was added to the combination master program library. The program permits the post-plotting of up to 100 frames of plot data over any time interval of a G189 simulation run. The overlay structure of the G189A simulations was restructured to conserve computer core requirements and minimize run time requirements.

  17. Simulation and Modeling of charge particles transport using SIMION for our Time of Flight Positron Annihilation Induce Auger Electron Spectroscopy systems

    NASA Astrophysics Data System (ADS)

    Joglekar, Prasad; Shastry, K.; Satyal, Suman; Weiss, Alexander

    2012-02-01

    Time of flight Positron Annihilation Induced Auger Electron Spectroscopy system, a highly surface selective analytical technique using time of flight of auger electron resulting from the annihilation of core electrons by trapped incident positron in image potential well. We simulated and modeled the trajectories of the charge particles in TOF-PAES using SIMION for the development of new high resolution system at U T Arlington and current TOFPAES system. This poster presents the SIMION simulations results, Time of flight calculations and larmor radius calculations for current system as well as new system.
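The time-of-flight and Larmor radius quantities mentioned above follow from elementary formulas; a sketch (drift length, electron energy, and field values below are placeholders, not the instrument's parameters):

```python
import math

E_CHARGE = 1.602176634e-19   # elementary charge, C
M_E      = 9.1093837015e-31  # electron mass, kg

def tof(length_m, energy_ev):
    """Flight time of an electron over a field-free drift length:
    t = L / v, with v = sqrt(2E/m)."""
    v = math.sqrt(2 * energy_ev * E_CHARGE / M_E)
    return length_m / v

def larmor_radius(energy_ev, b_tesla):
    """Larmor (cyclotron) radius r = m*v_perp / (q*B), taking the full
    kinetic energy as transverse for an upper bound."""
    v = math.sqrt(2 * energy_ev * E_CHARGE / M_E)
    return M_E * v / (E_CHARGE * b_tesla)
```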

  18. Inviscid and Viscous CFD Analysis of Booster Separation for the Space Launch System Vehicle

    NASA Technical Reports Server (NTRS)

    Dalle, Derek J.; Rogers, Stuart E.; Chan, William M.; Lee, Henry C.

    2016-01-01

    This paper presents details of Computational Fluid Dynamic (CFD) simulations of the Space Launch System during solid-rocket booster separation using the Cart3D inviscid and Overflow viscous CFD codes. The discussion addresses the use of multiple data sources of computational aerodynamics, experimental aerodynamics, and trajectory simulations for this critical phase of flight. Comparisons are shown between Cart3D simulations and a wind tunnel test performed at NASA Langley Research Center's Unitary Plan Wind Tunnel, and further comparisons are shown between Cart3D and viscous Overflow solutions for the flight vehicle. The Space Launch System (SLS) is a new exploration-class launch vehicle currently in development that includes two Solid Rocket Boosters (SRBs) modified from Space Shuttle hardware. These SRBs must separate from the SLS core during a phase of flight where aerodynamic loads are nontrivial. The main challenges for creating a separation aerodynamic database are the large number of independent variables (including orientation of the core, relative position and orientation of the boosters, and rocket thrust levels) and the complex flow caused by exhaust plumes of the booster separation motors (BSMs), which are small rockets designed to push the boosters away from the core by firing partially in the direction opposite to the motion of the vehicle.

  19. Computational studies on self-assembled paclitaxel structures: templates for hierarchical block copolymer assemblies and sustained drug release.

    PubMed

    Guo, Xin D; Tan, Jeremy P K; Kim, Sung H; Zhang, Li J; Zhang, Ying; Hedrick, James L; Yang, Yi Y; Qian, Yu

    2009-11-01

Paclitaxel-loaded poly(ethylene oxide)-b-poly(lactide) (PEO-b-PLA) systems have been observed to assemble into fiber structures with remarkably different properties using different chirality and molecular weight of PLA segments. In this study, dissipative particle dynamics (DPD) simulations were carried out to elaborate the microstructures and properties of pure paclitaxel and paclitaxel-loaded PEO-b-PLA systems. Paclitaxel molecules formed ribbon- or fiber-like structures in water. With the addition of PEO-b-PDLA, PEO-b-PLLA and their stereocomplex, paclitaxel acted as a template and polymer molecules assembled around the paclitaxel structure to form core/shell structured fibers having a PEO shell. For PEO19-b-PDLA27 and PEO19-b-PLLA27 systems, PLA segments and paclitaxel molecules were distributed homogeneously in the core of fibers based on the hydrophobic interactions. In the stereocomplex formulation, paclitaxel molecules were more concentrated in the inner PLA stereocomplex core, which led to slower release of paclitaxel. By increasing the length of PLA segments (e.g., 8, 16, 22, and 27), the crystalline structure of paclitaxel was gradually weakened and destroyed, which was further proved by X-ray diffraction studies. All the simulation results agreed well with experimental data, suggesting that DPD simulations may provide a powerful tool for designing drug delivery systems.

  20. A Wearable Goggle Navigation System for Dual-Mode Optical and Ultrasound Localization of Suspicious Lesions: Validation Studies Using Tissue-Simulating Phantoms and an Ex Vivo Human Breast Tissue Model

    PubMed Central

    Wang, Dong; Gan, Qi; Ye, Jian; Yue, Jian; Wang, Benzhong; Povoski, Stephen P.; Martin, Edward W.; Hitchcock, Charles L.; Yilmaz, Alper; Tweedle, Michael F.; Shao, Pengfei; Xu, Ronald X.

    2016-01-01

Surgical resection remains the primary curative treatment for many early-stage cancers, including breast cancer. The development of intraoperative guidance systems for identifying all sites of disease and improving the likelihood of complete surgical resection is an area of active ongoing research, as this can lead to a decrease in the need of subsequent additional surgical procedures. We develop a wearable goggle navigation system for dual-mode optical and ultrasound imaging of suspicious lesions. The system consists of a light source module, a monochromatic CCD camera, an ultrasound system, a Google Glass, and a host computer. It is tested in tissue-simulating phantoms and an ex vivo human breast tissue model. Our experiments demonstrate that the surgical navigation system provides useful guidance for localization and core needle biopsy of simulated tumor within the tissue-simulating phantom, as well as a core needle biopsy and subsequent excision of Indocyanine Green (ICG)-fluorescing sentinel lymph nodes. Our experiments support the contention that this wearable goggle navigation system can be potentially very useful and fully integrated by the surgeon for optimizing many aspects of oncologic surgery. Further engineering optimization and additional in vivo clinical validation work is necessary before such a surgical navigation system can be fully realized in the everyday clinical setting. PMID:27367051

  1. A Wearable Goggle Navigation System for Dual-Mode Optical and Ultrasound Localization of Suspicious Lesions: Validation Studies Using Tissue-Simulating Phantoms and an Ex Vivo Human Breast Tissue Model.

    PubMed

    Zhang, Zeshu; Pei, Jing; Wang, Dong; Gan, Qi; Ye, Jian; Yue, Jian; Wang, Benzhong; Povoski, Stephen P; Martin, Edward W; Hitchcock, Charles L; Yilmaz, Alper; Tweedle, Michael F; Shao, Pengfei; Xu, Ronald X

    2016-01-01

    Surgical resection remains the primary curative treatment for many early-stage cancers, including breast cancer. The development of intraoperative guidance systems for identifying all sites of disease and improving the likelihood of complete surgical resection is an area of active ongoing research, as this can lead to a decrease in the need of subsequent additional surgical procedures. We develop a wearable goggle navigation system for dual-mode optical and ultrasound imaging of suspicious lesions. The system consists of a light source module, a monochromatic CCD camera, an ultrasound system, a Google Glass, and a host computer. It is tested in tissue-simulating phantoms and an ex vivo human breast tissue model. Our experiments demonstrate that the surgical navigation system provides useful guidance for localization and core needle biopsy of simulated tumor within the tissue-simulating phantom, as well as a core needle biopsy and subsequent excision of Indocyanine Green (ICG)-fluorescing sentinel lymph nodes. Our experiments support the contention that this wearable goggle navigation system can be potentially very useful and fully integrated by the surgeon for optimizing many aspects of oncologic surgery. Further engineering optimization and additional in vivo clinical validation work is necessary before such a surgical navigation system can be fully realized in the everyday clinical setting.

  2. An FPGA computing demo core for space charge simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Jinyuan; Huang, Yifei; /Fermilab

    2009-01-01

In accelerator physics, space charge simulation requires a large amount of computing power. In a particle system, each calculation requires time- and resource-consuming operations such as multiplications, divisions, and square roots. Because of the flexibility of field programmable gate arrays (FPGAs), we implemented this task with efficient use of the available computing resources and completely eliminated non-calculating operations that are indispensable in regular microprocessors (e.g., instruction fetch, instruction decoding, etc.). We designed and tested a 16-bit demo core for computing the Coulomb force in an Altera Cyclone II FPGA device. To save resources, the inverse square-root cube operation in our design is computed using a memory look-up table addressed with the nine to ten most significant non-zero bits. At a 200 MHz internal clock, our demo core reaches a throughput of 200 M pairs/s/core, faster than a typical 2 GHz microprocessor by about a factor of 10. Temperature and power consumption of FPGAs were also lower than those of microprocessors. Fast and convenient, FPGAs can serve as alternatives to time-consuming microprocessors for space charge simulation.
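The look-up-table approach to the inverse square-root cube, x^(-3/2), can be mimicked in software (a floating-point sketch of the idea only; the FPGA design uses fixed-point arithmetic and addresses the table with the leading non-zero bits):

```python
import math

def build_table(nbits=10):
    """Table of m**-1.5 sampled at bucket midpoints, for the
    normalized mantissa m in [1, 2)."""
    return [(1 + (i + 0.5) / 2**nbits) ** -1.5 for i in range(2**nbits)]

def inv_sqrt_cube(x, table, nbits=10):
    """Approximate x**-1.5: locate the leading bit (exponent), index
    the table with the next `nbits` mantissa bits, rescale."""
    e = math.floor(math.log2(x))              # position of the MSB
    m = x / 2**e                              # mantissa in [1, 2)
    idx = min(int((m - 1) * 2**nbits), 2**nbits - 1)
    return table[idx] * 2.0 ** (-1.5 * e)
```

With 10 address bits the relative error stays well below a percent, which is consistent with a compact 16-bit datapath.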

  3. N-body simulation for self-gravitating collisional systems with a new SIMD instruction set extension to the x86 architecture, Advanced Vector eXtensions

    NASA Astrophysics Data System (ADS)

    Tanikawa, Ataru; Yoshikawa, Kohji; Okamoto, Takashi; Nitadori, Keigo

    2012-02-01

We present a high-performance N-body code for self-gravitating collisional systems accelerated with the aid of a new SIMD instruction set extension of the x86 architecture: Advanced Vector eXtensions (AVX), an enhanced version of the Streaming SIMD Extensions (SSE). With one processor core of an Intel Core i7-2600 processor (8 MB cache and 3.40 GHz) based on the Sandy Bridge micro-architecture, we implemented a fourth-order Hermite scheme with an individual timestep scheme (Makino and Aarseth, 1992), and achieved a performance of ~20 giga floating point operations per second (GFLOPS) for double-precision accuracy, which is two times and five times higher than that of the previously developed code implemented with the SSE instructions (Nitadori et al., 2006b), and that of a code implemented without any explicit use of SIMD instructions with the same processor core, respectively. We have parallelized the code by using the so-called NINJA scheme (Nitadori et al., 2006a), and achieved ~90 GFLOPS for a system containing more than N = 8192 particles with 8 MPI processes on four cores. We expect to achieve about 10 tera FLOPS (TFLOPS) for a self-gravitating collisional system with N ~ 10^5 on massively parallel systems with at most 800 cores with the Sandy Bridge micro-architecture. This performance will be comparable to that of Graphic Processing Unit (GPU) cluster systems, such as the one with about 200 Tesla C1070 GPUs (Spurzem et al., 2010). This paper offers an alternative to collisional N-body simulations with GRAPEs and GPUs.
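The fourth-order Hermite scheme referenced above predicts positions and velocities from the acceleration and its time derivative (the jerk); a scalar sketch of the predictor step (the vectorized, per-particle version is what the AVX kernels accelerate):

```python
def hermite_predict(x, v, a, j, dt):
    """Predictor step of the fourth-order Hermite scheme:
    Taylor expansion of position and velocity through the jerk term."""
    xp = x + v*dt + a*dt**2 / 2 + j*dt**3 / 6
    vp = v + a*dt + j*dt**2 / 2
    return xp, vp
```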

  4. Diffusion in Coulomb crystals.

    PubMed

    Hughto, J; Schneider, A S; Horowitz, C J; Berry, D K

    2011-07-01

    Diffusion in Coulomb crystals can be important for the structure of neutron star crusts. We determine diffusion constants D from molecular dynamics simulations. We find that D for Coulomb crystals with relatively soft-core 1/r interactions may be larger than D for Lennard-Jones or other solids with harder-core interactions. Diffusion, for simulations of nearly perfect body-centered-cubic lattices, involves the exchange of ions in ringlike configurations. Here ions "hop" in unison without the formation of long lived vacancies. Diffusion, for imperfect crystals, involves the motion of defects. Finally, we find that diffusion, for an amorphous system rapidly quenched from Coulomb parameter Γ=175 to Coulomb parameters up to Γ=1750, is fast enough that the system starts to crystalize during long simulation runs. These results strongly suggest that Coulomb solids in cold white dwarf stars, and the crust of neutron stars, will be crystalline and not amorphous.
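Diffusion constants of the kind reported here are conventionally extracted from the mean-square displacement via the Einstein relation, MSD ≈ 2·d·D·t at long times; a minimal sketch (assuming MSD values have already been computed from the trajectories):

```python
def diffusion_constant(msd, times, dim=3):
    """Einstein relation: D = MSD / (2*d*t) in the long-time limit,
    estimated from the slope of a least-squares fit through the origin."""
    num = sum(m * t for m, t in zip(msd, times))
    den = sum(t * t for t in times)
    slope = num / den
    return slope / (2 * dim)
```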

  5. Human Pyruvate Dehydrogenase Complex E2 and E3BP Core Subunits: New Models and Insights from Molecular Dynamics Simulations.

    PubMed

    Hezaveh, Samira; Zeng, An-Ping; Jandt, Uwe

    2016-05-19

Targeted manipulation and exploitation of beneficial properties of multienzyme complexes, especially for the design of novel and efficiently structured enzymatic reaction cascades, require a solid model understanding of the mechanistic principles governing the structure and functionality of the complexes. This type of system-level and quantitative knowledge has been very scarce thus far. We utilize the human pyruvate dehydrogenase complex (hPDC) as a versatile template to conduct corresponding studies. Here we present new homology models of the core subunits of the hPDC, namely E2 and E3BP, as a first effort to elucidate the assembly of the hPDC core based on molecular dynamics simulation. New models of E2 and E3BP were generated and validated at the atomistic level for different properties of the proteins. The results of the wild-type dimer simulations showed a strong hydrophobic interaction between the C-terminal and the hydrophobic pocket, which is the main driving force in the intertrimer binding and the core self-assembly. On the contrary, the C-terminal truncated versions exhibited a drastic loss of hydrophobic interaction, leading to dimeric separation. This study represents a significant step toward a model-based understanding of the structure and function of large multienzyme systems like PDC for developing highly efficient biocatalysts or bioreaction cascades.

  6. Delay-time distribution of core-collapse supernovae with late events resulting from binary interaction

    NASA Astrophysics Data System (ADS)

    Zapartas, E.; de Mink, S. E.; Izzard, R. G.; Yoon, S.-C.; Badenes, C.; Götberg, Y.; de Koter, A.; Neijssel, C. J.; Renzo, M.; Schootemeijer, A.; Shrotriya, T. S.

    2017-05-01

    Most massive stars, the progenitors of core-collapse supernovae, are in close binary systems and may interact with their companion through mass transfer or merging. We undertake a population synthesis study to compute the delay-time distribution of core-collapse supernovae, that is, the supernova rate versus time following a starburst, taking into account binary interactions. We test the systematic robustness of our results by running various simulations to account for the uncertainties in our standard assumptions. We find that a significant fraction, %, of core-collapse supernovae are "late", that is, they occur 50-200 Myr after birth, when all massive single stars have already exploded. These late events originate predominantly from binary systems with at least one, or, in most cases, with both stars initially being of intermediate mass (4-8 M⊙). The main evolutionary channels that contribute often involve either the merging of the initially more massive primary star with its companion or the engulfment of the remaining core of the primary by the expanding secondary that has accreted mass at an earlier evolutionary stage. Also, the total number of core-collapse supernovae increases by % because of binarity for the same initial stellar mass. The high rate implies that we should have already observed such late core-collapse supernovae, but have not recognized them as such. We argue that φ Persei is a likely progenitor and that eccentric neutron star - white dwarf systems are likely descendants. Late events can help explain the discrepancy in the delay-time distributions derived from supernova remnants in the Magellanic Clouds and extragalactic type Ia events, lowering the contribution of prompt Ia events. We discuss ways to test these predictions and speculate on the implications for supernova feedback in simulations of galaxy evolution.

  7. Rare-Earth-Free Permanent Magnets for Electrical Vehicle Motors and Wind Turbine Generators: Hexagonal Symmetry Based Materials Systems Mn-Bi and M-type Hexaferrite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, Yang-Ki; Haskew, Timothy; Myryasov, Oleg

    2014-06-05

    The research we conducted focuses on the rare-earth (RE)-free permanent magnet by modeling, simulating, and synthesizing exchange coupled two-phase (hard/soft) RE-free core-shell nano-structured magnet. The RE-free magnets are made of magnetically hard core materials (high anisotropy materials including Mn-Bi-X and M-type hexaferrite) coated by soft shell materials (high magnetization materials including Fe-Co or Co). Therefore, our research helps understand the exchange coupling conditions of the core/shell magnets, interface exchange behavior between core and shell materials, formation mechanism of core/shell structures, stability conditions of core and shell materials, etc.

  8. NeuroFlow: A General Purpose Spiking Neural Network Simulation Platform using Customizable Processors.

    PubMed

    Cheung, Kit; Schultz, Simon R; Luk, Wayne

    2015-01-01

    NeuroFlow is a scalable spiking neural network simulation platform for off-the-shelf high performance computing systems using customizable hardware processors such as Field-Programmable Gate Arrays (FPGAs). Unlike multi-core processors and application-specific integrated circuits, the processor architecture of NeuroFlow can be redesigned and reconfigured to suit a particular simulation to deliver optimized performance, such as the degree of parallelism to employ. The compilation process supports using PyNN, a simulator-independent neural network description language, to configure the processor. NeuroFlow supports a number of commonly used current or conductance based neuronal models such as integrate-and-fire and Izhikevich models, and the spike-timing-dependent plasticity (STDP) rule for learning. A 6-FPGA system can simulate a network of up to ~600,000 neurons and can achieve a real-time performance of 400,000 neurons. Using one FPGA, NeuroFlow delivers a speedup of up to 33.6 times the speed of an 8-core processor, or 2.83 times the speed of GPU-based platforms. With high flexibility and throughput, NeuroFlow provides a viable environment for large-scale neural network simulation.
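The pair-based STDP rule mentioned in the abstract is commonly written as an exponentially decaying weight update in the spike-timing difference; a sketch with illustrative parameter values (not NeuroFlow's actual configuration):

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP weight change for dt = t_post - t_pre (ms):
    potentiate when the presynaptic spike precedes the postsynaptic
    one (dt > 0), depress otherwise; magnitude decays with |dt|."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)
```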

  9. NeuroFlow: A General Purpose Spiking Neural Network Simulation Platform using Customizable Processors

    PubMed Central

    Cheung, Kit; Schultz, Simon R.; Luk, Wayne

    2016-01-01

    NeuroFlow is a scalable spiking neural network simulation platform for off-the-shelf high performance computing systems using customizable hardware processors such as Field-Programmable Gate Arrays (FPGAs). Unlike multi-core processors and application-specific integrated circuits, the processor architecture of NeuroFlow can be redesigned and reconfigured to suit a particular simulation to deliver optimized performance, such as the degree of parallelism to employ. The compilation process supports using PyNN, a simulator-independent neural network description language, to configure the processor. NeuroFlow supports a number of commonly used current or conductance based neuronal models such as integrate-and-fire and Izhikevich models, and the spike-timing-dependent plasticity (STDP) rule for learning. A 6-FPGA system can simulate a network of up to ~600,000 neurons and can achieve a real-time performance of 400,000 neurons. Using one FPGA, NeuroFlow delivers a speedup of up to 33.6 times the speed of an 8-core processor, or 2.83 times the speed of GPU-based platforms. With high flexibility and throughput, NeuroFlow provides a viable environment for large-scale neural network simulation. PMID:26834542

  10. Molecular Dynamics Simulations of Star Polymeric Molecules with Diblock Arms, a Comparative Study.

    PubMed

    Swope, William C; Carr, Amber C; Parker, Amanda J; Sly, Joseph; Miller, Robert D; Rice, Julia E

    2012-10-09

    We have performed all atom explicit solvent molecular dynamics simulations of three different star polymeric systems in water, each star molecule consisting of 16 diblock copolymer arms bound to a small adamantane core. The arms of each system consist of an inner "hydrophobic" block (either polylactide, polyvalerolactone, or polyethylene) and an outer hydrophilic block (polyethylene oxide, PEO). These models exhibit unusual structure very close to the core (clearly an artifact of our model) but which we believe becomes "normal" or bulk-like at relatively short distances from this core. We report on a number of temperature-dependent thermodynamic (structural/energetic) properties as well as kinetic properties. Our observations suggest that under physiological conditions, the hydrophobic regions of these systems may be solid and glassy, with only rare and shallow penetration by water, and that a sharp boundary exists between the hydrophobic cores and either the PEO or water. The PEO in these models is seen to be fully water-solvated at low temperatures but tends to phase separate from water as the temperature is increased, reminiscent of a lower critical solution temperature exhibited by PEO-water mixtures. Water penetration concentration and depth is composition and temperature dependent with greater water penetration for the most ester-rich star polymer.

  11. PhysiCell: An open source physics-based cell simulator for 3-D multicellular systems.

    PubMed

    Ghaffarizadeh, Ahmadreza; Heiland, Randy; Friedman, Samuel H; Mumenthaler, Shannon M; Macklin, Paul

    2018-02-01

Many multicellular systems problems can only be understood by studying how cells move, grow, divide, interact, and die. Tissue-scale dynamics emerge from systems of many interacting cells as they respond to and influence their microenvironment. The ideal "virtual laboratory" for such multicellular systems simulates both the biochemical microenvironment (the "stage") and many mechanically and biochemically interacting cells (the "players" upon the stage). PhysiCell, a physics-based multicellular simulator, is an open source agent-based simulator that provides both the stage and the players for studying many interacting cells in dynamic tissue microenvironments. It builds upon a multi-substrate biotransport solver to link cell phenotype to multiple diffusing substrates and signaling factors. It includes biologically-driven sub-models for cell cycling, apoptosis, necrosis, solid and fluid volume changes, mechanics, and motility "out of the box." The C++ code has minimal dependencies, making it simple to maintain and deploy across platforms. PhysiCell has been parallelized with OpenMP, and its performance scales linearly with the number of cells. Simulations up to 10^5-10^6 cells are feasible on quad-core desktop workstations; larger simulations are attainable on single HPC compute nodes. We demonstrate PhysiCell by simulating the impact of necrotic core biomechanics, 3-D geometry, and stochasticity on the dynamics of hanging drop tumor spheroids and ductal carcinoma in situ (DCIS) of the breast. We demonstrate stochastic motility, chemical and contact-based interaction of multiple cell types, and the extensibility of PhysiCell with examples in synthetic multicellular systems (a "cellular cargo delivery" system, with application to anti-cancer treatments), cancer heterogeneity, and cancer immunology. PhysiCell is a powerful multicellular systems simulator that will be continually improved with new capabilities and performance improvements.
It also represents a significant independent code base for replicating results from other simulation platforms. The PhysiCell source code, examples, documentation, and support are available under the BSD license at http://PhysiCell.MathCancer.org and http://PhysiCell.sf.net.

  12. Parallelization of GeoClaw code for modeling geophysical flows with adaptive mesh refinement on many-core systems

    USGS Publications Warehouse

    Zhang, S.; Yuen, D.A.; Zhu, A.; Song, S.; George, D.L.

    2011-01-01

We parallelized the GeoClaw code on a one-level grid using OpenMP in March 2011 to meet the urgent need of simulating tsunami waves near shore from Tohoku 2011, and achieved over 75% of the potential speed-up on an eight-core Dell Precision T7500 workstation [1]. After submitting that work to SC11, the International Conference for High Performance Computing, we obtained an unreleased OpenMP version of GeoClaw from David George, who developed the GeoClaw code as part of his Ph.D. thesis. In this paper, we will show the complementary characteristics of the two approaches used in parallelizing GeoClaw and the speed-up obtained by combining the advantages of each of the two individual approaches with adaptive mesh refinement (AMR), demonstrating the capabilities of running GeoClaw efficiently on many-core systems. We will also show a novel simulation of the Tohoku 2011 tsunami waves inundating the Sendai airport and the Fukushima nuclear power plants, in which the finest grid distance of 20 meters is achieved through 4-level AMR. This simulation yields quite good predictions of the wave heights and travel time of the tsunami waves. © 2011 IEEE.

  13. Convective cooling in a pool-type research reactor

    NASA Astrophysics Data System (ADS)

    Sipaun, Susan; Usman, Shoaib

    2016-01-01

    A reactor produces heat arising from fission reactions in the nuclear core. In the Missouri University of Science and Technology research reactor (MSTR), this heat is removed by natural convection, where the coolant/moderator is demineralised water. Heat energy is transferred from the core into the coolant, and the heated water eventually evaporates from the open pool surface. A secondary cooling system was installed to actively remove excess heat arising from prolonged reactor operations. The nuclear core consists of uranium silicide aluminium dispersion fuel (U3Si2Al) in the form of rectangular plates. Gaps between the plates allow coolant to pass through and carry away heat. A study was carried out to map out heat flow as well as to predict the system's performance via STAR-CCM+ simulation. The core was approximated as porous media with a porosity of 0.7027. The reactor is rated at 200 kW and the total heat density is approximately 1.07 x 10^7 W m^-3. An MSTR model consisting of 20% of MSTR's nuclear core in a third of the reactor pool was developed. At 35% pump capacity, the simulation results for the MSTR model showed that water is drawn out of the pool at a rate of 1.28 kg s^-1 through the 4-inch pipe, and predicted a pool surface temperature not exceeding 30°C.
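    The quoted figures invite a quick consistency check. The sketch below (plain Python; the water heat capacity is an assumed property, not a number from the abstract) recovers the implied active-core volume from the power density, and the coolant temperature rise if the full 200 kW were removed at the quoted 1.28 kg/s draw:

```python
# Back-of-envelope consistency check for the MSTR figures quoted above.
# Assumption: coolant is light water with cp ~ 4181 J/(kg K).

P = 200e3      # rated core power, W (200 kW)
q3 = 1.07e7    # quoted volumetric heat density, W/m^3
mdot = 1.28    # simulated coolant draw, kg/s
cp = 4181.0    # specific heat of water, J/(kg K) -- assumed property

# Implied active-core volume from the power density
V = P / q3                # ~0.019 m^3

# Coolant temperature rise if the full 200 kW were removed at 1.28 kg/s
dT = P / (mdot * cp)      # ~37 K

print(f"implied core volume: {V*1e3:.1f} L, full-power dT: {dT:.1f} K")
```

    Note that the paper's model removes heat from only 20% of the core at 35% pump capacity, so the actual simulated temperature rise would be considerably smaller than this full-power bound.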

  14. Two-Dimensional Neutronic and Fuel Cycle Analysis of the Transatomic Power Molten Salt Reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Betzler, Benjamin R.; Powers, Jeffrey J.; Worrall, Andrew

    2017-01-15

    This status report presents the results from the first phase of the collaboration between Transatomic Power Corporation (TAP) and Oak Ridge National Laboratory (ORNL) to provide neutronic and fuel cycle analysis of the TAP core design through the Department of Energy Gateway for Accelerated Innovation in Nuclear, Nuclear Energy Voucher program. The TAP design is a molten salt reactor using movable moderator rods to shift the neutron spectrum in the core from mostly epithermal at beginning of life to thermal at end of life. Additional developments in the ChemTriton modeling and simulation tool provide the critical moderator-to-fuel ratio searches and time-dependent parameters necessary to simulate the continuously changing physics in this complex system. Results from simulations with these tools show agreement with TAP-calculated performance metrics for core lifetime, discharge burnup, and salt volume fraction, verifying the viability of reducing actinide waste production with this design. Additional analyses of time step sizes, mass feed rates and enrichments, and isotopic removals provide additional information to make informed design decisions. This work further demonstrates capabilities of ORNL modeling and simulation tools for analysis of molten salt reactor designs and strongly positions this effort for the upcoming three-dimensional core analysis.

  15. More reliable forecasts with less precise computations: a fast-track route to cloud-resolved weather and climate simulators?

    PubMed Central

    Palmer, T. N.

    2014-01-01

    This paper sets out a new methodological approach to solving the equations for simulating and predicting weather and climate. In this approach, the conventionally hard boundary between the dynamical core and the sub-grid parametrizations is blurred. This approach is motivated by the relatively shallow power-law spectrum for atmospheric energy on scales of hundreds of kilometres and less. It is first argued that, because of this, the closure schemes for weather and climate simulators should be based on stochastic–dynamic systems rather than deterministic formulae. Second, as high-wavenumber elements of the dynamical core will necessarily inherit this stochasticity during time integration, it is argued that the dynamical core will be significantly over-engineered if all computations, regardless of scale, are performed completely deterministically and if all variables are represented with maximum numerical precision (in practice using double-precision floating-point numbers). As the era of exascale computing is approached, an energy- and computationally efficient approach to cloud-resolved weather and climate simulation is described where determinism and numerical precision are focused on the largest scales only. PMID:24842038

  16. More reliable forecasts with less precise computations: a fast-track route to cloud-resolved weather and climate simulators?

    PubMed

    Palmer, T N

    2014-06-28

    This paper sets out a new methodological approach to solving the equations for simulating and predicting weather and climate. In this approach, the conventionally hard boundary between the dynamical core and the sub-grid parametrizations is blurred. This approach is motivated by the relatively shallow power-law spectrum for atmospheric energy on scales of hundreds of kilometres and less. It is first argued that, because of this, the closure schemes for weather and climate simulators should be based on stochastic-dynamic systems rather than deterministic formulae. Second, as high-wavenumber elements of the dynamical core will necessarily inherit this stochasticity during time integration, it is argued that the dynamical core will be significantly over-engineered if all computations, regardless of scale, are performed completely deterministically and if all variables are represented with maximum numerical precision (in practice using double-precision floating-point numbers). As the era of exascale computing is approached, an energy- and computationally efficient approach to cloud-resolved weather and climate simulation is described where determinism and numerical precision are focused on the largest scales only.
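    The case made above for reduced numerical precision can be illustrated with a toy example. The sketch below is not from the paper; it uses Python's decimal module as a stand-in for adjustable-precision floating point, and shows how small increments are absorbed by a large running sum at low working precision but survive at high precision:

```python
from decimal import Decimal, getcontext

def accumulate(precision, n=10_000):
    """Add a tiny increment to 1.0 n times at the given working precision."""
    getcontext().prec = precision
    total = Decimal(1)
    tiny = Decimal("1e-9")
    for _ in range(n):
        total = total + tiny  # each addition is rounded to `precision` digits
    return total

low = accumulate(7)    # 1 + 1e-9 rounds back to 1.000000 every time
high = accumulate(28)  # the increments survive: 1 + 10_000 * 1e-9

print(low, high)  # low stays at 1; high reaches 1.00001
```

    In the paper's framing, whether such lost increments matter depends on scale: at small scales the dynamics are already dominated by stochastic parametrization noise, so the rounding error can be tolerated in exchange for energy and compute savings.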

  17. Communication Challenges in Requirements Definition: A Classroom Simulation

    ERIC Educational Resources Information Center

    Ramiller, Neil C.; Wagner, Erica L.

    2011-01-01

    Systems analysis and design is a standard course offering within information systems programs and often an important lecture topic in Information Systems core courses. Given the persistent difficulty that organizations experience in implementing systems that meet their requirements, it is important to help students in these courses get a tangible…

  18. Ejector subassembly for dual wall air drilling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kolle, J.J.

    1996-09-01

    The dry drilling system developed for the Yucca Mountain Site Characterization Project incorporates a surface vacuum system to prevent drilling air and cuttings from contaminating the borehole wall during coring operations. As the drilling depth increases, however, there is a potential for borehole contamination because of the limited volume of air which can be removed by the vacuum system. A feasibility analysis has shown that an ejector subassembly mounted in the drill string above the core barrel could significantly enhance the depth capacity of the dry drilling system. The ejector subassembly would use a portion of the air supplied to the core bit to maintain a vacuum on the hole bottom. The results of a design study, including performance testing of a laboratory-scale ejector simulator, are presented here.

  19. Study of sample drilling techniques for Mars sample return missions

    NASA Technical Reports Server (NTRS)

    Mitchell, D. C.; Harris, P. T.

    1980-01-01

    To demonstrate the feasibility of acquiring various surface samples for a Mars sample return mission, the following tasks were performed: (1) design of a Mars rover-mounted drill system capable of acquiring crystalline rock cores, prediction of performance, mass, and power requirements for various size systems, and the generation of engineering drawings; (2) performance of simulated permafrost coring tests using a residual Apollo lunar surface drill; (3) design of a rock breaker system which can be used to produce small samples of rock chips from rocks which are too large to return to Earth, but too small to be cored with the Rover-mounted drill; (4) design of sample containers for the selected regolith cores, rock cores, and small particulate or rock samples; and (5) design of sample handling and transfer techniques which will be required through all phases of sample acquisition, processing, and stowage on-board the Earth return vehicle. A preliminary design of a lightweight Rover-mounted sampling scoop was also developed.

  20. Discrete Event Modeling and Massively Parallel Execution of Epidemic Outbreak Phenomena

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perumalla, Kalyan S; Seal, Sudip K

    2011-01-01

    In complex phenomena such as epidemiological outbreaks, the intensity of inherent feedback effects and the significant role of transients in the dynamics make simulation the only effective method for proactive, reactive, or post-facto analysis. The spatial scale, runtime speed, and behavioral detail needed in detailed simulations of epidemic outbreaks make it necessary to use large-scale parallel processing. Here, an optimistic parallel execution of a new discrete event formulation of a reaction-diffusion simulation model of epidemic propagation is presented, dramatically increasing the fidelity and speed with which epidemiological simulations can be performed. Rollback support needed during optimistic parallel execution is achieved by combining reverse computation with a small amount of incremental state saving. Parallel speedup of over 5,500 and other runtime performance metrics of the system are observed with weak-scaling execution on a small (8,192-core) Blue Gene/P system, while scalability with a weak-scaling speedup of over 10,000 is demonstrated on 65,536 cores of a large Cray XT5 system. Scenarios representing large population sizes, exceeding several hundreds of millions of individuals in the largest cases, are successfully exercised to verify model scalability.
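    As a much simpler serial illustration of discrete-event epidemic simulation (this is not the authors' parallel reaction-diffusion formulation, and all rates and population sizes are made up), a Gillespie-style SIR loop advances the clock by exponentially distributed waiting times between infection and recovery events:

```python
import random

def sir_gillespie(n=1000, i0=10, beta=0.3, gamma=0.1, seed=42):
    """Serial Gillespie simulation of a well-mixed SIR epidemic.

    Events: infection (rate beta*S*I/N) and recovery (rate gamma*I).
    Returns the final (S, I, R) counts. Illustrative parameters only.
    """
    rng = random.Random(seed)
    s, i, r = n - i0, i0, 0
    t = 0.0
    while i > 0:
        rate_inf = beta * s * i / n
        rate_rec = gamma * i
        total = rate_inf + rate_rec
        t += rng.expovariate(total)          # time to the next event
        if rng.random() < rate_inf / total:  # choose which event fires
            s, i = s - 1, i + 1
        else:
            i, r = i - 1, r + 1
    return s, i, r

s, i, r = sir_gillespie()
print(s, i, r)  # population is conserved; the outbreak ends when I = 0
```

    The parallel challenge the record addresses is exactly what this serial loop hides: distributing such event streams across many cores while preserving causal event ordering, which is what optimistic execution with rollback provides.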

  1. Random pinning elucidates the nature of melting transition in two-dimensional core-softened potential system

    NASA Astrophysics Data System (ADS)

    Tsiok, E. N.; Fomin, Y. D.; Ryzhov, V. N.

    2018-01-01

    Despite about forty years of investigation, the nature of the melting transition in two dimensions is not completely clear. In the framework of the most popular Berezinskii-Kosterlitz-Thouless-Halperin-Nelson-Young (BKTHNY) theory, 2D systems melt through two continuous Berezinskii-Kosterlitz-Thouless (BKT) transitions with an intermediate hexatic phase. A conventional first-order transition is also possible. Recently, however, a new melting scenario was proposed on the basis of computer simulations, with a continuous BKT-type solid-hexatic transition and a first-order hexatic-liquid transition. In these simulations, however, the hexatic phase is extremely narrow, which makes it difficult to study. In the present paper, we propose to apply random pinning to investigate the hexatic phase in more detail. The results of molecular dynamics simulations of a two-dimensional system with a core-softened potential with a narrow repulsive step, similar to the soft-disk system, are outlined. The system has a small fraction of pinned particles giving quenched disorder. Random pinning widens the hexatic phase without changing the melting scenario and makes it possible to study the behavior of the diffusivity and order parameters in the vicinity of the melting transition and inside the hexatic phase.

  2. The numerical simulation of a high-speed axial flow compressor

    NASA Technical Reports Server (NTRS)

    Mulac, Richard A.; Adamczyk, John J.

    1991-01-01

    The advancement of high-speed axial-flow multistage compressors is impeded by a lack of detailed flow-field information. Recent developments in compressor flow modeling and numerical simulation have the potential to provide needed information in a timely manner. The development of a computer program is described to solve the viscous form of the average-passage equation system for multistage turbomachinery. Programming issues such as in-core versus out-of-core data storage and CPU utilization (parallelization, vectorization, and chaining) are addressed. Code performance is evaluated through the simulation of the first four stages of a five-stage, high-speed, axial-flow compressor. The second part addresses the flow physics which can be obtained from the numerical simulation. In particular, an examination of the endwall flow structure is made, and its impact on blockage distribution assessed.

  3. Improving Representation of Convective Transport for Scale-Aware Parameterization – Part I: Convection and Cloud Properties Simulated with Spectral Bin and Bulk Microphysics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fan, Jiwen; Liu, Yi-Chin; Xu, Kuan-Man

    2015-04-27

    The ultimate goal of this study is to improve representation of convective transport by cumulus parameterization for meso-scale and climate models. As Part I of the study, we perform extensive evaluations of cloud-resolving simulations of a squall line and mesoscale convective complexes in mid-latitude continental and tropical regions using the Weather Research and Forecasting (WRF) model with spectral-bin microphysics (SBM) and with two double-moment bulk microphysics schemes: a modified Morrison (MOR) and Milbrandt and Yau (MY2). Compared to observations, in general, SBM gives better simulations of precipitation, vertical velocity of convective cores, and the vertically decreasing trend of radar reflectivity than MOR and MY2, and therefore will be used for analysis of scale-dependence of eddy transport in Part II. The common features of the simulations for all convective systems are (1) the model tends to overestimate convection intensity in the middle and upper troposphere, but SBM can alleviate much of the overestimation and reproduce the observed convection intensity well; (2) the model greatly overestimates radar reflectivity in convective cores (SBM predicts smaller radar reflectivity but does not remove the large overestimation); and (3) the model performs better for mid-latitude convective systems than for the tropical system. The modeled mass fluxes of the mid-latitude systems are not sensitive to microphysics schemes, but are very sensitive for the tropical case, indicating strong microphysical modification of convection. Cloud microphysical measurements of rain, snow and graupel in convective cores will be critically important to further elucidate issues within cloud microphysics schemes.

  4. Scaling NS-3 DCE Experiments on Multi-Core Servers

    DTIC Science & Technology

    2016-06-15

    ...that work well together. 3.2 Simulation Server Details: We ran the simulations on a Dell PowerEdge M520 blade server [8] running Ubuntu Linux 14.04... To minimize the amount of time needed to complete all of the simulations, we planned to run multiple simulations at the same time on a blade server... The MacBook was running the simulation inside a virtual machine (Ubuntu 14.04), while the blade server was running the same operating system directly on...

  5. Output-Based Adaptive Meshing Applied to Space Launch System Booster Separation Analysis

    NASA Technical Reports Server (NTRS)

    Dalle, Derek J.; Rogers, Stuart E.

    2015-01-01

    This paper presents details of Computational Fluid Dynamic (CFD) simulations of the Space Launch System during solid-rocket booster separation using the Cart3D inviscid code with comparisons to Overflow viscous CFD results and a wind tunnel test performed at NASA Langley Research Center's Unitary Plan Wind Tunnel. The Space Launch System (SLS) launch vehicle includes two solid-rocket boosters that burn out before the primary core stage and thus must be discarded during the ascent trajectory. The main challenges for creating an aerodynamic database for this separation event are the large number of basis variables (including orientation of the core, relative position and orientation of the boosters, and rocket thrust levels) and the complex flow caused by the booster separation motors. The solid-rocket boosters are modified from their form when used with the Space Shuttle Launch Vehicle, which has a rich flight history. However, the differences between the SLS core and the Space Shuttle External Tank result in the boosters separating with much narrower clearances, and so reducing aerodynamic uncertainty is necessary to clear the integrated system for flight. This paper discusses an approach that has been developed to analyze about 6000 wind tunnel simulations and 5000 flight vehicle simulations using Cart3D in adaptive-meshing mode. In addition, a discussion is presented of Overflow viscous CFD runs used for uncertainty quantification. Finally, the article presents lessons learned and improvements that will be implemented in future separation databases.

  6. Dynamical Core in Atmospheric Model Does Matter in the Simulation of Arctic Climate

    NASA Astrophysics Data System (ADS)

    Jun, Sang-Yoon; Choi, Suk-Jin; Kim, Baek-Min

    2018-03-01

    Climate models using different dynamical cores can simulate significantly different winter Arctic climates even if equipped with virtually the same physics schemes. The current climate simulated by a global climate model using a cubed-sphere grid with the spectral element method (SE core) exhibited significantly warmer Arctic surface air temperature than that using a latitude-longitude grid with a finite volume method core. Compared to the finite volume method core, the SE core simulated additional adiabatic warming in the Arctic lower atmosphere, and this was consistent with the eddy-forced secondary circulation. Downward longwave radiation further enhanced Arctic near-surface warming, with a higher surface air temperature of about 1.9 K. Furthermore, in the atmospheric response to reduced sea ice conditions with the same physical settings, only the SE core showed a robust cooling response over North America. We emphasize that special attention is needed in selecting the dynamical core of climate models in the simulation of the Arctic climate and associated teleconnection patterns.

  7. Dynamic Response Testing in an Electrically Heated Reactor Test Facility

    NASA Astrophysics Data System (ADS)

    Bragg-Sitton, Shannon M.; Morton, T. J.

    2006-01-01

    Non-nuclear testing can be a valuable tool in the development of a space nuclear power or propulsion system. In a non-nuclear test bed, electric heaters are used to simulate the heat from nuclear fuel. Standard testing allows one to fully assess thermal, heat transfer, and stress related attributes of a given system, but fails to demonstrate the dynamic response that would be present in an integrated, fueled reactor system. The integration of thermal hydraulic hardware tests with simulated neutronic response provides a bridge between electrically heated testing and fueled nuclear testing. By implementing a neutronic response model to simulate the dynamic response that would be expected in a fueled reactor system, one can better understand system integration issues, characterize integrated system response times and response characteristics, and assess potential design improvements at a relatively small fiscal investment. Initial system dynamic response testing was demonstrated on the integrated SAFE-100a heat pipe (HP) cooled, electrically heated reactor and heat exchanger hardware, utilizing a one-group solution to the point kinetics equations to simulate the expected neutronic response of the system. Reactivity feedback calculations were then based on a bulk reactivity feedback coefficient and measured average core temperature. This paper presents preliminary results from similar dynamic testing of a direct drive gas cooled reactor system (DDG), demonstrating the applicability of the testing methodology to any reactor type and demonstrating the variation in system response characteristics in different reactor concepts. Although the HP and DDG designs both utilize a fast spectrum reactor, the method of cooling the reactor differs significantly, leading to a variable system response that can be demonstrated and assessed in a non-nuclear test facility. Planned system upgrades to allow implementation of higher fidelity dynamic testing are also discussed. 
Proposed DDG testing will utilize a higher fidelity point kinetics model to control core power transients, and reactivity feedback will be based on localized feedback coefficients and several independent temperature measurements taken within the core block. This paper presents preliminary test results and discusses the methodology that will be implemented in follow-on DDG testing and the additional instrumentation required to implement high fidelity dynamic testing.
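    The control scheme described in this record and the records above (a point kinetics model whose reactivity feedback is driven by measured core temperature) can be sketched generically. The following one-group point-kinetics model with a single lumped thermal node is illustrative only; every numerical value is a placeholder, not a SAFE-100a or DDG parameter:

```python
# One-group point kinetics with temperature reactivity feedback, explicit Euler.
# All parameter values are illustrative placeholders, not test-article data.

beta = 0.0065      # delayed neutron fraction
lam = 0.08         # precursor decay constant, 1/s
Lambda = 1.0e-4    # neutron generation time, s
alpha = -1.0e-5    # temperature feedback coefficient, dk/k per K (assumed)
mc = 1.0e3         # lumped core thermal mass, J/K
h = 100.0          # heat-loss coefficient to the sink, W/K
t_sink = 300.0     # sink temperature, K

# Start at an equilibrium operating point, then insert a reactivity step.
P = 1.0e3                        # initial power, W
C = beta * P / (Lambda * lam)    # equilibrium precursor concentration
T0 = t_sink + P / h              # equilibrium core temperature, K
T = T0
rho_ext = 1.0e-3                 # external reactivity step (~15 cents)

dt = 1.0e-3
for _ in range(int(800.0 / dt)):                 # simulate 800 s
    rho = rho_ext + alpha * (T - T0)             # feedback on core temperature
    dP = ((rho - beta) / Lambda) * P + lam * C
    dC = (beta / Lambda) * P - lam * C
    dTdt = (P - h * (T - t_sink)) / mc
    P, C, T = P + dt * dP, C + dt * dC, T + dt * dTdt

# Negative feedback settles the core where total reactivity returns to ~0:
# T -> T0 + rho_ext/(-alpha) and P -> h*(T - t_sink) for these placeholders.
print(f"P = {P:.0f} W, T = {T:.1f} K")
```

    The negative temperature coefficient is what makes the simulated core self-stabilizing after a reactivity insertion, which is exactly the dynamic response behavior these non-nuclear tests set out to characterize.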

  8. A dual-waveband dynamic IR scene projector based on DMD

    NASA Astrophysics Data System (ADS)

    Hu, Yu; Zheng, Ya-wei; Gao, Jiao-bo; Sun, Ke-feng; Li, Jun-na; Zhang, Lei; Zhang, Fang

    2016-10-01

    An infrared scene simulation system can simulate multifold objects and backgrounds to perform dynamic tests and evaluate EO detection systems in hardware-in-the-loop testing. The basic structure of a dual-waveband dynamic IR scene projector is introduced in this paper. The system's core device is an IR Digital Micro-mirror Device (DMD), and the radiant source is a mini-type high-temperature IR plane blackbody. An IR collimation optical system whose transmission range covers 3-5 μm and 8-12 μm is designed as the projection optical system. Scene simulation software was developed with Visual C++ and Vega software tools, and a software flow chart is presented. The parameters and testing results of the system are given, and the system performed satisfactorily in IR imaging simulation testing.

  9. Brown Dwarf Binaries from Disintegrating Triple Systems

    NASA Astrophysics Data System (ADS)

    Reipurth, Bo; Mikkola, Seppo

    2015-04-01

    Binaries in which both components are brown dwarfs (BDs) are being discovered at an increasing rate, and their properties may hold clues to their origin. We have carried out 200,000 N-body simulations of three identical stellar embryos with masses drawn from a Chabrier IMF and embedded in a molecular core. The bodies are initially non-hierarchical and undergo chaotic motions within the cloud core, while accreting using Bondi-Hoyle accretion. The coupling of dynamics and accretion often leads to one or two dominant bodies controlling the center of the cloud core, while banishing the other(s) to the lower-density outskirts, leading to stunted growth. Eventually each system either transforms into a bound hierarchical configuration or breaks apart into separate single and binary components. The orbital motion is followed for 100 Myr. In order to illustrate 200,000 end-states of such dynamical evolution with accretion, we introduce the “triple diagnostic diagram,” which plots two dimensionless numbers against each other, representing the binary mass ratio and the mass ratio of the third body to the total system mass. Numerous free-floating BD binaries are formed in these simulations, and statistical properties are derived. The separation distribution function is in good correspondence with observations, showing a steep rise at close separations, peaking around 13 AU and declining more gently, reaching zero at separations greater than 200 AU. Unresolved BD triple systems may appear as wider BD binaries. Mass ratios are strongly peaked toward unity, as observed, but this is partially due to the initial assumptions. Eccentricities gradually increase toward higher values, due to the lack of viscous interactions in the simulations, which would both shrink the orbits and decrease their eccentricities. 
    Most newborn triple systems are unstable: while there are 9209 ejected BD binaries at 1 Myr, corresponding to about 4% of the 200,000 simulations, this number has grown to 15,894 at 100 Myr (~8%). The total binary fraction among free-floating BDs is 0.43, higher than indicated by current observations, which, however, are still incomplete. Also, the gradual breakup of higher-order multiples leads to many more singles, thus lowering the binary fraction. The main threat to newly born triple systems is internal instabilities, not external perturbations. At 1 Myr there are 1325 BD binaries still bound to a star, corresponding to 0.66% of the simulations, but only 253 (0.13%) are stable on timescales >100 Myr. These simulations indicate that dynamical interactions in newborn triple systems of stellar embryos embedded in and accreting from a cloud core naturally form a population of free-floating BD binaries, and this mechanism may constitute a significant pathway for the formation of BD binaries.
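    The two coordinates of the "triple diagnostic diagram" described above are simple to compute. In the sketch below the function name and example masses are mine, and the exact axis conventions of the paper are an assumption:

```python
def triple_diagnostics(m1, m2, m3):
    """Return the two dimensionless coordinates of a triple diagnostic diagram:
    the binary mass ratio q = m2/m1 (with m1 >= m2 the binary components)
    and f3 = m3 / (m1 + m2 + m3), the third body's share of the system mass.
    Axis conventions here are an illustration, not taken from the paper.
    """
    q = m2 / m1
    f3 = m3 / (m1 + m2 + m3)
    return q, f3

# e.g. a 0.05 + 0.04 Msun brown dwarf pair with a 0.02 Msun third body
q, f3 = triple_diagnostics(0.05, 0.04, 0.02)
print(q, f3)  # q = 0.8, f3 ~ 0.18
```

    Plotting 200,000 end-states in these two coordinates is what lets the authors summarize which body ends up dominant and how often near-equal-mass pairs survive ejection.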

  10. Computational Cosmology at the Bleeding Edge

    NASA Astrophysics Data System (ADS)

    Habib, Salman

    2013-04-01

    Large-area sky surveys are providing a wealth of cosmological information to address the mysteries of dark energy and dark matter. Observational probes based on tracking the formation of cosmic structure are essential to this effort, and rely crucially on N-body simulations that solve the Vlasov-Poisson equation in an expanding Universe. As statistical errors from survey observations continue to shrink, and cosmological probes increase in number and complexity, simulations are entering a new regime in their use as tools for scientific inference. Changes in supercomputer architectures provide another rationale for developing new parallel simulation and analysis capabilities that can scale to computational concurrency levels measured in the millions to billions. In this talk I will outline the motivations behind the development of the HACC (Hardware/Hybrid Accelerated Cosmology Code) extreme-scale cosmological simulation framework and describe its essential features. By exploiting a novel algorithmic structure that allows flexible tuning across diverse computer architectures, including accelerated and many-core systems, HACC has attained a performance of 14 PFlops on the IBM BG/Q Sequoia system at 69% of peak, using more than 1.5 million cores.

  11. Efficiency of static core turn-off in a system-on-a-chip with variation

    DOEpatents

    Cher, Chen-Yong; Coteus, Paul W; Gara, Alan; Kursun, Eren; Paulsen, David P; Schuelke, Brian A; Sheets, II, John E; Tian, Shurong

    2013-10-29

    A processor-implemented method for improving efficiency of a static core turn-off in a multi-core processor with variation, the method comprising: conducting via a simulation a turn-off analysis of the multi-core processor at the multi-core processor's design stage, wherein the turn-off analysis of the multi-core processor at the multi-core processor's design stage includes a first output corresponding to a first multi-core processor core to turn off; conducting a turn-off analysis of the multi-core processor at the multi-core processor's testing stage, wherein the turn-off analysis of the multi-core processor at the multi-core processor's testing stage includes a second output corresponding to a second multi-core processor core to turn off; comparing the first output and the second output to determine if the first output is referring to the same core to turn off as the second output; outputting a third output corresponding to the first multi-core processor core if the first output and the second output are both referring to the same core to turn off.
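    Stripped of the simulation and measurement machinery, the claimed method reduces to comparing two independently derived turn-off recommendations and emitting a third output only on agreement. A minimal sketch of that control flow (names and types are mine, not from the patent):

```python
from typing import Optional

def select_core_to_turn_off(design_stage_core: int,
                            testing_stage_core: int) -> Optional[int]:
    """Compare the design-stage and testing-stage turn-off analyses.

    Each argument is the index of the core that the respective analysis
    recommends turning off. Per the claim, a third output is produced
    only when both analyses refer to the same core.
    """
    if design_stage_core == testing_stage_core:
        return design_stage_core  # third output: the agreed-upon core
    return None                   # analyses disagree; no core selected

print(select_core_to_turn_off(3, 3))  # 3
print(select_core_to_turn_off(3, 5))  # None
```

    The substance of the patent lies in how the two recommendations are produced (design-time simulation versus per-chip testing under process variation); the comparison step itself is this simple.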

  12. Quantum Monte Carlo simulation of the ferroelectric or ferrielectric nanowire with core shell morphology

    NASA Astrophysics Data System (ADS)

    Feraoun, A.; Zaim, A.; Kerouad, M.

    2016-09-01

    By using Quantum Monte Carlo simulation, the electric properties of a nanowire, consisting of a ferroelectric core of spin-1/2 surrounded by a ferroelectric shell of spin-1/2 with ferro- or anti-ferroelectric interfacial coupling, have been studied within the framework of the Transverse Ising Model (TIM). We have examined the effects of the shell coupling Js, the interfacial coupling JInt, the transverse field Ω, and the temperature T on the hysteresis behavior and on the electric properties of the system. The remanent polarization and the coercive field as functions of the transverse field and the temperature are examined. A number of characteristic behaviors have been found, such as the appearance of triple hysteresis loops for appropriate values of the system parameters.

  13. Analysis of the return to power scenario following a LBLOCA in a PWR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Macian, R.; Tyler, T.N.; Mahaffy, J.H.

    1995-09-01

    The risk of reactivity accidents has been considered an important safety issue since the beginning of the nuclear power industry. In particular, several events leading to such scenarios for PWRs have been recognized and studied to assess the potential risk of fuel damage. The present paper analyzes one such event: the possible return to power during the reflooding phase following a LBLOCA. TRAC-PF1/MOD2 coupled with a three-dimensional neutronic model of the core based on the Nodal Expansion Method (NEM) was used to perform the analysis. The system computer model contains a detailed representation of a complete typical 4-loop PWR. Thus, the simulation can follow complex system interactions during reflooding, which may influence the neutronics feedback in the core. Analyses were made with core models based on cross sections generated by LEOPARD. A standard and a potentially more limiting case, with increased pressurizer and accumulator inventories, were run. In both simulations, the reactor reaches a stable state after the reflooding is completed. The lower core region, filled with cold water, generates enough power to boil part of the incoming liquid, thus preventing the core average liquid fraction from reaching a value high enough to cause a return to power. At the same time, the mass flow rate through the core is adequate to maintain the rod temperature well below the fuel damage limit.

  14. Simulation of DKIST solar adaptive optics system

    NASA Astrophysics Data System (ADS)

    Marino, Jose; Carlisle, Elizabeth; Schmidt, Dirk

    2016-07-01

    Solar adaptive optics (AO) simulations are a valuable tool to guide the design and optimization process of current and future solar AO and multi-conjugate AO (MCAO) systems. Solar AO and MCAO systems rely on extended object cross-correlating Shack-Hartmann wavefront sensors to measure the wavefront. Accurate solar AO simulations require computationally intensive operations, which have until recently presented a prohibitive computational cost. We present an update on the status of a solar AO and MCAO simulation tool being developed at the National Solar Observatory. The simulation tool is a multi-threaded application written in the C++ language that takes advantage of current large multi-core CPU computer systems and fast Ethernet connections to provide accurate full simulation of solar AO and MCAO systems. It interfaces with KAOS, a state-of-the-art solar AO control software developed by the Kiepenheuer-Institut fuer Sonnenphysik, that provides reliable AO control. We report on the latest results produced by the solar AO simulation tool.

  15. Recent improvements of reactor physics codes in MHI

    NASA Astrophysics Data System (ADS)

    Kosaka, Shinya; Yamaji, Kazuya; Kirimura, Kazuki; Kamiyama, Yohei; Matsumoto, Hideki

    2015-12-01

    This paper introduces recent improvements to reactor physics codes at Mitsubishi Heavy Industries, Ltd. (MHI). MHI has developed a new neutronics design code system, Galaxy/Cosmo-S (GCS), for PWR core analysis. After TEPCO's Fukushima Daiichi accident, it has become necessary to consider design extension conditions that were not covered explicitly by the former safety licensing analyses. Under these circumstances, MHI made several improvements to the GCS code system. A new resonance calculation model for the lattice physics code and a homogeneous cross section representation model for the core simulator have been developed to apply to a wider range of core conditions corresponding to severe accident states, such as anticipated transient without scram (ATWS) analysis and criticality evaluation of a dried-up spent fuel pit. As a result of these improvements, the GCS code system has very wide calculation applicability with good accuracy for any core conditions, as long as the fuel is not damaged. In this paper, the outline of the GCS code system is described briefly and recent relevant development activities are presented.

  16. Recent improvements of reactor physics codes in MHI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kosaka, Shinya, E-mail: shinya-kosaka@mhi.co.jp; Yamaji, Kazuya; Kirimura, Kazuki

    2015-12-31

    This paper introduces recent improvements to reactor physics codes at Mitsubishi Heavy Industries, Ltd. (MHI). MHI has developed a new neutronics design code system, Galaxy/Cosmo-S (GCS), for PWR core analysis. After TEPCO's Fukushima Daiichi accident, it became necessary to consider design extension conditions that had not been covered explicitly by earlier safety licensing analyses. Under these circumstances, MHI made several improvements to the GCS code system. A new resonance calculation model for the lattice physics code and a homogeneous cross-section representation model for the core simulator have been developed to cover a wider range of core conditions corresponding to severe accident states, such as anticipated transient without scram (ATWS) analysis and criticality evaluation of a dried-up spent fuel pit. As a result of these improvements, the GCS code system is applicable over a very wide range of core conditions with good accuracy, as long as the fuel is not damaged. This paper briefly describes the outline of the GCS code system and presents recent relevant development activities.

  17. Relativistic semiempirical-core-potential calculations in Ca+, Sr+, and Ba+ ions on Lagrange meshes

    NASA Astrophysics Data System (ADS)

    Filippin, Livio; Schiffmann, Sacha; Dohet-Eraly, Jérémy; Baye, Daniel; Godefroid, Michel

    2018-01-01

    Relativistic atomic structure calculations are carried out in alkaline-earth-metal ions using a semiempirical-core-potential approach. The systems are partitioned into frozen-core electrons and an active valence electron. The core orbitals are defined by a Dirac-Hartree-Fock calculation using the grasp2k package. The valence electron is described by a Dirac-like Hamiltonian involving a core-polarization potential to simulate the core-valence electron correlation. The associated equation is solved with the Lagrange-mesh method, an approximate variational approach that takes the form of a mesh calculation because a Gauss quadrature is used to evaluate the matrix elements. Properties involving the low-lying metastable ²D₃/₂ and ²D₅/₂ states of Ca+, Sr+, and Ba+ are studied, such as polarizabilities, one- and two-photon decay rates, and lifetimes. Good agreement is found with other theoretical and observational results, which is promising for further applications in alkali-like systems.
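    The Lagrange-mesh property mentioned above, where the Gauss quadrature renders potential matrix elements diagonal at the mesh points, can be illustrated with a toy one-dimensional sketch (an assumed x² potential on a Gauss-Legendre mesh; this is an illustration of the quadrature property only, not the paper's relativistic machinery):

```python
import numpy as np

def cardinal(x_nodes, i, x):
    """Lagrange cardinal function L_i: equals 1 at node i, 0 at the other nodes."""
    num = np.prod([x - x_nodes[j] for j in range(len(x_nodes)) if j != i], axis=0)
    den = np.prod([x_nodes[i] - x_nodes[j] for j in range(len(x_nodes)) if j != i])
    return num / den

# Gauss-Legendre mesh (nodes and weights) on [-1, 1]
n = 8
x, w = np.polynomial.legendre.leggauss(n)

# Potential matrix in the Lagrange basis, evaluated with the SAME Gauss
# quadrature that defines the mesh: V_ij = sum_k w_k L_i(x_k) V(x_k) L_j(x_k)
V = lambda t: t**2  # toy potential (assumption for illustration)
Vmat = np.array([[np.sum(w * cardinal(x, i, x) * V(x) * cardinal(x, j, x))
                  for j in range(n)] for i in range(n)])
# Because L_i(x_k) = delta_ik, Vmat is diagonal with entries w_i * V(x_i).
```

The diagonality is what makes mesh calculations cheap: no potential integrals are ever computed, only evaluations at the mesh points.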

  18. Shell-corona microgels from double interpenetrating networks.

    PubMed

    Rudyak, Vladimir Yu; Gavrilov, Alexey A; Kozhunova, Elena Yu; Chertovich, Alexander V

    2018-04-18

    Polymer microgels with a dense outer shell offer outstanding features as universal carriers for different guest molecules. In this paper, microgels formed by an interpenetrating network comprised of collapsed and swollen subnetworks are investigated using dissipative particle dynamics (DPD) computer simulations, and it is found that such systems can form classical core-corona structures, shell-corona structures, and core-shell-corona structures, depending on the subchain length and molecular mass of the system. The core-corona structures consisting of a dense core and soft corona are formed at small microgel sizes when the subnetworks are able to effectively separate in space. The most interesting shell-corona structures consist of a soft cavity in a dense shell surrounded with a loose corona, and are found at intermediate gel sizes; the area of their existence depends on the subchain length and the corresponding mesh size. At larger molecular masses the collapsing network forms additional cores inside the soft cavity, leading to the core-shell-corona structure.

  19. Performance comparison of a fiber optic communication system based on optical OFDM and an optical OFDM-MIMO with Alamouti code by using numerical simulations

    NASA Astrophysics Data System (ADS)

    Serpa-Imbett, C. M.; Marín-Alfonso, J.; Gómez-Santamaría, C.; Betancur-Agudelo, L.; Amaya-Fernández, F.

    2013-12-01

    Space division multiplexing in multicore fibers is one of the most promising technologies for supporting the transmissions of next-generation peta-to-exaflop-scale supercomputers and mega data centers, owing to the cost and space savings of the new optical fibers with multiple cores. Additionally, multicore fibers allow photonic signal processing in optical communication systems, taking advantage of mode coupling phenomena. In this work, we have numerically simulated an optical MIMO-OFDM (multiple-input multiple-output orthogonal frequency division multiplexing) system using the Alamouti code, transmitted through a twin-core fiber with low coupling. Furthermore, an optical OFDM signal is transmitted through one core of a single-mode fiber, using pilot-aided channel estimation. We compare the transmission performance in the twin-core fiber and in the single-mode fiber using numerical results for the bit-error rate, considering linear propagation and Gaussian noise through an optical fiber link. We carry out an optical fiber transmission of OFDM frames using 8-PSK and 16-QAM, with bit rates of 130 Gb/s and 170 Gb/s, respectively. We obtain a penalty of around 4 dB for the 8-PSK transmissions after 100 km of linear fiber-optic propagation, for both the single-mode and twin-core fibers, and a penalty of around 6 dB for the 16-QAM transmissions under the same conditions. The transmission in a two-core fiber using Alamouti-coded OFDM-MIMO exhibits better performance, offering a good alternative for mitigating fiber impairments and allowing Alamouti coding to be extended to multichannel systems spatially multiplexed in multicore fibers.
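    The Alamouti scheme used in this record is a standard 2x2 space-time block code; the following is a minimal noiseless sketch of its encoding and linear combining (an illustrative flat-channel model with assumed symbols, not the authors' full OFDM simulation):

```python
import numpy as np

def alamouti_encode(s1, s2):
    # Rows = time slots, columns = transmit antennas (or fiber cores).
    return np.array([[s1, s2],
                     [-np.conj(s2), np.conj(s1)]])

def alamouti_combine(r1, r2, h1, h2):
    # Linear combining that decouples the two symbols when the channel
    # (h1, h2) is flat and constant over the two slots.
    s1_hat = np.conj(h1) * r1 + h2 * np.conj(r2)
    s2_hat = np.conj(h2) * r1 - h1 * np.conj(r2)
    gain = abs(h1) ** 2 + abs(h2) ** 2
    return s1_hat / gain, s2_hat / gain

# Noiseless demo with an 8-PSK symbol pair and a random flat channel.
rng = np.random.default_rng(0)
s1, s2 = np.exp(1j * 2 * np.pi * np.array([1, 5]) / 8)
h1, h2 = (rng.normal(size=2) + 1j * rng.normal(size=2)) / np.sqrt(2)
X = alamouti_encode(s1, s2)
r1 = h1 * X[0, 0] + h2 * X[0, 1]   # received in slot 1
r2 = h1 * X[1, 0] + h2 * X[1, 1]   # received in slot 2
s1_hat, s2_hat = alamouti_combine(r1, r2, h1, h2)
```

In the noiseless case the combiner recovers both symbols exactly; with noise, each symbol sees the full diversity gain |h1|² + |h2|².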

  20. A new parameter-free soft-core potential for silica and its application to simulation of silica anomalies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Izvekov, Sergei, E-mail: sergiy.izvyekov.civ@mail.mil; Rice, Betsy M.

    2015-12-28

    A core-softening of the effective interaction between oxygen atoms in water and silica systems and its role in developing anomalous thermodynamic, transport, and structural properties have been extensively debated. For silica, the progress with addressing these issues has been hampered by a lack of effective interaction models with explicit core-softening. In this work, we present an extension of a two-body soft-core interatomic force field for silica recently reported by us [S. Izvekov and B. M. Rice, J. Chem. Phys. 136(13), 134508 (2012)] to include three-body forces. Similar to two-body interaction terms, the three-body terms are derived using parameter-free force-matching of the interactions from ab initio MD simulations of liquid silica. The derived shape of the O–Si–O three-body potential term affirms the existence of repulsion softening between oxygen atoms at short separations. The new model shows a good performance in simulating liquid, amorphous, and crystalline silica. By comparing the soft-core model and a similar model with the soft-core suppressed, we demonstrate that the topology reorganization within the local tetrahedral network and the O–O core-softening are two competitive mechanisms responsible for anomalous thermodynamic and kinetic behaviors observed in liquid and amorphous silica. The studied anomalies include the temperature of density maximum locus and anomalous diffusivity in liquid silica, and irreversible densification of amorphous silica. We show that the O–O core-softened interaction enhances the observed anomalies primarily through two mechanisms: facilitating the defect driven structural rearrangements of the silica tetrahedral network and modifying the tetrahedral ordering induced interactions toward multiple characteristic scales, the feature which underlies the thermodynamic anomalies.

  1. The 2005 MARTE Robotic Drilling Experiment in Río Tinto, Spain: Objectives, Approach, and Results of a Simulated Mission to Search for Life in the Martian Subsurface

    NASA Astrophysics Data System (ADS)

    Stoker, Carol R.; Cannon, Howard N.; Dunagan, Stephen E.; Lemke, Lawrence G.; Glass, Brian J.; Miller, David; Gomez-Elvira, Javier; Davis, Kiel; Zavaleta, Jhony; Winterholler, Alois; Roman, Matt; Rodriguez-Manfredi, Jose Antonio; Bonaccorsi, Rosalba; Bell, Mary Sue; Brown, Adrian; Battler, Melissa; Chen, Bin; Cooper, George; Davidson, Mark; Fernández-Remolar, David; Gonzales-Pastor, Eduardo; Heldmann, Jennifer L.; Martínez-Frías, Jesus; Parro, Victor; Prieto-Ballesteros, Olga; Sutter, Brad; Schuerger, Andrew C.; Schutt, John; Rull, Fernando

    2008-10-01

    The Mars Astrobiology Research and Technology Experiment (MARTE) simulated a robotic drilling mission to search for subsurface life on Mars. The drill site was on Peña de Hierro near the headwaters of the Río Tinto river (southwest Spain), on a deposit that includes massive sulfides and their gossanized remains that resemble some iron and sulfur minerals found on Mars. The mission used a fluidless, 10-axis, autonomous coring drill mounted on a simulated lander. Cores were faced; then instruments collected color wide-angle context images, color microscopic images, visible near infrared point spectra, and (lower resolution) visible-near infrared hyperspectral images. Cores were then stored for further processing or ejected. A borehole inspection system collected panoramic imaging and Raman spectra of borehole walls. Life detection was performed on full cores with an adenosine triphosphate luciferin-luciferase bioluminescence assay and on crushed core sections with SOLID2, an antibody array-based instrument. Two remotely located science teams analyzed the remote sensing data and chose subsample locations. In 30 days of operation, the drill penetrated to 6 m and collected 21 cores. Biosignatures were detected in 12 of 15 samples analyzed by SOLID2. Science teams correctly interpreted the nature of the deposits drilled as compared to the ground truth. This experiment shows that drilling to search for subsurface life on Mars is technically feasible and scientifically rewarding.

  2. The 2005 MARTE Robotic Drilling Experiment in Río Tinto, Spain: objectives, approach, and results of a simulated mission to search for life in the Martian subsurface.

    PubMed

    Stoker, Carol R; Cannon, Howard N; Dunagan, Stephen E; Lemke, Lawrence G; Glass, Brian J; Miller, David; Gomez-Elvira, Javier; Davis, Kiel; Zavaleta, Jhony; Winterholler, Alois; Roman, Matt; Rodriguez-Manfredi, Jose Antonio; Bonaccorsi, Rosalba; Bell, Mary Sue; Brown, Adrian; Battler, Melissa; Chen, Bin; Cooper, George; Davidson, Mark; Fernández-Remolar, David; Gonzales-Pastor, Eduardo; Heldmann, Jennifer L; Martínez-Frías, Jesus; Parro, Victor; Prieto-Ballesteros, Olga; Sutter, Brad; Schuerger, Andrew C; Schutt, John; Rull, Fernando

    2008-10-01

    The Mars Astrobiology Research and Technology Experiment (MARTE) simulated a robotic drilling mission to search for subsurface life on Mars. The drill site was on Peña de Hierro near the headwaters of the Río Tinto river (southwest Spain), on a deposit that includes massive sulfides and their gossanized remains that resemble some iron and sulfur minerals found on Mars. The mission used a fluidless, 10-axis, autonomous coring drill mounted on a simulated lander. Cores were faced; then instruments collected color wide-angle context images, color microscopic images, visible-near infrared point spectra, and (lower resolution) visible-near infrared hyperspectral images. Cores were then stored for further processing or ejected. A borehole inspection system collected panoramic imaging and Raman spectra of borehole walls. Life detection was performed on full cores with an adenosine triphosphate luciferin-luciferase bioluminescence assay and on crushed core sections with SOLID2, an antibody array-based instrument. Two remotely located science teams analyzed the remote sensing data and chose subsample locations. In 30 days of operation, the drill penetrated to 6 m and collected 21 cores. Biosignatures were detected in 12 of 15 samples analyzed by SOLID2. Science teams correctly interpreted the nature of the deposits drilled as compared to the ground truth. This experiment shows that drilling to search for subsurface life on Mars is technically feasible and scientifically rewarding.

  3. Assessing climate change impacts, benefits of mitigation, and uncertainties on major global forest regions under multiple socioeconomic and emissions scenarios

    Treesearch

    John B Kim; Erwan Monier; Brent Sohngen; G Stephen Pitts; Ray Drapek; James McFarland; Sara Ohrel; Jefferson Cole

    2016-01-01

    We analyze a set of simulations to assess the impact of climate change on global forests where MC2 dynamic global vegetation model (DGVM) was run with climate simulations from the MIT Integrated Global System Model-Community Atmosphere Model (IGSM-CAM) modeling framework. The core study relies on an ensemble of climate simulations under two emissions scenarios: a...

  4. Nuclear Engine System Simulation (NESS). Volume 1: Program user's guide

    NASA Astrophysics Data System (ADS)

    Pelaccio, Dennis G.; Scheil, Christine M.; Petrosky, Lyman J.

    1993-03-01

    A Nuclear Thermal Propulsion (NTP) engine system design analysis tool is required to support current and future Space Exploration Initiative (SEI) propulsion and vehicle design studies. Currently available NTP engine design models either date from the NERVA program of the 1960s and early 1970s and are highly specific to that design, or are modifications of current liquid propulsion system design models. To date, liquid-engine-based NTP design models lack integrated treatment of key NTP engine design features in the areas of reactor, shielding, multi-propellant capability, and multi-redundant pump feed fuel systems. Additionally, since the SEI effort is in its initial development stage, a robust, verified NTP design analysis tool would be of great use to the community. This effort developed an NTP engine system design analysis program (tool), known as the Nuclear Engine System Simulation (NESS) program, to support ongoing and future engine system and stage design studies. In this effort, Science Applications International Corporation's (SAIC) NTP version of the Expanded Liquid Engine Simulation (ELES) program was modified extensively to include Westinghouse Electric Corporation's near-term solid-core reactor design model. The ELES program has extensive capability for preliminary system design analysis of liquid rocket systems and vehicles. The program is modular in nature and versatile in modeling state-of-the-art component and system options. The Westinghouse reactor design model integrated into the NESS program is based on the near-term solid-core ENABLER NTP reactor design concept. The resulting program can accurately model (characterize) a complete near-term solid-core NTP engine system in great detail, for a number of design options, in an efficient manner.
The discussion that follows summarizes the overall analysis methodology, key assumptions, and capabilities associated with NESS, presents an example problem, and compares the results to related NTP engine system designs. Installation instructions and program disks are provided in Volume 2 of the NESS Program User's Guide.

  5. Nuclear Engine System Simulation (NESS). Volume 1: Program user's guide

    NASA Technical Reports Server (NTRS)

    Pelaccio, Dennis G.; Scheil, Christine M.; Petrosky, Lyman J.

    1993-01-01

    A Nuclear Thermal Propulsion (NTP) engine system design analysis tool is required to support current and future Space Exploration Initiative (SEI) propulsion and vehicle design studies. Currently available NTP engine design models either date from the NERVA program of the 1960s and early 1970s and are highly specific to that design, or are modifications of current liquid propulsion system design models. To date, liquid-engine-based NTP design models lack integrated treatment of key NTP engine design features in the areas of reactor, shielding, multi-propellant capability, and multi-redundant pump feed fuel systems. Additionally, since the SEI effort is in its initial development stage, a robust, verified NTP design analysis tool would be of great use to the community. This effort developed an NTP engine system design analysis program (tool), known as the Nuclear Engine System Simulation (NESS) program, to support ongoing and future engine system and stage design studies. In this effort, Science Applications International Corporation's (SAIC) NTP version of the Expanded Liquid Engine Simulation (ELES) program was modified extensively to include Westinghouse Electric Corporation's near-term solid-core reactor design model. The ELES program has extensive capability for preliminary system design analysis of liquid rocket systems and vehicles. The program is modular in nature and versatile in modeling state-of-the-art component and system options. The Westinghouse reactor design model integrated into the NESS program is based on the near-term solid-core ENABLER NTP reactor design concept. The resulting program can accurately model (characterize) a complete near-term solid-core NTP engine system in great detail, for a number of design options, in an efficient manner.
The discussion that follows summarizes the overall analysis methodology, key assumptions, and capabilities associated with NESS, presents an example problem, and compares the results to related NTP engine system designs. Installation instructions and program disks are provided in Volume 2 of the NESS Program User's Guide.

  6. pysimm: A Python Package for Simulation of Molecular Systems

    NASA Astrophysics Data System (ADS)

    Fortunato, Michael; Colina, Coray

    pysimm, short for python simulation interface for molecular modeling, is a Python package designed to facilitate the structure generation and simulation of molecular systems through convenient, programmatic access to object-oriented representations of molecular system data. This poster presents core features of pysimm and design philosophies that highlight a generalized methodology for incorporating third-party software packages through API interfaces. The integration with the LAMMPS simulation package is explained to demonstrate this methodology. pysimm began as a back-end Python library that powered a cloud-based application on nanohub.org for amorphous polymer simulation. The extension from a specific application library to a general-purpose simulation interface is explained. Additionally, this poster highlights the rapid development of new applications to construct polymer chains capable of controlling chain morphology, such as molecular weight distribution and monomer composition.

  7. Large-Scale Compute-Intensive Analysis via a Combined In-situ and Co-scheduling Workflow Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Messer, Bronson; Sewell, Christopher; Heitmann, Katrin

    2015-01-01

    Large-scale simulations can produce tens of terabytes of data per analysis cycle, complicating and limiting the efficiency of workflows. Traditionally, outputs are stored on the file system and analyzed in post-processing. With the rapidly increasing size and complexity of simulations, this approach faces an uncertain future. Trending techniques consist of performing the analysis in situ, utilizing the same resources as the simulation, and/or off-loading subsets of the data to a compute-intensive analysis system. We introduce an analysis framework developed for HACC, a cosmological N-body code, that uses both in situ and co-scheduling approaches for handling petabyte-size outputs. An initial in situ step is used to reduce the amount of data to be analyzed, and to separate out the data-intensive tasks handled off-line. The analysis routines are implemented using the PISTON/VTK-m framework, allowing a single implementation of an algorithm that simultaneously targets a variety of GPU, multi-core, and many-core architectures.

  8. Capabilities and Testing of the Fission Surface Power Primary Test Circuit (FSP-PTC)

    NASA Technical Reports Server (NTRS)

    Garber, Anne E.

    2007-01-01

    An actively pumped alkali metal flow circuit, designed and fabricated at the NASA Marshall Space Flight Center, is currently undergoing testing in the Early Flight Fission Test Facility (EFF-TF). Sodium potassium (NaK), which was used in the SNAP-10A fission reactor, was selected as the primary coolant. Basic circuit components include: simulated reactor core, NaK to gas heat exchanger, electromagnetic (EM) liquid metal pump, liquid metal flowmeter, load/drain reservoir, expansion reservoir, test section, and instrumentation. Operation of the circuit is based around a 37-pin partial-array core (pin and flow path dimensions are the same as those in a full core), designed to operate at 33 kWt. NaK flow rates of greater than 1 kg/sec may be achieved, depending upon the power applied to the EM pump. The heat exchanger provides for the removal of thermal energy from the circuit, simulating the presence of an energy conversion system. The presence of the test section increases the versatility of the circuit. A second liquid metal pump, an energy conversion system, and highly instrumented thermal simulators are all being considered for inclusion within the test section. This paper summarizes the capabilities and ongoing testing of the Fission Surface Power Primary Test Circuit (FSP-PTC).

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reipurth, Bo; Mikkola, Seppo, E-mail: reipurth@ifa.hawaii.edu, E-mail: Seppo.Mikkola@utu.fi

    Binaries in which both components are brown dwarfs (BDs) are being discovered at an increasing rate, and their properties may hold clues to their origin. We have carried out 200,000 N-body simulations of three identical stellar embryos with masses drawn from a Chabrier IMF and embedded in a molecular core. The bodies are initially non-hierarchical and undergo chaotic motions within the cloud core, while accreting using Bondi–Hoyle accretion. The coupling of dynamics and accretion often leads to one or two dominant bodies controlling the center of the cloud core, while banishing the other(s) to the lower-density outskirts, leading to stunted growth. Eventually each system transforms either to a bound hierarchical configuration or breaks apart into separate single and binary components. The orbital motion is followed for 100 Myr. In order to illustrate the 200,000 end-states of such dynamical evolution with accretion, we introduce the “triple diagnostic diagram,” which plots two dimensionless numbers against each other, representing the binary mass ratio and the mass ratio of the third body to the total system mass. Numerous free-floating BD binaries are formed in these simulations, and statistical properties are derived. The separation distribution function is in good correspondence with observations, showing a steep rise at close separations, peaking around 13 AU and declining more gently, reaching zero at separations greater than 200 AU. Unresolved BD triple systems may appear as wider BD binaries. Mass ratios are strongly peaked toward unity, as observed, but this is partially due to the initial assumptions. Eccentricities gradually increase toward higher values, due to the lack of viscous interactions in the simulations, which would both shrink the orbits and decrease their eccentricities. 
Most newborn triple systems are unstable and while there are 9209 ejected BD binaries at 1 Myr, corresponding to about 4% of the 200,000 simulations, this number has grown to 15,894 at 100 Myr (∼8%). The total binary fraction among free-floating BDs is 0.43, higher than indicated by current observations, which, however, are still incomplete. Also, the gradual breakup of higher-order multiples leads to many more singles, thus lowering the binary fraction. The main threat to newly born triple systems is internal instabilities, not external perturbations. At 1 Myr there are 1325 BD binaries still bound to a star, corresponding to 0.66% of the simulations, but only 253 (0.13%) are stable on timescales >100 Myr. These simulations indicate that dynamical interactions in newborn triple systems of stellar embryos embedded in and accreting from a cloud core naturally form a population of free-floating BD binaries, and this mechanism may constitute a significant pathway for the formation of BD binaries.
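    The Bondi–Hoyle accretion prescription named in the abstract has a standard closed form, dM/dt = 4·π·G²·M²·ρ / (c_s² + v²)^(3/2): the accretion rate grows with the square of the embryo's mass, which is why a dominant body controlling the dense core center outgrows a banished companion. A minimal sketch (SI units; the argument values are illustrative, not taken from the paper):

```python
import math

G = 6.674e-11  # gravitational constant [m^3 kg^-1 s^-2]

def bondi_hoyle_rate(M, rho, v, c_s):
    """Bondi-Hoyle(-Lyttleton) accretion rate [kg/s] for a body of mass M [kg]
    moving at speed v [m/s] through gas of density rho [kg/m^3] and sound
    speed c_s [m/s]."""
    return 4.0 * math.pi * G**2 * M**2 * rho / (c_s**2 + v**2) ** 1.5

# Illustrative call: a ~0.05 solar-mass embryo in a dense, cold core.
rate = bondi_hoyle_rate(M=1e29, rho=1e-16, v=200.0, c_s=200.0)
```

Note the M² dependence: doubling the embryo mass quadruples its accretion rate, while being ejected to the low-density outskirts (smaller rho, larger v) suppresses it, producing the stunted growth described above.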

  10. Simulation and Modeling of Positrons and Electrons in advanced Time-of-Flight Positron Annihilation Induced Auger Electron Spectroscopy Systems

    NASA Astrophysics Data System (ADS)

    Joglekar, Prasad; Shastry, Karthik; Satyal, Suman; Weiss, Alexander

    2011-10-01

    Time-of-Flight Positron Annihilation Induced Auger Electron Spectroscopy (TOF-PAES) is a highly surface-selective analytical technique in which elemental identification is accomplished through a measurement of the flight-time distributions of Auger electrons resulting from the annihilation of core electrons by positrons. The SIMION charged-particle optics simulation software was used to model the trajectories of both the incident positrons and the outgoing electrons in our existing TOF-PAES system, as well as in a new system currently under construction in our laboratory. The implications of these simulations for the instrument design and performance are discussed.

  11. OpenDanubia - An integrated, modular simulation system to support regional water resource management

    NASA Astrophysics Data System (ADS)

    Muerth, M.; Waldmann, D.; Heinzeller, C.; Hennicker, R.; Mauser, W.

    2012-04-01

    The completed multi-disciplinary research project GLOWA-Danube developed a regional-scale, integrated modeling system, which was successfully applied to the 77,000 km2 Upper Danube basin to investigate the impact of Global Change on both the natural and anthropogenic water cycle. At the end of the last project phase, the integrated modeling system was transferred into the open source project OpenDanubia, which now provides both the core system and all major model components to the general public. First, this enables decision makers from government, business, and management to use OpenDanubia as a tool for proactive management of water resources in the context of global change. Second, the model framework supporting integrated simulations, and all simulation models developed for OpenDanubia within GLOWA-Danube, remain available for future developments and research questions. OpenDanubia allows for the investigation of water-related scenarios considering different ecological and economic aspects, supporting both scientists and policy makers in designing policies for sustainable environmental management. OpenDanubia is designed as a framework-based, distributed system. The model system couples spatially distributed physical and socio-economic processes at run-time, taking into account their mutual influence. To simulate the potential future impacts of Global Change on agriculture, industrial production, water supply, households, and tourism businesses, so-called deep actor models are implemented in OpenDanubia. All important water-related fluxes and storages in the natural environment are implemented in OpenDanubia as spatially explicit, process-based modules. This includes the land-surface water and energy balance, dynamic plant water uptake, groundwater recharge and flow, as well as river routing and reservoirs. 
Although the complete system is relatively demanding in terms of data and hardware requirements, the modular structure and the generic core system (Core Framework, Actor Framework) allow application in new regions and the selection of a reduced set of modules for simulation. As part of the Open Source Initiative in GLOWA-Danube (opendanubia.glowa-danube.de), comprehensive documentation for system installation was created, and the program code of both the framework and all major components is licensed under the GNU General Public License. In addition, helper programs and scripts necessary for the operation and for processing input and result data sets are provided.

  12. A New Capability for Nuclear Thermal Propulsion Design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amiri, Benjamin W.; Nuclear and Radiological Engineering Department, University of Florida, Gainesville, FL 32611; Kapernick, Richard J.

    2007-01-30

    This paper describes a new capability for Nuclear Thermal Propulsion (NTP) design that has been developed, and presents the results of some analyses performed with this design tool. The purpose of the tool is to design to specified mission and material limits, while maximizing system thrust to weight. The head end of the design tool utilizes the ROCket Engine Transient Simulation (ROCETS) code to generate a system design and system design requirements as inputs to the core analysis. ROCETS is a modular system-level code which has been used extensively in the liquid rocket engine industry for many years. The core design tool performs high-fidelity reactor core nuclear and thermal-hydraulic design analysis. At the heart of this process are two codes, TMSS-NTP and NTPgen, which together greatly automate the analysis, providing the capability to rapidly produce designs that meet all specified requirements while minimizing mass. A Perl-based command script, called CORE DESIGNER, controls the execution of these two codes and checks for convergence throughout the process. TMSS-NTP is executed first, to produce a suite of core designs that meet the specified reactor core mechanical, thermal-hydraulic, and structural requirements. The suite of designs consists of a set of core layouts and, for each core layout, specific designs that span a range of core fuel volumes. NTPgen generates MCNPX models for each of the core designs from TMSS-NTP. Iterative analyses are performed in NTPgen until a reactor design (fuel volume) is identified for each core layout that meets cold and hot operation reactivity requirements and that is zoned to meet a radial core power distribution requirement.

  13. Ion Structure Near a Core-Shell Dielectric Nanoparticle

    NASA Astrophysics Data System (ADS)

    Ma, Manman; Gan, Zecheng; Xu, Zhenli

    2017-02-01

    A generalized image charge formulation is proposed for the Green's function of a core-shell dielectric nanoparticle for which theoretical and simulation investigations are rarely reported due to the difficulty of resolving the dielectric heterogeneity. Based on the formulation, an efficient and accurate algorithm is developed for calculating electrostatic polarization charges of mobile ions, allowing us to study related physical systems using the Monte Carlo algorithm. The computer simulations show that a fine-tuning of the shell thickness or the ion-interface correlation strength can greatly alter electric double-layer structures and capacitances, owing to the complicated interplay between dielectric boundary effects and ion-interface correlations.

  14. Development of IR imaging system simulator

    NASA Astrophysics Data System (ADS)

    Xiang, Xinglang; He, Guojing; Dong, Weike; Dong, Lu

    2017-02-01

    To overcome the disadvantages of traditional semi-physical simulation and injection simulation equipment in the performance evaluation of the infrared imaging system (IRIS), a low-cost and reconfigurable IRIS simulator, which can simulate the realistic physical process of infrared imaging, is proposed to test and evaluate the performance of the IRIS. According to the theoretical simulation framework and the theoretical models of the IRIS, the architecture of the IRIS simulator is constructed. The 3D scenes are generated and the infrared atmospheric transmission effects are simulated in real time on the computer using OGRE technology. The physical effects of the IRIS are classified as signal response characteristics, modulation transfer characteristics, and noise characteristics, and they are simulated in real time on a single-board signal processing platform built around an FPGA core processor, using high-speed parallel computation methods.
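    The three physical effects listed (signal response, modulation transfer, noise) can be sketched as a toy image-chain model. The Gaussian MTF, gain/offset values, and noise level below are assumptions for illustration, not the FPGA implementation described in the record:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_iris(scene, gain=1.0, offset=0.0, mtf_sigma=1.5, noise_std=0.01):
    """Toy IR-imaging chain: modulation transfer (Gaussian MTF applied in the
    frequency domain), detector signal response (gain/offset), additive noise."""
    ny, nx = scene.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    # Gaussian MTF: transform of a Gaussian blur with std mtf_sigma pixels.
    mtf = np.exp(-2.0 * (np.pi * mtf_sigma) ** 2 * (fx**2 + fy**2))
    blurred = np.fft.ifft2(np.fft.fft2(scene) * mtf).real
    return gain * blurred + offset + rng.normal(0.0, noise_std, scene.shape)

scene = np.zeros((64, 64))
scene[28:36, 28:36] = 1.0          # hot square target on a cold background
frame = simulate_iris(scene)        # one simulated detector frame
```

Each stage maps onto one of the simulator's characteristic blocks: the MTF multiply models optics and detector blur, gain/offset models the signal response, and the final term models temporal noise.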

  15. SYNTHETIC OBSERVATIONS OF MAGNETIC FIELDS IN PROTOSTELLAR CORES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Joyce W. Y.; Hull, Charles L. H.; Offner, Stella S. R., E-mail: chat.hull@cfa.harvard.edu, E-mail: jwyl1g12@soton.ac.uk

    The role of magnetic fields in the early stages of star formation is not well constrained. In order to discriminate between different star formation models, we analyze 3D magnetohydrodynamic simulations of low-mass cores and explore the correlation between magnetic field orientation and outflow orientation over time. We produce synthetic observations of dust polarization at resolutions comparable to millimeter-wave dust polarization maps observed by the Combined Array for Research in Millimeter-wave Astronomy and compare these with 2D visualizations of projected magnetic field and column density. Cumulative distribution functions of the projected angle between the magnetic field and outflow show different degrees of alignment in simulations with differing mass-to-flux ratios. The distribution function for the less magnetized core agrees with observations finding random alignment between outflow and field orientations, while the more magnetized core exhibits stronger alignment. We find that fractional polarization increases when the system is viewed such that the magnetic field is close to the plane of the sky, and the values of fractional polarization are consistent with observational measurements. The simulation outflow, which reflects the underlying angular momentum of the accreted gas, changes direction significantly over the first ∼0.1 Myr of evolution. This movement could lead to the observed random alignment between outflows and the magnetic fields in protostellar cores.

  16. An intercomparison of GCM and RCM dynamical downscaling for characterizing the hydroclimatology of California and Nevada

    NASA Astrophysics Data System (ADS)

    Xu, Z.; Rhoades, A.; Johansen, H.; Ullrich, P. A.; Collins, W. D.

    2017-12-01

    Dynamical downscaling is widely used to properly characterize the regional surface heterogeneities that shape the local hydroclimatology. However, the factors in dynamical downscaling, including the refinement of model horizontal resolution, the large-scale forcing datasets, and the dynamical core, have not been fully evaluated. Two cutting-edge global-to-regional downscaling methods are used to assess these factors, specifically the variable-resolution Community Earth System Model (VR-CESM) and the Weather Research & Forecasting (WRF) regional climate model, under different horizontal resolutions (28, 14, and 7 km). Two groups of WRF simulations are driven by either the NCEP reanalysis dataset (WRF_NCEP) or VR-CESM outputs (WRF_VRCESM) to evaluate the effects of the large-scale forcing datasets. The impacts of the dynamical core are assessed by comparing the VR-CESM simulations to the coupled WRF_VRCESM simulations with the same physical parameterizations and similar grid domains. The simulated hydroclimatology (i.e., total precipitation, snow cover, snow water equivalent (SWE), and surface temperature) is compared with the reference datasets. The large-scale forcing datasets are critical to the WRF simulations for accurately simulating total precipitation, SWE, and snow cover, but not surface temperature. Both the WRF and VR-CESM results highlight that no significant benefit in the simulated hydroclimatology is found from refining horizontal resolution from 28 to 7 km. Simulated surface temperature is sensitive to the choice of dynamical core: WRF generally simulates higher temperatures than VR-CESM, alleviating the systematic cold bias of DJF temperatures over the California mountain region but overestimating the JJA temperature in California's Central Valley.

  17. High-performance biocomputing for simulating the spread of contagion over large contact networks

    PubMed Central

    2012-01-01

    Background Many important biological problems can be modeled as contagion diffusion processes over interaction networks. This article shows how the EpiSimdemics interaction-based simulation system can be applied to the general contagion diffusion problem. Two specific problems, computational epidemiology and human immune system modeling, are given as examples. We then show how the graphics processing unit (GPU) within each compute node of a cluster can be used effectively to speed up the execution of these types of problems. Results We show that a single GPU can accelerate the EpiSimdemics computation kernel by a factor of 6 and the entire application by a factor of 3.3, compared to the execution time on a single CPU core. When 8 CPU cores and 2 GPU devices are utilized, the speed-up of the computational kernel increases to 9.5. When combined with effective techniques for inter-node communication, excellent scalability can be achieved without significant loss of accuracy in the results. Conclusions We show that interaction-based simulation systems can be used to model disparate and highly relevant problems in biology. We also show that offloading some of the work to GPUs in distributed interaction-based simulations can be an effective way to achieve increased intra-node efficiency. PMID:22537298
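    The reported speedups are internally consistent with Amdahl's law; as a quick, hedged check (the kernel-time fraction below is derived from the abstract's figures, not stated in it):

```python
# Amdahl's-law consistency check for the reported GPU speedups: overall
# speedup S = 1 / ((1 - f) + f / s), where f is the fraction of runtime
# spent in the accelerated kernel and s is the kernel speedup.
def kernel_fraction(overall_speedup, kernel_speedup):
    # Solve 1/S = (1 - f) + f/s for f.
    return (1.0 - 1.0 / overall_speedup) / (1.0 - 1.0 / kernel_speedup)

# With the abstract's figures (kernel 6x, whole application 3.3x), the
# kernel must account for roughly 84% of the single-core runtime.
f = kernel_fraction(3.3, 6.0)   # ≈ 0.836
```

    The remaining ~16% of unaccelerated runtime also bounds the achievable overall speedup at about 6.1x, however fast the kernel becomes.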

  18. Succession of Hydrocarbon Degradation and Microbial Diversity during a Simulated Petroleum Seepage in Caspian Sea Sediments

    NASA Astrophysics Data System (ADS)

    Mishra, S.; Stagars, M.; Wefers, P.; Schmidt, M.; Knittel, K.; Krueger, M.; Leifer, I.; Treude, T.

    2016-02-01

    Microbial degradation of petroleum was investigated in intact sediment cores from the Caspian Sea during a simulated petroleum seepage using a sediment-oil-flow-through (SOFT) system. Over the course of the SOFT experiment (190 days), distinct redox zones established and evolved in the sediment core. Methanogenesis and sulfate reduction were identified as important processes in the anaerobic degradation of hydrocarbons. C1 to C6 n-alkanes were completely exhausted in the sulfate-reducing zone, and some higher alkanes decreased during the upward migration of petroleum. A diversity of sulfate-reducing bacteria was identified by 16S rRNA phylogenetic studies, some of which are associated with marine seeps and petroleum degradation. The δ13C signal of the produced methane decreased from -33.7‰ to -49.5‰, indicating crude oil degradation by methanogenesis, which was supported by enrichment culturing of methanogens with petroleum hydrocarbons and the presence of methanogenic archaea. The SOFT system is, to the best of our knowledge, the first system that simulates oil-seep-like conditions and enables live monitoring of biogeochemical changes within a sediment core during petroleum seepage. During our presentation we will compare the Caspian Sea data with other sediments we studied using the SOFT system from sites such as Santa Barbara (Pacific Ocean), the North Alex Mud Volcano (Mediterranean Sea), and Eckernfoerde Bay (Baltic Sea). This research was funded by the Deutsche Forschungsgemeinschaft (SPP 1319) and DEA Deutsche Erdoel AG. Further support came from the Helmholtz and Max Planck Gesellschaft.

  19. Design and evaluation of hydrophobic coated buoyant core as floating drug delivery system for sustained release of cisapride

    PubMed Central

    Jacob, Shery; Nair, Anroop B; Patil, Pandurang N

    2010-01-01

    An inert hydrophobic buoyant coated core was developed as a floating drug delivery system (FDDS) for sustained release of cisapride using direct compression technology. The core contained low-density, porous ethyl cellulose, which was coated with an impermeable, insoluble hydrophobic coating polymer such as rosin. It was further seal-coated with low-viscosity hydroxypropyl methyl cellulose (HPMC E15) to minimize moisture permeation and improve adhesion with the outer drug layer. It was found that the stable buoyant core was sufficient to float the tablet for more than 8 h without the aid of sodium bicarbonate and citric acid. Sustained release of cisapride was achieved with HPMC K4M in the outer drug layer. The floating lag time required for this novel FDDS was found to be zero; however, it is likely that the porosity or density of the core is critical for the floatability of these tablets. The in vitro release pattern of these tablets in simulated gastric fluid showed constant, controlled release over a prolonged time. It can be concluded that the hydrophobic coated buoyant core could be used as an FDDS for gastroretentive delivery of cisapride or other suitable drugs. PMID:24825997

  20. Terascale Cluster for Advanced Turbulent Combustion Simulations

    DTIC Science & Technology

    2008-07-25

    the system We have given the name CATS (for Combustion And Turbulence Simulator) to the terascale system that was obtained through this grant. CATS ...InfiniBand interconnect. CATS includes an interactive login node and a file server, each holding in excess of 1 terabyte of file storage. The 35 active...compute nodes of CATS enable us to run up to 140-core parallel MPI batch jobs; one node is reserved to run the scheduler. CATS is operated and

  1. Full Core TREAT Kinetics Demonstration Using Rattlesnake/BISON Coupling Within MAMMOTH

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ortensi, Javier; DeHart, Mark D.; Gleicher, Frederick N.

    2015-08-01

    This report summarizes key aspects of research in evaluating modeling needs for TREAT transient simulation. Using a measured TREAT critical configuration and a transient for a small, experimentally simplified core, Rattlesnake and MAMMOTH simulations are performed, building from simple infinite media up to a full core model. Cross section processing methods are evaluated, various homogenization approaches are assessed, and the neutronic behavior of the core is studied to determine key modeling aspects. The simulation of the minimum critical core with the diffusion solver shows very good agreement with the reference Monte Carlo simulation and the experiment. The full core transient simulation with thermal feedback shows a significantly lower power peak compared to the documented experimental measurement, which is not unexpected in the early stages of model development.

  2. Nonlinear combining and compression in multicore fibers

    DOE PAGES

    Chekhovskoy, I. S.; Rubenchik, A. M.; Shtyrina, O. V.; ...

    2016-10-25

    In this paper, we demonstrate numerically light-pulse combining and pulse compression using wave-collapse (self-focusing) energy-localization dynamics in a continuous-discrete nonlinear system, as implemented in a multicore fiber (MCF) using one-dimensional (1D) and 2D core distribution designs. Large-scale numerical simulations were performed to determine the conditions of the most efficient coherent combining and compression of pulses injected into the considered MCFs. We demonstrate the possibility of combining in a single core 90% of the total energy of pulses initially injected into all cores of a 7-core MCF with a hexagonal lattice. Finally, a pulse compression factor of about 720 can be obtained with a 19-core ring MCF.

  3. Molecular Simulation Studies of Covalently and Ionically Grafted Nanoparticles

    NASA Astrophysics Data System (ADS)

    Hong, Bingbing

    Solvent-free covalently- or ionically-grafted nanoparticles (CGNs and IGNs) are a new class of organic-inorganic hybrid composite materials exhibiting fluid-like behavior around room temperature. With structures similar to prior systems, e.g., nanocomposites, neutral or charged colloids, ionic liquids, etc., CGNs and IGNs inherit the functionality of inorganic nanoparticles, the facile processibility of polymers, as well as conductivity and nonvolatility from their constituent materials. Although extensive prior experimental research has covered synthesis and measurements of thermal and dynamic properties, little molecular-level understanding of these new materials has been achieved, owing to the lack of simulation work in this new area. Atomistic and coarse-grained molecular dynamics simulations have been performed in this thesis to investigate the thermodynamics, structure, and dynamics of these systems and to seek predictive methods for their properties. Starting from poly(ethylene oxide) (PEO) oligomer melts, we established atomistic models based on united-atom representations of methylene. The Green-Kubo and Einstein-Helfand formulas were used to calculate the transport properties. The simulations generate densities, viscosities, and diffusivities in good agreement with experimental data. The chain-length dependence of the transport properties suggests that neither Rouse nor reptation models are applicable in the short-chain regime investigated. Coupled with thermodynamic integration methods, the models give good predictions of pressure-composition-density relations for CO2 + PEO oligomers. The effects of water on the Henry's constant of CO2 in PEO have also been investigated. The dependence of the calculated Henry's constants on the weight percentage of water falls on a temperature-dependent master curve, irrespective of PEO chain length. 
CGNs are modeled by the inclusion of solid-sphere nanoparticles into the atomistic oligomers. The calculated viscosities from Green-Kubo relationships and temperature extrapolation are of the same order of magnitude as experimental values, but show a smaller activation energy relative to real CGN systems. Grafted systems have higher viscosities, smaller diffusion coefficients, and slower chain dynamics than their ungrafted counterparts (nanocomposites) at high temperatures. At lower temperatures, grafted systems exhibit faster dynamics for both nanoparticles and chains relative to ungrafted systems, because of lower aggregation of nanoparticles and enhanced correlations between nanoparticles and chains. This agrees with the experimental observation that the new materials have liquid-like behavior in the absence of a solvent. To lower the simulated temperatures into the experimental range, we established a coarse-grained CGN model by matching structural distribution functions to atomistic simulation data. In contrast with linear polymer systems, for which coarse-graining always accelerates dynamics, coarse-graining of grafted nanoparticles can either accelerate or slow down the core motions, depending on the length of the grafted chains. This can be qualitatively predicted by a simple transition-state theory. Atomistic models similar to those for CGNs were developed for IGNs, with ammonium counterions described with explicit hydrogens; these were in turn compared with "generic" coarse-grained IGNs. The elimination of chemical details in the coarse-grained models does not qualitatively change the radial distribution functions and diffusion of atomistic IGNs, but saves considerable simulation resources and makes simulations near room temperature affordable. The chain counterions in both atomistic and coarse-grained models are mobile, moving from site to site and from nanoparticle to nanoparticle. 
At the same temperature and the same core volume fractions, the nanoparticle diffusivities in coarse-grained IGNs are slower by a factor of ten than the cores of CGNs. The coarse-grained IGN models are then used to investigate the system dynamics through analysis of the dependence of the transport properties (self-diffusion coefficients, viscosities, and conductivities) on temperature and structural parameters. Further, migration kinetics of oligomeric counterions is analyzed in a manner analogous to unimer exchange between micellar aggregates. The counterion migrations follow the "double-core" mechanism and are kinetically controlled by neighboring-core collisions. (Abstract shortened by UMI.)
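    The Green-Kubo route to viscosity used in this record integrates the autocorrelation of an off-diagonal stress (pressure) tensor component, η = (V / k_BT) ∫₀^∞ ⟨P_xy(0)P_xy(t)⟩ dt. A minimal sketch of that estimator, using a synthetic correlated signal in place of real simulation output (all names and numbers below are illustrative, not from the thesis):

```python
import numpy as np

def green_kubo_viscosity(pxy, dt, volume, kT):
    """Shear viscosity via the Green-Kubo relation:
    eta = V/(kB*T) * integral_0^inf <P_xy(0) P_xy(t)> dt."""
    n = len(pxy)
    # Unnormalized stress autocorrelation <P_xy(0) P_xy(t)>, averaged over
    # time origins; np.correlate(..., 'full')[n-1:] gives lags 0..n-1.
    acf = np.correlate(pxy, pxy, mode="full")[n - 1:] / np.arange(n, 0, -1)
    # Rectangle-rule time integral of the autocorrelation.
    return volume / kT * acf.sum() * dt

# Stand-in for the P_xy time series a simulation would produce: a synthetic
# exponentially correlated (AR(1)) signal in reduced units.
rng = np.random.default_rng(0)
pxy = np.zeros(20000)
for i in range(1, len(pxy)):
    pxy[i] = 0.99 * pxy[i - 1] + rng.normal()
eta = green_kubo_viscosity(pxy, dt=1.0, volume=1.0, kT=1.0)
```

    In practice the integral is truncated at a lag where the autocorrelation has decayed, since the long-lag tail is dominated by statistical noise.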

  4. Computational Aerodynamic Simulations of an 840 ft/sec Tip Speed Advanced Ducted Propulsor Fan System Model for Acoustic Methods Assessment and Development

    NASA Technical Reports Server (NTRS)

    Tweedt, Daniel L.

    2014-01-01

    Computational aerodynamic simulations of an 840 ft/sec tip speed, Advanced Ducted Propulsor fan system were performed at five different operating points on the fan operating line, in order to provide detailed internal flow field information for use with fan acoustic prediction methods presently being developed, assessed, and validated. The fan system is a sub-scale, low-noise research fan/nacelle model that has undergone extensive experimental testing in the 9- by 15-Foot Low Speed Wind Tunnel at the NASA Glenn Research Center, resulting in quality, detailed aerodynamic and acoustic measurement data. Details of the fan geometry, the computational fluid dynamics methods, the computational grids, and various computational parameters relevant to the numerical simulations are discussed. Flow field results for three of the five operating conditions simulated are presented in order to provide a representative look at the computed solutions. Each of the five fan aerodynamic simulations involved the entire fan system, excluding a long core duct section downstream of the core inlet guide vane. As a result, only fan rotational speed and system bypass ratio, set by specifying static pressure downstream of the core inlet guide vane row, were adjusted in order to set the fan operating point, leading to operating points that lie on a fan operating line and making mass flow rate a fully dependent parameter. The resulting mass flow rates are in good agreement with measurement values. The computed blade row flow fields for all five fan operating points are, in general, aerodynamically healthy. Rotor blade and fan exit guide vane flow characteristics are good, including incidence and deviation angles, chordwise static pressure distributions, blade surface boundary layers, secondary flow structures, and blade wakes. Examination of the computed flow fields reveals no excessive boundary layer separations or related secondary-flow problems. 
A few spanwise comparisons between computational and measurement data in the bypass duct show that they are in good agreement, thus providing a partial validation of the computational results.

  5. Hot super-Earths and giant planet cores from different migration histories

    NASA Astrophysics Data System (ADS)

    Cossou, Christophe; Raymond, Sean N.; Hersant, Franck; Pierens, Arnaud

    2014-09-01

    Planetary embryos embedded in gaseous protoplanetary disks undergo Type I orbital migration. Migration can be inward or outward depending on the local disk properties but, in general, only planets more massive than several M⊕ can migrate outward. Here we propose that an embryo's migration history determines whether it becomes a hot super-Earth or the core of a giant planet. Systems of hot super-Earths (or mini-Neptunes) form when embryos migrate inward and pile up at the inner edge of the disk. Giant planet cores form when inward-migrating embryos become massive enough to switch direction and migrate outward. We present simulations of this process using a modified N-body code, starting from a swarm of planetary embryos. Systems of hot super-Earths form in resonant chains with the innermost planet at or interior to the disk inner edge. Resonant chains are disrupted by late dynamical instabilities triggered by the dispersal of the gaseous disk. Giant planet cores migrate outward toward zero-torque zones, which move inward and eventually disappear as the disk disperses. Giant planet cores migrate inward with these zones and are stranded at ~1-5 AU. Our model reproduces several properties of the observed extra-solar planet populations. The frequency of giant planet cores increases strongly when the mass in solids is increased, consistent with the observed giant exoplanet-stellar metallicity correlation. The frequency of hot super-Earths is not a function of stellar metallicity, also in agreement with observations. Our simulations can reproduce the broad characteristics of the observed super-Earth population.

  6. Hydrocarbon Degradation in Caspian Sea Sediment Cores Subjected to Simulated Petroleum Seepage in a Newly Designed Sediment-Oil-Flow-Through System.

    PubMed

    Mishra, Sonakshi; Wefers, Peggy; Schmidt, Mark; Knittel, Katrin; Krüger, Martin; Stagars, Marion H; Treude, Tina

    2017-01-01

    The microbial community response to petroleum seepage was investigated in a whole round sediment core (16 cm length) collected near natural hydrocarbon seepage structures in the Caspian Sea, using a newly developed Sediment-Oil-Flow-Through (SOFT) system. Distinct redox zones established and migrated vertically in the core during the 190-day-long simulated petroleum seepage. Methanogenic petroleum degradation was indicated by an increase in methane concentration from 8 μM in an untreated core to 2300 μM in the lower sulfate-free zone of the SOFT core at the end of the experiment, accompanied by a respective decrease in the δ13C signal of methane from -33.7 to -49.5‰. The involvement of methanogens in petroleum degradation was further confirmed by methane production in enrichment cultures from SOFT sediment after the addition of hexadecane, methylnaphthalene, toluene, and ethylbenzene. Petroleum degradation coupled to sulfate reduction was indicated by the increase of integrated sulfate reduction rates from 2.8 mmol SO42- m-2 day-1 in untreated cores to 5.7 mmol SO42- m-2 day-1 in the SOFT core at the end of the experiment, accompanied by a respective accumulation of sulfide from 30 to 447 μM. Volatile hydrocarbons (C2-C6 n-alkanes) passed through the methanogenic zone mostly unchanged and were depleted within the sulfate-reducing zone. The amount of heavier n-alkanes (C10-C38) decreased step-wise toward the top of the sediment core, and a preferential degradation of shorter (<C30) over longer (>C30) n-alkanes was seen during the seepage. This study illustrates, to the best of our knowledge, for the first time the development of methanogenic petroleum degradation and the succession of benthic microbial processes during petroleum passage in a whole round sediment core.

  7. Hydrocarbon Degradation in Caspian Sea Sediment Cores Subjected to Simulated Petroleum Seepage in a Newly Designed Sediment-Oil-Flow-Through System

    PubMed Central

    Mishra, Sonakshi; Wefers, Peggy; Schmidt, Mark; Knittel, Katrin; Krüger, Martin; Stagars, Marion H.; Treude, Tina

    2017-01-01

    The microbial community response to petroleum seepage was investigated in a whole round sediment core (16 cm length) collected near natural hydrocarbon seepage structures in the Caspian Sea, using a newly developed Sediment-Oil-Flow-Through (SOFT) system. Distinct redox zones established and migrated vertically in the core during the 190-day-long simulated petroleum seepage. Methanogenic petroleum degradation was indicated by an increase in methane concentration from 8 μM in an untreated core to 2300 μM in the lower sulfate-free zone of the SOFT core at the end of the experiment, accompanied by a respective decrease in the δ13C signal of methane from -33.7 to -49.5‰. The involvement of methanogens in petroleum degradation was further confirmed by methane production in enrichment cultures from SOFT sediment after the addition of hexadecane, methylnaphthalene, toluene, and ethylbenzene. Petroleum degradation coupled to sulfate reduction was indicated by the increase of integrated sulfate reduction rates from 2.8 mmol SO42- m-2 day-1 in untreated cores to 5.7 mmol SO42- m-2 day-1 in the SOFT core at the end of the experiment, accompanied by a respective accumulation of sulfide from 30 to 447 μM. Volatile hydrocarbons (C2-C6 n-alkanes) passed through the methanogenic zone mostly unchanged and were depleted within the sulfate-reducing zone. The amount of heavier n-alkanes (C10-C38) decreased step-wise toward the top of the sediment core, and a preferential degradation of shorter (<C30) over longer (>C30) n-alkanes was seen during the seepage. This study illustrates, to the best of our knowledge, for the first time the development of methanogenic petroleum degradation and the succession of benthic microbial processes during petroleum passage in a whole round sediment core. PMID:28503172

  8. Role Of Impurities On Deformation Of HCP Crystal: A Multi-Scale Approach

    NASA Astrophysics Data System (ADS)

    Bhatia, Mehul Anoopkumar

    Commercially pure (CP) and extra-low-interstitial (ELI) grade Ti alloys offer excellent corrosion resistance, light weight, and formability, making them attractive materials for expanded use in transportation and medical applications. However, the strength and toughness of CP titanium are affected by relatively small variations in impurity/solute content (IC), e.g., O, Al, and V. The strengthening occurs because the solute either increases the critical stress required for the prismatic slip systems ({10-10}) or activates other slip systems (basal (0001), pyramidal {10-11}). In particular, solute additions such as O can effectively strengthen the alloy, but with an attendant loss in ductility caused by a change in slip behavior from wavy (cross slip) to planar. In order to understand the underlying mechanism of strengthening by solutes, it is important to understand the atomic-scale mechanism. This dissertation aims to address this knowledge gap through a synergistic combination of density functional theory (DFT) and molecular dynamics. Because of the long-range strain fields of dislocations and the periodicity of DFT simulation cells, it is difficult to apply ab initio simulations to study the dislocation core structure. To alleviate this issue, we developed a multiscale quantum mechanics/molecular mechanics (QM/MM) approach to study the dislocation core. We use the developed QM/MM method to study pipe diffusion along a prismatic edge dislocation core. Complementary to the atomistic simulations, the semidiscrete variational Peierls-Nabarro (SVPN) model was also used to analyze the dislocation core structure and mobility. The chemical interaction between the solute/impurity and the dislocation core is captured by the so-called generalized stacking fault energy (GSFE) surface, which was determined from DFT-VASP calculations. 
By taking the chemical interaction into consideration, the SVPN model can predict the dislocation core structure and mobility in the presence and absence of the solute/impurity and thus reveal the effect of the impurity/solute on the softening/hardening behavior of alpha-Ti. Finally, to study the interaction of the dislocation core with other planar defects such as grain boundaries (GBs), we develop an automated method to theoretically generate GBs in HCP-type materials.

  9. Formation of massive, dense cores by cloud-cloud collisions

    NASA Astrophysics Data System (ADS)

    Takahira, Ken; Shima, Kazuhiro; Habe, Asao; Tasker, Elizabeth J.

    2018-03-01

    We performed sub-parsec (~0.014 pc) scale simulations of cloud-cloud collisions of two idealized turbulent molecular clouds (MCs) with different masses in the range of (0.76-2.67) × 10⁴ M⊙ and with collision speeds of 5-30 km s⁻¹. These parameters are larger than those in Takahira, Tasker, and Habe (2014, ApJ, 792, 63), in which the colliding system showed a partial gaseous arc morphology that supports the NANTEN observations of objects indicated to be colliding MCs. Gas clumps with density greater than 10⁻²⁰ g cm⁻³ were identified as pre-stellar cores and tracked through the simulation to investigate the effects of the masses of the colliding clouds and the collision speeds on the resulting core population. Our results demonstrate that the properties of the smaller cloud are more important for the outcome of cloud-cloud collisions. The mass function of the formed cores can be approximated by a power-law relation with an index γ = -1.6 in slower cloud-cloud collisions (v ~ 5 km s⁻¹), in good agreement with observations of MCs. A faster relative speed increases the number of cores formed in the early stage of the collision and shortens the gas accretion phase of cores in the shocked region, leading to the suppression of core growth. A bending point appears in the high-mass part of the core mass function, and the bending-point mass decreases with increasing collision speed for the same combination of colliding clouds. The part of the core mass function above the bending-point mass can be approximated by a power law with γ = -2 to -3, similar to the power-law index of the massive part of the observed stellar initial mass function. We discuss the implications of our results for massive-star formation in our Galaxy.
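    The power-law core mass function dN/dM ∝ M^γ reported here can be characterized with a standard maximum-likelihood slope estimator; a hedged sketch using synthetic masses drawn at the abstract's slow-collision index γ = -1.6 (the sampling and estimator are textbook methods, not the authors' actual clump-finding pipeline):

```python
import numpy as np

def powerlaw_index_mle(masses, m_min):
    """Maximum-likelihood estimate of gamma for dN/dM ∝ M^gamma above m_min
    (continuous power law, Clauset-style: |gamma| = 1 + n / sum(ln(m/m_min)))."""
    m = np.asarray(masses, dtype=float)
    m = m[m >= m_min]
    return -(1.0 + len(m) / np.log(m / m_min).sum())

# Draw synthetic core masses from dN/dM ∝ M^-1.6 by inverse-transform
# sampling: F(m) = 1 - (m/m_min)^(gamma+1), so m = m_min*(1-u)^(1/(gamma+1)).
rng = np.random.default_rng(1)
gamma, m_min = -1.6, 1.0
u = rng.random(50000)
masses = m_min * (1.0 - u) ** (1.0 / (gamma + 1.0))  # valid for gamma < -1
estimate = powerlaw_index_mle(masses, m_min)         # ≈ -1.6
```

    The same estimator applied only above the bending-point mass would recover the steeper γ = -2 to -3 tail described in the abstract.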

  10. Formation of massive, dense cores by cloud-cloud collisions

    NASA Astrophysics Data System (ADS)

    Takahira, Ken; Shima, Kazuhiro; Habe, Asao; Tasker, Elizabeth J.

    2018-05-01

    We performed sub-parsec (~0.014 pc) scale simulations of cloud-cloud collisions of two idealized turbulent molecular clouds (MCs) with different masses in the range of (0.76-2.67) × 10⁴ M⊙ and with collision speeds of 5-30 km s⁻¹. These parameters are larger than those in Takahira, Tasker, and Habe (2014, ApJ, 792, 63), in which the colliding system showed a partial gaseous arc morphology that supports the NANTEN observations of objects indicated to be colliding MCs. Gas clumps with density greater than 10⁻²⁰ g cm⁻³ were identified as pre-stellar cores and tracked through the simulation to investigate the effects of the masses of the colliding clouds and the collision speeds on the resulting core population. Our results demonstrate that the properties of the smaller cloud are more important for the outcome of cloud-cloud collisions. The mass function of the formed cores can be approximated by a power-law relation with an index γ = -1.6 in slower cloud-cloud collisions (v ~ 5 km s⁻¹), in good agreement with observations of MCs. A faster relative speed increases the number of cores formed in the early stage of the collision and shortens the gas accretion phase of cores in the shocked region, leading to the suppression of core growth. A bending point appears in the high-mass part of the core mass function, and the bending-point mass decreases with increasing collision speed for the same combination of colliding clouds. The part of the core mass function above the bending-point mass can be approximated by a power law with γ = -2 to -3, similar to the power-law index of the massive part of the observed stellar initial mass function. We discuss the implications of our results for massive-star formation in our Galaxy.

  11. Behavior-aware cache hierarchy optimization for low-power multi-core embedded systems

    NASA Astrophysics Data System (ADS)

    Zhao, Huatao; Luo, Xiao; Zhu, Chen; Watanabe, Takahiro; Zhu, Tianbo

    2017-07-01

    In modern embedded systems, the increasing number of cores requires efficient cache hierarchies to ensure data throughput, but such hierarchies are constrained by their large size and by interfering accesses, which lead to both performance degradation and wasted energy. In this paper, we propose a behavior-aware cache hierarchy (BACH) that optimally allocates multi-level cache resources among many cores, greatly improving the efficiency of the cache hierarchy and lowering energy consumption. BACH takes full advantage of explored application behaviors and runtime cache resource demands as the basis for cache allocation, so that the cache hierarchy can be optimally configured to meet runtime demand. BACH was implemented on the gem5 simulator. The experimental results show that the energy consumption of a three-level cache hierarchy can be reduced by 5.29% to 27.94% compared with other key approaches, while the performance of the multi-core system even improves slightly after accounting for hardware overhead.

  12. Modeling of grain-oriented Si-steel and amorphous alloy iron core under ferroresonance using Jiles-Atherton hysteresis method

    NASA Astrophysics Data System (ADS)

    Sima, Wenxia; Zou, Mi; Yang, Ming; Yang, Qing; Peng, Daixiao

    2018-05-01

    Amorphous alloy is increasingly used in power transformer iron cores because of its excellent low-loss performance. However, its potential harm to the power system during transformer electromagnetic transients has not been fully studied. This study develops a simulation model to analyze the effect of transformer iron core materials on ferroresonance. The model is based on the transformer π equivalent circuit. A flux linkage-current (ψ-i) Jiles-Atherton reactor is developed in the Electromagnetic Transients Program-Alternative Transients Program (EMTP-ATP) and is used to represent the magnetizing branches of the transformer model. Two ferroresonance cases are studied to compare the performance of grain-oriented Si-steel and amorphous alloy cores. The ferroresonance overvoltage and overcurrent are discussed under different system parameters. Results show that the amorphous alloy transformer generates higher voltages and currents than the grain-oriented Si-steel transformer and poses a significantly greater risk to power system safety.
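
    The Jiles-Atherton magnetizing branch mentioned above integrates a differential susceptibility rather than a single-valued B-H curve, which is what produces the hysteresis loop. A minimal scalar sketch follows; the parameters are invented for illustration, the reversible component c is dropped for brevity, and this is not the paper's ψ-i reactor implementation:

```python
import math

def jiles_atherton_loop(h_amp=2000.0, n=4000, cycles=3,
                        Ms=1.6e6, a=1100.0, alpha=1e-5, k=400.0):
    """Integrate the scalar Jiles-Atherton equation
    dM/dH = (Man - M) / (k*delta - alpha*(Man - M))
    over a sinusoidal applied field H(t); returns (H, M) samples."""
    M = 0.0
    Hs = [h_amp * math.sin(2 * math.pi * cycles * i / n) for i in range(n + 1)]
    out = []
    for H0, H1 in zip(Hs, Hs[1:]):
        dH = H1 - H0
        if dH == 0.0:
            continue
        He = H1 + alpha * M                      # effective field
        x = He / a
        if abs(x) < 1e-6:                        # Langevin small-x limit
            Man = Ms * x / 3.0
        else:                                    # anhysteretic magnetization
            Man = Ms * (1.0 / math.tanh(x) - 1.0 / x)
        delta = 1.0 if dH > 0 else -1.0          # sign of dH/dt
        dMdH = (Man - M) / (k * delta - alpha * (Man - M))
        M += dMdH * dH                           # forward-Euler step
        out.append((H1, M))
    return out
```

    Because M depends on the sign of dH/dt, sweeping H up and back down traces different branches: the remanent magnetization at H = 0 is nonzero, which is the loop the ψ-i reactor reproduces.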

  13. Total Precipitable Water

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2012-01-01

    The simulation was performed on 64K cores of Intrepid, running at 0.25 simulated-years-per-day and taking 25 million core-hours. This is the first simulation using both the CAM5 physics and the highly scalable spectral element dynamical core. The animation of Total Precipitable Water clearly shows hurricanes developing in the Atlantic and Pacific.
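
    As a quick consistency check on the figures quoted above (taking "64K cores" to mean 65,536, an assumption), the implied wall-clock time and simulated span can be computed directly:

```python
cores = 64 * 1024            # "64K cores" on Intrepid (assumed = 65,536)
core_hours = 25e6            # total charge quoted
rate = 0.25                  # simulated years per wall-clock day

wall_hours = core_hours / cores          # ~381 h of wall-clock time
wall_days = wall_hours / 24.0            # ~15.9 days
sim_years = wall_days * rate             # ~4 simulated years

print(round(wall_days, 1), round(sim_years, 1))
```

    So the quoted rate and core-hour budget correspond to roughly four simulated years of the CAM5 run.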

  14. Waterlike anomalies in a two-dimensional core-softened potential

    NASA Astrophysics Data System (ADS)

    Bordin, José Rafael; Barbosa, Marcia C.

    2018-02-01

    We investigate the structural, thermodynamic, and dynamic behavior of a two-dimensional (2D) core-corona system using Langevin dynamics simulations. The particles are modeled with a core-softened potential which exhibits waterlike anomalies in three dimensions. In previous studies of a quasi-2D system, a new region of structural anomalies was observed in the pressure versus temperature phase diagram. Here we show that for the two-dimensional case two regions with structural, density, and diffusion anomalies appear in the pressure versus temperature phase diagram. Our findings indicate that the anomalous region at lower densities is due to the competition between the two length scales in the potential, while at higher densities the anomalous region is related to the reentrance of the melting line.

  15. VERAIn

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simunovic, Srdjan

    2015-02-16

    CASL's modeling and simulation technology, the Virtual Environment for Reactor Applications (VERA), incorporates coupled physics and science-based models, state-of-the-art numerical methods, modern computational science, integrated uncertainty quantification (UQ), and validation against data from operating pressurized water reactors (PWRs), single-effect experiments, and integral tests. The computational simulation component of VERA is the VERA Core Simulator (VERA-CS). The core simulator is the specific collection of multi-physics computer codes used to model and deplete an LWR core over multiple cycles. The core simulator has a single common input file that drives all of the different physics codes. The parser code, VERAIn, converts VERA input into an XML file that is used as input to the different VERA codes.

  16. Tinker-HP: a massively parallel molecular dynamics package for multiscale simulations of large complex systems with advanced point dipole polarizable force fields.

    PubMed

    Lagardère, Louis; Jolly, Luc-Henri; Lipparini, Filippo; Aviat, Félix; Stamm, Benjamin; Jing, Zhifeng F; Harger, Matthew; Torabifard, Hedieh; Cisneros, G Andrés; Schnieders, Michael J; Gresh, Nohad; Maday, Yvon; Ren, Pengyu Y; Ponder, Jay W; Piquemal, Jean-Philip

    2018-01-28

    We present Tinker-HP, a massively MPI-parallel package dedicated to classical molecular dynamics (MD) and to multiscale simulations using advanced polarizable force fields (PFF) encompassing distributed multipole electrostatics. Tinker-HP is an evolution of the popular Tinker package that preserves its simplicity of use and its reference double-precision implementation for CPUs. Grounded in interdisciplinary efforts with applied mathematics, Tinker-HP allows long polarizable MD simulations on large systems of up to millions of atoms. We detail in the paper the newly developed extension of massively parallel 3D spatial decomposition to point dipole polarizable models, as well as their coupling to efficient Krylov iterative and non-iterative polarization solvers. The design of the code allows the use of various computer systems ranging from laboratory workstations to modern petascale supercomputers with thousands of cores. Tinker-HP therefore offers the first high-performance, scalable CPU computing environment for the development of next-generation point dipole PFFs and for production simulations. Strategies linking Tinker-HP to quantum mechanics (QM) in the framework of multiscale polarizable self-consistent QM/MD simulations are also provided. The possibilities, performance, and scalability of the software are demonstrated via benchmark calculations using the polarizable AMOEBA force field on systems ranging from large water boxes of increasing size and ionic liquids to (very) large biosystems encompassing several proteins as well as the complete satellite tobacco mosaic virus and ribosome structures. For small systems, Tinker-HP appears to be competitive with the Tinker-OpenMM GPU implementation of Tinker. As the system size grows, Tinker-HP remains operational thanks to its access to distributed memory and takes advantage of new algorithms enabling stable long-timescale polarizable simulations. Overall, a several-thousand-fold acceleration over a single-core computation is observed for the largest systems. The extension of the present CPU implementation of Tinker-HP to other computational platforms is discussed.

  17. PhysiCell: An open source physics-based cell simulator for 3-D multicellular systems

    PubMed Central

    Ghaffarizadeh, Ahmadreza; Mumenthaler, Shannon M.

    2018-01-01

    Many multicellular systems problems can only be understood by studying how cells move, grow, divide, interact, and die. Tissue-scale dynamics emerge from systems of many interacting cells as they respond to and influence their microenvironment. The ideal “virtual laboratory” for such multicellular systems simulates both the biochemical microenvironment (the “stage”) and many mechanically and biochemically interacting cells (the “players” upon the stage). PhysiCell—physics-based multicellular simulator—is an open source agent-based simulator that provides both the stage and the players for studying many interacting cells in dynamic tissue microenvironments. It builds upon a multi-substrate biotransport solver to link cell phenotype to multiple diffusing substrates and signaling factors. It includes biologically-driven sub-models for cell cycling, apoptosis, necrosis, solid and fluid volume changes, mechanics, and motility “out of the box.” The C++ code has minimal dependencies, making it simple to maintain and deploy across platforms. PhysiCell has been parallelized with OpenMP, and its performance scales linearly with the number of cells. Simulations up to 10^5-10^6 cells are feasible on quad-core desktop workstations; larger simulations are attainable on single HPC compute nodes. We demonstrate PhysiCell by simulating the impact of necrotic core biomechanics, 3-D geometry, and stochasticity on the dynamics of hanging drop tumor spheroids and ductal carcinoma in situ (DCIS) of the breast. We demonstrate stochastic motility, chemical and contact-based interaction of multiple cell types, and the extensibility of PhysiCell with examples in synthetic multicellular systems (a “cellular cargo delivery” system, with application to anti-cancer treatments), cancer heterogeneity, and cancer immunology. PhysiCell is a powerful multicellular systems simulator that will be continually improved with new capabilities and performance improvements. 
It also represents a significant independent code base for replicating results from other simulation platforms. The PhysiCell source code, examples, documentation, and support are available under the BSD license at http://PhysiCell.MathCancer.org and http://PhysiCell.sf.net. PMID:29474446

  18. Simulation Methods for Design of Networked Power Electronics and Information Systems

    DTIC Science & Technology

    2014-07-01

    Insertion of latency in every branch and at every node permits the system model to be efficiently distributed across many separate computing cores. An... the system. We demonstrated extensibility and generality of the Virtual Test Bed (VTB) framework to support multiple solvers and their associated... Information Systems. Objectives: The overarching objective of this program is to develop methods for fast

  19. High-capacity mixed fiber-wireless backhaul networks using MMW radio-over-MCF and MIMO

    NASA Astrophysics Data System (ADS)

    Pham, Thu A.; Pham, Hien T. T.; Le, Hai-Chau; Dang, Ngoc T.

    2017-10-01

    In this paper, we propose a high-capacity backhaul network based on mixed fiber-wireless systems using millimeter-wave radio-over-multi-core fiber (MMW RoMCF) and multiple-input multiple-output (MIMO) transmission for next-generation mobile access networks. In addition, we investigate the use of an avalanche photodiode (APD) to improve the capacity of the proposed backhaul downlink. We then theoretically analyze the system capacity while considering various physical impairments including noise, MCF crosstalk, and fading modeled by a Rician MIMO channel. The feasibility of the proposed backhaul architecture is verified via numerical simulation. The results demonstrate that the developed backhaul solution can significantly enhance backhaul capacity; a system capacity of 24 bps/Hz can be achieved with a 20-km 8-core MCF and 8 × 8 MIMO transmission over a 100-m Rician fading link. It is also shown that the system performance, in terms of channel capacity, strongly depends on the MCF inter-core crosstalk, which is governed by the mode coupling coefficient, the core pitch, and the bending radius.
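
    Spectral-efficiency figures like the 24 bps/Hz above come from the standard flat-fading MIMO Shannon capacity, C = log2 det(I + (ρ/Nt) H Hᴴ). A minimal 2×2 sketch follows; the channel matrix and SNR here are toy values for illustration, not the paper's 8×8 Rician model:

```python
import math

def mimo_capacity_2x2(H, snr_linear):
    """C = log2 det(I2 + (snr/Nt) * H * H^H) for a 2x2 complex channel H."""
    nt = 2
    g = snr_linear / nt
    # G = H * H^H (conjugate transpose), computed element-wise for 2x2
    a = sum(H[0][k] * H[0][k].conjugate() for k in range(2))
    b = sum(H[0][k] * H[1][k].conjugate() for k in range(2))
    c = sum(H[1][k] * H[0][k].conjugate() for k in range(2))
    d = sum(H[1][k] * H[1][k].conjugate() for k in range(2))
    # det(I + g*G) for a 2x2 matrix: (1+ga)(1+gd) - (gb)(gc)
    det = (1 + g * a) * (1 + g * d) - (g * b) * (g * c)
    return math.log2(det.real)

# Two orthogonal spatial streams at an SNR of 10 (linear):
H = [[1, 0], [0, 1]]
print(mimo_capacity_2x2(H, 10.0))   # log2(36) ≈ 5.17 bps/Hz
```

    With an ideal orthogonal channel, capacity grows roughly linearly in the number of antennas; inter-core crosstalk and Rician fading correlate the entries of H and pull the capacity below this bound.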

  20. Comparative study between single core model and detail core model of CFD modelling on reactor core cooling behaviour

    NASA Astrophysics Data System (ADS)

    Darmawan, R.

    2018-01-01

    The nuclear power industry has faced uncertainties since the unfortunate accident at the Fukushima Daiichi Nuclear Power Plant. Nuclear power plant safety has become the major hindrance in the planning of nuclear power programs in new-build countries. Thus, understanding the behaviour of the reactor system is very important to ensure the continuous development and improvement of reactor safety. Throughout the development of nuclear reactor technology, investigation and analysis of reactor safety have gone through several phases. In the early days, analytical and experimental methods were employed. For the last four decades, 1D system-level codes have been widely used. The continuous development of nuclear reactor technology has brought about more complex systems and processes in nuclear reactor operation, and more detailed multi-dimensional simulation codes are needed to assess these new reactors. Recently, 2D and 3D approaches such as CFD have been explored. This paper discusses a comparative study of two different approaches to CFD modelling of reactor core cooling behaviour.

  1. Convergence studies of deterministic methods for LWR explicit reflector methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Canepa, S.; Hursin, M.; Ferroukhi, H.

    2013-07-01

    The standard approach in modern 3-D core simulators, employed either for steady-state or transient simulations, is to use albedo coefficients or explicit reflectors at the core axial and radial boundaries. In the latter approach, few-group homogenized nuclear data are produced a priori with lattice transport codes using 2-D reflector models. Recently, the explicit reflector methodology of the deterministic CASMO-4/SIMULATE-3 code system was identified as potentially one of the main sources of error for core analyses of the Swiss operating LWRs, which all belong to GII designs. Considering that some of the new GIII designs will rely on very different reflector concepts, a review and assessment of the reflector methodology for various LWR designs appeared relevant. Therefore, the purpose of this paper is first to recall the concepts of the explicit reflector modelling approach as employed by CASMO/SIMULATE. Then, for selected reflector configurations representative of both GII and GIII designs, a benchmarking of the few-group nuclear data produced with the deterministic lattice code CASMO-4 and its successor, CASMO-5, is conducted. On this basis, a convergence study with regard to geometrical requirements when using deterministic methods with 2-D homogeneous models is conducted, and the effect on the downstream 3-D core analysis accuracy is evaluated for a typical GII reflector design in order to assess the results against available plant measurements. (authors)

  2. Performance of hybrid programming models for multiscale cardiac simulations: preparing for petascale computation.

    PubMed

    Pope, Bernard J; Fitch, Blake G; Pitman, Michael C; Rice, John J; Reumann, Matthias

    2011-10-01

    Future multiscale and multiphysics models that support research into human disease, translational medical science, and treatment can utilize the power of high-performance computing (HPC) systems. We anticipate that computationally efficient multiscale models will require the use of sophisticated hybrid programming models, mixing distributed message-passing processes [e.g., the message-passing interface (MPI)] with multithreading (e.g., OpenMP, Pthreads). The objective of this study is to compare the performance of such hybrid programming models when applied to the simulation of a realistic physiological multiscale model of the heart. Our results show that the hybrid models perform favorably when compared to an implementation using only the MPI and, furthermore, that OpenMP in combination with the MPI provides a satisfactory compromise between performance and code complexity. Having the ability to use threads within MPI processes enables the sophisticated use of all processor cores for both computation and communication phases. Considering that HPC systems in 2012 will have two orders of magnitude more cores than what was used in this study, we believe that faster than real-time multiscale cardiac simulations can be achieved on these systems.

  3. Implementation of extended Lagrangian dynamics in GROMACS for polarizable simulations using the classical Drude oscillator model.

    PubMed

    Lemkul, Justin A; Roux, Benoît; van der Spoel, David; MacKerell, Alexander D

    2015-07-15

    Explicit treatment of electronic polarization in empirical force fields used for molecular dynamics simulations represents an important advancement in simulation methodology. A straightforward means of treating electronic polarization in these simulations is the inclusion of Drude oscillators, which are auxiliary, charge-carrying particles bonded to the cores of atoms in the system. The additional degrees of freedom make these simulations more computationally expensive relative to simulations using traditional fixed-charge (additive) force fields. Thus, efficient tools are needed for conducting these simulations. Here, we present the implementation of highly scalable algorithms in the GROMACS simulation package that allow for the simulation of polarizable systems using extended Lagrangian dynamics with a dual Nosé-Hoover thermostat as well as simulations using a full self-consistent field treatment of polarization. The performance of systems of varying size is evaluated, showing that the present code parallelizes efficiently and is the fastest implementation of the extended Lagrangian methods currently available for simulations using the Drude polarizable force field. © 2015 Wiley Periodicals, Inc.

  4. Design and pilot evaluation of the RAH-66 Comanche Core AFCS

    NASA Technical Reports Server (NTRS)

    Fogler, Donald L., Jr.; Keller, James F.

    1993-01-01

    This paper addresses the design and pilot evaluation of the Core Automatic Flight Control System (AFCS) for the Reconnaissance/Attack Helicopter (RAH-66) Comanche. During the period from November 1991 through February 1992, the RAH-66 Comanche control laws were evaluated through a structured pilot acceptance test using a motion base simulator. Design requirements, descriptions of the control law design, and handling qualities data collected from ADS-33 maneuvers are presented.

  5. Featured Image: The Simulated Collapse of a Core

    NASA Astrophysics Data System (ADS)

    Kohler, Susanna

    2016-11-01

    This stunning snapshot (click for a closer look!) is from a simulation of a core-collapse supernova. Despite having been studied for many decades, the mechanism driving the explosions of core-collapse supernovae is still an area of active research. Extremely complex simulations such as this one represent best efforts to include as many realistic physical processes as is currently computationally feasible. In this study led by Luke Roberts (a NASA Einstein Postdoctoral Fellow at Caltech at the time), a core-collapse supernova is modeled long-term in fully 3D simulations that include the effects of general relativity, radiation hydrodynamics, and even neutrino physics. The authors use these simulations to examine the evolution of a supernova after its core bounce. To read more about the team's findings (and see more awesome images from their simulations), check out the paper below! Citation: Luke F. Roberts et al 2016 ApJ 831 98. doi:10.3847/0004-637X/831/1/98

  6. Preliminary analysis of loss-of-coolant accident in Fukushima nuclear accident

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Su'ud, Zaki; Anshari, Rio

    Loss-of-coolant accidents (LOCA) in boiling water reactors (BWRs), particularly in the Fukushima nuclear accident, are discussed in this paper. The Tohoku earthquake triggered the shutdown of the nuclear power reactors at the Fukushima Nuclear Power Station. Although the shutdown process was completed, cooling, at a much smaller level than in normal operation, is still needed to remove decay heat from the reactor core until the reactor reaches the cold-shutdown condition. If a LOCA happens in this condition, it causes the reactor fuel and other core temperatures to increase and can lead to reactor core meltdown and the release of radioactive material to the environment, as in the Fukushima Dai-ichi case. In this study, numerical simulation has been performed to calculate the pressure composition, water level, and temperature distribution in the reactor during this accident. Two coolant regulating systems were operational on reactor unit 1 during this accident: the Isolation Condenser (IC) system and the Safety Relief Valves (SRV). The average mass flow of steam to the IC system in this event was 10 kg/s, which kept the reactor core covered for about 3.2 hours; the core became fully uncovered 4.7 hours later. Two coolant regulating systems were operational on reactor unit 2: the Reactor Core Isolation Cooling (RCIC) system and the Safety Relief Valves (SRV). The average coolant mass flow in this event was 20 kg/s, which kept the reactor core covered for about 73 hours; the core became fully uncovered 75 hours later. Three coolant regulating systems were operational on reactor unit 3: the Reactor Core Isolation Cooling (RCIC) system, the High Pressure Coolant Injection (HPCI) system, and the Safety Relief Valves (SRV). The average water mass flow in this event was 15 kg/s, which kept the reactor core covered for about 37 hours; the core became fully uncovered 40 hours later.

  7. Preliminary analysis of loss-of-coolant accident in Fukushima nuclear accident

    NASA Astrophysics Data System (ADS)

    Su'ud, Zaki; Anshari, Rio

    2012-06-01

    Loss-of-coolant accidents (LOCA) in boiling water reactors (BWRs), particularly in the Fukushima nuclear accident, are discussed in this paper. The Tohoku earthquake triggered the shutdown of the nuclear power reactors at the Fukushima Nuclear Power Station. Although the shutdown process was completed, cooling, at a much smaller level than in normal operation, is still needed to remove decay heat from the reactor core until the reactor reaches the cold-shutdown condition. If a LOCA happens in this condition, it causes the reactor fuel and other core temperatures to increase and can lead to reactor core meltdown and the release of radioactive material to the environment, as in the Fukushima Dai-ichi case. In this study, numerical simulation has been performed to calculate the pressure composition, water level, and temperature distribution in the reactor during this accident. Two coolant regulating systems were operational on reactor unit 1 during this accident: the Isolation Condenser (IC) system and the Safety Relief Valves (SRV). The average mass flow of steam to the IC system in this event was 10 kg/s, which kept the reactor core covered for about 3.2 hours; the core became fully uncovered 4.7 hours later. Two coolant regulating systems were operational on reactor unit 2: the Reactor Core Isolation Cooling (RCIC) system and the Safety Relief Valves (SRV). The average coolant mass flow in this event was 20 kg/s, which kept the reactor core covered for about 73 hours; the core became fully uncovered 75 hours later. Three coolant regulating systems were operational on reactor unit 3: the Reactor Core Isolation Cooling (RCIC) system, the High Pressure Coolant Injection (HPCI) system, and the Safety Relief Valves (SRV). The average water mass flow in this event was 15 kg/s, which kept the reactor core covered for about 37 hours; the core became fully uncovered 40 hours later.

  8. Modeling of a Flooding Induced Station Blackout for a Pressurized Water Reactor Using the RISMC Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mandelli, Diego; Prescott, Steven R; Smith, Curtis L

    2011-07-01

    In the Risk-Informed Safety Margin Characterization (RISMC) approach we want to understand not just the frequency of an event like core damage, but how close we are (or are not) to key safety-related events and how we might increase our safety margins. The RISMC Pathway uses the probabilistic margin approach to quantify impacts on reliability and safety by coupling probabilistic (via stochastic simulation) and mechanistic (via physics models) approaches. This coupling takes place through the interchange of physical parameters and operational or accident scenarios. In this paper we apply the RISMC approach to evaluate the impact of a power uprate on a pressurized water reactor (PWR) for a tsunami-induced flooding test case. This analysis is performed using the RISMC toolkit: the RELAP-7 and RAVEN codes. RELAP-7 is the new generation of system analysis codes responsible for simulating the thermal-hydraulic dynamics of PWR and boiling water reactor systems. RAVEN has two capabilities: to act as a controller of the RELAP-7 simulation (e.g., system activation) and to perform statistical analyses (e.g., run multiple RELAP-7 simulations where the sequencing/timing of events has been changed according to a set of stochastic distributions). By using the RISMC toolkit, we can evaluate how a power uprate affects the system recovery measures needed to avoid core damage after the PWR has lost all available AC power due to tsunami-induced flooding. The simulation of the actual flooding is performed using a smoothed-particle hydrodynamics code, NEUTRINO.
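
    RAVEN's statistical role described above, running many physics simulations with event timings drawn from distributions and tallying the safety outcomes, can be illustrated with a toy stand-in for the physics code. All names, distributions, and thresholds below are invented for illustration and are not CASL inputs:

```python
import random

def toy_transient(ac_recovery_h, boiloff_margin_h=6.0):
    """Stand-in 'physics model': core damage occurs if AC power is not
    recovered before the (assumed) coolant boil-off margin is exhausted."""
    return ac_recovery_h > boiloff_margin_h      # True => core damage

def sample_core_damage_prob(n_runs=10000, seed=42):
    """RAVEN-style outer loop: sample stochastic inputs, run the model,
    and estimate the probability of the core-damage outcome."""
    rng = random.Random(seed)                    # fixed seed: reproducible
    damage = 0
    for _ in range(n_runs):
        # Stochastic input: skewed (lognormal) AC power recovery time
        recovery = rng.lognormvariate(1.2, 0.8)
        if toy_transient(recovery):
            damage += 1
    return damage / n_runs

p = sample_core_damage_prob()
```

    In the real toolkit the inner call is a full RELAP-7 transient and the sampled quantities include component activation times; the outer sampling loop and outcome tally are what RAVEN automates.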

  9. Graphical Modeling Meets Systems Pharmacology.

    PubMed

    Lombardo, Rosario; Priami, Corrado

    2017-01-01

    A main source of failure in systems projects (including systems pharmacology) is poor communication and differing expectations among stakeholders. A common, unambiguous language that is naturally comprehensible by all the involved players is a boost to success. We present bStyle, a modeling tool that adopts a graphical language close enough to cartoons to be a common medium for exchanging ideas and data, and at the same time formal enough to enable modeling, analysis, and dynamic simulation of a system. Data analysis and simulation integrated in the same application are fundamental to understanding the mechanisms of action of drugs: a core aspect of systems pharmacology.

  10. Graphical Modeling Meets Systems Pharmacology

    PubMed Central

    Lombardo, Rosario; Priami, Corrado

    2017-01-01

    A main source of failure in systems projects (including systems pharmacology) is poor communication and differing expectations among stakeholders. A common, unambiguous language that is naturally comprehensible by all the involved players is a boost to success. We present bStyle, a modeling tool that adopts a graphical language close enough to cartoons to be a common medium for exchanging ideas and data, and at the same time formal enough to enable modeling, analysis, and dynamic simulation of a system. Data analysis and simulation integrated in the same application are fundamental to understanding the mechanisms of action of drugs: a core aspect of systems pharmacology. PMID:28469411

  11. Density-based cluster algorithms for the identification of core sets

    NASA Astrophysics Data System (ADS)

    Lemke, Oliver; Keller, Bettina G.

    2016-10-01

    The core-set approach is a discretization method for Markov state models of complex molecular dynamics. Core sets are disjoint metastable regions in the conformational space, which need to be known prior to the construction of the core-set model. We propose to use density-based cluster algorithms to identify the cores. We compare three different density-based cluster algorithms: the CNN, the DBSCAN, and the Jarvis-Patrick algorithm. While the core-set models based on the CNN and DBSCAN clustering are well-converged, constructing core-set models based on the Jarvis-Patrick clustering cannot be recommended. In a well-converged core-set model, the number of core sets is up to an order of magnitude smaller than the number of states in a conventional Markov state model with comparable approximation error. Moreover, using the density-based clustering one can extend the core-set method to systems which are not strongly metastable. This is important for the practical application of the core-set method because most biologically interesting systems are only marginally metastable. The key point is to perform a hierarchical density-based clustering while monitoring the structure of the metric matrix which appears in the core-set method. We test this approach on a molecular-dynamics simulation of a highly flexible 14-residue peptide. The resulting core-set models have a high spatial resolution and can distinguish between conformationally similar yet chemically different structures, such as register-shifted hairpin structures.
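
    A minimal pure-Python DBSCAN (one of the three density-based algorithms compared above) illustrates how dense regions become labeled cores while low-density points are left unassigned. The toy 2-D points below are invented for illustration; think of them as trajectory frames projected onto two order parameters:

```python
import math

def dbscan(points, eps=0.8, min_pts=3):
    """Classic DBSCAN: dense clusters get labels 0, 1, ...; noise stays -1."""
    n = len(points)
    labels = [-1] * n
    dist = lambda i, j: math.dist(points[i], points[j])
    # Precompute eps-neighborhoods (each point is its own neighbor)
    neighbors = [[j for j in range(n) if dist(i, j) <= eps] for i in range(n)]
    cluster = 0
    for i in range(n):
        if labels[i] != -1 or len(neighbors[i]) < min_pts:
            continue                     # already assigned, or not a core point
        labels[i] = cluster
        queue = list(neighbors[i])
        while queue:                     # grow the cluster by density-reachability
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster
                if len(neighbors[j]) >= min_pts:   # core point: keep expanding
                    queue.extend(neighbors[j])
            # border points get a label but are never expanded
        cluster += 1
    return labels

# Two dense blobs (candidate core sets) plus two isolated "transition" frames:
blob_a = [(0, 0), (0.3, 0), (0, 0.3), (0.3, 0.3), (0.15, 0.15)]
blob_b = [(5, 5), (5.3, 5), (5, 5.3), (5.3, 5.3), (5.15, 5.15)]
noise = [(10, 0), (0, 10)]
labels = dbscan(blob_a + blob_b + noise)
```

    The points left at -1 are exactly the low-density frames the core-set method wants to exclude from the discretization, leaving disjoint metastable cores.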

  12. Preliminary control system design and analysis for the Space Station Furnace Facility thermal control system

    NASA Technical Reports Server (NTRS)

    Jackson, M. E.

    1995-01-01

    This report presents the Space Station Furnace Facility (SSFF) thermal control system (TCS) preliminary control system design and analysis. The SSFF provides the necessary core systems to operate various materials processing furnaces. The TCS is defined as one of the core systems, and its function is to collect excess heat from furnaces and to provide precise cold temperature control of components and of certain furnace zones. Physical interconnection of parallel thermal control subsystems through a common pump implies the description of the TCS by coupled nonlinear differential equations in pressure and flow. This report formulates the system equations and develops the controllers that cause the interconnected subsystems to satisfy flow rate tracking requirements. Extensive digital simulation results are presented to show the flow rate tracking performance.

  13. A comparison of East Asian summer monsoon simulations from CAM3.1 with three dynamic cores

    NASA Astrophysics Data System (ADS)

    Wei, Ting; Wang, Lanning; Dong, Wenjie; Dong, Min; Zhang, Jingyong

    2011-12-01

    This paper examines the sensitivity of CAM3.1 simulations of East Asian summer monsoon (EASM) to the choice of dynamic cores using three long-term simulations, one with each of the following cores: the Eulerian spectral transform method (EUL), semi-Lagrangian scheme (SLD) and finite volume approach (FV). Our results indicate that the dynamic cores significantly influence the simulated fields not only through dynamics, such as wind, but also through physical processes, such as precipitation. Generally speaking, SLD is superior to EUL and FV in simulating the climatological features of EASM and its interannual variability. The SLD version of the CAM model partially reduces its known deficiency in simulating the climatological features of East Asian summer precipitation. The strength and position of simulated western Pacific subtropical high (WPSH) and its ridge line compare more favourably with observations in SLD and FV than in EUL. They contribute to the intensification of the south-easterly along the south of WPSH and the vertical motion through the troposphere around 30° N, where the subtropical rain belt exists. Additionally, SLD simulates the scope of the westerly jet core over East Asia more realistically than the other two dynamic cores do. Considerable systematic errors of the seasonal migration of monsoon rain belt and water vapour flux exist in all of the three versions of CAM3.1 model, although it captures the broad northward shift of convection, and the simulated results share similarities. The interannual variation of EASM is found to be more accurate in SLD simulation, which reasonably reproduces the leading combined patterns of precipitation and 850-hPa winds in East Asia, as well as the 2.5- and 10-year periods of Li-Zeng EASM index. These results emphasise the importance of dynamic cores for the EASM simulation as distinct from the simulation's sensitivity to the physical parameterisations.

  14. Accelerating Climate Simulations Through Hybrid Computing

    NASA Technical Reports Server (NTRS)

    Zhou, Shujia; Sinno, Scott; Cruz, Carlos; Purcell, Mark

    2009-01-01

    Unconventional multi-core processors (e.g., IBM Cell B/E and NVIDIA GPUs) have emerged as accelerators in climate simulation. However, climate models typically run on parallel computers with conventional processors (e.g., Intel and AMD) using MPI. Connecting accelerators to this architecture efficiently and easily becomes a critical issue. When using MPI for the connection, we identified two challenges: (1) an identical MPI implementation is required in both systems, and (2) existing MPI code must be modified to accommodate the accelerators. In response, we have extended and deployed IBM Dynamic Application Virtualization (DAV) in a hybrid computing prototype system (one blade with two Intel quad-core processors and two IBM QS22 Cell blades, connected with InfiniBand), allowing compute-intensive functions to be seamlessly offloaded to remote, heterogeneous accelerators in a scalable, load-balanced manner. Currently, a climate solar radiation model running with multiple MPI processes has been offloaded to multiple Cell blades with approximately 10% network overhead.

  15. Bridging FPGA and GPU technologies for AO real-time control

    NASA Astrophysics Data System (ADS)

    Perret, Denis; Lainé, Maxime; Bernard, Julien; Gratadour, Damien; Sevin, Arnaud

    2016-07-01

    Our team has developed a common environment for high-performance simulation and real-time control of AO systems based on the use of Graphics Processing Units (GPUs) in the context of the COMPASS project. Such a solution, which relies on the real-time core of the simulation providing adequate computing performance, limits the cost of developing AO RTC systems and makes them more scalable. Code developed and validated in the context of the simulation may be injected directly into the system and tested on sky. Furthermore, the use of relatively low-cost components also offers significant advantages for the system hardware platform. However, the use of GPUs in an AO loop comes with drawbacks: the traditional way of offloading computation from the CPU to GPUs - involving multiple copies and unacceptable overhead in kernel launching - is not well suited to a real-time context. This application requires a solution enabling direct memory access (DMA) to the GPU memory from a third-party device, bypassing the operating system. This allows the device to communicate directly with the real-time core of the simulation, feeding it with the WFS camera pixel stream. We show that DMA between a custom FPGA-based frame grabber and a computation unit (GPU, FPGA, or coprocessor such as the Xeon Phi) across PCIe yields latencies compatible with what will be needed on ELTs. As a fine-grained synchronization mechanism is not yet made available by GPU vendors, we propose the use of memory polling to avoid interrupt handling and CPU involvement. Network and vision protocols are handled by the FPGA-based Network Interface Card (NIC). We present the results we obtained on a complete AO loop using camera and deformable mirror simulators.
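
    The memory-polling idea described above can be sketched in miniature. The sketch below is a hypothetical illustration, not the authors' FPGA/GPU code: a producer thread stands in for the frame grabber writing into DMA-visible memory, and the consumer busy-polls a ready word instead of waiting on an interrupt.

```python
# Minimal sketch of memory polling as a synchronization mechanism.
# Assumptions (invented for illustration): a bytearray stands in for
# DMA-visible memory, and a thread plays the frame grabber.
import threading

FRAME_READY = 1
buf = bytearray(8)          # "device-visible" ready word lives at buf[0]
frame = bytearray(4)        # frame payload area

def producer():
    frame[:] = b"PIX0"      # write the payload first...
    buf[0] = FRAME_READY    # ...then flip the ready word last

def consumer():
    while buf[0] != FRAME_READY:   # busy-poll: no interrupt, no syscall
        pass
    return bytes(frame)

t = threading.Thread(target=producer)
t.start()
data = consumer()
t.join()
```

    Writing the payload before flipping the ready word is what makes a single polled word sufficient to signal a complete frame; the real system applies the same ordering to the WFS pixel stream in GPU memory.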

  16. A computationally efficient method for full-core conjugate heat transfer modeling of sodium fast reactors

    DOE PAGES

    Hu, Rui; Yu, Yiqi

    2016-09-08

    For efficient and accurate temperature predictions of sodium fast reactor structures, a 3-D full-core conjugate heat transfer modeling capability has been developed for an advanced system analysis tool, SAM. The hexagonal lattice core is modeled with 1-D parallel channels representing the subassembly flow, and 2-D duct walls and inter-assembly gaps. The six sides of the hexagonal duct wall and the near-wall coolant region are modeled separately to account for the different temperatures and the heat transfer between the coolant flow and each side of the duct wall. The Jacobian-Free Newton Krylov (JFNK) solution method is applied to solve the fluid and solid fields simultaneously in a fully coupled fashion. The 3-D full-core conjugate heat transfer modeling capability in SAM has been demonstrated on a verification test problem with 7 fuel assemblies in a hexagonal lattice layout. In addition, the SAM simulation results are compared with RANS-based CFD simulations. Very good agreement has been achieved between the results of the two approaches.
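
    The key trick behind the JFNK method mentioned above can be illustrated with a toy example. This is a minimal sketch, in no way SAM's implementation, with a contrived two-equation residual standing in for the coupled fluid/solid system: the Jacobian is never formed analytically; Jacobian-vector products are approximated by finite differences of the residual.

```python
# Jacobian-free idea: J·v ≈ (F(u + eps·v) - F(u)) / eps, so only residual
# evaluations are needed. The residual F below is invented for illustration.
def F(u):
    x, y = u
    return [x*x + y - 2.0, x - y]   # root at x = y = 1

def jv(F, u, v, eps=1e-7):
    # finite-difference Jacobian-vector product
    Fu = F(u)
    Fp = F([ui + eps * vi for ui, vi in zip(u, v)])
    return [(a - b) / eps for a, b in zip(Fp, Fu)]

def newton_jfnk(F, u, iters=20):
    n = len(u)
    for _ in range(iters):
        # build Jacobian columns from matrix-free J·e_k products
        cols = [jv(F, u, [1.0 if i == k else 0.0 for i in range(n)])
                for k in range(n)]
        Fu = F(u)
        # toy-sized direct solve of J·du = -F(u) by Cramer's rule (2x2)
        a, c = cols[0]
        b, d = cols[1]
        det = a * d - b * c
        du = [(-Fu[0] * d + Fu[1] * b) / det,
              (-Fu[1] * a + Fu[0] * c) / det]
        u = [ui + dui for ui, dui in zip(u, du)]
    return u
```

    In production JFNK codes, the finite-difference J·v products feed a Krylov solver such as GMRES rather than the toy direct solve used here, which is what makes the approach scale to large coupled systems.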

  17. Initial Comparison of Direct and Legacy Modeling Approaches for Radial Core Expansion Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shemon, Emily R.

    2016-10-10

    Radial core expansion in sodium-cooled fast reactors provides an important reactivity feedback effect. As the reactor power increases during normal start-up conditions or accident scenarios, the core and surrounding materials heat up, causing both grid plate expansion and bowing of the assembly ducts. When the core restraint system is designed correctly, the resulting structural deformations introduce negative reactivity, which decreases the reactor power. Historically, an indirect procedure relying on perturbation theory and the coupling of legacy physics codes with limited geometry capabilities has been used to estimate the reactivity feedback due to structural deformation. With advancements in modeling and simulation, radial core expansion phenomena can now be modeled directly, providing an assessment of the accuracy of the reactivity feedback coefficients generated by indirect legacy methods. Recently a new capability was added to the PROTEUS-SN unstructured geometry neutron transport solver to analyze deformed meshes quickly and directly. By supplying the deformed mesh in addition to the base configuration input files, PROTEUS-SN automatically processes material adjustments, including the calculation of region densities to conserve mass, the calculation of isotopic densities according to material models (for example, sodium density as a function of temperature), and the subsequent re-homogenization of materials. To verify the new capability of directly simulating deformed meshes, PROTEUS-SN was used to compute reactivity feedback for a series of contrived yet representative deformed configurations of the Advanced Burner Test Reactor design. The indirect legacy procedure was also performed to generate reactivity feedback coefficients for the same deformed configurations. Interestingly, the legacy procedure consistently overestimated reactivity feedbacks by 35% compared to direct simulations with PROTEUS-SN. This overestimation indicates that the legacy procedures are in fact not conservative, since they could be overestimating reactivity feedback effects that are closely tied to reactor safety. We conclude that there is indeed value in performing direct simulation of deformed meshes despite the increased computational expense. PROTEUS-SN is already part of the SHARP multi-physics toolkit, where both thermal-hydraulic and structural-mechanical feedback modeling can be applied, but this is the first comparison of direct simulation to legacy techniques for radial core expansion.
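
    The mass-conserving density adjustment mentioned above reduces to simple bookkeeping. The sketch below is purely illustrative (the function name and data layout are invented, not PROTEUS-SN's API): when deformation changes a region's volume, isotopic number densities are rescaled so that the total number of atoms of each isotope is unchanged.

```python
def conserve_mass_densities(densities, volume_base, volume_deformed):
    # Rescale isotopic number densities N_i so that N_i * V is invariant:
    # atoms are neither created nor destroyed when the mesh deforms.
    scale = volume_base / volume_deformed
    return {iso: n * scale for iso, n in densities.items()}

# a region that swells from 2.0 to 2.5 volume units sees all of its
# densities drop by the factor 2.0/2.5 = 0.8 (values are invented)
adjusted = conserve_mass_densities({"U235": 1.0e-3, "Na23": 2.4e-2}, 2.0, 2.5)
```

    In the real code this step is combined with temperature-dependent material models (e.g. sodium density) and re-homogenization, but the conservation constraint itself is just this rescaling.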

  18. Development of the V4.2m5 and V5.0m0 Multigroup Cross Section Libraries for MPACT for PWR and BWR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Kang Seog; Clarno, Kevin T.; Gentry, Cole

    2017-03-01

    The MPACT neutronics module of the Consortium for Advanced Simulation of Light Water Reactors (CASL) core simulator is a 3-D whole-core transport code being developed for the CASL toolset, the Virtual Environment for Reactor Analysis (VERA). Key characteristics of the MPACT code include (1) a subgroup method for resonance self-shielding and (2) a whole-core transport solver with a 2-D/1-D synthesis method. The MPACT code requires a cross section library to support all of its core simulation capabilities; this library is the component with the greatest influence on simulation accuracy.

  19. Active Flash: Out-of-core Data Analytics on Flash Storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boboila, Simona; Kim, Youngjae; Vazhkudai, Sudharshan S

    2012-01-01

    Next-generation science will increasingly rely on the ability to perform efficient, on-the-fly analytics of data generated by high-performance computing (HPC) simulations modeling complex physical phenomena. Scientific computing workflows are stymied by the traditional chaining of simulation and data analysis, which creates multiple rounds of redundant reads and writes to the storage system, growing in cost with the ever-increasing gap between compute and storage speeds in HPC clusters. Recent HPC acquisitions have introduced compute-node-local flash storage as a means to alleviate this I/O bottleneck. We propose a novel approach, Active Flash, to expedite data analysis pipelines by migrating analysis to the location of the data: the flash device itself. We argue that Active Flash has the potential to enable true out-of-core data analytics by freeing up both the compute cores and the associated main memory. By performing analysis locally, dependence on limited bandwidth to a central storage system is reduced, while allowing this analysis to proceed in parallel with the main application. In addition, offloading work from the host to the more power-efficient controller reduces peak system power usage, which is already in the megawatt range and poses a major barrier to HPC system scalability. We propose an architecture for Active Flash, explore energy and performance trade-offs in moving computation from host to storage, demonstrate the ability of appropriate embedded controllers to perform data analysis and reduction tasks at speeds sufficient for this application, and present a simulation study of Active Flash scheduling policies. These results show the viability of the Active Flash model and its potential to have a transformative impact on scientific data analysis.

  20. IPSL-CM5A2. An Earth System Model designed to run long simulations for past and future climates.

    NASA Astrophysics Data System (ADS)

    Sepulchre, Pierre; Caubel, Arnaud; Marti, Olivier; Hourdin, Frédéric; Dufresne, Jean-Louis; Boucher, Olivier

    2017-04-01

    The IPSL-CM5A model was developed and released in 2013 "to study the long-term response of the climate system to natural and anthropogenic forcings as part of the 5th Phase of the Coupled Model Intercomparison Project (CMIP5)" [Dufresne et al., 2013]. Although this model has also been used for numerous paleoclimate studies, a major limitation was its computation time, which averaged 10 model years per day on 32 cores of the Curie supercomputer (at the TGCC computing centre, France). Such performance was compatible with the experimental designs of intercomparison projects (e.g. CMIP, PMIP) but became limiting for modelling activities involving several multi-millennial experiments, which are typical of Quaternary or "deep-time" paleoclimate studies, in which a fully equilibrated deep ocean is mandatory. Here we present the Earth System model IPSL-CM5A2. Starting from IPSL-CM5A, technical developments have been performed both on the separate components and on the coupling system in order to speed up the whole coupled model. These developments include the integration of hybrid MPI-OpenMP parallelization in the LMDz atmospheric component, the use of a new input-output library to perform parallel asynchronous input/output by using computing cores as "IO servers", and the use of a parallel coupling library between the ocean and atmospheric components. Running on 304 cores, the model can now simulate 55 years per day, opening the way to multi-millennial simulations. Apart from obtaining better computing performance, one aim of setting up IPSL-CM5A2 was to overcome the cold bias in global surface air temperature (t2m) depicted by IPSL-CM5A. We present the tuning strategy used to overcome this bias, as well as the main characteristics (including biases) of the pre-industrial climate simulated by IPSL-CM5A2. Lastly, we briefly present paleoclimate simulations run with this model, for the Holocene and for deeper timescales in the Cenozoic, for which the particular continental configurations were handled by a new design of the ocean tripolar grid.
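
    The quoted throughput figures imply a modest relative parallel efficiency, which can be checked with back-of-envelope arithmetic (illustrative only; the two runs differ in more than core count, so this is not a controlled scaling measurement):

```python
# throughput before and after the IPSL-CM5A2 optimizations, as quoted above
years_per_day_old, cores_old = 10.0, 32
years_per_day_new, cores_new = 55.0, 304

speedup = years_per_day_new / years_per_day_old   # 5.5x more model years/day
core_ratio = cores_new / cores_old                # 9.5x more cores
relative_efficiency = speedup / core_ratio        # ~0.58 per-core throughput ratio
```

    A per-core throughput ratio below one is expected here, since part of the extra cores serve as IO servers and coupling overhead grows with core count.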

  1. A framework for service enterprise workflow simulation with multi-agents cooperation

    NASA Astrophysics Data System (ADS)

    Tan, Wenan; Xu, Wei; Yang, Fujun; Xu, Lida; Jiang, Chuanqun

    2013-11-01

    Dynamic process modelling for service businesses is the key technique for service-oriented information systems and service business management, and the workflow model of business processes is the core part of service systems. Service business workflow simulation is the prevalent approach for analysing service business processes dynamically. The generic method for service business workflow simulation is based on discrete-event queuing theory, which lacks flexibility and scalability. In this paper, we propose a service-workflow-oriented framework for the process simulation of service businesses that uses multi-agent cooperation to address the above issues. The social rationality of agents is introduced into the proposed framework. By adopting rationality as one social factor in decision-making strategies, flexible scheduling of activity instances has been implemented. A system prototype has been developed to validate the proposed simulation framework through a business case study.

  2. Imagining the future: The core episodic simulation network dissociates as a function of timecourse and the amount of simulated information

    PubMed Central

    Thakral, Preston P.; Benoit, Roland G.; Schacter, Daniel L.

    2017-01-01

    Neuroimaging data indicate that episodic memory (i.e., remembering specific past experiences) and episodic simulation (i.e., imagining specific future experiences) are associated with enhanced activity in a common set of neural regions, often referred to as the core network. This network comprises the hippocampus, parahippocampal cortex, lateral and medial parietal cortex, lateral temporal cortex, and medial prefrontal cortex. Evidence for a core network has been taken as support for the idea that episodic memory and episodic simulation are supported by common processes. Much remains to be learned about how specific core network regions contribute to specific aspects of episodic simulation. Prior neuroimaging studies of episodic memory indicate that certain regions within the core network are differentially sensitive to the amount of information recollected (e.g., the left lateral parietal cortex). In addition, certain core network regions dissociate as a function of their timecourse of engagement during episodic memory (e.g., transient activity in the posterior hippocampus and sustained activity in the left lateral parietal cortex). In the current study, we assessed whether similar dissociations could be observed during episodic simulation. We found that the left lateral parietal cortex modulates as a function of the amount of simulated details. Of particular interest, while the hippocampus was insensitive to the amount of simulated details, we observed a temporal dissociation within the hippocampus: transient activity occurred in relatively posterior portions of the hippocampus and sustained activity occurred in anterior portions. Because the posterior hippocampal and lateral parietal findings parallel those observed previously during episodic memory, the present results add to the evidence that episodic memory and episodic simulation are supported by common processes. 
Critically, the present study also provides evidence that regions within the core network support dissociable processes. PMID:28324695

  3. Medicanes in an ocean-atmosphere coupled regional climate model

    NASA Astrophysics Data System (ADS)

    Akhtar, Naveed; Brauch, Jennifer; Ahrens, Bodo

    2014-05-01

    So-called medicanes (Mediterranean hurricanes) are meso-scale, marine, warm-core Mediterranean cyclones which exhibit some similarities with tropical cyclones. The strong cyclonic winds associated with them are a potential threat to highly populated coastal areas around the Mediterranean basin. In this study we employ an atmospheric limited-area model (COSMO-CLM) coupled with a one-dimensional ocean model (NEMO-1d) to simulate medicanes. The goal of this study is to assess the robustness of the coupled model in simulating these extreme events. For this purpose, 11 historical medicane events are simulated by the atmosphere-only and coupled models using different set-ups (horizontal grid spacings of 0.44°, 0.22°, and 0.088°; with/without spectral nudging). The results show that at high resolution the coupled model is not only able to simulate all medicane events but also improves the simulated track length, warm core, and wind speed of the medicanes compared to atmosphere-only simulations. In most cases the medicane trajectories and structures are better represented in the coupled simulations than in the atmosphere-only simulations. We conclude that the coupled model is a suitable tool for systematic and detailed study of historical medicane events and also for future projections.

  4. Three-dimensional discrete element method simulation of core disking

    NASA Astrophysics Data System (ADS)

    Wu, Shunchuan; Wu, Haoyan; Kemeny, John

    2018-04-01

    The phenomenon of core disking is commonly seen in deep drilling of highly stressed regions in the Earth's crust. Given its close relationship with the in situ stress state, the presence and features of core disking can be used to interpret the stresses when traditional in situ stress measuring techniques are not available. The core disking process was simulated in this paper using the three-dimensional discrete element method software PFC3D (particle flow code). In particular, PFC3D is used to examine the evolution of fracture initiation, propagation, and coalescence associated with core disking under various stress states. In this paper, four unresolved problems concerning core disking are investigated with a series of numerical simulations, which also provide some verification of existing results by other researchers: (1) Core disking occurs when the maximum principal stress is about 6.5 times the tensile strength. (2) For most stress situations, core disking initiates from the outer surface, except in the thrust faulting stress regime, where the fractures were found to initiate from the inner part. (3) The anisotropy of the two horizontal principal stresses affects the core disking morphology. (4) The thickness of the core disks increases with radial stress and decreases with axial stress.
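
    Finding (1) above amounts to a simple threshold test. The helper below is a hypothetical encoding of that reported result, not part of the PFC3D study itself, with invented example stress values:

```python
def core_disking_expected(sigma_max, tensile_strength, ratio=6.5):
    # Threshold reported by the simulations above: disking occurs once the
    # maximum principal stress reaches ~6.5x the rock's tensile strength.
    return sigma_max >= ratio * tensile_strength

# e.g. rock with 10 MPa tensile strength under 70 MPa maximum principal stress
likely = core_disking_expected(70.0, 10.0)
```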

  5. Virtual machine-based simulation platform for mobile ad-hoc network-based cyber infrastructure

    DOE PAGES

    Yoginath, Srikanth B.; Perumalla, Kayla S.; Henz, Brian J.

    2015-09-29

    In modeling and simulating complex systems such as mobile ad-hoc networks (MANETs) in defense communications, it is a major challenge to reconcile multiple important considerations: the rapidity of unavoidable changes to the software (network layers and applications), the difficulty of modeling the critical, implementation-dependent behavioral effects, the need to sustain larger-scale scenarios, and the desire for faster simulations. Here we present our approach to successfully reconciling them using a virtual time-synchronized virtual machine (VM)-based parallel execution framework that accurately lifts both the devices and the network communications to a virtual time plane while retaining full fidelity. At the core of our framework is a scheduling engine that operates at the level of a hypervisor scheduler, offering a unique ability to execute multi-core guest nodes over multi-core host nodes in an accurate, virtual time-synchronized manner. In contrast to other related approaches that suffer from either speed or accuracy issues, our framework provides MANET node-wise scalability, high fidelity of software behaviors, and time-ordering accuracy. The design and development of this framework is presented, and an actual implementation based on the widely used Xen hypervisor system is described. Benchmarks with synthetic and actual applications are used to identify the benefits of our approach. The time inaccuracy of traditional emulation methods is demonstrated, in comparison with the accurate execution of our framework, verified against the theoretically correct results expected from analytical models of the same scenarios. In the largest high-fidelity tests, we are able to perform virtual time-synchronized simulation of 64-node VM-based full-stack, actual software behaviors of MANETs containing a mix of static and mobile (unmanned airborne vehicle) nodes, hosted on a 32-core host, with full fidelity of unmodified ad-hoc routing protocols, unmodified application executables, and user-controllable physical-layer effects including inter-device wireless signal strength, reachability, and connectivity.

  6. Virtual machine-based simulation platform for mobile ad-hoc network-based cyber infrastructure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoginath, Srikanth B.; Perumalla, Kayla S.; Henz, Brian J.

    In modeling and simulating complex systems such as mobile ad-hoc networks (MANETs) in defense communications, it is a major challenge to reconcile multiple important considerations: the rapidity of unavoidable changes to the software (network layers and applications), the difficulty of modeling the critical, implementation-dependent behavioral effects, the need to sustain larger-scale scenarios, and the desire for faster simulations. Here we present our approach to successfully reconciling them using a virtual time-synchronized virtual machine (VM)-based parallel execution framework that accurately lifts both the devices and the network communications to a virtual time plane while retaining full fidelity. At the core of our framework is a scheduling engine that operates at the level of a hypervisor scheduler, offering a unique ability to execute multi-core guest nodes over multi-core host nodes in an accurate, virtual time-synchronized manner. In contrast to other related approaches that suffer from either speed or accuracy issues, our framework provides MANET node-wise scalability, high fidelity of software behaviors, and time-ordering accuracy. The design and development of this framework is presented, and an actual implementation based on the widely used Xen hypervisor system is described. Benchmarks with synthetic and actual applications are used to identify the benefits of our approach. The time inaccuracy of traditional emulation methods is demonstrated, in comparison with the accurate execution of our framework, verified against the theoretically correct results expected from analytical models of the same scenarios. In the largest high-fidelity tests, we are able to perform virtual time-synchronized simulation of 64-node VM-based full-stack, actual software behaviors of MANETs containing a mix of static and mobile (unmanned airborne vehicle) nodes, hosted on a 32-core host, with full fidelity of unmodified ad-hoc routing protocols, unmodified application executables, and user-controllable physical-layer effects including inter-device wireless signal strength, reachability, and connectivity.

  7. Simulation of Benchmark Cases with the Terminal Area Simulation System (TASS)

    NASA Technical Reports Server (NTRS)

    Ahmad, Nash'at; Proctor, Fred

    2011-01-01

    The hydrodynamic core of the Terminal Area Simulation System (TASS) is evaluated against different benchmark cases. In the absence of closed-form solutions for the equations governing atmospheric flows, the models are usually evaluated against idealized test cases. Over the years, various authors have suggested a suite of these idealized cases, which have become standards for testing and evaluating the dynamics and thermodynamics of atmospheric flow models. In this paper, simulations of three such cases are described. In addition, the TASS model is evaluated against a test case that uses an exact solution of the Navier-Stokes equations. The TASS results are compared against previously reported simulations of these benchmark cases in the literature. It is demonstrated that the TASS model is highly accurate, stable, and robust.

  8. Simulation of Benchmark Cases with the Terminal Area Simulation System (TASS)

    NASA Technical Reports Server (NTRS)

    Ahmad, Nashat N.; Proctor, Fred H.

    2011-01-01

    The hydrodynamic core of the Terminal Area Simulation System (TASS) is evaluated against different benchmark cases. In the absence of closed-form solutions for the equations governing atmospheric flows, the models are usually evaluated against idealized test cases. Over the years, various authors have suggested a suite of these idealized cases, which have become standards for testing and evaluating the dynamics and thermodynamics of atmospheric flow models. In this paper, simulations of three such cases are described. In addition, the TASS model is evaluated against a test case that uses an exact solution of the Navier-Stokes equations. The TASS results are compared against previously reported simulations of these benchmark cases in the literature. It is demonstrated that the TASS model is highly accurate, stable, and robust.
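
    The abstract does not name the exact Navier-Stokes solution used; a commonly used candidate for this kind of verification is the 2-D Taylor-Green vortex, sketched below purely as an illustration of how an exact solution supports solver validation:

```python
import math

def taylor_green(x, y, t, nu):
    # 2-D Taylor-Green vortex: an exact, decaying solution of the
    # incompressible Navier-Stokes equations with viscosity nu.
    decay = math.exp(-2.0 * nu * t)
    u = -math.cos(x) * math.sin(y) * decay
    v = math.sin(x) * math.cos(y) * decay
    return u, v

def divergence_fd(x, y, t, nu, h=1e-6):
    # central-difference check that the velocity field is divergence-free
    du_dx = (taylor_green(x + h, y, t, nu)[0]
             - taylor_green(x - h, y, t, nu)[0]) / (2 * h)
    dv_dy = (taylor_green(x, y + h, t, nu)[1]
             - taylor_green(x, y - h, t, nu)[1]) / (2 * h)
    return du_dx + dv_dy
```

    A solver under test would advance the same initial field numerically and compare against these analytic values at later times; the finite-difference check above merely confirms the sketch's field is divergence-free, as incompressibility requires.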

  9. Extended frequency turbofan model

    NASA Technical Reports Server (NTRS)

    Mason, J. R.; Park, J. W.; Jaekel, R. F.

    1980-01-01

    The fan model was developed using two-dimensional modeling techniques to add dynamic radial coupling between the core stream and the bypass stream of the fan. When incorporated into a complete TF-30 engine simulation, the fan model greatly improved compression system frequency response to planar inlet pressure disturbances up to 100 Hz. The improved simulation also matched engine stability limits at 15 Hz, whereas the one-dimensional fan model required twice the inlet pressure amplitude to stall the simulation. With the two-dimensional fan model verified, the program formulated a high-frequency F-100(3) engine simulation using row-by-row compression system characteristics. In addition to the F-100(3) remote-splitter fan, the program modified the model fan characteristics to simulate a proximate-splitter version of the F-100(3) engine.

  10. Multidimensional simulations of core-collapse supernovae with CHIMERA

    NASA Astrophysics Data System (ADS)

    Lentz, Eric J.; Bruenn, S. W.; Yakunin, K.; Endeve, E.; Blondin, J. M.; Harris, J. A.; Hix, W. R.; Marronetti, P.; Messer, O. B.; Mezzacappa, A.

    2014-01-01

    Core-collapse supernovae are driven by a multidimensional neutrino radiation hydrodynamic (RHD) engine, and full simulation requires at least axisymmetric (2D) and ultimately symmetry-free 3D RHD simulation. We present recent and ongoing work with our multidimensional RHD supernova code CHIMERA to understand the nature of the core-collapse explosion mechanism and its consequences. Recently completed simulations of 12-25 solar mass progenitors (Woosley & Heger 2007) in well-resolved (0.7 degrees in latitude) 2D simulations exhibit robust explosions meeting the observationally expected explosion energy. We examine the role of hydrodynamic instabilities (standing accretion shock instability, neutrino-driven convection, etc.) on the explosion dynamics and the development of the explosion energy. Ongoing 3D and 2D simulations examine the roles that simulation resolution and the removal of the imposed axisymmetry have in the triggering and development of an explosion from stellar core collapse. Companion posters explore the gravitational wave signals (Yakunin et al.) and nucleosynthesis (Harris et al.) of our simulations.

  11. Finite Element Modelling and Analysis of Damage Detection Methodology in Piezo Electric Sensor and Actuator Integrated Sandwich Cantilever Beam

    NASA Astrophysics Data System (ADS)

    Pradeep, K. R.; Thomas, A. M.; Basker, V. T.

    2018-03-01

    Structural health monitoring (SHM) is an essential component of futuristic civil, mechanical, and aerospace structures. It detects damage in a system or gives warning about the degradation of a structure by evaluating performance parameters. This is achieved by the integration of sensors and actuators into the structure. A study of the damage detection process in a piezoelectric sensor and actuator integrated sandwich cantilever beam is carried out in this paper. A possible skin-core debond at the root of the cantilever beam is simulated and compared with the undamaged case. The beam is actuated using piezoelectric actuators, and performance differences are evaluated using polyvinylidene fluoride (PVDF) sensors. The methodology utilized is the voltage/strain response of the damaged versus undamaged beam under transient actuation. A finite element model of the piezo-beam is built in ANSYS using an 8-noded coupled-field element whose nodal degrees of freedom are translations in the x and y directions and voltage. An aluminium sandwich beam with a length of 800 mm, a core thickness of 22.86 mm, and a skin thickness of 0.3 mm is considered. The skin-core debond is simulated in the model as unmerged nodes. The reduction in the fundamental frequency of the damaged beam is found to be negligible, but the voltage response of the PVDF sensor under transient excitation shows a clearly visible change indicating the debond. A piezoelectric damage detection system is an effective tool for the damage detection of aerospace and civil structural systems with inaccessible/critical locations, and it enables online monitoring since the power requirement is minimal.

  12. MHDL CAD tool with fault circuit handling

    NASA Astrophysics Data System (ADS)

    Espinosa Flores-Verdad, Guillermo; Altamirano Robles, Leopoldo; Osorio Roque, Leticia

    2003-04-01

    Behavioral modeling and simulation with analog hardware and mixed-signal high-level description languages (MHDLs) have driven the development of diverse simulation tools that can handle the requirements of modern designs. These systems have millions of embedded transistors and are radically diverse. This trend in simulation tools is exemplified by the development of languages for modeling and simulation whose applications include the re-use of complete systems, the construction of virtual prototypes, and the realization of test and synthesis. This paper presents the general architecture of a mixed hardware description language based on the standard 1076.1-1999 IEEE VHDL Analog and Mixed-Signal Extensions, known as VHDL-AMS. This architecture is novel in that it considers the modeling and simulation of faults. The main modules of the CAD tool are briefly described in order to establish the information flow and its transformations, starting from the description of a circuit model, going through lexical analysis, mathematical model generation, and the simulation core, and ending with the collection of the circuit behavior as simulation data. In addition, the mechanisms incorporated into the simulation core to handle faults in the circuit models are explained. Currently, the CAD tool works with algebraic and differential descriptions of the circuit models; nevertheless, the language design is open to handling different model types: fuzzy models, differential equations, transfer functions, and tables. This applies to fault models too; in this sense, the CAD tool considers the inclusion of mutants and saboteurs. To exemplify the results obtained so far, the simulated behavior of a circuit is shown when it is fault-free and when it has been modified by the inclusion of a fault as a mutant or a saboteur. The obtained results allow the realization of a virtual diagnosis for mixed circuits. The language works on UNIX systems; it was developed with an object-oriented methodology and programmed in C++.

  13. Design analysis tracking and data relay satellite simulation system

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The design and development of the equipment necessary to simulate the S-band multiple access link between user spacecraft, the Tracking and Data Relay Satellite, and a ground control terminal are discussed. The core of the S-band multiple access concept is the use of an Adaptive Ground Implemented Phased Array. The array contains thirty channels and provides the multiplexing and demultiplexing equipment required to demonstrate the ground implemented beam forming feature. The system provided will make it possible to demonstrate the performance of a desired user and ten interfering sources attempting to pass data through the multiple access system.

  14. Power control of SAFE reactor using fuzzy logic

    NASA Astrophysics Data System (ADS)

    Irvine, Claude

    2002-01-01

Controlling the 100 kW SAFE (Safe Affordable Fission Engine) reactor consists of the design and implementation of a fuzzy logic process control system to regulate dynamic variables related to nuclear system power. The first phase of development concentrates primarily on system power startup and regulation, maintaining core temperature equilibrium, and power profile matching. This paper discusses the experimental work performed in those areas. Nuclear core power from the fuel elements is simulated using resistive heating elements, while heat rejection is handled by a series of heat pipes. Both axial and radial nuclear power distributions are determined from neutronic modeling codes. The axial temperature profile of the simulated core is matched to the nuclear power profile by varying the resistance of the heating elements. The SAFE model establishes radial temperature profile equivalence by defining 32 control zones as the nodal coordinates. Control features also allow for slow warm-up, since complete shutoff can occur in the heat pipes if heat-source temperatures drop below a certain minimum value, depending on the specific fluid and gas combination in the heat pipe. The entire system is expected to be self-adaptive, i.e., capable of responding to long-range changes in the space environment. Particular attention in the development of the fuzzy logic algorithm shall ensure that the process remains at set point, virtually eliminating overshoot on startup and during in-process disturbances. The controller design will withstand harsh environments and applications where it might come in contact with water, corrosive chemicals, radiation fields, etc.
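A fuzzy controller of the kind described above maps a measured error through membership functions and a small rule base to an actuator adjustment. The sketch below is illustrative only (not the actual SAFE controller): the triangular membership shapes, rule consequents, and numeric ranges are all assumptions.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_power_adjust(temp_error):
    """Map a core-temperature error (K) to a heater power adjustment (%).

    Rules: error well below setpoint -> raise power; near zero -> hold;
    well above -> lower power. Weighted-average (singleton) defuzzification."""
    mu_low  = tri(temp_error, -40.0, -20.0, 0.0)
    mu_ok   = tri(temp_error, -10.0, 0.0, 10.0)
    mu_high = tri(temp_error, 0.0, 20.0, 40.0)
    # Singleton consequents: +10 % power, 0 %, -10 % power.
    num = mu_low * 10.0 + mu_ok * 0.0 + mu_high * (-10.0)
    den = mu_low + mu_ok + mu_high
    return num / den if den > 0 else 0.0
```

Because the output blends overlapping rules smoothly, the adjustment tapers to zero as the error approaches the setpoint, which is what suppresses overshoot during startup.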

  15. The Explosion Mechanism of Core-Collapse Supernovae: Progress in Supernova Theory and Experiments

    DOE PAGES

    Foglizzo, Thierry; Kazeroni, Rémi; Guilet, Jérôme; ...

    2015-01-01

The explosion of a core-collapse supernova depends on a sequence of events taking place in less than a second, in a region of a few hundred kilometers at the center of a supergiant star, after the stellar core approaches the Chandrasekhar mass and collapses into a proto-neutron star, and before a shock wave is launched across the stellar envelope. Theoretical efforts to understand stellar death focus on the mechanism which transforms the collapse into an explosion. Progress in understanding this mechanism is reviewed with particular attention to its asymmetric character. We highlight a series of successful studies connecting observations of supernova remnants and pulsar properties to the theory of core collapse using numerical simulations. The encouraging results from first-principles models in axisymmetric simulations are tempered by new puzzles in 3D. The diversity of explosion paths and the dependence on the pre-collapse stellar structure are stressed, as well as the need to gain a better understanding of hydrodynamical and MHD instabilities such as SASI and neutrino-driven convection. The shallow-water analogy of shock dynamics is presented as a comparative system where buoyancy effects are absent. This dynamical system can be studied numerically and also experimentally with a water fountain. Lastly, we analyse the potential of this complementary research tool for supernova theory. We also review its potential for public outreach in science museums.

  16. Graphics Processing Unit (GPU) Acceleration of the Goddard Earth Observing System Atmospheric Model

    NASA Technical Reports Server (NTRS)

    Putnam, William

    2011-01-01

    The Goddard Earth Observing System 5 (GEOS-5) is the atmospheric model used by the Global Modeling and Assimilation Office (GMAO) for a variety of applications, from long-term climate prediction at relatively coarse resolution, to data assimilation and numerical weather prediction, to very high-resolution cloud-resolving simulations. GEOS-5 is being ported to a graphics processing unit (GPU) cluster at the NASA Center for Climate Simulation (NCCS). By utilizing GPU co-processor technology, we expect to increase the throughput of GEOS-5 by at least an order of magnitude and accelerate the process of scientific exploration across all scales of global modeling, including: the large-scale, high-end application of non-hydrostatic, global, cloud-resolving modeling at 10- to 1-kilometer (km) global resolutions; intermediate-resolution seasonal climate and weather prediction at 50- to 25-km resolution on small clusters of GPUs; and long-range, coarse-resolution climate modeling, enabled on a small box of GPUs for the individual researcher. After being ported to the GPU cluster, the primary physics components and the dynamical core of GEOS-5 have demonstrated a potential speedup of 15-40 times over conventional processor cores. Performance improvements of this magnitude reduce the required scalability of 1-km, global, cloud-resolving models from an unfathomable 6 million cores to an attainable 200,000 GPU-enabled cores.

  17. The Explosion Mechanism of Core-Collapse Supernovae: Progress in Supernova Theory and Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Foglizzo, Thierry; Kazeroni, Rémi; Guilet, Jérôme

    The explosion of core-collapse supernova depends on a sequence of events taking place in less than a second in a region of a few hundred kilometers at the center of a supergiant star, after the stellar core approaches the Chandrasekhar mass and collapses into a proto-neutron star, and before a shock wave is launched across the stellar envelope. Theoretical efforts to understand stellar death focus on the mechanism which transforms the collapse into an explosion. Progress in understanding this mechanism is reviewed with particular attention to its asymmetric character. We highlight a series of successful studies connecting observations of supernovamore » remnants and pulsars properties to the theory of core-collapse using numerical simulations. The encouraging results from first principles models in axisymmetric simulations is tempered by new puzzles in 3D. The diversity of explosion paths and the dependence on the pre-collapse stellar structure is stressed, as well as the need to gain a better understanding of hydrodynamical and MHD instabilities such as SASI and neutrino-driven convection. The shallow water analogy of shock dynamics is presented as a comparative system where buoyancy effects are absent. This dynamical system can be studied numerically and also experimentally with a water fountain. Lastly, we analyse the potential of this complementary research tool for supernova theory. We also review its potential for public outreach in science museums.« less

  18. Scaling a Convection-Resolving RCM to Near-Global Scales

    NASA Astrophysics Data System (ADS)

    Leutwyler, D.; Fuhrer, O.; Chadha, T.; Kwasniewski, G.; Hoefler, T.; Lapillonne, X.; Lüthi, D.; Osuna, C.; Schar, C.; Schulthess, T. C.; Vogt, H.

    2017-12-01

    In recent years, the first decade-long, kilometer-scale RCM simulations have been performed on continental-scale computational domains. However, the planet Earth is still an order of magnitude larger, and thus the computational implications of performing global climate simulations at this resolution are challenging. We explore the gap between the currently established RCM simulations and global simulations by scaling the GPU-accelerated version of the COSMO model to a near-global computational domain. To this end, the evolution of an idealized moist baroclinic wave has been simulated over the course of 10 days with a grid spacing of down to 930 m. The computational mesh employs 36'000 x 16'001 x 60 grid points and covers 98.4% of the planet's surface. The code shows perfect weak scaling up to 4'888 nodes of the Piz Daint supercomputer and yields 0.043 simulated years per day (SYPD), approximately one seventh of the 0.2-0.3 SYPD required to conduct AMIP-type simulations. At half the resolution (1.9 km), however, we observed 0.23 SYPD. Besides the formation of frontal precipitating systems containing embedded, explicitly resolved convective motions, the simulations reveal a secondary instability that leads to cut-off warm-core cyclonic vortices in the cyclone's core once the grid spacing is refined to the kilometer scale. The explicit representation of embedded moist convection and of the previously unresolved instabilities exhibits physically different behavior in comparison to coarser-resolution simulations. The study demonstrates that global climate simulations using kilometer-scale resolution are imminent, and it serves as a baseline benchmark for global climate model applications and future exascale supercomputing systems.
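The throughput gap quoted above can be checked with simple arithmetic (the numbers are taken from the abstract; this is only a back-of-envelope sketch):

```python
# Back-of-envelope check of the simulated-years-per-day (SYPD) figures.
achieved_sypd = 0.043     # at 930 m grid spacing
required_sypd = 0.30      # upper end of the AMIP-type requirement

gap = required_sypd / achieved_sypd            # ~7x: "one seventh" of requirement
days_per_sim_year = 365.0 / achieved_sypd      # wall-clock days per simulated year
```

At 0.043 SYPD a single simulated year takes on the order of 8,500 wall-clock days, which is why AMIP-type global runs at this resolution await further speedups.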

  19. Flow Analysis of a Gas Turbine Low- Pressure Subsystem

    NASA Technical Reports Server (NTRS)

    Veres, Joseph P.

    1997-01-01

    The NASA Lewis Research Center is coordinating a project to numerically simulate aerodynamic flow in the complete low-pressure subsystem (LPS) of a gas turbine engine. The numerical model solves the three-dimensional Navier-Stokes flow equations through all components within the low-pressure subsystem as well as the external flow around the engine nacelle. The Advanced Ducted Propfan Analysis Code (ADPAC), which is being developed jointly by Allison Engine Company and NASA, is the Navier-Stokes flow code being used for LPS simulation. The majority of the LPS project is being done under a NASA Lewis contract with Allison. Other contributors to the project are NYMA and the University of Toledo. For this project, the Energy Efficient Engine designed by GE Aircraft Engines is being modeled. This engine includes a low-pressure system and a high-pressure system. An inlet, a fan, a booster stage, a bypass duct, a lobed mixer, a low-pressure turbine, and a jet nozzle comprise the low-pressure subsystem within this engine. The tightly coupled flow analysis evaluates aerodynamic interactions between all components of the LPS. The high-pressure core of the engine is simulated with a one-dimensional thermodynamic cycle code in order to provide boundary conditions to the detailed LPS model. This core consists of a high-pressure compressor, a combustor, and a high-pressure turbine. The three-dimensional LPS flow model is coupled to the one-dimensional core engine model to provide a "hybrid" flow model of the complete gas turbine Energy Efficient Engine. The resulting hybrid engine model evaluates the detailed interaction between the LPS components at design and off-design engine operating conditions while considering the lumped-parameter performance of the core engine.

  20. Thermal evolution of trans-Neptunian objects, icy satellites, and minor icy planets in the early solar system

    NASA Astrophysics Data System (ADS)

    Bhatia, Gurpreet Kaur; Sahijpal, Sandeep

    2017-12-01

    Numerical simulations are performed to understand the early thermal evolution and planetary-scale differentiation of icy bodies with radii in the range of 100-2500 km. These icy bodies include trans-Neptunian objects; minor icy planets (e.g., Ceres, Pluto); the icy satellites of Jupiter, Saturn, Uranus, and Neptune; and probably the icy-rocky cores of these planets. The decay energy of the radionuclides 26Al, 60Fe, 40K, 235U, 238U, and 232Th, along with impact-induced heating during the accretion of the icy bodies, was taken into account to thermally evolve these planetary bodies. The simulations were performed for a wide range of initial ice and rock (dust) mass fractions of the icy bodies, and three distinct accretion scenarios were used. The sinking of the rock mass fraction in primitive water oceans, produced by substantial melting of ice, could lead to planetary-scale differentiation, with the formation of a rocky core surrounded by a water ocean and an icy crust within the initial tens of millions of years of the solar system, provided the bodies accreted prior to the substantial decay of 26Al. However, over the course of billions of years, the heat produced by 40K, 235U, 238U, and 232Th could have raised the temperature of the interiors of the icy bodies to the melting point of iron and silicates, thereby leading to the formation of an iron core. Our simulations indicate the presence of an iron core even at the center of icy bodies with radii ≥500 km for different ice mass fractions.
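The sensitivity to accretion time mentioned above follows directly from the short 26Al half-life. The sketch below shows the exponential decay of the specific heating rate; the initial specific power q0 is an assumed, order-of-magnitude value for illustration, not a figure from the paper.

```python
import math

AL26_HALF_LIFE_MYR = 0.717   # 26Al half-life in millions of years

def decay_power(q0, t_myr, half_life_myr=AL26_HALF_LIFE_MYR):
    """Specific radiogenic heating rate (W/kg) at time t_myr after t = 0,
    given an initial rate q0 for a single radionuclide."""
    lam = math.log(2.0) / half_life_myr
    return q0 * math.exp(-lam * t_myr)

# A body accreting ~3 Myr after the first solids captures only a few percent
# of the 26Al heating available to one accreting immediately:
q0 = 1.5e-7                    # assumed initial 26Al specific power, W/kg
late = decay_power(q0, 3.0)    # ~5-6% of q0 after ~4 half-lives
```

This is why bodies accreting early can melt their ice and differentiate, while late accreters rely on the long-lived isotopes (40K, 235U, 238U, 232Th) acting over billions of years.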

  1. Evaluation of Cloud-resolving and Limited Area Model Intercomparison Simulations using TWP-ICE Observations. Part 1: Deep Convective Updraft Properties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Varble, A. C.; Zipser, Edward J.; Fridlind, Ann

    2014-12-27

    Ten 3D cloud-resolving model (CRM) simulations and four 3D limited area model (LAM) simulations of an intense mesoscale convective system observed on January 23-24, 2006 during the Tropical Warm Pool - International Cloud Experiment (TWP-ICE) are compared with each other and with observed radar reflectivity fields and dual-Doppler retrievals of vertical wind speeds, in an attempt to explain published results showing a high bias in simulated convective radar reflectivity aloft. This high bias results from large ice water contents, a product of large, strong convective updrafts, although hydrometeor size distribution assumptions modulate the size of the bias. Snow reflectivity can exceed 40 dBZ in a two-moment scheme when a constant bulk density of 100 kg m^-3 is used. Making snow mass more realistically proportional to area rather than volume should somewhat alleviate this problem. Graupel, unlike snow, produces high-biased reflectivity in all simulations. This is associated with large amounts of liquid water above the freezing level in updraft cores. Peak vertical velocities in deep convective updrafts are greater than dual-Doppler retrieved values, especially in the upper troposphere. Freezing of large rainwater contents lofted above the freezing level in simulated updraft cores greatly contributes to these excessive upper-tropospheric vertical velocities. Strong simulated updraft cores are nearly undiluted, with some showing supercell characteristics. Decreasing the horizontal grid spacing from 900 meters to 100 meters weakens strong updrafts, but not enough to match observational retrievals. Therefore, overly intense simulated updrafts may partly be a product of interactions between convective dynamics, parameterized microphysics, and large-scale environmental biases that promote different convective modes and strengths than observed.

  2. Two-dimensional Core-collapse Supernova Explosions Aided by General Relativity with Multidimensional Neutrino Transport

    NASA Astrophysics Data System (ADS)

    O’Connor, Evan P.; Couch, Sean M.

    2018-02-01

    We present results from simulations of core-collapse supernovae in FLASH using a newly implemented multidimensional neutrino transport scheme and a newly implemented general relativistic (GR) treatment of gravity. We use a two-moment method with an analytic closure (so-called M1 transport) for the neutrino transport. This transport is multienergy, multispecies, velocity dependent, and truly multidimensional, i.e., we do not assume the commonly used “ray-by-ray” approximation. Our GR gravity is implemented in our Newtonian hydrodynamics simulations via a GR effective potential (GREP) that closely reproduces the GR structure of neutron stars and has been shown to match GR simulations of core collapse quite well. In axisymmetry, we simulate core-collapse supernovae with four different progenitor models in both Newtonian and GR gravity. We find that the more compact proto–neutron star structure realized in simulations with GR gravity gives higher neutrino luminosities and higher neutrino energies. These differences in turn give higher neutrino heating rates (upward of ∼20%–30% over the corresponding Newtonian gravity simulations) that increase the efficacy of the neutrino mechanism. Three of the four models successfully explode in the simulations assuming GREP gravity. In our Newtonian gravity simulations, two of the four models explode, but at times much later than observed in our GR gravity simulations. Our results, in both Newtonian and GR gravity, compare well with several other studies in the literature. These results conclusively show that the approximation of Newtonian gravity for simulating the core-collapse supernova central engine is not acceptable. We also simulate four additional models in GR gravity to highlight the growing disparity between parameterized 1D models of core-collapse supernovae and the current generation of 2D models.

  3. Runtime Performance and Virtual Network Control Alternatives in VM-Based High-Fidelity Network Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoginath, Srikanth B; Perumalla, Kalyan S; Henz, Brian J

    2012-01-01

    In prior work (Yoginath and Perumalla, 2011; Yoginath, Perumalla and Henz, 2012), the motivation, challenges and issues were articulated in favor of virtual time ordering of Virtual Machines (VMs) in network simulations hosted on multi-core machines. Two major components in the overall virtualization challenge are (1) virtual timeline establishment and scheduling of VMs, and (2) virtualization of inter-VM communication. Here, we extend prior work by presenting scaling results for the first component, with experimental results on up to 128 VMs scheduled in virtual time order on a single 12-core host. We also explore the solution space of design alternatives for the second component, and present performance results from a multi-threaded, multi-queue implementation of inter-VM network control for synchronized execution with VM scheduling, incorporated in our NetWarp simulation system.

  4. Numerical simulation of the geodynamo reaches Earth's core dynamical regime

    NASA Astrophysics Data System (ADS)

    Aubert, J.; Gastine, T.; Fournier, A.

    2016-12-01

    Numerical simulations of the geodynamo have been successful at reproducing a number of static (field morphology) and kinematic (secular variation patterns, core surface flows and westward drift) features of Earth's magnetic field, making them a tool of choice for the analysis and retrieval of geophysical information on Earth's core. However, classical numerical models have been run in a parameter regime far from that of the real system, prompting the question of whether we do get "the right answers for the wrong reasons", i.e. whether the agreement between models and nature simply occurs by chance and without physical relevance in the dynamics. In this presentation, we show that classical models succeed in describing the geodynamo because their large-scale spatial structure is essentially invariant as one progresses along a well-chosen path in parameter space to Earth's core conditions. This path is constrained by the need to enforce the relevant force balance (MAC or Magneto-Archimedes-Coriolis) and preserve the ratio of the convective overturn and magnetic diffusion times. Numerical simulations performed along this path are shown to be spatially invariant at scales larger than that where the magnetic energy is ohmically dissipated. This property enables the definition of large-eddy simulations that show good agreement with direct numerical simulations in the range where both are feasible, and that can be computed at unprecedented values of the control parameters, such as an Ekman number E = 10^-8. Combining direct and large-eddy simulations, large-scale invariance is observed over half the logarithmic distance in parameter space between classical models and Earth.
The conditions reached at this mid-point of the path are furthermore shown to be representative of the rapidly-rotating, asymptotic dynamical regime in which Earth's core resides, with a MAC force balance undisturbed by viscosity or inertia, the enforcement of a Taylor state and strong-field dynamo action. We conclude that numerical modelling has advanced to a stage where it is possible to use models correctly representing the statics, kinematics and now the dynamics of the geodynamo. This opens the way to a better analysis of the geomagnetic field in the time and space domains.

  5. Parallelization of a spatial random field characterization process using the Method of Anchored Distributions and the HTCondor high throughput computing system

    NASA Astrophysics Data System (ADS)

    Osorio-Murillo, C. A.; Over, M. W.; Frystacky, H.; Ames, D. P.; Rubin, Y.

    2013-12-01

    A new software application called MAD# has been coupled with the HTCondor high throughput computing system to aid scientists and educators with the characterization of spatial random fields and to enable understanding of the spatial distribution of parameters used in hydrogeologic and related modeling. MAD# is an open-source desktop software application used to characterize spatial random fields using direct and indirect information through a Bayesian inverse modeling technique called the Method of Anchored Distributions (MAD). MAD relates indirect information to a target spatial random field via a forward simulation model. MAD# executes the inverse process by running the forward model multiple times, transferring information from the indirect data to the target variable. MAD# uses two parallelization profiles according to the computational resources available: a single computer with multiple cores, or multiple computers with multiple cores through HTCondor. HTCondor is a system that manages a cluster of desktop computers, submitting serial or parallel jobs using scheduling policies, resource monitoring, and a job queuing mechanism. This poster will show how MAD# reduces the execution time of random field characterization using these two parallel approaches in different case studies. A test of the approach was conducted using a 1D problem with 400 cells to characterize saturated conductivity, residual water content, and shape parameters of the Mualem-van Genuchten model in four materials via the HYDRUS model. The number of simulations evaluated in the inversion was 10 million. Using the one-computer approach (eight cores), 100,000 simulations were evaluated in 12 hours (approximately 1200 hours for all 10 million). In the evaluation on HTCondor, 32 desktop computers (132 cores) were used, with a processing time of 60 non-continuous hours over five days. HTCondor reduced the processing time for uncertainty characterization by a factor of 20 (from 1200 hours to 60 hours).
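The factor-of-20 claim above follows from a simple extrapolation of the quoted throughput numbers (values taken from the abstract; a sketch, not the MAD# code):

```python
# Reproduce the speedup arithmetic quoted in the abstract.
single_host_sims = 100_000       # simulations completed on one 8-core host
single_host_hours = 12.0         # in this many hours
total_sims = 10_000_000          # total simulations required by the inversion

# Extrapolated single-host wall time for the full ensemble:
single_host_total = total_sims / single_host_sims * single_host_hours  # 1200 h

condor_hours = 60.0              # observed with 32 hosts (132 cores) on HTCondor
speedup = single_host_total / condor_hours                             # 20x
```

Note the speedup (20x) is well below the core-count ratio (132/8 = 16.5x of ideal continuous use would give ~73 h); the 60 non-continuous hours reflect opportunistic scheduling on desktop machines rather than dedicated nodes.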

  6. Final Technical Report for Contract No. DE-EE0006332, "Integrated Simulation Development and Decision Support Tool-Set for Utility Market and Distributed Solar Power Generation"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cormier, Dallas; Edra, Sherwin; Espinoza, Michael

    This project will enable utilities to develop long-term strategic plans that integrate high levels of renewable energy generation, and to better plan power system operations under high renewable penetration. The program developed forecast data streams for decision support and effective integration of centralized and distributed solar power generation in utility operations. This toolset focused on real-time simulation of distributed power generation within utility grids, with emphasis on potential applications in day-ahead (market) and real-time (reliability) utility operations. The project team developed and demonstrated methodologies for quantifying the impact of distributed solar generation on core utility operations, identified protocols for internal data communication requirements, and worked with utility personnel to adapt the new distributed generation (DG) forecasts seamlessly within existing Load and Generation procedures through a sophisticated DMS. This project supported the objectives of the SunShot Initiative and SUNRISE by enabling core utility operations to enhance their simulation capability to analyze and prepare for the impacts of high penetrations of solar on the power grid. The impact of high-penetration solar PV on utility operations is not limited to control centers, but extends across many core operations. The benefits of an enhanced DMS using state-of-the-art solar forecast data were demonstrated within this project and have had immediate, direct operational cost savings for Energy Marketing for Day Ahead generation commitments, Real Time Operations, Load Forecasting (at an aggregate system level for Day Ahead), Demand Response, Long Term Planning (asset management), Distribution Operations, and core ancillary services as required for balancing and reliability. This provided power system operators with the necessary tools and processes to operate the grid in a reliable manner under high renewable penetration.

  7. Interface requirements to couple thermal-hydraulic codes to severe accident codes: ATHLET-CD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trambauer, K.

    1997-07-01

    The system code ATHLET-CD is being developed by GRS in cooperation with IKE and IPSN. Its field of application comprises the whole spectrum of leaks and large breaks, as well as operational and abnormal transients, for LWRs and VVERs. At present the analyses cover the in-vessel thermal-hydraulics, the early phases of core degradation, and the release of fission products and aerosols from the core and their transport in the reactor coolant system. The aim of the code development is to extend the simulation of core degradation up to failure of the reactor pressure vessel and to cover all physically reasonable accident sequences for western and eastern LWRs, including RBMKs. The ATHLET-CD structure is highly modular in order to include a manifold spectrum of models and to offer an optimum basis for further development. The code consists of four general modules describing the reactor coolant system thermal-hydraulics, the core degradation, the fission product release from the core, and the fission product and aerosol transport. Each general module consists of basic modules which correspond to the process to be simulated or to its specific purpose. Besides the code structure based on the physical modelling, the code follows four strictly separated steps during the course of a calculation: (1) input of structure, geometrical data, and initial and boundary conditions; (2) initialization of derived quantities; (3) steady-state calculation or input of restart data; and (4) transient calculation. In this paper, the transient solution method is briefly presented and the coupling methods are discussed. Three aspects have to be considered for the coupling of different modules in one code system. The first is the conservation of mass and energy in the different subsystems: fluid, structures, and fission products and aerosols. The second is the convergence of the numerical solution and the stability of the calculation. The third aspect is related to code performance and running time.

  8. Hysteresis behaviors in a ferrimagnetic Ising nanotube with hexagonal core-shell structure

    NASA Astrophysics Data System (ADS)

    Liu, Ying; Wang, Wei; Lv, Dan; Zhao, Xue-ru; Huang, Te; Wang, Ze-yuan

    2018-07-01

    Monte Carlo simulation has been employed to study the hysteresis behaviors of a ferrimagnetic mixed-spin (1, 3/2) Ising nanotube with a hexagonal core-shell structure. The effects of different single-ion anisotropies, exchange couplings, and temperatures on the hysteresis loops of the system and its sublattices are discussed in detail. Multiple hysteresis loops, such as triple loops, are observed in the system under certain physical parameters. It is found that the anisotropy, the exchange coupling, and the temperature strongly affect the coercivities and remanences of the system and the sublattices. Comparing our results with other theoretical and experimental studies, we find a satisfactory qualitative agreement.
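The Monte Carlo machinery behind such studies is a Metropolis update over the discrete spin states. The sketch below reduces the problem to a mixed-spin (1, 3/2) chain of paired core/shell sites rather than the paper's hexagonal nanotube, and uses an illustrative ferromagnetic coupling; the lattice, couplings, and temperatures are assumptions, not the paper's model.

```python
import math
import random

S_CORE = (-1.0, 0.0, 1.0)            # spin-1 states (core sublattice)
S_SHELL = (-1.5, -0.5, 0.5, 1.5)     # spin-3/2 states (shell sublattice)

def sweep(core, shell, j_int, h, beta, rng):
    """One Metropolis sweep; H = -j_int * sum_i s_core_i * s_shell_i - h * sum s."""
    for i in range(len(core)):
        for spins, states, other in ((core, S_CORE, shell),
                                     (shell, S_SHELL, core)):
            new = rng.choice(states)               # symmetric proposal
            # Energy change from updating site i only:
            d_e = -(j_int * other[i] + h) * (new - spins[i])
            if d_e <= 0 or rng.random() < math.exp(-beta * d_e):
                spins[i] = new

def magnetization(core, shell):
    return (sum(core) + sum(shell)) / (len(core) + len(shell))

rng = random.Random(42)
core, shell = [1.0] * 50, [1.5] * 50
for _ in range(200):                               # equilibrate at strong field
    sweep(core, shell, j_int=1.0, h=2.0, beta=2.0, rng=rng)
m_up = magnetization(core, shell)                  # near saturation (~1.25)
```

A hysteresis loop is traced by sweeping the field h slowly from positive to negative and back, recording the magnetization at each step; the coercivity is where the recorded magnetization changes sign.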

  9. UAS-Systems Integration, Validation, and Diagnostics Simulation Capability

    NASA Technical Reports Server (NTRS)

    Buttrill, Catherine W.; Verstynen, Harry A.

    2014-01-01

    As part of the Phase 1 efforts of NASA's UAS-in-the-NAS Project, a task was initiated to explore the merits of developing a system simulation capability for UAS to address airworthiness certification requirements. The core of the capability would be a software representation of an unmanned vehicle, including all of the relevant avionics and flight control system components. Specific system elements could be replaced with hardware representations to provide hardware-in-the-loop (HWITL) test and evaluation capability. The UAS Systems Integration and Validation Laboratory (UAS-SIVL) was created to provide a UAS-systems integration, validation, and diagnostics hardware-in-the-loop simulation capability. This paper discusses how SIVL provides a robust and flexible simulation framework that permits the study of failure modes, effects, propagation paths, criticality, and mitigation strategies to help develop safety, reliability, and design data that can assist with the development of certification standards, means of compliance, and design best practices for civil UAS.

  10. Preliminary LOCA analysis of the westinghouse small modular reactor using the WCOBRA/TRAC-TF2 thermal-hydraulics code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, J.; Kucukboyaci, V. N.; Nguyen, L.

    2012-07-01

    The Westinghouse Small Modular Reactor (SMR) is an 800 MWt (>225 MWe) integral pressurized water reactor (iPWR) with all primary components, including the steam generator and the pressurizer, located inside the reactor vessel. The reactor core is based on a partial-height 17x17 fuel assembly design used in the AP1000® reactor core. The Westinghouse SMR utilizes passive safety systems and proven components from the AP1000 plant design, with a compact containment that houses the integral reactor vessel and the passive safety systems. A preliminary loss-of-coolant accident (LOCA) analysis of the Westinghouse SMR has been performed using the WCOBRA/TRAC-TF2 code, simulating a transient caused by a double-ended guillotine (DEG) break in the direct vessel injection (DVI) line. WCOBRA/TRAC-TF2 is a new-generation Westinghouse LOCA thermal-hydraulics code evolving from the US NRC-licensed WCOBRA/TRAC code. It is designed to simulate PWR LOCA events from the smallest break size to the largest (DEG cold leg). A significant number of fluid dynamics models and heat transfer models were developed or improved in WCOBRA/TRAC-TF2, and a large number of separate-effects and integral-effects tests were performed for rigorous code assessment and validation. WCOBRA/TRAC-TF2 was introduced into the Westinghouse SMR design phase to assist a quick and robust passive cooling system design and to identify thermal-hydraulic phenomena for the development of the SMR Phenomena Identification Ranking Table (PIRT). The LOCA analysis of the Westinghouse SMR demonstrates that the DEG DVI break LOCA is mitigated by the injection and venting from the Westinghouse SMR passive safety systems without core heat-up, achieving long-term core cooling. (authors)

  11. CFD Analysis of Upper Plenum Flow for a Sodium-Cooled Small Modular Reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kraus, A.; Hu, R.

    2015-01-01

    Upper plenum flow behavior is important for many operational and safety issues in sodium fast reactors. The Prototype Gen-IV Sodium Fast Reactor (PGSFR), a pool-type, 150 MWe output power design, was used as a reference case for a detailed characterization of upper plenum flow for normal operating conditions. Computational Fluid Dynamics (CFD) simulation was utilized with detailed geometric modeling of major structures. Core outlet conditions based on prior system-level calculations were mapped to approximate the outlet temperatures and flow rates for each core assembly. Core outlet flow was found to largely bypass the Upper Internal Structures (UIS). Flow curves over the shield and circulates within the pool before exiting the plenum. Cross-flows and temperatures were evaluated near the core outlet, leading to a proposed height for the core outlet thermocouples to ensure accurate assembly-specific temperature readings. A passive scalar was used to evaluate fluid residence time from core outlet to IHX inlet, which can be used to assess the applicability of various methods for monitoring fuel failure. Additionally, the gas entrainment likelihood was assessed based on the CFD simulation results. Based on the evaluation of velocity gradients and turbulent kinetic energies and the available gas entrainment criteria in the literature, it was concluded that significant gas entrainment is unlikely for the current PGSFR design.

  12. Equation of state and critical point behavior of hard-core double-Yukawa fluids.

    PubMed

    Montes, J; Robles, M; López de Haro, M

    2016-02-28

    A theoretical study on the equation of state and the critical point behavior of hard-core double-Yukawa fluids is presented. Thermodynamic perturbation theory, restricted to first order in the inverse temperature and having the hard-sphere fluid as the reference system, is used to derive a relatively simple analytical equation of state of hard-core multi-Yukawa fluids. Using such an equation of state, the compressibility factor and phase behavior of six representative hard-core double-Yukawa fluids are examined and compared with available simulation results. The effect of varying the parameters of the hard-core double-Yukawa intermolecular potential on the location of the critical point is also analyzed using different perspectives. The relevance of this analysis for fluids whose molecules interact with realistic potentials is also pointed out.
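
    The structure of such a first-order perturbation equation of state can be sketched with a Carnahan-Starling hard-sphere reference and the crude low-density closure g_HS(r) ~ 1; the double-Yukawa parameters below are illustrative, not taken from the paper:

```python
import numpy as np

def z_carnahan_starling(eta):
    """Hard-sphere reference compressibility factor (Carnahan-Starling)."""
    return (1.0 + eta + eta**2 - eta**3) / (1.0 - eta) ** 3

def z_first_order(eta, t_star, z1=1.8, z2=4.0, amp1=2.0, amp2=1.0):
    """First-order perturbation estimate for a hard-core double-Yukawa fluid,
    u(x)/epsilon = -(amp1*exp(-z1*(x-1)) - amp2*exp(-z2*(x-1)))/x, x = r/sigma > 1,
    with the crude closure g_HS(r) ~ 1 (a sketch, not the paper's full theory)."""
    x = np.linspace(1.0, 30.0, 20000)
    u = -(amp1 * np.exp(-z1 * (x - 1.0)) - amp2 * np.exp(-z2 * (x - 1.0))) / x
    integral = np.trapz(u * x**2, x)      # reduced integral of u(r) r^2
    rho = 6.0 * eta / np.pi               # reduced number density rho*sigma^3
    return z_carnahan_starling(eta) + 2.0 * np.pi * rho * integral / t_star
```

    The attractive tail lowers Z below the hard-sphere value, which is the mechanism that produces a liquid-vapor critical point at sufficiently low reduced temperature.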

  13. On the recovery of electric currents in the liquid core of the Earth

    NASA Astrophysics Data System (ADS)

    Kuslits, Lukács; Prácser, Ernő; Lemperger, István

    2017-04-01

    Inverse geodynamo modelling has become a standard method for obtaining a more accurate image of the processes within the outer core. In this poster, excerpts from the preliminary results of another approach are presented, which explores the possibility of recovering the currents within the liquid core directly, using Main Magnetic Field data. The flow of charge can be approximated by current systems with various geometries. Based on previous geodynamo simulations, current coils can furnish a good initial geometry for such an estimation. The presentation introduces our preliminary test results and a study of the reliability of the applied inversion algorithm for different numbers of coils, distributed in a grid symbolizing the domain between the inner-core and core-mantle boundaries. We shall also present inverted current structures using Main Field model data.
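
    Recovering coil currents from field data is, at its simplest, a linear inversion: the field is linear in the currents, so a damped least-squares solve suffices. The sketch below uses a random forward matrix purely for illustration; in a real inversion each row would come from a Biot-Savart integral over one coil's geometry:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical linear forward operator G: field at 60 observation points
# produced by unit currents in 12 coils (random stand-in for Biot-Savart
# kernels of the coil grid).
n_obs, n_coils = 60, 12
G = rng.normal(size=(n_obs, n_coils))
i_true = rng.normal(size=n_coils)                  # "true" coil currents
b = G @ i_true + 1e-6 * rng.normal(size=n_obs)     # noisy field observations

# Damped (Tikhonov) least-squares inversion for the coil currents.
lam = 1e-8
i_est = np.linalg.solve(G.T @ G + lam * np.eye(n_coils), G.T @ b)
```

    The reliability study described in the abstract then asks how this recovery degrades as the number of coils grows and the forward operator becomes ill-conditioned.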

  14. Characterizing core-periphery structure of complex network by h-core and fingerprint curve

    NASA Astrophysics Data System (ADS)

    Li, Simon S.; Ye, Adam Y.; Qi, Eric P.; Stanley, H. Eugene; Ye, Fred Y.

    2018-02-01

    It is proposed that the core-periphery structure of complex networks can be simulated by h-cores and fingerprint curves. While the features of the core structure are characterized by the h-core, the features of the periphery structure are visualized by a rose or spiral curve, the fingerprint curve, linked to entire-network parameters. It is suggested that a complex network can be approximated by h-core and rose curves as a first-order Fourier-like approach, where the core-periphery structure is characterized by five parameters: network h-index, network radius, degree power, network density and average clustering coefficient. The simulation thus resembles a Fourier-like analysis.
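
    The network h-index underlying the h-core is directly computable from the degree sequence: it is the largest h such that at least h nodes have degree at least h. A minimal sketch:

```python
def network_h_index(degrees):
    """Network h-index: largest h such that at least h nodes have
    degree >= h (the nodes attaining it form the h-core)."""
    degs = sorted(degrees, reverse=True)
    h = 0
    for rank, d in enumerate(degs, start=1):
        if d >= rank:
            h = rank
        else:
            break
    return h
```

    For a triangle (degrees 2, 2, 2) the h-index is 2; for a 4-leaf star (degrees 4, 1, 1, 1, 1) it is 1, reflecting the star's thin core.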

  15. Uranus and Neptune: Refugees from the Jupiter-Saturn zone?

    NASA Astrophysics Data System (ADS)

    Thommes, E. W.; Duncan, M. J.; Levison, H. F.

    1999-09-01

    Planetesimal accretion models of planet formation have been quite successful at reproducing the terrestrial region of the Solar System. However, in the outer Solar System these models run into problems, and it becomes very difficult to grow bodies to the current mass of the ``ice giants,'' Uranus and Neptune. Here we present an alternative scenario to in-situ formation of the ice giants. In addition to the Jupiter and Saturn solid cores, several more bodies of mass ~10 MEarth or more are likely to have formed in the region between 4 and 10 AU. As Jupiter's core, and perhaps Saturn's, accreted nebular gas, the other nearby bodies must have been scattered outward. Dynamical friction with the trans-Saturnian part of the planetesimal disk would have acted to decouple these ``failed cores'' from their scatterer and to circularize their orbits. Numerical simulations presented here show that systems very similar to our outer Solar System (including Uranus, Neptune, the Kuiper belt, and the scattered disk) are a natural product of this process.

  16. Real-time electron dynamics for massively parallel excited-state simulations

    NASA Astrophysics Data System (ADS)

    Andrade, Xavier

    The simulation of the real-time dynamics of electrons, based on time dependent density functional theory (TDDFT), is a powerful approach to study electronic excited states in molecular and crystalline systems. What makes the method attractive is its flexibility to simulate different kinds of phenomena beyond the linear-response regime, including strongly-perturbed electronic systems and non-adiabatic electron-ion dynamics. Electron-dynamics simulations are also attractive from a computational point of view. They can run efficiently on massively parallel architectures due to the low communication requirements. Our implementations of electron dynamics, based on the codes Octopus (real-space) and Qball (plane-waves), allow us to simulate systems composed of thousands of atoms and to obtain good parallel scaling up to 1.6 million processor cores. Due to the versatility of real-time electron dynamics and its parallel performance, we expect it to become the method of choice to apply the capabilities of exascale supercomputers for the simulation of electronic excited states.

  17. Orthogonal recursive bisection as data decomposition strategy for massively parallel cardiac simulations.

    PubMed

    Reumann, Matthias; Fitch, Blake G; Rayshubskiy, Aleksandr; Pitman, Michael C; Rice, John J

    2011-06-01

    We present the orthogonal recursive bisection algorithm, which hierarchically segments the anatomical model structure into subvolumes that are distributed to cores. The anatomy is derived from the Visible Human Project, with electrophysiology based on the FitzHugh-Nagumo (FHN) and ten Tusscher (TT04) models with monodomain diffusion. Benchmark simulations with up to 16,384 and 32,768 cores on IBM Blue Gene/P and L supercomputers, for both FHN and TT04, show good load balancing with almost perfect speedup factors that are close to linear in the number of cores. Hence, strong scaling is demonstrated. With 32,768 cores, a 1000 ms simulation of a full heart beat requires about 6.5 min of wall clock time for the FHN model. For the largest machine partitions, the simulations execute at a rate of 0.548 s (BG/P) and 0.394 s (BG/L) of wall clock time per 1 ms of simulation time. To our knowledge, these simulations show strong scaling to substantially higher numbers of cores than reported previously for organ-level simulation of the heart, thus significantly reducing run times. The ability to reduce run times could play a critical role in enabling wider use of cardiac models in research and clinical applications.
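
    Strong-scaling claims like these can be checked against the standard speedup and efficiency definitions; this is a generic sketch, not code from the paper:

```python
def speedup(t_base, t_parallel):
    """Speedup of a run relative to a baseline run of the same problem."""
    return t_base / t_parallel

def efficiency(t_base, p_base, t_parallel, p_parallel):
    """Strong-scaling parallel efficiency relative to a smaller-partition
    baseline; 1.0 means perfectly linear scaling in core count."""
    return speedup(t_base, t_parallel) * p_base / p_parallel
```

    For example, halving the wall-clock time when the core count doubles gives an efficiency of 1.0, the "almost perfect speedup" regime the abstract reports.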

  18. Space-based infrared sensors of space target imaging effect analysis

    NASA Astrophysics Data System (ADS)

    Dai, Huayu; Zhang, Yasheng; Zhou, Haijun; Zhao, Shuang

    2018-02-01

    Target identification is one of the core problems of ballistic missile defense systems, and infrared imaging simulation is an important means of target detection and recognition. This paper first establishes a point-source imaging model for space-based infrared sensors observing ballistic targets above the atmosphere; it then simulates the infrared imaging of such targets from two aspects, the space-based sensor's camera parameters and the target characteristics, and analyzes the imaging effects of camera line-of-sight jitter, camera system noise, and different wavebands on the target.

  19. Parallelized computation for computer simulation of electrocardiograms using personal computers with multi-core CPU and general-purpose GPU.

    PubMed

    Shen, Wenfeng; Wei, Daming; Xu, Weimin; Zhu, Xin; Yuan, Shizhong

    2010-10-01

    Biological computations like electrocardiological modelling and simulation usually require high-performance computing environments. This paper introduces an implementation of parallel computation for computer simulation of electrocardiograms (ECGs) in a personal computer environment with an Intel Core 2 Quad Q6600 CPU and a GeForce 8800 GT GPU, with software support from OpenMP and CUDA. It was tested in three parallelization setups: (a) a four-core CPU without a general-purpose GPU, (b) a general-purpose GPU plus one CPU core, and (c) a four-core CPU plus a general-purpose GPU. To effectively take advantage of a multi-core CPU and a general-purpose GPU, an algorithm based on load-prediction dynamic scheduling was developed and applied to setup (c). In the simulation with 1600 time steps, the speedup of the parallel computation as compared to the serial computation was 3.9 in setup (a), 16.8 in setup (b), and 20.0 in setup (c). This study demonstrates that a current PC with a multi-core CPU and a general-purpose GPU provides a good environment for parallel computations in biological modelling and simulation studies. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.
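
    The hybrid speedup of 20.0 is close to the sum of the CPU-only and GPU-only speedups, which is what ideal load-balanced scheduling would give when the two devices work concurrently on disjoint parts of the workload. A quick consistency check (our interpretation, not the paper's analysis):

```python
def combined_speedup(s_cpu, s_gpu):
    """If CPU and GPU process disjoint work concurrently with ideal load
    balancing, their throughputs (speedups over the same serial baseline) add."""
    return s_cpu + s_gpu

ideal = combined_speedup(3.9, 16.8)      # speedups of setups (a) and (b)
observed = 20.0                          # reported speedup of setup (c)
balance_efficiency = observed / ideal    # how close scheduling gets to ideal
```

    The observed hybrid speedup reaches about 97% of the ideal sum, suggesting the load-prediction scheduler keeps both devices busy nearly all the time.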

  20. Digital core based transmitted ultrasonic wave simulation and velocity accuracy analysis

    NASA Astrophysics Data System (ADS)

    Zhu, Wei; Shan, Rui

    2016-06-01

    Transmitted ultrasonic wave simulation (TUWS) in a digital core is one of the important elements of digital rock physics and is used to study wave propagation in porous cores and to calculate equivalent velocity. When simulating wave propagation in a 3D digital core, two additional layers are attached to the two surfaces perpendicular to the wave direction, and one planar wave source and two receiver arrays are properly installed. After source excitation, the two receivers record the incident and transmitted waves of the digital rock. The propagation velocity, which is taken as the velocity of the digital core, is computed from the picked peak-time difference between the two recorded waves. To evaluate the accuracy of TUWS, a digital core was fully saturated in turn with gas, oil, and water, and the corresponding velocities were calculated. The velocities increase with decreasing wave frequency within the simulated frequency band, which is considered to be the result of scattering. When the pore fluid is varied from gas to oil and finally to water, the velocity-variation characteristics between the different frequencies are similar and approximately follow the variation law of velocities obtained from linear elastic statics simulation (LESS), which has been widely used, although their absolute values differ. The results of this paper show that the transmitted ultrasonic simulation has high relative precision.
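
    The velocity pick described above reduces to locating the peaks of the incident and transmitted records and dividing the travel length by their time difference. A minimal sketch with synthetic Gaussian pulses (core length, velocity, and pulse widths are all hypothetical):

```python
import numpy as np

def velocity_from_peaks(t, incident, transmitted, travel_length):
    """Equivalent velocity from the peak-time difference between the
    incident and transmitted records."""
    dt = t[np.argmax(transmitted)] - t[np.argmax(incident)]
    return travel_length / dt

# Synthetic records: Gaussian pulses separated by the expected travel time.
t = np.linspace(0.0, 100e-6, 10001)   # 100 us record, 10 ns sampling
length = 0.05                          # 5 cm digital core (hypothetical)
v_true = 2500.0                        # m/s (hypothetical)
delay = length / v_true                # 20 us travel time
incident = np.exp(-((t - 20e-6) / 2e-6) ** 2)
transmitted = 0.5 * np.exp(-((t - 20e-6 - delay) / 2e-6) ** 2)
v = velocity_from_peaks(t, incident, transmitted, length)
```

    The pick is quantized to the sampling interval, so finer sampling (or sub-sample interpolation around the peak) tightens the velocity estimate.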

  1. Nonlinear dynamic simulation of single- and multi-spool core engines

    NASA Technical Reports Server (NTRS)

    Schobeiri, T.; Lippke, C.; Abouelkheir, M.

    1993-01-01

    In this paper a new computational method for accurate simulation of the nonlinear dynamic behavior of single- and multi-spool core engines, turbofan engines, and power generation gas turbine engines is presented. In order to perform the simulation, a modularly structured computer code has been developed which includes individual mathematical modules representing various engine components. The generic structure of the code enables the dynamic simulation of arbitrary engine configurations ranging from single-spool thrust generation to multi-spool thrust/power generation engines under adverse dynamic operating conditions. For precise simulation of turbine and compressor components, row-by-row calculation procedures were implemented that account for the specific turbine and compressor cascade and blade geometry and characteristics. The dynamic behavior of the subject engine is calculated by solving a number of systems of partial differential equations, which describe the unsteady behavior of the individual components. In order to ensure the capability, accuracy, robustness, and reliability of the code, comprehensive critical performance assessment and validation tests were performed. As representatives, three different transient cases with single- and multi-spool thrust and power generation engines were simulated. The transient cases range from operating with a prescribed fuel schedule, to extreme load changes, to generator and turbine shut down.
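
    The spool dynamics at the heart of such a simulation reduce to a power-balance ODE for each shaft. A deliberately minimal single-spool sketch with placeholder component models (constant turbine shaft power and a cube-law compressor); all numbers are hypothetical and stand in for the row-by-row component modules of the paper:

```python
# Single-spool rotor transient: I * omega * domega/dt = P_turbine - P_compressor,
# with placeholder models P_turbine = const and P_compressor = k * omega**3.

I = 2.0          # spool moment of inertia, kg*m^2 (hypothetical)
P_turb = 4.0e6   # turbine shaft power, W (hypothetical)
k = 1.0          # compressor power coefficient, W/(rad/s)^3 (hypothetical)

omega = 100.0    # initial spool speed, rad/s
dt = 1.0e-3      # explicit Euler time step, s
for _ in range(20000):
    domega = (P_turb - k * omega**3) / (I * omega)
    omega += dt * domega

omega_eq = (P_turb / k) ** (1.0 / 3.0)   # steady state: P_turbine = P_compressor
```

    The spool accelerates while turbine power exceeds compressor power and settles at the speed where the two balance; a full engine model couples many such ODEs through the gas-path conservation equations.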

  2. Ligand structure and mechanical properties of single-nanoparticle-thick membranes.

    PubMed

    Salerno, K Michael; Bolintineanu, Dan S; Lane, J Matthew D; Grest, Gary S

    2015-06-01

    The high mechanical stiffness of single-nanoparticle-thick membranes is believed to result from the local structure of ligand coatings that mediate interactions between nanoparticles. These ligand structures are not directly observable experimentally. We use molecular dynamics simulations to observe variations in ligand structure and simultaneously measure variations in membrane mechanical properties. We have shown previously that ligand end group has a large impact on ligand structure and membrane mechanical properties. Here we introduce and apply quantitative molecular structure measures to these membranes and extend analysis to multiple nanoparticle core sizes and ligand lengths. Simulations of nanoparticle membranes with a nanoparticle core diameter of 4 or 6 nm, a ligand length of 11 or 17 methylenes, and either carboxyl (COOH) or methyl (CH(3)) ligand end groups are presented. In carboxyl-terminated ligand systems, structure and interactions are dominated by an end-to-end orientation of ligands. In methyl-terminated ligand systems large ordered ligand structures form, but nanoparticle interactions are dominated by disordered, partially interdigitated ligands. Core size and ligand length also affect both ligand arrangement within the membrane and the membrane's macroscopic mechanical response, but are secondary to the role of the ligand end group. Moreover, the particular end group (COOH or CH(3)) alters the nature of how ligand length, in turn, affects the membrane properties. The effect of core size does not depend on the ligand end group, with larger cores always leading to stiffer membranes. Asymmetry in the stress and ligand density is observed in membranes during preparation at a water-vapor interface, with the stress asymmetry persisting in all membranes after drying.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salerno, Kenneth Michael; Bolintineanu, Dan S.; Lane, J. Matthew D.

    We believe that the high mechanical stiffness of single-nanoparticle-thick membranes is the result of the local structure of ligand coatings that mediate interactions between nanoparticles. These ligand structures are not directly observable experimentally. We use molecular dynamics simulations to observe variations in ligand structure and simultaneously measure variations in membrane mechanical properties. We have shown previously that ligand end group has a large impact on ligand structure and membrane mechanical properties. Here we introduce and apply quantitative molecular structure measures to these membranes and extend analysis to multiple nanoparticle core sizes and ligand lengths. Simulations of nanoparticle membranes with a nanoparticle core diameter of 4 or 6 nm, a ligand length of 11 or 17 methylenes, and either carboxyl (COOH) or methyl (CH(3)) ligand end groups are presented. In carboxyl-terminated ligand systems, structure and interactions are dominated by an end-to-end orientation of ligands. In methyl-terminated ligand systems large ordered ligand structures form, but nanoparticle interactions are dominated by disordered, partially interdigitated ligands. Core size and ligand length also affect both ligand arrangement within the membrane and the membrane's macroscopic mechanical response, but are secondary to the role of the ligand end group. Additionally, the particular end group (COOH or CH(3)) alters the nature of how ligand length, in turn, affects the membrane properties. The effect of core size does not depend on the ligand end group, with larger cores always leading to stiffer membranes. Asymmetry in the stress and ligand density is observed in membranes during preparation at a water-vapor interface, with the stress asymmetry persisting in all membranes after drying.

  4. Rydberg excitation of cold atoms inside a hollow-core fiber

    NASA Astrophysics Data System (ADS)

    Langbecker, Maria; Noaman, Mohammad; Kjærgaard, Niels; Benabid, Fetah; Windpassinger, Patrick

    2017-10-01

    We report on a versatile, highly controllable hybrid cold-Rydberg-atom fiber interface, based on laser-cooled atoms transported into a hollow-core kagome photonic crystal fiber. Our experiments demonstrate the feasibility of exciting cold Rydberg atoms inside a hollow-core fiber, and we study the influence of the fiber on Rydberg electromagnetically induced transparency (EIT) signals. Using a temporally resolved detection method to distinguish between excitation and loss, we observe two different regimes of the Rydberg excitations: one EIT regime and one regime dominated by atom loss. These results are a substantial advancement towards the future use of our system for quantum simulation or information.

  5. Stability Estimation of ABWR on the Basis of Noise Analysis

    NASA Astrophysics Data System (ADS)

    Furuya, Masahiro; Fukahori, Takanori; Mizokami, Shinya; Yokoya, Jun

    In order to investigate the stability of a nuclear reactor core loaded with mixed uranium-plutonium oxide (MOX) fuel, channel stability and regional stability tests were conducted with the SIRIUS-F facility. The SIRIUS-F facility was designed and constructed to provide a highly accurate simulation of thermal-hydraulic (channel) instabilities and coupled thermal-hydraulics-neutronics instabilities of Advanced Boiling Water Reactors (ABWRs). A real-time simulation was performed by modal point kinetics of reactor neutronics and fuel-rod thermal conduction on the basis of the void fraction measured in the reactor-core section of the facility. A time series analysis was performed to calculate the decay ratio and resonance frequency from the dominant pole of a transfer function by applying auto-regressive (AR) methods to the time series of the core inlet flow rate. Experiments were conducted with the SIRIUS-F facility simulating an ABWR loaded with MOX fuel. The variations in the decay ratio and resonance frequency among the five common AR methods are within 0.03 and 0.01 Hz, respectively. In this system, the decay ratio and resonance frequency can be appropriately estimated on the basis of the Yule-Walker method with a model order of 30.
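
    The pole-based stability estimate can be sketched with a second-order AR fit: the dominant complex pole z = r*exp(i*theta) of the fitted model gives the decay ratio r**(2*pi/theta) (amplitude ratio over one oscillation period) and the resonance frequency theta/(2*pi*dt). The sketch below uses a plain least-squares fit as a stand-in for the Yule-Walker estimate and a synthetic damped oscillation instead of measured inlet-flow data:

```python
import numpy as np

def decay_ratio_and_frequency(x, dt):
    """Fit AR(2): x[n] ~ a1*x[n-1] + a2*x[n-2] (least-squares stand-in for
    Yule-Walker), then convert the dominant pole z = r*exp(i*theta) to a
    decay ratio r**(2*pi/theta) and frequency theta/(2*pi*dt)."""
    y = x[2:]
    X = np.column_stack([x[1:-1], x[:-2]])        # lag-1 and lag-2 regressors
    a1, a2 = np.linalg.lstsq(X, y, rcond=None)[0]
    poles = np.roots([1.0, -a1, -a2])
    z = poles[np.argmax(np.abs(poles))]
    r, theta = np.abs(z), np.abs(np.angle(z))
    return r ** (2 * np.pi / theta), theta / (2 * np.pi * dt)

# Synthetic damped oscillation: 0.5 Hz resonance, decay ratio 0.6 per period.
dt, f0, dr = 0.05, 0.5, 0.6
n = np.arange(2000)
r_sample = dr ** (f0 * dt)        # per-sample amplitude ratio
x = r_sample**n * np.cos(2 * np.pi * f0 * dt * n)
DR, f = decay_ratio_and_frequency(x, dt)
```

    A decay ratio below 1 indicates a stable (decaying) oscillation; the facility's analysis applies the same pole conversion with AR model orders up to 30 on noisy measured signals.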

  6. MIMO signal progressing with RLSCMA algorithm for multi-mode multi-core optical transmission system

    NASA Astrophysics Data System (ADS)

    Bi, Yuan; Liu, Bo; Zhang, Li-jia; Xin, Xiang-jun; Zhang, Qi; Wang, Yong-jun; Tian, Qing-hua; Tian, Feng; Mao, Ya-ya

    2018-01-01

    When signals are transmitted through a multi-mode multi-core fiber, mode coupling occurs between modes, and mode dispersion also arises because each mode has a different transmission speed in the link. Mode coupling and mode dispersion damage the useful signal in the transmission link, so the receiver needs to process the received signal with digital signal processing and compensate for the damage in the link. We first analyze the influence of mode coupling and mode dispersion on signals transmitted through a multi-mode multi-core fiber, and present the relationship between the coupling coefficient and the dispersion coefficient. We then carry out adaptive signal processing with MIMO equalizers based on the recursive least squares constant modulus algorithm (RLSCMA). The MIMO equalization algorithm adapts its equalizer taps according to the degree of crosstalk among cores or modes, which eliminates the interference among different modes and cores in the space division multiplexing (SDM) transmission system. The simulation results show that the distorted signals are restored efficiently with fast convergence speed.
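
    The constant-modulus idea behind the equalizer can be shown in its simplest form: a single complex tap driving a QPSK signal back to unit modulus. This sketch uses a plain gradient CMA update rather than the paper's RLS-based recursion, and the channel is a single hypothetical complex gain rather than a coupled multi-core MIMO channel:

```python
import numpy as np

rng = np.random.default_rng(1)

# QPSK symbols (|s| = 1) through a single complex-gain "channel".
s = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, size=5000)))
h = 0.4 * np.exp(1j * 0.8)        # hypothetical channel gain
x = h * s

# Single-tap constant-modulus equalizer (gradient CMA; RLS-CMA replaces
# this fixed-step gradient with a recursive least-squares update).
R2 = 1.0                          # target squared modulus for QPSK
w = 1.0 + 0j
mu = 0.05
for xn in x:
    y = w * xn
    e = y * (np.abs(y) ** 2 - R2)  # CMA error term
    w -= mu * e * np.conj(xn)
```

    The combined gain w*h converges to unit modulus without any training symbols, which is what makes constant-modulus equalization attractive for blind SDM crosstalk compensation; the full MIMO version runs a matrix of such taps across all cores and modes.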

  7. Space Launch System Booster Separation Aerodynamic Testing in the NASA Langley Unitary Plan Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Wilcox, Floyd J., Jr.; Pinier, Jeremy T.; Chan, David T.; Crosby, William A.

    2016-01-01

    A wind-tunnel investigation of a 0.009 scale model of the Space Launch System (SLS) was conducted in the NASA Langley Unitary Plan Wind Tunnel to characterize the aerodynamics of the core and solid rocket boosters (SRBs) during booster separation. High-pressure air was used to simulate plumes from the booster separation motors (BSMs) located on the nose and aft skirt of the SRBs. Force and moment data were acquired on the core and SRBs. These data were used to corroborate computational fluid dynamics (CFD) calculations that were used in developing a booster separation database. The SRBs could be remotely positioned in the x-, y-, and z-direction relative to the core. Data were acquired continuously while the SRBs were moved in the axial direction. The primary parameters varied during the test were: core pitch angle; SRB pitch and yaw angles; SRB nose x-, y-, and z-position relative to the core; and BSM plenum pressure. The test was conducted at a free-stream Mach number of 4.25 and a unit Reynolds number of 1.5 million per foot.

  8. High-Fidelity Simulations of Electromagnetic Propagation and RF Communication Systems

    DTIC Science & Technology

    2017-05-01

    …in addition to high-fidelity RF propagation modeling, lower-fidelity models, which are less computationally burdensome, are available via a C++ API… expensive to perform, requiring roughly one hour of computer time with 36 available cores and ray tracing performed by a single high-end GPU… (ERDC TR-17-2, Military Engineering Applied Research: High-Fidelity Simulations of Electromagnetic Propagation and RF Communication)

  9. Atomistic calculations of dislocation core energy in aluminium

    DOE PAGES

    Zhou, X. W.; Sills, R. B.; Ward, D. K.; ...

    2017-02-16

    A robust molecular dynamics simulation method for calculating dislocation core energies has been developed. This method has unique advantages: it does not require artificial boundary conditions, is applicable for mixed dislocations, and can yield highly converged results regardless of the atomistic system size. Utilizing a high-fidelity bond order potential, we have applied this method in aluminium to calculate the dislocation core energy as a function of the angle β between the dislocation line and Burgers vector. These calculations show that, for the face-centred-cubic aluminium explored, the dislocation core energy follows the same functional dependence on β as the dislocation elastic energy: Ec = A·sin²β + B·cos²β, and this dependence is independent of temperature between 100 and 300 K. By further analysing the energetics of an extended dislocation core, we elucidate the relationship between the core energy and radius of a perfect versus extended dislocation. With our methodology, the dislocation core energy can be accurately accounted for in models of plastic deformation.
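
    Extracting the constants A and B from simulated core energies is a two-parameter linear least-squares fit, since Ec(β) = A·sin²β + B·cos²β is linear in A and B. A minimal sketch on noiseless synthetic data (the values of A and B here are illustrative, not the paper's results):

```python
import numpy as np

# Synthetic core-energy samples following Ec(beta) = A*sin^2(beta) + B*cos^2(beta).
A_true, B_true = 0.42, 0.28                  # illustrative, arbitrary units
beta = np.deg2rad(np.arange(0, 91, 15))      # mixed-character angles, 0..90 deg
E = A_true * np.sin(beta) ** 2 + B_true * np.cos(beta) ** 2

# Linear least-squares fit for A (pure edge limit) and B (pure screw limit).
M = np.column_stack([np.sin(beta) ** 2, np.cos(beta) ** 2])
A_fit, B_fit = np.linalg.lstsq(M, E, rcond=None)[0]
```

    At β = 90° the expression reduces to A (edge character) and at β = 0° to B (screw character), mirroring the angular dependence of the elastic energy.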

  10. Atomistic calculations of dislocation core energy in aluminium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, X. W.; Sills, R. B.; Ward, D. K.

    A robust molecular dynamics simulation method for calculating dislocation core energies has been developed. This method has unique advantages: it does not require artificial boundary conditions, is applicable for mixed dislocations, and can yield highly converged results regardless of the atomistic system size. Utilizing a high-fidelity bond order potential, we have applied this method in aluminium to calculate the dislocation core energy as a function of the angle β between the dislocation line and Burgers vector. These calculations show that, for the face-centred-cubic aluminium explored, the dislocation core energy follows the same functional dependence on β as the dislocation elastic energy: Ec = A·sin²β + B·cos²β, and this dependence is independent of temperature between 100 and 300 K. By further analysing the energetics of an extended dislocation core, we elucidate the relationship between the core energy and radius of a perfect versus extended dislocation. With our methodology, the dislocation core energy can be accurately accounted for in models of plastic deformation.

  11. Dissipative particle dynamics simulations of polymersomes.

    PubMed

    Ortiz, Vanessa; Nielsen, Steven O; Discher, Dennis E; Klein, Michael L; Lipowsky, Reinhard; Shillcock, Julian

    2005-09-22

    A DPD model of PEO-based block copolymer vesicles in water is developed by introducing a new density based coarse graining and by using experimental data for interfacial tension. Simulated as a membrane patch, the DPD model is in excellent agreement with experimental data for both the area expansion modulus and the scaling of hydrophobic core thickness with molecular weight. Rupture simulations of polymer vesicles, or "polymersomes", are presented to illustrate the system sizes feasible with DPD. The results should provide guidance for theoretical derivations of scaling laws and also illustrate how spherical polymer vesicles might be studied in simulation.
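
    The DPD method referred to above combines three pairwise forces (conservative, dissipative, random) with a common weight function and the fluctuation-dissipation constraint sigma² = 2·gamma·kT. A minimal sketch of the per-pair force with typical textbook parameter values (not the paper's coarse-grained parameters):

```python
import numpy as np

def dpd_pair_force(r_ij, v_ij, a=25.0, gamma=4.5, kT=1.0, rc=1.0, dt=0.04,
                   rng=np.random.default_rng(5)):
    """Conservative + dissipative + random DPD force on particle i from j,
    using the standard weight w(r) = 1 - r/rc and sigma^2 = 2*gamma*kT
    (fluctuation-dissipation). Parameter values are typical DPD defaults."""
    r = np.linalg.norm(r_ij)
    if r >= rc:
        return np.zeros(3)                            # no force beyond cutoff
    e = r_ij / r
    w = 1.0 - r / rc
    sigma = np.sqrt(2.0 * gamma * kT)
    f_c = a * w * e                                   # soft conservative repulsion
    f_d = -gamma * w**2 * np.dot(e, v_ij) * e         # pairwise friction
    f_r = sigma * w * rng.normal() / np.sqrt(dt) * e  # matched random kicks
    return f_c + f_d + f_r
```

    The soft, bounded conservative force is what lets DPD take the large time steps needed to reach vesicle-scale systems; the dissipative/random pair acts as a momentum-conserving thermostat.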

  12. Damaris: Addressing performance variability in data management for post-petascale simulations

    DOE PAGES

    Dorier, Matthieu; Antoniu, Gabriel; Cappello, Franck; ...

    2016-10-01

    With exascale computing on the horizon, reducing performance variability in data management tasks (storage, visualization, analysis, etc.) is becoming a key challenge in sustaining high performance. This variability significantly impacts the overall application performance at scale and its predictability over time. In this article, we present Damaris, a system that leverages dedicated cores in multicore nodes to offload data management tasks, including I/O, data compression, scheduling of data movements, in situ analysis, and visualization. We evaluate Damaris with the CM1 atmospheric simulation and the Nek5000 computational fluid dynamics simulation on four platforms, including NICS's Kraken and NCSA's Blue Waters. Our results show that (1) Damaris fully hides the I/O variability as well as all I/O-related costs, thus making simulation performance predictable; (2) it increases the sustained write throughput by a factor of up to 15 compared with standard I/O approaches; (3) it allows almost perfect scalability of the simulation up to over 9,000 cores, as opposed to state-of-the-art approaches that fail to scale; and (4) it enables a seamless connection to the VisIt visualization software to perform in situ analysis and visualization in a way that impacts neither the performance of the simulation nor its variability. In addition, we extended our implementation of Damaris to also support the use of dedicated nodes and conducted a thorough comparison of the two approaches (dedicated cores and dedicated nodes) for I/O tasks with the aforementioned applications.

  13. Damaris: Addressing performance variability in data management for post-petascale simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dorier, Matthieu; Antoniu, Gabriel; Cappello, Franck

    With exascale computing on the horizon, reducing performance variability in data management tasks (storage, visualization, analysis, etc.) is becoming a key challenge in sustaining high performance. This variability significantly impacts the overall application performance at scale and its predictability over time. In this article, we present Damaris, a system that leverages dedicated cores in multicore nodes to offload data management tasks, including I/O, data compression, scheduling of data movements, in situ analysis, and visualization. We evaluate Damaris with the CM1 atmospheric simulation and the Nek5000 computational fluid dynamics simulation on four platforms, including NICS's Kraken and NCSA's Blue Waters. Our results show that (1) Damaris fully hides the I/O variability as well as all I/O-related costs, thus making simulation performance predictable; (2) it increases the sustained write throughput by a factor of up to 15 compared with standard I/O approaches; (3) it allows almost perfect scalability of the simulation up to over 9,000 cores, as opposed to state-of-the-art approaches that fail to scale; and (4) it enables a seamless connection to the VisIt visualization software to perform in situ analysis and visualization in a way that impacts neither the performance of the simulation nor its variability. In addition, we extended our implementation of Damaris to also support the use of dedicated nodes and conducted a thorough comparison of the two approaches (dedicated cores and dedicated nodes) for I/O tasks with the aforementioned applications.

  14. Design Considerations for a Computationally-Lightweight Authentication Mechanism for Passive RFID Tags

    DTIC Science & Technology

    2009-09-01

    …suffer the power and complexity requirements of a public key system… In [18], a simulation of the SHA-1 algorithm is performed on a Xilinx FPGA… 256 bits. Thus, the construction of a hash table would need 2^512 independent comparisons. It is known that hash collisions of the SHA-1 algorithm… SHA-1 algorithm for small-core FPGA design. Small-core FPGA design is the process by which a circuit is adapted to use the minimal amount of logic

  15. A 3-D Finite-Volume Non-hydrostatic Icosahedral Model (NIM)

    NASA Astrophysics Data System (ADS)

    Lee, Jin

    2014-05-01

    The Nonhydrostatic Icosahedral Model (NIM) formulates the latest numerical innovations of the three-dimensional finite-volume control volume on a quasi-uniform icosahedral grid suitable for ultra-high-resolution simulations. NIM's modeling goal is to improve numerical accuracy for weather and climate simulations as well as to utilize state-of-the-art computing architectures, such as massively parallel CPUs and GPUs, to deliver routine high-resolution forecasts in a timely manner. NIM dynamical core innovations include: * a local coordinate system that remaps the spherical surface to a plane for numerical accuracy (Lee and MacDonald, 2009), * grid points in a table-driven horizontal loop that allows any horizontal point sequence (A. E. MacDonald et al., 2010), * Flux-Corrected Transport formulated on finite-volume operators to maintain conservative, positive-definite transport (J.-L. Lee et al., 2010), * icosahedral grid optimization (Wang and Lee, 2011), * all differentials evaluated as three-dimensional finite-volume integrals around the control volume. The three-dimensional finite-volume solver in NIM is designed to improve the pressure gradient calculation and orographic precipitation over complex terrain. The NIM dynamical core has been successfully verified with various non-hydrostatic benchmark test cases, such as internal gravity waves and mountain waves, in the Dynamical Core Model Intercomparison Project (DCMIP). Physical parameterizations suitable for NWP are incorporated into the NIM dynamical core and successfully tested with multi-month aqua-planet simulations. Recently, NIM has started real-data simulations using GFS initial conditions. Results from the idealized tests as well as the real-data simulations will be shown at the conference.

  16. Simulating an Exploding Fission-Bomb Core

    NASA Astrophysics Data System (ADS)

    Reed, Cameron

    2016-03-01

    A time-dependent desktop-computer simulation of the core of an exploding fission bomb (nuclear weapon) has been developed. The simulation models a core comprising a mixture of two isotopes: a fissile one (such as U-235) and an inert one (such as U-238) that captures neutrons and removes them from circulation. The user sets the enrichment percentage and scattering and fission cross-sections of the fissile isotope, the capture cross-section of the inert isotope, the number of neutrons liberated per fission, the number of ``initiator'' neutrons, the radius of the core, and the neutron-reflection efficiency of a surrounding tamper. The simulation, which is predicated on ordinary kinematics, follows the three-dimensional motions and fates of neutrons as they travel through the core. Limitations of time and computer memory render it impossible to model a real-life core, but results of numerous runs clearly demonstrate the existence of a critical mass for a given set of parameters and the dramatic effects of enrichment and tamper efficiency on the growth (or decay) of the neutron population. The logic of the simulation will be described and results of typical runs will be presented and discussed.
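
    The logic the abstract describes, neutrons random-walking through a two-isotope core until they fission, are captured, or leak, can be sketched as a toy Monte Carlo estimate of the multiplication factor. All cross sections, the number density, and nu below are rough illustrative values, not nuclear data, and the tamper is omitted:

```python
import numpy as np

def k_estimate(radius_cm, enrichment, n_neutrons=4000, seed=0):
    """Toy Monte Carlo multiplication factor for a bare sphere of fissile
    plus inert material (illustrative one-group constants, no tamper)."""
    rng = np.random.default_rng(seed)
    sigma_f, sigma_s, sigma_c, nu = 1.2, 4.0, 2.7, 2.6   # barns; illustrative
    n_density = 0.048                 # atoms per barn-cm, roughly uranium metal
    sig_fis = enrichment * sigma_f
    sig_sca = enrichment * sigma_s    # scattering off the fissile isotope only
    sig_cap = (1.0 - enrichment) * sigma_c
    sig_tot = sig_fis + sig_sca + sig_cap
    mfp = 1.0 / (n_density * sig_tot)          # mean free path, cm

    secondaries = 0.0
    for _ in range(n_neutrons):
        # Start at a uniformly random point inside the sphere.
        pos = rng.normal(size=3)
        pos *= radius_cm * rng.random() ** (1.0 / 3.0) / np.linalg.norm(pos)
        while True:
            d = rng.normal(size=3)
            d /= np.linalg.norm(d)                 # isotropic flight direction
            pos = pos + rng.exponential(mfp) * d   # fly to the next collision
            if np.linalg.norm(pos) > radius_cm:
                break                              # leaked out of the core
            u = rng.random() * sig_tot
            if u < sig_fis:
                secondaries += nu                  # fission: nu new neutrons
                break
            if u < sig_fis + sig_cap:
                break                              # captured by the inert isotope
            # otherwise scattered: pick a new direction and keep walking
    return secondaries / n_neutrons
```

    A large, highly enriched sphere yields k > 1 (supercritical, growing neutron population) while a small, weakly enriched one yields k < 1, reproducing the critical-mass behavior the classroom simulation demonstrates.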

  17. Modeling of carbonate reservoir variable secondary pore space based on CT images

    NASA Astrophysics Data System (ADS)

    Nie, X.; Nie, S.; Zhang, J.; Zhang, C.; Zhang, Z.

    2017-12-01

    Digital core technology has brought great convenience, and X-ray CT scanning is one of the most common ways to obtain 3D digital cores. However, a scan provides only the original information of the particular samples imaged, and the porosity of the scanned cores cannot be modified afterwards. For numerical rock physics simulations, a series of cores with variable porosities is needed to determine the relationship between physical properties and porosity. In carbonate rocks, the secondary pore space, including dissolution pores, caves, and natural fractures, is the key reservoir space, which makes the study of carbonate secondary porosity very important. To obtain a range of porosities from a single rock sample, starting from CT-scanned digital cores and guided by the physical and chemical properties of carbonate rocks, several mathematical methods are chosen to simulate the variation of the secondary pore space. We use the erosion and dilation operations of mathematical morphology to simulate changes in the dissolution pores and caves. We also use the fractional Brownian motion model to generate natural fractures with different widths and angles in digital cores to simulate fractured carbonate rocks. The morphological opening and closing operations are used to simulate the distribution of fluid in the pore space. The established 3D digital core models, with different secondary porosities and water saturation states, can be used in numerical simulations of the physical properties of carbonate reservoir rocks.
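    The erosion/dilation idea can be sketched in a few lines with scipy.ndimage. This is a minimal illustration on a synthetic binary volume standing in for a segmented micro-CT scan (the actual data, structuring elements, and iteration counts used in the study are not specified here): dilating the pore phase mimics further dissolution and raises porosity, while eroding it mimics cementation and lowers porosity.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)

# Synthetic stand-in for a segmented micro-CT volume: True = pore, False = grain.
# (A real workflow would start from the binarized CT scan of the core plug.)
seeds = rng.random((64, 64, 64)) < 0.002
pores = ndimage.binary_dilation(seeds, iterations=3)   # grow seeds into pore bodies

phi0 = pores.mean()                                    # porosity = pore voxel fraction

# Dilating the pore phase mimics further dissolution (porosity increases);
# eroding it mimics cementation / pore filling (porosity decreases).
dissolved = ndimage.binary_dilation(pores, iterations=1)
cemented = ndimage.binary_erosion(pores, iterations=1)
```

    Repeating the dilation or erosion with increasing iteration counts yields the desired series of models with graded secondary porosity from a single scanned sample.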

  18. Toward Microscopic Equations of State for Core-Collapse Supernovae from Chiral Effective Field Theory

    NASA Astrophysics Data System (ADS)

    Aboona, Bassam; Holt, Jeremy

    2017-09-01

    Chiral effective field theory provides a modern framework for understanding the structure and dynamics of nuclear many-body systems. Recent works have had much success in applying the theory to describe the ground- and excited-state properties of light and medium-mass atomic nuclei when combined with ab initio numerical techniques. Our aim is to extend the application of chiral effective field theory to describe the nuclear equation of state required for supercomputer simulations of core-collapse supernovae. Given the large range of densities, temperatures, and proton fractions probed during stellar core collapse, microscopic calculations of the equation of state require large computational resources on the order of one million CPU hours. We investigate the use of graphics processing units (GPUs) to significantly reduce the computational cost of these calculations, which will enable a more accurate and precise description of this important input to numerical astrophysical simulations. Cyclotron Institute at Texas A&M, NSF Grant: PHY 1659847, DOE Grant: DE-FG02-93ER40773.

  19. Optimizing the Performance of Reactive Molecular Dynamics Simulations for Multi-core Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aktulga, Hasan Metin; Coffman, Paul; Shan, Tzu-Ray

    2015-12-01

    Hybrid parallelism allows high performance computing applications to better leverage the increasing on-node parallelism of modern supercomputers. In this paper, we present a hybrid parallel implementation of the widely used LAMMPS/ReaxC package, where the construction of bonded and nonbonded lists and evaluation of complex ReaxFF interactions are implemented efficiently using OpenMP parallelism. Additionally, the performance of the QEq charge equilibration scheme is examined and a dual-solver is implemented. We present the performance of the resulting ReaxC-OMP package on a state-of-the-art multi-core architecture, Mira, an IBM BlueGene/Q supercomputer. For system sizes ranging from 32 thousand to 16.6 million particles, speedups in the range of 1.5-4.5x are observed using the new ReaxC-OMP software. Sustained performance improvements have been observed for up to 262,144 cores (1,048,576 processes) of Mira with a weak scaling efficiency of 91.5% in larger simulations containing 16.6 million particles.

  20. An Accelerated Method for Testing Soldering Tendency of Core Pins

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, Qingyou; Xu, Hanbing; Ried, Paul

    2010-01-01

    An accelerated method for testing die soldering has been developed. High-intensity ultrasonic vibration has been used to simulate die casting conditions such as high pressure and high impingement speed of molten metal on the pin. The soldering tendency of steels and coated pins has been examined. The results indicate that in the low carbon steel/Al system, the onset of soldering is 60 times faster with ultrasonic vibration than without. In the H13/A380 system, the onset of the soldering reaction is accelerated by a factor of 30-60. Coating significantly reduces the soldering tendency of the core pins.

  1. WASCAL - West African Science Service Center on Climate Change and Adapted Land Use Regional Climate Simulations and Land-Atmosphere Simulations for West Africa at DKRZ and elsewhere

    NASA Astrophysics Data System (ADS)

    Hamann, Ilse; Arnault, Joel; Bliefernicht, Jan; Klein, Cornelia; Heinzeller, Dominikus; Kunstmann, Harald

    2014-05-01

    Changing climate and hydro-meteorological boundary conditions are among the most severe challenges to Africa in the 21st century. In particular, West Africa faces an urgent need to develop effective adaptation and mitigation strategies to cope with negative impacts on humans and environment due to climate change, increased hydro-meteorological variability and land use changes. To help meet these challenges, the German Federal Ministry of Education and Research (BMBF) started an initiative with institutions in Germany and West African countries to establish together a West African Science Service Center on Climate Change and Adapted Land Use (WASCAL). This activity is accompanied by the establishment of trans-boundary observation networks, an interdisciplinary core research program and graduate research programs on climate change and related issues for strengthening the analytical capabilities of the Science Service Center. A key research activity of the WASCAL Competence Center is the provision of regional climate simulations at a fine spatio-temporal resolution for the core research sites of WASCAL for the present and the near future. The climate information is needed for subsequent local climate impact studies in agriculture, water resources and further socio-economic sectors. The simulation experiments are performed using regional climate models such as COSMO-CLM, RegCM and WRF and statistical techniques for a further refinement of the projections. The core research sites of WASCAL are located in the Sudanian Savannah belt in Northern Ghana, Southern Burkina Faso and Northern Benin. The climate in this region is semi-arid with six rainy months. Due to the strong population growth in West Africa, many areas of the Sudanian Savannah have already been converted to farmland, since the majority of the people live directly or indirectly from income produced in agriculture.
    The simulation experiments of the Competence Center and the Core Research Program are accompanied by the WASCAL Graduate Research Program on the West African Climate System. The GRP-WACS provides ten scholarships per year for West African PhD students, with a duration of three years. Present and future WASCAL PhD students will constitute one important user group of the Linux cluster that will be installed at the Competence Center in Ouagadougou, Burkina Faso.

    Regional Land-Atmosphere Simulations: A key research activity of the WASCAL Core Research Program is the analysis of interactions between the land surface and the atmosphere to investigate how land surface changes affect hydro-meteorological surface fluxes such as evapotranspiration. Since current land surface models of global and regional climate models neglect dominant lateral hydrological processes such as surface runoff, a novel land surface model is used, the NCAR Distributed Hydrological Modeling System (NDHMS). This model can be coupled to WRF (WRF-Hydro) to perform two-way coupled atmospheric-hydrological simulations for the watershed of interest. Hardware and network prerequisites include an HPC cluster, network switches, internal storage media, and Internet connectivity of sufficient bandwidth. Competences needed are HPC, storage, and visualization systems optimized for climate research, parallelization and optimization of climate models and workflows, and efficient management of very large data volumes.

  2. Consequences of realistic embedding for the L2,3 edge XAS of α-Fe2O3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bagus, Paul S.; Nelin, Connie J.; Sassi, Michel

    Cluster models of condensed systems are often used to simulate the core-level spectra obtained with X-ray Photoelectron Spectroscopy, XPS, or with X-ray Absorption Spectroscopy, XAS, especially for near edge features.

  3. Modeling and simulation of CANDU reactor and its regulating system

    NASA Astrophysics Data System (ADS)

    Javidnia, Hooman

    Analytical computer codes are indispensable tools in design, optimization, and control of nuclear power plants. Numerous codes have been developed to perform different types of analyses related to the nuclear power plants. A large number of these codes are designed to perform safety analyses. In the context of safety analyses, the control system is often neglected. Although there are good reasons for such a decision, that does not mean that the study of control systems in the nuclear power plants should be neglected altogether. In this thesis, a proof of concept code is developed as a tool that can be used in the design, optimization, and operation stages of the control system. The main objective in the design of this computer code is providing a tool that is easy to use by its target audience and is capable of producing high fidelity results that can be trusted to design the control system and optimize its performance. Since the overall plant control system covers a very wide range of processes, in this thesis the focus has been on one particular module of the overall plant control system, namely, the reactor regulating system. The center of the reactor regulating system is the CANDU reactor. A nodal model for the reactor is used to represent the spatial neutronic kinetics of the core. The nodal model produces better results compared to the point kinetics model which is often used in the design and analysis of control systems for nuclear reactors. The model can capture the spatial effects to some extent, although it is not as detailed as finite difference methods. The criteria for choosing a nodal model of the core are: (1) the model should provide more detail than point kinetics and capture spatial effects, (2) it should not be too complex or overly detailed, which would slow down the simulation and provide details that are extraneous or unnecessary for a control engineer. 
Other than the reactor itself, there are auxiliary models that describe the dynamics of different phenomena related to the transfer of energy from the core. The main function of the reactor regulating system is to control the power of the reactor. This is achieved by using a set of detectors, reactivity devices, and digital control algorithms. Three main reactivity devices that are activated during short-term or intermediate-term transients are modeled in this thesis. The main elements of the digital control system are implemented in accordance with the program specifications for the actual control system in CANDU reactors. The simulation results are validated against requirements of the reactor regulating system, actual plant data, and pre-validated data from other computer codes. The validation process shows that the simulation results can be trusted in making engineering decisions regarding the reactor regulating system and prediction of the system performance in response to upset conditions or disturbances. KEYWORDS: CANDU reactors, reactor regulating system, nodal model, spatial kinetics, reactivity devices, simulation.
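    The point kinetics baseline this thesis improves on (and which the reactor-core simulator in the head records also uses, with temperature-based reactivity feedback) is compact enough to sketch directly. The following is a minimal one-group point kinetics model with a single delayed-neutron group and a lumped fuel-temperature feedback, integrated by explicit Euler; every parameter value is illustrative, not CANDU design data.

```python
# One-group point kinetics with one delayed-neutron group and lumped
# fuel-temperature reactivity feedback (explicit Euler; illustrative values).
beta, lam, Lam = 0.0065, 0.08, 1.0e-3  # delayed fraction, decay const (1/s), generation time (s)
alpha = -5.0e-5                        # temperature feedback coefficient (dk/k per K)
P0, heat_cap, h = 1000.0, 200.0, 50.0  # nominal power (W), lumped heat capacity (J/K), heat removal (W/K)
T0, Tcool = 300.0, 280.0               # initial fuel and coolant temperatures (K)

# Initial steady state: heat removal balances power, precursors in equilibrium.
n, T = 1.0, T0                         # normalized power, fuel temperature
C = beta * n / (lam * Lam)             # equilibrium precursor concentration
rho_ext = 0.002                        # step reactivity insertion (< beta: no prompt criticality)

dt = 1.0e-4
for _ in range(int(40.0 / dt)):
    rho = rho_ext + alpha * (T - T0)   # net reactivity including feedback
    dn = ((rho - beta) / Lam * n + lam * C) * dt
    dC = (beta / Lam * n - lam * C) * dt
    dT = (P0 * n - h * (T - Tcool)) / heat_cap * dt
    n, C, T = n + dn, C + dC, T + dT
# Negative feedback arrests the transient: power settles where the fuel
# temperature rise cancels the inserted reactivity (here T - T0 -> 40 K).
```

    A nodal model replaces the scalar n with a vector of regional fluxes coupled through a coupling matrix, which is what lets it capture the spatial effects that point kinetics misses.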

  4. Tinker-HP: a massively parallel molecular dynamics package for multiscale simulations of large complex systems with advanced point dipole polarizable force fields

    PubMed Central

    Lagardère, Louis; Jolly, Luc-Henri; Lipparini, Filippo; Aviat, Félix; Stamm, Benjamin; Jing, Zhifeng F.; Harger, Matthew; Torabifard, Hedieh; Cisneros, G. Andrés; Schnieders, Michael J.; Gresh, Nohad; Maday, Yvon; Ren, Pengyu Y.; Ponder, Jay W.

    2017-01-01

    We present Tinker-HP, a massively MPI parallel package dedicated to classical molecular dynamics (MD) and to multiscale simulations, using advanced polarizable force fields (PFF) encompassing distributed multipole electrostatics. Tinker-HP is an evolution of the popular Tinker package code that conserves its simplicity of use and its reference double precision implementation for CPUs. Grounded on interdisciplinary efforts with applied mathematics, Tinker-HP allows for long polarizable MD simulations on large systems up to millions of atoms. We detail in the paper the newly developed extension of massively parallel 3D spatial decomposition to point dipole polarizable models as well as their coupling to efficient Krylov iterative and non-iterative polarization solvers. The design of the code allows the use of various computer systems ranging from laboratory workstations to modern petascale supercomputers with thousands of cores. Tinker-HP therefore provides the first high-performance scalable CPU computing environment for the development of next generation point dipole PFFs and for production simulations. Strategies linking Tinker-HP to Quantum Mechanics (QM) in the framework of multiscale polarizable self-consistent QM/MD simulations are also provided. The possibilities, performances and scalability of the software are demonstrated via benchmark calculations using the polarizable AMOEBA force field on systems ranging from large water boxes of increasing size and ionic liquids to (very) large biosystems encompassing several proteins as well as the complete satellite tobacco mosaic virus and ribosome structures. For small systems, Tinker-HP appears to be competitive with the Tinker-OpenMM GPU implementation of Tinker. As the system size grows, Tinker-HP remains operational thanks to its access to distributed memory, and takes advantage of new algorithms enabling stable long-timescale polarizable simulations. 
Overall, a several thousand-fold acceleration over a single-core computation is observed for the largest systems. The extension of the present CPU implementation of Tinker-HP to other computational platforms is discussed. PMID:29732110

  5. Partnership For Edge Physics Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parashar, Manish

    In this effort, we will extend our prior work as part of CPES (i.e., DART and DataSpaces) to support in-situ tight coupling between application codes that exploits data locality and core-level parallelism to maximize on-chip data exchange and reuse. This will be accomplished by mapping coupled simulations so that the data exchanges are more localized within the nodes. Coupled simulation workflows can more effectively utilize the resources available on emerging HEC platforms if they can be mapped and executed to exploit data locality as well as the communication patterns between application components. Scheduling and running such workflows requires an extended framework that should provide a unified hybrid abstraction to enable coordination and data sharing across computation tasks that run on heterogeneous multi-core-based systems, and develop a data-locality-based dynamic task scheduling approach to increase on-chip or intra-node data exchanges and in-situ execution. This effort will extend our prior work as part of CPES (i.e., DART and DataSpaces), which provided a simple virtual shared-space abstraction hosted at the staging nodes, to support application coordination, data sharing and active data processing services. Moreover, it will transparently manage the low-level operations associated with the inter-application data exchange, such as data redistributions, and will enable running coupled simulation workflows on multi-core computing platforms.

  6. Finite element simulation of core inspection in helicopter rotor blades using guided waves.

    PubMed

    Chakrapani, Sunil Kishore; Barnard, Daniel; Dayal, Vinay

    2015-09-01

    This paper extends the work presented earlier on inspection of helicopter rotor blades using guided Lamb modes by focusing on inspecting the spar-core bond. In particular, this research focuses on structures which employ high-stiffness, high-density core materials. Wave propagation in such structures deviates from generic Lamb wave propagation in sandwich panels. To understand the various mode conversions, finite element models of a generalized helicopter rotor blade were created and subjected to transient analysis using a commercial finite element code, ANSYS. Numerical simulations showed that a Lamb wave excited in the spar section of the blade is converted into a Rayleigh wave, which travels across the spar-core section and mode-converts back into a Lamb wave. Dispersion of Rayleigh waves in a multi-layered half-space was also explored. Damage was modeled in the form of a notch in the core section to simulate a cracked core, and delamination was modeled between the spar and core material to simulate spar-core disbond. Mode conversions under these damaged conditions were examined numerically. The numerical models help in assessing the difficulty of using nondestructive evaluation for complex structures and also highlight the physics behind the mode conversions which occur at various discontinuities. Copyright © 2015 Elsevier B.V. All rights reserved.

  7. Micromagnetics on high-performance workstation and mobile computational platforms

    NASA Astrophysics Data System (ADS)

    Fu, S.; Chang, R.; Couture, S.; Menarini, M.; Escobar, M. A.; Kuteifan, M.; Lubarda, M.; Gabay, D.; Lomakin, V.

    2015-05-01

    The feasibility of using high-performance desktop and embedded mobile computational platforms for micromagnetic simulations is presented, including multi-core Intel central processing units, Nvidia desktop graphics processing units, and the Nvidia Jetson TK1 platform. The FastMag finite-element-method-based micromagnetic simulator is used as a testbed, showing high efficiency on all the platforms. Optimization aspects of improving the performance of the mobile systems are discussed. The high performance, low cost, low power consumption, and rapid performance increase of the embedded mobile systems make them a promising candidate for micromagnetic simulations. Such architectures can be used as standalone systems or can be built into low-power computing clusters.

  8. Space Station Environmental Control and Life Support System Test Facility at Marshall Space Flight Center

    NASA Technical Reports Server (NTRS)

    Springer, Darlene

    1989-01-01

    Different aspects of Space Station Environmental Control and Life Support System (ECLSS) testing are currently taking place at Marshall Space Flight Center (MSFC). Unique to this testing is the variety of test areas and the fact that all are located in one building. The north high bay of building 4755, the Core Module Integration Facility (CMIF), contains the following test areas: the Subsystem Test Area, the Comparative Test Area, the Process Material Management System (PMMS), the Core Module Simulator (CMS), the End-use Equipment Facility (EEF), and the Pre-development Operational System Test (POST) Area. This paper addresses the facility that supports these test areas and briefly describes the testing in each area. Future plans for the building and Space Station module configurations will also be discussed.

  9. Microprocessor tester for the treat upgrade reactor trip system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lenkszus, F.R.; Bucher, R.G.

    1984-01-01

    The upgrading of the Transient Reactor Test (TREAT) Facility at ANL-Idaho has been designed to provide additional experimental capabilities for the study of core disruptive accident (CDA) phenomena. In addition, a programmable Automated Reactor Control System (ARCS) will permit high-power transients up to 11,000 MW having a controlled reactor period of from 15 to 0.1 sec. These modifications to the core neutronics will improve simulation of LMFBR accident conditions. Finally, a sophisticated, multiply-redundant safety system, the Reactor Trip System (RTS), will provide safe operation for both steady state and transient production operating modes. To ensure that this complex safety system is functioning properly, a Dedicated Microprocessor Tester (DMT) has been implemented to perform a thorough checkout of the RTS prior to all TREAT operations.

  10. Development of a New 47-Group Library for the CASL Neutronics Simulators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Kang Seog; Williams, Mark L; Wiarda, Dorothea

    The CASL core simulator MPACT is under development for coupled neutronics and thermal-hydraulics simulation of pressurized light water reactors. The key characteristics of the MPACT code include a subgroup method for resonance self-shielding and a whole-core solver with a 1D/2D synthesis method. The ORNL AMPX/SCALE code packages have been significantly improved to support various intermediate resonance self-shielding approximations such as the subgroup and embedded self-shielding methods. New 47-group AMPX and MPACT libraries based on ENDF/B-VII.0, whose group structure comes from the HELIOS library, have been generated for the CASL core simulator MPACT. The new 47-group MPACT library includes all nuclear data required for static and transient core simulations. This study discusses a detailed procedure to generate the 47-group AMPX and MPACT libraries and benchmark results for the VERA progression problems.

  11. Earth observing system instrument pointing control modeling for polar orbiting platforms

    NASA Technical Reports Server (NTRS)

    Briggs, H. C.; Kia, T.; Mccabe, S. A.; Bell, C. E.

    1987-01-01

    An approach to instrument pointing control performance assessment for large multi-instrument platforms is described. First, instrument pointing requirements and reference platform control systems for the Eos Polar Platforms are reviewed. Performance modeling tools are then described, including NASTRAN models of two large platforms, a modal selection procedure utilizing a balanced realization method, and reduced-order platform models with core and instrument pointing control loops added. Time history simulations of instrument pointing and stability performance in response to commanded slewing of adjacent instruments demonstrate the limits of tolerable slew activity. Simplified models of rigid body responses are also developed for comparison. Instrument pointing control methods required in addition to the core platform control system to meet instrument pointing requirements are considered.
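    The balanced-realization modal selection mentioned here ranks states by their Hankel singular values, computed from the controllability and observability Gramians of the state-space model. A minimal sketch with SciPy on a toy stable system (a hypothetical stand-in for a reduced NASTRAN platform model, with all values illustrative):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(3)

# Toy stable state-space model (A, B, C); poles on the negative real axis.
n_states = 6
A = np.diag(-rng.uniform(0.5, 5.0, n_states))
B = rng.standard_normal((n_states, 2))
C = rng.standard_normal((2, n_states))

# Gramians from the continuous-time Lyapunov equations:
#   A Wc + Wc A^T + B B^T = 0   and   A^T Wo + Wo A + C^T C = 0
Wc = solve_continuous_lyapunov(A, -B @ B.T)
Wo = solve_continuous_lyapunov(A.T, -C.T @ C)

# Hankel singular values rank states by joint controllability/observability;
# modal selection keeps the states with the largest values.
hsv = np.sort(np.sqrt(np.abs(np.linalg.eigvals(Wc @ Wo))))[::-1]
```

    Truncating the states with small Hankel singular values yields the reduced-order platform model that the pointing control loops are then wrapped around.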

  12. Optical interconnection network for parallel access to multi-rank memory in future computing systems.

    PubMed

    Wang, Kang; Gu, Huaxi; Yang, Yintang; Wang, Kun

    2015-08-10

    With the number of cores increasing, there is an emerging need for a high-bandwidth, low-latency interconnection network serving core-to-memory communication. In this paper, aiming at simultaneous access to multi-rank memory, we propose an optical interconnection network for core-to-memory communication. In the proposed network, the wavelength usage is delicately arranged so that cores can communicate with different ranks at the same time and broadcast for flow control can be achieved. A distributed memory controller architecture that works in a pipeline mode is also designed for efficient optical communication and transaction address processing. The scaling method and wavelength assignment for the proposed network are investigated. Compared with traditional electronic bus-based core-to-memory communication, simulation results based on the PARSEC benchmarks show that the bandwidth enhancement and latency reduction are apparent.

  13. Teaching core competencies of reconstructive microsurgery with the use of standardized patients.

    PubMed

    Son, Ji; Zeidler, Kamakshi R; Echo, Anthony; Otake, Leo; Ahdoot, Michael; Lee, Gordon K

    2013-04-01

    The Accreditation Council of Graduate Medical Education has defined 6 core competencies that residents must master before completing their training. Objective structured clinical examinations (OSCEs) using standardized patients are effective educational tools to assess and teach core competencies. We developed an OSCE specific for microsurgical head and neck reconstruction. Fifteen plastic surgery residents participated in the OSCE simulating a typical new patient consultation, which involved a patient with oral cancer. Residents were scored in all 6 core competencies by the standardized patients and faculty experts. Analysis of participant performance showed that although residents performed well overall, many lacked proficiency in systems-based practice. Junior residents were also more likely to omit critical elements of the physical examination compared to senior residents. We have modified our educational curriculum to specifically address these deficiencies. Our study demonstrates that the OSCE is an effective assessment tool for teaching and assessing all core competencies in microsurgery.

  14. In situ contact angle measurements of liquid CO2, brine, and Mount Simon sandstone core using micro X-ray CT imaging, sessile drop, and Lattice Boltzmann modeling

    DOE PAGES

    Tudek, John; Crandall, Dustin; Fuchs, Samantha; ...

    2017-01-30

    Three techniques to measure and understand the contact angle, θ, of a CO2/brine/rock system relevant to geologic carbon storage were applied to Mount Simon sandstone. Traditional sessile drop measurements of CO2/brine on the sample were conducted and a water-wet system was observed, as expected. A novel series of measurements inside a Mount Simon core, using a micro X-ray computed tomography imaging system with the ability to scan samples at elevated pressures, was used to examine the θ of residual bubbles of CO2. Within the sandstone core the matrix appeared to be neutrally wetting, with an average θ around 90°. A large standard deviation of θ (20.8°) within the core was also observed. To resolve this discrepancy between experimental measurements, a series of Lattice Boltzmann model simulations were performed with differing intrinsic θ values. The model results with θ = 80° were shown to match the core measurements closely, in both magnitude and variation. The small volume and complex geometry of the pore spaces in which CO2 was trapped is the most likely explanation of the discrepancy between measured values, though further work is warranted.

  15. Evaluation of in vitro push-out bond strengths of different post-luting systems after artificial aging.

    PubMed

    Marigo, Luca; D' Arcangelo, Camillo; DE Angelis, Francesco; Cordaro, Massimo; Vadini, Mirco; Lajolo, Carlo

    2017-02-01

    The purpose of this study was to evaluate the push-out bond strengths of four commercially available adhesive luting systems (two self-adhesive and two etch-and-rinse systems) after mechanical aging. Forty single-rooted anterior teeth were divided into four groups according to the luting cement system used: Cement-One (Group 1); One-Q-adhesive Bond + Axia Core Dual (Group 2); SmartCem® 2 (Group 3); and XP Bond® + Core-X™ Flow (Group 4). Anatomical Post was cemented in groups 1 and 2, and D.T. Light-Post Illusion was cemented in groups 3 and 4. All samples were subjected to masticatory stress simulation consisting of 300,000 cycles applied with a computer-controlled chewing simulator. Push-out bond strength values (MPa) were calculated at each of the cervical, middle, and apical levels, and the total bond strengths were calculated as the averages of the three levels. Statistical analysis was performed with data analysis software and significance was set at P<0.05. Statistically significant differences in total bond strength were detected between the cements (Group 4: 3.28 MPa, Group 1: 2.77 MPa, Group 2: 2.36 MPa, Group 3: 1.13 MPa; P<0.05). Specifically, Group 1 exhibited a lower bond strength in the apical zone, Group 3 exhibited a higher strength in this zone, and groups 2 and 4 exhibited more homogeneous bond strengths across the different anatomical zones. After artificial aging, etch-and-rinse luting systems exhibited more homogeneous bond strengths; nevertheless, Cement-One exhibited a total bond strength second only to Core-X Flow.

  16. iCrowd: agent-based behavior modeling and crowd simulator

    NASA Astrophysics Data System (ADS)

    Kountouriotis, Vassilios I.; Paterakis, Manolis; Thomopoulos, Stelios C. A.

    2016-05-01

    Initially designed in the context of the TASS (Total Airport Security System) FP-7 project, the Crowd Simulation platform developed by the Integrated Systems Lab of the Institute of Informatics and Telecommunications at N.C.S.R. Demokritos, has evolved into a complete domain-independent agent-based behavior simulator with an emphasis on crowd behavior and building evacuation simulation. Under continuous development, it reflects an effort to implement a modern, multithreaded, data-oriented simulation engine employing latest state-of-the-art programming technologies and paradigms. It is based on an extensible architecture that separates core services from the individual layers of agent behavior, offering a concrete simulation kernel designed for high-performance and stability. Its primary goal is to deliver an abstract platform to facilitate implementation of several Agent-Based Simulation solutions with applicability in several domains of knowledge, such as: (i) Crowd behavior simulation during [in/out] door evacuation. (ii) Non-Player Character AI for Game-oriented applications and Gamification activities. (iii) Vessel traffic modeling and simulation for Maritime Security and Surveillance applications. (iv) Urban and Highway Traffic and Transportation Simulations. (v) Social Behavior Simulation and Modeling.

  17. Virtual Design Method for Controlled Failure in Foldcore Sandwich Panels

    NASA Astrophysics Data System (ADS)

    Sturm, Ralf; Fischer, S.

    2015-12-01

    For certification, novel fuselage concepts have to prove crashworthiness standards equivalent to the existing metal reference design. Due to the brittle failure behaviour of CFRP, this requirement can only be fulfilled by controlled progressive crash kinematics. Experiments showed that the failure of a twin-walled fuselage panel can be controlled by a local modification of the core through-thickness compression strength. For folded cores the required change in core properties can be introduced by a modification of the fold pattern. However, the complexity of folded cores requires a virtual design methodology for tailoring the fold pattern according to all static and crash-relevant requirements. In this context a foldcore micromodel simulation method is presented to identify the structural response of twin-walled fuselage panels with folded cores under crash-relevant loading conditions. The simulations showed that a high degree of correlation is required before simulation can replace expensive testing. In the presented studies, the necessary correlation quality could only be obtained by including imperfections of the core material in the micromodel simulation approach.

  18. Ionospheric Simulation System for Satellite Observations and Global Assimilative Modeling Experiments (ISOGAME)

    NASA Technical Reports Server (NTRS)

    Pi, Xiaoqing; Mannucci, Anthony J.; Verkhoglyadova, Olga P.; Stephens, Philip; Wilson, Brian D.; Akopian, Vardan; Komjathy, Attila; Lijima, Byron A.

    2013-01-01

    ISOGAME is designed and developed to quantitatively assess the impact of new observation systems on the capability of imaging and modeling the ionosphere. With ISOGAME, one can perform observation system simulation experiments (OSSEs). A typical OSSE using ISOGAME would involve: (1) simulating various ionospheric conditions on global scales; (2) simulating ionospheric measurements made from a constellation of low-Earth orbiters (LEOs), particularly Global Navigation Satellite System (GNSS) radio occultation data, and from ground-based global GNSS networks; (3) conducting ionospheric data assimilation experiments with the Global Assimilative Ionospheric Model (GAIM); and (4) analyzing modeling results with visualization tools. ISOGAME can provide a quantitative assessment of the accuracy of assimilative modeling with the observation system of interest; observation systems other than those based on GNSS can also be analyzed. The system is composed of a suite of software that combines GAIM, including a 4D first-principles ionospheric model and data assimilation modules; the International Reference Ionosphere (IRI) model developed by the international ionospheric research community; an observation simulator; visualization software; and orbit design, simulation, and optimization software. The core GAIM model used in ISOGAME is based on the GAIM++ code (written in C++), which includes a new high-fidelity geomagnetic field representation (multi-dipole). New visualization tools and analysis algorithms for the OSSEs are now part of ISOGAME.

  19. Lead Coolant Test Facility Systems Design, Thermal Hydraulic Analysis and Cost Estimate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soli Khericha; Edwin Harvego; John Svoboda

    2012-01-01

    The Idaho National Laboratory prepared preliminary technical and functional requirements (T&FRs), a thermal hydraulic design, and a cost estimate for a lead coolant test facility. The purpose of this small-scale facility is to simulate lead-cooled fast reactor (LFR) coolant flow in an open-lattice-geometry core using seven electrical rods and liquid lead or lead-bismuth eutectic coolant. Based on a review of current lead and lead-bismuth test facilities worldwide and the research needs listed in the Generation IV Roadmap, five broad areas of requirements were identified: (1) develop and demonstrate the feasibility of a submerged heat exchanger; (2) develop and demonstrate open-lattice flow in an electrically heated core; (3) develop and demonstrate chemistry control; (4) demonstrate safe operation; and (5) provide for future testing. This paper discusses the preliminary design of the systems, the thermal hydraulic analysis, and a simplified cost estimate. The facility thermal hydraulic design is based on a maximum simulated core power of 420 kW from the seven electrical heater rods, with an average linear heat generation rate of 300 W/cm. The core inlet temperature for liquid lead or Pb/Bi eutectic is 420 °C. The design includes approximately seventy-five data measurements such as pressure, temperature, and flow rates. The preliminary estimated cost of construction of the facility is $3.7M (in 2006 dollars). It is also estimated that the facility will require two years to be constructed and made ready for operation.
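
    The quoted figures imply a heated length per rod, a quick consistency check one can script (assuming the 420 kW is split evenly across the seven rods):

```python
def heated_length_cm(total_power_w, n_rods, linear_rate_w_cm):
    """Heated length per rod implied by an average linear heat rate."""
    return total_power_w / n_rods / linear_rate_w_cm

# 420 kW over 7 rods at 300 W/cm -> 200 cm of heated length per rod
print(heated_length_cm(420e3, 7, 300.0), "cm per rod")  # 200.0 cm per rod
```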

  20. Design and development of data acquisition system for the Louisiana accelerated loading device : final report.

    DOT National Transportation Integrated Search

    1992-09-01

    The Louisiana Transportation Research Center has established a Pavement Research Facility (PRF). The core of the PRF is a testing machine that is capable of conducting full-scale simulated and accelerated load testing of pavement materials, construct...

  1. Liquid-circulating garment controls thermal balance

    NASA Technical Reports Server (NTRS)

    Kuznetz, L. H.

    1977-01-01

    Experimental data and mathematical model of human thermoregulatory system have been used to investigate use of liquid-circulatory garment (LCG) to control thermal balance. Model proved useful as accurate simulator of such variables as sweat rate, skin temperature, core temperature, and radiative, evaporative, and LCG heat loss.

  2. Maintenance Training Simulators Design and Acquisition: Summary of Current Procedures.

    DTIC Science & Technology

    1979-11-01

    ...of maintenance training and training equipment for new systems. This organization has a core of highly experienced ISD team personnel and has evolved... [...Laboratory, Air Force Systems Command, Brooks Air Force Base, Texas 78235.] The report supports Air Force personnel in performing Instructional Systems Development (ISD) analyses to define maintenance training equipment requirements...

  3. Particle-in-cell simulation study on halo formation in anisotropic beams

    NASA Astrophysics Data System (ADS)

    Ikegami, Masanori

    2000-11-01

    In a recent paper (M. Ikegami, Nucl. Instr. and Meth. A 435 (1999) 284), we investigated halo formation processes in transversely anisotropic beams based on the particle-core model. The effect of simultaneous excitation of two normal modes of core oscillation, i.e., high- and low-frequency modes, was examined. In the present study, self-consistent particle simulations are performed to confirm the results obtained in the particle-core analysis. In these simulations, it is confirmed that the particle-core analysis can predict the halo extent accurately even in anisotropic situations. Furthermore, we find that the halo intensity is enhanced in some cases where two normal modes of core oscillation are simultaneously excited as expected in the particle-core analysis. This result is of practical importance because pure high-frequency mode oscillation has frequently been assumed in preceding halo studies. The dependence of halo intensity on the 2:1 fixed point locations is also discussed.
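
    The particle-core model being confirmed here is simple enough to sketch. The snippet below is an illustrative, normalized version (unit external focusing frequency, uniform-density KV core with matched radius 1; all parameter values are assumptions, not the paper's): the core radius obeys an envelope equation, while a test particle feels a linear space-charge force inside the core and a 1/x force outside it.

```python
def particle_core(mismatch=1.2, x0=0.9, K=0.5, t_end=300.0, dt=0.002):
    """Normalized particle-core model: KV-core envelope R(t) plus a test
    particle with a linear space-charge force inside the core and a 1/x
    force outside. Parameter values are illustrative assumptions."""
    eps2 = 1.0 - K              # emittance term chosen so R = 1 is matched
    R, Rv = mismatch, 0.0       # core radius and its velocity
    x, xv = x0, 0.0             # test-particle position and velocity
    xmax = abs(x)
    for _ in range(int(t_end / dt)):
        Ra = -R + eps2 / R**3 + K / R                       # envelope eq.
        xa = -x + (K * x / R**2 if abs(x) <= R else K / x)  # particle eq.
        Rv += Ra * dt; R += Rv * dt    # semi-implicit (symplectic) Euler
        xv += xa * dt; x += xv * dt
        xmax = max(xmax, abs(x))
    return R, xmax

R_end, x_peak = particle_core()
print(f"core radius at end: {R_end:.2f}; max particle excursion: {x_peak:.2f}")
```

    Scanning x0 over an ensemble and recording xmax reproduces the kind of resonance structure discussed above: near the 2:1 resonance, particles launched close to the edge of a mismatched core can be driven well outside it, forming the halo.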

  4. Driven topological systems in the classical limit

    NASA Astrophysics Data System (ADS)

    Duncan, Callum W.; Öhberg, Patrik; Valiente, Manuel

    2017-03-01

    Periodically driven quantum systems can exhibit topologically nontrivial behavior, even when their quasienergy bands have zero Chern numbers. Much work has been conducted on noninteracting quantum-mechanical models where this kind of behavior is present. However, the inclusion of interactions in out-of-equilibrium quantum systems can prove to be quite challenging. On the other hand, the classical counterpart of hard-core interactions can be simulated efficiently via constrained random walks. The noninteracting model, proposed by Rudner et al. [Phys. Rev. X 3, 031005 (2013), 10.1103/PhysRevX.3.031005], has a special point for which the system is equivalent to a classical random walk. We consider the classical counterpart of this model, which is exact at a special point even when hard-core interactions are present, and show how these quantitatively affect the edge currents in a strip geometry. We find that the interacting classical system is well described by a mean-field theory. Using this, we simulate the dynamics of the classical system and find that the interactions play the role of Markovian, i.e. time-dependent, disorder. By comparing the evolution of classical and quantum edge currents in small lattices, we find regimes where the classical limit considered gives good insight into the quantum problem.
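
    The constrained-random-walk counterpart of hard-core interactions is straightforward to simulate. A minimal sketch (illustrative only, not the Rudner et al. model, which adds a periodically driven hopping pattern): walkers on a strip, periodic in x and walled in y, with any move onto an occupied site rejected.

```python
import random

def sweep(occ, walkers, width, height):
    """One sweep of a hard-core (site-exclusion) random walk on a strip:
    periodic in x, hard walls in y; a move onto an occupied site is
    rejected, the classical counterpart of hard-core interactions."""
    random.shuffle(walkers)
    for i, (x, y) in enumerate(walkers):
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        nx, ny = (x + dx) % width, y + dy
        if 0 <= ny < height and (nx, ny) not in occ:
            occ.discard((x, y))
            occ.add((nx, ny))
            walkers[i] = (nx, ny)

random.seed(1)
W, H, N = 20, 6, 40
occ, walkers = set(), []
while len(occ) < N:                 # non-overlapping random initial state
    p = (random.randrange(W), random.randrange(H))
    if p not in occ:
        occ.add(p)
        walkers.append(p)
for _ in range(100):
    sweep(occ, walkers, W, H)
assert occ == set(walkers) and len(occ) == N   # exclusion never violated
```

    Measuring the net x-displacement of walkers in the top and bottom rows would give a crude edge-current observable; this undriven walk has none, so the sketch only demonstrates the exclusion constraint itself.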

  5. Tuning the field distribution and fabrication of an Al@ZnO core-shell nanostructure for a SPR-based fiber optic phenyl hydrazine sensor.

    PubMed

    Tabassum, Rana; Kaur, Parvinder; Gupta, Banshi D

    2016-05-27

    We report the fabrication and characterization of a surface plasmon resonance (SPR)-based fiber optic sensor that uses coatings of silver and aluminum (Al)-zinc oxide (ZnO) core-shell nanostructure (Al@ZnO) for the detection of phenyl hydrazine (Ph-Hyd). To optimize the volume fraction (f) of Al in ZnO and the thickness of the core-shell nanostructure layer (d), the electric field intensity along the normal to the multilayer system is simulated using the two-dimensional multilayer matrix method. The Al@ZnO core-shell nanostructure is prepared using the laser ablation technique. Various probes are fabricated with different values of f and an optimized thickness of core-shell nanostructure for the characterization of the Ph-Hyd sensor. The performance of the Ph-Hyd sensor is evaluated in terms of sensitivity. It is found that the Ag/Al@ZnO nanostructure core-shell-coated SPR probe with f = 0.25 and d = 0.040 μm possesses the maximum sensitivity towards Ph-Hyd. These results are in agreement with the simulated ones obtained using electric field intensity. In addition, the performance of the proposed probe is compared with that of probes coated with (i) Al@ZnO nanocomposite, (ii) Al nanoparticles and (iii) ZnO nanoparticles. It is found that the probe coated with an Al@ZnO core-shell nanostructure shows the largest resonance wavelength shift. The detailed mechanism of the sensing (involving chemical reactions) is presented. The sensor also manifests optimum performance at pH 7.

  6. Tolerancing the alignment of large-core optical fibers, fiber bundles and light guides using a Fourier approach.

    PubMed

    Sawyer, Travis W; Petersburg, Ryan; Bohndiek, Sarah E

    2017-04-20

    Optical fiber technology is found in a wide variety of applications to flexibly relay light between two points, enabling information transfer across long distances and allowing access to hard-to-reach areas. Large-core optical fibers and light guides find frequent use in illumination and spectroscopic applications, for example, endoscopy and high-resolution astronomical spectroscopy. Proper alignment is critical for maximizing throughput in optical fiber coupling systems; however, there currently are no formal approaches to tolerancing the alignment of a light-guide coupling system. Here, we propose a Fourier alignment sensitivity (FAS) algorithm to determine the optimal tolerances on the alignment of a light guide by computing the alignment sensitivity. The algorithm shows excellent agreement with both simulated and experimentally measured values and improves on the computation time of equivalent ray-tracing simulations by two orders of magnitude. We then apply FAS to tolerance and fabricate a coupling system, which is shown to meet specifications, thus validating FAS as a tolerancing technique. These results indicate that FAS is a flexible and rapid means to quantify the alignment sensitivity of a light guide, widely informing the design and tolerancing of coupling systems.
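
    FAS itself is Fourier-based and is described in the paper; as a much cruder stand-in that still shows what "alignment sensitivity" means, one can model the throughput of a top-hat beam into a circular core as a geometric overlap and find the decenter at which a chosen loss (here 1%, an assumed criterion; function names are invented) is reached.

```python
import math

def overlap_area(r1, r2, d):
    """Area common to two circles with radii r1, r2 and center distance d."""
    if d >= r1 + r2:
        return 0.0
    if d <= abs(r1 - r2):
        return math.pi * min(r1, r2) ** 2
    a1 = r1 * r1 * math.acos((d * d + r1 * r1 - r2 * r2) / (2 * d * r1))
    a2 = r2 * r2 * math.acos((d * d + r2 * r2 - r1 * r1) / (2 * d * r2))
    tri = 0.5 * math.sqrt((-d + r1 + r2) * (d + r1 - r2)
                          * (d - r1 + r2) * (d + r1 + r2))
    return a1 + a2 - tri

def decenter_tolerance(r_spot, r_core, loss=0.01, step=1e-4):
    """Lateral decenter at which top-hat-beam throughput drops by `loss`."""
    area = math.pi * r_spot ** 2
    full = overlap_area(r_spot, r_core, 0.0) / area
    d = 0.0
    while overlap_area(r_spot, r_core, d) / area > full * (1.0 - loss):
        d += step
    return d

tol = decenter_tolerance(0.9, 1.0)
print(f"decenter for 1% throughput loss: {tol:.3f} (spot 0.9, core 1.0)")
```

    A full ray-trace would capture angular and axial misalignment too; the point of a sensitivity-based approach like FAS is to get such tolerance numbers orders of magnitude faster.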

  7. Tolerancing the alignment of large-core optical fibers, fiber bundles and light guides using a Fourier approach

    PubMed Central

    Sawyer, Travis W.; Petersburg, Ryan; Bohndiek, Sarah E.

    2017-01-01

    Optical fiber technology is found in a wide variety of applications to flexibly relay light between two points, enabling information transfer across long distances and allowing access to hard-to-reach areas. Large-core optical fibers and light guides find frequent use in illumination and spectroscopic applications; for example, endoscopy and high-resolution astronomical spectroscopy. Proper alignment is critical for maximizing throughput in optical fiber coupling systems; however, there are currently no formal approaches to tolerancing the alignment of a light-guide coupling system. Here, we propose a Fourier Alignment Sensitivity (FAS) algorithm to determine the optimal tolerances on the alignment of a light guide by computing the alignment sensitivity. The algorithm shows excellent agreement with both simulated and experimentally measured values and improves on the computation time of equivalent ray-tracing simulations by two orders of magnitude. We then apply FAS to tolerance and fabricate a coupling system, which is shown to meet specifications, thus validating FAS as a tolerancing technique. These results indicate that FAS is a flexible and rapid means to quantify the alignment sensitivity of a light guide, widely informing the design and tolerancing of coupling systems. PMID:28430250

  8. Protons and alpha particles in the expanding solar wind: Hybrid simulations

    NASA Astrophysics Data System (ADS)

    Hellinger, Petr; Trávníček, Pavel M.

    2013-09-01

    We present results of a two-dimensional hybrid expanding box simulation of a plasma system with three ion populations, beam and core protons, and alpha particles (and fluid electrons), drifting with respect to each other. The expansion with a strictly radial magnetic field leads to a decrease of the ion perpendicular to parallel temperature ratios as well as to an increase of the ratio between the ion relative velocities and the local Alfvén velocity, creating free energy for many different instabilities. The system is most of the time marginally stable with respect to kinetic instabilities, mainly due to the ion relative velocities; these instabilities determine the system evolution, counteracting some effects of the expansion. Nonlinear evolution of these instabilities leads to large modifications of the ion velocity distribution functions. The beam protons and alpha particles are decelerated with respect to the core protons, and all the populations are cooled in the parallel direction and heated in the perpendicular one. On the macroscopic level, the kinetic instabilities cause large departures of the system evolution from the double adiabatic prediction and lead to perpendicular heating and parallel cooling rates which are comparable to the heating rates estimated from the Helios observations.

  9. Coilable Crystalline Fiber (CCF) Lasers and their Scalability

    DTIC Science & Technology

    2014-03-01

    Dubinskii, M. True Crystalline Fibers: Double-Clad LMA Design Concept of Tm:YAG-Core Fiber and Mode Simulation. Proc. SPIE 2012, 8237, 82373M. Beach, R. J.; Mitchell, S. C.; Meissner, H. E.; Meissner, O. R.; Krupke, W. ...

  10. Cardiovascular Deconditioning in Humans: Human Studies Core

    NASA Technical Reports Server (NTRS)

    Williams, Gordon

    1999-01-01

    Major cardiovascular problems, secondary to cardiovascular deconditioning, may occur on extended space missions. While it is generally assumed that the microgravity state is the primary cause of cardiovascular deconditioning, sleep deprivation and disruption of diurnal rhythms may also play an important role. Factors that could be modified by either or both of these perturbations include: autonomic function and short-term cardiovascular reflexes, vasoreactivity, circadian rhythm of cardiovascular hormones (specifically the renin-angiotensin system) and renal sodium handling and hormonal influences on that process, venous compliance, cardiac mass, and cardiac conduction processes. The purpose of the Human Studies Core is to provide the infrastructure to conduct human experiments which will allow for the assessment of the likely role of such factors in the space travel associated cardiovascular deconditioning process and to develop appropriate countermeasures. The Core takes advantage of a newly-created Intensive Physiologic Monitoring (IPM) Unit at the Brigham and Women's Hospital, Boston, MA, to perform these studies. The Core includes two general experimental protocols. The first protocol involves a head down tilt bed-rest study to simulate microgravity. The second protocol includes the addition of a disruption of circadian rhythms to the simulated microgravity environment. Before and after each of these environmental manipulations, the subjects will undergo acute stressors simulating changes in volume and/or stress, which could occur in space and on return to Earth. The subjects are maintained in a rigidly controlled environment with fixed light/dark cycles, activity pattern, and dietary intake of nutrients, fluids, ions and calories.

  11. Ligand structure and mechanical properties of single-nanoparticle thick membranes

    DOE PAGES

    Salerno, Kenneth Michael; Bolintineanu, Dan S.; Lane, J. Matthew D.; ...

    2015-06-16

    We believe that the high mechanical stiffness of single-nanoparticle-thick membranes is the result of the local structure of ligand coatings that mediate interactions between nanoparticles. These ligand structures are not directly observable experimentally. We use molecular dynamics simulations to observe variations in ligand structure and simultaneously measure variations in membrane mechanical properties. We have shown previously that ligand end group has a large impact on ligand structure and membrane mechanical properties. Here we introduce and apply quantitative molecular structure measures to these membranes and extend analysis to multiple nanoparticle core sizes and ligand lengths. Simulations of nanoparticle membranes with a nanoparticle core diameter of 4 or 6 nm, a ligand length of 11 or 17 methylenes, and either carboxyl (COOH) or methyl (CH3) ligand end groups are presented. In carboxyl-terminated ligand systems, structure and interactions are dominated by an end-to-end orientation of ligands. In methyl-terminated ligand systems large ordered ligand structures form, but nanoparticle interactions are dominated by disordered, partially interdigitated ligands. Core size and ligand length also affect both ligand arrangement within the membrane and the membrane's macroscopic mechanical response, but are secondary to the role of the ligand end group. Additionally, the particular end group (COOH or CH3) alters the nature of how ligand length, in turn, affects the membrane properties. The effect of core size does not depend on the ligand end group, with larger cores always leading to stiffer membranes. Asymmetry in the stress and ligand density is observed in membranes during preparation at a water-vapor interface, with the stress asymmetry persisting in all membranes after drying.

  12. Evaluation of HFIR LEU Fuel Using the COMSOL Multiphysics Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Primm, Trent; Ruggles, Arthur; Freels, James D

    2009-03-01

    A finite element computational approach to simulation of the High Flux Isotope Reactor (HFIR) Core Thermal-Fluid behavior is developed. These models were developed to facilitate design of a low enriched core for the HFIR, which will have different axial and radial flux profiles from the current HEU core and thus will require fuel and poison load optimization. This report outlines a stepwise implementation of this modeling approach using the commercial finite element code, COMSOL, with initial assessment of fuel, poison and clad conduction modeling capability, followed by assessment of mating of the fuel conduction models to a one-dimensional fluid model typical of legacy simulation techniques for the HFIR core. The model is then extended to fully couple 2-dimensional conduction in the fuel to a 2-dimensional thermo-fluid model of the coolant for a HFIR core cooling sub-channel with additional assessment of simulation outcomes. Finally, 3-dimensional simulations of a fuel plate and cooling channel are presented.

  13. Simulating the minimum core for hydrophobic collapse in globular proteins.

    PubMed Central

    Tsai, J.; Gerstein, M.; Levitt, M.

    1997-01-01

    To investigate the nature of hydrophobic collapse considered to be the driving force in protein folding, we have simulated aqueous solutions of two model hydrophobic solutes, methane and isobutylene. Using a novel methodology for determining contacts, we can precisely follow hydrophobic aggregation as it proceeds through three stages: dispersed, transition, and collapsed. Theoretical modeling of the cluster formation observed by simulation indicates that this aggregation is cooperative and that the simulations favor the formation of a single cluster midway through the transition stage. This defines a minimum solute hydrophobic core volume. We compare this with protein hydrophobic core volumes determined from solved crystal structures. Our analysis shows that the solute core volume roughly estimates the minimum core size required for independent hydrophobic stabilization of a protein and defines a limiting concentration of nonpolar residues that can cause hydrophobic collapse. These results suggest that the physical forces driving the aggregation of hydrophobic molecules in water are indeed responsible for protein folding. PMID:9416609
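
    The clustering analysis behind the three aggregation stages can be illustrated with a simple union-find over a distance cutoff (a stand-in for the paper's more careful contact criterion; names, coordinates, and the cutoff are all invented for illustration):

```python
import itertools
import math

def largest_cluster_fraction(coords, cutoff):
    """Fraction of solutes in the largest contact cluster, using a plain
    distance cutoff as a stand-in for the paper's contact definition."""
    n = len(coords)
    parent = list(range(n))
    def find(i):                       # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in itertools.combinations(range(n), 2):
        if math.dist(coords[i], coords[j]) <= cutoff:
            parent[find(i)] = find(j)  # union the two clusters
    sizes = {}
    for i in range(n):
        r = find(i)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values()) / n

# Toy dispersed vs. collapsed configurations:
print(largest_cluster_fraction([(0, 0, 0), (10, 0, 0), (20, 0, 0)], 1.0))
print(largest_cluster_fraction([(0, 0, 0), (0.5, 0, 0), (1.0, 0, 0)], 0.6))
```

    Tracking this fraction over simulation frames distinguishes the dispersed stage (many small clusters) from the collapsed stage (one dominant cluster), with the transition stage in between.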

  14. Measurement and simulation of thermal neutron flux distribution in the RTP core

    NASA Astrophysics Data System (ADS)

    Rabir, Mohamad Hairie B.; Jalal Bayar, Abi Muttaqin B.; Hamzah, Na'im Syauqi B.; Mustafa, Muhammad Khairul Ariff B.; Karim, Julia Bt. Abdul; Zin, Muhammad Rawi B. Mohamed; Ismail, Yahya B.; Hussain, Mohd Huzair B.; Mat Husin, Mat Zin B.; Dan, Roslan B. Md; Ismail, Ahmad Razali B.; Husain, Nurfazila Bt.; Jalil Khan, Zareen Khan B. Abdul; Yakin, Shaiful Rizaide B. Mohd; Saad, Mohamad Fauzi B.; Masood, Zarina Bt.

    2018-01-01

    The in-core thermal neutron flux distribution was determined by measurement and simulation for the Malaysian PUSPATI TRIGA Reactor (RTP). In this work, online thermal neutron flux measurement using a Self-Powered Neutron Detector (SPND) was performed to verify and validate the computational methods used for neutron flux calculation in the RTP. The experimental results served as validation for calculations performed with the Monte Carlo code MCNP. The detailed in-core neutron flux distributions were estimated using the MCNP mesh tally method. The neutron flux map obtained reveals the heterogeneous configuration of the core. Based on both measurement and simulation, the thermal flux profile peaks at the centre of the core and gradually decreases towards its outer side. The results show good agreement between calculation and measurement, with both giving the same radial thermal flux profile inside the core; the MCNP model overestimates the flux, with a maximum discrepancy of around 20% relative to the SPND measurements. Since the model also predicts the in-core neutron flux distribution well, it can be used for characterization of the full core: neutron flux and spectrum calculations, dose rate calculations, reaction rate calculations, etc.
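
    The calculation-to-measurement comparison reduces to a discrepancy figure over paired detector positions. A sketch with invented profile values (the real comparison uses SPND readings against MCNP mesh tallies):

```python
def max_relative_discrepancy(calculated, measured):
    """Maximum |C - M| / M over paired flux points."""
    return max(abs(c - m) / m for c, m in zip(calculated, measured))

# Invented radial profiles, peaked at the core centre as in the abstract:
spnd = [1.0, 1.8, 2.4, 1.9, 1.1]        # measured thermal flux (a.u.)
mcnp = [1.15, 2.05, 2.85, 2.2, 1.25]    # mesh-tally estimate (a.u.)
print(f"max C/M discrepancy: {max_relative_discrepancy(mcnp, spnd):.0%}")
```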

  15. A role for self-gravity at multiple length scales in the process of star formation.

    PubMed

    Goodman, Alyssa A; Rosolowsky, Erik W; Borkin, Michelle A; Foster, Jonathan B; Halle, Michael; Kauffmann, Jens; Pineda, Jaime E

    2009-01-01

    Self-gravity plays a decisive role in the final stages of star formation, where dense cores (size approximately 0.1 parsecs) inside molecular clouds collapse to form star-plus-disk systems. But self-gravity's role at earlier times (and on larger length scales, such as approximately 1 parsec) is unclear; some molecular cloud simulations that do not include self-gravity suggest that 'turbulent fragmentation' alone is sufficient to create a mass distribution of dense cores that resembles, and sets, the stellar initial mass function. Here we report a 'dendrogram' (hierarchical tree-diagram) analysis that reveals that self-gravity plays a significant role over the full range of possible scales traced by ¹³CO observations in the L1448 molecular cloud, but not everywhere in the observed region. In particular, more than 90 per cent of the compact 'pre-stellar cores' traced by peaks of dust emission are projected on the sky within one of the dendrogram's self-gravitating 'leaves'. As these peaks mark the locations of already-forming stars, or of those probably about to form, a self-gravitating cocoon seems a critical condition for their existence. Turbulent fragmentation simulations without self-gravity, even of unmagnetized isothermal material, can yield mass and velocity power spectra very similar to what is observed in clouds like L1448. But a dendrogram of such a simulation shows that nearly all the gas in it (much more than in the observations) appears to be self-gravitating. A potentially significant role for gravity in 'non-self-gravitating' simulations suggests inconsistency in simulation assumptions and output, and that it is necessary to include self-gravity in any realistic simulation of the star-formation process on subparsec scales.

  16. AMR Studies of Star Formation: Simulations and Simulated Observations

    NASA Astrophysics Data System (ADS)

    Offner, Stella; McKee, C. F.; Klein, R. I.

    2009-01-01

    Molecular clouds are typically observed to be approximately virialized with gravitational and turbulent energy in balance, yielding a star formation rate of a few percent. The origin and characteristics of the observed supersonic turbulence are poorly understood, and without continued energy injection the turbulence is predicted to decay within a cloud dynamical time. Recent observations and analytic work have suggested a strong connection between the initial stellar mass function, the core mass function, and turbulence characteristics. The role of magnetic fields in determining core lifetimes, shapes, and kinematic properties remains hotly debated. Simulations are a formidable tool for studying the complex process of star formation and addressing these puzzles. I present my results modeling low-mass star formation using the ORION adaptive mesh refinement (AMR) code. I investigate the properties of forming cores and protostars in simulations in which the turbulence is driven to maintain virial balance and where it is allowed to decay. I will discuss simulated observations of cores in dust emission and in molecular tracers and compare to observations of local star-forming clouds. I will also present results from ORION cluster simulations including flux-limited diffusion radiative transfer and show that radiative feedback, even from low-mass stars, has a significant effect on core fragmentation, disk properties, and the IMF. Finally, I will discuss the new simulation frontier of AMR multigroup radiative transfer.

  17. Time-dependent simulations of disk-embedded planetary atmospheres

    NASA Astrophysics Data System (ADS)

    Stökl, A.; Dorfi, E. A.

    2014-03-01

    At the early stages of evolution of planetary systems, young Earth-like planets still embedded in the protoplanetary disk accumulate disk gas gravitationally into planetary atmospheres. The established way to study such atmospheres is with hydrostatic models, even though in many cases the assumption of stationarity is unlikely to be fulfilled. Furthermore, such models rely on the specification of a planetary luminosity, attributed to a continuous, highly uncertain accretion of planetesimals onto the surface of the solid core. We present for the first time time-dependent, dynamic simulations of the accretion of nebula gas into an atmosphere around a proto-planet and the evolution of such embedded atmospheres, while integrating the thermal energy budget of the solid core. The spherically symmetric models computed with the TAPIR-Code (short for The adaptive, implicit RHD-Code) range from the surface of the rocky core up to the Hill radius, where the surrounding protoplanetary disk provides the boundary conditions. The TAPIR-Code includes the hydrodynamics equations, gray radiative transport, and convective energy transport. The results indicate that disk-embedded planetary atmospheres evolve along comparatively simple outlines and, in particular, settle, depending on the mass of the solid core, at characteristic surface temperatures and planetary luminosities, quite independently of numerical parameters and initial conditions. For sufficiently massive cores, this evolution ultimately leads to runaway accretion and the formation of a gas planet.

  18. Convective cooling in a pool-type research reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sipaun, Susan, E-mail: susan@nm.gov.my; Usman, Shoaib, E-mail: usmans@mst.edu

    2016-01-22

    A reactor produces heat arising from fission reactions in the nuclear core. In the Missouri University of Science and Technology research reactor (MSTR), this heat is removed by natural convection, with demineralised water as the coolant/moderator. Heat energy is transferred from the core into the coolant, and the heated water eventually evaporates from the open pool surface. A secondary cooling system was installed to actively remove excess heat arising from prolonged reactor operations. The nuclear core consists of uranium silicide-aluminium dispersion fuel (U3Si2Al) in the form of rectangular plates; gaps between the plates allow coolant to pass through and carry away heat. A study was carried out to map heat flow and to predict the system's performance via STAR-CCM+ simulation. The core was approximated as a porous medium with a porosity of 0.7027. The reactor is rated at 200 kW, and the total heat density is approximately 1.07×10⁷ W m⁻³. An MSTR model consisting of 20% of the MSTR's nuclear core in a third of the reactor pool was developed. At 35% pump capacity, the simulation results for the MSTR model showed that water is drawn out of the pool at a rate of 1.28 kg s⁻¹ through the 4" pipe, and predicted a pool surface temperature not exceeding 30 °C.
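
    The quoted power and heat density imply the heat-generating core volume, a quick consistency check (assuming the 1.07×10⁷ W m⁻³ figure applies uniformly over the active core):

```python
def core_volume_m3(power_w, heat_density_w_m3):
    """Active core volume implied by total power and volumetric heat density."""
    return power_w / heat_density_w_m3

# 200 kW at ~1.07e7 W/m^3 -> roughly 0.019 m^3 of heat-generating core
v = core_volume_m3(200e3, 1.07e7)
print(f"implied heat-generating volume: {v * 1000:.1f} litres")  # ~18.7 litres
```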

  19. Development of an Efficient CFD Model for Nuclear Thermal Thrust Chamber Assembly Design

    NASA Technical Reports Server (NTRS)

    Cheng, Gary; Ito, Yasushi; Ross, Doug; Chen, Yen-Sen; Wang, Ten-See

    2007-01-01

    The objective of this effort is to develop an efficient and accurate computational methodology to predict both the detailed thermo-fluid environment and the global characteristics of the internal ballistics for a hypothetical solid-core nuclear thermal thrust chamber assembly (NTTCA). Several numerical and multi-physics thermo-fluid models, such as real fluid, chemically reacting, turbulence, conjugate heat transfer, porosity, and power generation, were incorporated into an unstructured-grid, pressure-based computational fluid dynamics solver as the underlying computational methodology. Numerical simulations of the detailed thermo-fluid environment of a single flow element provide a mechanism to estimate the thermal stress and the possible occurrence of mid-section corrosion of the solid core. In addition, the detailed simulation results were employed to fine-tune the porosity model to mimic the pressure drop and thermal load of the coolant flow through a single flow element. The tuned porosity model enables an efficient simulation of the entire NTTCA system and evaluation of its performance during the design cycle.
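
    The porosity-model tuning described here amounts to calibrating a pressure-drop law against the detailed single-flow-element results. A hedged sketch, fitting Darcy-Forchheimer-style coefficients a and b in dp/L = a·v + b·v² by least squares (the function and sample data are illustrative, not the actual NTTCA calibration):

```python
def fit_porous_coeffs(velocities, dp_per_length):
    """Least-squares fit of a, b in dp/L = a*v + b*v**2 via the 2x2
    normal equations (no regression library needed)."""
    s11 = sum(v * v for v in velocities)
    s12 = sum(v ** 3 for v in velocities)
    s22 = sum(v ** 4 for v in velocities)
    r1 = sum(v * y for v, y in zip(velocities, dp_per_length))
    r2 = sum(v * v * y for v, y in zip(velocities, dp_per_length))
    det = s11 * s22 - s12 * s12
    a = (r1 * s22 - r2 * s12) / det
    b = (r2 * s11 - r1 * s12) / det
    return a, b

# Invented "detailed simulation" data generated from a = 2.0, b = 0.5:
vs = [1.0, 2.0, 3.0, 4.0, 5.0]
dps = [2.0 * v + 0.5 * v * v for v in vs]
a, b = fit_porous_coeffs(vs, dps)
print(f"fitted coefficients: a = {a:.3f}, b = {b:.3f}")
```

    In a real calibration the (v, dp/L) pairs would come from the detailed single-flow-element CFD runs, and the fitted coefficients would feed the porous-media terms of the system-level model.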

  20. A flooding induced station blackout analysis for a pressurized water reactor using the RISMC toolkit

    DOE PAGES

    Mandelli, Diego; Prescott, Steven; Smith, Curtis; ...

    2015-05-17

    In this paper we evaluate the impact of a power uprate on a pressurized water reactor (PWR) for a tsunami-induced flooding test case. This analysis is performed using the RISMC toolkit: the RELAP-7 and RAVEN codes. RELAP-7 is the new generation of system analysis codes responsible for simulating the thermal-hydraulic dynamics of PWR and boiling water reactor systems. RAVEN has two capabilities: to act as a controller of the RELAP-7 simulation (e.g., component/system activation) and to perform statistical analyses. In our case, the simulation of the flooding is performed using an advanced smoothed particle hydrodynamics code called NEUTRINO. The obtained results allow the user to investigate and quantify the impact of the timing and sequencing of events on system safety. The impact of the power uprate is determined in terms of both core damage probability and safety margins.
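
    The margins-based logic can be caricatured in a few lines: sample an uncertain power-recovery time, call it core damage when recovery exceeds the station-blackout coping time, and watch the probability move when a power uprate shortens that coping time. All distributions and numbers below are invented for illustration; the actual analysis uses RELAP-7/RAVEN with NEUTRINO flood simulation.

```python
import random

def core_damage_probability(coping_hours, n=100_000, seed=42):
    """Toy Monte Carlo: P(core damage) = P(AC recovery time > coping time).
    Recovery time ~ lognormal with a median of ~4.5 h (purely illustrative)."""
    rng = random.Random(seed)
    fails = sum(1 for _ in range(n)
                if rng.lognormvariate(1.5, 0.8) > coping_hours)
    return fails / n

p_base = core_damage_probability(coping_hours=8.0)    # baseline plant
p_uprate = core_damage_probability(coping_hours=6.0)  # uprate eats margin
print(f"baseline: {p_base:.3f}, after uprate: {p_uprate:.3f}")
```

    The direction of the comparison (p_uprate > p_base) is the kind of margin statement the RISMC toolkit quantifies, with the timing distributions supplied by the physics codes instead of being assumed.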

  1. Towards a unified Global Weather-Climate Prediction System

    NASA Astrophysics Data System (ADS)

    Lin, S. J.

    2016-12-01

    The Geophysical Fluid Dynamics Laboratory has been developing a unified regional-global modeling system with variable-resolution capabilities that can be used for severe weather prediction and kilometer-scale regional climate simulation within a unified global modeling system. The foundation of this flexible modeling system is the nonhydrostatic Finite-Volume Dynamical Core on the Cubed-Sphere (FV3). A unique aspect of FV3 is that it is "vertically Lagrangian" (Lin 2004), essentially reducing the equation set to two dimensions, and this is the single most important reason why FV3 outperforms other non-hydrostatic cores. Owing to its accuracy, adaptability, and computational efficiency, FV3 has been selected as the "engine" for NOAA's Next Generation Global Prediction System (NGGPS). We have built into the modeling system a stretched grid, a two-way regional-global nested grid, and an optimal combination of the stretched and two-way nesting capabilities, making kilometer-scale regional simulations within a global modeling system feasible. Our main scientific goal is to enable simulations of high-impact weather phenomena (such as tornadoes, thunderstorms, and category-5 hurricanes) within an IPCC-class climate modeling system, previously regarded as impossible. In this presentation I will demonstrate that, with FV3, it is computationally feasible to simulate not only super-cell thunderstorms but also the subsequent genesis of tornado-like vortices using a global model that was originally designed for climate simulations. The development and tuning strategies for traditional weather and climate models are fundamentally different because they target different metrics. We were able to adapt traditional "climate" metrics and standards, such as angular momentum conservation, energy conservation, and flux balance at the top of the atmosphere, to gain insight into problems of traditional models for medium-range weather prediction, and vice versa.
Therefore, the unification in weather and climate models can happen not just at the algorithm or parameterization level, but also in the metric and tuning strategy used for both applications, and ultimately, with benefits to both weather and climate applications.

  2. Implementation of a Career Decision Game on a Time Shared Computer: An Exploration of Its Value in a Simulated Guidance Environment. Information System for Vocational Decisions.

    ERIC Educational Resources Information Center

    Roman, Richard Allan

    The Information System for Vocational Decisions (ISVD) places Boocock's (1967) Life Career Game in the core of its operating system. This paper considers the types of interaction that will be required of the system, and discusses the role that a career decision game might play in its total context. The paper takes an into-the-future look at the…

  3. Performance Evaluation of NWChem Ab-Initio Molecular Dynamics (AIMD) Simulations on the Intel® Xeon Phi™ Processor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bylaska, Eric J.; Jacquelin, Mathias; De Jong, Wibe A.

    2017-10-20

    Ab-initio Molecular Dynamics (AIMD) methods are an important class of algorithms, as they enable scientists to understand the chemistry and dynamics of molecular and condensed-phase systems while retaining a first-principles-based description of their interactions. Many-core architectures such as the Intel® Xeon Phi™ processor are an interesting and promising target for these algorithms, as they can provide the computational power that is needed to solve interesting problems in chemistry. In this paper, we describe the efforts of refactoring the existing AIMD plane-wave method of NWChem from an MPI-only implementation to a scalable, hybrid code that employs MPI and OpenMP to exploit the capabilities of current and future many-core architectures. We describe the optimizations required to get close to optimal performance for the multiplication of the tall-and-skinny matrices that form the core of the computational algorithm. We present strong scaling results on the complete AIMD simulation for a test case that simulates 256 water molecules and that strong-scales well on a cluster of 1024 nodes of Intel Xeon Phi processors. We compare the performance with that obtained on a cluster of dual-socket Intel® Xeon® E5–2698v3 processors.
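    The tall-and-skinny matrix products mentioned above have the shape C = AᵀB with far more rows than columns, so each process can form a small partial product over its row block and the partial products are summed (an MPI all-reduce in the real code). A serial NumPy sketch of that structure (block count and sizes are arbitrary, not NWChem's):

    ```python
    import numpy as np

    def tall_skinny_atb(A, B, n_blocks=4):
        """Compute C = A.T @ B for tall-and-skinny A, B (rows >> cols) by
        accumulating partial products over row blocks.  In the parallel
        code each block would live on a different rank and the final sum
        would be an MPI all-reduce; the loop here is a serial stand-in."""
        rows = A.shape[0]
        bounds = np.linspace(0, rows, n_blocks + 1, dtype=int)
        C = np.zeros((A.shape[1], B.shape[1]))
        for lo, hi in zip(bounds[:-1], bounds[1:]):
            C += A[lo:hi].T @ B[lo:hi]   # small (cols x cols) partial result
        return C

    rng = np.random.default_rng(0)
    A = rng.standard_normal((10_000, 8))   # e.g. plane-wave coefficients
    B = rng.standard_normal((10_000, 8))   # x a handful of orbitals
    C = tall_skinny_atb(A, B)
    ```

    The key property is that the communicated object is the tiny cols-by-cols result, not the tall inputs, which is what makes the operation scalable.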

  4. Parallel Agent-Based Simulations on Clusters of GPUs and Multi-Core Processors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aaby, Brandon G; Perumalla, Kalyan S; Seal, Sudip K

    2010-01-01

    An effective latency-hiding mechanism is presented for the parallelization of agent-based model simulations (ABMS) with millions of agents. The mechanism is designed to accommodate the hierarchical organization as well as the heterogeneity of current state-of-the-art parallel computing platforms. We use it to explore the computation vs. communication trade-off continuum available with the deep computational and memory hierarchies of extant platforms, and present a novel analytical model of the trade-off. We describe our implementation and report preliminary performance results on two distinct parallel platforms suitable for ABMS: CUDA threads on multiple networked graphical processing units (GPUs), and pthreads on multi-core processors. Message Passing Interface (MPI) is used for inter-GPU as well as inter-socket communication on a cluster of multiple GPUs and multi-core processors. Results indicate the benefits of our latency-hiding scheme, delivering over 100-fold improvements in runtime for certain benchmark ABMS application scenarios with several million agents. This speed improvement is obtained on a system that is already two to three orders of magnitude faster on one GPU than an equivalent CPU-based execution in a popular Java simulator. Thus, the overall execution of our current work is over four orders of magnitude faster when executed on multiple GPUs.

  5. U.S. Army Research Laboratory (ARL) XPairIt Simulator for Peptide Docking and Analysis

    DTIC Science & Technology

    2014-07-01

    results from a case study, docking a short peptide to a small protein. For this test we choose the 1RXZ system from the Protein Data Bank, which...core of XPairIt, which additionally contains many data management and organization options, analysis tools, and custom simulation methodology. Two

  6. Dependence of core heating properties on heating pulse duration and intensity

    NASA Astrophysics Data System (ADS)

    Johzaki, Tomoyuki; Nagatomo, Hideo; Sunahara, Atsushi; Cai, Hongbo; Sakagami, Hitoshi; Mima, Kunioki

    2009-11-01

    In cone-guided fast ignition, an imploded core is heated by the energy transport of fast electrons generated by an ultra-intense short-pulse laser at the cone inner surface. Fast core heating (~800 eV) has been demonstrated in integrated experiments with the GEKKO-XII + PW laser systems. As the next step, experiments using a more powerful heating laser, FIREX, have been started at ILE, Osaka University. In FIREX-I (phase I of FIREX), our goal is the demonstration of efficient core heating (Ti ~ 5 keV) using the newly developed 10 kJ LFEX laser. In the first integrated experiments, the LFEX laser is operated in a low-energy mode (~0.5 kJ/4 ps) to validate the previous GEKKO + PW experiments. Between the two experiments, though the laser energy is similar (~0.5 kJ), the pulse durations differ: ~0.5 ps for the PW laser and ~4 ps for the LFEX laser. In this paper, we evaluate the dependence of core heating properties on the heating pulse duration on the basis of integrated simulations with the FI^3 (Fast Ignition Integrated Interconnecting) code system.

  7. Three-phase inductive-coupled structures for contactless PHEV charging system

    NASA Astrophysics Data System (ADS)

    Lee, Jia-You; Shen, Hung-Yu; Li, Cheng-Bin

    2016-07-01

    In this article, a new three-phase inductive-coupled structure is proposed for a contactless plug-in hybrid electric vehicle (PHEV) charging system compliant with SAE J-1773. Four possible three-phase core structures are presented and investigated by finite element analysis. To study the correlation between the core geometric parameters and the coupling coefficient, the magnetic equivalent circuit model of each structure is also established. In accordance with the simulation results, low reluctance and the sharing of flux paths in the core material are achieved by the proposed inductive-coupled structure with an arc-shaped, three-phase symmetrical core. This results in a compensation of the magnetic flux between phases, a continuous flow of output power in the inductive-coupled structure, and a higher coupling coefficient between the inductive-coupled structures. A comparison of the coupling coefficient, mutual inductance, and self-inductance between theoretical and measured results is also performed to verify the proposed model. A 1 kW laboratory-scale prototype of the contactless PHEV charging system with the proposed arc-shaped three-phase inductive-coupled structure is implemented and tested. An overall system efficiency of 88% is measured when two series lithium iron phosphate battery packs of 25.6 V/8.4 Ah are charged.

  8. GENASIS: General Astrophysical Simulation System. I. Refinable Mesh and Nonrelativistic Hydrodynamics

    NASA Astrophysics Data System (ADS)

    Cardall, Christian Y.; Budiardja, Reuben D.; Endeve, Eirik; Mezzacappa, Anthony

    2014-02-01

    GenASiS (General Astrophysical Simulation System) is a new code being developed initially and primarily, though by no means exclusively, for the simulation of core-collapse supernovae on the world's leading capability supercomputers. This paper—the first in a series—demonstrates a centrally refined coordinate patch suitable for gravitational collapse and documents methods for compressible nonrelativistic hydrodynamics. We benchmark the hydrodynamics capabilities of GenASiS against many standard test problems; the results illustrate the basic competence of our implementation, demonstrate the strengths and limitations of the HLLC relative to the HLL Riemann solver in a number of interesting cases, and provide preliminary indications of the code's ability to scale and to function with cell-by-cell fixed-mesh refinement.

  9. Using Discrete Event Simulation for Programming Model Exploration at Extreme-Scale: Macroscale Components for the Structural Simulation Toolkit (SST).

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilke, Jeremiah J; Kenny, Joseph P.

    2015-02-01

    Discrete event simulation provides a powerful mechanism for designing and testing new extreme-scale programming models for high-performance computing. Rather than debug, run, and wait for results on an actual system, a design can first iterate through a simulator. This is particularly useful when test beds cannot be used, i.e., to explore hardware or scales that do not yet exist or are inaccessible. Here we detail the macroscale components of the Structural Simulation Toolkit (SST). Instead of depending on trace replay or state machines, the simulator is architected to execute real code on real software stacks. Our particular user-space threading framework allows massive scales to be simulated even on small clusters. The link between the discrete event core and the threading framework allows interesting performance metrics, like call graphs, to be collected from a simulated run. Performance analysis via simulation can thus become an important phase in extreme-scale programming model and runtime system design via the SST macroscale components.
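    At its heart, a discrete event core of this kind is a time-ordered event queue whose handlers may schedule further events, as simulated ranks would when modelling sends, receives, and compute delays. A minimal sketch (not the SST API; all names are invented):

    ```python
    import heapq

    class DiscreteEventSim:
        """Minimal discrete-event core: a priority queue ordered by
        simulated time.  Handlers receive the simulator and may schedule
        further events relative to the current time."""
        def __init__(self):
            self.now = 0.0
            self._queue = []
            self._seq = 0   # tie-breaker so equal-time events stay FIFO

        def schedule(self, delay, handler):
            heapq.heappush(self._queue, (self.now + delay, self._seq, handler))
            self._seq += 1

        def run(self):
            while self._queue:
                self.now, _, handler = heapq.heappop(self._queue)
                handler(self)

    # Toy scenario: a send at t=1.0 triggers an ack 0.5 later; an
    # independent receive completes at t=2.0.
    log = []
    sim = DiscreteEventSim()
    sim.schedule(2.0, lambda s: log.append(("recv", s.now)))
    sim.schedule(1.0, lambda s: (log.append(("send", s.now)),
                                 s.schedule(0.5, lambda s2: log.append(("ack", s2.now)))))
    sim.run()
    ```

    Simulated time jumps directly from event to event, which is why such a core can model enormous machines cheaply: cost scales with the number of events, not with wall-clock duration.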

  10. Optimizing the noise characteristics of high-power fiber laser systems

    NASA Astrophysics Data System (ADS)

    Jauregui, Cesar; Müller, Michael; Kienel, Marco; Emaury, Florian; Saraceno, Clara J.; Limpert, Jens; Keller, Ursula; Tünnermann, Andreas

    2017-02-01

    The noise characteristics of high-power fiber lasers, unlike those of other solid-state lasers such as thin-disk lasers, have not been systematically studied up to now. However, novel applications for high-power fiber laser systems, such as attosecond pulse generation, place stringent limits on the maximum noise level of these sources. Therefore, in order to address these applications, a detailed knowledge and understanding of the characteristics of noise and its behavior in a fiber laser system is required. In this work we have carried out a systematic study of the propagation of the relative intensity noise (RIN) along the amplification chain of a state-of-the-art high-power fiber laser system. The most striking feature of these measurements is that the RIN level is progressively attenuated after each amplification stage. In order to understand this unexpected behavior, we have simulated the transfer function of the RIN in a fiber amplification stage (80 μm core) as a function of the seed power and the frequency. Our simulation model shows that this damping of the amplitude noise is related to gain saturation. Additionally, we show, for the first time to the best of our knowledge, that the fiber design (e.g., core size, glass composition, doping geometry) can be modified to optimize the noise characteristics of high-power fiber laser systems.
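    The noise damping attributed to saturation can be illustrated with a toy saturable amplifier: deep in saturation, the gain falls as the input power rises, so relative input fluctuations are compressed at the output. All parameter values below are illustrative, not taken from the paper:

    ```python
    def amplifier_output(p_in, g0=30.0, p_sat=10.0):
        """Toy homogeneously saturated amplifier: the gain rolls off as
        the input power approaches the saturation power p_sat."""
        return p_in * g0 / (1.0 + p_in / p_sat)

    def rin_transfer(p_in, ripple=1e-4):
        """Ratio of the relative output fluctuation to a small relative
        input fluctuation -- a zero-frequency RIN transfer factor."""
        p_hi = amplifier_output(p_in * (1.0 + ripple))
        p_lo = amplifier_output(p_in * (1.0 - ripple))
        rel_out = (p_hi - p_lo) / (2.0 * amplifier_output(p_in))
        return rel_out / ripple

    weak = rin_transfer(0.01)     # far below saturation: noise passes through
    strong = rin_transfer(100.0)  # deep saturation: noise strongly damped
    ```

    For this gain law the transfer factor is 1/(1 + p_in/p_sat), so a seed ten times above saturation attenuates relative fluctuations roughly elevenfold, mirroring the stage-by-stage RIN attenuation the measurements show.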

  11. Shiga toxin-producing Escherichia coli in meat: a preliminary simulation study on detection capabilities for three sampling methods

    USDA-ARS?s Scientific Manuscript database

    The objective of this simulation study is to determine which sampling method (Cozzini core sampler, core drill shaving, and N-60 surface excision) will better detect Shiga Toxin-producing Escherichia coli (STEC) at varying levels of contamination when present in the meat. 1000 simulated experiments...

  12. An approach to secure weather and climate models against hardware faults

    NASA Astrophysics Data System (ADS)

    Düben, Peter D.; Dawson, Andrew

    2017-03-01

    Enabling Earth System models to run efficiently on future supercomputers is a serious challenge for model development. Many publications study efficient parallelization to allow better scaling of performance on an increasing number of computing cores. However, one of the most alarming threats for weather and climate predictions on future high performance computing architectures is widely ignored: the presence of hardware faults that will frequently hit large applications as we approach exascale supercomputing. Changes in the structure of weather and climate models that would allow them to be resilient against hardware faults are hardly discussed in the model development community. In this paper, we present an approach to secure the dynamical core of weather and climate models against hardware faults using a backup system that stores coarse resolution copies of prognostic variables. Frequent checks of the model fields on the backup grid allow the detection of severe hardware faults, and prognostic variables that are changed by hardware faults on the model grid can be restored from the backup grid to continue model simulations with no significant delay. To justify the approach, we perform model simulations with a C-grid shallow water model in the presence of frequent hardware faults. As long as the backup system is used, simulations do not crash and a high level of model quality can be maintained. The overhead due to the backup system is reasonable and additional storage requirements are small. Runtime is increased by only 13 % for the shallow water model.
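    The backup-grid idea can be sketched in a few lines: keep a block-averaged coarse copy of each prognostic field, periodically compare the restricted field against it, and overwrite blocks that disagree beyond a tolerance. The grid size, coarsening factor, and tolerance below are illustrative, not the paper's values:

    ```python
    import numpy as np

    def restrict(field, factor=4):
        """Coarse backup copy: block-average the model grid."""
        n = field.shape[0] // factor
        return field[:n * factor].reshape(n, factor).mean(axis=1)

    def check_and_restore(field, backup, factor=4, tol=10.0):
        """Compare the restricted field with the stored backup; where they
        disagree strongly (a presumed hardware fault) overwrite the whole
        block with the backup value.  Returns True if a fault was found."""
        coarse = restrict(field, factor)
        bad = np.abs(coarse - backup) > tol
        for i in np.flatnonzero(bad):
            field[i * factor:(i + 1) * factor] = backup[i]
        return bool(bad.any())

    h = np.linspace(0.0, 1.0, 64)   # a prognostic variable, e.g. layer height
    backup = restrict(h)            # stored at the last healthy check
    h[10] = 1e6                     # simulated hardware fault (bit flip)
    fault_found = check_and_restore(h, backup)
    ```

    The storage overhead is the coarse field only (a factor-of-four coarsening in each dimension costs a few percent), which is consistent with the small overheads the abstract reports.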

  13. An approach to secure weather and climate models against hardware faults

    NASA Astrophysics Data System (ADS)

    Düben, Peter; Dawson, Andrew

    2017-04-01

    Enabling Earth System models to run efficiently on future supercomputers is a serious challenge for model development. Many publications study efficient parallelisation to allow better scaling of performance on an increasing number of computing cores. However, one of the most alarming threats for weather and climate predictions on future high performance computing architectures is widely ignored: the presence of hardware faults that will frequently hit large applications as we approach exascale supercomputing. Changes in the structure of weather and climate models that would allow them to be resilient against hardware faults are hardly discussed in the model development community. We present an approach to secure the dynamical core of weather and climate models against hardware faults using a backup system that stores coarse resolution copies of prognostic variables. Frequent checks of the model fields on the backup grid allow the detection of severe hardware faults, and prognostic variables that are changed by hardware faults on the model grid can be restored from the backup grid to continue model simulations with no significant delay. To justify the approach, we perform simulations with a C-grid shallow water model in the presence of frequent hardware faults. As long as the backup system is used, simulations do not crash and a high level of model quality can be maintained. The overhead due to the backup system is reasonable and additional storage requirements are small. Runtime is increased by only 13% for the shallow water model.

  14. Structural response of 1/20-scale models of the Clinch River Breeder Reactor to a simulated hypothetical core disruptive accident. Technical report 4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romander, C. M.; Cagliostro, D. J.

    Five experiments were performed to help evaluate the structural integrity of the reactor vessel and head design and to verify code predictions. In the first experiment (SM 1), a detailed model of the head was loaded statically to determine its stiffness. In the remaining four experiments (SM 2 to SM 5), models of the vessel and head were loaded dynamically under a simulated 661 MW-sec hypothetical core disruptive accident (HCDA). Models SM 2 to SM 4, each of increasing complexity, systematically showed the effects of upper internals structures, a thermal liner, core support platform, and torospherical bottom on vessel response. Model SM 5, identical to SM 4 but more heavily instrumented, demonstrated experimental reproducibility and provided more comprehensive data. The models consisted of a Ni 200 vessel and core barrel, a head with shielding and simulated component masses, an upper internals structure (UIS), and, in the more complex models SM 4 and SM 5, a Ni 200 thermal liner and core support structure. Water simulated the liquid sodium coolant and a low-density explosive simulated the HCDA loads.

  15. Modeling Large Scale Circuits Using Massively Parallel Discrete-Event Simulation

    DTIC Science & Technology

    2013-06-01

    As supercomputer systems grow to exascale levels of performance, the smallest elements of a single processor can greatly affect the entire computer system (e.g., its power consumption)...Warp Speed 10.0. 2.0 INTRODUCTION As supercomputer systems approach exascale, the core count will exceed 1024 and the number of transistors used in

  16. Dynamic nuclear polarization assisted spin diffusion for the solid effect case.

    PubMed

    Hovav, Yonatan; Feintuch, Akiva; Vega, Shimon

    2011-02-21

    The dynamic nuclear polarization (DNP) process in solids depends on the magnitudes of hyperfine interactions between unpaired electrons and their neighboring (core) nuclei, and on the dipole-dipole interactions between all nuclei in the sample. The polarization enhancement of the bulk nuclei has been typically described in terms of a hyperfine-assisted polarization of a core nucleus by microwave irradiation followed by a dipolar-assisted spin diffusion process in the core-bulk nuclear system. This work presents a theoretical approach for the study of this combined process using a density matrix formalism. In particular, solid effect DNP on a single electron coupled to a nuclear spin system is considered, taking into account the interactions between the spins as well as the main relaxation mechanisms introduced via the electron, nuclear, and cross-relaxation rates. The basic principles of the DNP-assisted spin diffusion mechanism, polarizing the bulk nuclei, are presented, and it is shown that the polarization of the core nuclei and the spin diffusion process should not be treated separately. To emphasize this observation the coherent mechanism driving the pure spin diffusion process is also discussed. In order to demonstrate the effects of the interactions and relaxation mechanisms on the enhancement of the nuclear polarization, model systems of up to ten spins are considered and polarization buildup curves are simulated. A linear chain of spins consisting of a single electron coupled to a core nucleus, which in turn is dipolar coupled to a chain of bulk nuclei, is considered. The interaction and relaxation parameters of this model system were chosen in a way to enable a critical analysis of the polarization enhancement of all nuclei, and are not far from the values of (13)C nuclei in frozen (glassy) organic solutions containing radicals, typically used in DNP at high fields. 
Results from the simulations are shown, demonstrating the complex dependences of the DNP-assisted spin diffusion process on variations of the relevant parameters. In particular, the effect of the spin lattice relaxation times on the polarization buildup times and the resulting end polarization are discussed, and the quenching of the polarizations by the hyperfine interaction is demonstrated.
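    The interplay the authors describe between spin diffusion and spin-lattice relaxation can be caricatured classically (without the density-matrix machinery of the paper) as a diffusion chain with a clamped source and a uniform T1 leak; all rates and sizes here are invented for illustration:

    ```python
    import numpy as np

    def spin_diffusion_chain(n=10, D=1.0, t1=50.0, p_core=1.0,
                             dt=0.01, steps=20_000):
        """Rate-equation caricature of DNP-assisted spin diffusion: the
        core nucleus (site 0) is clamped at the DNP-enhanced polarization,
        and polarization diffuses down a chain of bulk nuclei while
        leaking away with spin-lattice time T1 (explicit Euler steps)."""
        p = np.zeros(n)
        for _ in range(steps):
            p[0] = p_core
            lap = np.zeros(n)
            lap[1:-1] = p[:-2] - 2 * p[1:-1] + p[2:]
            lap[-1] = p[-2] - p[-1]            # closed end of the chain
            p += dt * (D * lap - p / t1)
        p[0] = p_core
        return p

    profile = spin_diffusion_chain()
    ```

    Even this crude picture reproduces the qualitative competition the simulations explore: shortening T1 relative to the diffusion time steepens the decay of the steady-state polarization along the chain and lowers the end polarization of the bulk nuclei.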

  17. ORPHANED PROTOSTARS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reipurth, Bo; Connelley, Michael; Mikkola, Seppo

    2010-12-10

    We explore the origin of a population of distant companions (~1000-5000 AU) to Class I protostellar sources recently found by Connelley and coworkers, who noted that the companion fraction diminished as the sources evolved. Here, we present N-body simulations of unstable triple systems embedded in dense cloud cores. Many companions are ejected into unbound orbits and quickly escape, but others are ejected with insufficient momentum to climb out of the potential well of the cloud core and associated binary. These loosely bound companions reach distances of many thousands of AU before falling back and eventually being ejected into escapes as the cloud cores gradually disappear. We use the term orphans to denote protostellar objects that are dynamically ejected from their placental cloud cores, either escaping or for a time being tenuously bound at large separations. Half of all triple systems are found to disintegrate during the protostellar stage, so if multiple systems are a frequent outcome of the collapse of a cloud core, then orphans should be common. Bound orphans are associated with embedded close protostellar binaries, but escaping orphans can travel as far as ~0.2 pc during the protostellar phase. The steep climb out of a potential well ensures that orphans are not kinematically distinct from young stars born with a less violent pre-history. The identification of orphans outside their heavily extincted cloud cores will allow the detailed study of protostars high up on their Hayashi tracks at near-infrared and in some cases even at optical wavelengths.

  18. Two-phase/two-phase heat exchanger simulation analysis

    NASA Technical Reports Server (NTRS)

    Kim, Rhyn H.

    1992-01-01

    The capillary pumped loop (CPL) system is one of the most desirable devices for dissipating heat energy in the radiation environment of the Space Station, providing relatively easy control of the temperature. A condenser, a component of the CPL system, is linked with a buffer evaporator in the form of an annulus section of a double-tube heat exchanger arrangement: the concentric core of the double tube is the condenser; the annulus section is used as a buffer between the conditioned space and the surrounding radiation environment but works as an evaporator. A CPL system with this type of condenser is modeled to simulate its function numerically. Preliminary results for temperature variations of the system are shown, and further investigations are suggested for improvement.

  19. KSC-2009-2652

    NASA Image and Video Library

    2009-04-14

    CAPE CANAVERAL, Fla. – In high bay 4 of the Vehicle Assembly Building at NASA's Kennedy Space Center in Florida, workers place a crane and straps on the Ares I-X simulated launch abort system to lift and rotate it for assembly with the crew module simulator. Ares I-X is the flight test vehicle for the Ares I, which is part of the Constellation Program to return men to the moon and beyond. Ares I is the essential core of a safe, reliable, cost-effective space transportation system that eventually will carry crewed missions back to the moon, on to Mars and out into the solar system. Ares I-X is targeted for launch in July 2009. Photo credit: NASA/Jack Pfaller

  20. KSC-2009-2651

    NASA Image and Video Library

    2009-04-14

    CAPE CANAVERAL, Fla. – In high bay 4 of the Vehicle Assembly Building at NASA's Kennedy Space Center in Florida, workers prepare the crane that will lift and rotate the Ares I-X simulated launch abort system (center) for assembly with the crew module simulator. Ares I-X is the flight test vehicle for the Ares I, which is part of the Constellation Program to return men to the moon and beyond. Ares I is the essential core of a safe, reliable, cost-effective space transportation system that eventually will carry crewed missions back to the moon, on to Mars and out into the solar system. Ares I-X is targeted for launch in July 2009. Photo credit: NASA/Jack Pfaller

  1. KSC-2009-2660

    NASA Image and Video Library

    2009-04-14

    CAPE CANAVERAL, Fla. – In high bay 4 of the Vehicle Assembly Building at NASA's Kennedy Space Center in Florida, workers keep close watch on the Ares I-X simulated launch abort system, or LAS, as it is lowered onto the crew module simulator for assembly. Ares I-X is the flight test vehicle for the Ares I, which is part of the Constellation Program to return men to the moon and beyond. Ares I is the essential core of a safe, reliable, cost-effective space transportation system that eventually will carry crewed missions back to the moon, on to Mars and out into the solar system. Ares I-X is targeted for launch in July 2009. Photo credit: NASA/Jack Pfaller

  2. KSC-2009-2658

    NASA Image and Video Library

    2009-04-14

    CAPE CANAVERAL, Fla. – In high bay 4 of the Vehicle Assembly Building at NASA's Kennedy Space Center in Florida, workers keep close watch on the Ares I-X simulated launch abort system, or LAS, as it is lowered toward the crew module simulator. Ares I-X is the flight test vehicle for the Ares I, which is part of the Constellation Program to return men to the moon and beyond. Ares I is the essential core of a safe, reliable, cost-effective space transportation system that eventually will carry crewed missions back to the moon, on to Mars and out into the solar system. Ares I-X is targeted for launch in July 2009. Photo credit: NASA/Jack Pfaller

  3. KSC-2009-2657

    NASA Image and Video Library

    2009-04-14

    CAPE CANAVERAL, Fla. – In high bay 4 of the Vehicle Assembly Building at NASA's Kennedy Space Center in Florida, the Ares I-X simulated launch abort system, or LAS, (left of center) is being moved to the crew module simulator (center) for assembly. Ares I-X is the flight test vehicle for the Ares I, which is part of the Constellation Program to return men to the moon and beyond. Ares I is the essential core of a safe, reliable, cost-effective space transportation system that eventually will carry crewed missions back to the moon, on to Mars and out into the solar system. Ares I-X is targeted for launch in July 2009. Photo credit: NASA/Jack Pfaller

  4. KSC-2009-2659

    NASA Image and Video Library

    2009-04-14

    CAPE CANAVERAL, Fla. – In high bay 4 of the Vehicle Assembly Building at NASA's Kennedy Space Center in Florida, workers keep close watch on the Ares I-X simulated launch abort system, or LAS, as it is lowered toward the crew module simulator. Ares I-X is the flight test vehicle for the Ares I, which is part of the Constellation Program to return men to the moon and beyond. Ares I is the essential core of a safe, reliable, cost-effective space transportation system that eventually will carry crewed missions back to the moon, on to Mars and out into the solar system. Ares I-X is targeted for launch in July 2009. Photo credit: NASA/Jack Pfaller

  5. Design and testing of coring bits on drilling lunar rock simulant

    NASA Astrophysics Data System (ADS)

    Li, Peng; Jiang, Shengyuan; Tang, Dewei; Xu, Bo; Ma, Chao; Zhang, Hui; Qin, Hongwei; Deng, Zongquan

    2017-02-01

    Coring bits are widely utilized in the sampling of celestial bodies, and their drilling behavior directly affects the sampling results and drilling security. This paper introduces a lunar regolith coring bit (LRCB), a key component of the sampling tool for breaking lunar rock during the lunar soil sampling process. We establish the interaction model between the drill bit and rock at a small cutting depth and determine the two parameters of the LRCB that most influence drilling loads (the forward and outward rake angles). We then perform parameter screening of the LRCB with the aim of minimizing the weight on bit (WOB). We verify the drilling-load performance of the LRCB after optimization: the higher the penetration per revolution (PPR), the larger the drilling loads obtained. In addition, we perform lunar soil drilling simulations to estimate the chip-conveying and sample-coring efficiency of the LRCB. The simulation and test results are basically consistent on coring efficiency, and the chip removal efficiency of the LRCB is slightly lower than that of the HIT-H bit in simulation. This work proposes a method for the design of coring bits in subsequent extraterrestrial explorations.

  6. The Search for Subsurface Life on Mars: Results from the MARTE Analog Drill Experiment in Rio Tinto, Spain

    NASA Astrophysics Data System (ADS)

    Stoker, C. R.; Lemke, L. G.; Cannon, H.; Glass, B.; Dunagan, S.; Zavaleta, J.; Miller, D.; Gomez-Elvira, J.

    2006-03-01

    The Mars Analog Research and Technology (MARTE) experiment has developed an automated drilling system on a simulated Mars lander platform including drilling, sample handling, core analysis and down-hole instruments relevant to searching for life in the Martian subsurface.

  7. Performance Evaluation of the Honeywell GG1308 Miniature Ring Laser Gyroscope

    DTIC Science & Technology

    1993-01-01

    information. The final display line provides the current DSB configuration status. An external strobe was established between the Contraves motion...components and systems. The core of the facility is a Contraves-Goerz Model 57CD 2-axis motion simulator capable of highly precise position, rate and

  8. Transient climate simulations of the deglaciation 21-9 thousand years before present (version 1) - PMIP4 Core experiment design and boundary conditions

    NASA Astrophysics Data System (ADS)

    Ivanovic, Ruza F.; Gregoire, Lauren J.; Kageyama, Masa; Roche, Didier M.; Valdes, Paul J.; Burke, Andrea; Drummond, Rosemarie; Peltier, W. Richard; Tarasov, Lev

    2016-07-01

    The last deglaciation, which marked the transition between the last glacial and present interglacial periods, was punctuated by a series of rapid (centennial and decadal) climate changes. Numerical climate models are useful for investigating mechanisms that underpin the climate change events, especially now that some of the complex models can be run for multiple millennia. We have set up a Paleoclimate Modelling Intercomparison Project (PMIP) working group to coordinate efforts to run transient simulations of the last deglaciation, and to facilitate the dissemination of expertise between modellers and those engaged with reconstructing the climate of the last 21 000 years. Here, we present the design of a coordinated Core experiment over the period 21-9 thousand years before present (ka) with time-varying orbital forcing, greenhouse gases, ice sheets and other geographical changes. A choice of two ice sheet reconstructions is given, and we make recommendations for prescribing ice meltwater (or not) in the Core experiment. Additional focussed simulations will also be coordinated on an ad hoc basis by the working group, for example to investigate more thoroughly the effect of ice meltwater on climate system evolution, and to examine the uncertainty in other forcings. Some of these focussed simulations will target shorter durations around specific events in order to understand them in more detail and allow for the more computationally expensive models to take part.

  9. Note: The design of thin gap chamber simulation signal source based on field programmable gate array.

    PubMed

    Hu, Kun; Lu, Houbing; Wang, Xu; Li, Feng; Liang, Futian; Jin, Ge

    2015-01-01

    The Thin Gap Chamber (TGC) is an important part of the ATLAS detector at the LHC accelerator. To reproduce the characteristics of the TGC detector's output signal, we have designed a simulation signal source. The core of the design is a field programmable gate array that outputs 256 channels of randomly generated simulated signals. The signals are produced by a true random number generator whose source of randomness is the timing jitter of ring oscillators. The experimental results show that the random numbers are uniformly distributed in the histogram and that the whole system has high reliability.
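    The histogram uniformity check described above can be illustrated in software. The sketch below uses a seeded pseudorandom generator as a stand-in for the FPGA's ring-oscillator entropy source; the sample count and 8-bit bin width are arbitrary illustrative choices, not taken from the paper.

```python
import random

# Uniformity check: histogram 8-bit samples and measure how far the
# bins deviate from a flat distribution. A software PRNG stands in
# for the hardware ring-oscillator entropy source.
rng = random.Random(42)
N = 256_000                      # arbitrary number of 8-bit samples
hist = [0] * 256
for _ in range(N):
    hist[rng.randrange(256)] += 1

expected = N / 256               # 1000 counts per bin if perfectly uniform
max_rel_dev = max(abs(h - expected) / expected for h in hist)
```

For a good entropy source the relative deviation of every bin stays within a few multiples of the statistical noise floor (about 3% here), which is what "uniform in histogram" amounts to quantitatively.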

  10. Note: The design of thin gap chamber simulation signal source based on field programmable gate array

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Kun; Wang, Xu; Li, Feng

    The Thin Gap Chamber (TGC) is an important part of the ATLAS detector at the LHC accelerator. To reproduce the characteristics of the TGC detector's output signal, we have designed a simulation signal source. The core of the design is a field programmable gate array that outputs 256 channels of randomly generated simulated signals. The signals are produced by a true random number generator whose source of randomness is the timing jitter of ring oscillators. The experimental results show that the random numbers are uniformly distributed in the histogram and that the whole system has high reliability.

  11. Specifying the core network supporting episodic simulation and episodic memory by activation likelihood estimation

    PubMed Central

    Benoit, Roland G.; Schacter, Daniel L.

    2015-01-01

    It has been suggested that the simulation of hypothetical episodes and the recollection of past episodes are supported by fundamentally the same set of brain regions. The present article specifies this core network via Activation Likelihood Estimation (ALE). Specifically, a first meta-analysis revealed joint engagement of core network regions during episodic memory and episodic simulation. These include parts of the medial surface, the hippocampus and parahippocampal cortex within the medial temporal lobes, and the lateral temporal and inferior posterior parietal cortices on the lateral surface. Both capacities also jointly recruited additional regions such as parts of the bilateral dorsolateral prefrontal cortex. All of these core regions overlapped with the default network. Moreover, it has further been suggested that episodic simulation may require a stronger engagement of some of the core network's nodes as well as the recruitment of additional brain regions supporting control functions. A second ALE meta-analysis indeed identified such regions that were consistently more strongly engaged during episodic simulation than episodic memory. These comprised the core-network clusters located in the left dorsolateral prefrontal cortex and posterior inferior parietal lobe and other structures distributed broadly across the default and fronto-parietal control networks. Together, the analyses determine the set of brain regions that allow us to experience past and hypothetical episodes, thus providing an important foundation for studying the regions' specialized contributions and interactions. PMID:26142352

  12. Modeling of light absorption in tissue during infrared neural stimulation.

    PubMed

    Thompson, Alexander C; Wade, Scott A; Brown, William G A; Stoddart, Paul R

    2012-07-01

    A Monte Carlo model has been developed to simulate light transport and absorption in neural tissue during infrared neural stimulation (INS). A range of fiber core sizes and numerical apertures are compared, illustrating the advantages of using simulations when designing a light delivery system. Wavelengths commonly used for INS are also compared for stimulation of nerves in the cochlea, in terms of both the energy absorbed and the temperature change due to a laser pulse. Modeling suggests that a fiber with a core diameter of 200 μm and NA = 0.22 is optimal for optical stimulation in the geometry used, and that temperature rises in the spiral ganglion neurons are as low as 0.1 °C. The results show a need for more careful experimentation to allow different proposed mechanisms of INS to be distinguished.
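    The sampling step at the heart of such a Monte Carlo model can be sketched in a few lines. The following is a minimal illustration of Beer-Lambert path-length sampling only: the absorption coefficient is a placeholder, and the scattering, refraction, and tissue geometry of the authors' full model are omitted.

```python
import math
import random

# Sample photon absorption depths from the Beer-Lambert law,
# s = -ln(u) / mu_a, and average them. This is the elementary building
# block of Monte Carlo light-transport codes, not the authors' model.
def mean_absorption_depth(mu_a_per_mm, n_photons=100_000, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_photons):
        u = 1.0 - rng.random()                 # u in (0, 1], avoids log(0)
        total += -math.log(u) / mu_a_per_mm    # sampled absorption depth
    return total / n_photons

# The sample mean should approach the analytic value 1/mu_a.
depth = mean_absorption_depth(mu_a_per_mm=2.5)
```

Checking the sample mean against the analytic exponential mean 1/mu_a is a standard sanity test before adding scattering and geometry on top.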

  13. Modeling of light absorption in tissue during infrared neural stimulation

    NASA Astrophysics Data System (ADS)

    Thompson, Alexander C.; Wade, Scott A.; Brown, William G. A.; Stoddart, Paul R.

    2012-07-01

    A Monte Carlo model has been developed to simulate light transport and absorption in neural tissue during infrared neural stimulation (INS). A range of fiber core sizes and numerical apertures are compared, illustrating the advantages of using simulations when designing a light delivery system. Wavelengths commonly used for INS are also compared for stimulation of nerves in the cochlea, in terms of both the energy absorbed and the temperature change due to a laser pulse. Modeling suggests that a fiber with a core diameter of 200 μm and NA = 0.22 is optimal for optical stimulation in the geometry used, and that temperature rises in the spiral ganglion neurons are as low as 0.1 °C. The results show a need for more careful experimentation to allow different proposed mechanisms of INS to be distinguished.

  14. Brightness analysis of an electron beam with a complex profile

    NASA Astrophysics Data System (ADS)

    Maesaka, Hirokazu; Hara, Toru; Togawa, Kazuaki; Inagaki, Takahiro; Tanaka, Hitoshi

    2018-05-01

    We propose a novel analysis method to obtain the bright core of an electron beam with a complex phase-space profile. This method is useful for evaluating simulation data of a linear accelerator (linac), such as an X-ray free-electron laser (XFEL) machine, since the phase-space distribution of a linac electron beam is not simple compared with that of a Gaussian beam in a synchrotron. In this analysis, the brightness of the undulator radiation is calculated, and the core of the electron beam is determined by maximizing this brightness. We successfully extracted core electrons from a complex beam profile of XFEL simulation data that could not be expressed by a set of slice parameters. FEL simulations showed that the FEL intensity was largely retained even after extracting the core part. Consequently, FEL performance can be estimated by this analysis without time-consuming FEL simulations.

  15. Core Physics and Kinetics Calculations for the Fissioning Plasma Core Reactor

    NASA Technical Reports Server (NTRS)

    Butler, C.; Albright, D.

    2007-01-01

    Highly efficient, compact nuclear reactors would provide high specific impulse spacecraft propulsion. This analysis and numerical simulation effort has focused on the technical feasibility issues related to the nuclear design characteristics of a novel reactor design. The Fissioning Plasma Core Reactor (FPCR) is a shockwave-driven gaseous-core nuclear reactor, which uses magnetohydrodynamic effects to generate electric power for propulsion. The nuclear design of the system depends on two major calculations: core physics calculations and kinetics calculations. To date, core physics calculations have concentrated on the use of the MCNP4C code, but initial results from other codes such as COMBINE/VENTURE and SCALE4a are also shown. Several significant modifications were made to the ISR-developed QCALC1 kinetics analysis code. These modifications include testing the state of the core materials, an improvement to the calculation of the material properties of the core, the addition of an adiabatic core temperature model, and an improvement of the first-order reactivity correction model. The accuracy of these modifications has been verified, and the accuracy of the point-core kinetics model used by the QCALC1 code has also been validated. Previously calculated kinetics results for the FPCR were described in the ISR report "QCALC1: A Code for FPCR Kinetics Model Feasibility Analysis," dated June 1, 2002.
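    A point-kinetics model with adiabatic temperature feedback of the general kind described above can be sketched as follows. This is an illustrative toy, not the QCALC1 code; every parameter value is a hypothetical placeholder, and only one delayed-neutron group is kept.

```python
# Point kinetics with one delayed-neutron group and an adiabatic
# core-temperature feedback, integrated with explicit Euler steps.
# All numbers are hypothetical placeholders for illustration only.
beta = 0.0065        # delayed-neutron fraction
lam = 0.08           # precursor decay constant (1/s)
Lambda = 1.0e-4      # neutron generation time (s)
alpha_T = -1.0e-5    # temperature feedback coefficient (1/K)
C_heat = 5.0e6       # adiabatic core heat capacity (arbitrary units)
rho_ext = 0.002      # externally inserted reactivity

def step(n, c, T, T0, dt):
    rho = rho_ext + alpha_T * (T - T0)            # net reactivity with feedback
    dn = ((rho - beta) / Lambda) * n + lam * c    # neutron population
    dc = (beta / Lambda) * n - lam * c            # delayed precursors
    dT = n / C_heat                               # adiabatic heat-up (power ~ n)
    return n + dn * dt, c + dc * dt, T + dT * dt

# Start at equilibrium (c = beta*n/(lam*Lambda)), then insert rho_ext.
n, T0 = 1.0, 300.0
c, T = beta * n / (lam * Lambda), T0
for _ in range(200_000):                          # 2 s of 10-microsecond steps
    n, c, T = step(n, c, T, T0, dt=1.0e-5)
```

With a positive but sub-prompt-critical insertion (rho_ext < beta), the power rises on the delayed-neutron time scale while the negative temperature coefficient slowly pulls reactivity back down, which is the qualitative behavior the reactivity-feedback test loops above are designed to reproduce.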

  16. Terrestrial Planet Formation in Binary Star Systems

    NASA Technical Reports Server (NTRS)

    Lissauer, Jack J.; Quintana, Elisa V.; Chambers, John; Duncan, Martin J.; Adams, Fred

    2003-01-01

    Most stars reside in multiple star systems; however, virtually all models of planetary growth have assumed an isolated single star. Numerical simulations of the collapse of molecular cloud cores to form binary stars suggest that disks will form within such systems. Observations indirectly suggest disk material around one or both components within young binary star systems. If planets form at the right places within such circumstellar disks, they can remain in stable orbits within the binary star systems for eons. We are simulating the late stages of growth of terrestrial planets within binary star systems, using a new, ultrafast, symplectic integrator that we have developed for this purpose. We show that the late stages of terrestrial planet formation can indeed take place in a wide variety of binary systems and we have begun to delineate the range of parameter space for which this statement is true. Results of our initial simulations of planetary growth around each star in the alpha Centauri system and other 'wide' binary systems, as well as around both stars in very close binary systems, will be presented.

  17. RAPID-L Highly Automated Fast Reactor Concept Without Any Control Rods (2) Critical experiment of lithium-6 used in LEM and LIM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsunoda, Hirokazu; Sato, Osamu; Okajima, Shigeaki

    2002-07-01

    In order to achieve fully automated reactor operation of the RAPID-L reactor, the innovative reactivity control systems LEM, LIM, and LRM use lithium-6 as a liquid poison. Because lithium-6 has not been used as a neutron-absorbing material in conventional fast reactors, measurements of the reactivity worth of lithium-6 were performed at the Fast Critical Assembly (FCA) of the Japan Atomic Energy Research Institute (JAERI). The FCA core was composed of highly enriched uranium and stainless steel samples so as to simulate the core spectrum of RAPID-L. Samples of 95% enriched lithium-6 were inserted into the core parallel to the core axis to measure the reactivity worth at each position. It was found that the measured reactivity worth in the core region agreed well with the value calculated by the method used for the RAPID-L core design. Bias factors for the core design method were obtained by comparing the experimental and calculated results. These factors were used to determine the number of LEMs and LIMs needed in the core to achieve fully automated operation of RAPID-L. (authors)

  18. Confirmation of a realistic reactor model for BNCT dosimetry at the TRIGA Mainz

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ziegner, Markus, E-mail: Markus.Ziegner.fl@ait.ac.at; Schmitz, Tobias; Hampel, Gabriele

    2014-11-01

    Purpose: In order to build up a reliable dose monitoring system for boron neutron capture therapy (BNCT) applications at the TRIGA reactor in Mainz, a computer model of the entire reactor was established, simulating the radiation field by means of the Monte Carlo method. The impact of different source definition techniques was compared, and the model was validated by experimental fluence and dose determinations. Methods: The depletion calculation code ORIGEN2 was used to compute the burn-up and relevant material composition of each burned fuel element from the day of first reactor operation to the current core. The material composition of the current core was used in a MCNP5 model of the initial core developed earlier. To perform calculations for the region outside the reactor core, the model was expanded to include the thermal column and compared with the previously established ATTILA model. Subsequently, the computational model was simplified in order to reduce the calculation time. Both simulation models were validated by experiments with different setups, using alanine dosimetry and gold activation measurements with two different types of phantoms. Results: The MCNP5-simulated neutron spectrum and source strength are found to be in good agreement with the previous ATTILA model, whereas the photon production is much lower. Both MCNP5 simulation models predict all experimental dose values with an accuracy of about 5%. The simulations reveal that a Teflon environment favorably reduces the gamma dose component as compared to a polymethyl methacrylate phantom. Conclusions: A computer model for BNCT dosimetry was established, allowing the prediction of dosimetric quantities without further calibration and within a reasonable computation time for clinical applications. The good agreement between the MCNP5 simulations and experiments demonstrates that the ATTILA model overestimates the gamma dose contribution. The detailed model can be used for planning structural modifications of the thermal column irradiation channel or the use of irradiation sites other than the thermal column, e.g., the beam tubes.

  19. Synthesis, characterization, and 3D-FDTD simulation of Ag@SiO2 nanoparticles for shell-isolated nanoparticle-enhanced Raman spectroscopy.

    PubMed

    Uzayisenga, Viviane; Lin, Xiao-Dong; Li, Li-Mei; Anema, Jason R; Yang, Zhi-Lin; Huang, Yi-Fan; Lin, Hai-Xin; Li, Song-Bo; Li, Jian-Feng; Tian, Zhong-Qun

    2012-06-19

    Au-seed Ag-growth nanoparticles of controllable diameter (50-100 nm), and having an ultrathin SiO(2) shell of controllable thickness (2-3 nm), were prepared for shell-isolated nanoparticle-enhanced Raman spectroscopy (SHINERS). Their morphological, optical, and material properties were characterized; and their potential for use as a versatile Raman signal amplifier was investigated experimentally using pyridine as a probe molecule and theoretically by the three-dimensional finite-difference time-domain (3D-FDTD) method. We show that a SiO(2) shell as thin as 2 nm can be synthesized pinhole-free on the Ag surface of a nanoparticle, which then becomes the core. The dielectric SiO(2) shell serves to isolate the Raman-signal enhancing core and prevent it from interfering with the system under study. The SiO(2) shell also hinders oxidation of the Ag surface and nanoparticle aggregation. It significantly improves the stability and reproducibility of surface-enhanced Raman scattering (SERS) signal intensity, which is essential for SERS applications. Our 3D-FDTD simulations show that Ag-core SHINERS nanoparticles yield at least 2 orders of magnitude greater enhancement than Au-core ones when excited with green light on a smooth Ag surface, and thus add to the versatility of our SHINERS method.

  20. Effects of Boron and Graphite Uncertainty in Fuel for TREAT Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vaughn, Kyle; Mausolff, Zander; Gonzalez, Esteban

    Advanced modeling techniques and current computational capacity make full-core TREAT simulations possible; the goal of such simulations is to understand the pre-test core and minimize the number of required calibrations. However, in order to simulate TREAT with a high degree of precision, the reactor materials and geometry must also be modeled with a high degree of precision. This paper examines how uncertainty in the reported values of boron and graphite affects simulations of TREAT.

  1. Impact of the dynamical core on the direct simulation of tropical cyclones in a high-resolution global model

    DOE PAGES

    Reed, K. A.; Bacmeister, J. T.; Rosenbloom, N. A.; ...

    2015-05-13

    Our paper examines the impact of the dynamical core on the simulation of tropical cyclone (TC) frequency, distribution, and intensity. The dynamical core, the central fluid flow component of any general circulation model (GCM), is often overlooked in analyses of a model's ability to simulate TCs, compared to the impact of more commonly documented components (e.g., physical parameterizations). The Community Atmosphere Model version 5 is configured with multiple dynamics packages. This analysis demonstrates that the dynamical core has a significant impact on storm intensity and frequency, even in the presence of similar large-scale environments. In particular, the spectral element core produces stronger TCs and more hurricanes than the finite-volume core using very similar parameterization packages, despite the latter having a slightly more favorable TC environment. These results suggest that more detailed investigations into the impact of the GCM dynamical core on TC climatology are needed to fully understand these uncertainties. Key points: the impact of the GCM dynamical core is often overlooked in TC assessments; the CAM5 dynamical core has a significant impact on TC frequency and intensity; and a larger effort is needed to better understand this uncertainty.

  2. Induction simulation of gas core nuclear engine

    NASA Technical Reports Server (NTRS)

    Poole, J. W.; Vogel, C. E.

    1973-01-01

    The design, construction and operation of an induction heated plasma device known as a combined principles simulator is discussed. This device incorporates the major design features of the gas core nuclear rocket engine such as solid feed, propellant seeding, propellant injection through the walls, and a transpiration cooled, choked flow nozzle. Both argon and nitrogen were used as propellant simulating material, and sodium was used for fuel simulating material. In addition, a number of experiments were conducted utilizing depleted uranium as the fuel. The test program revealed that satisfactory operation of this device can be accomplished over a range of operating conditions and provided additional data to confirm the validity of the gas core concept.

  3. Many-integrated core (MIC) technology for accelerating Monte Carlo simulation of radiation transport: A study based on the code DPM

    NASA Astrophysics Data System (ADS)

    Rodriguez, M.; Brualla, L.

    2018-04-01

    Monte Carlo simulation of radiation transport is computationally demanding if reasonably low statistical uncertainties of the estimated quantities are to be obtained, and can therefore benefit to a large extent from high-performance computing. This work assesses the performance of the first-generation many-integrated core (MIC) Xeon Phi coprocessor against that of a CPU with two 12-core Xeon processors for Monte Carlo simulation of coupled electron-photon showers. The comparison was made in two ways: first, through a suite of basic tests, including parallel versions of the random number generators Mersenne Twister and a modified implementation of RANECU, intended to establish a baseline comparison between the two devices; and second, through the pDPM code developed in this work. pDPM is a parallel version of the Dose Planning Method (DPM) program for fast Monte Carlo simulation of radiation transport in voxelized geometries. A variety of techniques aimed at obtaining large scalability on the Xeon Phi were implemented in pDPM. Maximum scalabilities of 84.2× and 107.5× were obtained on the Xeon Phi for simulations of electron and photon beams, respectively. Nevertheless, in none of the tests involving radiation transport did the Xeon Phi outperform the CPU. The disadvantage of the Xeon Phi relative to the CPU is due to the low performance of its single core: a single core of the Xeon Phi was more than 10 times less efficient than a single core of the CPU for all radiation transport simulations.

  4. Multi-core and GPU accelerated simulation of a radial star target imaged with equivalent t-number circular and Gaussian pupils

    NASA Astrophysics Data System (ADS)

    Greynolds, Alan W.

    2013-09-01

    Results from the GelOE optical engineering software are presented for the through-focus, monochromatic coherent and polychromatic incoherent imaging of a radial "star" target for equivalent t-number circular and Gaussian pupils. The FFT-based simulations are carried out using OpenMP threading on a multi-core desktop computer, with and without the aid of a many-core NVIDIA GPU accessing its cuFFT library. It is found that a custom FFT optimized for the 12-core host has similar performance to a simply implemented 256-core GPU FFT. A more sophisticated version of the latter but tuned to reduce overhead on a 448-core GPU is 20 to 28 times faster than a basic FFT implementation running on one CPU core.
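    The FFT-based imaging computation underlying such simulations can be sketched compactly. The following is a minimal illustration, not the GelOE implementation: the incoherent point-spread function is |FFT(pupil)|², computed for a hard circular pupil and a Gaussian-apodized pupil; the grid size, radius, and "equivalent" Gaussian width are arbitrary assumed values.

```python
import numpy as np

# Incoherent PSF via FFT of the pupil function, for a circular and a
# Gaussian pupil on the same grid. Sizes are arbitrary placeholders.
N, R = 256, 40
y, x = np.mgrid[-N // 2: N // 2, -N // 2: N // 2]
r2 = x**2 + y**2
circ = (r2 <= R**2).astype(float)                # hard circular aperture
gauss = np.exp(-r2 / (2.0 * (R / 2.0) ** 2))     # assumed "equivalent" width

def incoherent_psf(pupil):
    # Shift so the pupil center sits at the FFT origin, transform,
    # then shift the PSF peak back to the array center.
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
    p = np.abs(field) ** 2
    return p / p.sum()                           # normalize to unit energy

psf_circ, psf_gauss = incoherent_psf(circ), incoherent_psf(gauss)
```

Comparing center-row slices of the two PSFs shows the familiar trade: the circular pupil gives a narrower core with Airy-like sidelobes, while the Gaussian apodization suppresses the sidelobes at the cost of a broader core.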

  5. Simulated storm surge effects on freshwater coastal wetland soil porewater salinity and extractable ammonium levels: Implications for marsh recovery after storm surge

    NASA Astrophysics Data System (ADS)

    McKee, M.; White, J. R.; Putnam-Duhon, L. A.

    2016-11-01

    Coastal wetland systems experience both short-term changes in salinity, such as those caused by wind-driven tides and storm surge, and long-term shifts caused by sea level rise. Salinity increases associated with storm surge are known to have significant effects on soil porewater chemistry, but there is little research on the effect of flooding duration on the depth of salt penetration into coastal marsh soils. A simulated storm surge was imposed on intact soil columns collected from a non-vegetated mudflat and a vegetated marsh site in the Wax Lake Delta, LA. Triplicate intact cores were continuously exposed to a water column of salinity 35 (practical salinity scale) for 1, 2, and 4 weeks and destructively sampled in order to measure porewater salinity and extractable NH4-N at 2 cm depth intervals. Salinity was significantly higher in the top 8 cm of both the marsh and mudflat cores after one week of flooding. After four weeks of flooding, salinity was significantly higher in the marsh and mudflat cores than in the control (no salinity) cores throughout the profile at both sites. Extractable ammonium levels increased significantly in the marsh cores throughout the experiment, but only a marginally significant (p < 0.1) increase was seen in the mudflat cores. The results indicate that porewater salinity levels can become significantly elevated within a coastal marsh soil in just one week. This vertical intrusion of salt can potentially impact macrophytes and associated microbial communities negatively long after the storm surge has passed.

  6. A Perturbation Analysis of Harmonics Generation from Saturated Elements in Power Systems

    NASA Astrophysics Data System (ADS)

    Kumano, Teruhisa

    Nonlinear phenomena such as magnetic flux saturation have considerable effects in power system analysis. It has been reported that a failure in a real 500 kV system triggered islanding operation, in which the resulting even harmonics caused malfunctions in protective relays, and that the major origin of this waveform distortion was unidirectional magnetization of the transformer iron core. Time simulation is widely used today to analyze this type of phenomenon, but it has two basic shortcomings. First, time simulation takes too much computing time in the vicinity of inflection points of the saturation characteristic curve, because an iterative procedure such as Newton-Raphson (N-R) must be used, and such methods tend to be caught in ill-conditioned numerical hunting. Second, such simulations sometimes do not aid intuitive understanding of the studied phenomenon, because the whole set of nonlinear equations is treated in matrix form and is not divided into understandable parts, as is done for linear systems. This paper proposes a new computation scheme based on the perturbation method, taking into account magnetic saturation in the iron cores of a generator and a transformer. Against the first shortcoming of N-R-based time simulation, the proposed method uses no iterative process to reduce the equation residual; instead it uses a perturbation series, which makes it free of the ill-conditioning problem. Users need only calculate the perturbation terms one by one until the necessary accuracy is reached. In the numerical example treated in this paper, the first-order perturbation achieves reasonably high accuracy, which means very fast computation. In the numerical study, three nonlinear elements are considered. The calculated results are almost identical to those of conventional Newton-Raphson-based time simulation, which shows the validity of the method. The proposed method would be effective in screening studies where many cases must be examined.
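    The perturbation idea can be shown on a toy saturation-type relation; this is a generic illustration, not the paper's power-system model. Solving y = x + eps·x³ for x by collecting powers of eps gives closed-form correction terms, in contrast to Newton-Raphson iteration.

```python
# Perturbation solution of the cubic saturation-type relation
# y = x + eps * x**3: substitute x = x0 + eps*x1 + eps^2*x2 and
# match powers of eps. Each term is obtained in closed form.
def perturbation_solve(y, eps):
    x0 = y                       # O(1):     x0 = y
    x1 = -x0**3                  # O(eps):   x1 + x0^3 = 0
    x2 = -3.0 * x0**2 * x1       # O(eps^2): x2 + 3*x0^2*x1 = 0
    return x0 + eps * x1 + eps**2 * x2

# Newton-Raphson reference solution of the same equation.
def newton_solve(y, eps, iters=50):
    x = y
    for _ in range(iters):
        x -= (x + eps * x**3 - y) / (1.0 + 3.0 * eps * x**2)
    return x
```

Because each correction term is explicit, there is no iteration to stall near inflection points; for small eps even the second-order result lands close to the fully converged Newton-Raphson answer, mirroring the paper's observation that first-order accuracy often suffices.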

  7. Why simulation can be efficient: on the preconditions of efficient learning in complex technology based practices.

    PubMed

    Hofmann, Bjørn

    2009-07-23

    It is important to demonstrate the learning outcomes of simulation in technology-based practices, such as advanced health care. Although many studies show skills improvement and self-reported change to practice, few studies demonstrate patient outcomes or societal efficiency. The objective of this study is to investigate if and why simulation can be effective and efficient in a hi-tech health care setting. This is important in order to decide whether and how to design simulation scenarios and outcome studies. Core theoretical insights from Science and Technology Studies (STS) are applied to analyze the field of simulation in hi-tech health care education. In particular, a process-oriented framework is applied in which technology is characterized by its devices, its methods, and its organizational setting. The analysis shows how advanced simulation can address core characteristics of technology beyond knowledge of the technology's functions. Simulation's ability to address skilful device handling as well as purposive aspects of technology provides a potential for effective and efficient learning. However, as technology is also constituted by organizational aspects, such as technology status, disease status, and resource constraints, the success of simulation depends on whether these aspects can be integrated in the simulation setting as well. This represents a challenge for the future development of simulation and for demonstrating its effectiveness and efficiency. Assessing the outcome of simulation in education in hi-tech health care settings is worthwhile if core characteristics of medical technology are addressed. This challenges the traditional technical versus non-technical divide in simulation, as organizational aspects appear to be part of technology's core characteristics.

  8. Vortex dynamics and frequency splitting in vertically coupled nanomagnets

    DOE PAGES

    Stebliy, M. E.; Jain, S.; Kolesnikov, A. G.; ...

    2017-04-25

    Here, we explored the dynamic response of a vortex core in a circular nanomagnet by manipulating its dipole-dipole interaction with another vortex core confined locally on top of the nanomagnet. A clear frequency splitting is observed, corresponding to the gyrofrequencies of the two vortex cores. The positions of the two resonance peaks can be engineered by controlling the magnitude and direction of the external magnetic field. Both experiments and micromagnetic simulations show that the frequency spectrum of the combined system depends significantly on the chirality of the circular nanomagnet and is asymmetric with respect to the external bias field. We attribute this result to the strong dynamic dipole-dipole interaction between the two vortex cores, which varies with the distance between them. The possibility of having multiple states in a single nanomagnet with vertical coupling could be of interest for magnetoresistive memories.

  9. Computer simulations and real-time control of ELT AO systems using graphical processing units

    NASA Astrophysics Data System (ADS)

    Wang, Lianqi; Ellerbroek, Brent

    2012-07-01

    The adaptive optics (AO) simulations for the Thirty Meter Telescope (TMT) have been carried out using the efficient, C-based multi-threaded adaptive optics simulator (MAOS, http://github.com/lianqiw/maos). By porting time-critical parts of MAOS to graphical processing units (GPUs) using NVIDIA CUDA technology, we achieved a 10-fold speedup for each GTX 580 GPU used, compared to a modern quad-core CPU. Each time step of a full-scale end-to-end simulation of the TMT narrow-field infrared AO system (NFIRAOS) takes only 0.11 seconds on a desktop with two GTX 580s. We also demonstrate that the TMT minimum variance reconstructor can be assembled in matrix-vector multiply (MVM) format in 8 seconds with 8 GTX 580 GPUs, meeting the TMT requirement for updating the reconstructor. Analysis shows that it is also possible to apply the MVM using 8 GTX 580s within the required latency.
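    Applying an MVM reconstructor reduces to one matrix-vector product per frame, which is why it maps so well onto GPUs. The toy sketch below uses dimensions far smaller than NFIRAOS and a random matrix standing in for the precomputed minimum variance reconstructor; the names and sizes are illustrative assumptions.

```python
import numpy as np

# One frame of MVM wavefront reconstruction: actuator commands are a
# single matrix-vector product of a precomputed reconstructor with the
# wavefront-sensor gradient vector. Sizes are hypothetical placeholders.
rng = np.random.default_rng(1)
n_act, n_grad = 500, 2000             # assumed actuator / gradient counts
R = rng.normal(size=(n_act, n_grad))  # stand-in for the reconstructor matrix
g = rng.normal(size=n_grad)           # one frame of WFS gradients
a = R @ g                             # actuator commands: one MVM per frame
```

The per-frame cost is fixed at n_act × n_grad multiply-adds with a completely regular memory access pattern, which is the property that lets the latency requirement be met once the (much more expensive) assembly of R is done offline.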

  10. Computational Aerodynamic Simulations of a 1484 ft/sec Tip Speed Quiet High-Speed Fan System Model for Acoustic Methods Assessment and Development

    NASA Technical Reports Server (NTRS)

    Tweedt, Daniel L.

    2014-01-01

    Computational aerodynamic simulations of a 1484 ft/sec tip speed quiet high-speed fan system were performed at five different operating points on the fan operating line, in order to provide detailed internal flow field information for use with fan acoustic prediction methods presently being developed, assessed, and validated. The fan system is a sub-scale, low-noise research fan/nacelle model that has undergone experimental testing in the 9- by 15-foot Low Speed Wind Tunnel at the NASA Glenn Research Center. Details of the fan geometry, the computational fluid dynamics methods, the computational grids, and various computational parameters relevant to the numerical simulations are discussed. Flow field results for three of the five operating points simulated are presented in order to provide a representative look at the computed solutions. Each of the five fan aerodynamic simulations involved the entire fan system, which includes a core duct and a bypass duct that merge upstream of the fan system nozzle. As a result, only fan rotational speed and the system bypass ratio, set by means of a translating nozzle plug, were adjusted in order to set the fan operating point, leading to operating points that lie on a fan operating line and making mass flow rate a fully dependent parameter. The resulting mass flow rates are in good agreement with measurement values. Computed blade row flow fields at all fan operating points are, in general, aerodynamically healthy. Rotor blade and fan exit guide vane flow characteristics are good, including incidence and deviation angles, chordwise static pressure distributions, blade surface boundary layers, secondary flow structures, and blade wakes. Examination of the computed flow fields reveals no excessive or critical boundary layer separations or related secondary-flow problems, with the exception of the hub boundary layer at the core duct entrance, where a significant flow separation is present. The region of local flow recirculation extends through a mixing plane, however, and the particular mixing-plane model used is now known to exaggerate the recirculation. In any case, the flow separation has relatively little impact on the computed rotor and FEGV flow fields.

  11. Multi-Kepler GPU vs. multi-Intel MIC for spin systems simulations

    NASA Astrophysics Data System (ADS)

    Bernaschi, M.; Bisson, M.; Salvadore, F.

    2014-10-01

    We present and compare the performance of two many-core architectures, the Nvidia Kepler and the Intel MIC, both in a single system and in a cluster configuration, for the simulation of spin systems. As a benchmark we consider the time required to update a single spin of the 3D Heisenberg spin glass model by using the over-relaxation algorithm. We also present data for a traditional high-end multi-core architecture, the Intel Sandy Bridge. The results show that although it is possible to use basically the same code on the two Intel architectures, the performance of an Intel MIC changes dramatically depending on (apparently) minor details. Another issue is that obtaining reasonable scalability with the Intel Xeon Phi coprocessor (the coprocessor that implements the MIC architecture) in a cluster configuration requires the so-called offload mode, which reduces the performance of a single system. As for the GPU, the Kepler architecture offers a clear advantage with respect to the previous Fermi architecture while maintaining exactly the same source code. Scalability of the multi-GPU implementation remains very good when using the CPU as a communication coprocessor of the GPU. All source codes are provided for inspection and for double-checking the results.
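    The over-relaxation kernel used as the benchmark above can be sketched in a few lines (a schematic single-spin update, not the authors' CUDA or MIC code; all names and values are illustrative): each spin is reflected about its local field, which preserves the spin length and the local energy exactly.

    ```python
    import numpy as np

    def overrelax(s, h):
        """Over-relaxation move for a classical Heisenberg spin.

        Reflects the unit spin s about its local field h; this preserves
        both |s| and the local energy E = -s.h (a microcanonical update),
        which is why the move is a popular benchmark kernel for spin systems.
        """
        return 2.0 * (s @ h) / (h @ h) * h - s

    # Demo on a single spin with a random local field
    rng = np.random.default_rng(0)
    s = rng.normal(size=3)
    s /= np.linalg.norm(s)      # unit spin
    h = rng.normal(size=3)      # stands in for the sum of coupled neighbor spins
    s_new = overrelax(s, h)
    ```

    On a GPU or MIC, one such update is applied independently to every spin of one sublattice in parallel, which makes the kernel a good probe of memory bandwidth.
    
    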

  12. An Integrated Modeling Suite for Simulating the Core Induction and Kinetic Effects in Mercury's Magnetosphere

    NASA Astrophysics Data System (ADS)

    Jia, X.; Slavin, J.; Chen, Y.; Poh, G.; Toth, G.; Gombosi, T.

    2018-05-01

    We present results from state-of-the-art global models of Mercury's space environment capable of self-consistently simulating the induction effect at the core and resolving kinetic physics important for magnetic reconnection.

  13. Neural simulations on multi-core architectures.

    PubMed

    Eichner, Hubert; Klug, Tobias; Borst, Alexander

    2009-01-01

    Neuroscience is witnessing increasing knowledge about the anatomy and electrophysiological properties of neurons and their connectivity, leading to an ever increasing computational complexity of neural simulations. At the same time, a rather radical change in personal computer technology emerges with the establishment of multi-cores: high-density, explicitly parallel processor architectures for both high performance as well as standard desktop computers. This work introduces strategies for the parallelization of biophysically realistic neural simulations based on the compartmental modeling technique and results of such an implementation, with a strong focus on multi-core architectures and automation, i.e. user-transparent load balancing.
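    A load-balancing step of the general kind described (distributing compartment groups across cores without user intervention) can be sketched as a greedy longest-processing-time partition; this is a generic heuristic, not necessarily the paper's exact scheme, and the per-branch costs below are invented for illustration.

    ```python
    import heapq

    def balance(costs, n_cores):
        """Greedy LPT partition: assign each work item (e.g. a branch of
        compartments with an estimated per-timestep cost) to the currently
        least-loaded core, heaviest items first."""
        loads = [(0.0, c, []) for c in range(n_cores)]
        heapq.heapify(loads)
        for cost, item in sorted(((c, i) for i, c in enumerate(costs)), reverse=True):
            load, core, items = heapq.heappop(loads)
            items.append(item)
            heapq.heappush(loads, (load + cost, core, items))
        return sorted(loads, key=lambda t: t[1])

    # Hypothetical per-branch costs (arbitrary units) split over 2 cores
    assignment = balance([7, 5, 4, 3, 2, 1], 2)
    loads = [load for load, _, _ in assignment]
    ```

    For these costs the greedy heuristic happens to reach a perfect split (11 units of work per core); in general LPT only approximates the optimum, which is why more elaborate, profile-driven balancers are used in practice.
    
    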

  14. Neural Simulations on Multi-Core Architectures

    PubMed Central

    Eichner, Hubert; Klug, Tobias; Borst, Alexander

    2009-01-01

    Neuroscience is witnessing increasing knowledge about the anatomy and electrophysiological properties of neurons and their connectivity, leading to an ever increasing computational complexity of neural simulations. At the same time, a rather radical change in personal computer technology emerges with the establishment of multi-cores: high-density, explicitly parallel processor architectures for both high performance as well as standard desktop computers. This work introduces strategies for the parallelization of biophysically realistic neural simulations based on the compartmental modeling technique and results of such an implementation, with a strong focus on multi-core architectures and automation, i.e. user-transparent load balancing. PMID:19636393

  15. Supercontinuum generation and analysis in extruded suspended-core As2S3 chalcogenide fibers

    NASA Astrophysics Data System (ADS)

    Si, Nian; Sun, Lihong; Zhao, Zheming; Wang, Xunsi; Zhu, Qingde; Zhang, Peiqing; Liu, Shuo; Pan, Zhanghao; Liu, Zijun; Dai, Shixun; Nie, Qiuhua

    2018-02-01

    Compared with the traditional fluoride fibers and tellurite fibers that can work in the near-infrared region, suspended-core fibers based on chalcogenide glasses have wider transmission windows and higher nonlinear coefficients, so mid-infrared supercontinuum generation can be achieved more easily. Rather than adopting the traditional fabrication technique of hole-drilling and air filling, we adopted a novel extrusion technique to fabricate As2S3 suspended-core fibers with four holes, and their mid-infrared supercontinuum generation was investigated systematically by combining theoretical simulation and experimental results. The generalized nonlinear Schrödinger equation was used to simulate the supercontinuum generation in the As2S3 suspended-core fibers. The supercontinuum generation in the As2S3 suspended-core fibers with different pump wavelengths (2-5 µm), increasing powers (0.3-4 kW), and various fiber lengths (1-50 cm) was simulated in MATLAB. The experimental results of supercontinuum generation via femtosecond optical parametric amplification (OPA) were recorded by changing fiber lengths (5-25 cm), pump wavelengths (2.9-5 µm), and pump powers (10-200 kW). The simulated spectra are consistent with the experimental results of supercontinuum generation only if the fiber loss is sufficiently low.
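    The split-step Fourier method behind such simulations can be sketched for the basic nonlinear Schrödinger equation (dispersion plus Kerr nonlinearity only; the full generalized equation adds higher-order dispersion, self-steepening and Raman terms, omitted here, and all parameter values are illustrative):

    ```python
    import numpy as np

    def split_step_nlse(A, dt, dz, nz, beta2, gamma):
        """Split-step Fourier integration of
        dA/dz = -i*(beta2/2)*d2A/dT2 + i*gamma*|A|^2*A:
        the linear (dispersion) step is applied in the frequency domain,
        the nonlinear (Kerr) step in the time domain."""
        w = 2 * np.pi * np.fft.fftfreq(len(A), d=dt)   # angular frequency grid
        lin = np.exp(1j * (beta2 / 2) * w**2 * dz)     # dispersion phase per step
        for _ in range(nz):
            A = np.fft.ifft(lin * np.fft.fft(A))            # dispersion
            A = A * np.exp(1j * gamma * np.abs(A)**2 * dz)  # Kerr nonlinearity
        return A

    # Fundamental soliton of the focusing NLSE (beta2 = -1, gamma = 1):
    # its shape should be preserved by the propagation
    t = np.linspace(-20, 20, 1024, endpoint=False)
    A0 = 1.0 / np.cosh(t)
    A1 = split_step_nlse(A0.astype(complex), t[1] - t[0],
                         dz=1e-3, nz=2000, beta2=-1.0, gamma=1.0)
    ```

    Both sub-steps are pure phase multiplications, so the scheme conserves pulse energy by construction; supercontinuum codes add loss and higher-order terms on top of this same skeleton.
    
    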

  16. Two-dimensional solitons in conservative and parity-time-symmetric triple-core waveguides with cubic-quintic nonlinearity

    NASA Astrophysics Data System (ADS)

    Feijoo, David; Zezyulin, Dmitry A.; Konotop, Vladimir V.

    2015-12-01

    We analyze a system of three two-dimensional nonlinear Schrödinger equations coupled by linear terms and with the cubic-quintic (focusing-defocusing) nonlinearity. We consider two versions of the model: conservative and parity-time (PT) symmetric. These models describe triple-core nonlinear optical waveguides, with balanced gain and losses in the PT-symmetric case. We obtain families of soliton solutions and discuss their stability. The latter study is performed using linear stability analysis and checked with direct numerical simulations of the evolution equations. Stable solitons are found in both the conservative and PT-symmetric cases. Interactions and collisions between the conservative and PT-symmetric solitons are also briefly investigated.

  17. Simulations of Atmospheric Plasma Arcs

    NASA Astrophysics Data System (ADS)

    Pearcy, Jacob; Chopra, Nirbhav; Jaworski, Michael

    2017-10-01

    We present the results of computer simulation of cylindrical plasma arcs with characteristics similar to those predicted to be relevant in magnetohydrodynamic (MHD) power conversion systems. These arcs, with core temperatures on the order of 1 eV, place stringent limitations on the lifetime of conventional electrodes used in such systems, suggesting that a detailed analysis of arc characteristics will be crucial in designing more robust electrode systems. Simulations utilize results from NASA's Chemical Equilibrium with Applications (CEA) program to solve the Elenbaas-Heller equation in a variety of plasma compositions, including approximations of coal-burning plasmas as well as pure gas discharges. The effect of carbon dioxide injection on arc characteristics, emulating discharges from molten carbonate salt electrodes, is also analyzed. Results include radial temperature profiles, composition maps, and current-voltage (IV) characteristics of these arcs. Work supported by DOE contract DE-AC02-09CH11466.
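    The Elenbaas-Heller equation referenced here balances Ohmic heating against radial heat conduction, (1/r) d/dr(r kappa dT/dr) + sigma(T) E^2 = 0. A minimal finite-difference relaxation solver is sketched below in dimensionless units; the material models and parameters are placeholders rather than the CEA-based property data used in the work, and the constant-conductivity test case has the analytic solution T(r) = Tw + sigma E^2 (R^2 - r^2)/(4 kappa).

    ```python
    import numpy as np

    def solve_arc(sigma, n=201, R=1.0, Tw=0.0, E=1.0, kappa=1.0, iters=60, relax=0.7):
        """Relaxation solve of (1/r) d/dr (r*kappa*dT/dr) + sigma(T)*E^2 = 0
        with symmetry T'(0) = 0 on the axis and T(R) = Tw at the wall."""
        r = np.linspace(0.0, R, n)
        dr = r[1] - r[0]
        T = np.full(n, Tw)
        for _ in range(iters):
            s = sigma(T)                 # freeze sigma(T), solve the linear BVP
            A = np.zeros((n, n))
            b = -s * E**2 / kappa
            # axis row: by symmetry the radial operator reduces to 2*T''(0)
            A[0, 0], A[0, 1] = -4.0 / dr**2, 4.0 / dr**2
            for i in range(1, n - 1):
                A[i, i - 1] = 1.0 / dr**2 - 1.0 / (2.0 * r[i] * dr)
                A[i, i] = -2.0 / dr**2
                A[i, i + 1] = 1.0 / dr**2 + 1.0 / (2.0 * r[i] * dr)
            A[-1, -1], b[-1] = 1.0, Tw   # wall temperature boundary condition
            T = relax * np.linalg.solve(A, b) + (1.0 - relax) * T
        return r, T

    # Constant-conductivity column: analytic centerline value T(0) = 0.25 here
    r, T = solve_arc(lambda T: np.ones_like(T))
    ```

    With a temperature-dependent sigma(T) (e.g. tabulated from CEA), the same frozen-coefficient iteration yields the arc temperature profile and, from sigma(T(r)), the current-voltage characteristic.
    
    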

  18. System-level protection and hardware Trojan detection using weighted voting.

    PubMed

    Amin, Hany A M; Alkabani, Yousra; Selim, Gamal M I

    2014-07-01

    The problem of hardware Trojans is becoming more serious especially with the widespread of fabless design houses and design reuse. Hardware Trojans can be embedded on chip during manufacturing or in third party intellectual property cores (IPs) during the design process. Recent research is performed to detect Trojans embedded at manufacturing time by comparing the suspected chip with a golden chip that is fully trusted. However, Trojan detection in third party IP cores is more challenging than other logic modules especially that there is no golden chip. This paper proposes a new methodology to detect/prevent hardware Trojans in third party IP cores. The method works by gradually building trust in suspected IP cores by comparing the outputs of different untrusted implementations of the same IP core. Simulation results show that our method achieves higher probability of Trojan detection over a naive implementation of simple voting on the output of different IP cores. In addition, experimental results show that the proposed method requires less hardware overhead when compared with a simple voting technique achieving the same degree of security.
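    The idea of gradually building trust in redundant, untrusted IP cores can be illustrated with a simple weighted-voting sketch (a schematic reconstruction, not the paper's exact scheme): each implementation starts with equal weight, the weighted majority decides the output, and any core that disagrees with the consensus has its weight reduced.

    ```python
    def weighted_vote(outputs, weights):
        """Weighted majority over single-bit outputs."""
        yes = sum(w for o, w in zip(outputs, weights) if o)
        return yes > sum(weights) / 2

    def run_cores(cores, inputs, penalty=0.5):
        """Evaluate redundant implementations, vote, and down-weight
        any core whose output disagrees with the weighted consensus."""
        weights = [1.0] * len(cores)
        decisions = []
        for x in inputs:
            outs = [core(x) for core in cores]
            v = weighted_vote(outs, weights)
            decisions.append(v)
            for i, o in enumerate(outs):
                if o != v:
                    weights[i] *= penalty
        return decisions, weights

    # Three untrusted "implementations" of a parity function; the third
    # carries a hypothetical Trojan triggered by one rare input value.
    def parity(x):
        return bool(x % 2)

    def trojaned(x):
        return (not parity(x)) if x == 13 else parity(x)

    decisions, weights = run_cores([parity, parity, trojaned], range(20))
    ```

    The voted output masks the Trojan activation, and the misbehaving core ends up with the lowest weight, which is the sense in which trust is built incrementally.
    
    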

  19. Fourier heat conduction as a strong kinetic effect in one-dimensional hard-core gases

    NASA Astrophysics Data System (ADS)

    Zhao, Hanqing; Wang, Wen-ge

    2018-01-01

    For a one-dimensional (1D) momentum conserving system, intensive studies have shown that generally its heat current autocorrelation function (HCAF) tends to decay in a power-law manner and results in the breakdown of the Fourier heat conduction law in the thermodynamic limit. This has been recognized to be a dominant hydrodynamic effect. Here we show that, instead, the kinetic effect can be dominant in some cases and leads to the Fourier law for finite-size systems. Usually the HCAF undergoes a fast decaying kinetic stage followed by a long slowly decaying hydrodynamic tail. In a finite range of the system size, we find that whether the system follows the Fourier law depends on whether the kinetic stage dominates. Our Rapid Communication is illustrated by the 1D hard-core gas models with which the HCAF is derived analytically and verified numerically by molecular dynamics simulations.
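    The role of the HCAF can be made concrete with a short Green-Kubo-style sketch: given a heat-current time series, the autocorrelation is estimated at increasing lags, and its integral determines whether a finite conductivity, and hence the Fourier law, emerges. The AR(1) surrogate current below stands in for the fast exponentially decaying kinetic stage; it is synthetic, not data from the hard-core gas model.

    ```python
    import numpy as np

    def autocorr(j, max_lag):
        """Estimate of <J(t) J(t+lag)> for lag = 0 .. max_lag-1."""
        j = j - j.mean()
        n = len(j)
        return np.array([np.dot(j[: n - l], j[l:]) / (n - l) for l in range(max_lag)])

    # Synthetic heat current: AR(1) process whose autocorrelation decays
    # exponentially (factor a per step), like a purely kinetic HCAF stage
    rng = np.random.default_rng(1)
    a, n = 0.8, 200_000
    noise = rng.normal(size=n)
    j = np.empty(n)
    j[0] = noise[0]
    for t in range(1, n):
        j[t] = a * j[t - 1] + noise[t]

    c = autocorr(j, 50)
    kappa_gk = c[0] / 2 + c[1:].sum()   # trapezoidal Green-Kubo sum (arbitrary units)
    ```

    For an exponentially decaying HCAF this sum converges as the cutoff grows, giving a finite conductivity; a slowly decaying power-law hydrodynamic tail would instead make it diverge with system size.
    
    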

  20. Experimental strength of restorations with fibre posts at different stages, with and without using a simulated ligament.

    PubMed

    Pérez-González, A; González-Lluch, C; Sancho-Bru, J L; Rodríguez-Cervantes, P J; Barjau-Escribano, A; Forner-Navarro, L

    2012-03-01

    The aim of this study was to analyse the strength and failure mode of teeth restored with fibre posts under retention and flexural-compressive loads at different stages of the restoration and to analyse whether including a simulated ligament in the experimental setup has any effect on the strength or the failure mode. Thirty human maxillary central incisors were distributed in three different groups to be restored with simulation of different restoration stages (1: only post, 2: post and core, 3: post-core and crown), using Rebilda fibre posts. The specimens were inserted in resin blocks and loaded by means of a universal testing machine until failure under tension (stage 1) and 50° flexion (stages 2-3). Half the specimens in each group were restored using a simulated ligament between root dentine and resin block and the other half did not use this element. Failure in stage 1 always occurred at the post-dentine interface, with a mean failure load of 191.2 N. Failure in stage 2 was located mainly in the core or coronal dentine (mean failure load of 505.9 N). Failure in stage 3 was observed in the coronal dentine (mean failure load 397.4 N). Failure loads registered were greater than expected masticatory loads. Fracture modes were mostly reparable, thus indicating that this post is clinically valid at the different stages of restoration studied. The inclusion of the simulated ligament in the experimental system did not show a statistically significant effect on the failure load or the failure mode. © 2011 Blackwell Publishing Ltd.

  1. An Internet Protocol-Based Software System for Real-Time, Closed-Loop, Multi-Spacecraft Mission Simulation Applications

    NASA Technical Reports Server (NTRS)

    Davis, George; Cary, Everett; Higinbotham, John; Burns, Richard; Hogie, Keith; Hallahan, Francis

    2003-01-01

    The paper will provide an overview of the web-based distributed simulation software system developed for end-to-end, multi-spacecraft mission design, analysis, and test at the NASA Goddard Space Flight Center (GSFC). This software system was developed for an internal research and development (IR&D) activity at GSFC called the Distributed Space Systems (DSS) Distributed Synthesis Environment (DSE). The long-term goal of the DSS-DSE is to integrate existing GSFC stand-alone test beds, models, and simulation systems to create a "hands on", end-to-end simulation environment for mission design, trade studies and simulations. The short-term goal of the DSE was therefore to develop the system architecture, and then to prototype the core software simulation capability based on a distributed computing approach, with demonstrations of some key capabilities by the end of Fiscal Year 2002 (FY02). To achieve the DSS-DSE IR&D objective, the team adopted a reference model and mission upon which FY02 capabilities were developed. The software was prototyped according to the reference model, and demonstrations were conducted for the reference mission to validate interfaces, concepts, etc. The reference model, illustrated in Fig. 1, included both space and ground elements, with functional capabilities such as spacecraft dynamics and control, science data collection, space-to-space and space-to-ground communications, mission operations, science operations, and data processing, archival and distribution addressed.

  2. A Distributed Simulation Software System for Multi-Spacecraft Missions

    NASA Technical Reports Server (NTRS)

    Burns, Richard; Davis, George; Cary, Everett

    2003-01-01

    The paper will provide an overview of the web-based distributed simulation software system developed for end-to-end, multi-spacecraft mission design, analysis, and test at the NASA Goddard Space Flight Center (GSFC). This software system was developed for an internal research and development (IR&D) activity at GSFC called the Distributed Space Systems (DSS) Distributed Synthesis Environment (DSE). The long-term goal of the DSS-DSE is to integrate existing GSFC stand-alone test beds, models, and simulation systems to create a "hands on", end-to-end simulation environment for mission design, trade studies and simulations. The short-term goal of the DSE was therefore to develop the system architecture, and then to prototype the core software simulation capability based on a distributed computing approach, with demonstrations of some key capabilities by the end of Fiscal Year 2002 (FY02). To achieve the DSS-DSE IR&D objective, the team adopted a reference model and mission upon which FY02 capabilities were developed. The software was prototyped according to the reference model, and demonstrations were conducted for the reference mission to validate interfaces, concepts, etc. The reference model, illustrated in Fig. 1, included both space and ground elements, with functional capabilities such as spacecraft dynamics and control, science data collection, space-to-space and space-to-ground communications, mission operations, science operations, and data processing, archival and distribution addressed.

  3. Development of massive multilevel molecular dynamics simulation program, Platypus (PLATform for dYnamic Protein Unified Simulation), for the elucidation of protein functions.

    PubMed

    Takano, Yu; Nakata, Kazuto; Yonezawa, Yasushige; Nakamura, Haruki

    2016-05-05

    A massively parallel program for quantum mechanical-molecular mechanical (QM/MM) molecular dynamics simulation, called Platypus (PLATform for dYnamic Protein Unified Simulation), was developed to elucidate protein functions. The speedup and the parallelization ratio of Platypus in the QM and QM/MM calculations were assessed for a bacteriochlorophyll dimer in the photosynthetic reaction center (DIMER) on the K computer, a massively parallel computer achieving 10 PetaFLOPS with 705,024 cores. Platypus exhibited increasing speedup up to 20,000 cores for the HF/cc-pVDZ and B3LYP/cc-pVDZ calculations, and up to 10,000 cores for the CASCI(16,16)/6-31G** calculations. We also performed excited-state QM/MM-MD simulations on the chromophore of Sirius (SIRIUS) in water. Sirius is a pH-insensitive and photo-stable ultramarine fluorescent protein. Platypus accelerated on-the-fly excited-state QM/MM-MD simulations for SIRIUS in water, using over 4000 cores. In addition, it also succeeded in a 50-ps (200,000-step) on-the-fly excited-state QM/MM-MD simulation of SIRIUS in water. © 2016 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc.

  4. Specifying the core network supporting episodic simulation and episodic memory by activation likelihood estimation.

    PubMed

    Benoit, Roland G; Schacter, Daniel L

    2015-08-01

    It has been suggested that the simulation of hypothetical episodes and the recollection of past episodes are supported by fundamentally the same set of brain regions. The present article specifies this core network via Activation Likelihood Estimation (ALE). Specifically, a first meta-analysis revealed joint engagement of expected core-network regions during episodic memory and episodic simulation. These include parts of the medial surface, the hippocampus and parahippocampal cortex within the medial temporal lobes, and the temporal and inferior posterior parietal cortices on the lateral surface. Both capacities also jointly recruited additional regions such as parts of the bilateral dorsolateral prefrontal cortex. All of these core regions overlapped with the default network. Moreover, it has further been suggested that episodic simulation may require a stronger engagement of some of the core network's nodes as well as the recruitment of additional brain regions supporting control functions. A second ALE meta-analysis indeed identified such regions that were consistently more strongly engaged during episodic simulation than episodic memory. These comprised the core-network clusters located in the left dorsolateral prefrontal cortex and posterior inferior parietal lobe and other structures distributed broadly across the default and fronto-parietal control networks. Together, the analyses determine the set of brain regions that allow us to experience past and hypothetical episodes, thus providing an important foundation for studying the regions' specialized contributions and interactions. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. Mission: Define Computer Literacy. The Illinois-Wisconsin ISACS Computer Coordinators' Committee on Computer Literacy Report (May 1985).

    ERIC Educational Resources Information Center

    Computing Teacher, 1985

    1985-01-01

    Defines computer literacy and describes a computer literacy course which stresses ethics, hardware, and disk operating systems throughout. Core units on keyboarding, word processing, graphics, database management, problem solving, algorithmic thinking, and programming are outlined, together with additional units on spreadsheets, simulations,…

  6. P-CSI v1.0, an accelerated barotropic solver for the high-resolution ocean model component in the Community Earth System Model v2.0

    NASA Astrophysics Data System (ADS)

    Huang, Xiaomeng; Tang, Qiang; Tseng, Yuheng; Hu, Yong; Baker, Allison H.; Bryan, Frank O.; Dennis, John; Fu, Haohuan; Yang, Guangwen

    2016-11-01

    In the Community Earth System Model (CESM), the ocean model is computationally expensive for high-resolution grids and is often the least scalable component for high-resolution production experiments. The major bottleneck is that the barotropic solver scales poorly at high core counts. We design a new barotropic solver to accelerate the high-resolution ocean simulation. The novel solver adopts a Chebyshev-type iterative method to reduce the global communication cost in conjunction with an effective block preconditioner to further reduce the iterations. The algorithm and its computational complexity are theoretically analyzed and compared with other existing methods. We confirm the significant reduction of the global communication time with a competitive convergence rate using a series of idealized tests. Numerical experiments using the CESM 0.1° global ocean model show that the proposed approach results in a factor of 1.7 speed-up over the original method with no loss of accuracy, achieving 10.5 simulated years per wall-clock day on 16 875 cores.
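    The communication advantage of a Chebyshev-type iteration is that, unlike conjugate gradients, it needs no inner products, hence no global reductions, once eigenvalue bounds for the (preconditioned) operator are known. A minimal unpreconditioned sketch follows the classical three-term recurrence; this is not the P-CSI code itself, and the diagonal test matrix is an arbitrary SPD stand-in with known spectral bounds.

    ```python
    import numpy as np

    def chebyshev_solve(A, b, lam_min, lam_max, iters):
        """Chebyshev iteration for SPD A with spectrum in [lam_min, lam_max].
        Only matrix-vector products appear; no dot products are needed,
        so a distributed version requires no global reductions per step."""
        theta = 0.5 * (lam_max + lam_min)   # center of the spectrum
        delta = 0.5 * (lam_max - lam_min)   # half-width of the spectrum
        sigma1 = theta / delta
        x = np.zeros_like(b)
        r = b - A @ x
        rho = 1.0 / sigma1
        d = r / theta
        for _ in range(iters):
            x = x + d
            r = r - A @ d
            rho_new = 1.0 / (2.0 * sigma1 - rho)
            d = rho_new * rho * d + (2.0 * rho_new / delta) * r
            rho = rho_new
        return x

    # Stand-in SPD system with spectral bounds [1, 10]
    diag = np.linspace(1.0, 10.0, 50)
    A = np.diag(diag)
    b = np.ones(50)
    x = chebyshev_solve(A, b, 1.0, 10.0, iters=60)
    ```

    The price is that sharp eigenvalue bounds must be supplied; the paper's contribution pairs such an iteration with an effective block preconditioner so that few iterations, and little global communication, are needed.
    
    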

  7. Absolute binding free energy calculations of CBClip host–guest systems in the SAMPL5 blind challenge

    PubMed Central

    Tofoleanu, Florentina; Pickard, Frank C.; König, Gerhard; Huang, Jing; Damjanović, Ana; Baek, Minkyung; Seok, Chaok; Brooks, Bernard R.

    2016-01-01

    Herein, we report the absolute binding free energy calculations of CBClip complexes in the SAMPL5 blind challenge. Initial conformations of CBClip complexes were obtained using docking and molecular dynamics simulations. Free energy calculations were performed using thermodynamic integration (TI) with soft-core potentials and Bennett’s acceptance ratio (BAR) method based on a serial insertion scheme. We compared the results obtained with TI simulations with soft-core potentials and Hamiltonian replica exchange simulations with the serial insertion method combined with the BAR method. The results show that the difference between the two methods can be mainly attributed to the van der Waals free energies, suggesting that the simulations used for TI, the simulations used for BAR, or both are not fully converged, and that the two sets of simulations may have sampled different phase-space regions. The penalty scores of the force field parameters of the 10 guest molecules provided by the CHARMM Generalized Force Field can be an indicator of the accuracy of binding free energy calculations. Among our submissions, the combination of docking and TI performed best, yielding a root-mean-square deviation of 2.94 kcal/mol and an average unsigned error of 3.41 kcal/mol for the ten guest molecules. These values were the best overall among all participants. However, our submissions had little correlation with experiments. PMID:27677749
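    The BAR step referenced above can be illustrated with a self-contained estimator (equal forward/reverse sample sizes, energies in units of kT): the free-energy difference is the root of Bennett's implicit equation, found here by bisection. The Gaussian work distributions are synthetic, chosen so the exact answer dF = mu - s^2/2 is known via the Crooks relation; this is a sketch, not the authors' protocol.

    ```python
    import numpy as np

    def bar(w_f, w_r, lo=-100.0, hi=100.0, iters=200):
        """Bennett acceptance ratio for equal forward/reverse sample sizes.
        Solves sum 1/(1+exp(w_f - dF)) = sum 1/(1+exp(w_r + dF)) by bisection;
        the left side grows and the right side shrinks as dF increases, so
        the objective is monotone and the root is unique."""
        def fermi(x):
            return 1.0 / (1.0 + np.exp(np.clip(x, -500, 500)))
        def g(dF):
            return fermi(w_f - dF).sum() - fermi(w_r + dF).sum()
        for _ in range(iters):
            mid = 0.5 * (lo + hi)
            if g(mid) < 0.0:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    # Synthetic Gaussian work values consistent with the Crooks relation:
    # W_f ~ N(mu, s^2) and W_r ~ N(s^2 - mu, s^2) imply dF = mu - s^2/2
    rng = np.random.default_rng(7)
    mu, s, n = 5.0, 2.0, 50_000
    w_f = rng.normal(mu, s, n)
    w_r = rng.normal(s * s - mu, s, n)
    dF = bar(w_f, w_r)   # exact value is 3.0 for these distributions
    ```

    In a serial insertion scheme, one such BAR solve is performed between every pair of neighboring alchemical states and the per-stage differences are summed.
    
    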

  8. Medicanes in an ocean-atmosphere coupled regional climate model

    NASA Astrophysics Data System (ADS)

    Akhtar, N.; Brauch, J.; Dobler, A.; Béranger, K.; Ahrens, B.

    2014-03-01

    So-called medicanes (Mediterranean hurricanes) are meso-scale, marine, and warm-core Mediterranean cyclones that exhibit some similarities to tropical cyclones. The strong cyclonic winds associated with medicanes threaten the highly populated coastal areas around the Mediterranean basin. To reduce the risk of casualties and overall negative impacts, it is important to improve the understanding of medicanes with the use of numerical models. In this study, we employ an atmospheric limited-area model (COSMO-CLM) coupled with a one-dimensional ocean model (1-D NEMO-MED12) to simulate medicanes. The aim of this study is to assess the robustness of the coupled model in simulating these extreme events. For this purpose, 11 historical medicane events are simulated using the atmosphere-only model, COSMO-CLM, and the coupled model, with different setups (horizontal atmospheric grid-spacings of 0.44°, 0.22°, and 0.08°; with/without spectral nudging, and an ocean grid-spacing of 1/12°). The results show that at high resolution, the coupled model is able to not only simulate most of the medicane events but also improve the track length, core temperature, and wind speed of simulated medicanes compared to the atmosphere-only simulations. The results suggest that the coupled model is more proficient for systemic and detailed studies of historical medicane events, and that this model can be an effective tool for future projections.

  9. Medicanes in an ocean-atmosphere coupled regional climate model

    NASA Astrophysics Data System (ADS)

    Akhtar, N.; Brauch, J.; Dobler, A.; Béranger, K.; Ahrens, B.

    2014-08-01

    So-called medicanes (Mediterranean hurricanes) are meso-scale, marine, and warm-core Mediterranean cyclones that exhibit some similarities to tropical cyclones. The strong cyclonic winds associated with medicanes threaten the highly populated coastal areas around the Mediterranean basin. To reduce the risk of casualties and overall negative impacts, it is important to improve the understanding of medicanes with the use of numerical models. In this study, we employ an atmospheric limited-area model (COSMO-CLM) coupled with a one-dimensional ocean model (1-D NEMO-MED12) to simulate medicanes. The aim of this study is to assess the robustness of the coupled model in simulating these extreme events. For this purpose, 11 historical medicane events are simulated using the atmosphere-only model, COSMO-CLM, and the coupled model, with different setups (horizontal atmospheric grid spacings of 0.44, 0.22, and 0.08°; with/without spectral nudging, and an ocean grid spacing of 1/12°). The results show that at high resolution, the coupled model is able to not only simulate most of the medicane events but also improve the track length, core temperature, and wind speed of simulated medicanes compared to the atmosphere-only simulations. The results suggest that the coupled model is more proficient for systemic and detailed studies of historical medicane events, and that this model can be an effective tool for future projections.

  10. Reducing numerical costs for core wide nuclear reactor CFD simulations by the Coarse-Grid-CFD

    NASA Astrophysics Data System (ADS)

    Viellieber, Mathias; Class, Andreas G.

    2013-11-01

    Traditionally, complete nuclear reactor core simulations are performed with subchannel analysis codes that rely on experimental and empirical input. The Coarse-Grid-CFD (CGCFD) intends to replace the experimental or empirical input with CFD data. The reactor core consists of repetitive flow patterns, allowing the general approach of creating a parametrized model for one segment and composing many of those to obtain the entire reactor simulation. The method is based on a detailed and well-resolved CFD simulation of one representative segment. From this simulation we extract so-called parametrized volumetric forces which close an otherwise strongly under-resolved, coarsely meshed model of a complete reactor setup. While the formulation so far accounts for forces created internally in the fluid, other effects, e.g., obstruction and flow deviation through spacers and wire wraps, still need to be accounted for if the geometric details are not represented in the coarse mesh. These are modelled with an Anisotropic Porosity Formulation (APF). This work focuses on the application of the CGCFD to a complete reactor core setup and the accomplishment of the parametrization of the volumetric forces.

  11. Microwave and infrared simulations of an intense convective system and comparison with aircraft observations

    NASA Technical Reports Server (NTRS)

    Prasad, N.; Yeh, Hwa-Young M.; Adler, Robert F.; Tao, Wei-Kuo

    1995-01-01

    A three-dimensional cloud model, radiative transfer model-based simulation system is tested and validated against the aircraft-based radiance observations of an intense convective system in southeastern Virginia on 29 June 1986 during the Cooperative Huntsville Meteorological Experiment. NASA's ER-2, a high-altitude research aircraft with a complement of radiometers operating at an 11-micrometer infrared channel and 18-, 37-, 92-, and 183-GHz microwave channels, provided data for this study. The cloud model successfully simulated the cloud system with regard to aircraft- and radar-observed cloud-top heights and diameters and with regard to radar-observed reflectivity structure. For the simulation time found to correspond best with the aircraft- and radar-observed structure, brightness temperatures T_b are simulated and compared with observations for all the microwave frequencies along with the 11-micrometer infrared channel. Radiance calculations at the various frequencies correspond well with the aircraft observations in the areas of deep convection. The clustering of 37-147-GHz T_b observations and the isolation of the 18-GHz values over the convective cores are well simulated by the model. The radiative transfer model, in general, is able to simulate the observations reasonably well from 18 GHz through 174 GHz within all convective areas of the cloud system. When the aircraft-observed 18- and 37-GHz, and 90- and 174-GHz T_b are plotted against each other, the relationships have a gradual difference in the slope due to the differences in the ice particle size in the convective and more stratiform areas of the cloud. The model is able to capture these differences observed by the aircraft. Brightness temperature-rain rate relationships compare reasonably well with the aircraft observations in terms of the slope of the relationship. The model calculations are also extended to select high-frequency channels at 220, 340, and 400 GHz to simulate the Millimeter-wave Imaging Radiometer aircraft instrument to be flown in the near future. All three of these frequencies are able to discriminate the convective and anvil portions of the system, providing useful information similar to that from the frequencies below 183 GHz but with potentially enhanced spatial resolution from a satellite platform. In thin clouds, the dominant effect of water vapor is seen at 174, 340, and 400 GHz. In thick cloudy areas, the scattering effect is dominant at 90 and 220 GHz, while the overlaying water vapor can attenuate at 174, 340, and 400 GHz. All frequencies (90-400 GHz) show strong signatures in the core.

  12. Interim MELCOR Simulation of the Fukushima Daiichi Unit 2 Accident Reactor Core Isolation Cooling Operation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ross, Kyle W.; Gauntt, Randall O.; Cardoni, Jeffrey N.

    2013-11-01

    Data, a brief description of key boundary conditions, and results of Sandia National Laboratories’ ongoing MELCOR analysis of the Fukushima Unit 2 accident are given for the reactor core isolation cooling (RCIC) system. Important assumptions and related boundary conditions in the current analysis that are additional to, or different from, those assumed or imposed in SAND2012-6173 are identified. This work is for the U.S. Department of Energy’s Nuclear Energy University Programs fiscal year 2014 Reactor Safety Technologies Research and Development Program RC-7: RCIC Performance under Severe Accident Conditions.

  13. Nanomechanical Optical Fiber with Embedded Electrodes Actuated by Joule Heating.

    PubMed

    Lian, Zhenggang; Segura, Martha; Podoliak, Nina; Feng, Xian; White, Nicholas; Horak, Peter

    2014-07-31

    Nanomechanical optical fibers with metal electrodes embedded in the jacket were fabricated by a multi-material co-draw technique. At the center of the fibers, two glass cores suspended by thin membranes and surrounded by air form a directional coupler that is highly temperature-dependent. We demonstrate optical switching between the two fiber cores by Joule heating of the electrodes with as little as 0.4 W electrical power, thereby demonstrating an electrically actuated all-fiber microelectromechanical system (MEMS). Simulations show that the main mechanism for optical switching is the transverse thermal expansion of the fiber structure.

  14. The Interplay of Opacities and Rotation in Promoting the Explosion of Core-Collapse Supernovae

    NASA Astrophysics Data System (ADS)

    Vartanyan, David; Burrows, Adam; Radice, David

    2018-01-01

    For over five decades, the mechanism of explosion in core-collapse supernovae has been a central unsolved problem in astrophysics, challenging both our computational capabilities and our understanding of relevant physics. Current simulations often produce explosions, but they are at times underenergetic. The neutrino mechanism, wherein a fraction of emitted neutrinos is absorbed in the mantle of the star to reignite the stalled shock, remains the dominant model for reviving explosions in massive stars undergoing core collapse. We present here a diverse suite of 2D axisymmetric simulations produced by FORNAX, a highly parallelizable multidimensional supernova simulation code. We explore the effects of various corrections, including the many-body correction, to neutrino-matter opacities and the possible role of rotation in promoting explosion amongst various core-collapse progenitors.

  15. KSC-2009-2655

    NASA Image and Video Library

    2009-04-14

    CAPE CANAVERAL, Fla. – In high bay 4 of the Vehicle Assembly Building at NASA's Kennedy Space Center in Florida, the crane raises the Ares I-X simulated launch abort system, or LAS, to a vertical position. The LAS will then be ready for assembly with the crew module simulator. Ares I-X is the flight test vehicle for the Ares I, which is part of the Constellation Program to return men to the moon and beyond. Ares I is the essential core of a safe, reliable, cost-effective space transportation system that eventually will carry crewed missions back to the moon, on to Mars and out into the solar system. Ares I-X is targeted for launch in July 2009. Photo credit: NASA/Jack Pfaller

  16. KSC-2009-2653

    NASA Image and Video Library

    2009-04-14

CAPE CANAVERAL, Fla. – In high bay 4 of the Vehicle Assembly Building at NASA's Kennedy Space Center in Florida, the crane lifts the Ares I-X simulated launch abort system, or LAS, from its stand. The LAS will be rotated to vertical for assembly with the crew module simulator. Ares I-X is the flight test vehicle for the Ares I, which is part of the Constellation Program to return men to the moon and beyond. Ares I is the essential core of a safe, reliable, cost-effective space transportation system that eventually will carry crewed missions back to the moon, on to Mars and out into the solar system. Ares I-X is targeted for launch in July 2009. Photo credit: NASA/Jack Pfaller

  17. KSC-2009-2654

    NASA Image and Video Library

    2009-04-14

    CAPE CANAVERAL, Fla. – In high bay 4 of the Vehicle Assembly Building at NASA's Kennedy Space Center in Florida, the crane begins to raise the Ares I-X simulated launch abort system, or LAS, to a vertical position. The LAS will then be ready for assembly with the crew module simulator. Ares I-X is the flight test vehicle for the Ares I, which is part of the Constellation Program to return men to the moon and beyond. Ares I is the essential core of a safe, reliable, cost-effective space transportation system that eventually will carry crewed missions back to the moon, on to Mars and out into the solar system. Ares I-X is targeted for launch in July 2009. Photo credit: NASA/Jack Pfaller

  18. Effect of attractive interactions on the water-like anomalies of a core-softened model potential.

    PubMed

    Pant, Shashank; Gera, Tarun; Choudhury, Niharendu

    2013-12-28

It is now well established that water-like anomalies can be reproduced by a spherically symmetric potential with two length scales, popularly known as a core-softened potential. In the present study we investigate the effect of attractive interactions among the particles of a model fluid interacting through a core-softened potential on the existence and location of various water-like anomalies in the temperature-pressure plane. We employ extensive molecular dynamics simulations to study the anomalous behavior of various order parameters and properties under isothermal compression. Order map analyses have also been performed for all the potentials. We observe that all the systems, with varying depths of the attractive well, show structural, dynamic, and thermodynamic anomalies. Many previous studies involving model water and a class of core-softened potentials have concluded that the structural anomaly region encloses the diffusion anomaly region, which in turn encloses the density anomaly region; the same pattern is observed in the present study for the systems with a shallower attractive well. For the systems with a deeper attractive well, we observe that the diffusion anomaly region shifts toward higher densities and is not always enclosed by the structural anomaly region. In this case, the density anomaly region is also not completely enclosed by the diffusion anomaly region.
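
A continuous two-length-scale potential of the kind studied here is commonly written in the core-softened literature as a Lennard-Jones term plus a repulsive Gaussian shoulder, and an extra tunable well lets one vary the attraction as this study does. A minimal sketch in reduced units (the functional form is standard, but the parameter values and the placement of the extra well are illustrative, not the authors' model):

```python
import math

def core_softened(r, u0=5.0, c=1.0, r0=0.7, lam=0.0):
    """Core-softened pair potential in reduced units: a Lennard-Jones term
    plus a repulsive Gaussian shoulder at r0, plus an optional extra
    attractive well of depth ~lam (lam=0 gives a purely repulsive shoulder)."""
    lj = 4.0 * (r**-12 - r**-6)
    shoulder = u0 * math.exp(-((r - r0) / c) ** 2)
    attraction = -lam * math.exp(-((r - 2.0) / 0.5) ** 2)  # illustrative well near r ~ 2
    return lj + shoulder + attraction

# Two length scales: a soft repulsive shoulder near r ~ 1 and,
# for lam > 0, an attractive well near r ~ 2.
print(core_softened(1.0))            # shoulder region: positive
print(core_softened(2.0, lam=1.0))   # attractive well: negative
```

Sweeping `lam` mimics the varying well depth whose effect on the anomaly regions is mapped in this work.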

  19. ROLE OF MAGNETIC FIELD STRENGTH AND NUMERICAL RESOLUTION IN SIMULATIONS OF THE HEAT-FLUX-DRIVEN BUOYANCY INSTABILITY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Avara, Mark J.; Reynolds, Christopher S.; Bogdanovic, Tamara, E-mail: mavara@astro.umd.edu, E-mail: chris@astro.umd.edu, E-mail: tamarab@gatech.edu

    2013-08-20

The role played by magnetic fields in the intracluster medium (ICM) of galaxy clusters is complex. The weakly collisional nature of the ICM leads to thermal conduction that is channeled along field lines. This anisotropic heat conduction profoundly changes the instabilities of the ICM atmosphere, with convective stabilities being driven by temperature gradients of either sign. Here, we employ the Athena magnetohydrodynamic code to investigate the local non-linear behavior of the heat-flux-driven buoyancy instability (HBI) relevant in the cores of cooling-core clusters where the temperature increases with radius. We study a grid of two-dimensional simulations that span a large range of initial magnetic field strengths and numerical resolutions. For very weak initial fields, we recover the previously known result that the HBI wraps the field in the horizontal direction, thereby shutting off the heat flux. However, we find that simulations that begin with intermediate initial field strengths have a qualitatively different behavior, forming HBI-stable filaments that resist field-line wrapping and enable sustained vertical conductive heat flux at a level of 10%-25% of the Spitzer value. While astrophysical conclusions regarding the role of conduction in cooling cores require detailed global models, our local study proves that systems dominated by the HBI do not necessarily quench the conductive heat flux.

  20. On the Formation of Ultra-Diffuse Galaxies as Tidally-Stripped Systems

    NASA Astrophysics Data System (ADS)

    Carleton, Timothy; Cooper, Michael; Kaplinghat, Manoj; Errani, Raphael; Penarrubia, Jorge

    2018-01-01

The recent identification of a large population of so-called 'Ultra-Diffuse' Galaxies (UDGs), with stellar masses ~10^8 M⊙ but half-light radii over 1.5 kpc, has challenged our understanding of galaxy evolution. Motivated by the environmental dependence of UDG properties and abundance, I present a model for the formation of UDGs through tidal stripping of dwarf galaxies in cored dark matter halos. To test this scenario, I utilize results from simulations of tidal stripping, which demonstrate that changes in the stellar profile of a tidally stripped galaxy can be written as a function of the amount of tidal stripping experienced by the halo (tidal tracks). These tracks, however, are different for cored and cuspy halos. Additional simulations show how the halo responds to tidal interactions given the halo orbit within a cluster. In particular, dwarf elliptical galaxies, born in 10^10-10^10.5 M⊙ halos, expand significantly as a result of tidal stripping and produce UDGs. Applying these models to the population of halos in the Bolshoi simulation, I am able to follow the effects of tidal stripping on the dwarf galaxy population in clusters. Using tidal tracks for cuspy halos does not reproduce the observed properties of UDGs. However, using the tidal tracks for cored halos, I reproduce the distribution of sizes, stellar masses, and abundance of UDGs in clusters remarkably well.
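
The "tidal track" formalism referred to above expresses how a structural property of a satellite (e.g. stellar mass or half-light radius) changes as a function of the fraction of halo mass that remains bound; a commonly used fitting form in the stripping literature is g(x) = 2^μ x^η / (1 + x)^μ, with different exponents (μ, η) for cored and cuspy halos. A sketch with placeholder exponents (the calibrated values come from the stripping simulations and are not reproduced here):

```python
def tidal_track(x, mu, eta):
    """Tidal-track fitting form: relative change of a structural property
    as a function of the bound mass fraction x = M(t)/M(0).
    Normalised so that an unstripped halo (x = 1) is unchanged."""
    return 2.0**mu * x**eta / (1.0 + x) ** mu

# Placeholder exponents -- purely illustrative, not the calibrated values.
mu_demo, eta_demo = 0.4, 0.3
print(tidal_track(1.0, mu_demo, eta_demo))  # 1.0 by construction
print(tidal_track(0.1, mu_demo, eta_demo))  # remaining fraction after 90% mass loss
```

The argument of the model is then carried by the exponents: cored-halo tracks let the stellar component expand under stripping, while cuspy-halo tracks do not.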

  1. Rhapsody-G simulations I: the cool cores, hot gas and stellar content of massive galaxy clusters

    DOE PAGES

Hahn, Oliver; Martizzi, Davide; Wu, Hao-Yi; ...

    2017-01-25

We present the rhapsody-g suite of cosmological hydrodynamic zoom simulations of 10 massive galaxy clusters at the Mvir ~ 10^15 M⊙ scale. These simulations include cooling and subresolution models for star formation and stellar and supermassive black hole feedback. The sample is selected to capture the whole gamut of assembly histories that produce clusters of similar final mass. We present an overview of the successes and shortcomings of such simulations in reproducing both the stellar properties of galaxies as well as properties of the hot plasma in clusters. In our simulations, a long-lived cool-core/non-cool-core dichotomy arises naturally, and the emergence of non-cool cores is related to low angular momentum major mergers. Nevertheless, the cool-core clusters exhibit a low central entropy compared to observations, which cannot be alleviated by thermal active galactic nuclei feedback. For cluster scaling relations, we find that the simulations match well the M500-Y500 scaling of Planck Sunyaev-Zeldovich clusters but deviate somewhat from the observed X-ray luminosity and temperature scaling relations in the sense of being slightly too bright and too cool at fixed mass, respectively. Stars are produced at an efficiency consistent with abundance-matching constraints and central galaxies have star formation rates consistent with recent observations. While our simulations thus match various key properties remarkably well, we conclude that the shortcomings strongly suggest an important role for non-thermal processes (through feedback or otherwise) or thermal conduction in shaping the intracluster medium.

  2. rhapsody-g simulations - I. The cool cores, hot gas and stellar content of massive galaxy clusters

    NASA Astrophysics Data System (ADS)

    Hahn, Oliver; Martizzi, Davide; Wu, Hao-Yi; Evrard, August E.; Teyssier, Romain; Wechsler, Risa H.

    2017-09-01

We present the rhapsody-g suite of cosmological hydrodynamic zoom simulations of 10 massive galaxy clusters at the Mvir ~ 10^15 M⊙ scale. These simulations include cooling and subresolution models for star formation and stellar and supermassive black hole feedback. The sample is selected to capture the whole gamut of assembly histories that produce clusters of similar final mass. We present an overview of the successes and shortcomings of such simulations in reproducing both the stellar properties of galaxies as well as properties of the hot plasma in clusters. In our simulations, a long-lived cool-core/non-cool-core dichotomy arises naturally, and the emergence of non-cool cores is related to low angular momentum major mergers. Nevertheless, the cool-core clusters exhibit a low central entropy compared to observations, which cannot be alleviated by thermal active galactic nuclei feedback. For cluster scaling relations, we find that the simulations match well the M500-Y500 scaling of Planck Sunyaev-Zeldovich clusters but deviate somewhat from the observed X-ray luminosity and temperature scaling relations in the sense of being slightly too bright and too cool at fixed mass, respectively. Stars are produced at an efficiency consistent with abundance-matching constraints and central galaxies have star formation rates consistent with recent observations. While our simulations thus match various key properties remarkably well, we conclude that the shortcomings strongly suggest an important role for non-thermal processes (through feedback or otherwise) or thermal conduction in shaping the intracluster medium.

  3. CMS Readiness for Multi-Core Workload Scheduling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perez-Calero Yzquierdo, A.; Balcas, J.; Hernandez, J.

In the present run of the LHC, CMS data reconstruction and simulation algorithms benefit greatly from being executed as multiple threads running on several processor cores. The complexity of the Run 2 events requires parallelization of the code to reduce the memory-per-core footprint that constrains serial execution programs, thus optimizing the exploitation of present multi-core processor architectures. The allocation of computing resources for multi-core tasks, however, becomes a complex problem in itself. The CMS workload submission infrastructure employs multi-slot partitionable pilots, built on HTCondor and GlideinWMS native features, to enable scheduling of single and multi-core jobs simultaneously. This provides a solution for the scheduling problem in a uniform way across grid sites running a diversity of gateways to compute resources and batch system technologies. This paper presents this strategy and the tools on which it has been implemented. The experience of managing multi-core resources at the Tier-0 and Tier-1 sites during 2015, along with the deployment phase to Tier-2 sites during early 2016, is reported. The process of performance monitoring and optimization to achieve efficient and flexible use of the resources is also described.

  4. CAM-SE: A scalable spectral element dynamical core for the Community Atmosphere Model.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dennis, John; Edwards, Jim; Evans, Kate J

    2012-01-01

The Community Atmosphere Model (CAM) version 5 includes a spectral element dynamical core option from NCAR's High-Order Method Modeling Environment. It is a continuous Galerkin spectral finite element method designed for fully unstructured quadrilateral meshes. The current configurations in CAM are based on the cubed-sphere grid. The main motivation for including a spectral element dynamical core is to improve the scalability of CAM by allowing quasi-uniform grids for the sphere that do not require polar filters. In addition, the approach provides other state-of-the-art capabilities such as improved conservation properties. Spectral elements are used for the horizontal discretization, while most other aspects of the dynamical core are a hybrid of well-tested techniques from CAM's finite volume and global spectral dynamical core options. Here we first give an overview of the spectral element dynamical core as used in CAM. We then give scalability and performance results from CAM running with three different dynamical core options within the Community Earth System Model, using a pre-industrial time-slice configuration. We focus on high resolution simulations of 1/4 degree, 1/8 degree, and T340 spectral truncation.

  5. CMS readiness for multi-core workload scheduling

    NASA Astrophysics Data System (ADS)

    Perez-Calero Yzquierdo, A.; Balcas, J.; Hernandez, J.; Aftab Khan, F.; Letts, J.; Mason, D.; Verguilov, V.

    2017-10-01

In the present run of the LHC, CMS data reconstruction and simulation algorithms benefit greatly from being executed as multiple threads running on several processor cores. The complexity of the Run 2 events requires parallelization of the code to reduce the memory-per-core footprint that constrains serial execution programs, thus optimizing the exploitation of present multi-core processor architectures. The allocation of computing resources for multi-core tasks, however, becomes a complex problem in itself. The CMS workload submission infrastructure employs multi-slot partitionable pilots, built on HTCondor and GlideinWMS native features, to enable scheduling of single and multi-core jobs simultaneously. This provides a solution for the scheduling problem in a uniform way across grid sites running a diversity of gateways to compute resources and batch system technologies. This paper presents this strategy and the tools on which it has been implemented. The experience of managing multi-core resources at the Tier-0 and Tier-1 sites during 2015, along with the deployment phase to Tier-2 sites during early 2016, is reported. The process of performance monitoring and optimization to achieve efficient and flexible use of the resources is also described.
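
The multi-slot partitionable-pilot idea can be illustrated with a toy model: a pilot advertises one large slot, and incoming single-core and multi-core jobs carve dynamic sub-slots out of the shared CPU pool until it is exhausted. A deliberately simplified sketch (not the actual HTCondor/GlideinWMS logic; all numbers are illustrative):

```python
class PartitionablePilot:
    """Toy model of a partitionable pilot slot: jobs of any core count
    are packed into one pool of CPUs, mimicking mixed single/multi-core
    scheduling on a single pilot."""

    def __init__(self, total_cores: int):
        self.free_cores = total_cores
        self.running = []  # core counts of accepted jobs

    def try_start(self, cores: int) -> bool:
        """Accept the job if enough cores remain; otherwise reject it."""
        if cores <= self.free_cores:
            self.free_cores -= cores
            self.running.append(cores)
            return True
        return False

pilot = PartitionablePilot(total_cores=8)
# A 4-core reconstruction job, then a stream of 1-core jobs.
accepted = [pilot.try_start(c) for c in (4, 1, 1, 1, 1, 1)]
print(accepted)          # [True, True, True, True, True, False]
print(pilot.free_cores)  # 0
```

The real system layers matchmaking, pilot lifetimes, and defragmentation on top of this packing problem, but the core resource-accounting idea is the same.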

  6. A new synoptic scale resolving global climate simulation using the Community Earth System Model

    NASA Astrophysics Data System (ADS)

    Small, R. Justin; Bacmeister, Julio; Bailey, David; Baker, Allison; Bishop, Stuart; Bryan, Frank; Caron, Julie; Dennis, John; Gent, Peter; Hsu, Hsiao-ming; Jochum, Markus; Lawrence, David; Muñoz, Ernesto; diNezio, Pedro; Scheitlin, Tim; Tomas, Robert; Tribbia, Joseph; Tseng, Yu-heng; Vertenstein, Mariana

    2014-12-01

High-resolution global climate modeling holds the promise of capturing planetary-scale climate modes and small-scale (regional and sometimes extreme) features simultaneously, including their mutual interaction. This paper discusses a new state-of-the-art high-resolution Community Earth System Model (CESM) simulation that was performed with these goals in mind. The atmospheric component was at 0.25° grid spacing, and the ocean component at 0.1°. One hundred years of "present-day" simulation were completed. Major results were that annual mean sea surface temperature (SST) in the equatorial Pacific and El Niño-Southern Oscillation variability were well simulated compared to standard resolution models. Tropical and southern Atlantic SST also had much reduced bias compared to previous versions of the model. In addition, the high resolution of the model enabled small-scale features of the climate system to be represented, such as air-sea interaction over ocean frontal zones, mesoscale systems generated by the Rockies, and tropical cyclones. Associated single component runs and standard resolution coupled runs are used to help attribute the strengths and weaknesses of the fully coupled run. The high-resolution run employed 23,404 cores, costing 250 thousand processor-hours per simulated year, and achieved about two simulated years per day on the NCAR-Wyoming supercomputer "Yellowstone."
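
The quoted throughput numbers are mutually consistent: at 250,000 processor-hours per simulated year spread across 23,404 cores, one simulated year takes about 10.7 wall-clock hours, i.e. roughly two simulated years per day. As a quick check:

```python
# Throughput figures quoted for the high-resolution CESM run.
core_hours_per_sim_year = 250_000   # processor-hours per simulated year
cores = 23_404                      # cores used by the run

wall_hours_per_sim_year = core_hours_per_sim_year / cores
sim_years_per_day = 24 / wall_hours_per_sim_year

print(round(wall_hours_per_sim_year, 1))  # 10.7
print(round(sim_years_per_day, 2))        # 2.25
```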

  7. Code Development and Assessment for Reactor Outage Thermal-Hydraulic and Safety Analysis - Midloop Operation with Loss of Residual Heat Removal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liang, Thomas K.S.; Ko, F.-K

Although only a few percent of residual power remains during plant outages, the associated risk of core uncovery and corresponding fuel overheating has been identified to be relatively high, particularly under midloop operation (MLO) in pressurized water reactors. However, to analyze the system behavior during outages, the tools currently available, such as RELAP5, RETRAN, etc., cannot easily perform the task. Therefore, a medium-sized program aiming at reactor outage simulation and evaluation, such as MLO with the loss of residual heat removal (RHR), was developed. All important thermal-hydraulic processes involved during MLO with the loss of RHR will be properly simulated by the newly developed reactor outage simulation and evaluation (ROSE) code. Important processes during MLO with loss of RHR involve a pressurizer insurge caused by the hot-leg flooding, reflux condensation, liquid holdup inside the steam generator, loop-seal clearance, core-level depression, etc. Since the accuracy of the pressure distribution from the classical nodal momentum approach will be degraded when the system is stratified and under atmospheric pressure, the two-region approach with a modified two-fluid model will be the theoretical basis of the new program to analyze the nuclear steam supply system during plant outages. To verify the analytical model in the first step, posttest calculations against the closed integral midloop experiments with loss of RHR were performed. The excellent simulation capacity of the ROSE code against the Institute of Nuclear Energy Research Integral System Test Facility (IIST) test data is demonstrated.

  8. Ab initio MD simulations of Mg2SiO4 liquid at high pressures and temperatures relevant to the Earth's mantle

    NASA Astrophysics Data System (ADS)

    Martin, G. B.; Kirtman, B.; Spera, F. J.

    2010-12-01

Computational studies implementing Density Functional Theory (DFT) methods have become very popular in the Materials Sciences in recent years. DFT codes are now used routinely to simulate properties of geomaterials, mainly silicates and geochemically important metals such as Fe. These materials are ubiquitous in the Earth's mantle and core and in terrestrial exoplanets. Because of computational limitations, most First Principles Molecular Dynamics (FPMD) calculations are done on systems of only ~100 atoms for a few picoseconds. While this approach can be useful for calculating physical quantities related to crystal structure, vibrational frequency, and other lattice-scale properties (especially in crystals), it would be useful to be able to compute larger systems, especially for extracting transport properties and coordination statistics. Previous studies have used codes such as VASP, where CPU time increases as N^2, making calculations on systems of more than 100 atoms computationally very taxing. SIESTA (Soler, et al. 2002) is an order-N (linear-scaling) DFT code that enables electronic structure and MD computations on larger systems (N ~ 1000) by making approximations such as localized numerical orbitals. Here we test the applicability of SIESTA to simulate geosilicates in the liquid and glass state. We have used SIESTA for MD simulations of liquid Mg2SiO4 at various state points pertinent to the Earth's mantle and congruous with those calculated in a previous DFT study using the VASP code (DeKoker, et al. 2008). The core electronic wave functions of Mg, Si, and O were approximated using pseudopotentials with core cutoff radii of 1.38, 1.0, and 0.61 Angstroms, respectively. The Ceperly-Alder parameterization of the Local Density Approximation (LDA) was used as the exchange-correlation functional. Known systematic overbinding of LDA was corrected with the addition of a pressure term, P ≈ 1.6 GPa, which is the pressure calculated by SIESTA at the experimental zero-pressure volume of forsterite under static conditions (Stixrude and Lithgow-Bertelloni 2005). Results are reported here that show SIESTA calculations of T and P over densities in the range of 2.7-5.0 g/cc of liquid Mg2SiO4 are similar to the VASP calculations of DeKoker et al. (2008), which used the same functional. This opens the possibility of conducting fast ab initio MD simulations of geomaterials with hundreds of atoms.
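
The practical difference between quadratic and linear scaling is easy to quantify: growing a system from 100 to 1000 atoms costs 100 times more CPU time under an O(N²) method but only 10 times more under an O(N) method, which is what makes ~1000-atom liquid simulations feasible with an order-N code. A back-of-the-envelope comparison:

```python
def relative_cost(n_new: int, n_old: int, exponent: int) -> float:
    """Relative CPU cost of growing a system from n_old to n_new atoms
    when the method's cost scales as N**exponent."""
    return (n_new / n_old) ** exponent

print(relative_cost(1000, 100, 2))  # 100.0 (quadratic-scaling code)
print(relative_cost(1000, 100, 1))  # 10.0  (linear-scaling, order-N code)
```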

  9. Thermal behavior of cylindrical buckling restrained braces at elevated temperatures.

    PubMed

    Talebi, Elnaz; Tahir, Mahmood Md; Zahmatkesh, Farshad; Yasreen, Airil; Mirza, Jahangir

    2014-01-01

The primary focus of this investigation was to analyze sequentially coupled nonlinear thermal stress using a three-dimensional model. It was meant to shed light on the behavior of Buckling Restrained Brace (BRB) elements with a circular cross section at elevated temperature. Such bracing systems are comprised of a cylindrical steel core encased in a strong concrete-filled steel hollow casing. A debonding agent was rubbed on the core's surface to prevent shear stress transfer to the restraining system. The numerical model was verified against analytical solutions developed by other researchers. Performance of the BRB system under seismic loading at ambient temperature has been well documented; however, its performance in case of fire has yet to be explored. This study showed that the failure of the brace may be attributed to material strength reduction and high compressive forces, both due to temperature rise. Furthermore, the limiting temperatures for linear behavior of the steel casing and the concrete in the BRB element, for both numerical and analytical simulations, were about 196°C and 225°C, respectively. Finally, it is concluded that the performance of the BRB at elevated temperatures was the same as that seen at room temperature; that is, the steel core yields prior to the restraining system.

  10. A novel concept of fault current limiter based on saturable core in high voltage DC transmission system

    NASA Astrophysics Data System (ADS)

    Yuan, Jiaxin; Zhou, Hang; Gan, Pengcheng; Zhong, Yongheng; Gao, Yanhui; Muramatsu, Kazuhiro; Du, Zhiye; Chen, Baichao

    2018-05-01

To develop a mechanical circuit breaker for high voltage direct current (HVDC) systems, a fault current limiter is required. Traditional methods to limit the DC fault current use superconducting technology or power electronic devices, which are quite difficult to bring to practical use under high voltage circumstances. In this paper, a novel concept of a high voltage DC transmission system fault current limiter (DCSFCL) based on a saturable core is proposed. In the DCSFCL, permanent magnets (PM) are added on both the upper and lower sides of the core to generate a reverse magnetic flux that offsets the magnetic flux generated by the DC current and makes the DC winding present a variable inductance to the DC system. In the normal state, the DCSFCL works as a smoothing reactor and its inductance is within the scope of the design requirements. When a fault occurs, the inductance of the DCSFCL rises immediately and limits the steepness of the fault current. Magnetic field simulations were carried out, showing that compared with a conventional smoothing reactor, the DCSFCL can decrease the high steepness of the DC fault current by 17% in less than 10 ms, which verifies the feasibility and effectiveness of this method.
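
The current-limiting action rests on the basic inductor relation di/dt = V/L: when a fault drives the core out of saturation, the winding inductance rises and the rate of rise of the fault current drops proportionally. A minimal sketch with illustrative numbers (assumed values, not the paper's design parameters):

```python
def fault_current_steepness(v_volts: float, l_henries: float) -> float:
    """Initial rate of rise of fault current through an inductor,
    di/dt = V/L, in amperes per second."""
    return v_volts / l_henries

V = 100e3        # illustrative DC-side driving voltage, volts (assumed)
L_normal = 0.05  # smoothing-reactor inductance in the normal state, H (assumed)
L_fault = 0.20   # higher inductance once the core desaturates, H (assumed)

print(fault_current_steepness(V, L_normal))  # ~2.0e6 A/s
print(fault_current_steepness(V, L_fault))   # ~5.0e5 A/s: 4x lower steepness
```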

  11. Interpretation of the results of the CORA-33 dry core BWR test

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ott, L.J.; Hagen, S.

All BWR degraded core experiments performed prior to CORA-33 were conducted under "wet" core degradation conditions, for which water remains within the core and continuous steaming feeds metal/steam oxidation reactions on the in-core metallic surfaces. However, one dominant set of accident scenarios would occur with reduced metal oxidation under "dry" core degradation conditions and, prior to CORA-33, this set had been neglected experimentally. The CORA-33 experiment was designed specifically to address this dominant set of BWR "dry" core severe accident scenarios and to partially resolve phenomenological uncertainties concerning the behavior of relocating metallic melts draining into the lower regions of a "dry" BWR core. CORA-33 was conducted on October 1, 1992, in the CORA test facility at KfK. Review of the CORA-33 data indicates that the test objectives were achieved; that is, core degradation occurred at a core heatup rate and a test section axial temperature profile that are prototypic of full-core nuclear power plant (NPP) simulations at "dry" core conditions. Simulations of the CORA-33 test at ORNL have required modification of existing control blade/canister materials interaction models to include the eutectic melting of the stainless steel/Zircaloy interaction products and the heat of mixing of stainless steel and Zircaloy. The timing and location of canister failure and melt intrusion into the fuel assembly appear to be adequately simulated by the ORNL models. This paper will present the results of the posttest analyses carried out at ORNL based upon the experimental data and the posttest examination of the test bundle at KfK. The implications of these results with respect to degraded core modeling and the associated safety issues are also discussed.

  12. Numerical study of core formation of asymmetrically driven cone-guided targets

    DOE PAGES

    Sawada, Hiroshi; Sakagami, Hitoshi

    2017-09-22

Compression of a directly driven fast ignition cone-sphere target with a finite number of laser beams is numerically studied using a three-dimensional hydrodynamics code IMPACT-3D. The formation of a dense plasma core is simulated for 12-, 9-, 6-, and 4-beam configurations of the GEKKO XII laser. The complex 3D shapes of the cores are analyzed by elucidating synthetic 2D x-ray radiographic images in two orthogonal directions. Finally, the simulated x-ray images show significant differences in the core shape between the two viewing directions and rotation of the stagnating core axis in the top view for the axisymmetric 9- and 6-beam configurations.

  13. Numerical study of core formation of asymmetrically driven cone-guided targets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sawada, Hiroshi; Sakagami, Hitoshi

Compression of a directly driven fast ignition cone-sphere target with a finite number of laser beams is numerically studied using a three-dimensional hydrodynamics code IMPACT-3D. The formation of a dense plasma core is simulated for 12-, 9-, 6-, and 4-beam configurations of the GEKKO XII laser. The complex 3D shapes of the cores are analyzed by elucidating synthetic 2D x-ray radiographic images in two orthogonal directions. Finally, the simulated x-ray images show significant differences in the core shape between the two viewing directions and rotation of the stagnating core axis in the top view for the axisymmetric 9- and 6-beam configurations.

  14. Controlled generation of different orbital angular momentum states in a hybrid optical fiber

    NASA Astrophysics Data System (ADS)

    Heng, Xiaobo; Gan, Jiulin; Zhang, Zhishen; Qian, Qi; Xu, Shanhui; Yang, Zhongmin

    2017-11-01

A new kind of hybrid optical fiber for the generation of different orbital angular momentum (OAM) states is proposed and investigated by simulation. The hybrid fiber is composed of three main regions: the core, the cladding, and the bow-tie-shaped stress-applying zones (SAZs). The SAZs are symmetrically distributed on both sides of the core and filled with the piezoelectric material PZT-5H, which generates radial mechanical movement when subjected to an electric field. The strain applied by the SAZs introduces an anisotropic variation of the material permittivity, which affects the propagation of the guided modes along the fiber core. The OAM modes of |l| = 1, 2, 3 can be generated by setting the appropriate electric potential applied to the SAZs. This fiber-based structure and electric control design enable the generation and adjustment of OAM states with the merits of accuracy, compactness, and practicality, and would have potential application in OAM optical fiber communication systems and other systems utilizing OAM light.

  15. High temperature UF6 RF plasma experiments applicable to uranium plasma core reactors

    NASA Technical Reports Server (NTRS)

    Roman, W. C.

    1979-01-01

An investigation was conducted using a 1.2 MW RF induction heater facility to aid in developing the technology necessary for designing a self-critical fissioning uranium plasma core reactor. Pure, high temperature uranium hexafluoride (UF6) was injected into a fluid-mechanically confined, steady-state, RF-heated argon plasma while employing different exhaust systems and diagnostic techniques to simulate and investigate some potential characteristics of uranium plasma core nuclear reactors. The development of techniques and equipment for fluid mechanical confinement of RF heated uranium plasmas with a high density of uranium vapor within the plasma, while simultaneously minimizing deposition of uranium and uranium compounds on the test chamber peripheral wall, endwall surfaces, and primary exhaust ducts, is discussed. The material tests and handling techniques suitable for use with high temperature, high pressure, gaseous UF6 are described, and the development of complementary diagnostic instrumentation and measurement techniques to characterize the uranium plasma, effluent exhaust gases, and residue deposited on the test chamber and exhaust system components is reported.

  16. Space Launch System Booster Separation Aerodynamic Database Development and Uncertainty Quantification

    NASA Technical Reports Server (NTRS)

    Chan, David T.; Pinier, Jeremy T.; Wilcox, Floyd J., Jr.; Dalle, Derek J.; Rogers, Stuart E.; Gomez, Reynaldo J.

    2016-01-01

    The development of the aerodynamic database for the Space Launch System (SLS) booster separation environment has presented many challenges because of the complex physics of the flow around three independent bodies due to proximity effects and jet interactions from the booster separation motors and the core stage engines. This aerodynamic environment is difficult to simulate in a wind tunnel experiment and also difficult to simulate with computational fluid dynamics. The database is further complicated by the high dimensionality of the independent variable space, which includes the orientation of the core stage, the relative positions and orientations of the solid rocket boosters, and the thrust levels of the various engines. Moreover, the clearance between the core stage and the boosters during the separation event is sensitive to the aerodynamic uncertainties of the database. This paper will present the development process for Version 3 of the SLS booster separation aerodynamic database and the statistics-based uncertainty quantification process for the database.

  17. Can we teach core clinical obstetrics and gynaecology skills using low fidelity simulation in an interprofessional setting?

    PubMed

    Kumar, Arunaz; Gilmour, Carole; Nestel, Debra; Aldridge, Robyn; McLelland, Gayle; Wallace, Euan

    2014-12-01

    Core clinical skills acquisition is an essential component of undergraduate medical and midwifery education. Although interprofessional education is an increasingly common format for learning efficient teamwork in clinical medicine, its value in undergraduate education is less clear. We present a collaborative effort from the medical and midwifery schools of Monash University, Melbourne, towards the development of an educational package centred around a core skills-based workshop using low fidelity simulation models in an interprofessional setting. Detailed feedback on the package was positive with respect to the relevance of the teaching content, how well the topic was taught with the task trainers and simulation models used, the pitch of the teaching level, and the confidence participants gained in performing the skill on a real patient after attending the workshop. Overall, interprofessional core skills training using low fidelity simulation models introduced at an undergraduate level in medicine and midwifery was well accepted. © 2014 The Royal Australian and New Zealand College of Obstetricians and Gynaecologists.

  18. Demonstration of fully coupled simplified extended station black-out accident simulation with RELAP-7

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Haihua; Zhang, Hongbin; Zou, Ling

    2014-10-01

    The RELAP-7 code is the next generation nuclear reactor system safety analysis code being developed at the Idaho National Laboratory (INL). The RELAP-7 code development effort started in October of 2011, and by the end of the second development year a number of physical components with simplified two-phase flow capability had been developed to support the simplified boiling water reactor (BWR) extended station blackout (SBO) analyses. The demonstration case includes the major components for the primary system of a BWR, as well as the safety system components for the safety relief valve (SRV), the reactor core isolation cooling (RCIC) system, and the wet well. Three scenarios for the SBO simulations have been considered. Since RELAP-7 is not a severe accident analysis code, the simulation stops when the fuel clad temperature reaches the damage point. Scenario I represents an extreme station blackout accident without any external cooling and cooling water injection. The system pressure is controlled by automatically releasing steam through SRVs. Scenario II includes the RCIC system but without SRV. The RCIC system is fully coupled with the reactor primary system and all the major components are dynamically simulated. The third scenario includes both the RCIC system and the SRV to provide a more realistic simulation. This paper will describe the major models and discuss the results for the three scenarios. The RELAP-7 simulations for the three simplified SBO scenarios show the importance of dynamically simulating the SRVs, the RCIC system, and the wet well system to the reactor safety during extended SBO accidents.

  19. NIF laboratory astrophysics simulations investigating the effects of a radiative shock on hydrodynamic instabilities

    NASA Astrophysics Data System (ADS)

    Angulo, A. A.; Kuranz, C. C.; Drake, R. P.; Huntington, C. M.; Park, H.-S.; Remington, B. A.; Kalantar, D.; MacLaren, S.; Raman, K.; Miles, A.; Trantham, Matthew; Kline, J. L.; Flippo, K.; Doss, F. W.; Shvarts, D.

    2016-10-01

    This poster will describe simulations based on results from ongoing laboratory astrophysics experiments at the National Ignition Facility (NIF) relevant to the effects of radiative shock on hydrodynamically unstable surfaces. The experiments performed on NIF uniquely provide the conditions required to emulate the radiative shocks that occur in astrophysical systems. The core-collapse explosions of red supergiant stars are one such example, wherein the interaction between the supernova ejecta and the circumstellar medium creates a region susceptible to Rayleigh-Taylor (R-T) instabilities. Radiative and nonradiative experiments were performed to show that R-T growth should be reduced by the effects of the radiative shocks that occur during this core collapse. Simulations were performed with the radiation hydrodynamics code Hyades, using the experimental conditions to find the mean interface acceleration of the instability; the results were then further analyzed with the buoyancy-drag model to observe how material expansion contributes to the mix-layer growth. This work is funded by the NNSA-DS and SC-OFES Joint Program in High-Energy-Density Laboratory Plasmas under Grant Number DE-FG52-09NA29548.

  20. A study of the required Rayleigh number to sustain dynamo with various inner core radius

    NASA Astrophysics Data System (ADS)

    Nishida, Y.; Katoh, Y.; Matsui, H.; Kumamoto, A.

    2017-12-01

    It is widely accepted that the geomagnetic field is sustained by thermally and compositionally driven convection of a liquid iron alloy in the outer core. The generation process of the geomagnetic field has been studied by a number of MHD dynamo simulations. Recent studies of the Earth's core evolution suggest that the ratio of the inner solid core radius ri to the outer liquid core radius ro changed from ri/ro = 0 to 0.35 during the last one billion years. There are some studies of dynamo action in the early Earth, with a smaller inner core than at present. Heimpel et al. (2005) revealed the Rayleigh number Ra of the onset of the dynamo process as a function of ri/ro from simulation, while paleomagnetic observation shows that the geomagnetic field has been sustained for 3.5 billion years. While Heimpel and Evans (2013) studied dynamo processes taking into account the thermal history of the Earth's interior, there were few cases corresponding to the early Earth. Driscoll (2016) performed a series of dynamo simulations based on a thermal evolution model. Despite a number of dynamo simulations, the dynamo process occurring in the interior of the early Earth has not been fully understood, because the magnetic Prandtl numbers in these simulations are much larger than that of the actual outer core. In the present study, we performed thermally driven dynamo simulations with different aspect ratios ri/ro = 0.15, 0.25 and 0.35 to evaluate the critical Ra for the onset of thermal convection and the Ra required to maintain the dynamo. For this purpose, we performed simulations with various Ra and fixed the other control parameters such as the Ekman, Prandtl, and magnetic Prandtl numbers. For the initial condition and boundary conditions, we followed the dynamo benchmark case 1 of Christensen et al. (2001). The results show that the critical Ra increases with smaller aspect ratio ri/ro. It is confirmed that a larger buoyancy amplitude is required to maintain the dynamo when the inner core is smaller.
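
    The classical Rayleigh number governing the onset of thermal convection across a shell of thickness d = ro − ri can be sketched as below. This is an illustrative aside, not the study's setup: the simulations above use the non-dimensional control parameters of the dynamo benchmark, Ra definitions vary between formulations, and all parameter magnitudes here are assumed placeholders.

```python
def rayleigh_number(alpha, g, delta_t, d, nu, kappa):
    """Classical Rayleigh number Ra = alpha*g*dT*d^3 / (nu*kappa) for thermal
    convection across a fluid layer of thickness d (SI units throughout)."""
    return alpha * g * delta_t * d ** 3 / (nu * kappa)

# Shell thickness shrinks as the inner core grows: d = ro * (1 - ri/ro).
ro = 3.48e6  # outer core radius (m)
for aspect in (0.15, 0.25, 0.35):  # the ri/ro values simulated in the study
    d = ro * (1 - aspect)
    # alpha, g, delta_t, nu, kappa below are rough illustrative magnitudes only
    ra = rayleigh_number(alpha=1e-5, g=10.0, delta_t=1e-3, d=d,
                         nu=1e-6, kappa=1e-5)
    print(f"ri/ro = {aspect:.2f}: d = {d:.3e} m, Ra = {ra:.3e}")
```

    Because Ra scales as d^3, fixed dimensional parameters give a larger Ra for a smaller inner core; the study's result concerns the critical Ra, which must be determined empirically from the simulations themselves.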

  1. Modified optical fiber daylighting system with sunlight transportation in free space.

    PubMed

    Vu, Ngoc-Hai; Pham, Thanh-Tuan; Shin, Seoyong

    2016-12-26

    We present the design, optical simulation, and experimental testing of a modified optical fiber daylighting system (M-OFDS) for indoor lighting. The M-OFDS comprises three sub-systems: concentration, collimation, and distribution. The concentration part is formed by coupling a Fresnel lens with a large-core plastic optical fiber. The sunlight collected by the concentration sub-system is propagated in a plastic optical fiber and then collimated by the collimator, which is a combination of a parabolic mirror and a convex lens. The collimated beam of sunlight travels in free space and is guided to the interior by directing flat mirrors, where it is diffused uniformly by a distributor. All parameters of the system are calculated theoretically. Based on the designed system, our simulation results demonstrated a maximum optical efficiency of 71%. The simulation results also showed that sunlight could be delivered to the illumination destination at a distance of 30 m. A prototype of the M-OFDS was fabricated, and preliminary experiments were performed outdoors. The simulation and experimental results confirmed that the M-OFDS was designed effectively. A large-scale system constructed from several M-OFDSs is also proposed. The results showed that the presented optical fiber daylighting system is a strong candidate for an inexpensive and highly efficient application of solar energy in buildings.
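
    As a back-of-the-envelope check on how such an overall efficiency figure decomposes, the optical efficiency of a chained system is the product of its sub-system efficiencies, with the fiber leg following the usual dB attenuation law. The sub-system values and the plastic-fiber attenuation coefficient below are illustrative assumptions, not figures from the paper:

```python
def fiber_transmission(alpha_db_per_m, length_m):
    """Power transmission of a fiber leg with attenuation alpha (dB/m)."""
    return 10 ** (-alpha_db_per_m * length_m / 10)

def system_efficiency(*etas):
    """Overall optical efficiency = product of sub-system efficiencies."""
    eta = 1.0
    for e in etas:
        eta *= e
    return eta

# Hypothetical breakdown: concentrator, 10 m of plastic optical fiber at an
# assumed 0.19 dB/m, collimator, free-space mirrors, diffuser.
eta = system_efficiency(0.85, fiber_transmission(0.19, 10), 0.95, 0.97, 0.95)
print(f"overall optical efficiency ~ {eta:.2f}")
```

    With these made-up stage values the chain lands near 0.5, which makes clear how quickly per-stage losses compound and why a 71% simulated maximum is a strong result.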

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rouxelin, Pascal Nicolas; Strydom, Gerhard

    Best-estimate plus uncertainty analysis of reactors is replacing the traditional conservative (stacked uncertainty) method for safety and licensing analysis. To facilitate uncertainty analysis applications, a comprehensive approach and methodology must be developed and applied. High temperature gas cooled reactors (HTGRs) have several features that require techniques not used in light-water reactor analysis (e.g., coated-particle design and large graphite quantities at high temperatures). The International Atomic Energy Agency has therefore launched the Coordinated Research Project on HTGR Uncertainty Analysis in Modeling to study uncertainty propagation in the HTGR analysis chain. The benchmark problem defined for the prismatic design is represented by the General Atomics Modular HTGR 350. The main focus of this report is the compilation and discussion of the results obtained for various permutations of Exercise I 2c and the use of the cross section data in Exercise II 1a of the prismatic benchmark, which is defined as the last and first steps of the lattice and core simulation phases, respectively. The report summarizes the Idaho National Laboratory (INL) best estimate results obtained for Exercise I 2a (fresh single-fuel block), Exercise I 2b (depleted single-fuel block), and Exercise I 2c (super cell) in addition to the first results of an investigation into the cross section generation effects for the super-cell problem. The two dimensional deterministic code known as the New ESC based Weighting Transport (NEWT) included in the Standardized Computer Analyses for Licensing Evaluation (SCALE) 6.1.2 package was used for the cross section evaluation, and the results obtained were compared to the three dimensional stochastic SCALE module KENO VI.
The NEWT cross section libraries were generated for several permutations of the current benchmark super-cell geometry and were then provided as input to the Phase II core calculation of the stand-alone neutronics Exercise II 1a. The steady state core calculations were simulated with the INL coupled-code system known as the Parallel and Highly Innovative Simulation for INL Code System (PHISICS) and the system thermal-hydraulics code known as the Reactor Excursion and Leak Analysis Program (RELAP5-3D), using the nuclear data libraries previously generated with NEWT. It was observed that significant differences in terms of multiplication factor and neutron flux exist between the various permutations of the Phase I super-cell lattice calculations. The use of these cross section libraries only leads to minor changes in the Phase II core simulation results for fresh fuel but shows significantly larger discrepancies for spent fuel cores. Furthermore, large incongruities were found between the SCALE NEWT and KENO VI results for the super cells, and while some trends could be identified, a final conclusion on this issue could not yet be reached. This report will be revised in mid 2016 with more detailed analyses of the super-cell problems and their effects on the core models, using the latest version of SCALE (6.2). The super-cell models seem to show substantial improvements in terms of neutron flux as compared to single-block models, particularly at thermal energies.

  3. A Comparison of HWRF, ARW and NMM Models in Hurricane Katrina (2005) Simulation

    PubMed Central

    Dodla, Venkata B.; Desamsetti, Srinivas; Yerramilli, Anjaneyulu

    2011-01-01

    The life cycle of Hurricane Katrina (2005) was simulated using three different modeling systems of the Weather Research and Forecasting (WRF) mesoscale model: HWRF (Hurricane WRF), designed specifically for hurricane studies, and the WRF model with two different dynamic cores, the Advanced Research WRF (ARW) and the Non-hydrostatic Mesoscale Model (NMM). The WRF model was developed and sourced from the National Center for Atmospheric Research (NCAR), incorporating advances in atmospheric simulation suitable for a broad range of applications. The HWRF modeling system was developed at the National Centers for Environmental Prediction (NCEP) based on the NMM dynamic core and physical parameterization schemes specially designed for the tropics. Hurricane Katrina was chosen as a case study because it is one of the most intense hurricanes to have caused severe destruction along the Gulf Coast from central Florida to Texas. The ARW, NMM and HWRF models were designed to have two-way interactive nested domains with 27 and 9 km resolutions. The three models were integrated for three days starting from 0000 UTC of 27 August 2005 to capture the landfall of Hurricane Katrina on 29 August. The initial and time-varying lateral boundary conditions were taken from NCEP global FNL (final analysis) data available at 1 degree resolution for the ARW and NMM models and from NCEP GFS data at 0.5 degree resolution for the HWRF model. The results show that the models simulated the intensification of Hurricane Katrina and the landfall on 29 August 2005 in agreement with the observations. Results from these experiments highlight the superior performance of the HWRF model over the ARW and NMM models in predicting the track and intensification of Hurricane Katrina. PMID:21776239

  4. A comparison of HWRF, ARW and NMM models in Hurricane Katrina (2005) simulation.

    PubMed

    Dodla, Venkata B; Desamsetti, Srinivas; Yerramilli, Anjaneyulu

    2011-06-01

    The life cycle of Hurricane Katrina (2005) was simulated using three different modeling systems of the Weather Research and Forecasting (WRF) mesoscale model: HWRF (Hurricane WRF), designed specifically for hurricane studies, and the WRF model with two different dynamic cores, the Advanced Research WRF (ARW) and the Non-hydrostatic Mesoscale Model (NMM). The WRF model was developed and sourced from the National Center for Atmospheric Research (NCAR), incorporating advances in atmospheric simulation suitable for a broad range of applications. The HWRF modeling system was developed at the National Centers for Environmental Prediction (NCEP) based on the NMM dynamic core and physical parameterization schemes specially designed for the tropics. Hurricane Katrina was chosen as a case study because it is one of the most intense hurricanes to have caused severe destruction along the Gulf Coast from central Florida to Texas. The ARW, NMM and HWRF models were designed to have two-way interactive nested domains with 27 and 9 km resolutions. The three models were integrated for three days starting from 0000 UTC of 27 August 2005 to capture the landfall of Hurricane Katrina on 29 August. The initial and time-varying lateral boundary conditions were taken from NCEP global FNL (final analysis) data available at 1 degree resolution for the ARW and NMM models and from NCEP GFS data at 0.5 degree resolution for the HWRF model. The results show that the models simulated the intensification of Hurricane Katrina and the landfall on 29 August 2005 in agreement with the observations. Results from these experiments highlight the superior performance of the HWRF model over the ARW and NMM models in predicting the track and intensification of Hurricane Katrina.
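
    Track forecasts like these are conventionally scored by the great-circle distance between simulated and observed storm centers at matching times. A minimal sketch of that metric follows; the two tracks in the example are made-up placeholder coordinates, not best-track or model data from the study:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    r = 6371.0  # mean Earth radius (km)
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = p2 - p1
    dlon = math.radians(lon2 - lon1)
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def mean_track_error(simulated, observed):
    """Mean position error (km) over matched track points."""
    errors = [haversine_km(*s, *o) for s, o in zip(simulated, observed)]
    return sum(errors) / len(errors)

# Hypothetical illustration: a simulated track offset from an observed one.
obs = [(24.5, -84.0), (25.4, -85.9), (26.5, -88.0), (29.3, -89.6)]
sim = [(24.7, -84.3), (25.6, -86.3), (26.9, -88.4), (29.6, -90.1)]
print(f"mean track error: {mean_track_error(sim, obs):.0f} km")
```

    Averaging this error over forecast lead times is how the relative track skill of HWRF, ARW, and NMM would typically be quantified.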

  5. LIDAR TS for ITER core plasma. Part II: simultaneous two wavelength LIDAR TS

    NASA Astrophysics Data System (ADS)

    Gowers, C.; Nielsen, P.; Salzmann, H.

    2017-12-01

    We have shown recently, and in more detail at this conference (Salzmann et al.), that the LIDAR approach to ITER core TS measurements requires only two mirrors in the inaccessible port plug area of the machine. This leads to simplified and robust alignment, lower risk of mirror damage by plasma contamination, and much simpler calibration, compared with the awkward and vulnerable optical geometry of the conventional imaging TS approach currently under development by ITER. In the present work we have extended the simulation code used previously to include the case of launching two laser pulses of different wavelengths simultaneously in LIDAR geometry. The aim of this approach is to broaden the choice of lasers available for the diagnostic. In the simulation code it is assumed that two short duration (300 ps) laser pulses of different wavelengths from an Nd:YAG laser are launched through the plasma simultaneously. The temperature and density profiles are deduced in the usual way, but from the resulting combined scattered signals in the different spectral channels of the single spectrometer. The spectral response and quantum efficiencies of the detectors used in the simulation are taken from catalogue data for commercially available Hamamatsu MCP-PMTs. The response times, gateability and tolerance to stray light levels of this type of photomultiplier have already been demonstrated in the JET LIDAR system and give sufficient spatial resolution to meet the ITER specification. Here we present the new simulation results from the code. They demonstrate that when these detectors are combined with the two-laser LIDAR approach, the full range of the specified ITER core plasma Te and ne can be measured with sufficient accuracy. So, with commercially available detectors and a simple modification of an Nd:YAG laser similar to that used in the conventional ITER core TS design mentioned above, the ITER requirements can be met.
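
    The spatial resolution of a LIDAR TS system follows from time-of-flight: a backscattered signal smeared over an effective time τ (laser pulse duration plus detector response) blurs the measurement over Δz = cτ/2. A sketch using the 300 ps pulse quoted above; the detector response value and the simple additive combination of the two times are illustrative assumptions:

```python
C = 299_792_458.0  # speed of light (m/s)

def range_resolution_m(pulse_s, detector_s=0.0):
    """LIDAR range resolution dz = c * tau_eff / 2, where tau_eff combines the
    laser pulse duration and detector response time (a simple sum here)."""
    return C * (pulse_s + detector_s) / 2

print(f"300 ps laser alone        : {range_resolution_m(300e-12):.3f} m")
print(f"plus 300 ps detector blur : {range_resolution_m(300e-12, 300e-12):.3f} m")
```

    The factor of two reflects the round trip of the backscattered light; fast gateable MCP-PMTs matter precisely because the detector term adds directly to this blur.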

  6. A novel method for calculating relative free energy of similar molecules in two environments

    NASA Astrophysics Data System (ADS)

    Farhi, Asaf; Singh, Bipin

    2017-03-01

    Calculating relative free energies is a topic of substantial interest and has many applications, including solvation and binding free energies, which are used in computational drug discovery. However, there remain the challenges of accuracy, simple implementation, robustness and efficiency, which prevent the calculations from being automated and limit their use. Here we present an exact and complete decoupling analysis in which the partition functions of the compared systems decompose into the partition functions of the common and different subsystems. This decoupling analysis is applicable to submolecules with coupled degrees of freedom, such as the methyl group, and to any potential function (including the typical dihedral potentials), making it possible to remove fewer terms in the transformation, which results in a more efficient calculation. We then show mathematically, in the context of partition function decoupling, that the two compared systems can be simulated separately, eliminating the need to design a composite system. We demonstrate the decoupling analysis and the separate transformations in a relative free energy calculation using MD simulations for a general force field, and compare to another calculation and to experimental results. We present a unified soft-core technique that ensures the monotonicity of the numerically integrated function (analytical proof), which is important for the selection of intermediates. We show mathematically that in this soft-core technique the numerically integrated function can be non-steep only when we transform the systems separately, which can simplify the numerical integration. Finally, we show that when the systems have a rugged energy landscape they can be equilibrated without introducing another sampling dimension, which also makes it possible to use the simulation results for other free energy calculations.
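
    The role of the soft-core potential and the λ integration can be illustrated with a toy thermodynamic-integration calculation. The Beutler-style soft-core Lennard-Jones form and the single frozen pair distance below are illustrative assumptions, not the paper's unified soft-core technique: a real calculation integrates ensemble averages ⟨∂U/∂λ⟩ obtained from MD sampling at each λ window, not a static configuration.

```python
import numpy as np

def softcore_lj(r, lam, eps=1.0, sigma=1.0, alpha=0.5):
    """Soft-core LJ pair energy (reduced units): smoothly decouples as lam -> 0
    and reduces to plain LJ at lam = 1 (assumed Beutler-style form)."""
    s6 = sigma ** 6
    denom = alpha * (1.0 - lam) * s6 + r ** 6
    return 4.0 * eps * lam * (s6 ** 2 / denom ** 2 - s6 / denom)

# Toy TI: dF = integral_0^1 dU/dlam dlam for one frozen pair at r = 1.2.
r, h = 1.2, 1e-5
lams = np.linspace(0.0, 1.0, 101)
dudl = (softcore_lj(r, lams + h) - softcore_lj(r, lams - h)) / (2 * h)
# Trapezoid rule over the lambda grid.
dF = float(np.sum(0.5 * (dudl[1:] + dudl[:-1]) * np.diff(lams)))
print(f"TI estimate: {dF:.6f}  vs exact U(1) - U(0): "
      f"{softcore_lj(r, 1.0) - softcore_lj(r, 0.0):.6f}")
```

    For a frozen configuration the integral simply telescopes to U(1) − U(0), which makes the quadrature easy to check; the point of a soft-core form is that the integrand stays finite and non-steep even as the particle decouples, which is exactly the property the abstract proves for its unified technique.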

  7. High speed data transmission coaxial-cable in the space communication system

    NASA Astrophysics Data System (ADS)

    Su, Haohang; Huang, Jing

    2018-01-01

    An effective method is demonstrated based on the scattering parameters of a high-speed 8-core coaxial cable measured with a vector network analyzer, and a semi-physical simulation is performed to obtain the eye diagram at different data transmission rates. The results can be applied to analyze the attenuation and distortion of signals propagating through the coaxial cable at high frequency, and can support electromagnetic compatibility design for high-speed data transmission systems.

  8. A Next Generation Atmospheric Prediction System for the Navy

    DTIC Science & Technology

    2015-09-30

    by DOE and NSF, while the HiRAM system has primarily been supported by NOAA, although both models have leveraged considerably from indirect and... Neptune scalability (blue line) with the increasing number of cores compared to a perfect simulation rate (black line). Horizontal distance (km... draw on the community expertise with both MPAS and HiRAM. NRL is a no-cost collaborator with a number of proposals for the ONR Seasonal Prediction

  9. Advanced and flexible multi-carrier receiver architecture for high-count multi-core fiber based space division multiplexed applications

    PubMed Central

    Asif, Rameez

    2016-01-01

    Space division multiplexing (SDM), incorporating multi-core fibers (MCFs), has been demonstrated as a way of maximizing data capacity in the face of an impending capacity crunch. To achieve high spectral density through multi-carrier encoding while simultaneously maintaining transmission reach, the benefits of inter-core crosstalk (XT) and non-linear compensation must be utilized. In this report, we propose a proof-of-concept unified receiver architecture that jointly compensates optical Kerr effects and intra- and inter-core XT in MCFs. The architecture is analysed in a multi-channel 512 Gbit/s dual-carrier DP-16QAM system over 800 km of 19-core MCF to validate the digital compensation of inter-core XT. Through this architecture: (a) we efficiently compensate the inter-core XT, improving the Q-factor by 4.82 dB; and (b) we achieve a substantial gain in transmission reach, increasing the maximum achievable distance from 480 km to 1208 km, via analytical analysis. Simulation results confirm that inter-core XT distortions are more severe for cores fabricated around the central axis of the cladding. Notably, the XT-induced Q-penalty can be suppressed to less than 1 dB up to −11.56 dB of inter-core XT over 800 km of MCF, offering flexibility to fabricate dense core structures with the same cladding diameter. Moreover, this report outlines the relationship between core pitch and forward-error correction (FEC). PMID:27270381
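
    For context on the quoted Q-factor figures, the Q-factor maps to a bit-error rate through the complementary error function, and gains are usually stated in dB. A minimal sketch of these standard conversions (not the receiver DSP itself):

```python
import math

def q_to_ber(q_linear):
    """BER for a binary decision with Gaussian noise: BER = 0.5*erfc(Q/sqrt(2))."""
    return 0.5 * math.erfc(q_linear / math.sqrt(2.0))

def q_db(q_linear):
    """Q-factor expressed in dB (20*log10 convention)."""
    return 20.0 * math.log10(q_linear)

q = 6.0
print(f"Q = {q} ({q_db(q):.2f} dB) -> BER = {q_to_ber(q):.2e}")
# A 4.82 dB Q-factor improvement corresponds to a linear-Q factor of 10^(4.82/20):
print(f"linear factor for a 4.82 dB gain: {10 ** (4.82 / 20):.2f}")
```

    A linear Q of about 6 (roughly 15.6 dB) corresponds to a BER near 1e-9, which is why dB-scale Q gains of a few dB translate into large margins against the FEC threshold mentioned in the abstract.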

  10. Parallel Transport with Sheath and Collisional Effects in Global Electrostatic Turbulent Transport in FRCs

    NASA Astrophysics Data System (ADS)

    Bao, Jian; Lau, Calvin; Kuley, Animesh; Lin, Zhihong; Fulton, Daniel; Tajima, Toshiki; Tri Alpha Energy, Inc. Team

    2017-10-01

    Collisional and turbulent transport in a field reversed configuration (FRC) is studied in global particle simulations using GTC (the gyrokinetic toroidal code). The global FRC geometry is incorporated in GTC by using a field-aligned mesh in cylindrical coordinates, which enables global simulations coupling the core and the scrape-off layer (SOL) across the separatrix. Furthermore, fully kinetic ions are implemented in GTC to treat the magnetic-null point in the FRC core. Both global simulations coupling the core and SOL regions and independent SOL-region simulations have been carried out to study turbulence. In this work, the "logical sheath boundary condition" is implemented to study parallel transport in the SOL. This method helps to relax the time and spatial steps without resolving the electron plasma frequency and Debye length, which enables turbulent transport simulations with sheath effects. We will study collisional and turbulent SOL parallel transport with mirror geometry and the sheath boundary condition in the C-2W divertor.

  11. Hydrodynamical simulations of the stream-core interaction in the slow merger of massive stars

    NASA Astrophysics Data System (ADS)

    Ivanova, N.; Podsiadlowski, Ph.; Spruit, H.

    2002-08-01

    We present detailed simulations of the interaction of a stream emanating from a mass-losing secondary with the core of a massive supergiant in the slow merger of two stars inside a common envelope. The dynamics of the stream can be divided into a ballistic phase, starting at the L1 point, and a hydrodynamical phase, where the stream interacts strongly with the core. Considering the merger of a 1- and 5-Msolar star with a 20-Msolar evolved supergiant, we present two-dimensional hydrodynamical simulations using the PROMETHEUS code to demonstrate how the penetration depth and post-impact conditions depend on the initial properties of the stream material (e.g. entropy, angular momentum, stream width) and the properties of the core (e.g. density structure and rotation rate). Using these results, we present a fitting formula for the entropy generated in the stream-core interaction and a recipe for the determination of the penetration depth based on a modified Bernoulli integral.

  12. Influence of item distribution pattern and abundance on efficiency of benthic core sampling

    USGS Publications Warehouse

    Behney, Adam C.; O'Shaughnessy, Ryan; Eichholz, Michael W.; Stafford, Joshua D.

    2014-01-01

    Core sampling is a commonly used method to estimate benthic item density, but little information exists about factors influencing the accuracy and time-efficiency of this method. We simulated core sampling in a Geographic Information System framework by generating points (benthic items) and polygons (core samplers) to assess how sample size (number of core samples), core sampler size (cm2), distribution of benthic items, and item density affected the bias and precision of estimates of density, the detection probability of items, and the time-costs. When items were distributed randomly versus clumped, bias decreased and precision increased with increasing sample size and increased slightly with increasing core sampler size. Bias and precision were only affected by benthic item density at very low values (500–1,000 items/m2). Detection probability (the probability of capturing ≥ 1 item in a core sample if it is available for sampling) was substantially greater when items were distributed randomly as opposed to clumped. Taking more small diameter core samples was always more time-efficient than taking fewer large diameter samples. We are unable to present a single, optimal sample size, but provide information for researchers and managers to derive optimal sample sizes dependent on their research goals and environmental conditions.
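
    The paper's GIS experiment can be caricatured in a few lines of Monte Carlo: scatter items uniformly over a plot (the random, unclumped case), drop circular cores at random, and tally density and detection estimates. All parameter values below are illustrative, and this simplified sketch omits the clumped-distribution and time-cost dimensions of the study:

```python
import math
import random

def simulate_core_sampling(true_density, core_area_cm2, n_cores,
                           plot_m2=10.0, seed=1):
    """Monte Carlo sketch of benthic core sampling with randomly distributed
    items. Returns (estimated density in items/m2, detection probability,
    i.e. the fraction of cores capturing >= 1 item)."""
    rng = random.Random(seed)
    side = math.sqrt(plot_m2)                       # square plot side (m)
    n_items = int(true_density * plot_m2)
    items = [(rng.uniform(0, side), rng.uniform(0, side)) for _ in range(n_items)]
    r = math.sqrt(core_area_cm2 / 1e4 / math.pi)    # core radius (m)
    hits = occupied = 0
    for _ in range(n_cores):
        # Core centers kept fully inside the plot to avoid edge clipping.
        cx, cy = rng.uniform(r, side - r), rng.uniform(r, side - r)
        n_in = sum((x - cx) ** 2 + (y - cy) ** 2 <= r * r for x, y in items)
        hits += n_in
        occupied += n_in > 0
    density_est = hits / (n_cores * core_area_cm2 / 1e4)
    return density_est, occupied / n_cores

density_est, detect_p = simulate_core_sampling(true_density=1000,
                                               core_area_cm2=50, n_cores=100)
print(f"estimated density: {density_est:.0f} items/m2, "
      f"detection probability: {detect_p:.2f}")
```

    Re-running the sketch with smaller cores or fewer samples reproduces the qualitative trade-offs the study quantifies: estimates stay unbiased for random distributions, but precision and detection probability fall as sampled area shrinks.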

  13. Understanding the core-halo relation of quantum wave dark matter from 3D simulations.

    PubMed

    Schive, Hsi-Yu; Liao, Ming-Hsuan; Woo, Tak-Pong; Wong, Shing-Kwong; Chiueh, Tzihong; Broadhurst, Tom; Hwang, W-Y Pauchy

    2014-12-31

    We examine the nonlinear structure of gravitationally collapsed objects that form in our simulations of wavelike cold dark matter, described by the Schrödinger-Poisson (SP) equation with a particle mass ∼10^-22 eV. A distinct gravitationally self-bound solitonic core is found at the center of every halo, with a profile quite different from cores modeled in the warm or self-interacting dark matter scenarios. Furthermore, we show that each solitonic core is surrounded by an extended halo composed of large fluctuating dark matter granules which modulate the halo density on a scale comparable to the diameter of the solitonic core. The scaling symmetry of the SP equation and the uncertainty principle tightly relate the core mass to the halo specific energy, which, in the context of cosmological structure formation, leads to a simple scaling between core mass (Mc) and halo mass (Mh), Mc ∝ a^(-1/2) Mh^(1/3), where a is the cosmic scale factor. We verify this scaling relation by (i) examining the internal structure of a statistical sample of virialized halos that form in our 3D cosmological simulations and by (ii) merging multiple solitons to create individual virialized objects. Sufficient simulation resolution is achieved by adaptive mesh refinement and graphic processing unit acceleration. From this scaling relation, present dwarf satellite galaxies are predicted to have kiloparsec-sized cores and a minimum mass of ∼10^8 M⊙, capable of solving the small-scale controversies in the cold dark matter model. Moreover, galaxies of 2×10^12 M⊙ at z=8 should have massive solitonic cores of ∼2×10^9 M⊙ within ∼60 pc. Such cores can provide a favorable local environment for funneling the gas that leads to the prompt formation of early stellar spheroids and quasars.
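
    The quoted scaling can be turned into a one-line estimator by normalizing to the z = 8 value given in the abstract (Mc ≈ 2×10^9 M⊙ for Mh = 2×10^12 M⊙, with a = 1/(1+z)). This is a sketch under that normalization, using only numbers quoted in the abstract rather than the paper's fitted prefactor:

```python
def core_mass_msun(m_halo_msun, a, mc_ref=2e9, mh_ref=2e12, a_ref=1.0 / 9.0):
    """Solitonic core mass from the core-halo relation Mc ~ a^(-1/2) * Mh^(1/3),
    normalized to the z = 8 (a = 1/9) reference point quoted in the abstract."""
    return mc_ref * (a / a_ref) ** -0.5 * (m_halo_msun / mh_ref) ** (1.0 / 3.0)

# Present-day (a = 1) estimate for a dwarf-scale halo:
print(f"Mh = 1e10 Msun today -> Mc ~ {core_mass_msun(1e10, 1.0):.2e} Msun")
```

    Reassuringly, a present-day 10^10 M⊙ halo comes out at roughly 10^8 M⊙, consistent with the minimum dwarf-core mass the abstract itself quotes.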

  14. The Destabilization of Protected Soil Organic Carbon Following Experimental Drought at the Pore and Core scale

    NASA Astrophysics Data System (ADS)

    Smith, A. P.; Bond-Lamberty, B. P.; Tfaily, M. M.; Todd-Brown, K. E.; Bailey, V. L.

    2015-12-01

    The movement of water and solutes through the pore matrix controls the distribution and transformation of carbon (C) in soils. Thus, a change in hydrologic connectivity, such as increased saturation, disturbance or drought, may alter C mineralization and greenhouse gas (GHG) fluxes to the atmosphere. While these processes occur at the pore scale, they are often investigated at coarser scales. This project investigates pore- and core-scale soil C dynamics under varying hydrologic factors (simulated precipitation, groundwater-led saturation, and drought) to assess how climate-change induced shifts in hydrologic connectivity influence the destabilization of protected C in soils. Surface soil cores (0-15 cm depth) were collected from the Disney Wilderness Preserve, Florida, USA, where water dynamics, particularly water table rise and fall, appear to exert a strong control on the emissions of GHGs and the persistence of soil organic matter. We measured CO2 and CH4 from soils allowed to freely imbibe water from below to a steady state, starting from either field moist conditions or following experimental drought. Parallel treatments included the addition of similar quantities of water from above to simulate precipitation. Overall respiration increased in soil cores subjected to drought compared to field moist cores, independent of wetting type. Cumulative CH4 production was higher in drought-induced soils, especially in the soils subjected to experimental groundwater-led saturation. Overall, more C (from CO2 and CH4) was lost in drought-induced soils than in field moist cores. Our results indicate that future drought events could have profound effects on the destabilization of protected C, especially in groundwater-fed soils. Our next steps focus on how to accurately capture drought-induced C destabilization mechanisms in earth system models.

  15. Solubilization Behavior of Polyene Antibiotics in Nanomicellar System: Insights from Molecular Dynamics Simulation of the Amphotericin B and Nystatin Interactions with Polysorbate 80.

    PubMed

    Mobasheri, Meysam; Attar, Hossein; Rezayat Sorkhabadi, Seyed Mehdi; Khamesipour, Ali; Jaafari, Mahmoud Reza

    2015-12-24

Amphotericin B (AmB) and Nystatin (Nys) are the drugs of choice for treatment of systemic and superficial mycotic infections, respectively, with their full clinical potential unrealized due to the lack of high therapeutic index formulations for their solubilized delivery. In the present study, using a coarse-grained (CG) molecular dynamics (MD) simulation approach, we investigated the interaction of AmB and Nys with Polysorbate 80 (P80) to gain insight into the behavior of these polyene antibiotics (PAs) in nanomicellar solution and derive potential implications for their formulation development. While the encapsulation process was predominantly governed by hydrophobic forces, the dynamics, hydration, localization, orientation, and solvation of PAs in the micelle were largely controlled by hydrophilic interactions. Simulation results rationalized the experimentally observed capability of P80 in solubilizing PAs by indicating (i) the dominant kinetics of drugs encapsulation over self-association; (ii) significantly lower hydration of the drugs at encapsulated state compared with aggregated state; (iii) monomeric solubilization of the drugs; (iv) contribution of drug-micelle interactions to the solubilization; (v) suppressed diffusivity of the encapsulated drugs; (vi) high loading capacity of the micelle; and (vii) the structural robustness of the micelle against drug loading. Supported by the experimental data, our simulations determined the preferred location of PAs to be the core-shell interface at the relatively shallow depth of 75% of micelle radius. Deeper penetration of PAs was impeded by the synergistic effects of (i) limited diffusion of water; and (ii) perpendicular orientation of these drug molecules with respect to the micelle radius.
PAs were solvated almost exclusively in the aqueous poly-oxyethylene (POE) medium due to the distance-related lack of interaction with the core, explaining the documented insensitivity of Nys solubilization to drug-core compatibility in detergent micelles. Based on the obtained results, the dearth of water at interior sites of the micelle and the large lateral occupation space of PAs lead to shallow insertion, broad radial distribution, and lack of core interactions of the amphiphilic drugs. Hence, controlled promotion of micelle permeability and optimization of chain crowding in the palisade layer may help to achieve more efficient solubilization of the PAs.

  16. Simulation for Operational Readiness in a New Freestanding Emergency Department

    PubMed Central

    Kerner, Robert L.; Gallo, Kathleen; Cassara, Michael; D'Angelo, John; Egan, Anthony; Simmons, John Galbraith

    2016-01-01

Summary Statement: Simulation in multiple contexts over the course of a 10-week period served as a core learning strategy to orient experienced clinicians before opening a large new urban freestanding emergency department. To ensure technical and procedural skills of all team members, who would provide care without on-site recourse to specialty backup, we designed a comprehensive interprofessional curriculum to verify and regularize a wide range of competencies and best practices for all clinicians. Formulated under the rubric of systems integration, simulation activities aimed to instill a shared culture of patient safety among the entire cohort of 43 experienced emergency physicians, physician assistants, nurses, and patient technicians, most newly hired to the health system, who had never before worked together. Methods throughout the preoperational term included predominantly hands-on skills review, high-fidelity simulation, and simulation with standardized patients. We also used simulation during instruction in disaster preparedness, sexual assault forensics, and community outreach. Our program culminated with 2 days of in-situ simulation deployed in simultaneous and overlapping timeframes to challenge system response capabilities, resilience, and flexibility; this work revealed latent safety threats, lapses in communication, issues of intake procedure and patient flow, and the persistence of inapt or inapplicable mental models in responding to clinical emergencies. PMID:27607095

  17. The effect of core configuration on temperature coefficient of reactivity in IRR-1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bettan, M.; Silverman, I.; Shapira, M.

    1997-08-01

Experiments designed to measure the effect of coolant moderator temperature on core reactivity in an HEU swimming pool type reactor were performed. The moderator temperature coefficient of reactivity (α_ω) was obtained and found to differ between two core loadings. The measured α_ω of one core loading was −13 pcm/°C over the temperature range of 23–30 °C. This value of α_ω is comparable to the data published by the IAEA. The α_ω measured in the second core loading was found to be −8 pcm/°C over the same temperature range. Another phenomenon considered in this study is core behavior during reactivity insertion transients. The results were compared to a core simulation using the Dynamic Simulator for Nuclear Power Plants. It was found that in the second core loading, factors other than the moderator temperature influence the core reactivity more than expected. These effects proved to be extremely dependent on core configuration and may in certain core loadings render the reactor's reactivity coefficient undesirable.
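As a sketch of how such a coefficient is extracted from measurements, the snippet below fits reactivity against moderator temperature. The data points are synthetic, constructed only to reproduce the reported slope of about −13 pcm/°C; they are not values from the experiment.

```python
import numpy as np

# Synthetic reactivity measurements (pcm) vs. moderator temperature (°C),
# constructed only to reproduce the reported -13 pcm/°C over 23-30 °C.
T = np.array([23.0, 25.0, 27.0, 29.0, 30.0])
rho = -13.0 * (T - 23.0)   # reactivity relative to the 23 °C reference

# The temperature coefficient is the slope of reactivity vs. temperature.
alpha_w, _ = np.polyfit(T, rho, 1)
```

A least-squares slope over the measured range is one common way to report such a coefficient; the actual experimental procedure may differ.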

  18. COMPUTERIZED TRAINING OF CRYOSURGERY – A SYSTEM APPROACH

    PubMed Central

    Keelan, Robert; Yamakawa, Soji; Shimada, Kenji; Rabin, Yoed

    2014-01-01

The objective of the current study is to provide the foundation for a computerized training platform for cryosurgery. Consistent with clinical practice, the training process targets the correlation of the frozen region contour with the target region shape, using medical imaging and accepted criteria for clinical success. The current study focuses on system design considerations, including a bioheat transfer model, simulation techniques, an optimal cryoprobe layout strategy, and a simulation core framework. Two fundamentally different approaches were considered for the development of a cryosurgery simulator, based on a commercial finite-element (FE) code (ANSYS) and a proprietary finite-difference (FD) code. Results of this study demonstrate that the FE simulator is superior in terms of geometric modeling, while the FD simulator is superior in terms of runtime. Benchmarking results further indicate that the FD simulator is superior in terms of usage of memory resources, pre-processing, parallel processing, and post-processing. It is envisioned that future integration of a human-interface module and clinical data into the proposed computer framework will make computerized training of cryosurgery a practical reality. PMID:23995400
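As a minimal sketch of the finite-difference approach to bioheat transfer that such a simulator builds on, the snippet below advances a 1D explicit Pennes-type bioheat equation near a cryoprobe-like boundary. All constants are illustrative assumptions, and the latent heat of freezing is ignored; the paper's model is far more complete.

```python
import numpy as np

# 1D explicit finite-difference step of a Pennes-type bioheat equation:
# rho*c*dT/dt = k*d2T/dx2 + w_b*c_b*(T_a - T). Constants are illustrative
# soft-tissue values; phase change (latent heat) is not modeled here.
k = 0.5          # thermal conductivity, W/(m*K)
rho_c = 3.6e6    # volumetric heat capacity rho*c, J/(m^3*K)
w_cb = 4.0e4     # perfusion term w_b*c_b, W/(m^3*K)
T_a = 37.0       # arterial blood temperature, °C
dx, dt = 1e-3, 0.05

T = np.full(50, 37.0)
T[0] = -150.0    # boundary node held at a cryoprobe-like temperature

for _ in range(200):
    lap = (np.roll(T, -1) - 2.0 * T + np.roll(T, 1)) / dx**2
    T = T + dt / rho_c * (k * lap + w_cb * (T_a - T))
    T[0], T[-1] = -150.0, 37.0   # reimpose Dirichlet boundaries
```

The explicit scheme is stable only when dt*k/(rho_c*dx^2) stays below 1/2; the values above give about 0.007, well inside the limit.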

  19. Supercomputers ready for use as discovery machines for neuroscience.

    PubMed

    Helias, Moritz; Kunkel, Susanne; Masumoto, Gen; Igarashi, Jun; Eppler, Jochen Martin; Ishii, Shin; Fukai, Tomoki; Morrison, Abigail; Diesmann, Markus

    2012-01-01

NEST is a widely used tool to simulate biological spiking neural networks. Here we explain the improvements, guided by a mathematical model of memory consumption, that enable us to exploit for the first time the computational power of the K supercomputer for neuroscience. Multi-threaded components for wiring and simulation combine 8 cores per MPI process to achieve excellent scaling. K is capable of simulating networks corresponding to a brain area with 10^8 neurons and 10^12 synapses in the worst case scenario of random connectivity; for larger networks of the brain its hierarchical organization can be exploited to constrain the number of communicating computer nodes. We discuss the limits of the software technology, comparing maximum filling scaling plots for K and the JUGENE BG/P system. The usability of these machines for network simulations has become comparable to running simulations on a single PC. Turn-around times in the range of minutes even for the largest systems enable a quasi interactive working style and render simulations on this scale a practical tool for computational neuroscience.
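The memory-model idea can be illustrated with a toy per-process accounting: total memory as a base overhead plus per-neuron and per-locally-stored-synapse costs. The coefficients below are hypothetical placeholders, not NEST's actual values.

```python
# Toy per-MPI-process memory model in the spirit described: base overhead
# plus per-neuron and per-locally-stored-synapse costs. All coefficients
# are hypothetical placeholders, not NEST's actual values.
def memory_per_process(n_neurons, synapses_per_neuron, n_processes,
                       base=2.0e9, b_neuron=16.0, b_synapse=24.0):
    """Bytes required on one process; synapses are spread over processes."""
    local_synapses = n_neurons * synapses_per_neuron / n_processes
    return base + n_neurons * b_neuron + local_synapses * b_synapse

# Example: 1e8 neurons with 1e4 synapses each on 80,000 processes.
m = memory_per_process(1e8, 1e4, 80_000)
```

A model like this makes the bottleneck visible: any term that grows with the total neuron count on every process is a serial memory overhead, which is exactly the kind of cost such modeling helps identify and remove.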

  20. Supercomputers Ready for Use as Discovery Machines for Neuroscience

    PubMed Central

    Helias, Moritz; Kunkel, Susanne; Masumoto, Gen; Igarashi, Jun; Eppler, Jochen Martin; Ishii, Shin; Fukai, Tomoki; Morrison, Abigail; Diesmann, Markus

    2012-01-01

NEST is a widely used tool to simulate biological spiking neural networks. Here we explain the improvements, guided by a mathematical model of memory consumption, that enable us to exploit for the first time the computational power of the K supercomputer for neuroscience. Multi-threaded components for wiring and simulation combine 8 cores per MPI process to achieve excellent scaling. K is capable of simulating networks corresponding to a brain area with 10^8 neurons and 10^12 synapses in the worst case scenario of random connectivity; for larger networks of the brain its hierarchical organization can be exploited to constrain the number of communicating computer nodes. We discuss the limits of the software technology, comparing maximum filling scaling plots for K and the JUGENE BG/P system. The usability of these machines for network simulations has become comparable to running simulations on a single PC. Turn-around times in the range of minutes even for the largest systems enable a quasi interactive working style and render simulations on this scale a practical tool for computational neuroscience. PMID:23129998

  1. TREAT Modeling and Simulation Strategy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeHart, Mark David

    2015-09-01

This report summarizes the four-phase strategy used in developing modeling and simulation software for the Transient Reactor Test Facility. The four phases of this research and development task are (1) full core transient calculations with feedback, (2) experiment modeling, (3) full core plus experiment simulation, and (4) quality assurance. The document describes the four phases, the relationships between these research phases, and anticipated needs within each phase.

  2. Adsorption of hairy particles with mobile ligands: Molecular dynamics and density functional study

    NASA Astrophysics Data System (ADS)

    Borówko, M.; Sokołowski, S.; Staszewski, T.; Pizio, O.

    2018-01-01

We study models of hairy nanoparticles in contact with a hard wall. Each particle is built of a spherical core with a number of ligands attached to it, and each ligand is composed of several spherical, tangentially jointed segments. The number of segments is the same for all ligands. Particular models differ in the numbers of ligands and of segments per ligand, but the total number of segments is constant. Moreover, our model assumes that the ligands are tethered to the core in such a manner that they can "slide" over the core surface. Using molecular dynamics simulations, we investigate the differences in the structure of the system close to the wall. In order to characterize the distribution of the ligands around the core, we have calculated the end-to-end distances of the ligands and the lengths and orientations of the mass dipoles. Additionally, we also employed a density functional approach to obtain the density profiles. We have found that if the number of ligands is not too high, the proposed version of the theory is capable of predicting the structure of the system with reasonable accuracy.
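The end-to-end analysis mentioned above can be sketched as follows, using randomly generated placeholder chains rather than simulation trajectories; the end-to-end distance of a ligand is simply the distance between its first and last segment.

```python
import numpy as np

# Placeholder ligand conformations: random walks of unit-length bonds
# grafted at the origin (real trajectories would come from the MD run).
rng = np.random.default_rng(0)
n_ligands, n_segments, bond = 10, 6, 1.0
steps = rng.normal(size=(n_ligands, n_segments - 1, 3))
steps *= bond / np.linalg.norm(steps, axis=2, keepdims=True)
positions = np.concatenate(
    [np.zeros((n_ligands, 1, 3)), np.cumsum(steps, axis=1)], axis=1)

# End-to-end distance: first (grafted) to last segment of each ligand.
end_to_end = np.linalg.norm(positions[:, -1] - positions[:, 0], axis=1)
mean_r = end_to_end.mean()
```

With 5 bonds of unit length, no chain can exceed an end-to-end distance of 5, which gives a quick sanity check on the geometry.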

  3. Optimization and development of a core-in-cup tablet for modulated release of theophylline in simulated gastrointestinal fluids.

    PubMed

    Danckwerts, M P

    2000-07-01

A triple-layer core-in-cup tablet that can release theophylline in simulated gastrointestinal (GI) fluids at three distinct rates has been developed. The first layer is an immediate-release layer; the second layer is a sustained-release layer; and the last layer is a boost layer, designed to coincide with a higher nocturnal dose of theophylline. The study consisted of two stages. The first stage optimized the sustained-release layer of the tablet to release theophylline over a period of 12 hr. Results from this stage indicated that 30% w/w acacia gum was the best polymer and concentration to use when compressed to a hardness of 50 N/m2. The second stage of the study investigated the ability of the final triple-layer core-in-cup tablet to release theophylline at three different rates in simulated GI fluids. The triple-layer modulated core-in-cup tablet successfully released drug in simulated fluids at an initial rate of 40 mg/min, followed by rates of 0.4085 mg/min in simulated gastric fluid TS and 0.1860 mg/min in simulated intestinal fluid TS, and finally by a boosted rate of 0.6952 mg/min.
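Assuming illustrative phase durations (the abstract reports only the rates, not the duration of each phase), the cumulative release implied by the piecewise rates can be tallied as:

```python
# Cumulative theophylline release implied by the reported piecewise rates.
# The phase durations are illustrative assumptions, not from the paper.
phases = [
    ("immediate-release layer", 40.0, 2),       # mg/min, minutes
    ("simulated gastric fluid", 0.4085, 120),
    ("simulated intestinal fluid", 0.1860, 360),
    ("boost layer", 0.6952, 240),
]
cumulative = 0.0
for name, rate, minutes in phases:
    cumulative += rate * minutes
```

The same piecewise-constant accounting generalizes to any in-vitro release profile reported as rates per medium.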

  4. Pellet-clad mechanical interaction screening using VERA applied to Watts Bar Unit 1, Cycles 1–3

    DOE PAGES

    Stimpson, Shane; Powers, Jeffrey; Clarno, Kevin; ...

    2017-12-22

The Consortium for Advanced Simulation of Light Water Reactors (CASL) aims to provide high-fidelity multiphysics simulations of light water nuclear reactors. To accomplish this, CASL is developing the Virtual Environment for Reactor Applications (VERA), which is a suite of code packages for thermal hydraulics, neutron transport, fuel performance, and coolant chemistry. As VERA continues to grow and expand, there has been an increased focus on incorporating fuel performance analysis methods. One of the primary goals of CASL is to estimate local cladding failure probability through pellet-clad interaction, which consists of both pellet-clad mechanical interaction (PCMI) and stress corrosion cracking. Estimating clad failure is important to preventing release of fission products to the primary system, and accurate estimates could prove useful in establishing less conservative power ramp rates or when considering load-follow operations. While this capability is being pursued through several different approaches, the procedure presented in this article focuses on running independent fuel performance calculations with BISON using a file-based one-way coupling based on multicycle output data from high-fidelity, pin-resolved coupled neutron transport–thermal hydraulics simulations. This type of approach is consistent with traditional fuel performance analysis methods, which are typically separate from core simulation analyses. A more tightly coupled approach is currently being developed, which is the ultimate target application in CASL. Recent work simulating 12 cycles of Watts Bar Unit 1 with the VERA core simulator is capitalized upon, and quarter-core BISON results for parameters of interest to PCMI (maximum centerline fuel temperature, maximum clad hoop stress, and minimum gap size) are presented for Cycles 1–3. Based on these results, this capability demonstrates its value as a screening tool for gathering insight into PCMI, singling out limiting rods for further, more detailed analysis.
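A hedged sketch of the screening idea: rank rods by PCMI-relevant outputs (minimum gap size, maximum clad hoop stress) and flag the limiting ones for detailed analysis. The rod identifiers, field names, and thresholds below are illustrative placeholders, not values from the article.

```python
# Hypothetical screening step in the spirit of the article: flag rods whose
# pellet-clad gap has closed and whose clad hoop stress is high, so that
# only those rods get a detailed follow-up analysis. All values invented.
rods = [
    {"id": "G10-3", "max_hoop_stress_MPa": 85.0, "min_gap_um": 4.0},
    {"id": "B07-1", "max_hoop_stress_MPa": 120.0, "min_gap_um": 0.0},
    {"id": "D12-9", "max_hoop_stress_MPa": 60.0, "min_gap_um": 12.0},
]
limiting = [r["id"] for r in rods
            if r["min_gap_um"] <= 0.0 and r["max_hoop_stress_MPa"] > 100.0]
```

In a real workflow these records would be parsed from the fuel-performance code's output files, consistent with the file-based one-way coupling the article describes.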

  5. Pellet-clad mechanical interaction screening using VERA applied to Watts Bar Unit 1, Cycles 1–3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stimpson, Shane; Powers, Jeffrey; Clarno, Kevin

The Consortium for Advanced Simulation of Light Water Reactors (CASL) aims to provide high-fidelity multiphysics simulations of light water nuclear reactors. To accomplish this, CASL is developing the Virtual Environment for Reactor Applications (VERA), which is a suite of code packages for thermal hydraulics, neutron transport, fuel performance, and coolant chemistry. As VERA continues to grow and expand, there has been an increased focus on incorporating fuel performance analysis methods. One of the primary goals of CASL is to estimate local cladding failure probability through pellet-clad interaction, which consists of both pellet-clad mechanical interaction (PCMI) and stress corrosion cracking. Estimating clad failure is important to preventing release of fission products to the primary system, and accurate estimates could prove useful in establishing less conservative power ramp rates or when considering load-follow operations. While this capability is being pursued through several different approaches, the procedure presented in this article focuses on running independent fuel performance calculations with BISON using a file-based one-way coupling based on multicycle output data from high-fidelity, pin-resolved coupled neutron transport–thermal hydraulics simulations. This type of approach is consistent with traditional fuel performance analysis methods, which are typically separate from core simulation analyses. A more tightly coupled approach is currently being developed, which is the ultimate target application in CASL. Recent work simulating 12 cycles of Watts Bar Unit 1 with the VERA core simulator is capitalized upon, and quarter-core BISON results for parameters of interest to PCMI (maximum centerline fuel temperature, maximum clad hoop stress, and minimum gap size) are presented for Cycles 1–3. Based on these results, this capability demonstrates its value as a screening tool for gathering insight into PCMI, singling out limiting rods for further, more detailed analysis.

  6. Simulation of two-dimensional adjustable liquid gradient refractive index (L-GRIN) microlens

    NASA Astrophysics Data System (ADS)

    Le, Zichun; Wu, Xiang; Sun, Yunli; Du, Ying

    2017-07-01

In this paper, a two-dimensional liquid gradient refractive index (L-GRIN) microlens is designed that can adjust both the focusing direction and the focal spot of a light beam. The finite element method (FEM) is used to simulate the convection-diffusion process occurring between the core inlet flow and the cladding inlet flow, and ray tracing is used to characterize the focusing behavior, including the focal length and the output beam spot size. When the flow rates of the core and cladding fluids are held the same between the internal and external, left and right, and upper and lower inlets, the focal length varies from 313 μm to 53.3 μm as the flow rate of the liquids ranges from 500 pL/s to 10,000 pL/s. When the core flow rate exceeds the cladding inlet flow rate, the light beam focuses to a spot of tunable size. By adjusting the ratios of the cladding inlet flow rates, Qright/Qleft and Qup/Qdown, adjustable two-dimensional control of the focusing direction is obtained rather than one-dimensional focusing. In summary, by adjusting the flow rates of the core and cladding inlets, the focal length, output beam spot, and focusing direction of the input light beam can be manipulated. We expect this kind of flexible microlens to find applications in integrated optics and lab-on-a-chip systems.

  7. Three-dimensional investigations of the threading regime in a microfluidic flow-focusing channel

    NASA Astrophysics Data System (ADS)

    Gowda, Krishne; Brouzet, Christophe; Lefranc, Thibault; Soderberg, L. Daniel; Lundell, Fredrik

    2017-11-01

We study the flow dynamics of the threading regime in a microfluidic flow-focusing channel through 3D numerical simulations and experiments. Making strong filaments from cellulose nanofibrils (CNF) could potentially lead to new high-performance bio-based composites that compete with conventional glass fibre composites. CNF filaments can be obtained through hydrodynamic alignment of dispersed CNF using the concept of flow focusing. The aligned structure is locked by diffusion of ions, resulting in a dispersion-gel transition. Flow focusing typically refers to a microfluidic channel system where a core fluid is focused by two sheath fluids, thereby creating an extensional flow at the intersection. In this study, the threading regime corresponds to an extensional flow field in which the water sheath fluid stretches the dispersed CNF core fluid, leading to the formation of long threads. The experimental measurements are performed using optical coherence tomography (OCT), and the 3D numerical simulations use OpenFOAM. The primary focus is on the 3D characteristics of thread formation, such as the wetting length of the core fluid, the shape and aspect ratio of the thread, and the velocity field in the microfluidic channel.

  8. A Lattice Boltzmann Framework for the simulation of boiling hydrodynamics in BWRs.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jain, P. K.; Tentner, A.; Uddin, R.

    2008-01-01

Multiphase and multicomponent flows are ubiquitous in nature as well as in many man-made processes. A specific example is the Boiling Water Reactor (BWR) core, in which the coolant enters the core as liquid, undergoes a phase change as it traverses the core, and exits as a high-quality two-phase mixture. Two-phase flows in BWRs typically manifest a wide variety of geometrical patterns of the co-existing phases depending on the local system conditions. Modeling of such flows currently relies on empirical correlations (for example, in the simulation of bubble nucleation, bubble growth and coalescence, and inter-phase surface topology transitions) that hinder the accurate simulation of two-phase phenomena using Computational Fluid Dynamics (CFD) approaches. The Lattice Boltzmann Method (LBM) is in rapid development as a modeling tool to understand these macro-phenomena by coupling them with their underlying micro-dynamics. This paper presents a consistent LBM formulation for the simulation of a two-phase water-steam system. Results of initial model validation in a range of thermodynamic conditions typical for BWRs are also shown. The interface between the two coexisting phases is captured from the dynamics of the model itself, i.e., no interface tracking is needed. The model is based on the Peng-Robinson (P-R) non-ideal equation of state and can quantitatively approximate the phase-coexistence curve for water at temperatures ranging from 125 to 325 °C. Consequently, coexisting phases with large density ratios (up to ~1000) may be simulated. Two-phase models in the 200–300 °C temperature range are of significant importance to nuclear engineers, since most BWRs operate under similar thermodynamic conditions. Simulations of bubbles and droplets in a gravity-free environment of the corresponding coexisting phase, run until steady state is reached, satisfy Laplace's law at different temperatures and thus yield the surface tension of the fluid. By comparing the surface tension calculated with the LBM to the corresponding experimental values for water, the LBM lattice unit (lu) can be scaled to physical units. Using this approach, the spatial scaling of the LBM emerges from the model itself and is not imposed externally.
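A sketch of the Laplace-law calibration described above, assuming spherical (3D) droplets so that dP = 2*sigma/R; the radii and the "true" surface tension below are synthetic placeholders used only to show the fitting step.

```python
import numpy as np

# Laplace's law for a 3D droplet: dP = 2*sigma/R. Given interface pressure
# differences at several droplet radii (synthetic values here, not LBM
# output), the surface tension is the slope of dP against 2/R.
R = np.array([10.0, 15.0, 20.0, 30.0])   # droplet radii, lattice units
sigma_true = 0.01                        # assumed surface tension (lu)
dP = 2.0 * sigma_true / R                # ideal Laplace-law pressure jumps

sigma_fit = np.polyfit(2.0 / R, dP, 1)[0]
```

Dividing an experimental surface tension for water by the fitted lattice-unit value then fixes the physical length per lattice unit, which is the scaling step the abstract describes.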

  9. Learning theories and tools for the assessment of core nursing competencies in simulation: A theoretical review.

    PubMed

    Lavoie, Patrick; Michaud, Cécile; Bélisle, Marilou; Boyer, Louise; Gosselin, Émilie; Grondin, Myrian; Larue, Caroline; Lavoie, Stéphan; Pepin, Jacinthe

    2018-02-01

To identify the theories used to explain learning in simulation and to examine how these theories guided the assessment of learning outcomes related to core competencies in undergraduate nursing students. Nurse educators face the challenge of making explicit the outcomes of competency-based education, especially when competencies are conceptualized as holistic and context dependent. Theoretical review. Research papers (N = 182) published between 1999 and 2015 describing simulation in nursing education. Two members of the research team extracted data from the papers, including theories used to explain how simulation could engender learning and tools used to assess simulation outcomes. Contingency tables were created to examine the associations between theories, outcomes and tools. Some papers (N = 79) did not provide an explicit theory. The 103 remaining papers identified one or more learning or teaching theories; the most frequent were the National League for Nursing/Jeffries Simulation Framework, Kolb's theory of experiential learning, and Bandura's social cognitive theory and concept of self-efficacy. Students' perceptions of simulation, knowledge and self-confidence were the most frequently assessed outcomes, mainly via scales designed for the study in which they were used. Core competencies were mostly assessed with an observational approach. This review highlighted the fact that few studies examined the use of simulation in nursing education through learning theories and via assessment of core competencies. It also identified observational tools used to assess competencies in action, as holistic and context-dependent constructs. © 2017 John Wiley & Sons Ltd.

  10. The 3D Death of a Massive Star

    NASA Astrophysics Data System (ADS)

    Kohler, Susanna

    2015-07-01

What happens at the very end of a massive star's life, just before its core's collapse? A group led by Sean Couch (California Institute of Technology and Michigan State University) claims to have carried out the first three-dimensional simulations of these final few minutes — revealing new clues about the factors that can lead a massive star to explode in a catastrophic supernova at the end of its life. A Giant Collapses In dying massive stars, in-falling matter bounces off the collapsed core, creating a shock wave. If the shock wave loses too much energy as it expands into the star, it can stall out — but further energy input can revive it and result in a successful explosion of the star as a core-collapse supernova. In simulations of this process, however, theorists have trouble getting the stars to consistently explode: the shocks often stall out and fail to revive. Couch and his group suggest that one reason might be that these simulations usually start at core collapse assuming spherical symmetry of the progenitor star. Adding Turbulence Couch and his collaborators suspect that the key is in the final minutes just before the star collapses. Models that assume a spherically-symmetric star can't include the effects of convection as the final shell of silicon is burned around the core — and those effects might have a significant impact! To test this hypothesis, the group ran fully 3D simulations of the final three minutes of the life of a 15 solar-mass star, ending with core collapse, bounce, and shock-revival. The outcome was striking: the 3D modeling introduced powerful turbulent convection (with speeds of several hundred km/s!) in the last few minutes of silicon-shell burning. As a result, the initial structure and motions in the star just before core collapse were very different from those in core-collapse simulations that use spherically-symmetric initial conditions.
The turbulence was then further amplified during collapse and formation of the shock, generating pressure that aided the shock expansion — which should ultimately help the star explode! The group cautions that their simulations are still very idealized, but these results clearly indicate that the 3D structure of massive stellar cores has an important impact on the core-collapse supernova mechanism. Citation Sean M. Couch et al. 2015 ApJ 808 L21 doi:10.1088/2041-8205/808/1/L21

  11. Stellar encounters involving neutron stars in globular cluster cores

    NASA Technical Reports Server (NTRS)

    Davies, M. B.; Benz, W.; Hills, J. G.

    1992-01-01

Encounters between a 1.4 solar mass neutron star (NS) and a 0.8 solar mass red giant (RG), and between a 1.4 solar mass NS and a 0.8 solar mass main-sequence (MS) star, have been successfully simulated. In the case of encounters involving an RG, bound systems are produced when the separation at periastron passage R(MIN) is less than about 2.5 R(RG). At least 70 percent of these bound systems are composed of the RG core and the NS forming a binary engulfed in a common envelope of what remains of the former RG envelope. Once the envelope is ejected, a tight white dwarf-NS binary remains. For MS stars, encounters with NSs will produce bound systems when R(MIN) is less than about 3.5 R(MS). Some 50 percent of these systems will be single objects with the NS engulfed in a thick disk of gas almost as massive as the original MS star. The ultimate fate of such systems is unclear.

  12. Using large eddy simulations to reveal the size, strength, and phase of updraft and downdraft cores of an Arctic mixed-phase stratocumulus cloud

    DOE PAGES

    Roesler, Erika L.; Posselt, Derek J.; Rood, Richard B.

    2017-04-06

Three-dimensional large eddy simulations (LES) are used to analyze a springtime Arctic mixed-phase stratocumulus observed on 26 April 2008 during the Indirect and Semi-Direct Aerosol Campaign. Two subgrid-scale turbulence parameterizations are compared. The first scheme is a 1.5-order turbulent kinetic energy (1.5-TKE) parameterization that has been previously applied to boundary layer cloud simulations. The second scheme, Cloud Layers Unified By Binormals (CLUBB), provides higher-order turbulent closure with scale awareness. The simulations, in comparison with observations, show that both schemes produce the liquid profiles within measurement variability but underpredict ice water mass and overpredict ice number concentration. The simulation using CLUBB underpredicted liquid water path more than the simulation using the 1.5-TKE scheme, so the turbulent length scale and horizontal grid box size were increased to increase liquid water path and reduce dissipative energy. The LES simulations show this stratocumulus cloud to maintain a closed cellular structure, similar to observations. The updraft and downdraft cores self-organize into a larger meso-γ-scale convective pattern with the 1.5-TKE scheme, but the cores remain more isotropic with the CLUBB scheme. Additionally, the cores are often composed of liquid and ice instead of exclusively containing one or the other. Furthermore, these results provide insight into traditionally unresolved and unmeasurable aspects of an Arctic mixed-phase cloud. From this analysis, the cloud's updraft and downdraft cores appear smaller than those of other closed-cell stratocumulus, such as midlatitude stratocumulus and Arctic autumnal mixed-phase stratocumulus, due to weaker downdrafts and lower precipitation rates.

  13. Phase-Shifting Zernike Interferometer Wavefront Sensor

    NASA Technical Reports Server (NTRS)

    Wallace, J. Kent; Rao, Shanti; Jensen-Clemb, Rebecca M.; Serabyn, Gene

    2011-01-01

The canonical Zernike phase-contrast technique transforms a phase object in one plane into an intensity object in the conjugate plane. This is done by applying a static pi/2 phase shift to the central core (approx. lambda/D) of the PSF which is intermediate between the input and output planes. Here we present a new architecture for this sensor. First, the optical system is simple and all reflective. Second, the phase shift in the central core of the PSF is dynamic and of arbitrary size. This common-path, all-reflective design makes it minimally sensitive to vibration, polarization, and wavelength. We review the theory of operation, describe the optical system, summarize numerical simulations and sensitivities, and review results from a laboratory demonstration of this novel instrument.
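A numerical sketch of the phase-contrast principle (not the instrument itself): a small pupil-plane phase error, after a pi/2 shift of the roughly lambda/D core of the PSF, appears as an intensity signal in the reimaged pupil. The grid sizes, core radius, and test aberration below are arbitrary choices.

```python
import numpy as np

# Toy Zernike phase-contrast propagation on a 256x256 grid. The pupil has
# a 64-pixel radius, so one lambda/D in the focal plane is ~2 pixels.
n = 256
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
r = np.hypot(x, y)
pupil = (r < 64).astype(float)

def zernike_wfs(phase, shift=np.pi / 2, core_radius=2.0):
    """Pupil -> focal plane, phase-shift the PSF core, back to the pupil
    plane; return the resulting intensity (the phase-contrast image)."""
    field = pupil * np.exp(1j * phase)
    focal = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field)))
    focal = np.where(r < core_radius, focal * np.exp(1j * shift), focal)
    out = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(focal)))
    return np.abs(out) ** 2

# A weak ring-shaped aberration (0.1 rad) made visible as extra intensity.
phase = 0.1 * pupil * np.exp(-(((r - 30.0) / 10.0) ** 2))
signal = zernike_wfs(phase) - zernike_wfs(np.zeros_like(phase))
```

To linear order the differenced intensity is proportional to the phase error, which is why the ring shows up as a positive intensity excess at its radius.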

  14. Phase-Shifting Zernike Interferometer Wavefront Sensor

    NASA Technical Reports Server (NTRS)

    Wallace, J. Kent; Rao, Shanti; Jensen-Clem, Rebecca M.

    2011-01-01

    The canonical Zernike phase-contrast technique transforms a phase object in one plane into an intensity object in the conjugate plane. This is done by applying a static pi/2 phase shift to the central core (approx. lambda/diameter) of the PSF, which is intermediate between the input and output planes. Here we present a new architecture for this sensor. First, the optical system is simple and all-reflective; second, the phase shift in the central core of the PSF is dynamic and can be made arbitrarily large. This common-path, all-reflective design makes it minimally sensitive to vibration, polarization, and wavelength. We review the theory of operation, describe the optical system, summarize numerical simulations and sensitivities, and review results from a laboratory demonstration of this novel instrument.
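    The principle behind the Zernike phase-contrast sensor described in these two records can be illustrated with a minimal numerical sketch (not the flight instrument): shifting only the zero-order (mean) field component by pi/2 converts small phase variations into intensity variations that are approximately linear in the phase. Here the pupil mean stands in, crudely, for the central ~lambda/D core of the PSF; all quantities are illustrative.

```python
import cmath
import math

def zernike_intensity(phases, shift=math.pi / 2):
    """Toy 1-D Zernike phase contrast: split the field into its zero-order
    (mean) component and the residual, phase-shift only the zero order,
    recombine, and detect intensity."""
    field = [cmath.exp(1j * p) for p in phases]      # unit-amplitude phase object
    mean = sum(field) / len(field)                   # zero-order component
    shifted = [mean * cmath.exp(1j * shift) + (f - mean) for f in field]
    return [abs(e) ** 2 for e in shifted]            # detected intensity

# Small phase errors map (approximately) linearly to intensity: I ~ 1 + 2*phi
phases = [0.01 * math.sin(2 * math.pi * k / 32) for k in range(32)]
intensity = zernike_intensity(phases)
for p, i in zip(phases[:4], intensity[:4]):
    print(f"phi={p:+.4f}  I={i:.4f}  1+2*phi={1 + 2 * p:.4f}")
```

For small phases the field is e^(i*phi) ~ 1 + i*phi; after the pi/2 shift of the zero order the output is ~ i*(1 + phi), so the intensity (1 + phi)^2 ~ 1 + 2*phi, which is why an intensity camera reads phase directly.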

  15. Changes in soil hydraulic properties caused by construction of a simulated waste trench at the Idaho National Engineering Laboratory, Idaho

    USGS Publications Warehouse

    Shakofsky, S.M.

    1995-01-01

    In order to assess the effect of filled waste disposal trenches on transport-governing soil properties, comparisons were made between profiles of undisturbed soil and disturbed soil in a simulated waste trench. The changes in soil properties induced by the construction of a simulated waste trench were measured near the Radioactive Waste Management Complex at the Idaho National Engineering Laboratory (INEL) in the semi-arid southeast region of Idaho. The soil samples were collected, using a hydraulically driven sampler to minimize sample disruption, from both a simulated waste trench and an undisturbed area nearby. Results show that the undisturbed profile has distinct layers whose properties differ significantly, whereas the soil profile in the simulated waste trench is, by comparison, homogeneous. Porosity was increased in the disturbed cores, and, correspondingly, saturated hydraulic conductivities were on average three times higher. At higher soil-moisture contents (greater than 0.32), unsaturated hydraulic conductivities for the undisturbed cores were typically greater than those for the disturbed cores. At lower moisture contents, most of the disturbed cores had greater hydraulic conductivities. The observed differences in hydraulic conductivities are interpreted and discussed as changes in the soil pore geometry.

  16. Multidimensional neutrino-transport simulations of the core-collapse supernova central engine

    NASA Astrophysics Data System (ADS)

    O'Connor, Evan; Couch, Sean

    2017-01-01

    Core-collapse supernovae (CCSNe) mark the explosive death of a massive star. The explosion itself is triggered by the collapse of the iron core that forms near the end of a massive star's life. The core collapses to nuclear densities, where the stiff nuclear equation of state halts the collapse and leads to the formation of the supernova shock. In many cases, this shock will eventually propagate throughout the entire star and produce a bright optical display. However, the path from shock formation to explosion has proven difficult to recreate in simulations. Soon after the shock forms, its outward propagation stalls and must be revived in order for the CCSN to be successful. The leading theory for the mechanism that reenergizes the shock is the deposition of energy by neutrinos. In 1D simulations this mechanism fails. However, there is growing evidence that in 2D and 3D, hydrodynamic instabilities can assist the neutrino heating in reviving the shock. In this talk, I will present new multi-D neutrino-radiation-hydrodynamic simulations of CCSNe performed with the FLASH hydrodynamics package. I will discuss the efficacy of neutrino heating in our simulations and show the impact of the multi-D hydrodynamic instabilities.

  17. Climate Simulations with an Isentropic Finite Volume Dynamical Core

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Chih-Chieh; Rasch, Philip J.

    2012-04-15

    This paper discusses the impact of changing the vertical coordinate from a hybrid pressure to a hybrid-isentropic coordinate within the finite volume dynamical core of the Community Atmosphere Model (CAM). Results from a 20-year climate simulation using the new model coordinate configuration are compared to control simulations produced by the Eulerian spectral and FV dynamical cores of CAM, which both use a pressure-based (σ-p) coordinate. The same physical parameterization package is employed in all three dynamical cores. The isentropic modeling framework significantly alters the simulated climatology and has several desirable features. The revised model produces a better representation of heat transport processes in the atmosphere, leading to much improved atmospheric temperatures. We show that the isentropic model is very effective in reducing the long-standing cold temperature bias in the upper troposphere and lower stratosphere, a deficiency shared among most climate models. The warmer upper troposphere and stratosphere seen in the isentropic model reduces the global coverage of high clouds, which is in better agreement with observations. The isentropic model also shows improvements in the simulated wintertime mean sea-level pressure field in the northern hemisphere.

  18. Three-dimensional core-collapse supernova simulated using a 15 M⊙ progenitor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lentz, Eric J.; Bruenn, Stephen W.; Hix, W. Raphael

    We have performed ab initio neutrino radiation hydrodynamics simulations in three and two spatial dimensions (3D and 2D) of core-collapse supernovae from the same 15 M⊙ progenitor through 440 ms after core bounce. Both 3D and 2D models achieve explosions; however, the onset of explosion (shock revival) is delayed by ~100 ms in 3D relative to the 2D counterpart and the growth of the diagnostic explosion energy is slower. This is consistent with previously reported 3D simulations utilizing iron-core progenitors with dense mantles. In the ~100 ms before the onset of explosion, diagnostics of neutrino heating and turbulent kinetic energy favor earlier explosion in 2D. During the delay, the angular scale of convective plumes reaching the shock surface grows and explosion in 3D is ultimately led by a single, large-angle plume, giving the expanding shock a directional orientation not dissimilar from those imposed by axial symmetry in 2D simulations. Finally, we posit that shock revival and explosion in the 3D simulation may be delayed until sufficiently large plumes form, whereas such plumes form more rapidly in 2D, permitting earlier explosions.

  19. Three-dimensional core-collapse supernova simulated using a 15 M⊙ progenitor

    DOE PAGES

    Lentz, Eric J.; Bruenn, Stephen W.; Hix, W. Raphael; ...

    2015-07-10

    We have performed ab initio neutrino radiation hydrodynamics simulations in three and two spatial dimensions (3D and 2D) of core-collapse supernovae from the same 15 M⊙ progenitor through 440 ms after core bounce. Both 3D and 2D models achieve explosions; however, the onset of explosion (shock revival) is delayed by ~100 ms in 3D relative to the 2D counterpart and the growth of the diagnostic explosion energy is slower. This is consistent with previously reported 3D simulations utilizing iron-core progenitors with dense mantles. In the ~100 ms before the onset of explosion, diagnostics of neutrino heating and turbulent kinetic energy favor earlier explosion in 2D. During the delay, the angular scale of convective plumes reaching the shock surface grows and explosion in 3D is ultimately led by a single, large-angle plume, giving the expanding shock a directional orientation not dissimilar from those imposed by axial symmetry in 2D simulations. Finally, we posit that shock revival and explosion in the 3D simulation may be delayed until sufficiently large plumes form, whereas such plumes form more rapidly in 2D, permitting earlier explosions.

  20. Incorporating the Impacts of Small Scale Rock Heterogeneity into Models of Flow and Trapping in Target UK CO2 Storage Systems

    NASA Astrophysics Data System (ADS)

    Jackson, S. J.; Reynolds, C.; Krevor, S. C.

    2017-12-01

    Predictions of the flow behaviour and storage capacity of CO2 in subsurface reservoirs depend on accurate modelling of multiphase flow and trapping. A number of studies have shown that small scale rock heterogeneities have a significant impact on CO2 flow propagating to larger scales. The need to simulate flow in heterogeneous reservoir systems has led to the development of numerical upscaling techniques which are widely used in industry. Less well understood, however, is the best approach for incorporating laboratory characterisations of small scale heterogeneities into models. At small scales, heterogeneity in the capillary pressure characteristic function becomes significant. We present a digital rock workflow that combines core flood experiments with numerical simulations to characterise sub-core scale capillary pressure heterogeneities within rock cores from several target UK storage reservoirs - the Bunter, Captain and Ormskirk sandstone formations. Measured intrinsic properties (permeability, capillary pressure, relative permeability) and 3D saturation maps from steady-state core flood experiments were the primary inputs used to construct a 3D digital rock model in CMG IMEX. We used vertical end-point scaling to iteratively update the voxel-by-voxel capillary pressure curves from the average MICP curve, with each iteration more closely predicting the experimental saturations and pressure drops. Once characterised, the digital rock cores were used to predict equivalent flow functions, such as relative permeability and residual trapping, across the range of flow conditions estimated to prevail in the CO2 storage reservoirs. In the case of the Captain sandstone, rock cores were characterised across an entire 100 m vertical transect of the reservoir. This allowed analysis of the upscaled impact of small scale heterogeneity on flow and trapping.
Figure 1 shows the varying degree to which heterogeneity impacted flow depending on the capillary number in the Captain sandstone. At low capillary numbers, typical of regions where flow is dominated by buoyancy, fluid flow is impeded and trapping enhanced. At high capillary numbers, typical of the near wellbore environment, the fluid distributed homogeneously and the equivalent relative permeability was higher leading to improved injectivity.
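    The abstract's workflow assigns each voxel its own capillary pressure curve derived from a single measured average curve. The authors use iterative vertical end-point scaling; a common, simpler alternative that conveys the same idea is Leverett J-function scaling, which rescales a reference curve by the local permeability and porosity. The sketch below assumes that approach; the curve values and rock properties are hypothetical.

```python
import math

def leverett_scale(pc_ref, k_ref, phi_ref, k_vox, phi_vox):
    """Rescale a reference capillary pressure curve (list of (Sw, Pc) pairs)
    to a voxel with different permeability/porosity, assuming the Leverett
    J-function is spatially invariant: Pc scales as sqrt(phi / k)."""
    factor = math.sqrt((k_ref / k_vox) * (phi_vox / phi_ref))
    return [(sw, pc * factor) for sw, pc in pc_ref]

# Hypothetical reference MICP curve: (water saturation, Pc in kPa)
pc_ref = [(1.0, 2.0), (0.8, 3.5), (0.6, 6.0), (0.4, 12.0), (0.2, 30.0)]

# A voxel with 4x lower permeability gets roughly 2x higher capillary pressure
tight_voxel = leverett_scale(pc_ref, k_ref=100e-15, phi_ref=0.20,
                             k_vox=25e-15, phi_vox=0.18)
print(tight_voxel[0])
```

In a heterogeneous model, low-permeability voxels therefore present higher capillary entry pressures, which is what impedes buoyancy-driven flow and enhances trapping at low capillary number, as described for the Captain sandstone.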

  1. Terrestrial Planet Formation Around Close Binary Stars

    NASA Technical Reports Server (NTRS)

    Lissauer, Jack J.; Quintana, Elisa V.

    2003-01-01

    Most stars reside in multiple star systems; however, virtually all models of planetary growth have assumed an isolated single star. Numerical simulations of the collapse of molecular cloud cores to form binary stars suggest that disks will form within such systems. Observations indirectly suggest disk material around one or both components within young binary star systems. If planets form at the right places within such circumstellar disks, they can remain in stable orbits within the binary star systems for eons. We are simulating the late stages of growth of terrestrial planets around close binary stars, using a new, ultrafast, symplectic integrator that we have developed for this purpose. The sum of the masses of the two stars is one solar mass, and the initial disk of planetary embryos is the same as that used for simulating the late stages of terrestrial planet growth within our Solar System and in the Alpha Centauri wide binary star system. Giant planets are included in the simulations, as they are in most simulations of the late stages of terrestrial planet accumulation in our Solar System. When the stars travel on a circular orbit with semimajor axis of up to 0.1 AU about their mutual center of mass, the planetary embryos grow into a system of terrestrial planets that is statistically identical to those formed about single stars, but a larger semimajor axis and/or a significantly eccentric binary orbit can lead to significantly more dynamically hot terrestrial planet systems.
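    The authors' ultrafast symplectic integrator is specialized for binary-star systems and is not described in detail here, but the basic property that makes symplectic integrators attractive for eon-long orbital integrations can be shown with the simplest example: a kick-drift-kick leapfrog step for a test particle around a point mass, whose energy error stays bounded rather than drifting.

```python
import math

def leapfrog_orbit(pos, vel, gm, dt, steps):
    """Kick-drift-kick leapfrog for a test particle about a point mass GM.
    Being symplectic, it keeps the energy error bounded over long runs."""
    x, y = pos
    vx, vy = vel
    for _ in range(steps):
        r3 = (x * x + y * y) ** 1.5
        vx -= 0.5 * dt * gm * x / r3          # half kick
        vy -= 0.5 * dt * gm * y / r3
        x += dt * vx                          # drift
        y += dt * vy
        r3 = (x * x + y * y) ** 1.5
        vx -= 0.5 * dt * gm * x / r3          # half kick
        vy -= 0.5 * dt * gm * y / r3
    return (x, y), (vx, vy)

# Circular orbit at r=1 with GM=1 (so v=1, period 2*pi), integrated one period
p, v = leapfrog_orbit((1.0, 0.0), (0.0, 1.0), gm=1.0,
                      dt=0.01, steps=int(2 * math.pi / 0.01))
energy = 0.5 * (v[0] ** 2 + v[1] ** 2) - 1.0 / math.hypot(*p)
print(f"after one period: r={math.hypot(*p):.6f}, E={energy:.6f}")  # E stays near -0.5
```

A production integrator for a close binary would evolve the two stars plus embryos with mutual gravity and a much more careful splitting, but the bounded-energy-error behavior is the same reason the approach scales to the late stages of terrestrial planet growth.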

  2. Off-Center Collisions between Clusters of Galaxies

    NASA Astrophysics Data System (ADS)

    Ricker, P. M.

    1998-03-01

    We present numerical simulations of off-center collisions between galaxy clusters made using a new hydrodynamical code based on the piecewise-parabolic method (PPM) and an isolated multigrid potential solver. The current simulations follow only the intracluster gas. We have performed three high-resolution (256 × 128²) simulations of collisions between equal-mass clusters using a nonuniform grid with different values of the impact parameter (0, 5, and 10 times the cluster core radius). Using these simulations, we have studied the variation in equilibration time, luminosity enhancement during the collision, and structure of the merger remnant with varying impact parameter. We find that in off-center collisions the cluster cores (the inner regions where the pressure exceeds the ram pressure) behave quite differently from the clusters' outer regions. A strong, roughly ellipsoidal shock front, similar to that noted in previous simulations of head-on collisions, enables the cores to become bound to each other by dissipating their kinetic energy as heat in the surrounding gas. These cores survive well into the collision, dissipating their orbital angular momentum via spiral bow shocks. After the ellipsoidal shock has passed well outside the interaction region, the material left in its wake falls back onto the merger remnant formed through the inspiral of the cluster cores, creating a roughly spherical accretion shock. For less than one-half of a sound crossing time after the cores first interact, the total X-ray luminosity increases by a large factor; the magnitude of this increase depends sensitively on the size of the impact parameter. Observational evidence of the ongoing collision, in the form of bimodality and distortion in projected X-ray surface brightness and temperature maps, is present for one to two sound crossing times after the collision but only for special viewing angles. 
The remnant actually requires at least five crossing times to reach virial equilibrium. Since the sound crossing time can be as large as 1-2 Gyr, the equilibration time can thus be a substantial fraction of the age of the universe. The final merger remnant is very similar for impact parameters of 0 and 5 core radii. It possesses a roughly isothermal core with central density and temperature twice the initial values for the colliding clusters. Outside the core, the temperature drops as r^-1, and the density roughly as r^-3.8. The core radius shows a small increase due to shock heating during the merger. For an impact parameter of 10 core radii, the core of the remnant possesses a more flattened density profile with a steeper drop-off outside the core. In both off-center cases, the merger remnant rotates, but only for the 10 core-radius case does this appear to have an effect on the structure of the remnant.

  3. Accelerating 3D Hall MHD Magnetosphere Simulations with Graphics Processing Units

    NASA Astrophysics Data System (ADS)

    Bard, C.; Dorelli, J.

    2017-12-01

    The resolution required to simulate planetary magnetospheres with Hall magnetohydrodynamics results in program sizes approaching several hundred million grid cells. These would take years to run on a single computational core and require hundreds or thousands of computational cores to complete in a reasonable time. However, this requires access to the largest supercomputers. Graphics processing units (GPUs) provide a viable alternative: one GPU can do the work of roughly 100 cores, bringing Hall MHD simulations of Ganymede within reach of modest GPU clusters (~8 GPUs). We report our progress in developing a GPU-accelerated, three-dimensional Hall magnetohydrodynamic code and present Hall MHD simulation results for both Ganymede (run on 8 GPUs) and Mercury (56 GPUs). We benchmark our Ganymede simulation against previous results for the Galileo G8 flyby, namely that adding the Hall term to ideal MHD simulations changes the global convection pattern within the magnetosphere. Additionally, we present new results for the G1 flyby as well as initial results from Hall MHD simulations of Mercury and compare them with the corresponding ideal MHD runs.

  4. VERA Core Simulator Methodology for PWR Cycle Depletion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kochunas, Brendan; Collins, Benjamin S; Jabaay, Daniel

    2015-01-01

    This paper describes the methodology developed and implemented in MPACT for performing high-fidelity pressurized water reactor (PWR) multi-cycle core physics calculations. MPACT is being developed primarily for application within the Consortium for Advanced Simulation of Light Water Reactors (CASL) as one of the main components of the VERA Core Simulator, the others being COBRA-TF and ORIGEN. The methods summarized in this paper include a methodology for performing resonance self-shielding and computing macroscopic cross sections, 2-D/1-D transport, nuclide depletion, thermal-hydraulic feedback, and other supporting methods. These methods represent a minimal set needed to simulate high-fidelity models of a realistic nuclear reactor. Results demonstrating this are presented from the simulation of a realistic model of the first cycle of Watts Bar Unit 1. The simulation, which approximates the cycle operation, is observed to be within 50 ppm boron (ppmB) reactivity for all simulated points in the cycle and approximately 15 ppmB for a consistent statepoint. The verification and validation of the PWR cycle depletion capability in MPACT is the focus of two companion papers.
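    The nuclide depletion step named in this abstract amounts to solving dN/dt = A N over a burnup step, where A couples nuclides through decay and transmutation; production codes solve very large versions of this system via matrix exponential methods. The core idea can be sketched, under heavy simplification, for a two-nuclide decay chain whose answer can be checked against the analytic Bateman solution. The rates and step length below are made up for illustration.

```python
import math

def expm_2x2(a, t, terms=60):
    """Matrix exponential of a 2x2 matrix a*t via a plain Taylor series
    (adequate for this small, well-scaled example; real depletion solvers
    use far more robust methods such as CRAM)."""
    m = [[a[0][0] * t, a[0][1] * t], [a[1][0] * t, a[1][1] * t]]
    result = [[1.0, 0.0], [0.0, 1.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        term = [[sum(term[i][l] * m[l][j] for l in range(2)) / k
                 for j in range(2)] for i in range(2)]
        result = [[result[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return result

# Decay chain N1 -> N2 -> (removed): dN/dt = A N, starting from N = (1, 0)
lam1, lam2, t = 0.3, 0.1, 5.0
A = [[-lam1, 0.0], [lam1, -lam2]]
E = expm_2x2(A, t)
n1, n2 = E[0][0], E[1][0]

# Analytic Bateman solution for the same chain
n1_exact = math.exp(-lam1 * t)
n2_exact = lam1 / (lam1 - lam2) * (math.exp(-lam2 * t) - math.exp(-lam1 * t))
print(n1, n1_exact, n2, n2_exact)
```

In a coupled core simulator this solve is repeated per region per burnup step, with A rebuilt from the flux solution so that transmutation rates track the local spectrum.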

  5. Study of the L-mode tokamak plasma “shortfall” with local and global nonlinear gyrokinetic δf particle-in-cell simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chowdhury, J.; Wan, Weigang; Chen, Yang

    2014-11-15

    The δf particle-in-cell code GEM is used to study the transport "shortfall" problem of gyrokinetic simulations. In local simulations, the GEM results confirm the previously reported simulation results of the DIII-D [Holland et al., Phys. Plasmas 16, 052301 (2009)] and Alcator C-Mod [Howard et al., Nucl. Fusion 53, 123011 (2013)] tokamaks with the continuum code GYRO. Namely, for DIII-D the simulations closely predict the ion heat flux at the core while substantially underpredicting transport towards the edge, whereas for Alcator C-Mod the simulations show agreement with the experimental values of ion heat flux, at least within the range of experimental error. Global simulations are carried out for DIII-D L-mode plasmas to study the effect of edge turbulence on the outer-core ion heat transport. The edge turbulence enhances the outer-core ion heat transport through turbulence spreading. However, this edge turbulence spreading effect is not enough to explain the transport underprediction.

  6. NASA Center for Climate Simulation (NCCS) Presentation

    NASA Technical Reports Server (NTRS)

    Webster, William P.

    2012-01-01

    The NASA Center for Climate Simulation (NCCS) offers integrated supercomputing, visualization, and data interaction technologies to enhance NASA's weather and climate prediction capabilities. It serves hundreds of users at NASA Goddard Space Flight Center, as well as other NASA centers, laboratories, and universities across the US. Over the past year, NCCS has continued expanding its data-centric computing environment to meet the increasingly data-intensive challenges of climate science. We doubled our Discover supercomputer's peak performance to more than 800 teraflops by adding 7,680 Intel Xeon Sandy Bridge processor-cores and, most recently, 240 Intel Xeon Phi Many Integrated Core (MIC) co-processors. A supercomputing-class analysis system named Dali gives users rapid access to their data on Discover and high-performance software including the Ultra-scale Visualization Climate Data Analysis Tools (UV-CDAT), with interfaces from user desktops and a 17- by 6-foot visualization wall. NCCS also is exploring highly efficient climate data services and management with a new MapReduce/Hadoop cluster while augmenting its data distribution to the science community. Using NCCS resources, NASA completed its modeling contributions to the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report this summer as part of the ongoing Coupled Model Intercomparison Project Phase 5 (CMIP5). Ensembles of simulations run on Discover reached back to the year 1000 to test model accuracy and projected climate change through the year 2300 based on four different scenarios of greenhouse gases, aerosols, and land use. The data resulting from several thousand IPCC/CMIP5 simulations, as well as a variety of other simulation, reanalysis, and observation datasets, are available to scientists and decision makers through an enhanced NCCS Earth System Grid Federation Gateway. Worldwide downloads have totaled over 110 terabytes of data.

  7. Spectral element simulation of precession driven flows in the outer cores of spheroidal planets

    NASA Astrophysics Data System (ADS)

    Vormann, Jan; Hansen, Ulrich

    2015-04-01

    A common feature of the planets in the solar system is the precession of their rotation axes, driven by the gravitational influence of another body (e.g. the Earth's moon). In a precessing body, the rotation axis itself rotates around another axis, describing a cone during one precession period. Similar to the Coriolis and centrifugal forces that appear from the transformation to a rotating system, precession adds another term to the Navier-Stokes equation, the so-called Poincaré force. The main geophysical motivation for studying precession driven flows comes from their ability to act as magnetohydrodynamic dynamos in planets and moons. Precession may either act as the only driving force or operate together with other forces such as thermochemical convection. One of the challenges in direct numerical simulations of such flows lies in the spheroidal shape of the fluid volume, which should not be neglected since it contributes an additional forcing through pressure torques. Codes developed for the simulation of flows in spheres mostly use efficient global spectral algorithms that converge fast but lack geometric flexibility, while local methods are usable in more complex shapes but often lack high accuracy. We therefore adapted the spectral element code Nek5000, developed at Argonne National Laboratory, to the problem. The spectral element method is capable of solving for the flow in arbitrary geometries while still offering spectral convergence. We present first results for the simulation of a purely hydrodynamic, precession-driven flow in a spheroid with no-slip boundaries and an inner core. The driving by the Poincaré force is in a range where theoretical work predicts multiple solutions for a laminar flow. Our simulations indicate a transition to turbulent flow for Ekman numbers of 10^-6 and lower.

  8. Numerical simulations of targeted delivery of magnetic drug aerosols in the human upper and central respiratory system: a validation study.

    PubMed

    Kenjereš, Saša; Tjin, Jimmy Leroy

    2017-12-01

    In the present study, we investigate the concept of targeted delivery of pharmaceutical drug aerosols in an anatomically realistic geometry of the human upper and central respiratory system. The geometry considered extends from the mouth inlet to the eighth generation of the bronchial bifurcations and is identical to the phantom model used in the experimental studies of Banko et al. (2015 Exp. Fluids 56, 1-12 (doi:10.1007/s00348-015-1966-y)). In our computer simulations, we combine the transitional Reynolds-averaged Navier-Stokes (RANS) and wall-resolved large eddy simulation (LES) methods for the air phase with the Lagrangian approach for the particulate (aerosol) phase. We validated the simulations against recently obtained magnetic resonance velocimetry measurements of Banko et al., which provide a full three-dimensional mean velocity field for steady inspiratory conditions. Both approaches produced good agreement with experiments, and the transitional RANS approach was selected for the multiphase simulations of aerosol transport because of its significantly lower computational cost. The local and total deposition efficiencies are calculated for different classes of pharmaceutical particles (in the 0.1 μm ≤ d_p ≤ 10 μm range) without and with a paramagnetic core (shell-core particles). For the latter, an external magnetic field is imposed, with its source placed in the proximity of the first bronchial bifurcation. We demonstrated that both total and local deposition of aerosols at targeted locations can be significantly increased by an applied magnetization force. This finding confirms the potential for further advancement of the magnetic drug targeting technique for more efficient treatment of respiratory diseases.
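    The Lagrangian particle phase described in this record integrates each aerosol's equation of motion with drag plus the magnetization force. The study's RANS/LES flow fields cannot be reproduced here, but a heavily simplified one-particle sketch conveys the mechanism: Stokes drag relaxes the particle toward the local air velocity with time scale tau_p, and a constant magnetization acceleration deflects it toward a wall. All magnitudes below are hypothetical.

```python
def track_particle(fm_over_m, u_air=1.0, tau_p=1e-3, dt=1e-4, x_end=0.05):
    """Explicit-Euler tracking of a small particle carried by a uniform
    airflow along x, with Stokes drag (relaxation time tau_p, seconds) and
    a constant magnetization acceleration fm_over_m (m/s^2) pulling in -y.
    Returns the lateral displacement when the particle reaches x_end."""
    x = y = 0.0
    vx, vy = u_air, 0.0
    while x < x_end:
        ax = (u_air - vx) / tau_p              # drag toward local air velocity
        ay = (0.0 - vy) / tau_p - fm_over_m    # drag plus magnetic pull
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
    return y

drift_off = track_particle(fm_over_m=0.0)
drift_on = track_particle(fm_over_m=50.0)      # hypothetical field strength
print(f"lateral drift: no field {drift_off:.2e} m, with field {drift_on:.2e} m")
```

The terminal lateral velocity is roughly fm_over_m * tau_p, so deflection, and hence targeted deposition, grows with both the field gradient and the particle relaxation time; this is why the shell-core particles with a paramagnetic core respond while plain aerosols do not.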

  9. High-performance, scalable optical network-on-chip architectures

    NASA Astrophysics Data System (ADS)

    Tan, Xianfang

    The rapid advance of technology enables a large number of processing cores to be integrated into a single chip, called a Chip Multiprocessor (CMP) or a Multiprocessor System-on-Chip (MPSoC) design. The on-chip interconnection network, which is the communication infrastructure for these processing cores, plays a central role in a many-core system. With the continuously increasing complexity of many-core systems, traditional metallic wired electronic networks-on-chip (NoC) became a bottleneck because of the unbearable latency in data transmission and extremely high energy consumption on chip. Optical networks-on-chip (ONoC) have been proposed as a promising alternative paradigm to electronic NoC, with the benefits of optical signaling such as extremely high bandwidth, negligible latency, and low power consumption. This dissertation focuses on the design of high-performance and scalable ONoC architectures, and its contributions are highlighted as follows: 1. A micro-ring resonator (MRR)-based Generic Wavelength-routed Optical Router (GWOR) is proposed. A method for developing a GWOR of any size is introduced. GWOR is a scalable non-blocking ONoC architecture with a simple structure, low cost, and high power efficiency compared to existing ONoC designs. 2. To expand the bandwidth and improve the fault tolerance of the GWOR, a redundant GWOR architecture is designed by cascading different types of GWORs into one network. 3. A redundant GWOR built with MRR-based comb switches is proposed. Comb switches can expand the bandwidth while keeping the topology of the GWOR unchanged by replacing the general MRRs with comb switches. 4. A butterfly fat tree (BFT)-based hybrid optoelectronic NoC (HONoC) architecture is developed, in which GWORs are used for global communication and electronic routers are used for local communication. The proposed HONoC uses fewer electronic routers and links than its electronic BFT-based NoC counterpart. 
It takes advantage of the GWOR in optical communication and the BFT in non-uniform traffic communication and three-dimensional (3D) implementation. 5. A cycle-accurate NoC simulator is developed to evaluate the performance of the proposed HONoC architectures. It is a comprehensive platform that can simulate both electronic and optical NoCs. HONoC architectures of different sizes are evaluated in terms of throughput, latency, and energy dissipation. Simulation results confirm that HONoC achieves good network performance with lower power consumption.

  10. Research of the key technology in satellite communication networks

    NASA Astrophysics Data System (ADS)

    Zeng, Yuan

    2018-02-01

    According to predictions, wireless data traffic will increase by a factor of 500-1000 over the next 10 years. Not only will wireless data traffic grow exponentially, but the demand for diversified traffic will also increase. These higher requirements for future mobile wireless communication systems have opened a huge market for satellite communication systems. At the same time, space information networks have developed substantially with the deepening of human space exploration, the growth of space applications, and the expansion of military and civilian applications. The core of space information networks is satellite communication. The dissertation presents the communication system architecture, the communication protocol, the routing strategy, the switch scheduling algorithm, and the handoff strategy for the satellite communication system. We built a simulation platform for LEO satellite networks and simulated the key technologies using OPNET.

  11. Development of Residential Prototype Building Models and Analysis System for Large-Scale Energy Efficiency Studies Using EnergyPlus

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mendon, Vrushali V.; Taylor, Zachary T.

    Recent advances in residential building energy efficiency and codes have resulted in increased interest in detailed residential building energy models using the latest energy simulation software. One of the challenges of developing residential building models to characterize new residential building stock is allowing for flexibility to address variability in house features such as geometry, configuration, and HVAC systems. Researchers solved this problem in a novel way by creating a simulation structure capable of creating fully functional EnergyPlus batch runs using a completely scalable residential EnergyPlus template system. This system was used to create a set of thirty-two residential prototype building models covering single- and multifamily buildings, four common foundation types, and four common heating system types found in the United States (US). A weighting scheme with detailed state-wise and national weighting factors was designed to supplement the residential prototype models. The complete set is designed to represent a majority of new residential construction stock. The entire structure consists of a system of utility programs developed around the core EnergyPlus simulation engine to automate the creation and management of large-scale simulation studies with minimal human effort. The simulation structure and the residential prototype building models have been used for numerous large-scale studies, one of which is briefly discussed in this paper.

  12. Effect of a core-softened O-O interatomic interaction on the shock compression of fused silica

    NASA Astrophysics Data System (ADS)

    Izvekov, Sergei; Weingarten, N. Scott; Byrd, Edward F. C.

    2018-03-01

    Isotropic soft-core potentials have attracted considerable attention due to their ability to reproduce thermodynamic, dynamic, and structural anomalies observed in tetrahedral network-forming compounds such as water and silica. The aim of the present work is to assess the relevance of effective core-softening pertinent to the oxygen-oxygen interaction in silica to the thermodynamics and phase change mechanisms that occur in shock compressed fused silica. We utilize the MD simulation method with a recently published numerical interatomic potential derived from an ab initio MD simulation of liquid silica via force-matching. The resulting potential indicates an effective shoulder-like core-softening of the oxygen-oxygen repulsion. To better understand the role of the core-softening we analyze two derivative force-matching potentials in which the soft-core is replaced with a repulsive core either in the three-body potential term or in all the potential terms. Our analysis is further augmented by a comparison with several popular empirical models for silica that lack an explicit core-softening. The first outstanding feature of shock compressed glass reproduced with the soft-core models but not with the other models is that the shock compression values at pressures above 20 GPa are larger than those observed under hydrostatic compression (an anomalous shock Hugoniot densification). Our calculations indicate the occurrence of a phase transformation along the shock Hugoniot that we link to the O-O repulsion core-softening. The phase transformation is associated with a Hugoniot temperature reversal similar to that observed experimentally. With the soft-core models, the phase change is an isostructural transformation between amorphous polymorphs with no associated melting event. We further examine the nature of the structural transformation by comparing it to the Hugoniot calculations for stishovite. For stishovite, the Hugoniot exhibits temperature reversal and associated phase transformation, which is a transition to a disordered phase (liquid or dense amorphous), regardless of whether or not the model accounts for core-softening. The onset pressures of the transformation predicted by different models show a wide scatter within 60-110 GPa; for potentials without core-softening, the onset pressure is much higher than 110 GPa. Our results show that the core-softening of the interaction in the oxygen subsystem of silica is the key mechanism for the structural transformation and thermodynamics in shock compressed silica. These results may provide an important contribution to a unified picture of anomalous response to shock compression observed in other network-forming oxides and single-component systems with core-softening of effective interactions.
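    The "shoulder-like core-softening" described above can be illustrated with a toy isotropic pair potential: a steep inverse-power inner core plus a smoothed repulsive step, giving two competing repulsive length scales. This is a generic sketch with made-up parameters, not the force-matched silica potential from the paper:

    ```python
    import numpy as np

    def soft_core_potential(r, eps=1.0, sigma=1.0, h=0.5, r_s=1.4, k=10.0):
        """Toy core-softened pair potential: a steep (sigma/r)^12 inner core
        plus a smoothed repulsive step (shoulder) of height h ending near r_s.
        All parameters are illustrative, chosen only to show two length scales.
        """
        r = np.asarray(r, dtype=float)
        core = eps * (sigma / r) ** 12                     # steep inner core
        shoulder = 0.5 * h * (1.0 - np.tanh(k * (r - r_s)))  # soft outer step
        return core + shoulder

    # Under compression, particles can collapse from the outer (shoulder)
    # length scale to the inner core -- the mechanism linked above to the
    # anomalous Hugoniot densification.
    r = np.linspace(0.9, 2.5, 500)
    u = soft_core_potential(r)
    ```
    
    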

  13. A hybrid simulation approach for integrating safety behavior into construction planning: An earthmoving case study.

    PubMed

    Goh, Yang Miang; Askar Ali, Mohamed Jawad

    2016-08-01

    One of the key challenges in improving construction safety and health is the management of safety behavior. From a system point of view, workers work unsafely due to system-level issues such as poor safety culture, excessive production pressure, inadequate allocation of resources and time, and lack of training. These systemic issues should be eradicated or minimized during planning. However, there is a lack of detailed planning tools to help managers assess the impact of their upstream decisions on worker safety behavior. Even though simulation has been used in construction planning, the review conducted in this study showed that construction safety management research has not exploited the potential of simulation techniques. Thus, a hybrid simulation framework is proposed to facilitate the integration of safety management considerations into construction activity simulation. The hybrid framework consists of discrete event simulation (DES) at its core, but heterogeneous, interactive and intelligent (able to make decisions) agents replace traditional entities and resources. In addition, some of the cognitive processes and physiological aspects of agents are captured using a system dynamics (SD) approach. The combination of DES, agent-based simulation (ABS) and SD allows a more "natural" representation of the complex dynamics in construction activities. The proposed hybrid framework was demonstrated using a hypothetical case study. In addition, due to the lack of application of the factorial experiment approach in safety management simulation, the case study demonstrated sensitivity analysis and factorial experiments to guide future research. Copyright © 2015 Elsevier Ltd. All rights reserved.
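    The DES-core-with-agents idea can be sketched in a few lines: a discrete event loop (priority queue) drives truck trips, an agent decides each trip whether to rush (act unsafely) with a probability that grows with an SD-style "production pressure" stock. This is a minimal illustration of the hybrid pattern, not the paper's earthmoving model; all parameters are invented:

    ```python
    import heapq
    import random

    def run_earthmoving_sim(n_trips=50, deadline_per_trip=10.0,
                            base_unsafe_prob=0.05, seed=1):
        """Hybrid DES/agent/SD sketch. Events: trip start/end on a heapq.
        Agent: per-trip safe-vs-rush decision. SD: a 'pressure' stock that
        accumulates when the schedule slips and decays otherwise."""
        rng = random.Random(seed)
        events = [(0.0, 0, "start")]          # (time, trip index, event kind)
        pressure, unsafe_acts, makespan = 0.0, 0, 0.0
        while events:
            clock, trip, kind = heapq.heappop(events)
            if kind == "start":
                # SD-style stock update: behind-schedule time feeds pressure
                behind = max(0.0, clock - trip * deadline_per_trip)
                pressure = 0.9 * pressure + 0.3 * behind
                p_unsafe = min(1.0, base_unsafe_prob + 0.01 * pressure)
                rushing = rng.random() < p_unsafe
                unsafe_acts += rushing
                duration = 8.0 if rushing else 12.0   # rushing is faster
                heapq.heappush(events, (clock + duration, trip, "end"))
            else:                              # trip completed
                makespan = clock
                if trip + 1 < n_trips:
                    heapq.heappush(events, (clock, trip + 1, "start"))
        return unsafe_acts, makespan

    unsafe, makespan = run_earthmoving_sim()
    ```

    Varying `base_unsafe_prob` or the pressure gain over a grid is the kind of factorial experiment the study advocates.
    
    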

  14. Parallelization of a Monte Carlo particle transport simulation code

    NASA Astrophysics Data System (ADS)

    Hadjidoukas, P.; Bousis, C.; Emfietzoglou, D.

    2010-05-01

    We have developed a high-performance version of the Monte Carlo particle transport simulation code MC4. The original application code, developed in Visual Basic for Applications (VBA) for Microsoft Excel, was first rewritten in the C programming language to improve code portability. Several pseudo-random number generators have also been integrated and studied. The new MC4 version was then parallelized for shared- and distributed-memory multiprocessor systems using the Message Passing Interface. Two parallel pseudo-random number generator libraries (SPRNG and DCMT) have been seamlessly integrated. The performance speedup of parallel MC4 has been studied on a variety of parallel computing architectures, including an Intel Xeon server with 4 dual-core processors, a Sun cluster consisting of 16 nodes of 2 dual-core AMD Opteron processors, and a 200 dual-processor HP cluster. For large problem sizes, which are limited only by the physical memory of the multiprocessor server, the speedup results are almost linear on all systems. We have validated the parallel implementation against the serial VBA and C implementations using the same random number generator. Our experimental results on the transport and energy loss of electrons in a water medium show that the serial and parallel codes are equivalent in accuracy. The present improvements allow for the study of higher particle energies with the use of more accurate physical models, and improve statistics, as more particle tracks can be simulated in a shorter response time.
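    The key ingredient in parallelizing a Monte Carlo transport code is giving each worker a statistically independent random stream, which is the role SPRNG and DCMT play above. A minimal sketch of that pattern, using NumPy's `SeedSequence.spawn` for stream splitting and a trivial hit-or-miss integrand in place of particle transport (the workers run sequentially here, but each call could be an MPI rank or process):

    ```python
    import numpy as np

    def mc_worker(seed_seq, n_samples):
        """One worker's share of a Monte Carlo estimate (quarter-circle
        area), drawn from its own independent random stream."""
        rng = np.random.default_rng(seed_seq)
        x = rng.random(n_samples)
        y = rng.random(n_samples)
        return np.count_nonzero(x * x + y * y < 1.0)

    def parallel_mc(n_workers=8, n_per_worker=100_000, root_seed=2024):
        # spawn() yields statistically independent child streams, so
        # workers never correlate -- the property SPRNG/DCMT provide.
        children = np.random.SeedSequence(root_seed).spawn(n_workers)
        hits = sum(mc_worker(c, n_per_worker) for c in children)
        return 4.0 * hits / (n_workers * n_per_worker)

    estimate = parallel_mc()   # converges to pi as samples grow
    ```
    
    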

  15. Integration of Basic Knowledge Models for the Simulation of Cereal Foods Processing and Properties.

    PubMed

    Kristiawan, Magdalena; Kansou, Kamal; Valle, Guy Della

    Cereal processing (breadmaking, extrusion, pasting, etc.) covers a range of mechanisms that, despite their diversity, can often be reduced to a succession of two core phenomena: (1) the transition from a divided solid medium (the flour) to a continuous one through hydration, mechanical, biochemical, and thermal actions and (2) the expansion of a continuous matrix toward a porous structure as a result of the growth of bubble nuclei, either by yeast fermentation or by water vaporization after a sudden pressure drop. Modeling them is critical for the domain, but can be quite challenging to address with mechanistic approaches relying on partial differential equations. In this chapter we present alternative approaches through basic knowledge models (BKM) that integrate scientific and expert knowledge, and possess operational interest for domain specialists. Using these BKMs, simulations of two cereal food processes, extrusion and breadmaking, are provided by focusing on the two core phenomena. To support use by non-specialists, these BKMs are implemented as computer tools: a knowledge-based system developed for modeling the flour mixing operation, and Ludovic®, simulation software for twin-screw extrusion. They can be applied to a wide domain of compositions, provided that data on product rheological properties are available. Finally, the use of such systems can help food engineers design cereal food products and predict their texture properties.

  16. Integration of Weather Avoidance and Traffic Separation

    NASA Technical Reports Server (NTRS)

    Consiglio, Maria C.; Chamberlain, James P.; Wilson, Sara R.

    2011-01-01

    This paper describes a dynamic convective weather avoidance concept that compensates for weather motion uncertainties; the integration of this weather avoidance concept into a prototype 4-D trajectory-based Airborne Separation Assurance System (ASAS) application; and test results from a batch (non-piloted) simulation of the integrated application with high traffic densities and a dynamic convective weather model. The weather model can simulate a number of pseudo-random hazardous weather patterns, such as slow- or fast-moving cells and opening or closing weather gaps, and also allows for modeling of onboard weather radar limitations in range and azimuth. The weather avoidance concept employs nested "core" and "avoid" polygons around convective weather cells, and the simulations assess the effectiveness of various avoid polygon sizes in the presence of different weather patterns, using traffic scenarios representing approximately two times the current traffic density in en-route airspace. Results from the simulation experiment show that the weather avoidance concept is effective over a wide range of weather patterns and cell speeds. Avoid polygons that are only 2-3 miles larger than their core polygons are sufficient to account for weather uncertainties in almost all cases, and traffic separation performance does not appear to degrade with the addition of weather polygon avoidance. Additional "lessons learned" from the batch simulation study are discussed in the paper, along with insights for improving the weather avoidance concept.
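    The nested core/avoid test reduces, in the simplest case, to growing the core region by a buffer distance and checking trajectory clearance. A sketch with weather cells idealized as circles (the actual concept uses polygons; the 2.5-mile buffer is just a value in the 2-3 mile range found effective above):

    ```python
    import math

    def penetrates_avoid_region(track, cell_center, core_radius_nm, buffer_nm=2.5):
        """True if any 2-D trajectory point enters the 'avoid' region: the
        convective core grown by a buffer that absorbs weather-motion
        uncertainty. Cells are idealized as circles for this sketch."""
        avoid_radius = core_radius_nm + buffer_nm
        cx, cy = cell_center
        return any(math.hypot(x - cx, y - cy) < avoid_radius for x, y in track)
    ```

    A conflict-free reroute is then any candidate trajectory for which this check is false against every active cell.
    
    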

  17. Improving energy efficiency of Embedded DRAM Caches for High-end Computing Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mittal, Sparsh; Vetter, Jeffrey S; Li, Dong

    2014-01-01

    With increasing system core count, the size of the last level cache (LLC) has increased, and since SRAM consumes high leakage power, the power consumption of LLCs is becoming a significant fraction of processor power consumption. To address this, researchers have used embedded DRAM (eDRAM) LLCs, which consume low leakage power. However, eDRAM caches consume a significant amount of energy in the form of refresh energy. In this paper, we propose ESTEEM, an energy saving technique for embedded DRAM caches. ESTEEM uses dynamic cache reconfiguration to turn off a portion of the cache to save both leakage and refresh energy. It logically divides the cache sets into multiple modules and turns off a possibly different number of ways in each module. Microarchitectural simulations confirm that ESTEEM is effective in improving performance and energy efficiency and provides better results compared to a recently-proposed eDRAM cache energy saving technique, namely Refrint. For single- and dual-core simulations, the average saving in memory subsystem (LLC + main memory) energy on using ESTEEM is 25.8% and 32.6%, respectively, and the average weighted speedups are 1.09X and 1.22X, respectively. Additional experiments confirm that ESTEEM works well for a wide range of system parameters.
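    The energy argument behind way-level reconfiguration is simple to model: a powered-off way contributes neither leakage nor refresh energy over a reconfiguration interval. A toy cost model with illustrative numbers (not ESTEEM's calibrated values):

    ```python
    def edram_energy(active_ways, leak_power_per_way=0.1,
                     refresh_energy_per_way=0.02, interval=1.0):
        """Toy eDRAM module energy over one interval: leakage scales with
        powered-on ways and time; refresh scales with powered-on ways.
        Turning a way off removes both terms. Units are arbitrary."""
        leakage = active_ways * leak_power_per_way * interval
        refresh = active_ways * refresh_energy_per_way
        return leakage + refresh

    # As in ESTEEM, each logical module may keep a different way count on:
    per_module_ways = [16, 12, 8, 16]
    total = sum(edram_energy(w) for w in per_module_ways)
    ```

    The reconfiguration policy's job is then to pick per-module way counts that minimize this cost without hurting hit rate enough to raise main-memory energy.
    
    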

  18. Nitric Oxide PLIF Measurements in the Hypersonic Materials Environmental Test System (HYMETS)

    NASA Technical Reports Server (NTRS)

    Inman, Jennifer A.; Bathel, Brett F.; Johansen, Craig T.; Danehy, Paul M.; Jones, Stephen B.; Gragg, Jeffrey G.; Splinter, Scott C.

    2011-01-01

    A nonintrusive laser-based measurement system has been applied for the first time in the HYMETS (Hypersonic Materials Environmental Test System) 400 kW arc-heated wind tunnel at NASA Langley Research Center. Planar laser-induced fluorescence of naturally occurring nitric oxide (NO) has been used to obtain instantaneous flow visualization images, and to make both radial and axial velocity measurements. Results are presented at selected facility run conditions, including some in simulated Earth atmosphere (75% nitrogen, 20% oxygen, 5% argon) and others in simulated Martian atmosphere (71% carbon dioxide, 24% nitrogen, 5% argon), for bulk enthalpies ranging from 6.5 MJ/kg to 18.4 MJ/kg. Flow visualization images reveal the presence of large scale unsteady flow structures, and indicate nitric oxide fluorescence signal over more than 70% of the core flow for bulk enthalpies below about 11 MJ/kg, but over less than 10% of the core flow for bulk enthalpies above about 16 MJ/kg. Axial velocimetry was performed using molecular tagging velocimetry (MTV). Axial velocities of about 3 km/s were measured along the centerline. Radial velocimetry was performed by scanning the wavelength of the narrowband laser and analyzing the resulting Doppler shift. Radial velocities of 0.5 km/s were measured.
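    The Doppler-scan velocimetry described above recovers velocity from the measured shift of the NO absorption line, v = c·Δλ/λ₀. A one-line sketch (226 nm is used only as an illustrative rest wavelength near the NO excitation band):

    ```python
    def doppler_velocity(rest_wavelength_nm, shifted_wavelength_nm):
        """Line-of-sight velocity (m/s) from a Doppler-shifted line,
        v = c * (lambda_shifted - lambda_0) / lambda_0."""
        c = 2.998e8  # speed of light, m/s
        return c * (shifted_wavelength_nm - rest_wavelength_nm) / rest_wavelength_nm
    ```

    At 226 nm, the 0.5 km/s radial velocities reported correspond to shifts of only ~4e-4 nm, which is why a scanned narrowband laser is needed.
    
    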

  19. Forward Modeling of Oxygen Isotope Variability in Tropical Andean Ice Cores

    NASA Astrophysics Data System (ADS)

    Vuille, M. F.; Hurley, J. V.; Hardy, D. R.

    2016-12-01

    Ice core records from the tropical Andes serve as important archives of past tropical Pacific SST variability and changes in monsoon intensity upstream over the Amazon basin. Yet the interpretation of the oxygen isotopic signal in these ice cores remains controversial. Based on 10 years of continuous on-site glaciologic, meteorologic and isotopic measurements at the summit of the world's largest tropical ice cap, Quelccaya, in southern Peru, we developed a process-based physical forward model (proxy system model), capable of simulating intraseasonal, seasonal and interannual variability in delta-18O as observed in snow pits and short cores. Our results highlight the importance of taking into account post-depositional effects (sublimation and isotopic enrichment) to properly simulate the seasonal cycle. Intraseasonal variability is underestimated in our model unless the effects of cold air incursions, triggering significant monsoonal snowfall and more negative delta-18O values, are included. A number of sensitivity tests highlight the influence of changing boundary conditions on the final snow isotopic profile. Such tests also show that our model provides much more realistic data than applying direct model output of precipitation delta-18O from isotope-enabled climate models (SWING ensemble). The forward model was calibrated with and run under present-day conditions, but it can also be driven with past climate forcings to reconstruct paleo-monsoon variability and investigate the influence of changes in radiative forcings (solar, volcanic) on delta-18O variability in Andean snow. The model is transferable and may be used to render a paleoclimatic context at other ice core locations.
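    The post-depositional enrichment the model must capture is commonly treated as a Rayleigh-type process: as a snow layer loses mass to sublimation, the residual snow becomes isotopically heavier. A generic sketch of that relation (the fractionation factor here is illustrative, not a calibrated Quelccaya value, and this is not the authors' model code):

    ```python
    def sublimation_enrichment(delta0_permil, f_remaining, alpha=0.99):
        """Rayleigh-type enrichment of delta-18O in a snow layer that has
        sublimated down to fraction f_remaining of its mass:
        R = R0 * f**(alpha - 1), with alpha = R_vapor/R_snow < 1, so the
        residual snow is enriched (less negative delta). Illustrative only."""
        ratio = (delta0_permil / 1000.0 + 1.0) * f_remaining ** (alpha - 1.0)
        return (ratio - 1.0) * 1000.0
    ```

    For example, a layer starting at -20 per mil that loses half its mass ends up several per mil heavier, biasing the preserved seasonal cycle unless the forward model corrects for it.
    
    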

  20. Circumstellar Disks and Outflows in Turbulent Molecular Cloud Cores: Possible Formation Mechanism for Misaligned Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matsumoto, Tomoaki; Machida, Masahiro N.; Inutsuka, Shu-ichiro, E-mail: matsu@hosei.ac.jp

    2017-04-10

    We investigate the formation of circumstellar disks and outflows subsequent to the collapse of molecular cloud cores with magnetic fields and turbulence. Numerical simulations are performed using adaptive mesh refinement to follow the evolution up to ∼1000 years after the formation of a protostar. In the simulations, circumstellar disks are formed around the protostars; those in magnetized models are considerably smaller than those in nonmagnetized models, but their size increases with time. The models with stronger magnetic fields tend to produce smaller disks. During evolution in the magnetized models, the mass ratio of a disk to a protostar is approximately constant at ∼1%–10%. The circumstellar disks are aligned according to their angular momentum, and the outflows accelerate along the magnetic field on the 10–100 au scale; this produces a disk that is misaligned with the outflow. The outflows are classified into two types: a magnetocentrifugal wind and a spiral flow. In the latter, because of the geometry, the axis of rotation is misaligned with the magnetic field. The magnetic field has an internal structure in the cloud cores, which also causes misalignment between the outflows and the magnetic field on the scale of the cloud core. The distribution of the angular momentum vectors in a core also has a non-monotonic internal structure. This should create a time-dependent accretion of angular momenta onto the circumstellar disk. Therefore, the circumstellar disks are expected to change their orientation as well as their sizes over long-term evolution.

  1. Aliphatic peptides show similar self-assembly to amyloid core sequences, challenging the importance of aromatic interactions in amyloidosis.

    PubMed

    Lakshmanan, Anupama; Cheong, Daniel W; Accardo, Angelo; Di Fabrizio, Enzo; Riekel, Christian; Hauser, Charlotte A E

    2013-01-08

    The self-assembly of abnormally folded proteins into amyloid fibrils is a hallmark of many debilitating diseases, from Alzheimer's and Parkinson's diseases to prion-related disorders and type II diabetes. However, the fundamental mechanism of amyloid aggregation remains poorly understood. Core sequences of four to seven amino acids within natural amyloid proteins that form toxic fibrils have been used to study amyloidogenesis. We recently reported a class of systematically designed ultrasmall peptides that self-assemble in water into cross-β-type fibers. Here we compare the self-assembly of these peptides with natural core sequences. These include core segments from Alzheimer's amyloid-β, human amylin, and calcitonin. We analyzed the self-assembly process using circular dichroism, electron microscopy, X-ray diffraction, rheology, and molecular dynamics simulations. We found that the designed aliphatic peptides exhibited a similar self-assembly mechanism to several natural sequences, with formation of α-helical intermediates being a common feature. Interestingly, the self-assembly of a second core sequence from amyloid-β, containing the diphenylalanine motif, was distinctly different from all other examined sequences. The diphenylalanine-containing sequence formed β-sheet aggregates without going through the α-helical intermediate step, giving a unique fiber-diffraction pattern and simulation structure. Based on these results, we propose a simplified aliphatic model system to study amyloidosis. Our results provide vital insight into the nature of early intermediates formed and suggest that aromatic interactions are not as important in amyloid formation as previously postulated. This information is necessary for developing therapeutic drugs that inhibit and control amyloid formation.

  2. Intricacies of modern supercomputing illustrated with recent advances in simulations of strongly correlated electron systems

    NASA Astrophysics Data System (ADS)

    Schulthess, Thomas C.

    2013-03-01

    The continued thousand-fold improvement in sustained application performance per decade on modern supercomputers keeps opening new opportunities for scientific simulations. But supercomputers have become very complex machines, built with thousands or tens of thousands of complex nodes consisting of multiple CPU cores or, most recently, a combination of CPU and GPU processors. Efficient simulations on such high-end computing systems require tailored algorithms that optimally map numerical methods to particular architectures. These intricacies will be illustrated with simulations of strongly correlated electron systems, where the development of quantum cluster methods, Monte Carlo techniques, as well as their optimal implementation by means of algorithms with improved data locality and high arithmetic density have gone hand in hand with evolving computer architectures. The present work would not have been possible without continued access to computing resources at the National Center for Computational Science of Oak Ridge National Laboratory, which is funded by the Facilities Division of the Office of Advanced Scientific Computing Research, and the Swiss National Supercomputing Center (CSCS) that is funded by ETH Zurich.

  3. Tracking Blade Tip Vortices for Numerical Flow Simulations of Hovering Rotorcraft

    NASA Technical Reports Server (NTRS)

    Kao, David L.

    2016-01-01

    Blade tip vortices generated by a helicopter rotor blade are a major source of rotor noise and airframe vibration. This occurs when a vortex passes closely by, and interacts with, a rotor blade. The accurate prediction of Blade Vortex Interaction (BVI) continues to be a challenge for Computational Fluid Dynamics (CFD). Though considerable research has been devoted to BVI noise reduction and experimental techniques for measuring the blade tip vortices in a wind tunnel, there are only a handful of post-processing tools available for extracting vortex core lines from CFD simulation data. In order to calculate the vortex core radius, most of these tools require the user to manually select a vortex core to perform the calculation. Furthermore, none of them provide the capability to track the growth of a vortex core, which is a measure of how quickly the vortex diffuses over time. This paper introduces an automated approach for tracking the core growth of a blade tip vortex from CFD simulations of rotorcraft in hover. The proposed approach offers an effective method for the quantification and visualization of blade tip vortices in helicopter rotor wakes. Keywords: vortex core, feature extraction, CFD, numerical flow visualization
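    A natural baseline for the core-growth tracking described above is the viscous diffusion of a Lamb-Oseen vortex, whose core radius grows as r_c(t) = sqrt(r0² + 4ανt). A sketch of that reference curve (the numbers in the usage line are arbitrary, not from the paper's CFD cases):

    ```python
    import math

    def lamb_oseen_core_radius(t, r0, nu, alpha=1.25643):
        """Core radius of a diffusing Lamb-Oseen vortex,
        r_c(t) = sqrt(r0**2 + 4*alpha*nu*t), where alpha is the standard
        Lamb-Oseen peak-swirl constant and nu the (effective) viscosity.
        Tracked CFD core radii can be compared against this baseline to
        quantify numerical diffusion."""
        return math.sqrt(r0 * r0 + 4.0 * alpha * nu * t)

    # Illustrative growth curve for an initial core of 0.05 (arbitrary units):
    radii = [lamb_oseen_core_radius(t, 0.05, 1.5e-5) for t in (0.0, 10.0, 20.0)]
    ```
    
    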

  4. Fluid pressure responses for a Devil's Slide-like system: problem formulation and simulation

    USGS Publications Warehouse

    Thomas, Matthew A.; Loague, Keith; Voss, Clifford I.

    2015-01-01

    This study employs a hydrogeologic simulation approach to investigate subsurface fluid pressures for a landslide-prone section of the central California, USA, coast known as Devil's Slide. Understanding the relative changes in subsurface fluid pressures is important for systems, such as Devil's Slide, where slope creep can be interrupted by episodic slip events. Surface mapping, exploratory core, tunnel excavation records, and dip meter data were leveraged to conceptualize the parameter space for three-dimensional (3D) Devil's Slide-like simulations. Field observations (i.e. seepage meter, water retention, and infiltration experiments; well records; and piezometric data) and groundwater flow simulation (i.e. one-dimensional vertical, transient, and variably saturated) were used to design the boundary conditions for 3D Devil's Slide-like problems. Twenty-four simulations of steady-state saturated subsurface flow were conducted in a concept-development mode. Recharge, heterogeneity, and anisotropy are shown to increase fluid pressures for failure-prone locations by up to 18.1%, 4.5%, and 1.8%, respectively. Previous estimates of slope stability, driven by simple water balances, are significantly improved upon with the fluid pressures reported here. The results, for a Devil's Slide-like system, provide a foundation for future investigations.
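    The link from simulated pore pressure to slope stability can be illustrated with the classical infinite-slope factor of safety, in which higher pore pressure directly reduces effective normal stress. This is a textbook sketch with invented strength parameters, not the site-specific analysis:

    ```python
    import math

    def infinite_slope_fs(depth_m, slope_deg, pore_pressure_kpa,
                          cohesion_kpa=10.0, phi_deg=30.0, unit_weight=20.0):
        """Infinite-slope factor of safety:
        FS = [c + (gamma*z*cos^2(beta) - u) * tan(phi)]
             / [gamma*z*sin(beta)*cos(beta)]
        A rise in pore pressure u lowers FS. Parameters are illustrative,
        not Devil's Slide values (unit_weight in kN/m^3)."""
        beta = math.radians(slope_deg)
        phi = math.radians(phi_deg)
        normal = unit_weight * depth_m * math.cos(beta) ** 2 - pore_pressure_kpa
        driving = unit_weight * depth_m * math.sin(beta) * math.cos(beta)
        return (cohesion_kpa + normal * math.tan(phi)) / driving

    fs_dry = infinite_slope_fs(5.0, 30.0, pore_pressure_kpa=0.0)
    fs_wet = infinite_slope_fs(5.0, 30.0, pore_pressure_kpa=30.0)
    ```

    Feeding simulated pressure changes (like the 18.1% recharge effect above) into such a stability expression is what makes the hydrogeologic results actionable.
    
    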

  5. ANNarchy: a code generation approach to neural simulations on parallel hardware

    PubMed Central

    Vitay, Julien; Dinkelbach, Helge Ü.; Hamker, Fred H.

    2015-01-01

    Many modern neural simulators focus on the simulation of networks of spiking neurons on parallel hardware. Another important framework in computational neuroscience, rate-coded neural networks, is mostly difficult or impossible to implement using these simulators. We present here the ANNarchy (Artificial Neural Networks architect) neural simulator, which allows one to easily define and simulate rate-coded and spiking networks, as well as combinations of both. The interface in Python has been designed to be close to the PyNN interface, while the definition of neuron and synapse models can be specified using an equation-oriented mathematical description similar to the Brian neural simulator. This information is used to generate C++ code that will efficiently perform the simulation on the chosen parallel hardware (multi-core system or graphics processing unit). Several numerical methods are available to transform ordinary differential equations into efficient C++ code. We compare the parallel performance of the simulator to existing solutions. PMID:26283957
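    The kind of inner loop such a simulator generates from an equation-oriented description can be sketched directly: a rate-coded network τ·dr/dt = -r + f(W·r + I), integrated with explicit Euler. This is a minimal hand-written illustration of the semantics, not ANNarchy's API or its generated C++:

    ```python
    import numpy as np

    def simulate_rate_network(W, inputs, tau=10.0, dt=0.1, steps=2000):
        """Euler integration of a rate-coded network:
        tau * dr/dt = -r + max(W @ r + inputs, 0)
        (rectified-linear transfer function). W is the weight matrix,
        inputs the constant external drive."""
        r = np.zeros(len(inputs))
        for _ in range(steps):
            drive = W @ r + inputs
            r += (dt / tau) * (-r + np.maximum(drive, 0.0))
        return r

    # Two mutually inhibiting units: the more strongly driven unit wins.
    W = np.array([[0.0, -1.0], [-1.0, 0.0]])
    rates = simulate_rate_network(W, np.array([1.0, 0.5]))
    ```
    
    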

  6. Using Discrete Event Simulation to Model Attacker Interactions with Cyber and Physical Security Systems

    DOE PAGES

    Perkins, Casey; Muller, George

    2015-10-08

    The number of connections between physical and cyber security systems is rapidly increasing due to centralized control from automated and remotely connected means. As the number of interfaces between systems continues to grow, the interactions and interdependencies between them cannot be ignored. Historically, physical and cyber vulnerability assessments have been performed independently. This independent evaluation omits important aspects of the integrated system, where the impacts resulting from malicious or opportunistic attacks are not easily known or understood. Here, we describe a discrete event simulation model that uses information about integrated physical and cyber security systems, attacker characteristics and simple response rules to identify key safeguards that limit an attacker's likelihood of success. Key features of the proposed model include comprehensive data generation to support a variety of sophisticated analyses, and full parameterization of safeguard performance characteristics and attacker behaviours to evaluate a range of scenarios. Lastly, we also describe the core data requirements and the network of networks that serves as the underlying simulation structure.

  7. Using Discrete Event Simulation to Model Attacker Interactions with Cyber and Physical Security Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perkins, Casey; Muller, George

    The number of connections between physical and cyber security systems is rapidly increasing due to centralized control from automated and remotely connected means. As the number of interfaces between systems continues to grow, the interactions and interdependencies between them cannot be ignored. Historically, physical and cyber vulnerability assessments have been performed independently. This independent evaluation omits important aspects of the integrated system, where the impacts resulting from malicious or opportunistic attacks are not easily known or understood. Here, we describe a discrete event simulation model that uses information about integrated physical and cyber security systems, attacker characteristics and simple response rules to identify key safeguards that limit an attacker's likelihood of success. Key features of the proposed model include comprehensive data generation to support a variety of sophisticated analyses, and full parameterization of safeguard performance characteristics and attacker behaviours to evaluate a range of scenarios. Lastly, we also describe the core data requirements and the network of networks that serves as the underlying simulation structure.
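    The core estimate such a model produces, an attacker's likelihood of success against layered safeguards, can be sketched as a Monte Carlo over sequential safeguard checks. This is a much-simplified stand-in for the full discrete event model; the safeguard names and stop probabilities are invented:

    ```python
    import random

    def attack_success_likelihood(safeguards, n_runs=10_000, seed=7):
        """Estimate the probability an attacker passes every safeguard in
        sequence, where each safeguard independently stops the attack
        with probability p_stop. A Monte Carlo toy, not the paper's
        network-of-networks simulation."""
        rng = random.Random(seed)
        successes = 0
        for _ in range(n_runs):
            if all(rng.random() > p_stop for _, p_stop in safeguards):
                successes += 1
        return successes / n_runs

    # Hypothetical mixed physical/cyber layers:
    layers = [("fence sensor", 0.4), ("badge reader", 0.6), ("network IDS", 0.5)]
    p_success = attack_success_likelihood(layers)  # analytic: 0.6*0.4*0.5 = 0.12
    ```

    Sweeping the `p_stop` values is the parameterization of safeguard performance the abstract refers to: it ranks which safeguard improvements most reduce the attacker's success likelihood.
    
    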

  8. An efficient spectral method for the simulation of dynamos in Cartesian geometry and its implementation on massively parallel computers

    NASA Astrophysics Data System (ADS)

    Stellmach, Stephan; Hansen, Ulrich

    2008-05-01

    Numerical simulations of the process of convection and magnetic field generation in planetary cores still fail to reach geophysically realistic control parameter values. Future progress in this field depends crucially on efficient numerical algorithms which are able to take advantage of the newest generation of parallel computers. Desirable features of simulation algorithms include (1) spectral accuracy, (2) an operation count per time step that is small and roughly proportional to the number of grid points, (3) memory requirements that scale linearly with resolution, (4) an implicit treatment of all linear terms including the Coriolis force, (5) the ability to treat all kinds of common boundary conditions, and (6) reasonable efficiency on massively parallel machines with tens of thousands of processors. So far, algorithms for fully self-consistent dynamo simulations in spherical shells do not achieve all these criteria simultaneously, resulting in strong restrictions on the possible resolutions. In this paper, we demonstrate that local dynamo models in which the process of convection and magnetic field generation is only simulated for a small part of a planetary core in Cartesian geometry can achieve the above goal. We propose an algorithm that fulfills the first five of the above criteria and demonstrate that a model implementation of our method on an IBM Blue Gene/L system scales impressively well for up to O(10^4) processors. This allows for numerical simulations at rather extreme parameter values.
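    Criteria (1) and (2) are exactly what FFT-based spectral methods deliver in periodic Cartesian geometry: derivatives are exact for resolved modes, at O(N log N) cost per transform. A 1-D sketch of the idea (the full algorithm is 3-D and coupled to implicit time stepping, which this does not attempt):

    ```python
    import numpy as np

    def spectral_derivative(f, length=2.0 * np.pi):
        """Derivative of a periodic field via FFT: multiply each Fourier
        mode by i*k, then transform back. Spectrally accurate for smooth
        fields; cost is O(N log N) per transform."""
        n = len(f)
        k = 2.0j * np.pi * np.fft.fftfreq(n, d=length / n)  # i*k for each mode
        return np.real(np.fft.ifft(k * np.fft.fft(f)))

    x = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
    df = spectral_derivative(np.sin(x))   # should match cos(x) to near machine precision
    ```
    
    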

  9. Accelerating cardiac bidomain simulations using graphics processing units.

    PubMed

    Neic, A; Liebmann, M; Hoetzl, E; Mitchell, L; Vigmond, E J; Haase, G; Plank, G

    2012-08-01

    Anatomically realistic and biophysically detailed multiscale computer models of the heart are playing an increasingly important role in advancing our understanding of integrated cardiac function in health and disease. Such detailed simulations, however, are computationally vastly demanding, which is a limiting factor for a wider adoption of in-silico modeling. While current trends in high-performance computing (HPC) hardware promise to alleviate this problem, exploiting the potential of such architectures remains challenging since strongly scalable algorithms are necessitated to reduce execution times. Alternatively, acceleration technologies such as graphics processing units (GPUs) are being considered. While the potential of GPUs has been demonstrated in various applications, benefits in the context of bidomain simulations where large sparse linear systems have to be solved in parallel with advanced numerical techniques are less clear. In this study, the feasibility of multi-GPU bidomain simulations is demonstrated by running strong scalability benchmarks using a state-of-the-art model of rabbit ventricles. The model is spatially discretized using the finite element method (FEM) on fully unstructured grids. The GPU code is directly derived from a large pre-existing code, the Cardiac Arrhythmia Research Package (CARP), with very minor perturbation of the code base. Overall, bidomain simulations were sped up by a factor of 11.8 to 16.3 in benchmarks running on 6-20 GPUs compared to the same number of CPU cores. To match the fastest GPU simulation which engaged 20 GPUs, 476 CPU cores were required on a national supercomputing facility.

  10. Accelerating Cardiac Bidomain Simulations Using Graphics Processing Units

    PubMed Central

    Neic, Aurel; Liebmann, Manfred; Hoetzl, Elena; Mitchell, Lawrence; Vigmond, Edward J.; Haase, Gundolf

    2013-01-01

    Anatomically realistic and biophysically detailed multiscale computer models of the heart are playing an increasingly important role in advancing our understanding of integrated cardiac function in health and disease. Such detailed simulations, however, are computationally vastly demanding, which is a limiting factor for a wider adoption of in-silico modeling. While current trends in high-performance computing (HPC) hardware promise to alleviate this problem, exploiting the potential of such architectures remains challenging since strongly scalable algorithms are necessitated to reduce execution times. Alternatively, acceleration technologies such as graphics processing units (GPUs) are being considered. While the potential of GPUs has been demonstrated in various applications, benefits in the context of bidomain simulations where large sparse linear systems have to be solved in parallel with advanced numerical techniques are less clear. In this study, the feasibility of multi-GPU bidomain simulations is demonstrated by running strong scalability benchmarks using a state-of-the-art model of rabbit ventricles. The model is spatially discretized using the finite element method (FEM) on fully unstructured grids. The GPU code is directly derived from a large pre-existing code, the Cardiac Arrhythmia Research Package (CARP), with very minor perturbation of the code base. Overall, bidomain simulations were sped up by a factor of 11.8 to 16.3 in benchmarks running on 6–20 GPUs compared to the same number of CPU cores. To match the fastest GPU simulation which engaged 20 GPUs, 476 CPU cores were required on a national supercomputing facility. PMID:22692867

  11. Proton transfer mediated by the vibronic coupling in oxygen core ionized states of glyoxalmonoxime studied by infrared-X-ray pump-probe spectroscopy.

    PubMed

    Felicíssimo, V C; Guimarães, F F; Cesar, A; Gel'mukhanov, F; Agren, H

    2006-11-30

    The theory of IR-X-ray pump-probe spectroscopy beyond the Born-Oppenheimer approximation is developed and applied to the study of the dynamics of intramolecular proton transfer in glyoxalmonoxime leading to the formation of the tautomer 2-nitrosoethenol. Due to the IR pump pulses the molecule gains sufficient energy to promote a proton to a weakly bound well. A femtosecond X-ray pulse snapshots the wave packet route and, hence, the dynamics of the proton transfer. The glyoxalmonoxime molecule contains two chemically nonequivalent oxygen atoms that possess distinct roles in the hydrogen bond, a hydrogen donor and an acceptor. Core ionizations of these form two intersecting core-ionized states, the vibronic coupling between which along the OH stretching mode partially delocalizes the core hole, resulting in a hopping of the core hole from one site to another. This, in turn, affects the dynamics of the proton transfer in the core-ionized state. The quantum dynamical simulations of X-ray photoelectron spectra of glyoxalmonoxime driven by strong IR pulses demonstrate the general applicability of the technique for studies of intramolecular proton transfer in systems with vibronic coupling.

  12. A Numerical Study on the Effect of Facesheet-Core Disbonds on the Buckling Load of Curved Honeycomb Sandwich Panels

    NASA Technical Reports Server (NTRS)

    Pineda, Evan J.; Myers, David E.; Bednarcyk, Brett A.; Krivanek, Thomas M.

    2015-01-01

    A numerical study on the effect of facesheet-core disbonds on the post-buckling response of curved honeycomb sandwich panels is presented herein. This work was conducted as part of the development of a damage tolerance approach for the next-generation Space Launch System heavy lift vehicle payload fairing. As such, the study utilized full-scale fairing barrel segments as the structure of interest. The panels were composed of carbon fiber reinforced polymer facesheets and aluminum honeycomb core. The panels were analyzed numerically using the finite element method. Facesheet and core nodes in a predetermined circular region were detached to simulate a disbond induced via low-speed impact between the outer mold line facesheet and honeycomb core. Surface-to-surface contact in the disbonded region was invoked to prevent interpenetration of the facesheet and core elements. The diameter of this disbonded region was varied and the effect of the size of the disbond on the post-buckling response was observed. A significant change in the slope of the edge load-deflection response was used to determine the onset of global buckling and corresponding buckling load.
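    The buckling-onset criterion described above (a significant change in the slope of the edge load-deflection response) can be sketched numerically; the synthetic bilinear data and the 50% slope-drop threshold below are illustrative assumptions, not values from the study.

```python
# Detect global buckling onset as the first large drop in the incremental
# slope (stiffness) of an edge load-deflection curve. Synthetic bilinear
# data; the 50% slope-drop threshold is an illustrative assumption.

def buckling_onset(deflection, load, drop_frac=0.5):
    """Return the index where the incremental slope first falls below
    drop_frac times the initial slope, or None if it never does."""
    slopes = [(load[i + 1] - load[i]) / (deflection[i + 1] - deflection[i])
              for i in range(len(load) - 1)]
    s0 = slopes[0]
    for i, s in enumerate(slopes):
        if s < drop_frac * s0:
            return i
    return None

# Synthetic pre-/post-buckling response: stiffness 100, then 10.
defl = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
load = [0.0, 100.0, 200.0, 300.0, 310.0, 320.0]
idx = buckling_onset(defl, load)
print("buckling onset near deflection", defl[idx], "at load", load[idx])
```

    On measured or simulated curves the same idea is usually applied to smoothed finite differences, since raw increments are noisy.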

  13. Enlightened Multiscale Simulation of Biochemical Networks. Core Theory, Validating Experiments, and Implementation in Open Software

    DTIC Science & Technology

    2006-10-01

    organisms that can either be in the lysogenic (latent) or lytic (active) state. If following its infection of E. coli, the λ-phage virus enters the ... and unfolded proteins (b) in the heat shock response system ... Robust stability of the model of heat shock in E. coli ... stochastic reachability analysis, all in the context of two biologically motivated and functionally important systems: the heat shock response in E. coli and

  14. Restricted active space calculations of L-edge X-ray absorption spectra: from molecular orbitals to multiplet states.

    PubMed

    Pinjari, Rahul V; Delcey, Mickaël G; Guo, Meiyuan; Odelius, Michael; Lundberg, Marcus

    2014-09-28

    The metal L-edge (2p → 3d) X-ray absorption spectra are affected by a number of different interactions: electron-electron repulsion, spin-orbit coupling, and charge transfer between metal and ligands, which makes the simulation of spectra challenging. The core restricted active space (RAS) method is an accurate and flexible approach that can be used to calculate X-ray spectra of a wide range of medium-sized systems without any symmetry constraints. Here, the applicability of the method is tested in detail by simulating three ferric (3d⁵) model systems with well-known electronic structure, viz., atomic Fe³⁺, high-spin [FeCl₆]³⁻ with ligand donor bonding, and low-spin [Fe(CN)₆]³⁻ that also has metal backbonding. For these systems, the performance of the core RAS method, which does not require any system-dependent parameters, is comparable to that of the commonly used semi-empirical charge-transfer multiplet model. It handles orbitally degenerate ground states, accurately describes metal-ligand interactions, and includes both single and multiple excitations. The results are sensitive to the choice of orbitals in the active space, and this sensitivity can be used to assign spectral features. A method has also been developed to analyze the calculated X-ray spectra using a chemically intuitive molecular orbital picture.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gehin, Jess C; Godfrey, Andrew T; Evans, Thomas M

    The Consortium for Advanced Simulation of Light Water Reactors (CASL) is developing a collection of methods and software products known as VERA, the Virtual Environment for Reactor Applications, including a core simulation capability called VERA-CS. A key milestone for this endeavor is to validate VERA against measurements from operating nuclear power reactors. The first step in validation against plant data is to determine the ability of VERA to accurately simulate the initial startup physics tests for Watts Bar Nuclear Power Station, Unit 1 (WBN1) cycle 1. VERA-CS calculations were performed with the Insilico code developed at ORNL, using cross section processing from the SCALE system and the transport capabilities within the Denovo transport code using the SPN method. The calculations were performed with ENDF/B-VII.0 cross sections in 252 groups (collapsed to 23 groups for the 3D transport solution). The key results of the comparison of calculations with measurements include initial criticality, critical configurations, control rod worth, differential boron worth, and the isothermal temperature reactivity coefficient (ITC). The VERA results for these parameters show good agreement with measurements, with the exception of the ITC, which requires additional investigation. Results are also compared to those obtained with Monte Carlo methods and a current industry core simulator.

  16. GCM simulations of Titan's middle and lower atmosphere and comparison to observations

    NASA Astrophysics Data System (ADS)

    Lora, Juan M.; Lunine, Jonathan I.; Russell, Joellen L.

    2015-04-01

    Simulation results are presented from a new general circulation model (GCM) of Titan, the Titan Atmospheric Model (TAM), which couples the Flexible Modeling System (FMS) spectral dynamical core to a suite of external/sub-grid-scale physics. These include a new non-gray radiative transfer module that takes advantage of recent data from Cassini-Huygens, large-scale condensation and quasi-equilibrium moist convection schemes, a surface model with "bucket" hydrology, and boundary layer turbulent diffusion. The model produces a realistic temperature structure from the surface to the lower mesosphere, including a stratopause, as well as satisfactory superrotation. The latter is shown to depend on the dynamical core's ability to build up angular momentum from surface torques. Simulated latitudinal temperature contrasts are adequate, compared to observations, and polar temperature anomalies agree with observations. In the lower atmosphere, the insolation distribution is shown to strongly impact turbulent fluxes, and surface heating is maximum at mid-latitudes. Surface liquids are unstable at mid- and low-latitudes, and quickly migrate poleward. The simulated humidity profile and distribution of surface temperatures, compared to observations, corroborate the prevalence of dry conditions at low latitudes. Polar cloud activity is well represented, though the observed mid-latitude clouds remain somewhat puzzling, and some formation alternatives are suggested.

  17. Approximation of Engine Casing Temperature Constraints for Casing Mounted Electronics

    NASA Technical Reports Server (NTRS)

    Kratz, Jonathan L.; Culley, Dennis E.; Chapman, Jeffryes W.

    2017-01-01

    The performance of propulsion engine systems is sensitive to weight and volume considerations. This can severely constrain the configuration and complexity of the control system hardware. Distributed Engine Control technology is a response to these concerns by providing more flexibility in designing the control system, and by extension, more functionality leading to higher performing engine systems. Consequently, there can be a weight benefit to mounting modular electronic hardware on the engine core casing in a high temperature environment. This paper attempts to quantify the in-flight temperature constraints for engine casing mounted electronics. In addition, an attempt is made at studying heat soak back effects. The Commercial Modular Aero Propulsion System Simulation 40k (C-MAPSS40k) software is leveraged with real flight data as the inputs to the simulation. A two-dimensional (2-D) heat transfer model is integrated with the engine simulation to approximate the temperature along the length of the engine casing. This modification to the existing C-MAPSS40k software will provide tools and methodologies to develop a better understanding of the requirements for the embedded electronics hardware in future engine systems. Results of the simulations are presented and their implications for temperature constraints on engine casing mounted electronics are discussed.

  18. Approximation of Engine Casing Temperature Constraints for Casing Mounted Electronics

    NASA Technical Reports Server (NTRS)

    Kratz, Jonathan; Culley, Dennis; Chapman, Jeffryes

    2016-01-01

    The performance of propulsion engine systems is sensitive to weight and volume considerations. This can severely constrain the configuration and complexity of the control system hardware. Distributed Engine Control technology is a response to these concerns by providing more flexibility in designing the control system, and by extension, more functionality leading to higher performing engine systems. Consequently, there can be a weight benefit to mounting modular electronic hardware on the engine core casing in a high temperature environment. This paper attempts to quantify the in-flight temperature constraints for engine casing mounted electronics. In addition, an attempt is made at studying heat soak back effects. The Commercial Modular Aero Propulsion System Simulation 40k (C-MAPSS40k) software is leveraged with real flight data as the inputs to the simulation. A two-dimensional (2-D) heat transfer model is integrated with the engine simulation to approximate the temperature along the length of the engine casing. This modification to the existing C-MAPSS40k software will provide tools and methodologies to develop a better understanding of the requirements for the embedded electronics hardware in future engine systems. Results of the simulations are presented and their implications for temperature constraints on engine casing mounted electronics are discussed.

  19. Performance of an MPI-only semiconductor device simulator on a quad socket/quad core InfiniBand platform.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shadid, John Nicolas; Lin, Paul Tinphone

    2009-01-01

    This preliminary study considers the scaling and performance of a finite element (FE) semiconductor device simulator on a capacity cluster with 272 compute nodes based on a homogeneous multicore node architecture with 16 cores per node. The inter-node communication backbone for this Tri-Lab Linux Capacity Cluster (TLCC) machine is an InfiniBand interconnect. The nonuniform memory access (NUMA) nodes consist of 2.2 GHz quad socket/quad core AMD Opteron processors. The performance results for this study are obtained with a FE semiconductor device simulation code (Charon) that is based on a fully-coupled Newton-Krylov solver with domain decomposition and multilevel preconditioners. Scaling and multicore performance results are presented for large-scale problems of 100+ million unknowns on up to 4096 cores. A parallel scaling comparison is also presented with the Cray XT3/4 Red Storm capability platform. The results indicate that an MPI-only programming model for utilizing the multicore nodes is reasonably efficient on all 16 cores per compute node. However, the results also indicate that the multilevel preconditioner, which is critical for large-scale capability-type simulations, scales better on the Red Storm machine than on the TLCC machine.

  20. Observability of ionospheric space-time structure with ISR: A simulation study

    NASA Astrophysics Data System (ADS)

    Swoboda, John; Semeter, Joshua; Zettergren, Matthew; Erickson, Philip J.

    2017-02-01

    The sources of error from electronically steerable array (ESA) incoherent scatter radar (ISR) systems are investigated both theoretically and with an open-source ISR simulator developed by the authors, called Simulator for ISR (SimISR). The main sources of error incorporated in the simulator are statistical uncertainty, which arises from the nature of the measurement mechanism, and the inherent space-time ambiguity of the sensor. SimISR can take a field of plasma parameters, parameterized by time and space, and create simulated ISR data at the scattered electric field (i.e., complex receiver voltage) level, subsequently processing these data to show possible reconstructions of the original parameter field. To demonstrate general utility, we show a number of simulation examples, with two cases using data from a self-consistent multifluid transport model. Results highlight the significant influence of the forward model of the ISR process and the resulting statistical uncertainty on plasma parameter measurements, as well as the core experiment design trade-offs that must be made when planning observations. These conclusions further underscore the utility of this class of measurement simulator as a design tool for more optimal experiment design with flexible ESA-class ISR systems.

  1. A heterogeneous system based on GPU and multi-core CPU for real-time fluid and rigid body simulation

    NASA Astrophysics Data System (ADS)

    da Silva Junior, José Ricardo; Gonzalez Clua, Esteban W.; Montenegro, Anselmo; Lage, Marcos; Dreux, Marcelo de Andrade; Joselli, Mark; Pagliosa, Paulo A.; Kuryla, Christine Lucille

    2012-03-01

    Computational fluid dynamics has become an important field not only in physics and engineering but also in simulation, computer graphics, virtual reality, and even video game development. Many efficient models have been developed over the years, but when many contact interactions must be processed, most models present difficulties or cannot achieve real-time results. The advent of parallel computing has enabled the development of many strategies for accelerating the simulations. Our work proposes a new system which uses some successful algorithms already proposed, as well as a data structure organisation based on a heterogeneous architecture using CPUs and GPUs, in order to simulate the interaction of fluids and rigid bodies. The result is a two-way interaction between the fluids, the rigid bodies, and their surrounding objects. As far as we know, this is the first work that presents a computational collaborative environment which makes use of two different paradigms of hardware architecture for this specific kind of problem. Since our method achieves real-time results, it is suitable for virtual reality, simulation, and video game fluid simulation problems.

  2. Enhancements to the Image Analysis Tool for Core Punch Experiments and Simulations (vs. 2014)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogden, John Edward; Unal, Cetin

    A previous paper (Hogden & Unal, 2012, Image Analysis Tool for Core Punch Experiments and Simulations) described an image processing computer program developed at Los Alamos National Laboratory. This program has proven useful, so development has continued. In this paper we describe enhancements to the program as of 2014.

  3. Development of As-Se tapered suspended-core fibers for ultra-broadband mid-IR wavelength conversion

    NASA Astrophysics Data System (ADS)

    Anashkina, E. A.; Shiryaev, V. S.; Koptev, M. Y.; Stepanov, B. S.; Muravyev, S. V.

    2018-01-01

    We designed and developed tapered suspended-core fibers of high-purity As39Se61 glass for supercontinuum generation in the mid-IR with a standard fiber laser pump source at 2 μm. It was shown that microstructuring allows shifting the zero dispersion wavelength below 2 μm in the fiber waist with a core diameter of about 1 μm. In this case, supercontinuum generation in the 1-10 μm range was obtained numerically with 150-fs, 100-pJ pump pulses at 2 μm. We also performed experiments on wavelength conversion of ultrashort optical pulses at 1.57 μm from an Er:fiber laser system in the manufactured As-Se tapered fibers. The measured broadening spectra were in good agreement with those simulated numerically.

  4. Simulator test to study hot-flow problems related to a gas cooled reactor

    NASA Technical Reports Server (NTRS)

    Poole, J. W.; Freeman, M. P.; Doak, K. W.; Thorpe, M. L.

    1973-01-01

    An advance study of materials, fuel injection, and hot flow problems related to the gas core nuclear rocket is reported. The first task was to test a previously constructed induction heated plasma GCNR simulator above 300 kW. A number of tests are reported operating in the range of 300 kW at 10,000 cps. A second simulator was designed but not constructed for cold-hot visualization studies using louvered walls. A third task was a paper investigation of practical uranium feed systems, including a detailed discussion of related problems. The last assignment resulted in two designs for plasma nozzle test devices that could be operated at 200 atm on hydrogen.

  5. HACC: Simulating sky surveys on state-of-the-art supercomputing architectures

    NASA Astrophysics Data System (ADS)

    Habib, Salman; Pope, Adrian; Finkel, Hal; Frontiere, Nicholas; Heitmann, Katrin; Daniel, David; Fasel, Patricia; Morozov, Vitali; Zagaris, George; Peterka, Tom; Vishwanath, Venkatram; Lukić, Zarija; Sehrish, Saba; Liao, Wei-keng

    2016-01-01

    Current and future surveys of large-scale cosmic structure are associated with a massive and complex datastream to study, characterize, and ultimately understand the physics behind the two major components of the 'Dark Universe', dark energy and dark matter. In addition, the surveys also probe primordial perturbations and carry out fundamental measurements, such as determining the sum of neutrino masses. Large-scale simulations of structure formation in the Universe play a critical role in the interpretation of the data and extraction of the physics of interest. Just as survey instruments continue to grow in size and complexity, so do the supercomputers that enable these simulations. Here we report on HACC (Hardware/Hybrid Accelerated Cosmology Code), a recently developed and evolving cosmology N-body code framework, designed to run efficiently on diverse computing architectures and to scale to millions of cores and beyond. HACC can run on all current supercomputer architectures and supports a variety of programming models and algorithms. It has been demonstrated at scale on Cell- and GPU-accelerated systems, standard multi-core node clusters, and Blue Gene systems. HACC's design allows for ease of portability, and at the same time, high levels of sustained performance on the fastest supercomputers available. We present a description of the design philosophy of HACC, the underlying algorithms and code structure, and outline implementation details for several specific architectures. We show selected accuracy and performance results from some of the largest high resolution cosmological simulations so far performed, including benchmarks evolving more than 3.6 trillion particles.

  6. HACC: Simulating sky surveys on state-of-the-art supercomputing architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habib, Salman; Pope, Adrian; Finkel, Hal

    2016-01-01

    Current and future surveys of large-scale cosmic structure are associated with a massive and complex datastream to study, characterize, and ultimately understand the physics behind the two major components of the 'Dark Universe', dark energy and dark matter. In addition, the surveys also probe primordial perturbations and carry out fundamental measurements, such as determining the sum of neutrino masses. Large-scale simulations of structure formation in the Universe play a critical role in the interpretation of the data and extraction of the physics of interest. Just as survey instruments continue to grow in size and complexity, so do the supercomputers that enable these simulations. Here we report on HACC (Hardware/Hybrid Accelerated Cosmology Code), a recently developed and evolving cosmology N-body code framework, designed to run efficiently on diverse computing architectures and to scale to millions of cores and beyond. HACC can run on all current supercomputer architectures and supports a variety of programming models and algorithms. It has been demonstrated at scale on Cell- and GPU-accelerated systems, standard multi-core node clusters, and Blue Gene systems. HACC's design allows for ease of portability, and at the same time, high levels of sustained performance on the fastest supercomputers available. We present a description of the design philosophy of HACC, the underlying algorithms and code structure, and outline implementation details for several specific architectures. We show selected accuracy and performance results from some of the largest high resolution cosmological simulations so far performed, including benchmarks evolving more than 3.6 trillion particles.

  7. High-Resolution NU-WRF Simulations of a Deep Convective-Precipitation System During MC3E. Part 1; Comparisons Between Goddard Microphysics Schemes and Observations

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Wu, Di; Lang, Stephen; Chern, Jiundar; Peters-Lidard, Christa; Fridlind, Ann; Matsui, Toshihisa

    2015-01-01

    The Goddard microphysics scheme was recently improved by adding a 4th ice class (frozen drops/hail). This new 4ICE scheme was implemented and tested in the Goddard Cumulus Ensemble model (GCE) for an intense continental squall line and a moderate, less-organized continental case. Simulated peak radar reflectivity profiles were improved both in intensity and shape for both cases, as were the overall reflectivity probability distributions versus observations. In this study, the new Goddard 4ICE scheme is implemented into the regional-scale NASA Unified Weather Research and Forecasting model (NU-WRF) and tested on an intense mesoscale convective system that occurred during the Midlatitude Continental Convective Clouds Experiment (MC3E). The NU-WRF simulated radar reflectivities, rainfall intensities, and vertical and horizontal structure using the new 4ICE scheme agree as well as or significantly better with observations than when using previous versions of the Goddard 3ICE (graupel or hail) schemes. In the 4ICE scheme, the bin microphysics-based rain evaporation correction produces more erect convective cores, while modification of the unrealistic collection of ice by dry hail produces narrow and intense cores, allowing more slow-falling snow to be transported rearward. Together with a revised snow size mapping, the 4ICE scheme produces a more horizontally stratified trailing stratiform region with a broad, more coherent light rain area. In addition, the NU-WRF 4ICE simulated radar reflectivity distributions are consistent with and generally superior to those using the GCE due to the less restrictive open lateral boundaries.

  8. Numerical Study of Sound Emission by 2D Regular and Chaotic Vortex Configurations

    NASA Astrophysics Data System (ADS)

    Knio, Omar M.; Collorec, Luc; Juvé, Daniel

    1995-02-01

    The far-field noise generated by a system of three Gaussian vortices lying over a flat boundary is numerically investigated using a two-dimensional vortex element method. The method is based on the discretization of the vorticity field into a finite number of smoothed vortex elements with spherical overlapping cores. The elements are convected in a Lagrangian reference frame along particle trajectories using the local velocity vector, given in terms of a desingularized Biot-Savart law. The initial structure of the vortex system is triangular; a one-dimensional family of initial configurations is constructed by keeping one side of the triangle fixed and vertical, and varying the abscissa of the centroid of the remaining vortex. The inviscid dynamics of this vortex configuration are first investigated using non-deformable vortices. Depending on the aspect ratio of the initial system, regular or chaotic motion occurs. Due to wall-related symmetries, the far-field sound always exhibits a time-independent quadrupolar directivity with maxima parallel and perpendicular to the wall. When regular motion prevails, the noise spectrum is dominated by discrete frequencies which correspond to the fundamental system frequency and its superharmonics. For chaotic motion, a broadband spectrum is obtained; computed sound levels are substantially higher than in non-chaotic systems. A more sophisticated analysis is then performed which accounts for vortex core dynamics. Results show that the vortex cores are susceptible to an inviscid instability which leads to violent vorticity reorganization within the core. This phenomenon has little effect on the large-scale features of the motion of the system or on low-frequency sound emission. However, it leads to the generation of a high-frequency noise band in the acoustic pressure spectrum. The latter is observed in both regular and chaotic system simulations.
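    The desingularized Biot-Savart evaluation and the wall symmetry described above can be sketched as follows; the algebraic smoothing kernel, the image-vortex wall treatment, and the blob parameters are illustrative assumptions, not the paper's exact formulation.

```python
# 2D desingularized Biot-Savart velocity induced by smoothed vortex blobs.
# A blob with circulation gamma at (x0, y0) induces, at (x, y):
#   u = -gamma/(2*pi) * (y - y0) / (r^2 + delta^2)
#   v =  gamma/(2*pi) * (x - x0) / (r^2 + delta^2)
# where delta is the core smoothing radius. The flat wall at y = 0 is
# enforced with image vortices of opposite circulation, which makes the
# wall-normal velocity vanish on the boundary. Blob values are illustrative.
import math

def induced_velocity(x, y, blobs, delta=0.1):
    u = v = 0.0
    for (x0, y0, gamma) in blobs:
        # real vortex plus its image (mirrored in y, opposite circulation)
        for (yc, g) in ((y0, gamma), (-y0, -gamma)):
            dx, dy = x - x0, y - yc
            r2 = dx * dx + dy * dy + delta * delta
            u += -g * dy / (2.0 * math.pi * r2)
            v += g * dx / (2.0 * math.pi * r2)
    return u, v

# Three-vortex system over a wall, loosely echoing the triangular setup.
blobs = [(0.0, 1.0, 1.0), (0.0, 2.0, 1.0), (0.5, 1.5, 1.0)]
print(induced_velocity(1.0, 1.0, blobs))
```

    Convecting each blob with the velocity induced by all the others (e.g., with a Runge-Kutta step) reproduces the Lagrangian trajectory integration the method relies on.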

  9. Globular cluster photometry with the Hubble Space Telescope. I - Description of the method and analysis of the core of 47 Tuc

    NASA Technical Reports Server (NTRS)

    Guhathakurta, Puragra; Yanny, Brian; Schneider, Donald P.; Bahcall, John N.

    1992-01-01

    Accurate photometry for individual post-main-sequence stars in the core of the Galactic globular cluster 47 Tuc is presented and analyzed using an empirical point spread function model and Monte Carlo simulations. A V vs. V-I color-magnitude diagram is constructed which shows several distinct stellar types, including RGB and HB stars. Twenty-four blue straggler stars are detected in 47 Tuc, more concentrated toward the center of the cluster than the giants. This supports the hypothesis that the stragglers are either coalesced stars or members of binary systems that are more massive than single stars. The radial profile of the projected stellar density is flat in the central region of 47 Tuc, with a core radius of 23 +/- 2 arcsec. No signature of a collapsed core is evident. The observed radial cumulative distribution of stars rules out the presence of a massive compact object in the center.

  10. Rydberg atoms in hollow-core photonic crystal fibres.

    PubMed

    Epple, G; Kleinbach, K S; Euser, T G; Joly, N Y; Pfau, T; Russell, P St J; Löw, R

    2014-06-19

    The exceptionally large polarizability of highly excited Rydberg atoms (six orders of magnitude higher than that of ground-state atoms) makes them of great interest in fields such as quantum optics, quantum computing, quantum simulation and metrology. However, if they are to be used routinely in applications, a major requirement is their integration into technically feasible, miniaturized devices. Here we show that a Rydberg medium based on room-temperature caesium vapour can be confined in broadband-guiding kagome-style hollow-core photonic crystal fibres. Three-photon spectroscopy performed on a caesium-filled fibre detects Rydberg states up to a principal quantum number of n=40. Besides small energy-level shifts, we observe narrow lines confirming the coherence of the Rydberg excitation. Using different Rydberg states and core diameters, we study the influence of confinement within the fibre core after different exposure times. Understanding these effects is essential for the successful future development of novel applications based on integrated room-temperature Rydberg systems.

  11. An origin of arc structures deeply embedded in dense molecular cloud cores

    NASA Astrophysics Data System (ADS)

    Matsumoto, Tomoaki; Onishi, Toshikazu; Tokuda, Kazuki; Inutsuka, Shu-ichiro

    2015-04-01

    We investigated the formation of arc-like structures in the infalling envelope around protostars, motivated by the recent Atacama Large Millimeter/submillimeter Array (ALMA) observations of the high-density molecular cloud core MC27/L1521F. We performed self-gravitational hydrodynamical numerical simulations with an adaptive mesh refinement code. A filamentary cloud with a 0.1 pc width fragments into cloud cores because of perturbations due to weak turbulence. The cloud core undergoes gravitational collapse to form multiple protostars, and gravitational torque from the orbiting protostars produces arc structures extending up to a 1000 au scale. As with their spatial extent, the velocity ranges of the arc structures, ~0.5 km s⁻¹, are in agreement with the ALMA observations. We also found that circumstellar discs are often misaligned in the triple system. The misalignment is caused by the tidal interaction between the protostars when they undergo close encounters on the highly eccentric orbit of the tight binary pair.

  12. Testing in Support of Fission Surface Power System Qualification

    NASA Technical Reports Server (NTRS)

    Houts, Mike; Bragg-Sitton, Shannon; Godfroy, Tom; Martin, Jim; Pearson, Boise; VanDyke, Melissa

    2007-01-01

    The strategy for qualifying a fission surface power (FSP) system could have a significant programmatic impact. The US has not qualified a space fission power system since the launch of SNAP-10A in 1965. This paper explores cost-effective options for obtaining the data that would be needed for flight qualification of a fission system. Qualification data could be obtained from both nuclear and non-nuclear testing. The ability to perform highly realistic non-nuclear testing has advanced significantly over the past four decades. Instrumented thermal simulators were developed during the 1970s and 1980s to assist in the development, operation, and assessment of terrestrial fission systems. Instrumented thermal simulators optimized for assisting in the development, operation, and assessment of modern FSP systems have been under development (and utilized) since 1998. These thermal simulators enable heat from fission to be closely mimicked (axial power profile, radial power profile, temperature, heat flux, etc.) and extensive data to be taken from the core region. For transient testing, pin power during a transient is calculated based on the reactivity feedback that would occur given measured values of test article temperature and/or dimensional changes. The reactivity feedback coefficients needed for the test are either calculated or measured using cold/warm zero-power criticals. In this way, non-nuclear testing can be used to provide very realistic information related to nuclear operation. Non-nuclear testing can be used at all levels, including component, subsystem, and integrated system testing. FSP fuels and materials are typically chosen to ensure very high confidence in operation at design burnups, fluences, and temperatures. However, facilities exist (e.g., ATR, HFIR) for affordably performing in-pile fuel and materials irradiations, if such testing is desired. Ex-core materials and components (such as alternator materials, control drum drives, etc.) could be irradiated in university or DOE reactors to ensure adequate radiation resistance. Facilities also exist for performing warm and cold zero-power criticals.
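    The transient-testing scheme described above (power computed from reactivity that responds to measured temperature) resembles classic point kinetics with temperature feedback; a minimal sketch follows, assuming one delayed-neutron group, explicit Euler integration, and illustrative constants that are not the facility's actual parameters.

```python
# Point kinetics with one delayed-neutron group and linear temperature
# feedback, integrated with explicit Euler. All constants are illustrative;
# the generation time is enlarged well beyond fast-reactor values purely
# for step-size stability in this sketch.
beta = 0.0065        # delayed-neutron fraction
lam = 0.08           # precursor decay constant (1/s)
Lambda = 1.0e-3      # neutron generation time (s), enlarged for stability
alpha_T = -1.0e-5    # temperature feedback coefficient (dk/k per K)
heat_cap = 50.0      # lumped core heat capacity (kJ/K)
h_loss = 0.5         # heat removal coefficient (kW/K)
T_ref = 300.0        # reference core temperature (K)

P = 100.0                          # power (kW)
C = P * beta / (lam * Lambda)      # precursor population at equilibrium
T = T_ref                          # core temperature (K)
rho_ext = 0.001                    # small external reactivity insertion

dt = 1.0e-4
for _ in range(200000):            # 20 s of simulated time
    rho = rho_ext + alpha_T * (T - T_ref)   # feedback from measured T
    dP = ((rho - beta) / Lambda) * P + lam * C
    dC = (beta / Lambda) * P - lam * C
    dT = (P - h_loss * (T - T_ref)) / heat_cap
    P, C, T = P + dt * dP, C + dt * dC, T + dt * dT

print(f"power {P:.1f} kW, temperature {T:.1f} K")
```

    With a negative feedback coefficient, the initial reactivity insertion raises power, the core heats up, and the added reactivity is gradually cancelled, which is the self-stabilizing behavior the non-nuclear test loop is designed to reproduce.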

  13. Investigations of protostellar outflow launching and gas entrainment: Hydrodynamic simulations and molecular emission

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Offner, Stella S. R.; Arce, Héctor G., E-mail: stella.offner@yale.edu

    2014-03-20

    We investigate protostellar outflow evolution, gas entrainment, and star formation efficiency using radiation-hydrodynamic simulations of isolated, turbulent low-mass cores. We adopt an X-wind launching model, in which the outflow rate is coupled to the instantaneous protostellar accretion rate and evolution. We vary the outflow collimation angle from θ = 0.01-0.1 and find that even well-collimated outflows effectively sweep up and entrain significant core mass. The Stage 0 lifetime ranges from 0.14-0.19 Myr, which is similar to the observed Class 0 lifetime. The star formation efficiency of the cores spans 0.41-0.51. In all cases, the outflows drive strong turbulence in the surrounding material. Although the initial core turbulence is purely solenoidal by construction, the simulations converge to approximate equipartition between solenoidal and compressive motions due to a combination of outflow driving and collapse. When compared to a simulation of a cluster of protostars, which is not gravitationally centrally condensed, we find that the outflows drive motions that are mainly solenoidal. The final turbulent velocity dispersion is about twice the initial value of the cores, indicating that an individual outflow can easily replenish turbulent motions on sub-parsec scales. We post-process the simulations to produce synthetic molecular line emission maps of ¹²CO, ¹³CO, and C¹⁸O and evaluate how well these tracers reproduce the underlying mass and velocity structure.

  14. μπ: A Scalable and Transparent System for Simulating MPI Programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perumalla, Kalyan S

    2010-01-01

    μπ is a scalable, transparent system for experimenting with the execution of parallel programs on simulated computing platforms. The level of simulated detail can be varied for application behavior as well as for machine characteristics. Unique features of μπ are repeatability of execution, scalability to millions of simulated (virtual) MPI ranks, scalability to hundreds of thousands of host (real) MPI ranks, portability of the system to a variety of host supercomputing platforms, and the ability to experiment with scientific applications whose source code is available. The set of source-code interfaces supported by μπ is being expanded to support a wider set of applications, and MPI-based scientific computing benchmarks are being ported. In proof-of-concept experiments, μπ has been successfully exercised to spawn and sustain very large-scale executions of an MPI test program given in source-code form. Low slowdowns are observed, due to its purely discrete-event style of execution and to the scalability and efficiency of the underlying parallel discrete event simulation engine, μsik. In the largest runs, μπ has been executed on up to 216,000 cores of a Cray XT5 supercomputer, successfully simulating over 27 million virtual MPI ranks, each virtual rank containing its own thread context, and all ranks fully synchronized by virtual time.
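    The discrete-event style of execution described above can be illustrated in miniature: timestamped messages between virtual ranks sit in a priority queue, and the virtual clock advances only at event delivery. This sequential toy is purely illustrative (the real engine distributes the event queue across many host ranks, and all names here are invented):

    ```python
    import heapq

    class VirtualRankSim:
        """A toy sequential discrete-event simulator of message passing among
        virtual ranks; time is virtual and advances only at event delivery."""

        def __init__(self, num_ranks):
            self.now = 0.0
            self.delivered = {r: [] for r in range(num_ranks)}
            self._queue = []   # entries: (virtual_time, seq, dest_rank, payload)
            self._seq = 0      # tie-breaker so equal timestamps pop in send order

        def send(self, dest, payload, latency):
            """Schedule delivery of payload to dest after a virtual latency."""
            heapq.heappush(self._queue, (self.now + latency, self._seq, dest, payload))
            self._seq += 1

        def run(self):
            while self._queue:
                t, _, dest, payload = heapq.heappop(self._queue)
                self.now = t                      # jump straight to the next event
                self.delivered[dest].append((t, payload))

    sim = VirtualRankSim(num_ranks=3)
    sim.send(1, "hello", latency=2.0)
    sim.send(2, "world", latency=1.0)
    sim.run()   # "world" arrives at virtual t=1.0, "hello" at t=2.0
    ```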

  15. Cloud-resolving model intercomparison of an MC3E squall line case: Part I-Convective updrafts: CRM Intercomparison of a Squall Line

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fan, Jiwen; Han, Bin; Varble, Adam

    A constrained model intercomparison study of a mid-latitude mesoscale squall line is performed using the Weather Research and Forecasting (WRF) model at 1-km horizontal grid spacing with eight cloud microphysics schemes, to understand the specific processes that lead to the large spread of simulated cloud and precipitation at cloud-resolving scales; the focus of this paper is on convective cores. Various observational data are employed to evaluate the baseline simulations. All simulations tend to produce a wider convective area than observed, but a much narrower stratiform area, with most bulk schemes overpredicting radar reflectivity. The magnitudes of the virtual potential temperature drop, pressure rise, and peak wind speed associated with the passage of the gust front are significantly smaller than observed, suggesting the simulated cold pools are weaker. Simulations also overestimate the vertical velocity and Ze in convective cores as compared with observational retrievals. The modeled updraft velocity and precipitation have a significant spread across the eight schemes even in this strongly dynamically driven system. The spread of updraft velocity is attributed to the combined effects of the low-level perturbation pressure gradient, determined by cold pool intensity, and buoyancy that is not necessarily well correlated with differences in latent heating among the simulations. Variability of updraft velocity between schemes is also related to differences in ice-related parameterizations, whereas precipitation variability increases in no-ice simulations because of scheme differences in collision-coalescence parameterizations.

  16. Control of the Speed of a Light-Induced Spin Transition through Mesoscale Core-Shell Architecture.

    PubMed

    Felts, Ashley C; Slimani, Ahmed; Cain, John M; Andrus, Matthew J; Ahir, Akhil R; Abboud, Khalil A; Meisel, Mark W; Boukheddaden, Kamel; Talham, Daniel R

    2018-05-02

    The rate of the light-induced spin transition in a coordination polymer network solid dramatically increases when the solid is included as the core in mesoscale core-shell particles. A series of photomagnetic coordination polymer core-shell heterostructures, based on the light-switchable Rb_aCo_b[Fe(CN)_6]_c·mH_2O (RbCoFe-PBA) as core with the isostructural K_jNi_k[Cr(CN)_6]_l·nH_2O (KNiCr-PBA) as shell, are studied using temperature-dependent powder X-ray diffraction and SQUID magnetometry. The core RbCoFe-PBA exhibits a charge-transfer-induced spin transition (CTIST), which can be thermally and optically induced. When the core is coupled to the shell, the rate of the optically induced transition from low spin to high spin increases. Isothermal relaxation from the optically induced high-spin state of the core back to the low-spin state, and the activation energies associated with the transition between these states, were measured. The presence of a shell decreases the activation energy, which is associated with the elastic properties of the core. Numerical simulations using an electro-elastic model for the spin transition in core-shell particles support the findings, demonstrating how coupling of the core to the shell changes the elastic properties of the system. The ability to tune the rate of optically induced magnetic and structural phase transitions through control of mesoscale architecture presents a new approach to the development of photoswitchable materials with tailored properties.
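    The activation-energy extraction mentioned above is commonly done by fitting isothermal relaxation rates to an Arrhenius law, k(T) = k0·exp(-Ea/(kB·T)). A short sketch with synthetic rates (the numbers are illustrative, not the paper's data):

    ```python
    import math

    KB = 8.617e-5  # Boltzmann constant (eV/K)

    def activation_energy(temps_K, rates):
        """Least-squares slope of ln(k) versus 1/T; for an Arrhenius law the
        slope is -Ea/kB, so the fit returns Ea in eV."""
        x = [1.0 / T for T in temps_K]
        y = [math.log(k) for k in rates]
        n = len(x)
        xbar, ybar = sum(x) / n, sum(y) / n
        slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
                 / sum((xi - xbar) ** 2 for xi in x))
        return -slope * KB

    # Synthetic relaxation rates generated with Ea = 0.25 eV; the fit should
    # recover that value.
    Ea_true, k0 = 0.25, 1.0e6
    temps = [100.0, 110.0, 120.0, 130.0]
    rates = [k0 * math.exp(-Ea_true / (KB * T)) for T in temps]
    Ea_fit = activation_energy(temps, rates)
    ```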

  17. Virtual Plant Tissue: Building Blocks for Next-Generation Plant Growth Simulation

    PubMed Central

    De Vos, Dirk; Dzhurakhalov, Abdiravuf; Stijven, Sean; Klosiewicz, Przemyslaw; Beemster, Gerrit T. S.; Broeckhove, Jan

    2017-01-01

    Motivation: Computational modeling of plant developmental processes is becoming increasingly important. Cellular-resolution plant tissue simulators have been developed, yet they typically describe physiological processes in isolation, strongly delimited in space and time. Results: With plant systems biology moving toward an integrative perspective on development, we have built the Virtual Plant Tissue (VPTissue) package to couple functional modules or models within the same framework and across different frameworks. Multiple levels of model integration and coordination enable combining existing and new models from different sources, with diverse input/output options. Besides the core simulator, the toolset comprises a tissue editor for manipulating tissue geometry and cell, wall, and node attributes interactively. A parameter exploration tool is available to study the parameter dependence of simulation results by distributing calculations over multiple systems. Availability: Virtual Plant Tissue is available as open source (EUPL license) on Bitbucket (https://bitbucket.org/vptissue/vptissue). The project has a website at https://vptissue.bitbucket.io. PMID:28523006

  18. IMPROVEMENTS IN THE THERMAL NEUTRON CALIBRATION UNIT, TNF2, AT LNMRI/IRD.

    PubMed

    Astuto, A; Fernandes, S S; Patrão, K C S; Fonseca, E S; Pereira, W W; Lopes, R T

    2018-02-21

    The standard thermal neutron flux unit, TNF2, at the Brazilian National Ionizing Radiation Metrology Laboratory was rebuilt. Fluence is still achieved by moderating four ²⁴¹Am-Be sources of 0.6 TBq each. The facility was re-simulated and redesigned with a graphite core surrounded by paraffin-added graphite blocks. Simulations of different geometric arrangements of moderator materials and neutron sources were performed using the MCNPX code, and the resulting neutron fluence quality was evaluated in terms of intensity, spectrum, and cadmium ratio. The system was then assembled based on the simulation results, and measurements were performed with equipment available at LNMRI/IRD as well as with simulated instruments. This work focuses on the characterization of a central chamber point and of external points around TNF2 in terms of neutron spectrum, fluence, and ambient dose equivalent, H*(10). The system was validated with measurements of spectra, fluence, and H*(10) to ensure traceability.
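    The cadmium-ratio characterization mentioned above rests on a simple idea: a bare detector responds to thermal plus epithermal neutrons, while a cadmium-covered one sees only the epithermal component, so the ratio of the two readings indicates how well thermalized the field is. A sketch with illustrative count rates (not TNF2 data):

    ```python
    def cadmium_ratio(rate_bare, rate_cd):
        """R_Cd = bare response / cadmium-covered response; larger values
        indicate a more thoroughly thermalized field."""
        return rate_bare / rate_cd

    def thermal_component(rate_bare, rate_cd):
        """Cadmium-difference method: subtracting the covered (epithermal)
        reading leaves the thermal contribution."""
        return rate_bare - rate_cd

    # Illustrative detector count rates (counts/s).
    m_bare, m_cd = 1200.0, 150.0
    r_cd = cadmium_ratio(m_bare, m_cd)
    thermal = thermal_component(m_bare, m_cd)
    ```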

  19. Implementation of metal-friendly EAM/FS-type semi-empirical potentials in HOOMD-blue: A GPU-accelerated molecular dynamics software

    NASA Astrophysics Data System (ADS)

    Yang, Lin; Zhang, Feng; Wang, Cai-Zhuang; Ho, Kai-Ming; Travesset, Alex

    2018-04-01

    We present an implementation of EAM and FS interatomic potentials, which are widely used in simulating metallic systems, in HOOMD-blue, a software package designed to perform classical molecular dynamics simulations using GPU acceleration. We first discuss the details of our implementation and then report extensive benchmark tests. We demonstrate that single-precision floating-point operations efficiently implemented on GPUs can produce sufficient accuracy compared with double-precision codes, as shown in test calculations of the glass-transition temperature of Cu64.5Zr35.5 and of the pair correlation function g(r) of liquid Ni3Al. Our code scales well with the size of the simulated system on NVIDIA Tesla M40 and P100 GPUs. Compared with the popular LAMMPS package running on 32 cores of AMD Opteron 6220 processors, the GPU/CPU performance ratio can reach as high as 4.6. The source code can be accessed for free through the HOOMD-blue web page by any interested user.
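    The EAM form these potentials implement assigns each atom an embedding energy in the host electron density plus a pair term, E_i = F(ρ_i) + ½ Σ_j φ(r_ij). A minimal Python sketch with toy functional forms (not a fitted metal potential, and unrelated to the GPU implementation itself):

    ```python
    import math

    def eam_energy(positions, F, rho, phi, r_cut):
        """Total EAM energy: for each atom, sum pair contributions phi(r) and
        embed it in the host density rho accumulated from its neighbors."""
        total = 0.0
        for i, ri in enumerate(positions):
            host_density = 0.0
            for j, rj in enumerate(positions):
                if i == j:
                    continue
                r = math.dist(ri, rj)
                if r < r_cut:
                    host_density += rho(r)    # density contributed by atom j
                    total += 0.5 * phi(r)     # pair energy, half to each atom
            total += F(host_density)          # embedding energy
        return total

    # Toy forms: square-root embedding, exponential density, screened repulsion.
    F = lambda d: -math.sqrt(d)
    rho = lambda r: math.exp(-2.0 * r)
    phi = lambda r: math.exp(-r) / r
    dimer = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
    E_dimer = eam_energy(dimer, F, rho, phi, r_cut=5.0)
    ```

    The embedding term F(ρ) is what makes EAM many-body: an atom's energy depends on its whole neighborhood's density, not just on pairwise distances.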

  20. HYDES: A generalized hybrid computer program for studying turbojet or turbofan engine dynamics

    NASA Technical Reports Server (NTRS)

    Szuch, J. R.

    1974-01-01

    This report describes HYDES, a hybrid computer program capable of simulating one-spool turbojet, two-spool turbojet, or two-spool turbofan engine dynamics, including two- or three-stream turbofans with or without mixing of the exhaust streams. The program is intended to reduce the time required to implement dynamic engine simulations. HYDES was developed to run on the Lewis Research Center's Electronic Associates (EAI) 690 Hybrid Computing System and satisfies the 16384-word core-size and hybrid-interface limits of that machine; it could be modified to run on other computing systems. The use of HYDES to simulate a single-spool turbojet and a two-spool, two-stream turbofan engine is demonstrated. The form of the required input data is shown, and samples of output listings (teletype) and transient plots (x-y plotter) are provided. HYDES is shown to be capable of performing both steady-state (design and off-design) and transient analyses.
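    The core dynamic such engine simulations integrate is the spool-speed equation: excess turbine torque over compressor load accelerates the rotor, I·dω/dt = Q_turb - Q_comp. A sketch with made-up algebraic torque maps (programs of this kind interpolate tabulated component maps; none of the numbers below are engine data):

    ```python
    I_SPOOL = 1.2   # rotor polar moment of inertia (kg m^2), assumed

    def q_turbine(omega, fuel):
        """Illustrative turbine torque map (N m): more fuel, more torque."""
        return 900.0 * fuel - 0.01 * omega

    def q_compressor(omega):
        """Illustrative compressor load torque (N m), growing with speed."""
        return 4.0e-4 * omega ** 2

    def spool_transient(omega0, fuel, t_end=20.0, dt=1e-3):
        """Euler-integrate I * domega/dt = Q_turb - Q_comp from omega0 (rad/s)."""
        omega = omega0
        for _ in range(int(t_end / dt)):
            omega += (q_turbine(omega, fuel) - q_compressor(omega)) / I_SPOOL * dt
        return omega

    # From 1000 rad/s at full fuel, the spool accelerates toward the speed
    # where turbine and compressor torques balance.
    omega_final = spool_transient(1000.0, fuel=1.0)
    ```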
