DOE Office of Scientific and Technical Information (OSTI.GOV)
2012-01-01
The simulation was performed on 64K cores of Intrepid, running at 0.25 simulated-years-per-day and taking 25 million core-hours. This is the first simulation using both the CAM5 physics and the highly scalable spectral element dynamical core. The animation of Total Precipitable Water clearly shows hurricanes developing in the Atlantic and Pacific.
Adaptive Core Simulation Employing Discrete Inverse Theory - Part II: Numerical Experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abdel-Khalik, Hany S.; Turinsky, Paul J.
2005-07-15
Use of adaptive simulation is intended to improve the fidelity and robustness of important core attribute predictions such as core power distribution, thermal margins, and core reactivity. Adaptive simulation utilizes a selected set of past and current reactor measurements of reactor observables, i.e., in-core instrumentation readings, to adapt the simulation in a meaningful way. The companion paper, ''Adaptive Core Simulation Employing Discrete Inverse Theory - Part I: Theory,'' describes in detail the theoretical background of the proposed adaptive techniques. This paper, Part II, demonstrates several computational experiments conducted to assess the fidelity and robustness of the proposed techniques. The intent is to check the ability of the adapted core simulator model to predict future core observables that are not included in the adaption or core observables that are recorded at core conditions that differ from those at which adaption is completed. Also, this paper demonstrates successful utilization of an efficient sensitivity analysis approach to calculate the sensitivity information required to perform the adaption for millions of input core parameters. Finally, this paper illustrates a useful application for adaptive simulation - reducing the inconsistencies between two different core simulator code systems, where the multitudes of input data to one code are adjusted to enhance the agreement between both codes for important core attributes, i.e., core reactivity and power distribution. Also demonstrated is the robustness of such an application.
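The adaption described above is, at its core, a regularized inverse problem: adjust many input parameters so that a (linearized) simulator better matches a few measurements. A minimal sketch under that reading, with all sizes and values invented for illustration (this is not the paper's ESM algorithm):

    import numpy as np

    rng = np.random.default_rng(0)
    n_obs, n_par = 50, 200                 # few observables, many input parameters
    J = rng.normal(size=(n_obs, n_par))    # sensitivities of observables w.r.t. inputs
    p0 = np.zeros(n_par)                   # nominal input data
    y0 = J @ p0                            # nominal predictions (linearized model)
    y_meas = rng.normal(size=n_obs)        # stand-in for measured observables

    lam = 1.0                              # regularization: confidence in nominal data
    # Solve min_p ||J (p - p0) - (y_meas - y0)||^2 + lam ||p - p0||^2
    dp = np.linalg.solve(J.T @ J + lam * np.eye(n_par), J.T @ (y_meas - y0))
    print("residual before:", np.linalg.norm(y_meas - y0))
    print("residual after: ", np.linalg.norm(y_meas - (y0 + J @ dp)))

The point of the papers' subspace methods is precisely to avoid forming and storing the full Jacobian J when the parameter count runs to millions; the dense solve here is only for illustration.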
Design and testing of coring bits on drilling lunar rock simulant
NASA Astrophysics Data System (ADS)
Li, Peng; Jiang, Shengyuan; Tang, Dewei; Xu, Bo; Ma, Chao; Zhang, Hui; Qin, Hongwei; Deng, Zongquan
2017-02-01
Coring bits are widely utilized in the sampling of celestial bodies, and their drilling behavior directly affects the sampling results and drilling security. This paper introduces a lunar regolith coring bit (LRCB), a key component of sampling tools for lunar rock breaking during the lunar soil sampling process. We establish the interaction model between the drill bit and rock at a small cutting depth, and we determine the two parameters of the LRCB with the greatest influence on drilling loads (the forward and outward rake angles). We perform a parameter screening of the LRCB with the aim of minimizing the weight on bit (WOB). We verify the drilling load performance of the LRCB after optimization: the higher the penetration per revolution (PPR), the larger the drilling loads obtained. In addition, we perform lunar soil drilling simulations to estimate the chip-conveying and sample-coring efficiency of the LRCB. The simulation and test results are broadly consistent regarding coring efficiency, and in simulation the chip removal efficiency of the LRCB is slightly lower than that of the HIT-H bit. This work proposes a method for the design of coring bits in subsequent extraterrestrial explorations.
NASA Astrophysics Data System (ADS)
Rodriguez, M.; Brualla, L.
2018-04-01
Monte Carlo simulation of radiation transport is computationally demanding if reasonably low statistical uncertainties of the estimated quantities are to be obtained. It can therefore benefit to a large extent from high-performance computing. This work is aimed at assessing the performance of the first generation of the many-integrated-core (MIC) Xeon Phi coprocessor with respect to that of a CPU consisting of two 12-core Xeon processors in Monte Carlo simulation of coupled electron-photon showers. The comparison was made twofold: first, through a suite of basic tests, including parallel versions of the random number generators Mersenne Twister and a modified implementation of RANECU, intended to establish a baseline comparison between both devices; secondly, through the pDPM code developed in this work. pDPM is a parallel version of the Dose Planning Method (DPM) program for fast Monte Carlo simulation of radiation transport in voxelized geometries. A variety of techniques intended to obtain large scalability on the Xeon Phi were implemented in pDPM. Maximum scalabilities of 84.2× and 107.5× were obtained on the Xeon Phi for simulations of electron and photon beams, respectively. Nevertheless, in none of the tests involving radiation transport did the Xeon Phi perform better than the CPU. The disadvantage of the Xeon Phi with respect to the CPU owes to the low performance of the single core of the former: a single core of the Xeon Phi was more than 10 times less efficient than a single core of the CPU for all radiation transport simulations.
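For context on the baseline tests above: RANECU is L'Ecuyer's combined linear congruential generator, widely used in Monte Carlo particle transport. A plain-Python rendering from the published constants (an illustration of the per-stream generator only; the modified, parallel implementation benchmarked in the paper is not reproduced here):

    def ranecu(seed1=12345, seed2=67890):
        """L'Ecuyer (1988) combined LCG; yields uniforms in (0, 1)."""
        s1, s2 = seed1, seed2
        while True:
            # Schrage's decomposition keeps each LCG within 32-bit integer range
            s1 = 40014 * (s1 % 53668) - 12211 * (s1 // 53668)
            if s1 < 0:
                s1 += 2147483563
            s2 = 40692 * (s2 % 52774) - 3791 * (s2 // 52774)
            if s2 < 0:
                s2 += 2147483399
            z = s1 - s2
            if z < 1:
                z += 2147483562
            yield z * 4.656613e-10

    gen = ranecu()
    print([round(next(gen), 6) for _ in range(3)])

In parallel transport codes, each history or thread typically receives its own seed pair via sequence splitting, which is what makes generators of this kind suitable for the many-core comparisons described above.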
Tests of a D vented thrust deflecting nozzle behind a simulated turbofan engine
NASA Technical Reports Server (NTRS)
Watson, T. L.
1982-01-01
A D-vented thrust-deflecting nozzle applicable to subsonic V/STOL aircraft was tested behind a simulated turbofan engine in the vertical thrust stand. Nozzle thrust, fan operating characteristics, nozzle entrance conditions, and static pressures were measured. Nozzle performance was measured for variations in exit area and thrust deflection angle. Six core nozzle configurations, the effect of core exit axial location, mismatched core and fan stream nozzle pressure ratios, and yaw vane presence were evaluated. Core nozzle configuration affected performance at normal and engine-out operating conditions. For a given exit area, the highest vectored nozzle performance resulted when core and fan stream pressures were equal. It is concluded that high nozzle performance can be maintained at both normal and engine-out conditions through control of the nozzle entrance Mach number with a variable exit area.
Adaptive Core Simulation Employing Discrete Inverse Theory - Part I: Theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abdel-Khalik, Hany S.; Turinsky, Paul J.
2005-07-15
Use of adaptive simulation is intended to improve the fidelity and robustness of important core attribute predictions such as core power distribution, thermal margins, and core reactivity. Adaptive simulation utilizes a selected set of past and current reactor measurements of reactor observables, i.e., in-core instrumentation readings, to adapt the simulation in a meaningful way. A meaningful adaption will result in high-fidelity and robust adapted core simulator models. To perform adaption, we propose an inverse theory approach in which the multitudes of input data to core simulators, i.e., reactor physics and thermal-hydraulic data, are to be adjusted to improve agreement with measured observables while keeping core simulator models unadapted. At first glance, devising such adaption for typical core simulators with millions of input and observables data would spawn not only several prohibitive challenges but also numerous disparaging concerns. The challenges include the computational burdens of the sensitivity-type calculations required to construct Jacobian operators for the core simulator models. Also, the computational burdens of the uncertainty-type calculations required to estimate the uncertainty information of core simulator input data present a demanding challenge. The concerns however are mainly related to the reliability of the adjusted input data. The methodologies of adaptive simulation are well established in the literature of data adjustment. We adopt the same general framework for data adjustment; however, we refrain from solving the fundamental adjustment equations in a conventional manner. We demonstrate the use of our so-called Efficient Subspace Methods (ESMs) to overcome the computational and storage burdens associated with the core adaption problem. We illustrate the successful use of ESM-based adaptive techniques for a typical boiling water reactor core simulator adaption problem.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Setiani, Tia Dwi; Suprijadi (Nuclear Physics and Biophysics Research Division, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung, Jalan Ganesha 10, Bandung 40132)
Monte Carlo (MC) is one of the most powerful techniques for simulation in x-ray imaging. The MC method can simulate radiation transport within matter with high accuracy and provides a natural way to simulate radiation transport in complex systems. One MC-based code widely used for radiographic image simulation is MC-GPU, developed by Andreu Badal. This study investigates the computation time of x-ray imaging simulation on a GPU (Graphics Processing Unit) compared to a standard CPU (Central Processing Unit). Furthermore, the effect of physical parameters on the quality of radiographic images, and a comparison of the image quality resulting from simulation on the GPU and CPU, are evaluated in this paper. The simulations were run on a CPU in serial, and on two GPUs with 384 cores and 2304 cores. In the GPU simulations, each core calculates one photon, so a large number of photons are calculated simultaneously. Results show that the simulations on the GPU were significantly accelerated compared to the CPU: simulations on the 2304-core GPU ran about 64-114 times faster than on the CPU, while simulations on the 384-core GPU ran about 20-31 times faster than on a single CPU core. Another result shows that optimum image quality was obtained starting from 10^8 histories at energies from 60 keV to 90 keV. Analyzed by a statistical approach, the quality of the GPU and CPU images is essentially the same.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shadid, John Nicolas; Lin, Paul Tinphone
2009-01-01
This preliminary study considers the scaling and performance of a finite element (FE) semiconductor device simulator on a capacity cluster with 272 compute nodes based on a homogeneous multicore node architecture utilizing 16 cores. The inter-node communication backbone for this Tri-Lab Linux Capacity Cluster (TLCC) machine is an InfiniBand interconnect. The nonuniform memory access (NUMA) nodes consist of 2.2 GHz quad-socket/quad-core AMD Opteron processors. The performance results for this study are obtained with a FE semiconductor device simulation code (Charon) that is based on a fully-coupled Newton-Krylov solver with domain decomposition and multilevel preconditioners. Scaling and multicore performance results are presented for large-scale problems of 100+ million unknowns on up to 4096 cores. A parallel scaling comparison is also presented with the Cray XT3/4 Red Storm capability platform. The results indicate that an MPI-only programming model for utilizing the multicore nodes is reasonably efficient on all 16 cores per compute node. However, the results also indicate that the multilevel preconditioner, which is critical for large-scale capability-type simulations, scales better on the Red Storm machine than on the TLCC machine.
Full Core TREAT Kinetics Demonstration Using Rattlesnake/BISON Coupling Within MAMMOTH
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ortensi, Javier; DeHart, Mark D.; Gleicher, Frederick N.
2015-08-01
This report summarizes key aspects of research in evaluation of modeling needs for TREAT transient simulation. Using a TREAT critical measurement and a transient for a small, experimentally simplified core, Rattlesnake and MAMMOTH simulations are performed, building from simple infinite media to a full core model. Cross section processing methods are evaluated, various homogenization approaches are assessed, and the neutronic behavior of the core is studied to determine key modeling aspects. The simulation of the minimum critical core with the diffusion solver shows very good agreement with the reference Monte Carlo simulation and the experiment. The full core transient simulation with thermal feedback shows a significantly lower power peak compared to the documented experimental measurement, which is not unexpected in the early stages of model development.
Brightness analysis of an electron beam with a complex profile
NASA Astrophysics Data System (ADS)
Maesaka, Hirokazu; Hara, Toru; Togawa, Kazuaki; Inagaki, Takahiro; Tanaka, Hitoshi
2018-05-01
We propose a novel analysis method to extract the core, bright part of an electron beam with a complex phase-space profile. This method is useful for evaluating the performance of simulation data of a linear accelerator (linac), such as an x-ray free electron laser (XFEL) machine, since the phase-space distribution of a linac electron beam is not simple compared to a Gaussian beam in a synchrotron. In this analysis, the brightness of undulator radiation is calculated, and the core of the electron beam is determined by maximizing this brightness. We successfully extracted core electrons from a complex beam profile of XFEL simulation data that could not be expressed by a set of slice parameters. FEL simulations showed that the FEL intensity was largely retained even after extracting the core part. Consequently, the FEL performance can be estimated by this analysis without time-consuming FEL simulations.
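The idea of maximizing brightness over the retained fraction can be illustrated on a toy distribution. The sketch below (our construction, not the authors' code or data) ranks particles by phase-space amplitude and scans the kept fraction for the maximum of a simple brightness proxy B ~ N / (eps_x * eps_y):

    import numpy as np

    def rms_emittance(u, up):
        u = u - u.mean(); up = up - up.mean()
        return np.sqrt(u.var() * up.var() - np.mean(u * up) ** 2)

    rng = np.random.default_rng(1)
    n = 20000
    core = rng.normal(0.0, 1.0, size=(4, 16000))       # dense Gaussian core
    halo = rng.normal(0.0, 4.0, size=(4, n - 16000))   # diffuse tails
    x, xp, y, yp = np.hstack([core, halo])

    amp = x**2 + xp**2 + y**2 + yp**2                  # crude phase-space amplitude
    order = np.argsort(amp)
    best_b, best_f = 0.0, 1.0
    for f in np.linspace(0.3, 1.0, 71):                # scan the retained fraction
        keep = order[: int(f * n)]
        b = keep.size / (rms_emittance(x[keep], xp[keep]) *
                         rms_emittance(y[keep], yp[keep]))
        if b > best_b:
            best_b, best_f = b, f
    print(f"brightness-maximizing core fraction ~ {best_f:.2f}")

The paper's figure of merit is the brightness of undulator radiation rather than this emittance proxy, but the optimization structure (scan the kept population, maximize brightness) is the same.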
An assessment of coupling algorithms for nuclear reactor core physics simulations
Hamilton, Steven; Berrill, Mark; Clarno, Kevin; ...
2016-04-01
This paper evaluates the performance of multiphysics coupling algorithms applied to a light water nuclear reactor core simulation. The simulation couples the k-eigenvalue form of the neutron transport equation with heat conduction and subchannel flow equations. We compare Picard iteration (block Gauss–Seidel) to Anderson acceleration and multiple variants of preconditioned Jacobian-free Newton–Krylov (JFNK). The performance of the methods is evaluated over a range of energy group structures and core power levels. A novel physics-based approximation to a Jacobian-vector product has been developed to mitigate the impact of expensive on-line cross section processing steps. Furthermore, numerical simulations demonstrating the efficiency of JFNK and Anderson acceleration relative to standard Picard iteration are performed on a 3D model of a nuclear fuel assembly. Both criticality (k-eigenvalue) and critical boron search problems are considered.
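The contrast between Picard iteration and Anderson acceleration is easy to see on a scalar fixed-point problem. A toy comparison (illustrative only; not the paper's coupled-physics solver), using depth-1 Anderson, i.e. a secant-like extrapolation over the last two residuals:

    import numpy as np

    def G(x):                        # stand-in for one coupled-physics update sweep
        return 0.9 * np.cos(x)

    def picard(x, tol=1e-12, kmax=1000):
        for k in range(kmax):
            xn = G(x)
            if abs(xn - x) < tol:
                return k
            x = xn
        return kmax

    def anderson1(x, tol=1e-12, kmax=1000):
        x_prev, f_prev = x, G(x) - x
        x = G(x)
        for k in range(kmax):
            f = G(x) - x             # residual of the fixed-point map
            if abs(f) < tol:
                return k
            denom = f - f_prev
            theta = f / denom if denom != 0 else 0.0
            # combine the two most recent Picard updates to cancel the residual
            x_new = (x + f) - theta * ((x + f) - (x_prev + f_prev))
            x_prev, f_prev, x = x, f, x_new
        return kmax

    print("Picard iterations:  ", picard(0.5))
    print("Anderson iterations:", anderson1(0.5))

In the paper the maps are expensive transport and thermal-hydraulics sweeps, so cutting the iteration count in this way translates directly into run-time savings.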
Measurement and simulation of thermal neutron flux distribution in the RTP core
NASA Astrophysics Data System (ADS)
Rabir, Mohamad Hairie B.; Jalal Bayar, Abi Muttaqin B.; Hamzah, Na'im Syauqi B.; Mustafa, Muhammad Khairul Ariff B.; Karim, Julia Bt. Abdul; Zin, Muhammad Rawi B. Mohamed; Ismail, Yahya B.; Hussain, Mohd Huzair B.; Mat Husin, Mat Zin B.; Dan, Roslan B. Md; Ismail, Ahmad Razali B.; Husain, Nurfazila Bt.; Jalil Khan, Zareen Khan B. Abdul; Yakin, Shaiful Rizaide B. Mohd; Saad, Mohamad Fauzi B.; Masood, Zarina Bt.
2018-01-01
The in-core thermal neutron flux distribution was determined by measurement and simulation for the Malaysian PUSPATI TRIGA Reactor (RTP). In this work, online thermal neutron flux measurement using a Self-Powered Neutron Detector (SPND) was performed to verify and validate the computational methods for neutron flux calculation in RTP. The experimental results were used to validate calculations performed with the Monte Carlo code MCNP. The detailed in-core neutron flux distributions were estimated using the MCNP mesh tally method. The neutron flux map obtained reveals the heterogeneous configuration of the core. Based on the measurement and simulation, the thermal flux profile peaks at the centre of the core and gradually decreases towards its outer side. The results show relatively good agreement between calculation and measurement, with both giving the same radial thermal flux profile inside the core; the MCNP model overestimates the flux, with a maximum discrepancy of around 20% relative to the SPND measurement. As the model also predicts the neutron flux distribution in the core well, it can be used for characterization of the full core, that is, neutron flux and spectrum calculations, dose rate calculations, reaction rate calculations, etc.
A hybrid algorithm for parallel molecular dynamics simulations
NASA Astrophysics Data System (ADS)
Mangiardi, Chris M.; Meyer, R.
2017-10-01
This article describes algorithms for the hybrid parallelization and SIMD vectorization of molecular dynamics simulations with short-range forces. The parallelization method combines domain decomposition with a thread-based parallelization approach. The goal of the work is to enable efficient simulations of very large (tens of millions of atoms) and inhomogeneous systems on many-core processors with hundreds or thousands of cores and SIMD units with large vector sizes. In order to test the efficiency of the method, simulations of a variety of configurations with up to 74 million atoms have been performed. Results are shown that were obtained on multi-core systems with Sandy Bridge and Haswell processors as well as systems with Xeon Phi many-core processors.
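A compact way to see the decomposition idea: bin particles into cells no smaller than the force cutoff, so evaluation only compares a cell with its immediate neighbors, and each cell's pair list becomes an independent unit of work to distribute over MPI ranks and threads. A toy non-periodic 2-D sketch (ours, not the article's code):

    import numpy as np

    L, rc, n = 20.0, 1.0, 2000               # box size, cutoff, particle count
    rng = np.random.default_rng(2)
    pos = rng.uniform(0, L, size=(n, 2))

    ncell = int(L // rc)                      # cells at least as large as the cutoff
    size = L / ncell
    cells = {}
    for i, p in enumerate(pos):
        key = tuple(np.minimum((p // size).astype(int), ncell - 1))
        cells.setdefault(key, []).append(i)

    def cell_pairs(cell):
        """Pairs within the cutoff involving this cell and its 8 neighbors."""
        cx, cy = cell
        pairs = []
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                other = (cx + dx, cy + dy)
                if not (0 <= other[0] < ncell and 0 <= other[1] < ncell):
                    continue
                for i in cells.get(cell, []):
                    for j in cells.get(other, []):
                        # i < j counts each unordered pair exactly once overall
                        if i < j and np.sum((pos[i] - pos[j]) ** 2) < rc * rc:
                            pairs.append((i, j))
        return pairs

    npairs = sum(len(cell_pairs(c)) for c in cells)
    print("pairs within cutoff:", npairs)

Production codes add periodic boundaries, 3-D cells, SIMD-friendly data layouts, and load balancing across threads; none of that is attempted here.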
A review of training research and virtual reality simulators for the da Vinci surgical system.
Liu, May; Curet, Myriam
2015-01-01
PHENOMENON: Virtual reality simulators are the subject of several recent studies of skills training for robot-assisted surgery. Yet no consensus exists regarding what a core skill set comprises or how to measure skill performance. Defining a core skill set and relevant metrics would help surgical educators evaluate different simulators. This review draws from published research to propose a core technical skill set for using the da Vinci surgeon console. Publications on three commercial simulators were used to evaluate the simulators' content addressing these skills and associated metrics. An analysis of published research suggests that a core technical skill set for operating the surgeon console includes bimanual wristed manipulation, camera control, master clutching to manage hand position, use of the third instrument arm, activating energy sources, appropriate depth perception, and awareness of forces applied by instruments. Validity studies of three commercial virtual reality simulators for robot-assisted surgery suggest that all three have comparable content and metrics. However, none has comprehensive content and metrics for all core skills. INSIGHTS: Virtual reality simulation remains a promising tool to support skill training for robot-assisted surgery, yet existing commercial simulator content is inadequate for performing and assessing a comprehensive basic skill set. The results of this evaluation help identify opportunities and challenges that exist for future developments in virtual reality simulation for robot-assisted surgery. Specifically, the inclusion of educational experts in the development cycle alongside clinical and technological experts is recommended.
NASA Astrophysics Data System (ADS)
Valasek, Lukas; Glasa, Jan
2017-12-01
Current fire simulation systems are capable of exploiting the advantages of available high-performance computing (HPC) platforms and of modelling fires efficiently in parallel. In this paper, the efficiency of a corridor fire simulation on an HPC computer cluster is discussed. The parallel MPI version of the Fire Dynamics Simulator is used to test the efficiency of selected strategies for allocating the cluster's computational resources when a greater number of computational cores is used. Simulation results indicate that when the number of cores used is not a multiple of the number of cores per cluster node, some allocation strategies provide more efficient calculations than others.
NASA Astrophysics Data System (ADS)
Nishiura, Daisuke; Furuichi, Mikito; Sakaguchi, Hide
2015-09-01
The computational performance of a smoothed particle hydrodynamics (SPH) simulation is investigated for three types of current shared-memory parallel computer devices: many integrated core (MIC) processors, graphics processing units (GPUs), and multi-core CPUs. We are especially interested in efficient shared-memory allocation methods for each chipset, because the efficient data access patterns differ between compute unified device architecture (CUDA) programming for GPUs and OpenMP programming for MIC processors and multi-core CPUs. We first introduce several parallel implementation techniques for the SPH code, and then examine these on our target computer architectures to determine the most effective algorithms for each processor unit. In addition, we evaluate the effective computing performance and power efficiency of the SPH simulation on each architecture, as these are critical metrics for overall performance in a multi-device environment. In our benchmark test, the GPU is found to produce the best arithmetic performance as a standalone device unit, and gives the most efficient power consumption. The multi-core CPU obtains the most effective computing performance. The computational speed of the MIC processor on Xeon Phi approached that of two Xeon CPUs. This indicates that using MICs is an attractive choice for existing SPH codes on multi-core CPUs parallelized by OpenMP, as it gains computational acceleration without the need for significant changes to the source code.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robertson, Sean; Dewan, Leslie; Massie, Mark
This report presents results from a collaboration between Transatomic Power Corporation (TAP) and Oak Ridge National Laboratory (ORNL) to provide neutronic and fuel cycle analysis of the TAP core design through the Department of Energy Gateway for Accelerated Innovation in Nuclear (GAIN) Nuclear Energy Voucher program. The TAP concept is a molten salt reactor using configurable zirconium hydride moderator rod assemblies to shift the neutron spectrum in the core from mostly epithermal at beginning of life to thermal at end of life. Additional developments in the ChemTriton modeling and simulation tool provide the critical moderator-to-fuel ratio searches and time-dependent parameters necessary to simulate the continuously changing physics in this complex system. The implementation of continuous-energy Monte Carlo transport and depletion tools in ChemTriton provides for full-core three-dimensional modeling and simulation. Results from simulations with these tools show agreement with TAP-calculated performance metrics for core lifetime, discharge burnup, and salt volume fraction, verifying the viability of reducing actinide waste production with this concept. Additional analyses of mass feed rates and enrichments, isotopic removals, tritium generation, core power distribution, core vessel helium generation, moderator rod heat deposition, and reactivity coefficients provide additional information to make informed design decisions. This work demonstrates capabilities of ORNL modeling and simulation tools for neutronic and fuel cycle analysis of molten salt reactor concepts.
A study of the required Rayleigh number to sustain dynamo with various inner core radius
NASA Astrophysics Data System (ADS)
Nishida, Y.; Katoh, Y.; Matsui, H.; Kumamoto, A.
2017-12-01
It is widely accepted that the geomagnetic field is sustained by thermally and compositionally driven convection of a liquid iron alloy in the outer core. The generation process of the geomagnetic field has been studied through a number of MHD dynamo simulations. Recent studies of the Earth's core evolution suggest that the ratio of the inner solid core radius ri to the outer liquid core radius ro changed from ri/ro = 0 to 0.35 during the last one billion years. There are some studies of the dynamo in the early Earth, when the inner core was smaller than at present. Heimpel et al. (2005) determined from simulations the Rayleigh number Ra at the onset of the dynamo process as a function of ri/ro, while paleomagnetic observations show that the geomagnetic field has been sustained for 3.5 billion years. While Heimpel and Evans (2013) studied dynamo processes taking into account the thermal history of the Earth's interior, there were few cases corresponding to the early Earth. Driscoll (2016) performed a series of dynamo simulations based on a thermal evolution model. Despite the number of dynamo simulations, the dynamo process occurring in the interior of the early Earth has not been fully understood, because the magnetic Prandtl numbers in these simulations are much larger than that of the actual outer core. In the present study, we performed thermally driven dynamo simulations with different aspect ratios ri/ro = 0.15, 0.25 and 0.35 to evaluate the critical Ra for thermal convection and the Ra required to maintain the dynamo. For this purpose, we performed simulations with various Ra and fixed the other control parameters, such as the Ekman, Prandtl, and magnetic Prandtl numbers. For the initial and boundary conditions, we followed dynamo benchmark case 1 of Christensen et al. (2001). The results show that the critical Ra increases as the aspect ratio ri/ro decreases. It is confirmed that a larger buoyancy amplitude is required to maintain the dynamo for a smaller inner core.
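Operationally, "the critical Ra" in such parameter surveys is located by running simulations at several Ra and bracketing the onset. A schematic bisection driver (the dynamo run itself is replaced by a made-up threshold; everything here is illustrative):

    def dynamo_is_sustained(Ra, aspect):
        """Placeholder for a full MHD dynamo run at (Ra, ri/ro): True if the
        magnetic field is sustained. The threshold is fictitious, chosen only
        to mimic 'critical Ra rises as ri/ro shrinks'."""
        return Ra > 100.0 / aspect

    def critical_Ra(aspect, lo=1.0, hi=1e4, iters=40):
        for _ in range(iters):
            mid = 0.5 * (lo + hi)
            if dynamo_is_sustained(mid, aspect):
                hi = mid            # sustained: onset lies at or below mid
            else:
                lo = mid            # decayed: onset lies above mid
        return 0.5 * (lo + hi)

    for aspect in (0.15, 0.25, 0.35):
        print(f"ri/ro = {aspect:.2f}: estimated onset Ra ~ {critical_Ra(aspect):.1f}")

In practice each probe of dynamo_is_sustained is a long simulation, so the bracket is refined with far fewer, carefully chosen runs than a blind bisection would use.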
VERA Core Simulator Methodology for PWR Cycle Depletion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kochunas, Brendan; Collins, Benjamin S; Jabaay, Daniel
2015-01-01
This paper describes the methodology developed and implemented in MPACT for performing high-fidelity pressurized water reactor (PWR) multi-cycle core physics calculations. MPACT is being developed primarily for application within the Consortium for the Advanced Simulation of Light Water Reactors (CASL) as one of the main components of the VERA Core Simulator, the others being COBRA-TF and ORIGEN. The methods summarized in this paper include a methodology for performing resonance self-shielding and computing macroscopic cross sections, 2-D/1-D transport, nuclide depletion, thermal-hydraulic feedback, and other supporting methods. These methods represent a minimal set needed to simulate high-fidelity models of a realistic nuclear reactor. Results demonstrating this are presented from the simulation of a realistic model of the first cycle of Watts Bar Unit 1. The simulation, which approximates the cycle operation, is observed to be within 50 ppm boron (ppmB) reactivity for all simulated points in the cycle and approximately 15 ppmB for a consistent statepoint. The verification and validation of the PWR cycle depletion capability in MPACT is the focus of two companion papers.
Brown, Cameron S.; Zhang, Hongbin; Kucukboyaci, Vefa; ...
2016-09-07
VERA-CS (Virtual Environment for Reactor Applications, Core Simulator) is a coupled neutron transport and thermal-hydraulics subchannel code under development by the Consortium for Advanced Simulation of Light Water Reactors (CASL). VERA-CS was used to simulate a typical pressurized water reactor (PWR) full core response with 17x17 fuel assemblies for a main steam line break (MSLB) accident scenario with the most reactive rod cluster control assembly stuck out of the core. The accident scenario was initiated at hot zero power (HZP) at the end of the first fuel cycle, with return-to-power state points determined by a system analysis code; the most limiting state point was chosen for core analysis. The best estimate plus uncertainty (BEPU) analysis method was applied using Wilks' nonparametric statistical approach. In this way, 59 full core simulations were performed to provide the minimum departure from nucleate boiling ratio (MDNBR) at the 95/95 (95% probability with 95% confidence level) tolerance limit. The results show that this typical PWR core remains within MDNBR safety limits for the MSLB accident.
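The number 59 comes from Wilks' first-order, one-sided formula: with N runs, the sample extreme bounds the 95th percentile with confidence 1 - 0.95^N, and N = 59 is the smallest N that pushes this confidence to 95%. A quick check (standard formula, not project code):

    import math

    def wilks_first_order_n(q=0.95, beta=0.95):
        """Smallest N with 1 - q**N >= beta (one-sided, first order)."""
        return math.ceil(math.log(1.0 - beta) / math.log(q))

    print(wilks_first_order_n())   # -> 59

For the MDNBR application the same bound is applied to the sample minimum, which is why exactly 59 full-core simulations suffice for a 95/95 statement.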
Theoretical Models of Protostellar Binary and Multiple Systems with AMR Simulations
NASA Astrophysics Data System (ADS)
Matsumoto, Tomoaki; Tokuda, Kazuki; Onishi, Toshikazu; Inutsuka, Shu-ichiro; Saigo, Kazuya; Takakuwa, Shigehisa
2017-05-01
We present theoretical models for protostellar binary and multiple systems based on high-resolution numerical simulations with an adaptive mesh refinement (AMR) code, SFUMATO. Recent ALMA observations have revealed the early phases of binary and multiple star formation at high spatial resolution, and these observations should be compared with theoretical models of correspondingly high spatial resolution. We present two theoretical models: (1) a high-density molecular cloud core, MC27/L1521F, and (2) a protobinary system, L1551 NE. For the MC27 model, we performed numerical simulations of the gravitational collapse of a turbulent cloud core. The cloud core exhibits fragmentation during the collapse, and dynamical interaction between the fragments produces an arc-like structure, which is one of the prominent structures observed by ALMA. For the L1551 NE model, we performed numerical simulations of gas accretion onto the protobinary. The simulations exhibit asymmetry of the circumbinary disk. Such asymmetry has also been observed by ALMA in the circumbinary disk of L1551 NE.
Multi-Physics Demonstration Problem with the SHARP Reactor Simulation Toolkit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Merzari, E.; Shemon, E. R.; Yu, Y. Q.
This report describes the use of SHARP to perform a first-of-a-kind analysis of the core radial expansion phenomenon in an SFR. This effort required significant advances in the framework used to drive the coupled simulations, manipulate the mesh in response to the deformation of the geometry, and generate the necessary modified mesh files. Furthermore, the model geometry is fairly complex, and consistent mesh generation for the three physics modules required significant effort. Fully-integrated simulations of a 7-assembly mini-core test problem have been performed, and the results are presented here. Physics models of a full-core model of the Advanced Burner Test Reactor have also been developed for each of the three physics modules. Standalone results of each of the three physics modules for the ABTR are presented here, which provides a demonstration of the feasibility of the fully-integrated simulation.
NASA Astrophysics Data System (ADS)
Chen, Xiuhong; Huang, Xianglei; Jiao, Chaoyi; Flanner, Mark G.; Raeker, Todd; Palen, Brock
2017-01-01
The suites of numerical models used for simulating the climate of our planet are usually run on dedicated high-performance computing (HPC) resources. This study investigates an alternative to the usual approach: carrying out climate model simulations in a commercially available cloud computing environment. We test the performance and reliability of running the CESM (Community Earth System Model), a flagship climate model in the United States developed by the National Center for Atmospheric Research (NCAR), on Amazon Web Services (AWS) EC2, the cloud computing environment of Amazon.com, Inc. StarCluster is used to create a virtual computing cluster on AWS EC2 for the CESM simulations. The wall-clock time for one year of CESM simulation on the AWS EC2 virtual cluster is comparable to the time spent for the same simulation on a local dedicated high-performance computing cluster with InfiniBand connections. The CESM simulation can be efficiently scaled with the number of CPU cores on the AWS EC2 virtual cluster environment up to 64 cores. For the standard configuration of the CESM at a spatial resolution of 1.9° latitude by 2.5° longitude, increasing the number of cores from 16 to 64 reduces the wall-clock running time by more than 50%, and the scaling is nearly linear. Beyond 64 cores, the communication latency starts to outweigh the benefit of distributed computing and the parallel speedup plateaus.
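The scaling statements above reduce to the usual speedup and parallel-efficiency metrics. With hypothetical wall-clock numbers shaped like the reported behavior (near-ideal gains to 64 cores, a plateau beyond; these are not the study's measurements), the computation looks like:

    # cores -> wall-clock hours per simulated year (invented, illustrative numbers)
    timings = {16: 10.0, 32: 5.4, 64: 4.2, 128: 4.1}
    p0 = 16                                   # baseline core count
    t0 = timings[p0]
    for p in sorted(timings):
        speedup = t0 / timings[p]             # S(p) = T(p0) / T(p)
        efficiency = speedup / (p / p0)       # E(p) = S(p) / (p / p0)
        print(f"{p:4d} cores: speedup {speedup:5.2f}, efficiency {efficiency:4.0%}")

The efficiency column is what exposes the point where communication latency overtakes the benefit of adding cores.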
Adaptive control method for core power control in TRIGA Mark II reactor
NASA Astrophysics Data System (ADS)
Sabri Minhat, Mohd; Selamat, Hazlina; Subha, Nurul Adilla Mohd
2018-01-01
The 1MWth Reactor TRIGA PUSPATI (RTP) Mark II type has undergone more than 35 years of operation. The existing core power control uses feedback control algorithm (FCA). It is challenging to keep the core power stable at the desired value within acceptable error bands to meet the safety demand of RTP due to the sensitivity of nuclear research reactor operation. Currently, the system is not satisfied with power tracking performance and can be improved. Therefore, a new design core power control is very important to improve the current performance in tracking and regulate reactor power by control the movement of control rods. In this paper, the adaptive controller and focus on Model Reference Adaptive Control (MRAC) and Self-Tuning Control (STC) were applied to the control of the core power. The model for core power control was based on mathematical models of the reactor core, adaptive controller model, and control rods selection programming. The mathematical models of the reactor core were based on point kinetics model, thermal hydraulic models, and reactivity models. The adaptive control model was presented using Lyapunov method to ensure stable close loop system and STC Generalised Minimum Variance (GMV) Controller was not necessary to know the exact plant transfer function in designing the core power control. The performance between proposed adaptive control and FCA will be compared via computer simulation and analysed the simulation results manifest the effectiveness and the good performance of the proposed control method for core power control.
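For readers unfamiliar with the core model named above, a one-delayed-group point-kinetics sketch shows the plant such a controller acts on through rod-induced reactivity (parameters are generic illustrations, not RTP's):

    beta, lam, Lam = 0.0065, 0.08, 4.0e-5   # delayed fraction, precursor decay (1/s),
                                            # neutron generation time (s)

    def step(n, c, rho, dt=1.0e-3):
        """One explicit-Euler step of point kinetics: n = power, c = precursors."""
        dn = (rho - beta) / Lam * n + lam * c
        dc = beta / Lam * n - lam * c
        return n + dt * dn, c + dt * dc

    n, c = 1.0, beta / (lam * Lam)          # steady state normalized to n = 1
    rho = 0.0005                            # small reactivity step (rods withdrawn)
    for _ in range(5000):                   # 5 s of transient
        n, c = step(n, c, rho)
    print(f"relative power after 5 s: {n:.3f}")

A power controller closes the loop by turning the tracking error into a rod-position (and hence reactivity) command; the adaptive schemes in the paper adjust that law online as the plant behavior drifts.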
Team Culture and Business Strategy Simulation Performance
ERIC Educational Resources Information Center
Ritchie, William J.; Fornaciari, Charles J.; Drew, Stephen A. W.; Marlin, Dan
2013-01-01
Many capstone strategic management courses use computer-based simulations as core pedagogical tools. Simulations are touted as assisting students in developing much-valued skills in strategy formation, implementation, and team management in the pursuit of superior strategic performance. However, despite their rich nature, little is known regarding…
Damaris: Addressing performance variability in data management for post-petascale simulations
Dorier, Matthieu; Antoniu, Gabriel; Cappello, Franck; ...
2016-10-01
With exascale computing on the horizon, reducing performance variability in data management tasks (storage, visualization, analysis, etc.) is becoming a key challenge in sustaining high performance. This variability significantly impacts the overall application performance at scale and its predictability over time. In this article, we present Damaris, a system that leverages dedicated cores in multicore nodes to offload data management tasks, including I/O, data compression, scheduling of data movements, in situ analysis, and visualization. We evaluate Damaris with the CM1 atmospheric simulation and the Nek5000 computational fluid dynamics simulation on four platforms, including NICS's Kraken and NCSA's Blue Waters. Our results show that (1) Damaris fully hides the I/O variability as well as all I/O-related costs, thus making simulation performance predictable; (2) it increases the sustained write throughput by a factor of up to 15 compared with standard I/O approaches; (3) it allows almost perfect scalability of the simulation up to over 9,000 cores, as opposed to state-of-the-art approaches that fail to scale; and (4) it enables a seamless connection to the VisIt visualization software to perform in situ analysis and visualization in a way that impacts neither the performance of the simulation nor its variability. In addition, we extended our implementation of Damaris to also support the use of dedicated nodes and conducted a thorough comparison of the two approaches, dedicated cores and dedicated nodes, for I/O tasks with the aforementioned applications.
Neural simulations on multi-core architectures.
Eichner, Hubert; Klug, Tobias; Borst, Alexander
2009-01-01
Neuroscience is witnessing increasing knowledge about the anatomy and electrophysiological properties of neurons and their connectivity, leading to an ever increasing computational complexity of neural simulations. At the same time, a rather radical change in personal computer technology emerges with the establishment of multi-cores: high-density, explicitly parallel processor architectures for both high performance as well as standard desktop computers. This work introduces strategies for the parallelization of biophysically realistic neural simulations based on the compartmental modeling technique and results of such an implementation, with a strong focus on multi-core architectures and automation, i.e. user-transparent load balancing.
Particle-in-cell simulation study on halo formation in anisotropic beams
NASA Astrophysics Data System (ADS)
Ikegami, Masanori
2000-11-01
In a recent paper (M. Ikegami, Nucl. Instr. and Meth. A 435 (1999) 284), we investigated halo formation processes in transversely anisotropic beams based on the particle-core model. The effect of simultaneous excitation of two normal modes of core oscillation, i.e., high- and low-frequency modes, was examined. In the present study, self-consistent particle simulations are performed to confirm the results obtained in the particle-core analysis. In these simulations, it is confirmed that the particle-core analysis can predict the halo extent accurately even in anisotropic situations. Furthermore, we find that the halo intensity is enhanced in some cases where two normal modes of core oscillation are simultaneously excited as expected in the particle-core analysis. This result is of practical importance because pure high-frequency mode oscillation has frequently been assumed in preceding halo studies. The dependence of halo intensity on the 2:1 fixed point locations is also discussed.
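The particle-core model referred to above can be sketched in a few lines: test particles in a continuous focusing channel, driven by the space-charge force of a core whose radius oscillates. The toy 1-D version below (our illustration; parameters are not those of the paper) shows how mismatch oscillations push test particles well beyond the core radius:

    import numpy as np

    k0, mu = 1.0, 0.8                 # external focusing; depressed-tune ratio
    a0, amp, om = 1.0, 0.2, 1.8       # matched radius, mismatch amplitude, mode frequency

    def core_radius(t):               # prescribed breathing oscillation of the core
        return a0 * (1.0 + amp * np.sin(om * t))

    def step(x, v, t, dt=0.01):       # symplectic Euler for x'' = -k0^2 x + F_sc(x)
        a = core_radius(t)
        # linear space-charge force inside a uniform core, 1/x fall-off outside
        sc = (1 - mu**2) * (x / a**2 if abs(x) < a else 1.0 / x)
        v = v + dt * (-k0**2 * x + k0**2 * sc)
        return x + dt * v, v

    excursions = []
    for x0 in np.linspace(0.1, 1.5, 30):   # test particles at varied initial amplitude
        x, v, t = x0, 0.0, 0.0
        xmax = abs(x0)
        for _ in range(20000):
            x, v = step(x, v, t)
            t += 0.01
            xmax = max(xmax, abs(x))
        excursions.append(xmax)
    print(f"largest test-particle excursion: {max(excursions):.2f} (core radius ~ {a0:.1f})")

The paper's anisotropic analysis generalizes this picture to two transverse planes with two normal modes of core oscillation excited simultaneously, which is what the self-consistent simulations then confirm.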
Multi-scale gyrokinetic simulations of an Alcator C-Mod, ELM-y H-mode plasma
NASA Astrophysics Data System (ADS)
Howard, N. T.; Holland, C.; White, A. E.; Greenwald, M.; Rodriguez-Fernandez, P.; Candy, J.; Creely, A. J.
2018-01-01
High fidelity, multi-scale gyrokinetic simulations capable of capturing both ion-scale (k_θ ρ_s ∼ O(1.0)) and electron-scale (k_θ ρ_e ∼ O(1.0)) turbulence were performed in the core of an Alcator C-Mod ELM-y H-mode discharge which exhibits reactor-relevant characteristics. These simulations, performed with all experimental inputs and realistic ion-to-electron mass ratio ((m_i/m_e)^{1/2} = 60.0), provide insight into the physics fidelity that may be needed for accurate simulation of the core of fusion reactor discharges. Three multi-scale simulations and a series of separate ion- and electron-scale simulations performed using the GYRO code (Candy and Waltz 2003 J. Comput. Phys. 186 545) are presented. As with earlier multi-scale results in L-mode conditions (Howard et al 2016 Nucl. Fusion 56 014004), both the ion-scale and multi-scale simulation results are compared with experimentally inferred ion and electron heat fluxes, as well as the measured values of electron incremental thermal diffusivities, indicative of the experimental electron temperature profile stiffness. Consistent with the L-mode results, cross-scale coupling is found to play an important role in the simulation of these H-mode conditions. Extremely stiff ion-scale transport is observed in these high-performance conditions, which is shown to likely play an important role in the reproduction of measurements of perturbative transport. These results provide important insight into the role of multi-scale plasma turbulence in the core of reactor-relevant plasmas and establish important constraints on the fidelity of models needed for predictive simulations.
Shen, Wenfeng; Wei, Daming; Xu, Weimin; Zhu, Xin; Yuan, Shizhong
2010-10-01
Biological computations like electrocardiological modelling and simulation usually require high-performance computing environments. This paper introduces an implementation of parallel computation for computer simulation of electrocardiograms (ECGs) in a personal computer environment with an Intel CPU of Core (TM) 2 Quad Q6600 and a GPU of Geforce 8800GT, with software support by OpenMP and CUDA. It was tested in three parallelization device setups: (a) a four-core CPU without a general-purpose GPU, (b) a general-purpose GPU plus 1 core of CPU, and (c) a four-core CPU plus a general-purpose GPU. To effectively take advantage of a multi-core CPU and a general-purpose GPU, an algorithm based on load-prediction dynamic scheduling was developed and applied to setting (c). In the simulation with 1600 time steps, the speedup of the parallel computation as compared to the serial computation was 3.9 in setting (a), 16.8 in setting (b), and 20.0 in setting (c). This study demonstrates that a current PC with a multi-core CPU and a general-purpose GPU provides a good environment for parallel computations in biological modelling and simulation studies.
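Load-prediction dynamic scheduling of the kind named above can be illustrated with a toy two-device splitter: each batch of work is divided in proportion to the throughput each device achieved on earlier batches. This is our simplified rendering, not the paper's algorithm:

    import time

    def run(speed, n_items):
        """Pretend device kernel; returns the 'measured' execution time."""
        t = n_items / speed * 1e-4
        time.sleep(t)
        return t

    cpu_speed, gpu_speed = 4.0, 16.0       # true throughputs, unknown to the scheduler
    est = {"cpu": 1.0, "gpu": 1.0}         # initial throughput estimates
    for batch in range(5):
        total = 10000
        share_gpu = est["gpu"] / (est["gpu"] + est["cpu"])
        n_gpu = int(total * share_gpu)
        n_cpu = total - n_gpu
        t_gpu, t_cpu = run(gpu_speed, n_gpu), run(cpu_speed, n_cpu)
        est["gpu"], est["cpu"] = n_gpu / t_gpu, n_cpu / t_cpu   # refine predictions
        print(f"batch {batch}: GPU share {share_gpu:.2f}, times {t_gpu:.3f}/{t_cpu:.3f}")

After one batch the split converges to the 4:1 throughput ratio, so both devices finish nearly simultaneously, which is the goal of the scheduling scheme.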
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collins, Benjamin S.; Hamilton, Steven P.; Jarrett, Michael G.
This report describes the performance improvements made to the VERA Core Simulator (VERA-CS) during FY2016. The development of the VERA Core Simulator has focused on the capability needed to deplete physical reactors and help solve various problems; this capability required the accurate simulation of many operating cycles of a nuclear power plant. The first section of this report introduces two test problems used to assess the run-time performance of VERA-CS using a source dated February 2016. The next section provides a brief overview of the major modifications made to decrease the computational cost. Following the descriptions of the major improvements, the run-time for each improvement is shown. Conclusions on the work are presented, and further follow-on performance improvements are suggested.
An approach for coupled-code multiphysics core simulations from a common input
Schmidt, Rodney; Belcourt, Kenneth; Hooper, Russell; ...
2014-12-10
This study describes an approach for coupled-code multiphysics reactor core simulations that is being developed by the Virtual Environment for Reactor Applications (VERA) project in the Consortium for Advanced Simulation of Light-Water Reactors (CASL). In this approach a user creates a single problem description, called the "VERAIn" common input file, to define and set up the desired coupled-code reactor core simulation. A preprocessing step accepts the VERAIn file and generates a set of fully consistent input files for the different physics codes being coupled. The problem is then solved using a single-executable coupled-code simulation tool applicable to the problem, which is built using VERA infrastructure software tools and the set of physics codes required for the problem of interest. The approach is demonstrated by performing an eigenvalue and power distribution calculation of a typical three-dimensional 17 × 17 assembly with thermal–hydraulic and fuel temperature feedback. All neutronics aspects of the problem (cross-section calculation, neutron transport, power release) are solved using the Insilico code suite and are fully coupled to a thermal–hydraulic analysis calculated by the Cobra-TF (CTF) code. The single-executable coupled-code (Insilico-CTF) simulation tool is created using several VERA tools, including LIME (Lightweight Integrating Multiphysics Environment for coupling codes), DTK (Data Transfer Kit), Trilinos, and TriBITS. Parallel calculations are performed on the Titan supercomputer at Oak Ridge National Laboratory using 1156 cores, and a synopsis of the solution results and code performance is presented. Finally, ongoing development of this approach is briefly described.
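The common-input idea is that one user description is expanded into consistent decks for every coupled code, so the physics inputs cannot drift apart. A conceptual sketch (all file names, keys, and values are made up; this is not the VERAIn format):

    import json

    common_input = {                       # stand-in for a "VERAIn"-like description
        "assembly": {"lattice": "17x17", "enrichment_pct": 3.1},
        "state": {"power_MW": 17.7, "inlet_temp_K": 565.0},
    }

    def to_neutronics(ci):                 # subset/reshape for the transport code
        return {"lattice": ci["assembly"]["lattice"],
                "enr": ci["assembly"]["enrichment_pct"],
                "power": ci["state"]["power_MW"]}

    def to_thermal_hydraulics(ci):         # subset/reshape for the subchannel code
        return {"geometry": ci["assembly"]["lattice"],
                "power": ci["state"]["power_MW"],
                "t_inlet": ci["state"]["inlet_temp_K"]}

    # Both decks derive from one source, so shared quantities stay consistent.
    with open("neutronics.json", "w") as f:
        json.dump(to_neutronics(common_input), f, indent=2)
    with open("th.json", "w") as f:
        json.dump(to_thermal_hydraulics(common_input), f, indent=2)

The real preprocessing step emits native input formats for Insilico and CTF rather than JSON, but the single-source-of-truth structure is the same.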
NASA Astrophysics Data System (ADS)
Greynolds, Alan W.
2013-09-01
Results from the GelOE optical engineering software are presented for the through-focus, monochromatic coherent and polychromatic incoherent imaging of a radial "star" target for equivalent t-number circular and Gaussian pupils. The FFT-based simulations are carried out using OpenMP threading on a multi-core desktop computer, with and without the aid of a many-core NVIDIA GPU accessing its cuFFT library. It is found that a custom FFT optimized for the 12-core host has similar performance to a simply implemented 256-core GPU FFT. A more sophisticated version of the latter but tuned to reduce overhead on a 448-core GPU is 20 to 28 times faster than a basic FFT implementation running on one CPU core.
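The FFT-based imaging at the heart of such a benchmark is compact to write down. A minimal NumPy sketch (illustrative only, not GelOE): the incoherent point-spread function of a circular pupil with a quadratic defocus phase, evaluated via one inverse FFT per focus position.

    import numpy as np

    n, na = 512, 0.25                        # grid size; pupil radius in frequency units
    fx = np.fft.fftfreq(n)
    FX, FY = np.meshgrid(fx, fx)
    R2 = (FX**2 + FY**2) / na**2             # squared pupil coordinate, 1 at the rim
    aperture = R2 <= 1.0

    def psf(defocus_waves):
        """Incoherent PSF of the circular pupil with peak defocus at the rim."""
        pupil = aperture * np.exp(1j * 2 * np.pi * defocus_waves * R2)
        return np.abs(np.fft.ifft2(pupil)) ** 2

    print("Strehl ratio at 0.25 waves defocus: %.2f" %
          (psf(0.25).max() / psf(0.0).max()))

Through-focus, polychromatic imaging of an extended target repeats such transforms over many focus and wavelength samples, which is why batched FFT throughput (threaded CPU FFTs versus cuFFT) dominates the timing comparison reported above.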
The ab initio simulation of the Earth's core.
Alfè, D; Gillan, M J; Vocadlo, L; Brodholt, J; Price, G D
2002-06-15
The Earth has a liquid outer and solid inner core. It is predominantly composed of Fe, alloyed with small amounts of light elements, such as S, O and Si. The detailed chemical and thermal structure of the core is poorly constrained, and it is difficult to perform experiments to establish the properties of core-forming phases at the pressures (ca. 300 GPa) and temperatures (ca. 5000-6000 K) to be found in the core. Here we present some major advances that have been made in using quantum mechanical methods to simulate the high-P/T properties of Fe alloys, which have been made possible by recent developments in high-performance computing. Specifically, we outline how we have calculated the Gibbs free energies of the crystalline and liquid forms of Fe alloys, and so conclude that the inner core of the Earth is composed of hexagonal close packed Fe containing ca. 8.5% S (or Si) and 0.2% O in equilibrium at 5600 K at the boundary between the inner and outer cores with a liquid Fe containing ca. 10% S (or Si) and 8% O.
NASA Technical Reports Server (NTRS)
Giffin, R. G.; Mcfalls, R. A.; Beacher, B. F.
1977-01-01
The fan aerodynamic and aeromechanical performance tests of the quiet clean short-haul experimental engine (QCSEE) under-the-wing fan and inlet with a simulated core flow are described. Overall forward-mode fan performance is presented at each rotor pitch angle setting with conventional flow, pressure ratio, and efficiency fan maps, distinguishing the performance characteristics of the fan bypass and fan core regions. Effects of off-design bypass ratio, hybrid inlet geometry, and tip radial inlet distortion on fan performance are determined. The nonaxisymmetric bypass OGV and pylon configuration is assessed relative to both total pressure loss and induced circumferential flow distortion. Reverse-mode performance, obtained by resetting the rotor blades through both the stall-pitch and flat-pitch directions, is discussed in terms of the conventional flow/pressure-ratio relationship and its implications for achievable reverse thrust. Core performance in reverse-mode operation is presented in terms of overall recovery levels and radial profiles existing at the simulated core inlet plane. Observations of the starting phenomena associated with the initiation of stable rotor flow during acceleration in the reverse mode are briefly discussed. Aeromechanical response characteristics of the fan blades are presented in a separate appendix, along with a description of the vehicle instrumentation and method of data reduction.
Cost efficient CFD simulations: Proper selection of domain partitioning strategies
NASA Astrophysics Data System (ADS)
Haddadi, Bahram; Jordan, Christian; Harasek, Michael
2017-10-01
Computational Fluid Dynamics (CFD) is one of the most powerful simulation methods, used for temporally and spatially resolved solutions of fluid flow, heat transfer, mass transfer, etc. One of the challenges of CFD is its extreme hardware demand. Nowadays, high-performance computing (HPC) clusters featuring many CPU cores are applied for solving such problems: the simulation domain is split into partitions, one for each core. Some of the available partitioning methods are investigated in this paper. As a practical example, a new open-source-based solver was utilized for simulating packed bed adsorption, a common separation method within the field of thermal process engineering. Adsorption can, for example, be applied for the removal of trace gases from a gas stream or for the production of pure gases such as hydrogen. For comparing the performance of the partitioning methods, a 60 million cell mesh for a packed bed of spherical adsorbents was created, and one second of the adsorption process was simulated. Different partitioning methods available in OpenFOAM® (Scotch, Simple, and Hierarchical) were used with different numbers of sub-domains. The effect of the different methods and of the number of processor cores on the simulation speedup and on energy consumption was investigated for two different hardware infrastructures (Vienna Scientific Clusters VSC 2 and VSC 3). As a general recommendation, an optimum number of cells per processor core was calculated. Optimized simulation speed, lower energy consumption, and consequently the cost effects are reported here.
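Taking the "optimum number of cells per core" idea at face value, a resource request can be derived mechanically. A back-of-the-envelope helper (all numbers assumed for illustration, not the paper's recommendation):

    def suggest_cores(n_cells, cells_per_core_opt=250_000, cores_per_node=16):
        """Pick a core count near n_cells / optimum, rounded to whole nodes."""
        raw = max(1, round(n_cells / cells_per_core_opt))
        nodes = max(1, round(raw / cores_per_node))   # avoid partially used nodes
        return nodes * cores_per_node

    print(suggest_cores(60_000_000))   # e.g. a 60-million-cell mesh -> 240 cores

Rounding to whole nodes reflects the paper's observation that requesting a core count that is not a multiple of the node size makes the choice of allocation strategy matter.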
NASA Astrophysics Data System (ADS)
Curilla, L.; Astrauskas, I.; Pugzlys, A.; Stajanca, P.; Pysz, D.; Uherek, F.; Baltuska, A.; Bugar, I.
2018-05-01
We demonstrate ultrafast soliton-based nonlinear balancing of dual-core asymmetry in highly nonlinear photonic crystal fiber at sub-nanojoule pulse energy level. The effect of fiber asymmetry was studied experimentally by selective excitation and monitoring of individual fiber cores at different wavelengths between 1500 nm and 1800 nm. Higher energy transfer rate to non-excited core was observed in the case of fast core excitation due to nonlinear asymmetry balancing of temporal solitons, which was confirmed by the dedicated numerical simulations based on the coupled generalized nonlinear Schrödinger equations. Moreover, the simulation results correspond qualitatively with the experimentally acquired dependences of the output dual-core extinction ratio on excitation energy and wavelength. In the case of 1800 nm fast core excitation, narrow band spectral intensity switching between the output channels was registered with contrast of 23 dB. The switching was achieved by the change of the excitation pulse energy in sub-nanojoule region. The performed detailed analysis of the nonlinear balancing of dual-core asymmetry in solitonic propagation regime opens new perspectives for the development of ultrafast nonlinear all-optical switching devices.
Optimizing the Performance of Reactive Molecular Dynamics Simulations for Multi-core Architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aktulga, Hasan Metin; Coffman, Paul; Shan, Tzu-Ray
2015-12-01
Hybrid parallelism allows high performance computing applications to better leverage the increasing on-node parallelism of modern supercomputers. In this paper, we present a hybrid parallel implementation of the widely used LAMMPS/ReaxC package, where the construction of bonded and nonbonded lists and the evaluation of complex ReaxFF interactions are implemented efficiently using OpenMP parallelism. Additionally, the performance of the QEq charge equilibration scheme is examined and a dual-solver is implemented. We present the performance of the resulting ReaxC-OMP package on Mira, a state-of-the-art multi-core IBM Blue Gene/Q supercomputer. For system sizes ranging from 32 thousand to 16.6 million particles, speedups in the range of 1.5-4.5x are observed using the new ReaxC-OMP software. Sustained performance improvements have been observed for up to 262,144 cores (1,048,576 processes) of Mira, with a weak scaling efficiency of 91.5% in larger simulations containing 16.6 million particles.
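Weak scaling efficiency, the figure of merit quoted above, compares runs in which the work per core is held fixed; a minimal sketch (with invented timings) is:

    def weak_scaling_efficiency(t_small: float, t_large: float) -> float:
        # Ideal weak scaling keeps wall time constant as cores and work grow together.
        return t_small / t_large

    # A 91.5% efficiency corresponds to timings such as:
    print(weak_scaling_efficiency(100.0, 109.3))   # ~0.915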
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romander, C. M.; Cagliostro, D. J.
Five experiments were performed to help evaluate the structural integrity of the reactor vessel and head design and to verify code predictions. In the first experiment (SM 1), a detailed model of the head was loaded statically to determine its stiffness. In the remaining four experiments (SM 2 to SM 5), models of the vessel and head were loaded dynamically under a simulated 661 MW-s hypothetical core disruptive accident (HCDA). Models SM 2 to SM 4, each of increasing complexity, systematically showed the effects of upper internals structures, a thermal liner, core support platform, and torospherical bottom on vessel response. Model SM 5, identical to SM 4 but more heavily instrumented, demonstrated experimental reproducibility and provided more comprehensive data. The models consisted of a Ni 200 vessel and core barrel, a head with shielding and simulated component masses, an upper internals structure (UIS), and, in the more complex models SM 4 and SM 5, a Ni 200 thermal liner and core support structure. Water simulated the liquid sodium coolant and a low-density explosive simulated the HCDA loads.
DYNAMICO, an atmospheric dynamical core for high-performance climate modeling
NASA Astrophysics Data System (ADS)
Dubos, Thomas; Meurdesoif, Yann; Spiga, Aymeric; Millour, Ehouarn; Fita, Lluis; Hourdin, Frédéric; Kageyama, Masa; Traore, Abdoul-Khadre; Guerlet, Sandrine; Polcher, Jan
2017-04-01
Institut Pierre Simon Laplace has developed a very scalable atmospheric dynamical core, DYNAMICO, based on energy-conserving finite-difference/finite-volume numerics on a quasi-uniform icosahedral-hexagonal mesh. Scalability is achieved by combining hybrid MPI/OpenMP parallelism with asynchronous I/O. This dynamical core has been coupled to radiative transfer physics tailored to the atmosphere of Saturn, allowing unprecedented simulations of the climate of this giant planet. For terrestrial climate studies, DYNAMICO is being integrated into the IPSL Earth System Model IPSL-CM. Preliminary aquaplanet and AMIP-style simulations yield reasonable results when compared to outputs from IPSL-CM5. The observed performance suggests that an order of magnitude may be gained with respect to IPSL-CM CMIP5 simulations, either in the duration of simulations or in their resolution. Longer simulations would be of interest for the study of paleoclimate, while higher resolution could improve certain aspects of the modeled climate, such as extreme events, as will be explored in the HighResMIP project. Following IPSL's strategic vision of building a unified global-regional modelling system, a fully compressible, non-hydrostatic prototype of DYNAMICO has been developed, enabling future convection-resolving simulations. Work supported by ANR project "HEAT", grant number CE23_2014_HEAT. Dubos, T., Dubey, S., Tort, M., Mittal, R., Meurdesoif, Y., and Hourdin, F.: DYNAMICO-1.0, an icosahedral hydrostatic dynamical core designed for consistency and versatility, Geosci. Model Dev., 8, 3131-3150, doi:10.5194/gmd-8-3131-2015, 2015.
Excore Modeling with VERAShift
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pandya, Tara M.; Evans, Thomas M.
It is important to be able to accurately predict the neutron flux outside the immediate reactor core for a variety of safety and material analyses. Monte Carlo radiation transport calculations are required to produce high-fidelity excore responses. Under this milestone, VERA (specifically the VERAShift package) has been extended to perform excore calculations by running radiation transport calculations with Shift. This package couples VERA-CS with Shift to perform excore tallies for multiple state points concurrently, with each component capable of parallel execution on independent domains. Specifically, this package performs fluence calculations in the core barrel and vessel, or performs the requested tallies in any user-defined excore regions. VERAShift takes advantage of the general geometry package in Shift. This gives VERAShift the flexibility to explicitly model features outside the core barrel, including detailed vessel models, detectors, and power plant details. A very limited set of experimental and numerical benchmarks is available for excore simulation comparison. The Consortium for Advanced Simulation of Light Water Reactors (CASL) has developed a set of excore benchmark problems to include as part of the VERA-CS verification and validation (V&V) problems. The excore capability in VERAShift has been tested on small representative assembly problems, multi-assembly problems, and quarter-core problems. VERAView has also been extended to visualize the vessel fluence results from VERAShift. Preliminary vessel fluence results for quarter-core multistate calculations look very promising. Further development is needed to determine the details relevant to excore simulations. Validation of VERA for fluence and excore detectors still needs to be performed against experimental and numerical results.
Lee, Anthony; Yau, Christopher; Giles, Michael B.; Doucet, Arnaud; Holmes, Christopher C.
2011-01-01
We present a case study on the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods. Graphics cards, containing multiple Graphics Processing Units (GPUs), are self-contained parallel computational devices that can be housed in conventional desktop and laptop computers and can be thought of as prototypes of the next generation of many-core processors. For certain classes of population-based Monte Carlo algorithms they offer massively parallel simulation, with the added advantage over conventional distributed multi-core processors that they are cheap, easily accessible, easy to maintain, easy to code, dedicated local devices with low power consumption. On a canonical set of stochastic simulation examples including population-based Markov chain Monte Carlo methods and Sequential Monte Carlo methods, we find speedups from 35- to 500-fold over conventional single-threaded computer code. Our findings suggest that GPUs have the potential to facilitate the growth of statistical modelling into complex data-rich domains through the availability of cheap and accessible many-core computation. We believe the speedup we observe should motivate wider use of parallelizable simulation methods and greater methodological attention to their design. PMID:22003276
Kumar, Arunaz; Gilmour, Carole; Nestel, Debra; Aldridge, Robyn; McLelland, Gayle; Wallace, Euan
2014-12-01
Core clinical skills acquisition is an essential component of undergraduate medical and midwifery education. Although interprofessional education is an increasingly common format for learning efficient teamwork in clinical medicine, its value in undergraduate education is less clear. We present a collaborative effort from the medical and midwifery schools of Monash University, Melbourne, towards the development of an educational package centred on a core skills-based workshop using low-fidelity simulation models in an interprofessional setting. Detailed feedback on the package was positive with respect to the relevance of the teaching content, how well the topic was taught by the task trainers and simulation models used, the pitch of the level of teaching, and the perception of confidence gained in performing the skill on a real patient after attending the workshop. Overall, interprofessional core skills training using low-fidelity simulation models introduced at an undergraduate level in medicine and midwifery was well accepted. © 2014 The Royal Australian and New Zealand College of Obstetricians and Gynaecologists.
Reducing numerical costs for core wide nuclear reactor CFD simulations by the Coarse-Grid-CFD
NASA Astrophysics Data System (ADS)
Viellieber, Mathias; Class, Andreas G.
2013-11-01
Traditionally, complete nuclear reactor core simulations are performed with subchannel analysis codes that rely on experimental and empirical input. The Coarse-Grid-CFD (CGCFD) intends to replace this experimental or empirical input with CFD data. The reactor core consists of repetitive flow patterns, allowing the general approach of creating a parametrized model for one segment and composing many of those to obtain the entire reactor simulation. The method is based on a detailed, well-resolved CFD simulation of one representative segment. From this simulation we extract so-called parametrized volumetric forces, which close an otherwise strongly under-resolved, coarsely meshed model of the complete reactor setup. While this formulation accounts for forces created internally in the fluid, others, e.g., obstruction and flow deviation caused by spacers and wire wraps, still need to be accounted for if the geometric details are not represented in the coarse mesh. These are modelled with an Anisotropic Porosity Formulation (APF). This work focuses on the application of the CGCFD to a complete reactor core setup and on accomplishing the parametrization of the volumetric forces.
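A minimal sketch of the closure idea, with hypothetical numbers and a power-law fit standing in for the real parametrization: a volumetric force is tabulated against local velocity from one well-resolved segment simulation, then looked up as a momentum source on the coarse mesh.

    import numpy as np

    u_table = np.linspace(0.0, 5.0, 50)      # segment-averaged velocities (m/s)
    f_table = 120.0 * u_table**1.8           # fitted volumetric force (N/m^3), made up

    def volumetric_force(u_coarse):
        # Momentum source per coarse cell, interpolated from the fine-CFD table.
        return np.interp(u_coarse, u_table, f_table)

    print(volumetric_force(np.array([0.4, 1.2, 3.7])))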
NASA Astrophysics Data System (ADS)
Hur, Min Young; Verboncoeur, John; Lee, Hae June
2014-10-01
Particle-in-cell (PIC) simulations offer higher fidelity than fluid simulations for plasma devices that require transient kinetic modeling. They make fewer approximations to the plasma kinetics, but require many particles and grid cells to obtain meaningful results, so the simulation time grows in proportion to the number of particles. Therefore, PIC simulation needs high performance computing. In this research, a graphics processing unit (GPU) is adopted for high performance computing of PIC simulations of low-temperature discharge plasmas. GPUs have many-core processors and high memory bandwidth compared with a central processing unit (CPU). NVIDIA GeForce GPUs, whose hundreds of cores offer cost-effective performance, were used for the tests. The PIC algorithm is divided into two modules, a field solver and a particle mover; the particle mover is further divided into four routines, named move, boundary, Monte Carlo collision (MCC), and deposit. Overall, the GPU code solves particle motion as well as the electrostatic potential in two-dimensional geometry almost 30 times faster than a single-CPU code. This work was supported by the Korea Institute of Science and Technology Information.
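The module structure described above can be summarized in a short sketch; every function here is a hypothetical placeholder, not code from the work itself:

    def pic_step(particles, grid, dt):
        phi = solve_poisson(grid)               # field solver module
        efield = gradient(phi)
        move(particles, efield, dt)             # advance velocities and positions
        boundary(particles, grid)               # absorb/reflect particles at walls
        mcc(particles, dt)                      # Monte Carlo collisions with neutrals
        deposit(particles, grid)                # scatter charge back to grid nodes

On a GPU, each routine maps naturally onto a kernel over particles or grid nodes; the deposit step is the delicate one, since many particles write to the same grid node and need atomic updates.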
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jung, Y. S.; Joo, H. G.; Yoon, J. I.
The nTRACER direct whole-core transport code, employing a planar-MOC-solution-based 3-D calculation method, the subgroup method for resonance treatment, the Krylov matrix exponential method for depletion, and a subchannel thermal/hydraulic calculation solver, was developed for practical high-fidelity simulation of power reactors. Its accuracy and performance are verified by comparison with the measurement data obtained for three pressurized water reactor cores. It is demonstrated that accurate and detailed multi-physics simulation of power reactors is practically realizable without any prior calculations or adjustments. (authors)
Theoretical study of geometry relaxation following core excitation: H2O, NH3, and CH4
NASA Astrophysics Data System (ADS)
Takahashi, Osamu; Kunitake, Naoto; Takaki, Saya
2015-10-01
Single core-hole (SCH) and double core-hole excited-state molecular dynamics (MD) calculations for neutral and cationic H2O, NH3, and CH4 have been performed to examine geometry relaxation after core excitation. We observed faster X-H (X = C, N, O) bond elongation for the core-ionized state produced from the valence cationic molecule, and for the double-core-ionized state produced from the ground and valence cationic molecules, than for the first resonant SCH state. Using the results of the SCH MD simulations of the ground and valence cationic molecules, we performed Auger decay spectra calculations. We found that fast bond scission leads to peak broadening of the spectra.
Accelerating moderately stiff chemical kinetics in reactive-flow simulations using GPUs
NASA Astrophysics Data System (ADS)
Niemeyer, Kyle E.; Sung, Chih-Jen
2014-01-01
The chemical kinetics ODEs arising from operator-split reactive-flow simulations were solved on GPUs using explicit integration algorithms. Nonstiff chemical kinetics of a hydrogen oxidation mechanism (9 species and 38 irreversible reactions) were computed using the explicit fifth-order Runge-Kutta-Cash-Karp method, and the GPU-accelerated version performed faster than single- and six-core CPU versions by factors of 126 and 25, respectively, for 524,288 ODEs. Moderately stiff kinetics, represented by mechanisms for hydrogen/carbon-monoxide (13 species and 54 irreversible reactions) and methane (53 species and 634 irreversible reactions) oxidation, were computed using the stabilized explicit second-order Runge-Kutta-Chebyshev (RKC) algorithm. For the hydrogen/carbon-monoxide mechanism and problem sizes of 262,144 ODEs and larger, the GPU-based RKC implementation ran nearly 59 and 10 times faster than the single- and six-core CPU-based RKC algorithms, respectively. With the methane mechanism, RKC-GPU performed more than 65 and 11 times faster than the single- and six-core RKC-CPU versions for problem sizes of 131,072 ODEs and larger, and up to 57 times faster than the six-core CPU-based implicit VODE algorithm on 65,536 ODEs. In the presence of more severe stiffness, such as ethylene oxidation (111 species and 1566 irreversible reactions), RKC-GPU performed more than 17 times faster than RKC-CPU on six cores for 32,768 ODEs and larger, and at best 4.5 times faster than VODE on six CPU cores for 65,536 ODEs. With a larger time step size, RKC-GPU performed at best 2.5 times slower than six-core VODE for 8192 ODEs and larger. The need for new strategies for integrating stiff chemistry on GPUs is therefore discussed.
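The stiffness issue driving the choice of algorithm can be seen on a toy problem (not one of the paper's mechanisms): an explicit solver is forced into tiny steps by a fast mode, while an implicit BDF solver, the family to which VODE belongs, is not.

    import numpy as np
    from scipy.integrate import solve_ivp

    def rhs(t, y, k=1000.0):
        # One fast relaxing mode (rate k) and one slow mode -> mildly stiff system.
        return [-k * (y[0] - np.cos(t)), -y[1]]

    for method in ("RK45", "BDF"):
        sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 1.0], method=method, rtol=1e-6)
        print(method, "accepted steps:", sol.t.size)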
NASA Astrophysics Data System (ADS)
Tanikawa, Ataru; Yoshikawa, Kohji; Nitadori, Keigo; Okamoto, Takashi
2013-02-01
We have developed a numerical software library for collisionless N-body simulations named "Phantom-GRAPE", which greatly accelerates force calculations among particles by using a new SIMD instruction set extension to the x86 architecture, Advanced Vector eXtensions (AVX), an enhanced version of the Streaming SIMD Extensions (SSE). In our library, not only Newton's forces but also central forces with an arbitrary shape f(r) that has a finite cutoff radius r_cut (i.e., f(r)=0 at r>r_cut) can be computed quickly. In computing such central forces with an arbitrary force shape f(r), we refer to a pre-calculated look-up table. We also present a new scheme to create the look-up table whose binning is optimal to keep good accuracy in computing forces and whose size is small enough to avoid cache misses. Using an Intel Core i7-2600 processor, we measure the performance of our library for both Newton's forces and arbitrarily shaped central forces. In the case of Newton's forces, we achieve 2×10^9 interactions per second with one processor core (or 75 GFLOPS if we count 38 operations per interaction), which is 20 times higher than the performance of an implementation without any explicit use of SIMD instructions, and 2 times higher than that with the SSE instructions. With four processor cores, we obtain a performance of 8×10^9 interactions per second (or 300 GFLOPS). In the case of arbitrarily shaped central forces, we can calculate 1×10^9 and 4×10^9 interactions per second with one and four processor cores, respectively. The performance with one processor core is 6 times and 2 times higher than those of the implementations without any use of SIMD instructions and with the SSE instructions, respectively. These performances depend only weakly on the number of particles, irrespective of the force shape, in contrast with the fact that the performance of force calculations accelerated by graphics processing units (GPUs) depends strongly on the number of particles. This substantially weaker dependence of the performance on the number of particles is suitable for collisionless N-body simulations, since these simulations are usually performed with sophisticated N-body solvers such as Tree- and TreePM-methods combined with an individual timestep scheme. We conclude that collisionless N-body simulations accelerated with our library have a significant advantage over those accelerated by GPUs, especially on massively parallel environments.
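A minimal sketch of the look-up-table idea (the binning scheme is simplified relative to the paper's optimal one): tabulating the force on a grid in r^2 avoids a square root when indexing, and values between bins are linearly interpolated.

    import numpy as np

    r_cut = 3.0
    r2_grid = np.linspace(0.0, r_cut**2, 1024)
    f_table = np.exp(-r2_grid)                  # hypothetical force shape f(r^2)

    def force_magnitude(r2):
        # Interpolated central-force magnitude; identically zero beyond the cutoff.
        return np.where(r2 < r_cut**2, np.interp(r2, r2_grid, f_table), 0.0)

    print(force_magnitude(np.array([0.5, 4.0, 12.0])))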
A Large number of fast cosmological simulations
NASA Astrophysics Data System (ADS)
Koda, Jun; Kazin, E.; Blake, C.
2014-01-01
Mock galaxy catalogs are essential tools for analyzing large-scale structure data. Many independent realizations of mock catalogs are necessary to evaluate the uncertainties in the measurements. We perform 3600 cosmological simulations for the WiggleZ Dark Energy Survey to obtain new, improved Baryon Acoustic Oscillation (BAO) cosmic distance measurements using the density field "reconstruction" technique. We use 1296^3 particles in a periodic box of 600/h Mpc on a side, which is the minimum requirement from the survey volume and observed galaxies. In order to perform such a large number of simulations, we developed a parallel code using the COmoving Lagrangian Acceleration (COLA) method, which can simulate cosmological large-scale structure reasonably well with only 10 time steps. Our simulation is more than 100 times faster than conventional N-body simulations; one COLA simulation takes only 15 minutes on 216 computing cores. We have completed the 3600 simulations with a reasonable computation time of 200k core-hours. We also present the results of the revised WiggleZ BAO distance measurement, which are significantly improved by the reconstruction technique.
TREAT Transient Analysis Benchmarking for the HEU Core
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kontogeorgakos, D. C.; Connaway, H. M.; Wright, A. E.
2014-05-01
This work was performed to support the feasibility study on the potential conversion of the Transient Reactor Test Facility (TREAT) at Idaho National Laboratory from the use of high enriched uranium (HEU) fuel to the use of low enriched uranium (LEU) fuel. The analyses were performed by the GTRI Reactor Conversion staff at Argonne National Laboratory (ANL). The objective of this study was to benchmark the transient calculations against temperature-limited transients performed in the final operating HEU TREAT core configuration. The MCNP code was used to evaluate steady-state neutronics behavior, and the point kinetics code TREKIN was used to determine core power and energy during transients. The first part of the benchmarking process was to calculate with MCNP all the neutronic parameters required by TREKIN to simulate the transients: the transient rod-bank worth, the prompt neutron generation lifetime, the temperature reactivity feedback as a function of total core energy, and the core-average and peak temperatures as functions of total core energy. The results of these calculations were compared against measurements or against reported values as documented in the available TREAT reports. The heating of the fuel was simulated as an adiabatic process. The reported values were extracted from ANL reports, intra-laboratory memos and experiment logsheets, and in some cases it was not clear if the values were based on measurements, on calculations, or on a combination of both. Therefore, it was decided to use the term “reported” values when referring to such data. The methods and results from the HEU core transient analyses will be used for the potential LEU core configurations to predict the converted (LEU) core’s performance.
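For context, the point-kinetics model that codes like TREKIN represent can be written down compactly; the sketch below uses one delayed-neutron group and invented constants, and omits the temperature feedback that the actual analysis includes.

    from scipy.integrate import solve_ivp

    beta, lam, Lam = 0.0073, 0.08, 1e-4   # delayed fraction, decay const (1/s), generation time (s)
    rho = 0.005                           # hypothetical step reactivity insertion

    def kinetics(t, y):
        P, C = y                          # relative power, precursor concentration
        dP = (rho - beta) / Lam * P + lam * C
        dC = beta / Lam * P - lam * C
        return [dP, dC]

    y0 = [1.0, beta / (lam * Lam)]        # start from steady state at P = 1
    sol = solve_ivp(kinetics, (0.0, 0.5), y0, method="BDF", rtol=1e-8)
    print("relative power after 0.5 s:", sol.y[0, -1])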
NASA Astrophysics Data System (ADS)
Tabik, S.; Romero, L. F.; Mimica, P.; Plata, O.; Zapata, E. L.
2012-09-01
A broad area in astronomy focuses on simulating extragalactic objects based on Very Long Baseline Interferometry (VLBI) radio-maps. Several algorithms in this scope simulate what the observed radio-maps would be if emitted from a predefined extragalactic object. This work analyzes the performance and scaling of this kind of algorithm on multi-socket, multi-core architectures. In particular, we evaluate a sharing approach, a privatizing approach, and a hybrid approach on systems with a complex memory hierarchy that includes a shared Last Level Cache (LLC). In addition, we investigate which manual processes can be systematized and then automated in future work. The experiments show that the data-privatizing model scales efficiently on medium-scale multi-socket, multi-core systems (up to 48 cores), while, regardless of algorithmic and scheduling optimizations, the sharing approach is unable to reach acceptable scalability on more than one socket. However, the hybrid model with a specific level of data sharing provides the best scalability over all of the multi-socket, multi-core systems used.
Takano, Yu; Nakata, Kazuto; Yonezawa, Yasushige; Nakamura, Haruki
2016-05-05
A massively parallel program for quantum mechanical-molecular mechanical (QM/MM) molecular dynamics simulation, called Platypus (PLATform for dYnamic Protein Unified Simulation), was developed to elucidate protein functions. The speedup and the parallelization ratio of Platypus in QM and QM/MM calculations were assessed for a bacteriochlorophyll dimer in the photosynthetic reaction center (DIMER) on the K computer, a massively parallel computer achieving 10 PetaFLOPS with 705,024 cores. Platypus exhibited increasing speedup up to 20,000 processor cores at the HF/cc-pVDZ and B3LYP/cc-pVDZ levels, and up to 10,000 processor cores for CASCI(16,16)/6-31G** calculations. We also performed excited-state QM/MM-MD simulations on the chromophore of Sirius (SIRIUS) in water. Sirius is a pH-insensitive and photostable ultramarine fluorescent protein. Platypus accelerated on-the-fly excited-state QM/MM-MD simulations for SIRIUS in water using over 4000 processor cores. In addition, it also succeeded in a 50-ps (200,000-step) on-the-fly excited-state QM/MM-MD simulation of SIRIUS in water. © 2016 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc.
The effect of core configuration on temperature coefficient of reactivity in IRR-1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bettan, M.; Silverman, I.; Shapira, M.
1997-08-01
Experiments designed to measure the effect of coolant moderator temperature on core reactivity in an HEU swimming pool type reactor were performed. The moderator temperature coefficient of reactivity (α_ω) was obtained and found to be different in two core loadings. The measured α_ω of one core loading was −13 pcm/°C in the temperature range of 23-30°C. This value of α_ω is comparable to the data published by the IAEA. The α_ω measured in the second core loading was found to be −8 pcm/°C in the same temperature range. Another phenomenon considered in this study is core behavior during a reactivity insertion transient. The results were compared to a core simulation using the Dynamic Simulator for Nuclear Power Plants. It was found that in the second core loading, factors other than the moderator temperature influence the core reactivity more than expected. These effects proved to be extremely dependent on core configuration and may in certain core loadings render the reactor's reactivity coefficient undesirable.
Greenland-Wide Seasonal Temperatures During the Last Deglaciation
NASA Astrophysics Data System (ADS)
Buizert, C.; Keisling, B. A.; Box, J. E.; He, F.; Carlson, A. E.; Sinclair, G.; DeConto, R. M.
2018-02-01
The sensitivity of the Greenland ice sheet to climate forcing is of key importance in assessing its contribution to past and future sea level rise. Surface mass loss occurs during summer, and accounting for temperature seasonality is critical in simulating ice sheet evolution and in interpreting glacial landforms and chronologies. Ice core records constrain the timing and magnitude of climate change but are largely limited to annual mean estimates from the ice sheet interior. Here we merge ice core reconstructions with transient climate model simulations to generate Greenland-wide and seasonally resolved surface air temperature fields during the last deglaciation. Greenland summer temperatures peak in the early Holocene, consistent with records of ice core melt layers. We perform deglacial Greenland ice sheet model simulations to demonstrate that accounting for realistic temperature seasonality decreases simulated glacial ice volume, expedites the deglacial margin retreat, mutes the impact of abrupt climate warming, and gives rise to a clear Holocene ice volume minimum.
Isotope heat source simulator for testing of space power systems
NASA Technical Reports Server (NTRS)
Prok, G. M.; Smith, R. B.
1973-01-01
A reliable isotope heat source simulator was designed for use in a Brayton power system. This simulator is composed of an electrically heated tungsten wire wound around a boron nitride core and enclosed in a graphite jacket. Simulator testing was performed at the expected operating temperature of the Brayton power system. Endurance testing for 5012 hours was followed by cycling of the simulator temperature. The integrity of the simulator was maintained throughout testing. Alumina beads served as a diffusion barrier to prevent interaction between the tungsten heater and the boron nitride core. The simulator was designed to maintain a surface temperature of 1311 to 1366 K (1900 to 2000 F) with a power input of approximately 400 watts. The design concept and the materials used in the simulator make many different geometries possible. This flexibility increases its potential use.
Cheung, Kit; Schultz, Simon R; Luk, Wayne
2015-01-01
NeuroFlow is a scalable spiking neural network simulation platform for off-the-shelf high performance computing systems using customizable hardware processors such as Field-Programmable Gate Arrays (FPGAs). Unlike multi-core processors and application-specific integrated circuits, the processor architecture of NeuroFlow can be redesigned and reconfigured to suit a particular simulation to deliver optimized performance, such as the degree of parallelism to employ. The compilation process supports using PyNN, a simulator-independent neural network description language, to configure the processor. NeuroFlow supports a number of commonly used current or conductance based neuronal models such as integrate-and-fire and Izhikevich models, and the spike-timing-dependent plasticity (STDP) rule for learning. A 6-FPGA system can simulate a network of up to ~600,000 neurons and can achieve a real-time performance of 400,000 neurons. Using one FPGA, NeuroFlow delivers a speedup of up to 33.6 times the speed of an 8-core processor, or 2.83 times the speed of GPU-based platforms. With high flexibility and throughput, NeuroFlow provides a viable environment for large-scale neural network simulation.
Nonlinear dynamic simulation of single- and multi-spool core engines
NASA Technical Reports Server (NTRS)
Schobeiri, T.; Lippke, C.; Abouelkheir, M.
1993-01-01
In this paper a new computational method for accurate simulation of the nonlinear dynamic behavior of single- and multi-spool core engines, turbofan engines, and power generation gas turbine engines is presented. In order to perform the simulation, a modularly structured computer code has been developed which includes individual mathematical modules representing various engine components. The generic structure of the code enables the dynamic simulation of arbitrary engine configurations, ranging from single-spool thrust generation to multi-spool thrust/power generation engines, under adverse dynamic operating conditions. For precise simulation of turbine and compressor components, row-by-row calculation procedures were implemented that account for the specific turbine and compressor cascade and blade geometry and characteristics. The dynamic behavior of the subject engine is calculated by solving a number of systems of partial differential equations, which describe the unsteady behavior of the individual components. In order to ensure the capability, accuracy, robustness, and reliability of the code, comprehensive critical performance assessment and validation tests were performed. As representatives, three different transient cases with single- and multi-spool thrust and power generation engines were simulated. The transient cases range from operation with a prescribed fuel schedule, to extreme load changes, to generator and turbine shutdown.
Extremely low-loss, dispersion flattened porous-core photonic crystal fiber for terahertz regime
NASA Astrophysics Data System (ADS)
Islam, Saiful; Islam, Mohammad Rakibul; Faisal, Mohammad; Arefin, Abu Sayeed Muhammad Shamsul; Rahman, Hasan; Sultana, Jakeya; Rana, Sohel
2016-07-01
A porous-core octagonal photonic crystal fiber (PC-OPCF) with ultralow effective material loss (EML), a high core power fraction, and ultra-flattened dispersion is proposed for terahertz (THz) wave propagation. At an operating frequency of 1 THz and a core diameter of 345 μm, simulation results show an extremely low EML of 0.047 cm^-1, 49.1% power transmission through the core air holes, confinement loss that decreases with increasing frequency, and a dispersion variation of 0.15 ps/THz/cm. In addition, the proposed PCF can successfully operate in the single-mode condition. All the simulations are performed with the finite-element modeling package COMSOL v4.2. The design can be fabricated using a stacking and drilling method. Thus, the proposed fiber has the potential to be an effective transmission medium for broadband THz waves.
Integrated simulation of magnetic-field-assist fast ignition laser fusion
NASA Astrophysics Data System (ADS)
Johzaki, T.; Nagatomo, H.; Sunahara, A.; Sentoku, Y.; Sakagami, H.; Hata, M.; Taguchi, T.; Mima, K.; Kai, Y.; Ajimi, D.; Isoda, T.; Endo, T.; Yogo, A.; Arikawa, Y.; Fujioka, S.; Shiraga, H.; Azechi, H.
2017-01-01
To enhance the core heating efficiency in fast ignition laser fusion, the concept of relativistic electron beam guiding by external magnetic fields was evaluated by integrated simulations for FIREX-class targets. For the cone-attached shell target, the core heating performance deteriorates when magnetic fields are applied, since the core is considerably deformed and most of the fast electrons are reflected by the magnetic mirror formed through the implosion. On the other hand, in the case of a cone-attached solid-ball target, the implosion is more stable under a kilotesla-class magnetic field. In addition, a feasible magnetic field configuration is formed through the implosion. As a result, the core heating efficiency is doubled by magnetic guiding. The dependence of the core heating properties on the heating pulse shot timing was also investigated for the solid-ball target.
Taming Wild Horses: The Need for Virtual Time-based Scheduling of VMs in Network Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoginath, Srikanth B; Perumalla, Kalyan S; Henz, Brian J
2012-01-01
The next generation of scalable network simulators employs virtual machines (VMs) to act as high-fidelity models of traffic producer/consumer nodes in simulated networks. However, network simulations can be inaccurate if VMs are not scheduled according to virtual time, especially when many VMs are hosted per simulator core in a multi-core simulator environment. Since VMs are by default free-running, at the outset it is not clear if, and to what extent, their untamed execution affects the results of simulated scenarios. Here, we provide the first quantitative basis for establishing the need for generalized virtual-time scheduling of VMs in network simulators, based on actual prototype implementations. To exercise breadth, our system is tested with multiple disparate applications: (a) a set of message-passing parallel programs, (b) a computer worm propagation phenomenon, and (c) a mobile ad-hoc wireless network simulation. We define and use error metrics and benchmarks in scaled tests to empirically report the poor match of traditional, fairness-based VM scheduling to VM-based network simulation, and also clearly show the better performance of our simulation-specific scheduler, with up to 64 VMs hosted on a 12-core simulator node.
Multidimensional neutrino-transport simulations of the core-collapse supernova central engine
NASA Astrophysics Data System (ADS)
O'Connor, Evan; Couch, Sean
2017-01-01
Core-collapse supernovae (CCSNe) mark the explosive death of a massive star. The explosion itself is triggered by the collapse of the iron core that forms near the end of a massive star's life. The core collapses to nuclear densities, where the stiff nuclear equation of state halts the collapse and leads to the formation of the supernova shock. In many cases, this shock will eventually propagate throughout the entire star and produce a bright optical display. However, the path from shock formation to explosion has proven difficult to recreate in simulations. Soon after the shock forms, its outward propagation stagnates, and it must be revived in order for the CCSN to be successful. The leading theory for the mechanism that reenergizes the shock is the deposition of energy by neutrinos. In 1D simulations this mechanism fails. However, there is growing evidence that in 2D and 3D, hydrodynamic instabilities can assist the neutrino heating in reviving the shock. In this talk, I will present new multi-D neutrino-radiation-hydrodynamic simulations of CCSNe performed with the FLASH hydrodynamics package. I will discuss the efficacy of neutrino heating in our simulations and show the impact of the multi-D hydrodynamic instabilities.
Three dimensional core-collapse supernova simulated using a 15 M ⊙ progenitor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lentz, Eric J.; Bruenn, Stephen W.; Hix, W. Raphael
We have performed ab initio neutrino radiation hydrodynamics simulations in three and two spatial dimensions (3D and 2D) of core-collapse supernovae from the same 15 M⊙ progenitor through 440 ms after core bounce. Both the 3D and 2D models achieve explosions; however, the onset of explosion (shock revival) is delayed by ~100 ms in 3D relative to the 2D counterpart, and the growth of the diagnostic explosion energy is slower. This is consistent with previously reported 3D simulations utilizing iron-core progenitors with dense mantles. In the ~100 ms before the onset of explosion, diagnostics of neutrino heating and turbulent kinetic energy favor earlier explosion in 2D. During the delay, the angular scale of convective plumes reaching the shock surface grows, and explosion in 3D is ultimately led by a single, large-angle plume, giving the expanding shock a directional orientation not dissimilar from those imposed by axial symmetry in 2D simulations. Finally, we posit that shock revival and explosion in the 3D simulation may be delayed until sufficiently large plumes form, whereas such plumes form more rapidly in 2D, permitting earlier explosions.
NASA Astrophysics Data System (ADS)
Romano, Paul Kollath
Monte Carlo particle transport methods are being considered as a viable option for high-fidelity simulation of nuclear reactors. While Monte Carlo methods offer several potential advantages over deterministic methods, there are a number of algorithmic shortcomings that would prevent their immediate adoption for full-core analyses. In this thesis, algorithms are proposed both to ameliorate the degradation in parallel efficiency typically observed for large numbers of processors and to offer a means of decomposing the large tally data that will be needed for reactor analysis. A nearest-neighbor fission bank algorithm was proposed and subsequently implemented in the OpenMC Monte Carlo code. A theoretical analysis of the communication pattern shows that the expected cost is O(√N), whereas traditional fission bank algorithms are O(N) at best. The algorithm was tested on two supercomputers, the Intrepid Blue Gene/P and the Titan Cray XK7, and demonstrated nearly linear parallel scaling up to 163,840 processor cores on a full-core benchmark problem. An algorithm for reducing the network communication arising from tally reduction was analyzed and implemented in OpenMC. The proposed algorithm groups only particle histories on a single processor into batches for tally purposes; in doing so, it prevents all network communication for tallies until the very end of the simulation. The algorithm was tested, again on a full-core benchmark, and shown to reduce network communication substantially. A model was developed to predict the impact of load imbalances on the performance of domain-decomposed simulations. The analysis demonstrated that load imbalances in domain-decomposed simulations arise from two distinct phenomena: non-uniform particle densities and non-uniform spatial leakage. The dominant performance penalty for domain decomposition was shown to come from these physical effects rather than from insufficient network bandwidth or high latency. The model predictions were verified with measured data from simulations in OpenMC on a full-core benchmark problem. Finally, a novel algorithm for decomposing large tally data was proposed, analyzed, and implemented and tested in OpenMC. The algorithm relies on disjoint sets of compute processes and tally servers. The analysis showed that for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead. Tests were performed on Intrepid and Titan and demonstrated that the algorithm did indeed perform well over a wide range of parameters.
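The nearest-neighbor fission bank idea can be sketched briefly; this is a simplified illustration (sample_fission_sites is a hypothetical placeholder, and the real algorithm handles multi-rank spills and reproducibility more carefully). Each rank locates its slice of the global bank from a prefix sum and exchanges only the overhang with adjacent ranks, which is what makes the expected communication O(√N).

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    bank = sample_fission_sites()               # hypothetical: this cycle's local sites
    n = len(bank)
    start = comm.exscan(n, op=MPI.SUM) or 0     # global index of our first site
    total = comm.allreduce(n, op=MPI.SUM)

    lo = rank * total // size                   # ideal ownership: [lo, hi)
    hi = (rank + 1) * total // size
    n_left = max(0, min(n, lo - start))         # our sites belonging to the left rank
    n_right = max(0, min(n, start + n - hi))    # our sites belonging to the right rank

    left = rank - 1 if rank > 0 else MPI.PROC_NULL
    right = rank + 1 if rank < size - 1 else MPI.PROC_NULL
    from_right = comm.sendrecv(bank[:n_left], dest=left, source=right) or []
    from_left = comm.sendrecv(bank[n - n_right:], dest=right, source=left) or []
    bank = from_left + bank[n_left:n - n_right] + from_right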
NASA Astrophysics Data System (ADS)
Yao, Atsushi; Sugimoto, Takaya; Odawara, Shunya; Fujisaki, Keisuke
2018-05-01
We report the core loss properties of permanent magnet synchronous motors (PMSMs) with an amorphous magnetic material (AMM) core under inverter and sinusoidal excitations. To discuss the core loss properties of the AMM core, a comparison with a non-oriented (NO) core is also performed. In addition, based on both experiments and numerical simulations, we estimate the higher (time and space) harmonic components of the core losses under inverter and sinusoidal excitations. The core losses of the PMSM are reduced by about 59% by using an AMM stator core instead of an NO core under sinusoidal excitation. We show that the average decrease in the time harmonic components obtained by using AMM instead of NO in the stator core is about 94%.
Tracking Blade Tip Vortices for Numerical Flow Simulations of Hovering Rotorcraft
NASA Technical Reports Server (NTRS)
Kao, David L.
2016-01-01
Blade tip vortices generated by a helicopter rotor blade are a major source of rotor noise and airframe vibration. This occurs when a vortex passes closely by, and interacts with, a rotor blade. The accurate prediction of Blade Vortex Interaction (BVI) continues to be a challenge for Computational Fluid Dynamics (CFD). Though considerable research has been devoted to BVI noise reduction and experimental techniques for measuring the blade tip vortices in a wind tunnel, there are only a handful of post-processing tools available for extracting vortex core lines from CFD simulation data. In order to calculate the vortex core radius, most of these tools require the user to manually select a vortex core to perform the calculation. Furthermore, none of them provide the capability to track the growth of a vortex core, which is a measure of how quickly the vortex diffuses over time. This paper introduces an automated approach for tracking the core growth of a blade tip vortex from CFD simulations of rotorcraft in hover. The proposed approach offers an effective method for the quantification and visualization of blade tip vortices in helicopter rotor wakes. Keywords: vortex core, feature extraction, CFD, numerical flow visualization
A neural network for the prediction of performance parameters of transformer cores
NASA Astrophysics Data System (ADS)
Nussbaum, C.; Booth, T.; Ilo, A.; Pfützner, H.
1996-07-01
The paper shows that Artificial Neural Networks (ANNs) may offer new possibilities for the prediction of transformer core performance parameters, i.e., no-load power losses and excitation. Basically, this technique enables simulations with respect to different construction parameters, most notably the characteristics of corner designs, i.e., the overlap length, the air gap length, and the number of steps. However, without additional physical knowledge incorporated into the ANN, extrapolation beyond the limits of the training data restricts the predictive performance.
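A minimal sketch of such a network, using synthetic data in place of measured cores (scikit-learn stands in for whatever framework the authors used):

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    # Columns: overlap length (mm), air gap length (mm), number of steps.
    X = rng.uniform([2.0, 0.1, 2.0], [20.0, 2.0, 8.0], size=(200, 3))
    # Invented relation standing in for measured no-load loss (W/kg).
    y = 0.9 + 0.2 * X[:, 1] / X[:, 0] + 0.005 * X[:, 2] + rng.normal(0.0, 0.002, 200)

    net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X, y)
    print(net.predict([[10.0, 0.5, 4.0]]))      # predicted loss for one corner design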
NASA Astrophysics Data System (ADS)
Addanki, Satish; Nedumaran, D.
2017-07-01
Core-shell nanostructures play a vital role in the sensor field owing to their performance improvements in sensing characteristics and well-established synthesis procedures. These nanostructures can be ingeniously tuned to achieve tailored properties for a particular application of interest. In this work, Ag-Au core-shell thin-film nanoislands with APTMS (3-aminopropyl trimethoxysilane) and PVA (polyvinyl alcohol) binding agents were modeled, synthesized, and characterized. The simulation results were used to fabricate the sensor through a chemical route. The results of this study confirmed that the APTMS-based Ag-Au core-shell thin-film nanoislands offered better performance than the PVA-based ones. The APTMS-based Ag-Au core-shell thin-film nanoislands also exhibited better sensitivity towards ozone sensing than the other types, viz., APTMS/PVA-based Au-Ag core-shell and standalone Au/Ag thin-film nanoislands.
Stability and accuracy of 3D neutron transport simulations using the 2D/1D method in MPACT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collins, Benjamin, E-mail: collinsbs@ornl.gov; Stimpson, Shane, E-mail: stimpsonsg@ornl.gov; Kelley, Blake W., E-mail: kelleybl@umich.edu
2016-12-01
A consistent “2D/1D” neutron transport method is derived from the 3D Boltzmann transport equation, to calculate fuel-pin-resolved neutron fluxes for realistic full-core Pressurized Water Reactor (PWR) problems. The 2D/1D method employs the Method of Characteristics to discretize the radial variables and a lower-order transport solution to discretize the axial variable. This paper describes the theory of the 2D/1D method and its implementation in the MPACT code, which has become the whole-core deterministic neutron transport solver for the Consortium for Advanced Simulation of Light Water Reactors (CASL) core simulator VERA-CS. Several applications have been performed on both leadership-class and industry-class computing clusters. Results are presented for whole-core solutions of the Watts Bar Nuclear Power Station Unit 1 and compared to both continuous-energy Monte Carlo results and plant data.
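Schematically (the angular treatment of the leakage terms is handled more carefully in MPACT), axially averaging the 3D transport equation over plane k of thickness Δz_k yields a 2D MOC problem driven by an axial transverse-leakage source,

    \Omega_x \partial_x \bar{\psi}_k + \Omega_y \partial_y \bar{\psi}_k + \Sigma_t \bar{\psi}_k
      = \bar{q}_k - \frac{\mu}{\Delta z_k}\left[\psi_{k+1/2} - \psi_{k-1/2}\right],

where μ is the axial direction cosine and the interface fluxes ψ_{k±1/2} are supplied by the lower-order 1D axial solver; iterating the radial and axial problems until their exchanged leakage terms are consistent recovers a solution of the 3D equation.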
NASA Astrophysics Data System (ADS)
Okamoto, Taro; Takenaka, Hiroshi; Nakamura, Takeshi; Aoki, Takayuki
2010-12-01
We adopted the GPU (graphics processing unit) to accelerate large-scale finite-difference simulations of seismic wave propagation. The simulation can benefit from the high memory bandwidth of the GPU because it is a "memory intensive" problem. In a single-GPU case we achieved a performance of about 56 GFlops, which was about 45-fold faster than that achieved by a single core of the host central processing unit (CPU). We confirmed that the optimized use of fast shared memory and registers was essential for performance. In the multi-GPU case with three-dimensional domain decomposition, the non-contiguous memory alignment in the ghost zones was found to impose quite a long time on data transfer between the GPU and the host node. This problem was solved by using contiguous memory buffers for the ghost zones. We achieved a performance of about 2.2 TFlops by using 120 GPUs and 330 GB of total memory: nearly (or more than) 2200 host CPU cores would be required to achieve the same performance. The weak scaling was nearly proportional to the number of GPUs. We therefore conclude that GPU computing for large-scale simulation of seismic wave propagation is a promising approach, as a faster simulation is possible with reduced computational resources compared to CPUs.
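The ghost-zone fix described above amounts to packing non-contiguous boundary slices into contiguous buffers before transfer; a minimal single-axis sketch with mpi4py (array sizes and decomposition are hypothetical):

    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    field = np.zeros((130, 130, 130), dtype=np.float32)   # local block incl. ghost planes

    # A last-axis slice of a C-ordered array is non-contiguous in memory, so it
    # is copied into a contiguous buffer first; the GPU code needs the same packing.
    send_buf = np.ascontiguousarray(field[:, :, 1])
    recv_buf = np.empty_like(send_buf)
    left = rank - 1 if rank > 0 else MPI.PROC_NULL
    right = rank + 1 if rank < size - 1 else MPI.PROC_NULL
    comm.Sendrecv(send_buf, dest=left, recvbuf=recv_buf, source=right)
    if right != MPI.PROC_NULL:
        field[:, :, -1] = recv_buf                        # unpack into the ghost plane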
VERA Core Simulator methodology for pressurized water reactor cycle depletion
Kochunas, Brendan; Collins, Benjamin; Stimpson, Shane; ...
2017-01-12
This paper describes the methodology developed and implemented in the Virtual Environment for Reactor Applications Core Simulator (VERA-CS) to perform high-fidelity, pressurized water reactor (PWR), multicycle, core physics calculations. Depletion of the core with pin-resolved power and nuclide detail is a significant advance in the state of the art for reactor analysis, providing the level of detail necessary to address the problems of the U.S. Department of Energy Nuclear Reactor Simulation Hub, the Consortium for Advanced Simulation of Light Water Reactors (CASL). VERA-CS has three main components: the neutronics solver MPACT, the thermal-hydraulic (T-H) solver COBRA-TF (CTF), and the nuclide transmutation solver ORIGEN. This paper focuses on MPACT and provides an overview of the resonance self-shielding methods, macroscopic-cross-section calculation, two-dimensional/one-dimensional (2-D/1-D) transport, nuclide depletion, T-H feedback, and other supporting methods representing a minimal set of the capabilities needed to simulate high-fidelity models of a commercial nuclear reactor. Results are presented from the simulation of a model of the first cycle of Watts Bar Unit 1. The simulation is within 16 parts per million boron (ppmB) reactivity for all state points compared to cycle measurements, with an average reactivity bias of <5 ppmB for the entire cycle. Comparisons to cycle 1 flux map data are also provided, and the average 2-D root-mean-square (rms) error during cycle 1 is 1.07%. To demonstrate the multicycle capability, a state point at beginning of cycle (BOC) 2 was also simulated and compared to plant data. The comparison of the cycle 2 BOC state has a reactivity difference of +3 ppmB from measurement, and the 2-D rms of the comparison in the flux maps is 1.77%. Lastly, these results provide confidence in VERA-CS’s capability to perform high-fidelity calculations for practical PWR reactor problems.
NASA Astrophysics Data System (ADS)
Guan, Zhen; Pekurovsky, Dmitry; Luce, Jason; Thornton, Katsuyo; Lowengrub, John
The structural phase field crystal (XPFC) model can be used to model grain growth in polycrystalline materials at diffusive time scales while maintaining atomic-scale resolution. However, the governing equation of the XPFC model is an integro-partial-differential equation (IPDE), which poses challenges for implementation on high performance computing (HPC) platforms. In collaboration with the XSEDE Extended Collaborative Support Service, we developed a distributed-memory HPC solver for the XPFC model, which combines parallel multigrid and P3DFFT. Performance benchmarking on the Stampede supercomputer indicates near-linear strong and weak scaling, up to 1024 cores, for both the multigrid solver and the transfer time between the multigrid and FFT modules. Scalability of the FFT module begins to decline at 128 cores, but it is sufficient for the type of problem we will be examining. We have demonstrated simulations using 1024 cores, and we expect to reach 4096 cores and beyond. Ongoing work involves optimization of the MPI/OpenMP-based code for the Intel KNL many-core architecture. This optimizes the code for coming pre-exascale systems, in particular many-core systems such as Stampede 2.0 and Cori 2 at NERSC, without sacrificing efficiency on other general HPC systems.
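The integral term of a PFC-type model is a convolution, which a spectral solver such as the P3DFFT-based one evaluates in Fourier space; a single-process numpy sketch with a made-up correlation kernel:

    import numpy as np

    N, L = 128, 40.0
    n = np.random.default_rng(1).normal(0.0, 0.01, (N, N, N))   # density field
    k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k_mag = np.sqrt(kx**2 + ky**2 + kz**2)

    C2_hat = np.exp(-(k_mag - 1.0) ** 2 / 0.5)          # hypothetical correlation peak
    conv = np.fft.ifftn(C2_hat * np.fft.fftn(n)).real   # (C2 * n)(r) via FFTs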
Coupled Neutronics Thermal-Hydraulic Solution of a Full-Core PWR Using VERA-CS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clarno, Kevin T; Palmtag, Scott; Davidson, Gregory G
2014-01-01
The Consortium for Advanced Simulation of Light Water Reactors (CASL) is developing a core simulator called VERA-CS to model operating PWR reactors with high resolution. This paper describes how the development of VERA-CS is being driven by a set of progression benchmark problems that specify the delivery of useful capability in discrete steps. As part of this development, this paper describes the current capability of VERA-CS to perform a multiphysics simulation of an operating PWR at Hot Full Power (HFP) conditions using a set of existing computer codes coupled together in a novel method. Results for several single-assembly cases are shown that demonstrate coupling for different boron concentrations and power levels. Finally, high-resolution results are shown for a full-core PWR reactor modeled in quarter symmetry.
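The coupling loop itself is a fixed-point (Picard) iteration; in sketch form, with all solver functions as hypothetical placeholders for the roles the coupled codes play:

    def solve_state_point(boron_ppm, tol=1e-5, max_iters=50):
        temps = initial_temperature_guess()
        for _ in range(max_iters):
            power = neutronics_solve(temps, boron_ppm)    # pin powers (neutronics code's role)
            new_temps = thermal_hydraulics_solve(power)   # fuel/coolant temperatures (T-H code's role)
            if max_change(new_temps, temps) < tol:
                return power, new_temps                   # converged multiphysics state
            temps = new_temps
        raise RuntimeError("coupled iteration did not converge")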
Tabassum, Rana; Kaur, Parvinder; Gupta, Banshi D
2016-05-27
We report the fabrication and characterization of a surface plasmon resonance (SPR)-based fiber optic sensor that uses coatings of silver and an aluminum (Al)-zinc oxide (ZnO) core-shell nanostructure (Al@ZnO) for the detection of phenyl hydrazine (Ph-Hyd). To optimize the volume fraction (f) of Al in ZnO and the thickness of the core-shell nanostructure layer (d), the electric field intensity along the normal to the multilayer system is simulated using the two-dimensional multilayer matrix method. The Al@ZnO core-shell nanostructure is prepared using the laser ablation technique. Various probes are fabricated with different values of f and the optimized thickness of the core-shell nanostructure for the characterization of the Ph-Hyd sensor. The performance of the Ph-Hyd sensor is evaluated in terms of sensitivity. It is found that the SPR probe coated with the Ag/Al@ZnO core-shell nanostructure with f = 0.25 and d = 0.040 μm possesses the maximum sensitivity towards Ph-Hyd. These results are in agreement with the simulated ones obtained using the electric field intensity. In addition, the performance of the proposed probe is compared with that of probes coated with (i) an Al@ZnO nanocomposite, (ii) Al nanoparticles, and (iii) ZnO nanoparticles. It is found that the probe coated with the Al@ZnO core-shell nanostructure shows the largest resonance wavelength shift. The detailed mechanism of the sensing (involving chemical reactions) is presented. The sensor also manifests optimum performance at pH 7.
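The multilayer (transfer) matrix method used for the field simulation is standard; a compact sketch for p-polarized reflectance follows, with placeholder optical constants rather than the paper's optimized values.

    import numpy as np

    def reflectance_p(n_layers, d_layers, n_in, n_out, theta_in, lam):
        k0 = 2.0 * np.pi / lam
        kx = n_in * k0 * np.sin(theta_in)       # conserved tangential wavevector
        def kz_q(n):
            kz = np.sqrt((n * k0) ** 2 - kx ** 2 + 0j)
            return kz, kz / n ** 2              # q = kz/eps for p-polarization
        M = np.eye(2, dtype=complex)
        for n, d in zip(n_layers, d_layers):    # characteristic matrix of the stack
            kz, q = kz_q(n)
            c, s = np.cos(kz * d), np.sin(kz * d)
            M = M @ np.array([[c, -1j * s / q], [-1j * q * s, c]])
        _, q_in = kz_q(n_in)
        _, q_out = kz_q(n_out)
        num = (M[0, 0] + M[0, 1] * q_out) * q_in - (M[1, 0] + M[1, 1] * q_out)
        den = (M[0, 0] + M[0, 1] * q_out) * q_in + (M[1, 0] + M[1, 1] * q_out)
        return abs(num / den) ** 2

    # Placeholder stack: a silver film plus a thin oxide overlayer in water at 633 nm.
    print(reflectance_p([0.06 + 4.2j, 2.0], [50e-9, 40e-9],
                        1.5, 1.33, np.radians(70), 633e-9))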
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kodavasal, Janardhan; Harms, Kevin; Srivastava, Priyesh
A closed-cycle gasoline compression ignition engine simulation near top dead center (TDC) was used to profile the performance of a parallel commercial engine computational fluid dynamics code as it was scaled to up to 4096 cores of an IBM Blue Gene/Q supercomputer. The test case has 9 million cells near TDC, with a fixed mesh size of 0.15 mm, and was run on configurations ranging from 128 to 4096 cores. Profiling was done for a small duration of 0.11 crank angle degrees near TDC during ignition. Optimization of input/output performance resulted in a significant speedup in reading restart files, and in an over 100-times speedup in writing restart files and files for post-processing. Improvements to communication resulted in a 1400-times speedup in the mesh load balancing operation during initialization on 4096 cores. An improved, "stiffness-based" algorithm for load balancing chemical kinetics calculations was developed, which results in an over 3-times faster run-time near ignition on 4096 cores relative to the original load balancing scheme. With this improvement to load balancing, the code achieves over 78% scaling efficiency on 2048 cores, and over 65% scaling efficiency on 4096 cores, relative to 256 cores.
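The abstract does not give the stiffness metric, but the underlying pattern, weighting each cell by an estimated chemistry cost and handing the most expensive cells out first, can be sketched as a greedy longest-processing-time assignment; the cost model here is an assumption.

```python
import heapq

def balance_cells(costs, n_ranks):
    """Greedy LPT assignment: give the next most expensive chemistry cell to
    the least loaded rank. `costs` maps cell id -> estimated cost, e.g.
    proportional to a stiffness measure of the local kinetics (an assumption;
    the paper's exact metric is not specified in the abstract)."""
    heap = [(0.0, r) for r in range(n_ranks)]   # (accumulated load, rank)
    heapq.heapify(heap)
    assignment = {}
    for cell in sorted(costs, key=costs.get, reverse=True):
        load, rank = heapq.heappop(heap)
        assignment[cell] = rank
        heapq.heappush(heap, (load + costs[cell], rank))
    return assignment

# Usage: four cells with very unequal chemistry costs, two ranks.
print(balance_cells({"c0": 9.0, "c1": 1.0, "c2": 5.0, "c3": 4.5}, 2))
```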
Campus Energy Model for Control and Performance Validation
DOE Office of Scientific and Technical Information (OSTI.GOV)
2014-09-19
The core of the modeling platform is an extensible block library for the MATLAB/Simulink software suite. The platform enables true co-simulation (interaction at each simulation time step) with NREL's state-of-the-art modeling tools and other energy modeling software.
Experimental Validation of the Transverse Shear Behavior of a Nomex Core for Sandwich Panels
NASA Astrophysics Data System (ADS)
Farooqi, M. I.; Nasir, M. A.; Ali, H. M.; Ali, Y.
2017-05-01
This work deals with determination of the transverse shear moduli of a Nomex® honeycomb core for sandwich panels. The out-of-plane shear characteristics of such panels depend on the transverse shear moduli of the honeycomb core. These moduli were determined experimentally, numerically, and analytically. Numerical simulations were performed using a unit cell model, and three analytical approaches were applied. The analytical calculations showed that two of the approaches provided reasonable predictions of the transverse shear modulus as compared with experimental results; however, the approach based upon classical lamination theory showed large deviations from the experimental data. The numerical simulations showed a trend similar to that of the analytical models.
Interpretation of the results of the CORA-33 dry core BWR test
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ott, L.J.; Hagen, S.
All BWR degraded core experiments performed prior to CORA-33 were conducted under "wet" core degradation conditions, for which water remains within the core and continuous steaming feeds metal/steam oxidation reactions on the in-core metallic surfaces. However, one dominant set of accident scenarios would occur with reduced metal oxidation under "dry" core degradation conditions and, prior to CORA-33, this set had been neglected experimentally. The CORA-33 experiment was designed specifically to address this dominant set of BWR "dry" core severe accident scenarios and to partially resolve phenomenological uncertainties concerning the behavior of relocating metallic melts draining into the lower regions of a "dry" BWR core. CORA-33 was conducted on October 1, 1992, in the CORA test facility at KfK. Review of the CORA-33 data indicates that the test objectives were achieved; that is, core degradation occurred at a core heatup rate and a test section axial temperature profile that are prototypic of full-core nuclear power plant (NPP) simulations at "dry" core conditions. Simulations of the CORA-33 test at ORNL have required modification of existing control blade/canister materials interaction models to include the eutectic melting of the stainless steel/Zircaloy interaction products and the heat of mixing of stainless steel and Zircaloy. The timing and location of canister failure and melt intrusion into the fuel assembly appear to be adequately simulated by the ORNL models. This paper presents the results of the posttest analyses carried out at ORNL based upon the experimental data and the posttest examination of the test bundle at KfK. The implications of these results with respect to degraded core modeling and the associated safety issues are also discussed.
Yin, Xuesong; Tang, Chunhua; Zhang, Liuyang; Yu, Zhi Gen; Gong, Hao
2016-02-09
Nanostructured core/shell electrodes have been experimentally demonstrated to be promising for high-performance electrochemical energy storage devices. However, chemical insights into the significant roles of nanowire cores in the growth of shells and their supercapacitor behavior remain a research shortfall. In this work, by substituting 1/3 of the cobalt in the Co3O4 nanowire core with nickel, a 61% enhancement of the specific mass-loading of the Ni(OH)2 shell, a 93% increase in volumetric capacitance, and superior cyclability were achieved in a novel NiCo2O4/Ni(OH)2 core/shell electrode compared with a Co3O4/Ni(OH)2 one. A comparative study suggested that not only the growth of the Ni(OH)2 shells but also the contribution of the cores accounted for the overall performance. Importantly, their chemical origins were revealed through a theoretical simulation of the core/shell interfacial energy changes. In addition, asymmetric supercapacitor devices and applications were explored. The scientific clues and practical potentials obtained in this work are helpful for the design and analysis of alternative core/shell electrode materials.
Fast simulation of the NICER instrument
NASA Astrophysics Data System (ADS)
Doty, John P.; Wampler-Doty, Matthew P.; Prigozhin, Gregory Y.; Okajima, Takashi; Arzoumanian, Zaven; Gendreau, Keith
2016-07-01
The NICER mission uses a complicated physical system to collect information from objects that are, by x-ray timing science standards, rather faint. To get the most out of the data we will need a rigorous understanding of all instrumental effects. We are in the process of constructing a very fast, high-fidelity simulator that will help us to assess instrument performance, support simulation-based data reduction, and improve our estimates of measurement error. We will combine and extend existing optics, detector, and electronics simulations, and we will employ the Compute Unified Device Architecture (CUDA) to parallelize these calculations. The price of suitable CUDA-compatible multi-giga-op cores is about $0.20/core, so this approach will be very cost-effective.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bylaska, Eric J.; Jacquelin, Mathias; De Jong, Wibe A.
2017-10-20
Ab initio molecular dynamics (AIMD) methods are an important class of algorithms, as they enable scientists to understand the chemistry and dynamics of molecular and condensed phase systems while retaining a first-principles-based description of their interactions. Many-core architectures such as the Intel® Xeon Phi™ processor are an interesting and promising target for these algorithms, as they can provide the computational power that is needed to solve interesting problems in chemistry. In this paper, we describe the efforts of refactoring the existing AIMD plane-wave method of NWChem from an MPI-only implementation to a scalable, hybrid code that employs MPI and OpenMP to exploit the capabilities of current and future many-core architectures. We describe the optimizations required to get close to optimal performance for the multiplication of the tall-and-skinny matrices that form the core of the computational algorithm. We present strong scaling results on the complete AIMD simulation for a test case that simulates 256 water molecules and that strong-scales well on a cluster of 1024 nodes of Intel Xeon Phi processors. We also compare with the performance obtained on a cluster of dual-socket Intel® Xeon® E5-2698v3 processors.
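In a plane-wave code, the tall-and-skinny product in question reduces to a local GEMM on each rank's slice of plane-wave coefficients followed by an all-reduce. A sketch with mpi4py and NumPy follows; the shapes are illustrative, and NWChem's actual data layout and kernels differ.

```python
from mpi4py import MPI
import numpy as np

# Sketch of the tall-and-skinny product at the heart of plane-wave AIMD:
# A is (n_pw_local x n_orb), with plane-wave rows distributed over ranks;
# the small (n_orb x n_orb) overlap is a local GEMM plus an all-reduce.
comm = MPI.COMM_WORLD
n_pw_local, n_orb = 100000, 256            # illustrative sizes
A = np.random.default_rng(comm.Get_rank()).standard_normal((n_pw_local, n_orb))

S_local = A.T @ A                          # local GEMM (threaded BLAS / OpenMP)
S = np.empty_like(S_local)
comm.Allreduce(S_local, S, op=MPI.SUM)     # sum partial overlaps across ranks
```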
Poirier, Therese I; Pailden, Junvie; Jhala, Ray; Ronald, Katie; Wilhelm, Miranda; Fan, Jingyang
2017-04-01
Objectives. To conduct a prospective evaluation of the effectiveness of an error disclosure assessment tool and video recordings in enhancing student learning and metacognitive skills while assessing the IPEC competencies. Design. Instruments for assessing performance (planning, communication, process, and team dynamics) in interprofessional error disclosure were developed. Student self-assessments of performance before and after viewing the recordings of their encounters were obtained. Faculty used a similar instrument to conduct real-time assessments. An instrument to assess achievement of the Interprofessional Education Collaborative (IPEC) core competencies was developed. Qualitative data were reviewed to determine student and faculty perceptions of the simulation. Assessment. The interprofessional simulation training involved a total of 233 students (50 dental, 109 nursing, and 74 pharmacy). Use of video recordings made a significant difference in student self-assessment for the communication and process categories of error disclosure. No differences in student self-assessments were noted among the different professions. There were differences among the family member affects for planning and communication for both pre-video and post-video data. There were significant differences between student self-assessment and faculty assessment for all paired comparisons, except communication in student post-video self-assessment. Students' perceptions of achievement of the IPEC core competencies were positive. Conclusion. The use of assessment instruments and video recordings may have enhanced students' metacognitive skills for assessing performance in interprofessional error disclosure. The simulation training was effective in enhancing perceptions of achievement of the IPEC core competencies. This enhanced assessment process appeared to enhance learning about the skills needed for interprofessional error disclosure.
Coupled field effects in BWR stability simulations using SIMULATE-3K
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borkowski, J.; Smith, K.; Hagrman, D.
1996-12-31
The SIMULATE-3K code is the transient analysis version of the Studsvik advanced nodal reactor analysis code, SIMULATE-3. Recent developments have focused on further broadening the range of transient applications by refinement of the core thermal-hydraulic models and on comparison with boiling water reactor (BWR) stability measurements performed at Ringhals unit 1 during the startups of cycles 14 through 17.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Powell, Cedric J., E-mail: cedric.powell@nist.gov; Chudzicki, Maksymilian; Werner, Wolfgang S. M.
2015-09-15
The National Institute of Standards and Technology database for the simulation of electron spectra for surface analysis has been used to simulate Cu 2p photoelectron spectra for four types of spherical copper–gold nanoparticles (NPs). These simulations were made to extend the work of Tougaard [J. Vac. Sci. Technol. A 14, 1415 (1996)] and of Powell et al. [J. Vac. Sci. Technol. A 31, 021402 (2013)], who performed similar simulations for four types of planar copper–gold films. The Cu 2p spectra for the NPs were compared and contrasted with analogous results for the planar films, and the effects of elastic scattering were investigated. The new simulations were made for a monolayer of three types of Cu/Au core–shell NPs on a Si substrate: (1) an Au shell of variable thickness on a Cu core with diameters of 0.5, 1.0, 2.0, 5.0, and 10.0 nm; (2) a Cu shell of variable thickness on an Au core with diameters of 0.5, 1.0, 2.0, 5.0, and 10.0 nm; and (3) an Au shell of variable thickness on a 1 nm Cu shell on an Au core with diameters of 0.5, 1.0, 2.0, 5.0, and 10.0 nm. For these three morphologies, the outer-shell thickness was varied until the Cu 2p3/2 peak intensity was the same (within 2%) as that found in our previous work with planar Cu/Au morphologies. The authors also performed similar simulations for a monolayer of spherical NPs consisting of a CuAux alloy (also on a Si substrate) with diameters of 0.5, 1.0, 2.0, 5.0, and 10.0 nm. In the latter simulations, the relative Au concentration (x) was varied to give the same Cu 2p3/2 peak intensity (within 2%) as that found previously. For each morphology, the authors performed simulations with elastic scattering switched on and off. The authors found that elastic-scattering effects were generally strong for the Cu-core/Au-shell and weak for the Au-core/Cu-shell NPs; intermediate elastic-scattering effects were found for the Au-core/Cu-shell/Au-shell NPs. The shell thicknesses required to give the selected Cu 2p3/2 peak intensity for the three types of core–shell NPs were less than the corresponding film thicknesses of planar samples, since Cu 2p photoelectrons can be detected from the sides and, for the smaller NPs, bottoms of the NPs. Elastic-scattering effects were also observed on the Au atomic fractions found for the CuAux NP alloys with different diameters.
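The qualitative reason shells can be thinner than planar films follows from straight-line-approximation attenuation, I = I₀ exp(−t/(λ cos θ)). A toy estimate is sketched below; λ and the intensity ratio are illustrative numbers, not values from these simulations.

```python
import numpy as np

# Toy straight-line-approximation estimate: a photoelectron signal generated
# below an overlayer of thickness t is attenuated as exp(-t / (lam * cos(theta))).
# lam (effective attenuation length) and the target ratio are illustrative.
lam, theta = 1.6, np.deg2rad(0.0)   # nm, emission angle from surface normal
ratio = 0.30                        # desired I/I0 for the buried Cu 2p signal
t = -lam * np.cos(theta) * np.log(ratio)
print(f"overlayer thickness giving I/I0 = {ratio}: {t:.2f} nm")
```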
Multi-dimensional Core-Collapse Supernova Simulations with Neutrino Transport
NASA Astrophysics Data System (ADS)
Pan, Kuo-Chuan; Liebendörfer, Matthias; Hempel, Matthias; Thielemann, Friedrich-Karl
We present multi-dimensional core-collapse supernova simulations using the Isotropic Diffusion Source Approximation (IDSA) for the neutrino transport and a modified potential for general relativity in two different supernova codes: FLASH and ELEPHANT. Due to the complexity of the core-collapse supernova explosion mechanism, simulations require not only high-performance computers and the exploitation of GPUs, but also sophisticated approximations to capture the essential microphysics. We demonstrate that the IDSA is an elegant and efficient neutrino radiation transfer scheme, which is portable to multiple hydrodynamics codes and fast enough to investigate long-term evolutions in two and three dimensions. Simulations with a 40 solar mass progenitor are presented in both FLASH (1D and 2D) and ELEPHANT (3D) as an extreme test condition. It is found that the black hole formation time is delayed in multiple dimensions and we argue that the strong standing accretion shock instability before black hole formation will lead to strong gravitational waves.
Kinetic turbulence simulations at extreme scale on leadership-class systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Bei; Ethier, Stephane; Tang, William
2013-01-01
Reliable predictive simulation capability addressing confinement properties in magnetically confined fusion plasmas is critically important for ITER, a 20-billion-dollar international burning plasma device under construction in France. The complex study of kinetic turbulence, which can severely limit the energy confinement and impact the economic viability of fusion systems, requires simulations at extreme scale for such an unprecedented device size. Our newly optimized, global, ab initio particle-in-cell code solving the nonlinear equations underlying gyrokinetic theory achieves excellent performance with respect to "time to solution" at the full capacity of the IBM Blue Gene/Q, on the 786,432 cores of Mira at ALCF and recently on the 1,572,864 cores of Sequoia at LLNL. Recent multithreading and domain decomposition optimizations in the new GTC-P code represent critically important software advances for modern, low-memory-per-core systems by enabling routine simulations at unprecedented size (130 million grid points, ITER scale) and resolution (65 billion particles).
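The performance-critical kernel in gyrokinetic particle-in-cell codes of this kind is the particle-grid gather/scatter. As a rough illustration of the scatter step only, here is a 1D cloud-in-cell charge deposition; the grid size, particle count, and weighting are illustrative, not GTC-P's actual scheme.

```python
import numpy as np

# Cloud-in-cell deposition onto a 1D periodic grid: each particle's charge is
# split between its two nearest grid nodes. Multithreading and domain
# decomposition in PIC codes parallelize exactly this scatter pattern.
rng = np.random.default_rng(1)
n_grid, n_part, L = 64, 100000, 1.0
x = rng.uniform(0, L, n_part)            # particle positions
dx = L / n_grid

idx = np.floor(x / dx).astype(int)       # left grid node of each particle
frac = x / dx - idx                      # fractional distance to that node
rho = np.zeros(n_grid)
np.add.at(rho, idx % n_grid, 1.0 - frac)       # weight to left node
np.add.at(rho, (idx + 1) % n_grid, frac)       # weight to right node
rho /= dx                                # convert counts to density
```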
Parallel Discrete Molecular Dynamics Simulation With Speculation and In-Order Commitment.
Khan, Md Ashfaquzzaman; Herbordt, Martin C
2011-07-20
Discrete molecular dynamics simulation (DMD) uses simplified and discretized models enabling simulations to advance by event rather than by timestep. DMD is an instance of discrete event simulation and so is difficult to scale: even in this multi-core era, all reported DMD codes are serial. In this paper we discuss the inherent difficulties of scaling DMD and present our method of parallelizing DMD through event-based decomposition. Our method is microarchitecture inspired: speculative processing of events exposes parallelism, while in-order commitment ensures correctness. We analyze the potential of this parallelization method for shared-memory multiprocessors. Achieving scalability required extensive experimentation with scheduling and synchronization methods to mitigate serialization. The speed-up achieved for a variety of system sizes and complexities is nearly 6× on an 8-core and over 9× on a 12-core processor. We present and verify analytical models that account for the achieved performance as a function of available concurrency and architectural limitations.
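A minimal sketch of the idea, speculation with in-order commitment, is given below. The window size, the conflict test (write-set overlap), and the handler signature are illustrative simplifications rather than the paper's actual implementation, and it is shown sequentially for clarity; a real implementation would evaluate the speculation step concurrently.

```python
import heapq

# Events are *processed* possibly out of order (a window of the W earliest),
# but their state updates are *committed* strictly in timestamp order; a
# speculative result is discarded and recomputed if an earlier commit in the
# same window touched overlapping state. handler(ev, snapshot) must return
# (writes: dict, new_events: list). Event tuples are (time, id, payload);
# unique ids keep heap comparisons well defined.
def run(events, handler, state, W=4):
    heapq.heapify(events)
    while events:
        window = [heapq.heappop(events) for _ in range(min(W, len(events)))]
        results = [(ev, handler(ev, dict(state))) for ev in window]  # speculate
        touched = set()
        for ev, (writes, new_events) in results:        # commit in time order
            if touched & set(writes):                   # conflict: redo against committed state
                writes, new_events = handler(ev, dict(state))
            state.update(writes)
            touched |= set(writes)
            for ne in new_events:
                heapq.heappush(events, ne)
    return state
```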
Adaptive multi-GPU Exchange Monte Carlo for the 3D Random Field Ising Model
NASA Astrophysics Data System (ADS)
Navarro, Cristóbal A.; Huang, Wei; Deng, Youjin
2016-08-01
This work presents an adaptive multi-GPU Exchange Monte Carlo approach for the simulation of the 3D Random Field Ising Model (RFIM). The design is based on a two-level parallelization. The first level, spin-level parallelism, maps the parallel computation onto optimal 3D thread-blocks that simulate blocks of spins in shared memory with minimal halo surface, assuming a constant block volume. The second level, replica-level parallelism, uses multi-GPU computation to handle the simulation of an ensemble of replicas. CUDA's concurrent kernel execution feature is used in order to fill the occupancy of each GPU with many replicas, providing a performance boost that is most noticeable at the smallest values of L. In addition to the two-level parallel design, the work proposes an adaptive multi-GPU approach that dynamically builds a proper temperature set free of exchange bottlenecks. The strategy is based on mid-point insertions at the temperature gaps where the exchange rate is most compromised. The extra work generated by the insertions is balanced across the GPUs independently of where the mid-point insertions were performed. Performance results show that spin-level performance is approximately two orders of magnitude faster than a single-core CPU version and one order of magnitude faster than a parallel multi-core CPU version running on 16 cores. Multi-GPU performance scales very well under a weak scaling setting, reaching up to 99% efficiency as long as the number of GPUs and L increase together. The combination of the adaptive approach with the parallel multi-GPU design has extended our possibilities of simulation to sizes of L = 32, 64 for a workstation with two GPUs. Sizes beyond L = 64 can eventually be studied using larger multi-GPU systems.
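The mid-point insertion strategy itself is easy to sketch. The version below assumes measured swap-acceptance rates per adjacent temperature pair and an illustrative threshold; the paper's exact criterion is not given in the abstract.

```python
# Adaptive temperature-set refinement for replica exchange: insert a mid-point
# temperature wherever the measured swap acceptance between neighbours falls
# below a threshold. Threshold and rates are illustrative.
def refine_temperatures(temps, swap_rates, threshold=0.2):
    new_temps = [temps[0]]
    for T_lo, T_hi, rate in zip(temps, temps[1:], swap_rates):
        if rate < threshold:                        # exchange bottleneck
            new_temps.append(0.5 * (T_lo + T_hi))   # mid-point insertion
        new_temps.append(T_hi)
    return new_temps

temps = [1.0, 1.5, 2.0, 3.0]
rates = [0.45, 0.10, 0.38]        # measured swap acceptance per adjacent pair
print(refine_temperatures(temps, rates))   # -> [1.0, 1.5, 1.75, 2.0, 3.0]
```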
Lee, Ki-Sun; Shin, Joo-Hee; Kim, Jong-Eun; Kim, Jee-Hwan; Lee, Won-Chang; Shin, Sang-Wan; Lee, Jeong-Yol
2017-01-01
The aim of this study was to evaluate the biomechanical behavior and long-term safety of the high-performance polymer PEKK as an intraradicular dental post-core material through comparative finite element analysis (FEA) with other conventional post-core materials. A 3D FEA model of a maxillary central incisor was constructed. A cyclic loading force of 50 N was applied at an angle of 45° to the longitudinal axis of the tooth at the palatal surface of the crown. For comparison with traditionally used post-core materials, three materials (gold, fiberglass, and PEKK) were simulated to determine their post-core properties. PEKK, with a lower elastic modulus than root dentin, showed comparably high failure resistance and a more favorable stress distribution than the conventional post-core materials. However, the PEKK post-core system showed a higher probability of debonding and crown failure under long-term cyclic loading than the metal or fiberglass post-core systems.
NASA Astrophysics Data System (ADS)
Ryu, Hoon; Jeong, Yosang; Kang, Ji-Hoon; Cho, Kyu Nam
2016-12-01
Modelling of multi-million-atom semiconductor structures is important, as it not only predicts the properties of physically realizable novel materials but can also accelerate advanced device designs. This work elaborates a new Technology Computer-Aided Design (TCAD) tool for nanoelectronics modelling, which uses a sp3d5s* tight-binding approach to describe multi-million-atom structures and simulates their electronic structures with high performance computing (HPC), including atomic effects such as alloy and dopant disorders. Named the Quantum simulation tool for Advanced Nanoscale Devices (Q-AND), the tool shows good scalability on traditional multi-core HPC clusters, implying a strong capability for large-scale electronic structure simulations, with particularly notable performance enhancement on recent clusters of Intel Xeon Phi™ coprocessors. A review of the recent modelling study conducted to understand an experimental work on highly phosphorus-doped silicon nanowires is presented to demonstrate the utility of Q-AND. Having been developed via an Intel Parallel Computing Center project, Q-AND will be opened to the public to establish a sound framework for nanoelectronics modelling with advanced many-core HPC clusters. With details of the development methodology and an exemplary study of dopant electronics, this work presents a practical guideline for TCAD development for researchers in the field of computational nanoelectronics.
NASA Astrophysics Data System (ADS)
Abidi, Dhafer
TTEthernet is a deterministic network technology that enhances Layer 2 Quality-of-Service (QoS) for Ethernet. The components that implement its services enrich the Ethernet functionality with distributed fault-tolerant synchronization, robust temporal partitioning of bandwidth, and synchronous communication with fixed latency and low jitter. TTEthernet services can facilitate the design of scalable, robust, less complex distributed systems and architectures that are tolerant to faults. Simulation is nowadays an essential step in the critical systems design process and represents valuable support for validation and performance evaluation. CoRE4INET is a project bringing together all TTEthernet simulation models currently available; it is based on extensions of the models of the OMNeT++ INET framework. Our objective is to study and simulate the TTEthernet protocol on a flight management subsystem (FMS). The idea is to use CoRE4INET to design the simulation model of the target system. The problem is that CoRE4INET does not offer a task scheduling tool for TTEthernet networks. To overcome this problem, we propose an adaptation, for simulation purposes, of a task scheduling approach based on a formal specification of the network constraints. The use of the Yices solver allowed the translation of the formal specification into an executable program to generate the desired transmission plan. A concluding case study allowed us to assess the impact of the arrangement of Time-Triggered frame offsets on the performance of each type of traffic in the system.
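The scheduling step can be sketched as a constraint problem: each time-triggered frame gets an integer offset, bounded by its period, such that no two frame occurrences overlap on the shared link within the hyperperiod. The sketch below uses the z3 SMT solver's Python API as a stand-in for the Yices-based flow described in the abstract; the frame periods and durations are invented for illustration.

```python
from math import lcm
from z3 import Int, Solver, Or, sat

# Frames as (period_us, duration_us); the unknowns are their offsets.
frames = [(1000, 100), (2000, 250), (1000, 120)]
H = lcm(*[p for p, _ in frames])               # hyperperiod

s = Solver()
off = [Int(f"o{i}") for i in range(len(frames))]
for i, (p, c) in enumerate(frames):
    s.add(off[i] >= 0, off[i] + c <= p)        # offset within the period
# No two occurrences of different frames may overlap on the shared link.
for i, (pi, ci) in enumerate(frames):
    for j, (pj, cj) in enumerate(frames):
        if i < j:
            for a in range(H // pi):
                for b in range(H // pj):
                    s.add(Or(off[i] + a * pi + ci <= off[j] + b * pj,
                             off[j] + b * pj + cj <= off[i] + a * pi))
if s.check() == sat:
    m = s.model()
    print([m[o] for o in off])                 # one feasible transmission plan
```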
Pellet-clad mechanical interaction screening using VERA applied to Watts Bar Unit 1, Cycles 1–3
Stimpson, Shane; Powers, Jeffrey; Clarno, Kevin; ...
2017-12-22
The Consortium for Advanced Simulation of Light Water Reactors (CASL) aims to provide high-fidelity multiphysics simulations of light water nuclear reactors. To accomplish this, CASL is developing the Virtual Environment for Reactor Applications (VERA), which is a suite of code packages for thermal hydraulics, neutron transport, fuel performance, and coolant chemistry. As VERA continues to grow and expand, there has been an increased focus on incorporating fuel performance analysis methods. One of the primary goals of CASL is to estimate local cladding failure probability through pellet-clad interaction, which consists of both pellet-clad mechanical interaction (PCMI) and stress corrosion cracking. Estimating clad failure is important to preventing the release of fission products to the primary system, and accurate estimates could prove useful in establishing less conservative power ramp rates or when considering load-follow operations. While this capability is being pursued through several different approaches, the procedure presented in this article focuses on running independent fuel performance calculations with BISON using a file-based one-way coupling based on multicycle output data from high-fidelity, pin-resolved coupled neutron transport-thermal hydraulics simulations. This type of approach is consistent with traditional fuel performance analysis methods, which are typically separate from core simulation analyses. A more tightly coupled approach is currently being developed, which is the ultimate target application in CASL. Recent work simulating 12 cycles of Watts Bar Unit 1 with the VERA core simulator is capitalized upon, and quarter-core BISON results for parameters of interest to PCMI (maximum centerline fuel temperature, maximum clad hoop stress, and minimum gap size) are presented for Cycles 1-3. Based on these results, this capability demonstrates its value as a screening tool for gathering insight into PCMI, singling out limiting rods for further, more detailed analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoginath, Srikanth B; Perumalla, Kalyan S; Henz, Brian J
2012-01-01
In prior work (Yoginath and Perumalla, 2011; Yoginath, Perumalla and Henz, 2012), the motivation, challenges and issues were articulated in favor of virtual time ordering of Virtual Machines (VMs) in network simulations hosted on multi-core machines. Two major components in the overall virtualization challenge are (1) virtual timeline establishment and scheduling of VMs, and (2) virtualization of inter-VM communication. Here, we extend prior work by presenting scaling results for the first component, with experiment results on up to 128 VMs scheduled in virtual time order on a single 12-core host. We also explore the solution space of design alternatives for the second component, and present performance results from a multi-threaded, multi-queue implementation of inter-VM network control for synchronized execution with VM scheduling, incorporated in our NetWarp simulation system.
2010-07-22
dependent, providing a natural bandwidth match between compute cores and the memory subsystem. • High Bandwidth Density. Waveguides crossing the chip...simulate this memory access architecture on a 256-core chip with a concentrated 64-node network using detailed traces of high-performance embedded...memory modules, we place memory access points (MAPs) around the periphery of the chip connected to the network. These MAPs, shown in Figure 4, contain
Initial Comparison of Direct and Legacy Modeling Approaches for Radial Core Expansion Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shemon, Emily R.
2016-10-10
Radial core expansion in sodium-cooled fast reactors provides an important reactivity feedback effect. As the reactor power increases due to normal startup conditions or accident scenarios, the core and surrounding materials heat up, causing both grid plate expansion and bowing of the assembly ducts. When the core restraint system is designed correctly, the resulting structural deformations introduce negative reactivity, which decreases the reactor power. Historically, an indirect procedure has been used to estimate the reactivity feedback due to structural deformation, which relies upon perturbation theory and coupling legacy physics codes with limited geometry capabilities. With advancements in modeling and simulation, radial core expansion phenomena can now be modeled directly, providing an assessment of the accuracy of the reactivity feedback coefficients generated by indirect legacy methods. Recently a new capability was added to the PROTEUS-SN unstructured geometry neutron transport solver to analyze deformed meshes quickly and directly. By supplying the deformed mesh in addition to the base configuration input files, PROTEUS-SN automatically processes material adjustments, including calculation of region densities to conserve mass, calculation of isotopic densities according to material models (for example, sodium density as a function of temperature), and subsequent re-homogenization of materials. To verify the new capability of directly simulating deformed meshes, PROTEUS-SN was used to compute reactivity feedback for a series of contrived yet representative deformed configurations for the Advanced Burner Test Reactor design. The indirect legacy procedure was also performed to generate reactivity feedback coefficients for the same deformed configurations. Interestingly, the legacy procedure consistently overestimated reactivity feedbacks by 35% compared to direct simulations by PROTEUS-SN. This overestimation indicates that the legacy procedures are in fact not conservative and could be overestimating reactivity feedback effects that are closely tied to reactor safety. We conclude that there is indeed value in performing direct simulation of deformed meshes despite the increased computational expense. PROTEUS-SN is already part of the SHARP multi-physics toolkit, where both thermal-hydraulic and structural-mechanical feedback modeling can be applied, but this is the first comparison of direct simulation to legacy techniques for radial core expansion.
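As an illustration of the mass-conserving density adjustment mentioned above, here is a sketch under the assumption that a region deforms uniformly; the values are invented.

```python
# When a mesh region deforms, isotopic number densities are rescaled by the
# volume ratio so that the total number of atoms in the region is preserved.
def rehomogenize(number_densities, V_base, V_deformed):
    scale = V_base / V_deformed
    return {iso: n * scale for iso, n in number_densities.items()}

n0 = {"U235": 1.2e21, "U238": 2.2e22, "Na23": 8.0e21}   # atoms/cm^3, illustrative
print(rehomogenize(n0, V_base=10.0, V_deformed=10.3))    # expanded region -> diluted
```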
Offner, Stella S. R.; Klein, Richard I.; McKee, Christopher F.
2008-10-20
Molecular clouds are observed to be turbulent, but the origin of this turbulence is not well understood. As a result, there are two different approaches to simulating molecular clouds, one in which the turbulence is allowed to decay after it is initialized, and one in which it is driven. We use the adaptive mesh refinement (AMR) code, Orion, to perform high-resolution simulations of molecular cloud cores and protostars in environments with both driven and decaying turbulence. We include self-gravity, use a barotropic equation of state, and represent regions exceeding the maximum grid resolution with sink particles. We analyze the properties of bound cores such as size, shape, line width, and rotational energy, and we find reasonable agreement with observation. At high resolution the different rates of core accretion in the two cases have a significant effect on protostellar system development. Clumps forming in a decaying turbulence environment produce high-multiplicity protostellar systems with Toomre Q unstable disks that exhibit characteristics of the competitive accretion model for star formation. In contrast, cores forming in the context of continuously driven turbulence and virial equilibrium form smaller protostellar systems with fewer low-mass members. Furthermore, our simulations of driven and decaying turbulence show some statistically significant differences, particularly in the production of brown dwarfs and core rotation, but the uncertainties are large enough that we are not able to conclude whether observations favor one or the other.
Pope, Bernard J; Fitch, Blake G; Pitman, Michael C; Rice, John J; Reumann, Matthias
2011-10-01
Future multiscale and multiphysics models that support research into human disease, translational medical science, and treatment can utilize the power of high-performance computing (HPC) systems. We anticipate that computationally efficient multiscale models will require the use of sophisticated hybrid programming models, mixing distributed message-passing processes [e.g., the message-passing interface (MPI)] with multithreading (e.g., OpenMP, Pthreads). The objective of this study is to compare the performance of such hybrid programming models when applied to the simulation of a realistic physiological multiscale model of the heart. Our results show that the hybrid models perform favorably when compared to an implementation using only MPI, and furthermore, that OpenMP in combination with MPI provides a satisfactory compromise between performance and code complexity. Having the ability to use threads within MPI processes enables the sophisticated use of all processor cores for both computation and communication phases. Considering that HPC systems in 2012 will have two orders of magnitude more cores than were used in this study, we believe that faster than real-time multiscale cardiac simulations can be achieved on these systems.
A Numerical and Experimental Study of Damage Growth in a Composite Laminate
NASA Technical Reports Server (NTRS)
McElroy, Mark; Ratcliffe, James; Czabaj, Michael; Wang, John; Yuan, Fuh-Gwo
2014-01-01
The present study has three goals: (1) perform an experiment where a simple laminate damage process can be characterized in high detail; (2) evaluate the performance of existing commercially available laminate damage simulation tools by modeling the experiment; (3) observe and understand the underlying physics of damage in a composite honeycomb sandwich structure subjected to low-velocity impact. A quasi-static indentation experiment has been devised to provide detailed information about a simple mixed-mode damage growth process. The test specimens consist of an aluminum honeycomb core with a cross-ply laminate facesheet supported on a stiff uniform surface. When the sample is subjected to an indentation load, the honeycomb core provides support to the facesheet, resulting in a gradual and stable damage growth process in the skin. This enables real-time observation as a matrix crack forms, propagates through a ply, and then causes a delamination. Finite element analyses were conducted in ABAQUS/Explicit™ 6.13 that used continuum and cohesive modeling techniques to simulate facesheet damage and a geometric and material nonlinear model to simulate core crushing. The high fidelity of the experimental data allows a detailed investigation and discussion of the accuracy of each numerical modeling approach.
Stability Estimation of ABWR on the Basis of Noise Analysis
NASA Astrophysics Data System (ADS)
Furuya, Masahiro; Fukahori, Takanori; Mizokami, Shinya; Yokoya, Jun
In order to investigate the stability of a nuclear reactor core loaded with mixed uranium-plutonium oxide (MOX) fuel, channel stability and regional stability tests were conducted with the SIRIUS-F facility. The SIRIUS-F facility was designed and constructed to provide a highly accurate simulation of thermal-hydraulic (channel) instabilities and coupled thermal-hydraulics/neutronics instabilities of Advanced Boiling Water Reactors (ABWRs). A real-time simulation was performed by modal point kinetics of reactor neutronics and fuel-rod thermal conduction on the basis of the void fraction measured in the reactor core section of the facility. A time series analysis was performed to calculate the decay ratio and resonance frequency from the dominant pole of a transfer function by applying auto-regressive (AR) methods to the time series of the core inlet flow rate. Experiments were conducted with the SIRIUS-F facility, which simulates an ABWR loaded with MOX fuel. The variations in the decay ratio and resonance frequency among the five common AR methods are within 0.03 and 0.01 Hz, respectively. In this system, the appropriate decay ratio and resonance frequency can be estimated on the basis of the Yule-Walker method with a model order of 30.
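As a sketch of the Yule-Walker variant of this procedure: fit an AR(p) model to the flow signal, take the dominant oscillatory pole z of the fitted transfer function, and read off the decay ratio as |z| raised to the number of samples per oscillation period. The sampling interval is an assumption, and the facility's actual signal processing may differ.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def decay_ratio(x, p=30, dt=0.04):
    """Fit AR(p) by Yule-Walker and return (decay ratio, resonance freq in Hz)."""
    x = np.asarray(x, float) - np.mean(x)
    r = np.correlate(x, x, mode="full")[len(x) - 1:] / len(x)  # autocovariance
    a = solve_toeplitz(r[:p], r[1:p + 1])       # Yule-Walker AR coefficients
    poles = np.roots(np.r_[1.0, -a])            # poles of 1/(1 - sum a_k z^-k)
    z = max((q for q in poles if q.imag > 0), key=abs)  # dominant oscillatory pole
    theta = np.angle(z)                         # radians per sample
    return abs(z) ** (2 * np.pi / theta), theta / (2 * np.pi * dt)
```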
Reversible Parallel Discrete-Event Execution of Large-scale Epidemic Outbreak Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perumalla, Kalyan S; Seal, Sudip K
2010-01-01
The spatial scale, runtime speed and behavioral detail of epidemic outbreak simulations together require the use of large-scale parallel processing. In this paper, an optimistic parallel discrete event execution of a reaction-diffusion simulation model of epidemic outbreaks is presented, with an implementation over the µsik simulator. Rollback support is achieved with the development of a novel reversible model that combines reverse computation with a small amount of incremental state saving. Parallel speedup and other runtime performance metrics of the simulation are tested on a small (8,192-core) Blue Gene/P system, while scalability is demonstrated on 65,536 cores of a large Cray XT5 system. Scenarios representing large population sizes (up to several hundred million individuals in the largest case) are exercised.
Damage Tolerance of Sandwich Plates With Debonded Face Sheets
NASA Technical Reports Server (NTRS)
Sankar, Bhavani V.
2001-01-01
A nonlinear finite element analysis was performed to simulate axial compression of sandwich beams with debonded face sheets. Load versus end-shortening diagrams were generated for a variety of specimens used in a previous experimental study. The energy release rate at the crack tip was computed using the J-integral and plotted as a function of the load. A detailed stress analysis was performed, and the critical stresses in the face sheet and the core were computed. The core was also modeled as an isotropic elastic-perfectly-plastic material, and a nonlinear post-buckling analysis was performed. A Graeco-Latin factorial plan was used to study the effects of debond length, face sheet and core thicknesses, and core density on the load carrying capacity of the sandwich composite. It has been found that a linear buckling analysis is inadequate for determining the maximum load a debonded sandwich beam can carry. A nonlinear post-buckling analysis combined with an elastoplastic model of the core is required to predict the compression behavior of debonded sandwich beams.
Study of sample drilling techniques for Mars sample return missions
NASA Technical Reports Server (NTRS)
Mitchell, D. C.; Harris, P. T.
1980-01-01
To demonstrate the feasibility of acquiring various surface samples for a Mars sample return mission, the following tasks were performed: (1) design of a Mars rover-mounted drill system capable of acquiring crystalline rock cores, prediction of performance, mass, and power requirements for various size systems, and the generation of engineering drawings; (2) performance of simulated permafrost coring tests using a residual Apollo lunar surface drill; (3) design of a rock breaker system which can be used to produce small samples of rock chips from rocks which are too large to return to Earth but too small to be cored with the rover-mounted drill; (4) design of sample containers for the selected regolith cores, rock cores, and small particulate or rock samples; and (5) design of the sample handling and transfer techniques which will be required through all phases of sample acquisition, processing, and stowage on board the Earth return vehicle. A preliminary design of a lightweight rover-mounted sampling scoop was also developed.
Self-consistent core-pedestal transport simulations with neural network accelerated models
Meneghini, Orso; Smith, Sterling P.; Snyder, Philip B.; ...
2017-07-12
Fusion whole device modeling simulations require comprehensive models that are simultaneously physically accurate, fast, robust, and predictive. In this paper we describe the development of two neural-network (NN) based models as a means to perform a non-linear multivariate regression of theory-based models for the core turbulent transport fluxes and the pedestal structure. Specifically, we find that a NN-based approach can be used to consistently reproduce the results of the TGLF and EPED1 theory-based models over a broad range of plasma regimes, with a computational speedup of several orders of magnitude. These models are then integrated into a predictive workflow that allows prediction, with self-consistent core-pedestal coupling, of the kinetic profiles within the last closed flux surface of the plasma. Finally, the NN paradigm is capable of breaking the speed-accuracy trade-off that is expected of traditional numerical physics models, and can provide the missing link towards self-consistent coupled core-pedestal whole device modeling simulations that are physically accurate and yet take only seconds to run.
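A minimal sketch of the surrogate-regression idea follows, using scikit-learn's MLPRegressor on a toy analytic target as a stand-in for a table of TGLF/EPED1 evaluations; the actual models use purpose-built networks and physics inputs.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Regress a slow theory-based model on a table of (inputs -> flux) evaluations,
# then evaluate the fit in microseconds. The target below is a toy analytic
# function, not an actual TGLF/EPED1 output.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (5000, 8))                        # dimensionless inputs
y = np.sin(X[:, 0]) * np.exp(X[:, 1]) + X[:, 2:].sum(1)  # toy "flux"

surrogate = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000))
surrogate.fit(X[:4000], y[:4000])
print("holdout R^2:", surrogate.score(X[4000:], y[4000:]))
```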
Analysis of the TREAT LEU Conceptual Design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Connaway, H. M.; Kontogeorgakos, D. C.; Papadias, D. D.
2016-03-01
Analyses were performed to evaluate the performance of the low enriched uranium (LEU) conceptual design fuel for the conversion of the Transient Reactor Test Facility (TREAT) from its current highly enriched uranium (HEU) fuel. TREAT is an experimental nuclear reactor designed to produce high neutron flux transients for the testing of reactor fuels and other materials. TREAT is currently in non-operational standby but is being restarted under the U.S. Department of Energy's Resumption of Transient Testing Program. The conversion of TREAT is being pursued in keeping with the mission of the Department of Energy National Nuclear Security Administration's Material Management and Minimization (M3) Reactor Conversion Program. The focus of this study was to demonstrate that the converted LEU core is capable of maintaining the performance of the existing HEU core while continuing to operate safely. Neutronic and thermal-hydraulic simulations have been performed to evaluate the performance of the LEU conceptual-design core under both steady-state and transient conditions, for both normal operation and reactivity insertion accident scenarios. In addition, ancillary safety analyses which were performed for previous LEU design concepts have been reviewed and updated as needed, in order to evaluate whether the converted LEU core will function safely with all existing facility systems. Simulations were also performed to evaluate the detailed behavior of the UO2-graphite fuel, to support future fuel manufacturing decisions regarding particle size specifications. The results of these analyses will be used in conjunction with work being performed at Idaho National Laboratory and Los Alamos National Laboratory in order to develop the Conceptual Design Report project deliverable.
Sleep restriction during simulated wildfire suppression: effect on physical task performance.
Vincent, Grace; Ferguson, Sally A; Tran, Jacqueline; Larsen, Brianna; Wolkow, Alexander; Aisbett, Brad
2015-01-01
To examine the effects of sleep restriction on firefighters' physical task performance during simulated wildfire suppression. Thirty-five firefighters were matched and randomly allocated to either a control condition (8-hour sleep opportunity, n = 18) or a sleep restricted condition (4-hour sleep opportunity, n = 17). Performance on physical work tasks was evaluated across three days. In addition, heart rate, core temperature, and worker activity were measured continuously. Ratings of perceived exertion and effort sensation were evaluated during the physical work periods. There were no differences between the sleep-restricted and control groups in firefighters' task performance, heart rate, core temperature, or perceptual responses during self-paced simulated firefighting work tasks. However, the sleep-restricted group were less active during periods of non-physical work compared to the control group. Under self-paced work conditions, 4 h of sleep restriction did not adversely affect firefighters' performance on physical work tasks. However, the sleep-restricted group were less physically active throughout the simulation. This may indicate that sleep-restricted participants adapted their behaviour to conserve effort during rest periods, to subsequently ensure they were able to maintain performance during the firefighter work tasks. This work contributes new knowledge to inform fire agencies of firefighters' operational capabilities when their sleep is restricted during multi-day wildfire events. The work also highlights the need for further research to explore how sleep restriction affects physical performance during tasks of varying duration, intensity, and complexity.
Performance of the ICAO standard core service modulation and coding techniques
NASA Technical Reports Server (NTRS)
Lodge, John; Moher, Michael
1988-01-01
Aviation binary phase shift keying (A-BPSK) is described, and simulated performance results are given that demonstrate robust performance in the presence of hard-limiting amplifiers. The performance of coherently detected A-BPSK with rate-1/2 convolutional coding is given. The performance loss due to Rician fading is shown to be less than 1 dB over the simulated range. A partially coherent detection scheme that does not require carrier phase recovery is also described. This scheme exhibits performance similar to coherent detection at high bit error rates, while it is superior at lower bit error rates.
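For orientation, simulated bit-error-rate figures of this kind are typically produced by Monte Carlo experiments. The following Python sketch estimates the BER of plain coherently detected BPSK over an AWGN channel; it is a generic illustration only and does not model the A-BPSK waveform, the rate-1/2 convolutional code, hard-limiting amplifiers, or Rician fading studied in the paper.

import numpy as np

def bpsk_ber(ebn0_db, n_bits=1_000_000, seed=0):
    # Monte Carlo BER estimate for coherent BPSK over an AWGN channel.
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, n_bits)
    symbols = 1.0 - 2.0 * bits                    # 0 -> +1, 1 -> -1
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    noise_std = np.sqrt(1.0 / (2.0 * ebn0))       # unit symbol energy
    received = symbols + noise_std * rng.standard_normal(n_bits)
    return np.mean((received < 0).astype(int) != bits)

for snr_db in range(0, 9, 2):
    print(f"Eb/N0 = {snr_db} dB: BER ~ {bpsk_ber(snr_db):.2e}")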
NASA Astrophysics Data System (ADS)
Wang, Lei; Fan, Youping; Zhang, Dai; Ge, Mengxin; Zou, Xianbin; Li, Jingjiao
2017-09-01
This paper proposes a method to simulate a back-to-back modular multilevel converter (MMC) HVDC transmission system. An equivalent network is used to simulate the dynamic power system. Moreover, to account for the performance of the converter station, a basic simulation model built from the core components of the converter station is given. The proposed method is applied to an equivalent of a real power system.
Visualization and Quantification of Rotor Tip Vortices in Helicopter Flows
NASA Technical Reports Server (NTRS)
Kao, David L.; Ahmad, Jasim U.; Holst, Terry L.
2015-01-01
This paper presents an automated approach for effective extraction, visualization, and quantification of vortex core radii from Navier-Stokes simulations of a UH-60A rotor in forward flight. We adopt a scaled Q-criterion to determine vortex regions and then perform vortex core profiling in these regions to calculate vortex core radii. This method provides an efficient way of visualizing and quantifying the blade tip vortices. Moreover, the vortex core radii are displayed graphically in a plane.
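The (unscaled) Q-criterion underlying this approach is simple to evaluate on a gridded velocity field: Q = 0.5(||Omega||^2 - ||S||^2), where S and Omega are the symmetric and antisymmetric parts of the velocity-gradient tensor, and Q > 0 marks rotation-dominated regions. A minimal Python sketch follows; the paper's scaling of Q and its vortex core profiling step are not reproduced here.

import numpy as np

def q_criterion(u, v, w, dx=1.0):
    # Q = 0.5 * (||Omega||^2 - ||S||^2) on a uniform 3D grid; Q > 0 marks
    # rotation-dominated (candidate vortex) regions.
    grads = [np.gradient(c, dx) for c in (u, v, w)]   # grads[i][j] = d u_i / d x_j
    Q = np.zeros_like(u)
    for i in range(3):
        for j in range(3):
            S = 0.5 * (grads[i][j] + grads[j][i])     # strain-rate part
            O = 0.5 * (grads[i][j] - grads[j][i])     # rotation-rate part
            Q += 0.5 * (O**2 - S**2)
    return Q

# Toy check: solid-body rotation about z is rotation-dominated, so Q > 0.
x = np.linspace(-1.0, 1.0, 32)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
U, V, W = -Y, X, np.zeros_like(X)
print(q_criterion(U, V, W, dx=x[1] - x[0])[16, 16, 16] > 0)   # True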
Cosmological simulations of dwarf galaxies with cosmic ray feedback
NASA Astrophysics Data System (ADS)
Chen, Jingjing; Bryan, Greg L.; Salem, Munier
2016-08-01
We perform zoom-in cosmological simulations of a suite of dwarf galaxies, examining the impact of cosmic rays (CRs) generated by supernovae, including the effect of diffusion. We first look at the effect of varying the uncertain CR parameters by repeatedly simulating a single galaxy. Then we fix the cosmic ray model and simulate five dwarf systems with virial masses ranging from 8 to 30 × 10¹⁰ M⊙. We find that including CR feedback (with diffusion) consistently leads to disc-dominated systems with relatively flat rotation curves and constant star formation rates. In contrast, our purely thermal feedback case results in a hot stellar system and bursty star formation. The CR simulations match the observed baryonic Tully-Fisher relation very well, but have a lower gas fraction than real systems. We also find that the dark matter cores of the CR feedback galaxies are cuspy, while the purely thermal feedback case results in a substantial core.
The FRIGG project: From intermediate galactic scales to self-gravitating cores
NASA Astrophysics Data System (ADS)
Hennebelle, Patrick
2018-03-01
Context. Understanding the detailed structure of the interstellar gas is essential for our knowledge of the star formation process. Aims: The small-scale structure of the interstellar medium (ISM) is a direct consequence of the galactic scales, and making the link between the two is essential. Methods: We perform adaptive mesh simulations that aim to bridge the gap between the intermediate galactic scales and the self-gravitating prestellar cores. For this purpose we use stratified, supernova-regulated ISM magneto-hydrodynamical simulations at the kpc scale to set up the initial conditions. We then zoom in, performing a series of concentric uniform refinements and then refining on the Jeans length for the last levels. This allows us to reach a spatial resolution of a few 10⁻³ pc. The cores are identified using a clump finder and various criteria based on virial analysis. Their most relevant properties are computed and, owing to the large number of objects formed in the simulations, reliable statistics are obtained. Results: The cores' properties show encouraging agreement with observations. The mass spectrum presents a clear power law at high masses with an exponent close to -1.3 and a peak at about 1-2 M⊙. The velocity dispersions and angular momenta are respectively a few times the local sound speed and a few 10⁻² pc km s⁻¹. We also find that the distribution of thermally supercritical cores presents a range of magnetic mass-to-flux over critical mass-to-flux ratios, typically between ≃0.3 and 3, indicating that they are significantly magnetized. Investigating the time and spatial dependence of these statistical properties, we conclude that they are not significantly affected by the zooming procedure and that they do not present very large fluctuations. The most severe issue appears to be the dependence of the core mass function (CMF) on the numerical resolution. While the core definition process may possibly introduce some biases, the peak tends to shift to smaller values when the resolution improves. Conclusions: Our simulations, which use self-consistently generated initial conditions at the kpc scale, produce a large number of prestellar cores from which reliable statistics can be inferred. Preliminary comparisons with observations show encouraging agreement. In particular, the inferred CMFs resemble those inferred from recent observations. We stress, however, a possible issue with the peak position shifting with numerical resolution.
Teaching core competencies of reconstructive microsurgery with the use of standardized patients.
Son, Ji; Zeidler, Kamakshi R; Echo, Anthony; Otake, Leo; Ahdoot, Michael; Lee, Gordon K
2013-04-01
The Accreditation Council for Graduate Medical Education has defined 6 core competencies that residents must master before completing their training. Objective structured clinical examinations (OSCEs) using standardized patients are effective educational tools to assess and teach core competencies. We developed an OSCE specific to microsurgical head and neck reconstruction. Fifteen plastic surgery residents participated in the OSCE, which simulated a typical new patient consultation involving a patient with oral cancer. Residents were scored on all 6 core competencies by the standardized patients and faculty experts. Analysis of participant performance showed that although residents performed well overall, many lacked proficiency in systems-based practice. Junior residents were also more likely than senior residents to omit critical elements of the physical examination. We have modified our educational curriculum to specifically address these deficiencies. Our study demonstrates that the OSCE is an effective tool for teaching and assessing all core competencies in microsurgery.
Two-Dimensional Neutronic and Fuel Cycle Analysis of the Transatomic Power Molten Salt Reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Betzler, Benjamin R.; Powers, Jeffrey J.; Worrall, Andrew
2017-01-15
This status report presents the results from the first phase of the collaboration between Transatomic Power Corporation (TAP) and Oak Ridge National Laboratory (ORNL) to provide neutronic and fuel cycle analysis of the TAP core design through the Department of Energy Gateway for Accelerated Innovation in Nuclear, Nuclear Energy Voucher program. The TAP design is a molten salt reactor using movable moderator rods to shift the neutron spectrum in the core from mostly epithermal at beginning of life to thermal at end of life. Additional developments in the ChemTriton modeling and simulation tool provide the critical moderator-to-fuel ratio searches and time-dependent parameters necessary to simulate the continuously changing physics in this complex system. Results from simulations with these tools show agreement with TAP-calculated performance metrics for core lifetime, discharge burnup, and salt volume fraction, verifying the viability of reducing actinide waste production with this design. Additional analyses of time step sizes, mass feed rates and enrichments, and isotopic removals provide additional information to make informed design decisions. This work further demonstrates capabilities of ORNL modeling and simulation tools for analysis of molten salt reactor designs and strongly positions this effort for the upcoming three-dimensional core analysis.
NASA Astrophysics Data System (ADS)
Angulo, A. A.; Kuranz, C. C.; Drake, R. P.; Huntington, C. M.; Park, H.-S.; Remington, B. A.; Kalantar, D.; MacLaren, S.; Raman, K.; Miles, A.; Trantham, Matthew; Kline, J. L.; Flippo, K.; Doss, F. W.; Shvarts, D.
2016-10-01
This poster will describe simulations based on results from ongoing laboratory astrophysics experiments at the National Ignition Facility (NIF) relevant to the effects of radiative shocks on hydrodynamically unstable surfaces. The experiments performed on NIF uniquely provide the conditions required to emulate the radiative shocks that occur in astrophysical systems. The core-collapse explosions of red supergiant stars are one such example, wherein the interaction between the supernova ejecta and the circumstellar medium creates a region susceptible to Rayleigh-Taylor (R-T) instabilities. Radiative and nonradiative experiments were performed to show that R-T growth should be reduced by the effects of the radiative shocks that occur during core collapse. Simulations were performed with the radiation hydrodynamics code Hyades, using the experimental conditions, to find the mean interface acceleration of the instability; the results were then further analyzed with the buoyancy-drag model to observe how material expansion contributes to the mix-layer growth. This work is funded by the NNSA-DS and SC-OFES Joint Program in High-Energy-Density Laboratory Plasmas under Grant Number DE-FG52-09NA29548.
Palmer, T. N.
2014-01-01
This paper sets out a new methodological approach to solving the equations for simulating and predicting weather and climate. In this approach, the conventionally hard boundary between the dynamical core and the sub-grid parametrizations is blurred. This approach is motivated by the relatively shallow power-law spectrum for atmospheric energy on scales of hundreds of kilometres and less. It is first argued that, because of this, the closure schemes for weather and climate simulators should be based on stochastic–dynamic systems rather than deterministic formulae. Second, as high-wavenumber elements of the dynamical core will necessarily inherit this stochasticity during time integration, it is argued that the dynamical core will be significantly over-engineered if all computations, regardless of scale, are performed completely deterministically and if all variables are represented with maximum numerical precision (in practice using double-precision floating-point numbers). As the era of exascale computing is approached, an energy- and computationally efficient approach to cloud-resolved weather and climate simulation is described where determinism and numerical precision are focused on the largest scales only. PMID:24842038
Palmer, T N
2014-06-28
This paper sets out a new methodological approach to solving the equations for simulating and predicting weather and climate. In this approach, the conventionally hard boundary between the dynamical core and the sub-grid parametrizations is blurred. This approach is motivated by the relatively shallow power-law spectrum for atmospheric energy on scales of hundreds of kilometres and less. It is first argued that, because of this, the closure schemes for weather and climate simulators should be based on stochastic-dynamic systems rather than deterministic formulae. Second, as high-wavenumber elements of the dynamical core will necessarily inherit this stochasticity during time integration, it is argued that the dynamical core will be significantly over-engineered if all computations, regardless of scale, are performed completely deterministically and if all variables are represented with maximum numerical precision (in practice using double-precision floating-point numbers). As the era of exascale computing is approached, an energy- and computationally efficient approach to cloud-resolved weather and climate simulation is described where determinism and numerical precision are focused on the largest scales only.
Core-Collapse Supernovae Explored by Multi-D Boltzmann Hydrodynamic Simulations
NASA Astrophysics Data System (ADS)
Sumiyoshi, Kohsuke; Nagakura, Hiroki; Iwakami, Wakana; Furusawa, Shun; Matsufuru, Hideo; Imakura, Akira; Yamada, Shoichi
We report the latest results of numerical simulations of core-collapse supernovae by solving multi-D neutrino-radiation hydrodynamics with Boltzmann equations. One of the longstanding issues with the explosion mechanism of supernovae has been uncertainty in the approximations of neutrino transfer in multi-D, such as the diffusion approximation and the ray-by-ray method. The neutrino transfer is essential, together with 2D/3D hydrodynamical instabilities, to evaluate the neutrino heating behind the shock wave for successful explosions and to predict the neutrino burst signals. We tackled this difficult problem by utilizing our solver of the 6D Boltzmann equation for neutrinos in 3D space and 3D neutrino momentum space, coupled with multi-D hydrodynamics with special and general relativistic extensions. We have performed a set of 2D core-collapse simulations of 11 M⊙ and 15 M⊙ stars on the K computer in Japan, following long-term evolution over 400 ms after bounce, to reveal the outcome of full Boltzmann hydrodynamic simulations with a sophisticated equation of state with multi-nuclear species and updated rates for electron captures on nuclei.
Uncertainty quantification and sensitivity analysis with CASL Core Simulator VERA-CS
Brown, C. S.; Zhang, Hongbin
2016-05-24
Uncertainty quantification and sensitivity analysis are important for nuclear reactor safety design and analysis. A 2x2 fuel assembly core design was developed and simulated by the Virtual Environment for Reactor Applications, Core Simulator (VERA-CS), a coupled neutronics and thermal-hydraulics code under development by the Consortium for Advanced Simulation of Light Water Reactors (CASL). An approach to uncertainty quantification and sensitivity analysis with VERA-CS was developed, and a new toolkit was created to perform uncertainty quantification and sensitivity analysis with fourteen uncertain input parameters. The minimum departure from nucleate boiling ratio (MDNBR), maximum fuel center-line temperature, and maximum outer clad surface temperature were chosen as the figures of merit. Pearson, Spearman, and partial correlation coefficients were considered for all of the figures of merit in the sensitivity analysis, and coolant inlet temperature was consistently the most influential parameter. Parameters used as inputs to the critical heat flux calculation with the W-3 correlation were shown to be the most influential on the MDNBR, maximum fuel center-line temperature, and maximum outer clad surface temperature.
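The correlation-based screening described above follows a standard sampling pattern: draw inputs, evaluate the model, and correlate each input with each figure of merit. The Python sketch below illustrates the pattern with a synthetic stand-in for the model; the input names, distributions, and response are assumptions for illustration, not the VERA-CS model or its data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 500

# Synthetic stand-ins for three uncertain inputs.
inputs = {
    "inlet_temp": rng.normal(565.0, 2.0, n),   # K
    "core_power": rng.normal(1.00, 0.01, n),   # fraction of nominal
    "flow_rate":  rng.normal(1.00, 0.02, n),   # fraction of nominal
}

# Toy response standing in for a figure of merit such as MDNBR: it falls
# with inlet temperature and power, rises with flow, plus unexplained noise.
fom = (2.0 - 0.05 * (inputs["inlet_temp"] - 565.0)
           - 1.5 * (inputs["core_power"] - 1.0)
           + 1.0 * (inputs["flow_rate"] - 1.0)
           + rng.normal(0.0, 0.02, n))

for name, x in inputs.items():
    pearson = stats.pearsonr(x, fom)[0]
    spearman = stats.spearmanr(x, fom)[0]
    print(f"{name:11s}  Pearson {pearson:+.2f}  Spearman {spearman:+.2f}")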
Exploiting MIC architectures for the simulation of channeling of charged particles in crystals
NASA Astrophysics Data System (ADS)
Bagli, Enrico; Karpusenko, Vadim
2016-08-01
Coherent effects of ultra-relativistic particles in crystals are an area of science under development. DYNECHARM++ is a toolkit for the simulation of coherent interactions between high-energy charged particles and complex crystal structures. The particle trajectory in a crystal is computed through numerical integration of the equation of motion. The code was revised and improved in order to exploit parallelization on multiple cores and vectorization of single instructions on multiple data. An Intel Xeon Phi card was adopted for the performance measurements. The computation time was shown to scale linearly as a function of the number of physical and virtual cores. Enabling the compiler's auto-vectorization flag yielded a threefold speedup. The performance of the card was compared to that of a dual Xeon system.
NASA Astrophysics Data System (ADS)
Clay, M. P.; Buaria, D.; Gotoh, T.; Yeung, P. K.
2017-10-01
A new dual-communicator algorithm with very favorable performance characteristics has been developed for direct numerical simulation (DNS) of turbulent mixing of a passive scalar governed by an advection-diffusion equation. We focus on the regime of high Schmidt number (Sc), where because of low molecular diffusivity the grid-resolution requirements for the scalar field are stricter than those for the velocity field by a factor √Sc. Computational throughput is improved by simulating the velocity field on a coarse grid of Nv³ points with a Fourier pseudo-spectral (FPS) method, while the passive scalar is simulated on a fine grid of Nθ³ points with a combined compact finite difference (CCD) scheme which computes first and second derivatives at eighth-order accuracy. A static three-dimensional domain decomposition and a parallel solution algorithm for the CCD scheme are used to avoid the heavy communication cost of memory transposes. A kernel is used to evaluate several approaches to optimize the performance of the CCD routines, which account for 60% of the overall simulation cost. On the petascale supercomputer Blue Waters at the University of Illinois, Urbana-Champaign, scalability is improved substantially with a hybrid MPI-OpenMP approach in which a dedicated thread per NUMA domain overlaps communication calls with computational tasks performed by a separate team of threads spawned using OpenMP nested parallelism. At a target production problem size of 8192³ (0.5 trillion) grid points on 262,144 cores, CCD timings are reduced by 34% compared to a pure-MPI implementation. Timings for 16384³ (4 trillion) grid points on 524,288 cores encouragingly maintain scalability greater than 90%, although the wall clock time is too high for production runs at this size. Performance monitoring with CrayPat for problem sizes up to 4096³ shows that the CCD routines can achieve nearly 6% of the peak flop rate. The new DNS code is built upon two existing FPS and CCD codes. With the grid ratio Nθ/Nv = 8, the disparity in the computational requirements for the velocity and scalar problems is addressed by splitting the global communicator MPI_COMM_WORLD into disjoint communicators for the velocity and scalar fields, respectively. Inter-communicator transfer of the velocity field from the velocity communicator to the scalar communicator is handled with discrete send and non-blocking receive calls, which are overlapped with other operations on the scalar communicator. For production simulations at Nθ = 8192 and Nv = 1024 on 262,144 cores for the scalar field, the DNS code achieves 94% strong scaling relative to 65,536 cores and 92% weak scaling relative to Nθ = 1024 and Nv = 128 on 512 cores.
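The communicator-splitting idea at the heart of this design is easy to express with mpi4py. The sketch below assigns one quarter of the world ranks to a velocity communicator and the rest to a scalar communicator; the 1:4 split, the file name, and the role names are illustrative assumptions, not the actual decomposition used by the DNS code.

# Run with, e.g.: mpiexec -n 8 python split_comms.py (file name assumed)
from mpi4py import MPI

world = MPI.COMM_WORLD
rank, size = world.Get_rank(), world.Get_size()

# Illustrative split: the first quarter of the ranks handle the coarse
# velocity grid, the remainder handle the fine scalar grid.
n_velocity = max(1, size // 4)
color = 0 if rank < n_velocity else 1
sub = world.Split(color=color, key=rank)

role = "velocity" if color == 0 else "scalar"
print(f"world rank {rank}: {role} communicator, "
      f"local rank {sub.Get_rank()} of {sub.Get_size()}")

# Velocity data would then be passed between the two groups with discrete
# sends and non-blocking receives posted on the world communicator.
sub.Free()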
DOE Office of Scientific and Technical Information (OSTI.GOV)
Le, Hai D.
2017-03-02
SimEngine provides the core functionalities and components that are key to the development of discrete event simulation tools. These include events, activities, event queues, random number generators, and basic result tracking classes. SimEngine was designed for high performance, integrates seamlessly into any Microsoft .Net development environment, and provides a flexible API for simulation developers.
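The components listed (events, activities, event queues, and random number generators) form the classic core of any discrete event simulator. The Python sketch below shows a minimal version of such a core, a virtual clock plus a time-ordered event queue; it illustrates the general pattern only and is not SimEngine's .NET API.

import heapq
import random

class Simulator:
    # Minimal discrete event simulation core: a virtual clock and a
    # time-ordered event queue.
    def __init__(self):
        self.now = 0.0
        self._queue = []          # heap of (time, seq, callback)
        self._seq = 0             # tie-breaker for deterministic ordering

    def schedule(self, delay, callback):
        heapq.heappush(self._queue, (self.now + delay, self._seq, callback))
        self._seq += 1

    def run(self, until=float("inf")):
        while self._queue and self._queue[0][0] <= until:
            self.now, _, callback = heapq.heappop(self._queue)
            callback(self)

# Usage: a trivial arrival process with exponential inter-arrival times.
def arrival(sim, rng=random.Random(1)):
    print(f"t={sim.now:6.2f}: arrival")
    sim.schedule(rng.expovariate(1.0), arrival)

sim = Simulator()
sim.schedule(0.0, arrival)
sim.run(until=5.0)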
The numerical simulation of a high-speed axial flow compressor
NASA Technical Reports Server (NTRS)
Mulac, Richard A.; Adamczyk, John J.
1991-01-01
The advancement of high-speed axial-flow multistage compressors is impeded by a lack of detailed flow-field information. Recent developments in compressor flow modeling and numerical simulation have the potential to provide needed information in a timely manner. The development of a computer program to solve the viscous form of the average-passage equation system for multistage turbomachinery is described. Programming issues such as in-core versus out-of-core data storage and CPU utilization (parallelization, vectorization, and chaining) are addressed. Code performance is evaluated through the simulation of the first four stages of a five-stage, high-speed, axial-flow compressor. The second part addresses the flow physics which can be obtained from the numerical simulation. In particular, an examination of the endwall flow structure is made, and its impact on blockage distribution is assessed.
Neutron dose estimation in a zero power nuclear reactor
NASA Astrophysics Data System (ADS)
Triviño, S.; Vedelago, J.; Cantargi, F.; Keil, W.; Figueroa, R.; Mattea, F.; Chautemps, A.; Santibañez, M.; Valente, M.
2016-10-01
This work presents the characterization and contribution of neutron and gamma components to the absorbed dose in a zero power nuclear reactor. A dosimetric method based on Fricke gel was implemented to evaluate the separation between dose components in the mixed field. Validation of the proposed method was performed by means of direct measurements of neutron flux at different positions using Au and Mg-Ni activation foils. Monte Carlo simulations were also performed using the MCNP main code with a dedicated subroutine to incorporate the exact complete geometry of the nuclear reactor facility. Once the nuclear fuel elements were defined, the simulations computed the different contributions to the absorbed dose at specific positions inside the core. Thermal/epithermal contributions to the absorbed dose were assessed by means of Fricke gel dosimetry using different isotopic compositions aimed at modifying the sensitivity of the dosimeter to specific dose components. Clear distinctions between gamma and neutron capture dose were obtained. Both Monte Carlo simulations and experimental results provided reliable estimations of the neutron flux rate as well as the dose rate during reactor operation. Simulations and experimental results are in good agreement at every position measured and simulated in the core.
A Role for the Left Angular Gyrus in Episodic Simulation and Memory.
Thakral, Preston P; Madore, Kevin P; Schacter, Daniel L
2017-08-23
Functional magnetic resonance imaging (fMRI) studies indicate that episodic simulation (i.e., imagining specific future experiences) and episodic memory (i.e., remembering specific past experiences) are associated with enhanced activity in a common set of neural regions referred to as the core network. This network comprises the hippocampus, medial prefrontal cortex, and left angular gyrus, among other regions. Because fMRI data are correlational, it is unknown whether activity increases in core network regions are critical for episodic simulation and episodic memory. In the current study, we used MRI-guided transcranial magnetic stimulation (TMS) to assess whether temporary disruption of the left angular gyrus would impair both episodic simulation and memory (16 participants, 10 females). Relative to TMS to a control site (vertex), disruption of the left angular gyrus significantly reduced the number of internal (i.e., episodic) details produced during the simulation and memory tasks, with a concomitant increase in external detail production (i.e., semantic, repetitive, or off-topic information), reflected by a significant detail by TMS site interaction. Difficulty in the simulation and memory tasks also increased after TMS to the left angular gyrus relative to the vertex. In contrast, performance in a nonepisodic control task did not differ statistically as a function of TMS site (i.e., number of free associates produced or difficulty in performing the free associate task). Together, these results are the first to demonstrate that the left angular gyrus is critical for both episodic simulation and episodic memory. SIGNIFICANCE STATEMENT Humans have the ability to imagine future episodes (i.e., episodic simulation) and remember episodes from the past (i.e., episodic memory). A wealth of neuroimaging studies have revealed that these abilities are associated with enhanced activity in a core network of neural regions, including the hippocampus, medial prefrontal cortex, and left angular gyrus. However, neuroimaging data are correlational and do not tell us whether core regions support critical processes for simulation and memory. In the current study, we used transcranial magnetic stimulation and demonstrated that temporary disruption of the left angular gyrus leads to impairments in simulation and memory. The present study provides the first causal evidence to indicate that this region is critical for these fundamental abilities.
A Role for the Left Angular Gyrus in Episodic Simulation and Memory
2017-01-01
Functional magnetic resonance imaging (fMRI) studies indicate that episodic simulation (i.e., imagining specific future experiences) and episodic memory (i.e., remembering specific past experiences) are associated with enhanced activity in a common set of neural regions referred to as the core network. This network comprises the hippocampus, medial prefrontal cortex, and left angular gyrus, among other regions. Because fMRI data are correlational, it is unknown whether activity increases in core network regions are critical for episodic simulation and episodic memory. In the current study, we used MRI-guided transcranial magnetic stimulation (TMS) to assess whether temporary disruption of the left angular gyrus would impair both episodic simulation and memory (16 participants, 10 females). Relative to TMS to a control site (vertex), disruption of the left angular gyrus significantly reduced the number of internal (i.e., episodic) details produced during the simulation and memory tasks, with a concomitant increase in external detail production (i.e., semantic, repetitive, or off-topic information), reflected by a significant detail by TMS site interaction. Difficulty in the simulation and memory tasks also increased after TMS to the left angular gyrus relative to the vertex. In contrast, performance in a nonepisodic control task did not differ statistically as a function of TMS site (i.e., number of free associates produced or difficulty in performing the free associate task). Together, these results are the first to demonstrate that the left angular gyrus is critical for both episodic simulation and episodic memory. SIGNIFICANCE STATEMENT Humans have the ability to imagine future episodes (i.e., episodic simulation) and remember episodes from the past (i.e., episodic memory). A wealth of neuroimaging studies have revealed that these abilities are associated with enhanced activity in a core network of neural regions, including the hippocampus, medial prefrontal cortex, and left angular gyrus. However, neuroimaging data are correlational and do not tell us whether core regions support critical processes for simulation and memory. In the current study, we used transcranial magnetic stimulation and demonstrated that temporary disruption of the left angular gyrus leads to impairments in simulation and memory. The present study provides the first causal evidence to indicate that this region is critical for these fundamental abilities. PMID:28733357
NASA Astrophysics Data System (ADS)
Serpa-Imbett, C. M.; Marín-Alfonso, J.; Gómez-Santamaría, C.; Betancur-Agudelo, L.; Amaya-Fernández, F.
2013-12-01
Space division multiplexing in multicore fibers is one of the most promising technologies for supporting transmissions of next-generation peta-to-exaflop-scale supercomputers and mega data centers, owing to the cost and space savings of the new optical fibers with multiple cores. Additionally, multicore fibers allow photonic signal processing in optical communication systems, taking advantage of mode coupling phenomena. In this work, we have numerically simulated an optical MIMO-OFDM (multiple-input multiple-output orthogonal frequency division multiplexing) system using Alamouti coding, transmitted through a twin-core fiber with low coupling. Furthermore, an optical OFDM signal is transmitted through a core of a single-mode fiber, using pilot-aided channel estimation. We compare the transmission performance in the twin-core fiber and in the single-mode fiber using numerical results for the bit-error rate, considering linear propagation and Gaussian noise through an optical fiber link. We carry out an optical fiber transmission of OFDM frames using 8-PSK and 16-QAM, with bit rates of 130 Gb/s and 170 Gb/s, respectively. We obtain a penalty of around 4 dB for the 8-PSK transmissions after 100 km of linear fiber-optic propagation, for both the single-mode and the twin-core fiber, and a penalty of around 6 dB for the 16-QAM transmissions after 100 km of linear propagation. The transmission in a two-core fiber using Alamouti-coded OFDM-MIMO exhibits better performance, offering a good alternative for mitigating fiber impairments and allowing Alamouti coding to be extended to spatially multiplexed multichannel systems in multicore fibers.
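For readers unfamiliar with the coding scheme, the 2x1 Alamouti block code transmits a symbol pair (s1, s2) from two transmitters (here, two fiber cores) in the first slot and (-s2*, s1*) in the second, which a linear combiner can separate given channel estimates. The Python sketch below shows the encoding and combining over a flat fading channel with additive Gaussian noise; it is a textbook illustration, not the paper's OFDM-MIMO simulation chain.

import numpy as np

rng = np.random.default_rng(7)

# Random QPSK symbols, processed in pairs (s1, s2).
bits = rng.integers(0, 4, 1000)
const = np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))
s1, s2 = const[0::2], const[1::2]

# Flat fading from the two transmit cores, constant over each symbol pair.
cgauss = lambda shape: (rng.standard_normal(shape)
                        + 1j * rng.standard_normal(shape)) / np.sqrt(2)
h1, h2 = cgauss(s1.shape), cgauss(s1.shape)

# Slot 1 carries (s1, s2); slot 2 carries (-s2*, s1*).
r1 = h1 * s1 + h2 * s2 + 0.05 * cgauss(s1.shape)
r2 = -h1 * np.conj(s2) + h2 * np.conj(s1) + 0.05 * cgauss(s1.shape)

# Linear combining with (assumed perfect) channel knowledge.
gain = np.abs(h1) ** 2 + np.abs(h2) ** 2
s1_hat = (np.conj(h1) * r1 + h2 * np.conj(r2)) / gain
s2_hat = (np.conj(h2) * r1 - h1 * np.conj(r2)) / gain

mse = np.mean(np.abs(s1_hat - s1) ** 2 + np.abs(s2_hat - s2) ** 2) / 2
print(f"post-combining MSE: {mse:.4f}")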
Making the case for high temperature low sag (htls) overhead transmission line conductors
NASA Astrophysics Data System (ADS)
Banerjee, Koustubh
The future grid will face challenges in meeting the increased power demand of consumers. Various solutions have been studied to address this issue. One alternative for realizing increased power flow in the grid is to use High Temperature Low Sag (HTLS) conductors, since they fulfill the essential criteria of low sag and good material performance at elevated temperature. HTLS conductors such as Aluminum Conductor Composite Reinforced (ACCR) and Aluminum Conductor Carbon Composite (ACCC) are expected to face high operating temperatures of 150-200 degrees Celsius in order to achieve the desired increase in power flow. Therefore, it is imperative to characterize the material performance of these conductors with temperature. The work presented in this thesis addresses the characterization of carbon-composite-core and metal-matrix-core HTLS conductors. The thesis focuses on the variation of the tensile strength of the carbon composite core with temperature and on the temperature rise of HTLS conductors due to fault currents cleared by backup protection. In this thesis, Dynamic Mechanical Analysis (DMA) was used to quantify the loss in storage modulus of carbon composite cores with temperature. It has been previously shown in the literature that storage modulus is correlated with the tensile strength of the composite. Current-temperature relationships of HTLS conductors were determined using the IEEE 738-2006 standard. The temperature rise of these conductors due to fault currents was also simulated. All simulations were performed using the Microsoft Visual C++ suite. Tensile testing of the metal matrix core was also performed. Results of DMA on carbon composite cores show that the storage modulus, and hence the tensile strength, decreases rapidly in the temperature range of intended use. DMA on composite cores subjected to heat treatment was conducted to investigate any changes in the variation of the storage modulus curves. The experiments also indicate that carbon composite cores subjected to temperatures at or above 250 degrees Celsius can suffer permanent loss of mechanical properties, including tensile strength. The fault current temperature analysis of carbon-composite-based conductors reveals that fault currents eventually cleared by backup protection in the event of primary protection failure can damage the fiber-matrix interface.
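The IEEE 738-style current-temperature calculation mentioned above rests on a steady-state heat balance: Joule heating plus solar gain equals convective plus radiative cooling, solved for the conductor temperature. The Python sketch below shows a greatly simplified version of that balance with illustrative coefficients (the resistance, convection, and solar terms are assumptions, not values from the standard).

import numpy as np

def resistance(tc, r25=7.3e-5, alpha=0.0039):
    # AC resistance in ohm/m, linear in temperature; values assumed.
    return r25 * (1.0 + alpha * (tc - 25.0))

def heat_balance(tc, current, t_ambient=40.0, diameter=0.028):
    # Positive value means net heating at conductor temperature tc (deg C).
    q_joule = current ** 2 * resistance(tc)                 # W/m
    q_solar = 15.0                                          # W/m, assumed
    q_conv = 20.0 * diameter * (tc - t_ambient)             # W/m, crude model
    sigma, emissivity = 5.67e-8, 0.8
    q_rad = sigma * emissivity * np.pi * diameter * \
        ((tc + 273.15) ** 4 - (t_ambient + 273.15) ** 4)    # W/m
    return q_joule + q_solar - q_conv - q_rad

def conductor_temperature(current, lo=40.0, hi=400.0):
    # Bisection for the equilibrium temperature at a given current.
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if heat_balance(mid, current) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

for amps in (500, 1000, 1500):
    print(f"{amps} A -> ~{conductor_temperature(amps):.0f} deg C")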
Geotechnical properties of core sample from methane hydrate deposits in Eastern Nankai Trough
NASA Astrophysics Data System (ADS)
Yoneda, J.; Masui, A.; Egawa, K.; Konno, Y.; Ito, T.; Kida, M.; Jin, Y.; Suzuki, K.; Nakatsuka, Y.; Tenma, N.; Nagao, J.
2013-12-01
To date, MH extraction has been simulated in several ways to help ensure the safe and efficient production of gas, with a particular focus on the investigation of landsliding, uneven settlement, and production well integrity. The mechanical properties of deep sea sediments and gas-hydrate-bearing sediments, typically obtained through material tests, are essential for the geomechanical response simulation of hydrate extraction. We conducted triaxial compression tests and investigated the geotechnical properties of the sediments. Consolidated undrained compression tests were performed on silty sediments, and consolidated drained tests were performed on sandy samples. In addition, permeability was investigated from isotropic consolidation results. These core samples were recovered from methane hydrate deposits at the Daini Atsumi Knoll in the Eastern Nankai Trough during the 2012 JOGMEC/JAPEX pressure coring operation. The pressure core samples were rapidly depressurized on the ship and frozen using liquid nitrogen to prevent MH dissociation. The undrained shear strength of the core samples increases linearly with depth below the sea floor, indicating that these core samples were normally consolidated in situ. Drained shear strength increases dramatically as hydrate saturation increases. The peak stress ratio q/p' of the core sample with 73% hydrate saturation was approximately 2.0, decreasing to 1.3 at the critical state. Dilatancy also changed from a compressive to a dilative tendency as hydrate saturation increased. This study was financially supported by the Research Consortium for Methane Hydrate Resources in Japan (MH21 Research Consortium) that carries out Japan's Methane Hydrate R&D Program conducted by the Ministry of Economy, Trade and Industry (METI).
Moritsugu, Kei; Kidera, Akinori; Smith, Jeremy C
2014-07-24
Protein solvation dynamics has been investigated using atom-dependent Langevin friction coefficients derived directly from molecular dynamics (MD) simulations. To determine the effect of solvation on the atomic friction coefficients, solution and vacuum MD simulations were performed for lysozyme and staphylococcal nuclease and analyzed by Langevin mode analysis. The coefficients thus derived are roughly correlated with the atomic solvent-accessible surface area (ASA), as expected from the fact that friction occurs as the result of collisions with solvent molecules. However, a considerable number of atoms with higher friction coefficients are found inside the core region. Hence, the influence of solvent friction propagates into the protein core. The internal coefficients have large contributions from the low-frequency modes, yielding a simple picture of the surface-to-core long-range damping via solvation governed by collective low-frequency modes. To make use of these findings in implicit-solvent modeling, we compare the all-atom friction results with those obtained using Langevin dynamics (LD) with two empirical representations: the constant-friction and the ASA-dependent (Pastor-Karplus) friction models. The constant-friction model overestimates the core and underestimates the surface damping whereas the ASA-dependent friction model, which damps protein atoms only on the solvent-accessible surface, reproduces well the friction coefficients for both the surface and core regions observed in the explicit-solvent MD simulations. Therefore, in LD simulation, the solvent friction coefficients should be imposed only on the protein surface.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moritsugu, Kei; Kidera, Akinori; Smith, Jeremy C.
2014-06-25
Protein solvation dynamics has been investigated using atom-dependent Langevin friction coefficients derived directly from molecular dynamics (MD) simulations. To determine the effect of solvation on the atomic friction coefficients, solution and vacuum MD simulations were performed for lysozyme and staphylococcal nuclease and analyzed by Langevin mode analysis. The coefficients thus derived are roughly correlated with the atomic solvent-accessible surface area (ASA), as expected from the fact that friction occurs as the result of collisions with solvent molecules. However, a considerable number of atoms with higher friction coefficients are found inside the core region. Hence, the influence of solvent friction propagates into the protein core. The internal coefficients have large contributions from the low-frequency modes, yielding a simple picture of the surface-to-core long-range damping via solvation governed by collective low-frequency modes. To make use of these findings in implicit-solvent modeling, we compare the all-atom friction results with those obtained using Langevin dynamics (LD) with two empirical representations: the constant-friction and the ASA-dependent (Pastor-Karplus) friction models. The constant-friction model overestimates the core and underestimates the surface damping whereas the ASA-dependent friction model, which damps protein atoms only on the solvent-accessible surface, reproduces well the friction coefficients for both the surface and core regions observed in the explicit-solvent MD simulations. Therefore, in LD simulation, the solvent friction coefficients should be imposed only on the protein surface.
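The ASA-dependent friction idea can be sketched in a few lines: each atom's Langevin friction coefficient is taken proportional to its solvent-accessible surface area, so buried atoms feel no solvent damping while exposed atoms are damped and thermalized. The Python example below uses made-up ASA values, masses, and a friction scale purely for illustration; it is not the analysis pipeline of the paper.

import numpy as np

rng = np.random.default_rng(0)
kT, dt, mass = 2.494, 0.002, 12.0     # kJ/mol (300 K), ps, g/mol (assumed)

# Illustrative per-atom solvent-accessible surface areas (nm^2): the first
# atoms are exposed at the surface, the last is buried in the core.
asa = np.array([0.45, 0.30, 0.12, 0.02, 0.00])

# Pastor-Karplus-style assignment: friction proportional to exposed area,
# with an assumed overall scale of 50 per ps per nm^2.
gamma = 50.0 * asa

def langevin_step(x, v, force):
    # Explicit Langevin step; gamma = 0 reduces to plain Newtonian motion.
    noise = np.sqrt(2.0 * gamma * kT * dt / mass) * rng.standard_normal(x.shape)
    v = v + (force / mass - gamma * v) * dt + noise
    return x + v * dt, v

x, v = np.zeros(5), np.ones(5)        # same initial kick for every atom
for _ in range(1000):
    x, v = langevin_step(x, v, -5.0 * x)      # harmonic restraint
print(np.round(np.abs(v), 3))         # surface atoms thermalize; the buried atom is undamped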
Shared Memory Parallelism for 3D Cartesian Discrete Ordinates Solver
NASA Astrophysics Data System (ADS)
Moustafa, Salli; Dutka-Malen, Ivan; Plagne, Laurent; Ponçot, Angélique; Ramet, Pierre
2014-06-01
This paper describes the design and the performance of DOMINO, a 3D Cartesian SN solver that implements two nested levels of parallelism (multicore+SIMD) on shared memory computation nodes. DOMINO is written in C++, a multi-paradigm programming language that enables the use of powerful and generic parallel programming tools such as Intel TBB and Eigen. These two libraries allow us to combine multi-thread parallelism with vector operations in an efficient and yet portable way. As a result, DOMINO can exploit the full power of modern multi-core processors and is able to tackle very large simulations, that usually require large HPC clusters, using a single computing node. For example, DOMINO solves a 3D full core PWR eigenvalue problem involving 26 energy groups, 288 angular directions (S16), 46 × 10⁶ spatial cells and 1 × 10¹² DoFs within 11 hours on a single 32-core SMP node. This represents a sustained performance of 235 GFlops and 40.74% of the SMP node peak performance for the DOMINO sweep implementation. The very high Flops/Watt ratio of DOMINO makes it a very interesting building block for a future many-nodes nuclear simulation tool.
Efficiently Scheduling Multi-core Guest Virtual Machines on Multi-core Hosts in Network Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoginath, Srikanth B; Perumalla, Kalyan S
2011-01-01
Virtual machine (VM)-based simulation is a method used by network simulators to incorporate realistic application behaviors by executing actual VMs as high-fidelity surrogates for simulated end-hosts. A critical requirement in such a method is the simulation time-ordered scheduling and execution of the VMs. Prior approaches such as time dilation are less efficient due to the high degree of multiplexing possible when multiple multi-core VMs are simulated on multi-core host systems. We present a new simulation time-ordered scheduler to efficiently schedule multi-core VMs on multi-core real hosts, with a virtual clock realized on each virtual core. The distinguishing features of our approach are: (1) customizable granularity of the VM scheduling time unit on the simulation time axis, (2) ability to take arbitrary leaps in virtual time by VMs to maximize the utilization of host (real) cores when guest virtual cores idle, and (3) empirically determinable optimality in the tradeoff between total execution (real) time and time-ordering accuracy levels. Experiments show that it is possible to get nearly perfect time-ordered execution, with a slight cost in total run time, relative to optimized non-simulation VM schedulers. Interestingly, with our time-ordered scheduler, it is also possible to reduce the time-ordering error from over 50% with a non-simulation scheduler to less than 1% with our scheduler, with almost the same run time efficiency as that of the highly efficient non-simulation VM schedulers.
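The core of such a time-ordered scheduler can be captured with a priority queue of per-virtual-core clocks: always run the virtual core with the smallest virtual time, and let idle cores leap ahead so the host is not wasted on them. The Python sketch below illustrates only this queue discipline; the quantum and leap values are arbitrary, and none of the real scheduler's VM control machinery is modeled.

import heapq

def time_ordered_schedule(n_vcores, quantum, horizon, idle_leaps):
    # Each virtual core carries its own virtual clock; the scheduler always
    # runs the core with the smallest clock for one quantum. idle_leaps maps
    # a vcore id to an extra virtual-time jump taken when its guest idles.
    clocks = [(0.0, vc) for vc in range(n_vcores)]
    heapq.heapify(clocks)
    trace = []
    while clocks[0][0] < horizon:
        vclock, vc = heapq.heappop(clocks)
        trace.append((round(vclock, 3), vc))
        heapq.heappush(clocks, (vclock + quantum + idle_leaps.get(vc, 0.0), vc))
    return trace

# Three virtual cores; vcore 2 mostly idles, so it leaps in virtual time and
# is scheduled far less often, freeing the host core for the busy vcores.
for t, vc in time_ordered_schedule(3, quantum=1.0, horizon=5.0, idle_leaps={2: 4.0}):
    print(f"virtual time {t:4.1f}: run vcore {vc}")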
Development of IR imaging system simulator
NASA Astrophysics Data System (ADS)
Xiang, Xinglang; He, Guojing; Dong, Weike; Dong, Lu
2017-02-01
To overcome the disadvantages of traditional semi-physical simulation and injection simulation equipment in the performance evaluation of the infrared imaging system (IRIS), a low-cost and reconfigurable IRIS simulator, which can simulate the realistic physical process of infrared imaging, is proposed to test and evaluate the performance of the IRIS. According to the theoretical simulation framework and the theoretical models of the IRIS, the architecture of the IRIS simulator is constructed. The 3D scenes are generated and the infrared atmospheric transmission effects are simulated in real time on the computer using OGRE technology. The physical effects of the IRIS are classified as the signal response characteristic, modulation transfer characteristic, and noise characteristic, and they are simulated in real time on a single-board signal processing platform based on an FPGA core processor using a high-speed parallel computation method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crawford, C. L.; Cozzi, A. D.; Hill, K. A.
2016-06-01
The primary disposition path of Low Activity Waste (LAW) at the DOE Hanford Site is vitrification. A cementitious waste form is one of the alternatives being considered for the supplemental immobilization of the LAW that will not be treated by the primary vitrification facility. Washington River Protection Solutions (WRPS) has been directed to generate and collect data on cementitious or pozzolanic waste forms such as Cast Stone. This report documents the coring and leach testing of monolithic samples cored from an engineering-scale demonstration (ES Demo) with non-radioactive simulants. The ES Demo was performed at SRNL in October of 2013 using the Scaled Continuous Processing Facility (SCPF) to fill an 8.5 ft. diameter x 3.25 ft. high container with simulated Cast Stone grout. The Cast Stone formulation was chosen from the previous screening tests. Legacy salt solution from previous Hanford salt waste testing was adjusted to correspond to the average LAW composition generated from the Hanford Tank Waste Operation Simulator (HTWOS). The dry blend materials, ordinary portland cement (OPC), Class F fly ash, and ground granulated blast furnace slag (GGBFS or BFS), were obtained from Lafarge North America in Pasco, WA. In 2014, core samples originally obtained approximately six months after filling the ES Demo were tested along with bench-scale molded samples that were collected during the original pour. A later set of core samples was obtained in late March of 2015, eighteen months after completion of the original ES Demo. Core samples were obtained using a 2” diameter x 11” long coring bit. The ES Demo was sampled in three different regions consisting of an outer ring, a middle ring and an inner core zone. Cores from these three lateral zones were further segregated into upper, middle and lower vertical segments. Monolithic core samples were tested using the Environmental Protection Agency (EPA) Method 1315, which is designed to provide mass transfer rates (release rates) of inorganic analytes contained in monolithic material under diffusion-controlled release conditions as a function of leaching time. Compressive strength measurements and drying tests were also performed on the 2015 samples. The leachability indices reported are based on analyte concentrations determined from dissolution of the dried samples.
NASA Astrophysics Data System (ADS)
Leggett, C.; Binet, S.; Jackson, K.; Levinthal, D.; Tatarkhanov, M.; Yao, Y.
2011-12-01
Thermal limitations have forced CPU manufacturers to shift from simply increasing clock speeds to improve processor performance, to producing chip designs with multi- and many-core architectures. Further, the cores themselves can run multiple threads with a near-zero-overhead context switch, allowing low-level resource sharing (Intel Hyperthreading). To maximize bandwidth and minimize memory latency, memory access has become non-uniform (NUMA). As manufacturers add more cores to each chip, a careful understanding of the underlying architecture is required in order to fully utilize the available resources. We present AthenaMP and the ATLAS event loop manager, the driver of the simulation and reconstruction engines, which have been rewritten to make use of multiple cores by means of event-based parallelism and final-stage I/O synchronization. However, initial studies on 8 and 16 core Intel architectures have shown marked non-linearities as parallel process counts increase, with as much as 30% reductions in event throughput in some scenarios. Since the Intel Nehalem architecture (both Gainestown and Westmere) will be the most common choice for the next round of hardware procurements, an understanding of these scaling issues is essential. Using hardware-based event counters and Intel's Performance Tuning Utility, we have studied the performance bottlenecks at the hardware level and discovered optimization schemes to maximize processor throughput. We have also produced optimization mechanisms, common to all large experiments, that address the extreme nature of today's HEP code, which, due to its size, places huge burdens on the memory infrastructure of today's processors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turinsky, Paul J., E-mail: turinsky@ncsu.edu; Kothe, Douglas B., E-mail: kothe@ornl.gov
The Consortium for the Advanced Simulation of Light Water Reactors (CASL), the first Energy Innovation Hub of the Department of Energy, was established in 2010 with the goal of providing modeling and simulation (M&S) capabilities that support and accelerate the improvement of nuclear energy's economic competitiveness and the reduction of spent nuclear fuel volume per unit energy, all while assuring nuclear safety. To accomplish this requires advances in M&S capabilities in radiation transport, thermal-hydraulics, fuel performance and corrosion chemistry. To focus CASL's R&D, industry challenge problems have been defined, which equate with long-standing issues of the nuclear power industry that M&S can assist in addressing. To date CASL has developed a multi-physics “core simulator” based upon pin-resolved radiation transport and subchannel (within fuel assembly) thermal-hydraulics, capitalizing on the capabilities of high performance computing. CASL's fuel performance M&S capability can also be optionally integrated into the core simulator, yielding a coupled multi-physics capability with untapped predictive potential. Material models have been developed to enhance predictive capabilities of fuel clad creep and growth, along with deeper understanding of zirconium alloy clad oxidation and hydrogen pickup. Understanding of corrosion chemistry (e.g., CRUD formation) has evolved at all scales: micro, meso and macro. CFD R&D has focused on improvement in closure models for subcooled boiling and bubbly flow, and the formulation of robust numerical solution algorithms. For multiphysics integration, several iterative acceleration methods have been assessed, illuminating areas where further research is needed. Finally, uncertainty quantification and data assimilation techniques, based upon sampling approaches, have been made more feasible for practicing nuclear engineers via R&D on dimensional reduction and biased sampling. Industry adoption of CASL's evolving M&S capabilities, which is in progress, will assist in addressing long-standing and future operational and safety challenges of the nuclear industry. - Highlights: • Complexity of physics based modeling of light water reactor cores being addressed. • Capability developed to help address problems that have challenged the nuclear power industry. • Simulation capabilities that take advantage of high performance computing developed.
FY 2016 Status Report on the Modeling of the M8 Calibration Series using MAMMOTH
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, Benjamin Allen; Ortensi, Javier; DeHart, Mark David
2016-09-01
This report provides a summary of the progress made towards validating the multi-physics reactor analysis application MAMMOTH using data from measurements performed at the Transient Reactor Test facility, TREAT. The work completed consists of a series of comparisons of TREAT element types (standard and control rod assemblies) in small geometries as well as slotted mini-cores to reference Monte Carlo simulations to ascertain the accuracy of cross section preparation techniques. After the successful completion of these smaller problems, a full core model of the half slotted core used in the M8 Calibration series was assembled. Full core MAMMOTH simulations were compared to Serpent reference calculations to assess the cross section preparation process for this larger configuration. As part of the validation process, the M8 Calibration series included a steady state wire irradiation experiment and coupling factors for the experiment region. The shape of the power distribution obtained from the MAMMOTH simulation shows excellent agreement with the experiment. Larger differences were encountered in the calculation of the coupling factors, but there is also great uncertainty about how the experimental values were obtained. Future work will focus on resolving some of these differences.
Testing Numerical Models of Cool Core Galaxy Cluster Formation with X-Ray Observations
NASA Astrophysics Data System (ADS)
Henning, Jason W.; Gantner, Brennan; Burns, Jack O.; Hallman, Eric J.
2009-12-01
Using archival Chandra and ROSAT data along with numerical simulations, we compare the properties of cool core and non-cool core galaxy clusters, paying particular attention to the region beyond the cluster cores. With the use of single and double β-models, we demonstrate a statistically significant difference in the slopes of observed cluster surface brightness profiles, while the cluster cores remain indistinguishable between the two cluster types. Additionally, through the use of hardness ratio profiles, we find evidence suggesting cool core clusters are cooler beyond their cores than non-cool core clusters of comparable mass and temperature, both in observed and simulated clusters. The similarities between real and simulated clusters support a model presented in earlier work by the authors describing differing merger histories between cool core and non-cool core clusters. Discrepancies between real and simulated clusters will inform upcoming numerical models and simulations as to new ways to incorporate feedback in these systems.
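The β-models used in this comparison have the familiar surface brightness form S(r) = S0 [1 + (r/rc)²]^(1/2 − 3β), so the fitted β controls the outer slope while rc sets the core size. The Python sketch below fits a single β-model to a synthetic noisy profile; the numbers and units are stand-ins, not Chandra or ROSAT data.

import numpy as np
from scipy.optimize import curve_fit

def beta_model(r, s0, rc, beta):
    # Single beta-model surface brightness profile.
    return s0 * (1.0 + (r / rc) ** 2) ** (0.5 - 3.0 * beta)

# Synthetic "observed" profile with 5% multiplicative noise.
rng = np.random.default_rng(3)
r = np.linspace(0.05, 2.0, 60)          # Mpc, assumed units
observed = beta_model(r, 1.0, 0.25, 0.67) * (1.0 + 0.05 * rng.standard_normal(r.size))

popt, _ = curve_fit(beta_model, r, observed, p0=(1.0, 0.2, 0.6))
s0, rc, beta = popt
print(f"fit: S0={s0:.2f}, r_c={rc:.2f} Mpc, beta={beta:.2f}")
# Comparing fitted outer slopes (beta) between cool core and non-cool core
# samples is the kind of test described in the abstract.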
Hypersonic vibrations of Ag@SiO2 (cubic core)-shell nanospheres.
Sun, Jing Ya; Wang, Zhi Kui; Lim, Hock Siah; Ng, Ser Choon; Kuok, Meng Hau; Tran, Toan Trong; Lu, Xianmao
2010-12-28
The intriguing optical and catalytic properties of metal-silica core-shell nanoparticles, inherited from their plasmonic metallic cores together with the rich surface chemistry and increased stability offered by their silica shells, have enabled a wide variety of applications. In this work, we investigate the confined vibrational modes of a series of monodisperse Ag@SiO2 (cubic core)-shell nanospheres synthesized using a modified Stöber sol-gel method. The particle-size dependence of their mode frequencies has been mapped by Brillouin light scattering, a powerful tool for probing hypersonic vibrations. Unlike the larger particles, the observed spheroidal-like mode frequencies of the smaller ones do not scale with inverse diameter. Interestingly, the onset of the deviation from this linearity occurs at a smaller particle size for higher-energy modes than for lower-energy ones. Finite element simulations show that the mode displacement profiles of the Ag@SiO2 core-shells closely resemble those of a homogeneous SiO2 sphere. Simulations have also been performed to ascertain the effects that the core shape and the relative hardness of the core and shell materials have on the vibrations of the core-shell as a whole. As the vibrational modes of a particle have a bearing on its thermal and mechanical properties, the findings would be of value in designing core-shell nanostructures with customized thermal and mechanical characteristics.
High-Performance Agent-Based Modeling Applied to Vocal Fold Inflammation and Repair.
Seekhao, Nuttiiya; Shung, Caroline; JaJa, Joseph; Mongeau, Luc; Li-Jessen, Nicole Y K
2018-01-01
Fast and accurate computational biology models offer the prospect of accelerating the development of personalized medicine. A tool capable of estimating treatment success can help prevent unnecessary and costly treatments and potential harmful side effects. A novel high-performance Agent-Based Model (ABM) was adopted to simulate and visualize multi-scale complex biological processes arising in vocal fold inflammation and repair. The computational scheme was designed to organize the 3D ABM sub-tasks to fully utilize the resources available on current heterogeneous platforms consisting of multi-core CPUs and many-core GPUs. Subtasks are further parallelized and convolution-based diffusion is used to enhance the performance of the ABM simulation. The scheme was implemented using a client-server protocol allowing the results of each iteration to be analyzed and visualized on the server (i.e., in-situ ) while the simulation is running on the same server. The resulting simulation and visualization software enables users to interact with and steer the course of the simulation in real-time as needed. This high-resolution 3D ABM framework was used for a case study of surgical vocal fold injury and repair. The new framework is capable of completing the simulation, visualization and remote result delivery in under 7 s per iteration, where each iteration of the simulation represents 30 min in the real world. The case study model was simulated at the physiological scale of a human vocal fold. This simulation tracks 17 million biological cells as well as a total of 1.7 billion signaling chemical and structural protein data points. The visualization component processes and renders all simulated biological cells and 154 million signaling chemical data points. The proposed high-performance 3D ABM was verified through comparisons with empirical vocal fold data. Representative trends of biomarker predictions in surgically injured vocal folds were observed.
High-Performance Agent-Based Modeling Applied to Vocal Fold Inflammation and Repair
Seekhao, Nuttiiya; Shung, Caroline; JaJa, Joseph; Mongeau, Luc; Li-Jessen, Nicole Y. K.
2018-01-01
Fast and accurate computational biology models offer the prospect of accelerating the development of personalized medicine. A tool capable of estimating treatment success can help prevent unnecessary and costly treatments and potential harmful side effects. A novel high-performance Agent-Based Model (ABM) was adopted to simulate and visualize multi-scale complex biological processes arising in vocal fold inflammation and repair. The computational scheme was designed to organize the 3D ABM sub-tasks to fully utilize the resources available on current heterogeneous platforms consisting of multi-core CPUs and many-core GPUs. Subtasks are further parallelized and convolution-based diffusion is used to enhance the performance of the ABM simulation. The scheme was implemented using a client-server protocol allowing the results of each iteration to be analyzed and visualized on the server (i.e., in-situ) while the simulation is running on the same server. The resulting simulation and visualization software enables users to interact with and steer the course of the simulation in real-time as needed. This high-resolution 3D ABM framework was used for a case study of surgical vocal fold injury and repair. The new framework is capable of completing the simulation, visualization and remote result delivery in under 7 s per iteration, where each iteration of the simulation represents 30 min in the real world. The case study model was simulated at the physiological scale of a human vocal fold. This simulation tracks 17 million biological cells as well as a total of 1.7 billion signaling chemical and structural protein data points. The visualization component processes and renders all simulated biological cells and 154 million signaling chemical data points. The proposed high-performance 3D ABM was verified through comparisons with empirical vocal fold data. Representative trends of biomarker predictions in surgically injured vocal folds were observed. PMID:29706894
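The convolution-based diffusion mentioned in both abstracts is a standard trick: one explicit finite-difference diffusion step over a chemical field is exactly a convolution with a small fixed kernel, which maps well onto parallel hardware. The Python sketch below performs that update with scipy on a 2D field; the diffusion coefficient, grid, and source are illustrative, not the model's parameters.

import numpy as np
from scipy.ndimage import convolve

D, dt, dx = 1.0, 0.1, 1.0
alpha = D * dt / dx**2                 # must be <= 0.25 in 2D for stability

# Five-point Laplacian folded into one update kernel: c_new = c + alpha * lap(c)
kernel = np.array([[0.0,   alpha,           0.0],
                   [alpha, 1.0 - 4 * alpha, alpha],
                   [0.0,   alpha,           0.0]])

field = np.zeros((64, 64))
field[32, 32] = 100.0                  # point release of a signaling chemical

for _ in range(50):
    field = convolve(field, kernel, mode="constant", cval=0.0)

print("total mass (approximately conserved):", round(field.sum(), 2))
print("peak concentration after diffusion:", round(field.max(), 3))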
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, A. E., E-mail: whitea@mit.edu; Howard, N. T.; Creely, A. J.
2015-05-15
For the first time, nonlinear gyrokinetic simulations of I-mode plasmas are performed and compared with experiment. I-mode is a high confinement regime, featuring energy confinement similar to H-mode, but without enhanced particle and impurity particle confinement [D. G. Whyte et al., Nucl. Fusion 50, 105005 (2010)]. As a consequence of the separation between heat and particle transport, I-mode exhibits several favorable characteristics compared to H-mode. The nonlinear gyrokinetic code GYRO [J. Candy and R. E. Waltz, J. Comput. Phys. 186, 545 (2003)] is used to explore the effects of E × B shear and profile stiffness in I-mode and compare with L-mode. The nonlinear GYRO simulations show that I-mode core ion temperature and electron temperature profiles are stiffer than those of L-mode core plasmas. Scans of the input E × B shear in GYRO simulations show that E × B shearing of turbulence is a stronger effect in the core of I-mode than L-mode. The nonlinear simulations match the observed reductions in long wavelength density fluctuation levels across the L-I transition but underestimate the reduction of long wavelength electron temperature fluctuation levels. The comparisons between experiment and gyrokinetic simulations for I-mode suggest that increased E × B shearing of turbulence combined with increased profile stiffness are responsible for the reductions in core turbulence observed in the experiment, and that I-mode resembles H-mode plasmas more than L-mode plasmas with regard to marginal stability and temperature profile stiffness.
Active Flash: Out-of-core Data Analytics on Flash Storage
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boboila, Simona; Kim, Youngjae; Vazhkudai, Sudharshan S
2012-01-01
Next generation science will increasingly come to rely on the ability to perform efficient, on-the-fly analytics of data generated by high-performance computing (HPC) simulations, modeling complex physical phenomena. Scientific computing workflows are stymied by the traditional chaining of simulation and data analysis, creating multiple rounds of redundant reads and writes to the storage system, which grows in cost with the ever-increasing gap between compute and storage speeds in HPC clusters. Recent HPC acquisitions have introduced compute node-local flash storage as a means to alleviate this I/O bottleneck. We propose a novel approach, Active Flash, to expedite data analysis pipelines by migrating to the location of the data, the flash device itself. We argue that Active Flash has the potential to enable true out-of-core data analytics by freeing up both the compute core and the associated main memory. By performing analysis locally, dependence on limited bandwidth to a central storage system is reduced, while allowing this analysis to proceed in parallel with the main application. In addition, offloading work from the host to the more power-efficient controller reduces peak system power usage, which is already in the megawatt range and poses a major barrier to HPC system scalability. We propose an architecture for Active Flash, explore energy and performance trade-offs in moving computation from host to storage, demonstrate the ability of appropriate embedded controllers to perform data analysis and reduction tasks at speeds sufficient for this application, and present a simulation study of Active Flash scheduling policies. These results show the viability of the Active Flash model, and its capability to potentially have a transformative impact on scientific data analysis.
Redwing: A MOOSE application for coupling MPACT and BISON
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frederick N. Gleicher; Michael Rose; Tom Downar
Fuel performance and whole core neutron transport programs are often used to analyze fuel behavior as it is depleted in a reactor. For fuel performance programs, internal models provide the local intra-pin power density, fast neutron flux, burnup, and fission rate density, which are needed for a fuel performance analysis. The fuel performance internal models have a number of limitations. These include effects on the intra-pin power distribution by nearby assembly elements, such as water channels and control rods, and the further limitation of applicability to a specified fuel type such as low enriched UO2. In addition, whole core neutron transport codes need an accurate intra-pin temperature distribution in order to calculate neutron cross sections. Fuel performance simulations are able to model the intra-pin fuel displacement as the fuel expands and densifies. These displacements must be accurately modeled in order to capture the eventual mechanical contact of the fuel and the clad; the correct radial gap width is needed for an accurate calculation of the temperature distribution of the fuel rod. Redwing is a MOOSE-based application that enables coupling between MPACT and BISON for transport and fuel performance coupling. MPACT is a 3D neutron transport and reactor core simulator based on the method of characteristics (MOC). The development of MPACT began at the University of Michigan (UM) and now is under the joint development of ORNL and UM as part of the DOE CASL Simulation Hub. MPACT is able to model the effects of local assembly elements and is able to calculate intra-pin quantities such as the local power density on a volumetric mesh for any fuel type. BISON is a fuel performance application of the Multi-physics Object Oriented Simulation Environment (MOOSE), which is under development at Idaho National Laboratory. BISON is able to solve the nonlinearly coupled mechanical deformation and heat transfer finite element equations that model a fuel element as it is depleted in a nuclear reactor. Redwing couples BISON and MPACT in a single application. Redwing maps and transfers the individual intra-pin quantities such as fission rate density, power density, and fast neutron flux from the MPACT volumetric mesh to the individual BISON finite element meshes. For two-way coupling, Redwing maps and transfers the individual pin temperature field and axially dependent coolant densities from the BISON mesh to the MPACT volumetric mesh. Details of the mapping are given. Redwing advances the simulation with the MPACT solution for each depletion time step and then advances the multiple BISON simulations for fuel performance calculations. Sub-cycle advancement can be applied to the individual BISON simulations and allows multiple time steps to be applied to the fuel performance simulations. Currently, only loose coupling, in which data from the previous time step is applied to the current time step, is performed.
Off-Center Collisions between Clusters of Galaxies
NASA Astrophysics Data System (ADS)
Ricker, P. M.
1998-03-01
We present numerical simulations of off-center collisions between galaxy clusters made using a new hydrodynamical code based on the piecewise-parabolic method (PPM) and an isolated multigrid potential solver. The current simulations follow only the intracluster gas. We have performed three high-resolution (256 × 128²) simulations of collisions between equal-mass clusters using a nonuniform grid with different values of the impact parameter (0, 5, and 10 times the cluster core radius). Using these simulations, we have studied the variation in equilibration time, luminosity enhancement during the collision, and structure of the merger remnant with varying impact parameter. We find that in off-center collisions the cluster cores (the inner regions where the pressure exceeds the ram pressure) behave quite differently from the clusters' outer regions. A strong, roughly ellipsoidal shock front, similar to that noted in previous simulations of head-on collisions, enables the cores to become bound to each other by dissipating their kinetic energy as heat in the surrounding gas. These cores survive well into the collision, dissipating their orbital angular momentum via spiral bow shocks. After the ellipsoidal shock has passed well outside the interaction region, the material left in its wake falls back onto the merger remnant formed through the inspiral of the cluster cores, creating a roughly spherical accretion shock. For less than one-half of a sound crossing time after the cores first interact, the total X-ray luminosity increases by a large factor; the magnitude of this increase depends sensitively on the size of the impact parameter. Observational evidence of the ongoing collision, in the form of bimodality and distortion in projected X-ray surface brightness and temperature maps, is present for one to two sound crossing times after the collision but only for special viewing angles. The remnant actually requires at least five crossing times to reach virial equilibrium. Since the sound crossing time can be as large as 1-2 Gyr, the equilibration time can thus be a substantial fraction of the age of the universe. The final merger remnant is very similar for impact parameters of 0 and 5 core radii. It possesses a roughly isothermal core with central density and temperature twice the initial values for the colliding clusters. Outside the core, the temperature drops as r^-1, and the density roughly as r^-3.8. The core radius shows a small increase due to shock heating during the merger. For an impact parameter of 10 core radii, the core of the remnant possesses a more flattened density profile with a steeper drop-off outside the core. In both off-center cases, the merger remnant rotates, but only for the 10 core-radius case does this appear to have an effect on the structure of the remnant.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stinson-Bagby, Kelly L.; Fielder, Robert S.; Van Dyke, Melissa K.
2004-02-04
The motivation for the reported research was to support NASA space nuclear power initiatives through the development of advanced fiber optic sensors for space-based nuclear power applications. Distributed high temperature measurements were made with 20 FBG temperature sensors installed in the SAFE-100 thermal simulator at the NASA Marshall Space Flight Center. Experiments were performed at temperatures approaching 800 °C and 1150 °C for characterization studies of the SAFE-100 core. Temperature profiles were successfully generated for the core during temperature increases and decreases. Related tests in the SAFE-100 successfully provided strain measurement data.
Tailoring the response of Autonomous Reactivity Control (ARC) systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qvist, Staffan A.; Hellesen, Carl; Gradecka, Malwina
The Autonomous Reactivity Control (ARC) system was developed to ensure inherent safety of Generation IV reactors while having a minimal impact on reactor performance and economic viability. In this study we present the transient response of fast reactor cores to postulated accident scenarios with and without ARC systems installed. Using a combination of analytical methods and numerical simulation, the principles of ARC system design that assure stability and avoid oscillatory behavior have been identified. A comprehensive transient analysis study for ARC-equipped cores, including a series of Unprotected Loss of Flow (ULOF) and Unprotected Loss of Heat Sink (ULOHS) simulations, was performed for Argonne National Laboratory (ANL) Advanced Burner Reactor (ABR) designs. With carefully designed ARC systems installed in the fuel assemblies, the cores exhibit a smooth non-oscillatory transition to stabilization at acceptable temperatures following all postulated transients. To avoid oscillations in power and temperature, the reactivity introduced per degree of temperature change in the ARC system needs to be kept below a certain threshold, the value of which is system dependent, and the temperature span of actuation needs to be as large as possible.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Izvekov, Sergei, E-mail: sergiy.izvyekov.civ@mail.mil; Rice, Betsy M.
2015-12-28
A core-softening of the effective interaction between oxygen atoms in water and silica systems and its role in developing anomalous thermodynamic, transport, and structural properties have been extensively debated. For silica, the progress with addressing these issues has been hampered by a lack of effective interaction models with explicit core-softening. In this work, we present an extension of a two-body soft-core interatomic force field for silica recently reported by us [S. Izvekov and B. M. Rice, J. Chem. Phys. 136(13), 134508 (2012)] to include three-body forces. Similar to two-body interaction terms, the three-body terms are derived using parameter-free force-matching of the interactions from ab initio MD simulations of liquid silica. The derived shape of the O–Si–O three-body potential term affirms the existence of repulsion softening between oxygen atoms at short separations. The new model shows a good performance in simulating liquid, amorphous, and crystalline silica. By comparing the soft-core model and a similar model with the soft-core suppressed, we demonstrate that the topology reorganization within the local tetrahedral network and the O–O core-softening are two competitive mechanisms responsible for anomalous thermodynamic and kinetic behaviors observed in liquid and amorphous silica. The studied anomalies include the temperature of density maximum locus and anomalous diffusivity in liquid silica, and irreversible densification of amorphous silica. We show that the O–O core-softened interaction enhances the observed anomalies primarily through two mechanisms: facilitating the defect driven structural rearrangements of the silica tetrahedral network and modifying the tetrahedral ordering induced interactions toward multiple characteristic scales, the feature which underlies the thermodynamic anomalies.
NASA Astrophysics Data System (ADS)
Stoker, Carol R.; Cannon, Howard N.; Dunagan, Stephen E.; Lemke, Lawrence G.; Glass, Brian J.; Miller, David; Gomez-Elvira, Javier; Davis, Kiel; Zavaleta, Jhony; Winterholler, Alois; Roman, Matt; Rodriguez-Manfredi, Jose Antonio; Bonaccorsi, Rosalba; Bell, Mary Sue; Brown, Adrian; Battler, Melissa; Chen, Bin; Cooper, George; Davidson, Mark; Fernández-Remolar, David; Gonzales-Pastor, Eduardo; Heldmann, Jennifer L.; Martínez-Frías, Jesus; Parro, Victor; Prieto-Ballesteros, Olga; Sutter, Brad; Schuerger, Andrew C.; Schutt, John; Rull, Fernando
2008-10-01
The Mars Astrobiology Research and Technology Experiment (MARTE) simulated a robotic drilling mission to search for subsurface life on Mars. The drill site was on Peña de Hierro near the headwaters of the Río Tinto river (southwest Spain), on a deposit that includes massive sulfides and their gossanized remains that resemble some iron and sulfur minerals found on Mars. The mission used a fluidless, 10-axis, autonomous coring drill mounted on a simulated lander. Cores were faced; then instruments collected color wide-angle context images, color microscopic images, visible-near infrared point spectra, and (lower resolution) visible-near infrared hyperspectral images. Cores were then stored for further processing or ejected. A borehole inspection system collected panoramic imaging and Raman spectra of borehole walls. Life detection was performed on full cores with an adenosine triphosphate luciferin-luciferase bioluminescence assay and on crushed core sections with SOLID2, an antibody array-based instrument. Two remotely located science teams analyzed the remote sensing data and chose subsample locations. In 30 days of operation, the drill penetrated to 6 m and collected 21 cores. Biosignatures were detected in 12 of 15 samples analyzed by SOLID2. Science teams correctly interpreted the nature of the deposits drilled as compared to the ground truth. This experiment shows that drilling to search for subsurface life on Mars is technically feasible and scientifically rewarding.
Micromagnetics on high-performance workstation and mobile computational platforms
NASA Astrophysics Data System (ADS)
Fu, S.; Chang, R.; Couture, S.; Menarini, M.; Escobar, M. A.; Kuteifan, M.; Lubarda, M.; Gabay, D.; Lomakin, V.
2015-05-01
The feasibility of using high-performance desktop and embedded mobile computational platforms for micromagnetic simulations is presented, including a multi-core Intel central processing unit, Nvidia desktop graphics processing units, and the Nvidia Jetson TK1 platform. The FastMag finite element method-based micromagnetic simulator is used as a testbed, showing high efficiency on all the platforms. Optimization aspects of improving the performance of the mobile systems are discussed. The high performance, low cost, low power consumption, and rapid performance increase of embedded mobile systems make them promising candidates for micromagnetic simulations. Such architectures can be used as standalone systems or can be built into low-power computing clusters.
Molecular dynamics studies on the DNA-binding process of ERG.
Beuerle, Matthias G; Dufton, Neil P; Randi, Anna M; Gould, Ian R
2016-11-15
The ETS family of transcription factors regulate gene targets by binding to a core GGAA DNA-sequence. The ETS factor ERG is required for homeostasis and lineage-specific functions in endothelial cells, some subsets of haemopoietic cells, and chondrocytes; its ectopic expression is linked to oncogenesis in multiple tissues. To date, details of the DNA-binding process of ERG, including DNA-sequence recognition outside the core GGAA-sequence, are largely unknown. We combined available structural and experimental data to perform molecular dynamics simulations to study the DNA-binding process of ERG. In particular, we were able to reproduce the ERG DNA-complex with a DNA-binding simulation starting in an unbound configuration with a final root-mean-square deviation (RMSD) of 2.1 Å to the core ETS domain DNA-complex crystal structure. This allowed us to elucidate the relevance of amino acids involved in the formation of the ERG DNA-complex and to identify Arg385 as a novel key residue in the DNA-binding process. Moreover, we were able to show that water-mediated hydrogen bonds are present between ERG and DNA in our simulations and that those interactions have the potential to achieve sequence recognition outside the GGAA core DNA-sequence. The methodology employed in this study shows the promising capabilities of modern molecular dynamics simulations in the field of protein DNA-interactions.
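The 2.1 Å figure quoted above is an RMSD computed after optimal superposition of the simulated complex onto the crystal structure. A minimal sketch of that standard computation (the Kabsch algorithm, shown here as generic post-processing, not the authors' own tooling) is:

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between two (N, 3) coordinate sets after optimal superposition."""
    P = P - P.mean(axis=0)                  # center both structures
    Q = Q - Q.mean(axis=0)
    H = P.T @ Q                             # 3x3 covariance matrix
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against improper rotation
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T # optimal rotation matrix
    P_rot = P @ R.T
    return np.sqrt(((P_rot - Q) ** 2).sum() / len(P))
```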
Macro-Fiber Composite Based Transduction
2016-03-01
displacements, resonance frequencies, and acoustic performance. In addition to the experimental work, ATILA++ finite element models were developed and... done free flooded and with a simulated air backing made from a foam core (a weight was suspended below the device for negative buoyancy). Figure 13 and... Figure 15 shows the TVR and phase of the MFC cylinder in-water with an air backing (foam core). The wide
Initial Data Analysis Results for ATD-2 ISAS HITL Simulation
NASA Technical Reports Server (NTRS)
Lee, Hanbong
2017-01-01
To evaluate the operational procedures and information requirements for the core functional capabilities of the ATD-2 project, such as the tactical surface metering tool, the APREQ-CFR procedure, and data element exchanges between ramp and tower, human-in-the-loop (HITL) simulations were performed in March 2017. This presentation shows the initial data analysis results from the HITL simulations. With respect to the different runway configurations and metering values in the tactical surface scheduler, various airport performance metrics were analyzed and compared. These metrics include gate holding time, taxi-out time, runway throughput, queue size and wait time in queue, and TMI flight compliance. In addition to the metering value, other factors affecting the airport performance in the HITL simulation, including run duration, runway changes, and TMI constraints, are also discussed.
A novel methodology for litho-to-etch pattern fidelity correction for SADP process
NASA Astrophysics Data System (ADS)
Chen, Shr-Jia; Chang, Yu-Cheng; Lin, Arthur; Chang, Yi-Shiang; Lin, Chia-Chi; Lai, Jun-Cheng
2017-03-01
For 2x nm node semiconductor devices and beyond, more aggressive resolution enhancement techniques (RETs) such as source-mask co-optimization (SMO), litho-etch-litho-etch (LELE) and self-aligned double patterning (SADP) are utilized for the low k1 factor lithography processes. In the SADP process, the pattern fidelity is extremely critical since a slight photoresist (PR) top-loss or profile roughness may impact the later core trim process, due to its sensitivity to the environment. During the subsequent sidewall formation and core removal processes, the core trim profile weakness may worsen and induce serious defects that affect the final electrical performance. To predict PR top-loss, a rigorous lithography simulation can provide a reference to modify mask layouts; but it takes a much longer run time and is not capable of full-field mask data preparation. In this paper, we first propose an algorithm which utilizes multiple intensity levels from conventional aerial image simulation to assess the physical profile through the lithography to core trim etching steps. Subsequently, a novel correction method was utilized to improve the post-etch pattern fidelity without sacrificing the lithography process window. The results not only matched PR top-loss in rigorous lithography simulation, but also agreed with post-etch wafer data. Furthermore, this methodology can also be incorporated with OPC and post-OPC verification to improve the core trim profile and final pattern fidelity at an early stage.
Nonlinear combining and compression in multicore fibers
Chekhovskoy, I. S.; Rubenchik, A. M.; Shtyrina, O. V.; ...
2016-10-25
In this paper, we numerically demonstrate light-pulse combining and pulse compression using wave-collapse (self-focusing) energy-localization dynamics in a continuous-discrete nonlinear system, as implemented in a multicore fiber (MCF) using one-dimensional (1D) and 2D core distribution designs. Large-scale numerical simulations were performed to determine the conditions of the most efficient coherent combining and compression of pulses injected into the considered MCFs. We demonstrate the possibility of combining in a single core 90% of the total energy of pulses initially injected into all cores of a 7-core MCF with a hexagonal lattice. Finally, a pulse compression factor of about 720 can be obtained with a 19-core ring MCF.
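Pulse dynamics in such an MCF are commonly modeled by a continuous-discrete nonlinear Schrödinger equation with discrete linear coupling between neighboring cores; a generic form of such a model (standard notation, not necessarily the exact equations used in the paper) is

\[ i\,\frac{\partial A_n}{\partial z} \;-\; \frac{\beta_2}{2}\,\frac{\partial^2 A_n}{\partial t^2} \;+\; \gamma\,|A_n|^2 A_n \;+\; C \sum_{m \in N(n)} A_m \;=\; 0, \]

where \(A_n(z,t)\) is the field envelope in core \(n\), \(\beta_2\) the group-velocity dispersion, \(\gamma\) the Kerr nonlinearity, \(C\) the inter-core coupling coefficient, and \(N(n)\) the set of cores adjacent to \(n\) in the 1D or 2D lattice.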
DKIST Adaptive Optics System: Simulation Results
NASA Astrophysics Data System (ADS)
Marino, Jose; Schmidt, Dirk
2016-05-01
The 4 m class Daniel K. Inouye Solar Telescope (DKIST), currently under construction, will be equipped with an ultra high order solar adaptive optics (AO) system. The requirements and capabilities of such a solar AO system are beyond those of any other solar AO system currently in operation. We must rely on solar AO simulations to estimate and quantify its performance. We present performance estimation results of the DKIST AO system obtained with a new solar AO simulation tool. This simulation tool is a flexible and fast end-to-end solar AO simulator which produces accurate solar AO simulations while taking advantage of current multi-core computer technology. It relies on full imaging simulations of the extended field Shack-Hartmann wavefront sensor (WFS), which directly includes important secondary effects such as field dependent distortions and varying contrast of the WFS sub-aperture images.
Simulating storage part of application with Simgrid
NASA Astrophysics Data System (ADS)
Wang, Cong
2017-10-01
We present the design of a file system simulation and visualization system that uses the Simgrid API and visualization techniques to help users understand and improve the file system portion of their applications. The core of the simulator is the API provided by Simgrid; cluefs tracks and records the application's I/O operations. Running the simulator on this trace generates an output visualization file that shows the proportion of I/O actions and their time series. Users can change parameters in the configuration file to alter the storage system, such as read and write bandwidth, and can also adjust the storage strategy and test its performance, making it much easier to optimize the storage system. We have tested all aspects of the simulator, and the results suggest that its predictions are credible.
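To illustrate the kind of what-if exploration described above, a toy trace-driven storage model can estimate I/O time under different bandwidth settings. This is a simplified sketch, not the Simgrid API; the trace format and parameter names are hypothetical:

```python
# Hypothetical trace format: (op, nbytes), with op in {"read", "write"}.
def replay(trace, read_bw=200e6, write_bw=150e6, latency=0.5e-3):
    """Estimate total I/O time of a trace under configurable bandwidths."""
    t_read = t_write = 0.0
    for op, nbytes in trace:
        cost = latency + nbytes / (read_bw if op == "read" else write_bw)
        if op == "read":
            t_read += cost
        else:
            t_write += cost
    return {"read_s": t_read, "write_s": t_write, "total_s": t_read + t_write}

# Doubling read bandwidth in the "configuration" shows its effect directly.
print(replay([("read", 64e6), ("write", 16e6)] * 10))
print(replay([("read", 64e6), ("write", 16e6)] * 10, read_bw=400e6))
```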
Behavior of composite sandwich panels with several core designs at different impact velocities
NASA Astrophysics Data System (ADS)
Jiga, Gabriel; Stamin, Ştefan; Dinu, Gabriela
2018-02-01
A sandwich composite represents a special class of composite materials that is manufactured by bonding two thin but stiff faces to a low density and low strength but thick core. The distance between the skins given by the core increases the flexural modulus of the panel with a low mass increase, producing an efficient structure able to resist flexural and buckling loads. The strength of sandwich panels depends on the size of the panel, the skin material, and the number or density of the cells within it. Sandwich composites are used widely in industries such as aerospace, automotive, medical, and leisure. The behavior of composite sandwich panels with different core designs under different impact velocities is analyzed in this paper by numerical simulations performed on sandwich panels. The modeling was done in ANSYS and the analysis was performed through LS-DYNA.
Yin, Xinxing; An, Qiaoshi; Yu, Jiangsheng; Guo, Fengning; Geng, Yongliang; Bian, Linyi; Xu, Zhongsheng; Zhou, Baojing; Xie, Linghai; Zhang, Fujun; Tang, Weihua
2016-01-01
Three novel small molecules have been developed by side-chain engineering on a benzo[1,2-b:4,5-b’]dithiophene (BDT) core. The typical acceptor-donor-acceptor (A-D-A) structure is adopted with 4,8-functionalized BDT moieties as core, dioctylterthiophene as π bridge and 3-ethylrhodanine as electron-withdrawing end group. Side-chain engineering on the BDT core exhibits small but measurable effects on the optoelectronic properties of the small molecules. Theoretical simulation and X-ray diffraction study reveal that subtle tuning of the interchain distance between conjugated backbones has a large effect on the charge transport and thus the photovoltaic performance of these molecules. Bulk-heterojunction solar cells fabricated with a configuration of ITO/PEDOT:PSS/SM:PC71BM/PFN/Al exhibit a highest power conversion efficiency (PCE) of 6.99% after solvent vapor annealing. PMID:27140224
NASA Astrophysics Data System (ADS)
Augustine, Carlyn
2018-01-01
Type Ia supernovae are thermonuclear explosions of white dwarf (WD) stars. Past studies predict the existence of "hybrid" white dwarfs, made of a C/O/Ne core with an O/Ne shell, and that these are viable progenitors for supernovae. More recent work found that the C/O core is mixed with the surrounding O/Ne while the WD cools. Inspired by this scenario, we performed simulations of thermonuclear supernovae in the single degenerate paradigm from these hybrid progenitors. Our investigation began by constructing a hybrid white dwarf model with the one-dimensional stellar evolution code MESA. The model was allowed to go through unstable interior mixing and to ignite carbon burning centrally. The MESA model was then mapped to a two-dimensional initial condition, and an explosion was simulated from it with FLASH. For comparison, a similar explosion simulation was performed from a traditional C/O progenitor WD. Comparing the yields produced by the explosion simulations allows us to determine which model produces more ⁵⁶Ni, and therefore brighter events, and how explosions from these models differ from explosions from previous models without the mixing during WD cooling.
Discrete Event Modeling and Massively Parallel Execution of Epidemic Outbreak Phenomena
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perumalla, Kalyan S; Seal, Sudip K
2011-01-01
In complex phenomena such as epidemiological outbreaks, the intensity of inherent feedback effects and the significant role of transients in the dynamics make simulation the only effective method for proactive, reactive, or post-facto analysis. The spatial scale, runtime speed, and behavioral detail needed in detailed simulations of epidemic outbreaks make it necessary to use large-scale parallel processing. Here, an optimistic parallel execution of a new discrete event formulation of a reaction-diffusion simulation model of epidemic propagation is presented to facilitate dramatically increasing the fidelity and speed by which epidemiological simulations can be performed. Rollback support needed during optimistic parallel execution is achieved by combining reverse computation with a small amount of incremental state saving. Parallel speedup of over 5,500 and other runtime performance metrics of the system are observed with weak-scaling execution on a small (8,192-core) Blue Gene/P system, while scalability with a weak-scaling speedup of over 10,000 is demonstrated on 65,536 cores of a large Cray XT5 system. Scenarios representing large population sizes exceeding several hundreds of millions of individuals in the largest cases are successfully exercised to verify model scalability.
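Reverse computation, mentioned above, rolls back a speculatively executed event by running an inverse handler rather than restoring a full state snapshot; operations that destroy information fall back to incremental state saving. A minimal illustrative sketch (class and method names are ours, not the authors' code; a real simulator would keep a per-event log of saved values):

```python
class Cell:
    """One spatial cell in a reaction-diffusion epidemic model (illustrative)."""
    def __init__(self):
        self.infected = 0
        self.peak = 0                       # max() destroys information

    def arrive(self):
        """Forward handler: an infected individual arrives."""
        self.infected += 1                  # increment is trivially reversible
        self.saved_peak = self.peak         # incremental state saving for max()
        self.peak = max(self.peak, self.infected)

    def arrive_reverse(self):
        """Reverse handler: undo the event on rollback."""
        self.peak = self.saved_peak
        self.infected -= 1

c = Cell()
c.arrive()            # optimistically executed event
c.arrive_reverse()    # rollback after a detected causality violation
assert c.infected == 0 and c.peak == 0
```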
Defining the Simulation Technician Role: Results of a Survey-Based Study.
Bailey, Rachel; Taylor, Regina G; FitzGerald, Michael R; Kerrey, Benjamin T; LeMaster, Thomas; Geis, Gary L
2015-10-01
In health care simulation, simulation technicians perform multiple tasks to support various educational offerings. Technician responsibilities and the tasks that accompany them seem to vary between centers. The objectives were to identify the range and frequency of tasks that technicians perform and to determine if there is a correspondence between what technicians do and what they feel their responsibilities should be. We hypothesized that there is a core set of responsibilities and tasks for the technician position regardless of background, experience, and type of simulation center. We conducted a prospective, survey-based study of individuals currently functioning in a simulation technician role in a simulation center. This survey was designed internally and piloted within 3 academic simulation centers. Potential respondents were identified through a national mailing list, and the survey was distributed electronically during a 3-week period. A survey request was sent to 280 potential participants, 136 (49%) responded, and 73 met inclusion criteria. Five core tasks were identified as follows: equipment setup and breakdown, programming scenarios into software, operation of software during simulation, audiovisual support for courses, and on-site simulator maintenance. Independent of background before they were hired, technicians felt unprepared for their role once taking the position. Formal training was identified as a need; however, the majority of technicians felt experience over time was the main contributor toward developing knowledge and skills within their role. This study represents a first step in defining the technician role within simulation-based education and supports the need for the development of a formal job description to allow recruitment, development, and certification.
Standalone BISON Fuel Performance Results for Watts Bar Unit 1, Cycles 1-3
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clarno, Kevin T.; Pawlowski, Roger; Stimpson, Shane
2016-03-07
The Consortium for Advanced Simulation of Light Water Reactors (CASL) is moving forward with more complex multiphysics simulations and increased focus on incorporating fuel performance analysis methods. The coupled neutronics/thermal-hydraulics capabilities within the Virtual Environment for Reactor Applications Core Simulator (VERA-CS) have become relatively stable, and major advances have been made in analysis efforts, including the simulation of twelve cycles of Watts Bar Nuclear Unit 1 (WBN1) operation. While this is a major achievement, the VERA-CS approaches for treating fuel pin heat transfer have well-known limitations that could be eliminated through better integration with the BISON fuel performance code. Several approaches are being implemented to consider fuel performance, including a more direct multiway coupling with Tiamat, as well as a more loosely coupled one-way approach with standalone BISON cases. Fuel performance typically undergoes an independent analysis using a standalone fuel performance code with manually specified input defined from an independent core simulator solution or set of assumptions. This report summarizes the improvements made since the initial milestone to execute BISON from VERA-CS output. Many of these improvements were prompted through tighter collaboration with the BISON development team at Idaho National Laboratory (INL). A brief description of WBN1 and some of the VERA-CS data used to simulate it are presented. Data from a small mesh sensitivity study are shown, which helps justify the mesh parameters used in this work. The multi-cycle results are presented, followed by the results for the first three cycles of WBN1 operation, particularly the parameters of interest to pellet-clad interaction (PCI) screening (fuel-clad gap closure, maximum centerline fuel temperature, maximum/minimum clad hoop stress, and cumulative damage index). Once the mechanics of this capability are functioning, future work will target cycles with known or suspected PCI failures to determine how well they can be estimated.
Dynamical Core in Atmospheric Model Does Matter in the Simulation of Arctic Climate
NASA Astrophysics Data System (ADS)
Jun, Sang-Yoon; Choi, Suk-Jin; Kim, Baek-Min
2018-03-01
Climate models using different dynamical cores can simulate significantly different winter Arctic climates even if equipped with virtually the same physics schemes. The current climate simulated by the global climate model using a cubed-sphere grid with the spectral element method (SE core) exhibited significantly warmer Arctic surface air temperature compared to that using a latitude-longitude grid with the finite volume method core. Compared to the finite volume method core, the SE core simulated additional adiabatic warming in the Arctic lower atmosphere, and this was consistent with the eddy-forced secondary circulation. Downward longwave radiation further enhanced Arctic near-surface warming, with a higher surface air temperature of about 1.9 K. Furthermore, in the atmospheric response to reduced sea ice conditions with the same physical settings, only the SE core showed a robust cooling response over North America. We emphasize that special attention is needed in selecting the dynamical core of climate models in the simulation of the Arctic climate and associated teleconnection patterns.
Porting plasma physics simulation codes to modern computing architectures using the
NASA Astrophysics Data System (ADS)
Germaschewski, Kai; Abbott, Stephen
2015-11-01
Available computing power has continued to grow exponentially even after single-core performance saturated in the last decade. The increase has since been driven by more parallelism, both using more cores and having more parallelism in each core, e.g. in GPUs and Intel Xeon Phi. Adapting existing plasma physics codes is challenging, in particular as there is no single programming model that covers current and future architectures. We will introduce the open-source
NASA Astrophysics Data System (ADS)
Tanikawa, Ataru; Yoshikawa, Kohji; Okamoto, Takashi; Nitadori, Keigo
2012-02-01
We present a high-performance N-body code for self-gravitating collisional systems accelerated with the aid of a new SIMD instruction set extension of the x86 architecture: Advanced Vector eXtensions (AVX), an enhanced version of the Streaming SIMD Extensions (SSE). With one processor core of an Intel Core i7-2600 processor (8 MB cache and 3.40 GHz) based on the Sandy Bridge micro-architecture, we implemented a fourth-order Hermite scheme with individual timestep scheme (Makino and Aarseth, 1992), and achieved the performance of ~20 giga floating point number operations per second (GFLOPS) for double-precision accuracy, which is two times and five times higher than that of the previously developed code implemented with the SSE instructions (Nitadori et al., 2006b), and that of a code implemented without any explicit use of SIMD instructions with the same processor core, respectively. We have parallelized the code by using the so-called NINJA scheme (Nitadori et al., 2006a), and achieved ~90 GFLOPS for a system containing more than N = 8192 particles with 8 MPI processes on four cores. We expect to achieve about 10 tera FLOPS (TFLOPS) for a self-gravitating collisional system with N ~ 10^5 on massively parallel systems with at most 800 cores with the Sandy Bridge micro-architecture. This performance will be comparable to that of Graphic Processing Unit (GPU) cluster systems, such as the one with about 200 Tesla C1070 GPUs (Spurzem et al., 2010). This paper offers an alternative to collisional N-body simulations with GRAPEs and GPUs.
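For reference, the fourth-order Hermite scheme referred to above predicts positions and velocities from the current acceleration and its time derivative (jerk), then corrects them after a new force evaluation. A plain Python sketch of the predictor and the direct-summation acceleration/jerk loop, the part the AVX kernels accelerate, follows; the variable names and the Plummer softening are our illustrative choices:

```python
import numpy as np

def predict(x, v, a, j, dt):
    """Predictor step of the fourth-order Hermite scheme."""
    xp = x + v*dt + a*dt**2/2 + j*dt**3/6
    vp = v + a*dt + j*dt**2/2
    return xp, vp

def acc_jerk(x, v, m, eps2=1e-8):
    """Direct-summation acceleration and jerk for all N particles, O(N^2)."""
    n = len(m)
    a = np.zeros_like(x); j = np.zeros_like(x)
    for i in range(n):
        dx = x - x[i]; dv = v - v[i]
        r2 = (dx**2).sum(axis=1) + eps2     # softened squared distance
        r2[i] = 1.0                         # avoid self-interaction divide-by-zero
        w = (m / r2**1.5)[:, None]
        w[i] = 0.0                          # mask the self term
        rv = (dx*dv).sum(axis=1) / r2
        a[i] = (w*dx).sum(axis=0)
        j[i] = (w*(dv - 3*rv[:, None]*dx)).sum(axis=0)
    return a, j
```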
OpenACC performance for simulating 2D radial dambreak using FVM HLLE flux
NASA Astrophysics Data System (ADS)
Gunawan, P. H.; Pahlevi, M. R.
2018-03-01
The aim of this paper is to investigate the performance of the OpenACC platform for computing a 2D radial dambreak. Here, the shallow water equations are used to describe and simulate the 2D radial dambreak with the finite volume method (FVM) using the HLLE flux. OpenACC is a directive-based parallel programming platform targeting GPU cores. In this research, the platform is used to reduce the computational time of the numerical scheme. The results show that using OpenACC reduces the computational time. For the dry and wet radial dambreak simulations on 2048 grids, the parallel computational times are 575.984 s and 584.830 s, respectively. These results show the effectiveness of OpenACC when compared with the serial times of the dry and wet radial dambreak simulations, which are 28047.500 s and 29269.40 s, respectively.
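These timings correspond to parallel speedups of roughly 28047.500 / 575.984 ≈ 48.7 for the dry case and 29269.40 / 584.830 ≈ 50.0 for the wet case, figures derived here from the reported numbers rather than stated in the original abstract.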
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kelkar, Mohan
2002-04-02
This report explains the unusual characteristics of West Carney Field based on detailed geological and engineering analyses. A geological history that explains the presence of mobile water and oil in the reservoir was proposed. The combination of matrix and fractures in the reservoir explains the reservoir's flow behavior. We confirm our hypothesis by matching observed performance with a simulated model and develop procedures for correlating core data to log data so that the analysis can be extended to other, similar fields where the core coverage may be limited.
Absolute binding free energy calculations of CBClip host–guest systems in the SAMPL5 blind challenge
Tofoleanu, Florentina; Pickard, Frank C.; König, Gerhard; Huang, Jing; Damjanović, Ana; Baek, Minkyung; Seok, Chaok; Brooks, Bernard R.
2016-01-01
Herein, we report the absolute binding free energy calculations of CBClip complexes in the SAMPL5 blind challenge. Initial conformations of CBClip complexes were obtained using docking and molecular dynamics simulations. Free energy calculations were performed using thermodynamic integration (TI) with soft-core potentials and Bennett's acceptance ratio (BAR) method based on a serial insertion scheme. We compared the results obtained with TI simulations with soft-core potentials and Hamiltonian replica exchange simulations with the serial insertion method combined with the BAR method. The results show that the difference between the two methods can be mainly attributed to the van der Waals free energies, suggesting that either the simulations used for TI or the simulations used for BAR, or both, are not fully converged and the two sets of simulations may have sampled different phase space regions. The penalty scores of the force field parameters of the 10 guest molecules provided by the CHARMM Generalized Force Field can be an indicator of the accuracy of binding free energy calculations. Among our submissions, the combination of docking and TI performed best, yielding a root mean square deviation of 2.94 kcal/mol and an average unsigned error of 3.41 kcal/mol for the ten guest molecules. These values were the best overall among all participants. However, our submissions had little correlation with experiments. PMID:27677749
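For context, the TI estimate referenced above integrates the ensemble-averaged derivative of the coupling Hamiltonian over the alchemical parameter, with soft-core potentials keeping the integrand finite at the endpoints. In standard notation (general background, not the authors' specific setup),

\[ \Delta G \;=\; \int_0^1 \left\langle \frac{\partial H(\lambda)}{\partial \lambda} \right\rangle_{\lambda} d\lambda, \]

and a commonly used Beutler-type soft-core Lennard-Jones term has the form

\[ U_{\mathrm{sc}}(r;\lambda) \;=\; 4\epsilon\lambda \left[ \left( \frac{\sigma^6}{\alpha(1-\lambda)\sigma^6 + r^6} \right)^{2} - \frac{\sigma^6}{\alpha(1-\lambda)\sigma^6 + r^6} \right], \]

which reduces to the ordinary Lennard-Jones potential at \(\lambda = 1\) and stays bounded as \(r \to 0\) for \(\lambda < 1\).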
NASA Astrophysics Data System (ADS)
Pu, Z.; Yu, Y.
2016-12-01
The prediction of Hurricane Joaquin's hairpin clockwise turn during 1 and 2 October 2015 presented a forecasting challenge for real-time numerical weather prediction, as the tracks from several major numerical weather prediction models differed from each other. To investigate the large-scale environment and hurricane inner-core structures related to the hairpin turn of Joaquin, a series of high-resolution mesoscale numerical simulations of Hurricane Joaquin was performed with an advanced research version of the Weather Research and Forecasting (WRF) model. The outcomes were compared with observations obtained from the US Office of Naval Research's Tropical Cyclone Intensity (TCI) Experiment during the 2015 hurricane season. Specifically, five groups of sensitivity experiments with different cumulus, boundary layer, and microphysical schemes as well as different initial and boundary conditions and initial times in the WRF simulations were performed. It is found that the choice of the cumulus parameterization scheme plays a significant role in reproducing a reasonable track forecast during Joaquin's hairpin turn. The mid-level environmental steering flows can be the reason that leads to different tracks in the simulations with different cumulus schemes. In addition, differences in the distribution and amounts of latent heating over the inner-core region are associated with discrepancies in the simulated intensity among the different experiments. Detailed simulation results, comparison with TCI-2015 observations, and comprehensive diagnoses will be presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kochunas, Brendan; Collins, Benjamin; Stimpson, Shane
This paper describes the methodology developed and implemented in the Virtual Environment for Reactor Applications Core Simulator (VERA-CS) to perform high-fidelity, pressurized water reactor (PWR), multicycle, core physics calculations. Depletion of the core with pin-resolved power and nuclide detail is a significant advance in the state of the art for reactor analysis, providing the level of detail necessary to address the problems of the U.S. Department of Energy Nuclear Reactor Simulation Hub, the Consortium for Advanced Simulation of Light Water Reactors (CASL). VERA-CS has three main components: the neutronics solver MPACT, the thermal-hydraulic (T-H) solver COBRA-TF (CTF), and the nuclide transmutation solver ORIGEN. This paper focuses on MPACT and provides an overview of the resonance self-shielding methods, macroscopic-cross-section calculation, two-dimensional/one-dimensional (2-D/1-D) transport, nuclide depletion, T-H feedback, and other supporting methods representing a minimal set of the capabilities needed to simulate high-fidelity models of a commercial nuclear reactor. Results are presented from the simulation of a model of the first cycle of Watts Bar Unit 1. The simulation is within 16 parts per million boron (ppmB) reactivity for all state points compared to cycle measurements, with an average reactivity bias of <5 ppmB for the entire cycle. Comparisons to cycle 1 flux map data are also provided, and the average 2-D root-mean-square (rms) error during cycle 1 is 1.07%. To demonstrate the multicycle capability, a state point at beginning of cycle (BOC) 2 was also simulated and compared to plant data. The comparison of the cycle 2 BOC state has a reactivity difference of +3 ppmB from measurement, and the 2-D rms of the comparison in the flux maps is 1.77%. Lastly, these results provide confidence in VERA-CS’s capability to perform high-fidelity calculations for practical PWR reactor problems.
Natural Circulation Level Optimization and the Effect during ULOF Accident in the SPINNOR Reactors
NASA Astrophysics Data System (ADS)
Abdullah, Ade Gafar; Su'ud, Zaki; Kurniadi, Rizal; Kurniasih, Neny; Yulianti, Yanti
2010-12-01
Natural circulation level optimization and its effect during a loss-of-flow accident in the 250 MWt MOX-fuelled small Pb-Bi cooled non-refueling nuclear reactor (SPINNOR) have been investigated. The simulation was performed using the FI-ITB safety code, which has been developed at ITB. The simulation begins with a steady-state calculation of the neutron flux, power distribution, and temperature distribution across the core, hot pool and cool pool, and also the steam generator. When the accident starts due to the loss of pumping power, the power distribution and the temperature distributions of the core, hot pool and cool pool, and steam generator change. Then the feedback reactivity calculation is conducted, followed by the kinetics calculation. The process is repeated until the optimum power distribution is achieved. The results show that the SPINNOR reactor has inherent safety capability against this accident.
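The coupling of power, temperature, and feedback reactivity described above is typically closed through the point-kinetics equations; in standard notation (a generic form, not necessarily the exact model in the FI-ITB code),

\[ \frac{dP}{dt} \;=\; \frac{\rho(t) - \beta}{\Lambda}\,P \;+\; \sum_i \lambda_i C_i, \qquad \frac{dC_i}{dt} \;=\; \frac{\beta_i}{\Lambda}\,P \;-\; \lambda_i C_i, \]

where \(P\) is the reactor power, \(C_i\) the delayed-neutron precursor concentrations, \(\beta = \sum_i \beta_i\) the delayed-neutron fraction, \(\Lambda\) the prompt-neutron generation time, and \(\rho(t)\) combines the external reactivity with temperature-feedback contributions (e.g., Doppler and coolant-density terms) evaluated from the thermal-hydraulic solution.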
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCulloch, R.W.; MacPherson, R.E.
1983-03-01
The Core Flow Test Loop was constructed to perform many of the safety, core design, and mechanical interaction tests in support of the Gas-Cooled Fast Reactor (GCFR) using electrically heated fuel rod simulators (FRSs). Operation includes many off-normal or postulated accident sequences including transient, high-power, and high-temperature operation. The FRS was developed to survive: (1) hundreds of hours of operation at 200 W/cm² and 1000 °C cladding temperature, and (2) 40 h at 40 W/cm² and 1200 °C cladding temperature. Six 0.5-mm type K sheathed thermocouples were placed inside the FRS cladding to measure steady-state and transient temperatures through clad melting at 1370 °C.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ball, R.M.; Madaras, J.J.; Trowbridge, F.R. Jr.
Experimental tests on the Annular Core Research Reactor have confirmed that the "Three-Bean-Salad" control algorithm based on the Pontryagin maximum principle can change the power of a nuclear reactor over many decades with a very fast startup rate and minimal overshoot. The paper describes the results of simulations and operations up to 25 MW and 87 decades per minute. 3 refs., 4 figs., 1 tab.
Analysis of aircraft performance during lateral maneuvering for microburst avoidance
NASA Technical Reports Server (NTRS)
Avila De Melo, Denise; Hansman, R. John, Jr.
1990-01-01
Aircraft response to a severe and a moderate three-dimensional microburst model using nonlinear numerical simulations of a Boeing 737-100 is studied. The relative performance loss is compared for microburst escape procedures with and without lateral maneuvering. The results show that the hazards caused by penetration of a microburst in the landing phase are attenuated if lateral escape maneuvers are applied to turn the aircraft away from the microburst core rather than flying straight through. If the lateral escape maneuver is initiated close to the microburst core, high bank angles tend to deteriorate aircraft performance. Lateral maneuvering is also found to reduce the advance warning required to escape from microburst hazards, but it requires that information on the existence and location of the microburst be available (i.e., remote detection) in order to avoid an incorrect turn toward the microburst core.
Kavyani, Sajjad; Amjad-Iranagh, Sepideh; Modarress, Hamid
2014-03-27
Poly(amidoamine) (PAMAM) dendrimers play an important role in drug delivery systems, because the dendrimers are susceptible to gaining unique features with modification of their structure, such as changing their terminals or improving their interior core. To investigate the core improvement and the effect of core nature on PAMAM dendrimers, we studied generations G3 and G4 PAMAM dendrimers with interior cores of the commonly used ethylenediamine (EDA), 1,5-diaminohexane (DAH), and bis(3-aminopropyl) ether (BAPE) solvated in water, as an aqueous dendrimer system, by using molecular dynamics simulation and applying a coarse-grained (CG) dendrimer force field. To consider the electrostatic interactions, the simulations were performed at two protonation states, pHs 5 and 7. The results indicated that the core improvement of PAMAM dendrimers with DAH produces the largest size for G3 and G4 dendrimers at both pHs 5 and 7. The increase in size was also observed for the BAPE core, but it was not as significant as that for the DAH core. By considering the internal structure of the dendrimers, it was found that the PAMAM dendrimer shell with a DAH core had more cavities than with a BAPE core at both pHs 5 and 7. Also, the moment of inertia calculations showed that generation G3 is more open-shaped and has higher structural asymmetry than generation G4. These properties of G3, especially its structural asymmetry, make penetration of water beads into the dendrimer feasible. But for the higher generation G4, with its relative structural symmetry, the encapsulation efficiency for water molecules can be enhanced by changing its core to DAH or BAPE. It is also observed that for the higher generation G4 the effect of core modification is more profound than for G3, because the core modification promotes the development of structural asymmetry in G4 more significantly. Comparing the number of water beads that penetrate into the PAMAM dendrimers for EDA, DAH, and BAPE cores indicates a significant increase when their cores have been modified with DAH or BAPE and substantiates the effective influence of the core nature on the dendrimer encapsulation efficiency.
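Size and shape measures of the kind discussed above (overall size, openness, and structural asymmetry) are conventionally computed from the gyration tensor of the bead coordinates. A minimal Python sketch of that generic post-processing (not the authors' analysis code) is:

```python
import numpy as np

def shape_descriptors(coords, masses=None):
    """Radius of gyration and relative shape anisotropy from the gyration tensor."""
    m = np.ones(len(coords)) if masses is None else np.asarray(masses)
    r = coords - np.average(coords, axis=0, weights=m)   # center of mass frame
    S = (m[:, None, None] * r[:, :, None] * r[:, None, :]).sum(axis=0) / m.sum()
    lam = np.sort(np.linalg.eigvalsh(S))                 # gyration tensor eigenvalues
    rg = np.sqrt(lam.sum())                              # radius of gyration
    kappa2 = 1.0 - 3.0 * (lam[0]*lam[1] + lam[1]*lam[2] + lam[0]*lam[2]) / lam.sum()**2
    return rg, kappa2   # kappa2 ranges from 0 (sphere) to 1 (rod)
```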
Seekhao, Nuttiiya; Shung, Caroline; JaJa, Joseph; Mongeau, Luc; Li-Jessen, Nicole Y K
2016-05-01
We present an efficient and scalable scheme for implementing agent-based modeling (ABM) simulation with In Situ visualization of large complex systems on heterogeneous computing platforms. The scheme is designed to make optimal use of the resources available on a heterogeneous platform consisting of a multicore CPU and a GPU, resulting in minimal to no resource idle time. Furthermore, the scheme was implemented under a client-server paradigm that enables remote users to visualize and analyze simulation data as it is being generated at each time step of the model. Performance of a simulation case study of vocal fold inflammation and wound healing with 3.8 million agents shows 35× and 7× speedup in execution time over single-core and multi-core CPU respectively. Each iteration of the model took less than 200 ms to simulate, visualize and send the results to the client. This enables users to monitor the simulation in real-time and modify its course as needed.
Neutron-gamma flux and dose calculations in a Pressurized Water Reactor (PWR)
NASA Astrophysics Data System (ADS)
Brovchenko, Mariya; Dechenaux, Benjamin; Burn, Kenneth W.; Console Camprini, Patrizio; Duhamel, Isabelle; Peron, Arthur
2017-09-01
The present work deals with Monte Carlo simulations aiming to determine the neutron and gamma responses outside the vessel and in the basemat of a Pressurized Water Reactor (PWR). The model is based on the Tihange-I Belgian nuclear reactor. With a large set of information and measurements available, this reactor has the advantage of being easily modelled and allows validation based on the experimental measurements. Power distribution calculations were therefore performed with the MCNP code at IRSN and compared to the available in-core measurements. Results showed a good agreement between calculated and measured values over the whole core. In this paper, the methods and hypotheses used for the particle transport simulation, from the fission distribution in the core to the detectors outside the vessel of the reactor, are also summarized. The results of the simulations are presented, including the neutron and gamma doses and flux energy spectra. MCNP6 computational results comparing the JEFF3.1 and ENDF-B/VII.1 nuclear data evaluations and the sensitivity of the results to some model parameters are presented.
A New Capability for Nuclear Thermal Propulsion Design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amiri, Benjamin W.; Nuclear and Radiological Engineering Department, University of Florida, Gainesville, FL 32611; Kapernick, Richard J.
2007-01-30
This paper describes a new capability for Nuclear Thermal Propulsion (NTP) design that has been developed, and presents the results of some analyses performed with this design tool. The purpose of the tool is to design to specified mission and material limits, while maximizing system thrust to weight. The head end of the design tool utilizes the ROCket Engine Transient Simulation (ROCETS) code to generate a system design and system design requirements as inputs to the core analysis. ROCETS is a modular system level code which has been used extensively in the liquid rocket engine industry for many years. The core design tool performs high-fidelity reactor core nuclear and thermal-hydraulic design analysis. At the heart of this process are two codes, TMSS-NTP and NTPgen, which together greatly automate the analysis, providing the capability to rapidly produce designs that meet all specified requirements while minimizing mass. A PERL based command script, called CORE DESIGNER, controls the execution of these two codes, and checks for convergence throughout the process. TMSS-NTP is executed first, to produce a suite of core designs that meet the specified reactor core mechanical, thermal-hydraulic and structural requirements. The suite of designs consists of a set of core layouts and, for each core layout, specific designs that span a range of core fuel volumes. NTPgen generates MCNPX models for each of the core designs from TMSS-NTP. Iterative analyses are performed in NTPgen until a reactor design (fuel volume) is identified for each core layout that meets cold and hot operation reactivity requirements and that is zoned to meet a radial core power distribution requirement.
[Series: Medical Applications of the PHITS Code (2): Acceleration by Parallel Computing].
Furuta, Takuya; Sato, Tatsuhiko
2015-01-01
Time-consuming Monte Carlo dose calculations have become feasible owing to the development of computer technology; the recent gains, however, stem from the emergence of multi-core high-performance computers, so parallel computing has become key to achieving good software performance. The Monte Carlo simulation code PHITS contains two parallel computing functions: distributed-memory parallelization using the message passing interface (MPI) and shared-memory parallelization using open multi-processing (OpenMP) directives. Users can choose between the two functions according to their needs. This paper explains the two functions, with their advantages and disadvantages. Some test applications are also provided to show their performance on a typical multi-core high-performance workstation.
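To make the distributed-memory versus shared-memory distinction concrete, here is a minimal hybrid Monte Carlo sketch (illustrative only, not PHITS code): MPI ranks split the histories across nodes while OpenMP threads share one node's memory, and the two can be combined in a single run.

```cpp
// Hybrid MPI+OpenMP Monte Carlo sketch. Compile e.g.: mpicxx -fopenmp mc.cpp
// Assumes nranks divides nHistories evenly, for simplicity.
#include <mpi.h>
#include <omp.h>
#include <cstdio>
#include <random>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, nranks;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    const long nHistories = 10'000'000;
    const long perRank = nHistories / nranks;      // distributed-memory split
    double localTally = 0.0;

    #pragma omp parallel reduction(+ : localTally) // shared-memory split
    {
        std::mt19937_64 rng(1234 + 1000L * rank + omp_get_thread_num());
        std::uniform_real_distribution<double> u(0.0, 1.0);
        #pragma omp for
        for (long i = 0; i < perRank; ++i) {
            double x = u(rng), y = u(rng);
            if (x * x + y * y < 1.0) localTally += 1.0;  // toy "tally"
        }
    }

    double globalTally = 0.0;
    MPI_Reduce(&localTally, &globalTally, 1, MPI_DOUBLE, MPI_SUM, 0,
               MPI_COMM_WORLD);
    if (rank == 0)  // the toy tally estimates pi, standing in for a dose tally
        std::printf("pi estimate = %f\n", 4.0 * globalTally / nHistories);
    MPI_Finalize();
}
```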
IPSL-CM5A2. An Earth System Model designed to run long simulations for past and future climates.
NASA Astrophysics Data System (ADS)
Sepulchre, Pierre; Caubel, Arnaud; Marti, Olivier; Hourdin, Frédéric; Dufresne, Jean-Louis; Boucher, Olivier
2017-04-01
The IPSL-CM5A model was developed and released in 2013 "to study the long-term response of the climate system to natural and anthropogenic forcings as part of the 5th Phase of the Coupled Model Intercomparison Project (CMIP5)" [Dufresne et al., 2013]. Although this model has also been used for numerous paleoclimate studies, a major limitation was its computation time, which averaged 10 model-years per day on 32 cores of the Curie supercomputer (at the TGCC computing center, France). Such performance was compatible with the experimental designs of intercomparison projects (e.g., CMIP, PMIP) but became limiting for modelling activities involving several multi-millennial experiments, which are typical for Quaternary or "deep-time" paleoclimate studies, in which a fully equilibrated deep ocean is mandatory. Here we present the Earth System model IPSL-CM5A2. Starting from IPSL-CM5A, technical developments have been performed both on separate components and on the coupling system in order to speed up the whole coupled model. These developments include the integration of hybrid MPI-OpenMP parallelization in the LMDz atmospheric component, the use of a new input-output library to perform parallel asynchronous input/output by using computing cores as "IO servers", and the use of a parallel coupling library between the ocean and atmospheric components. Running on 304 cores, the model can now simulate 55 years per day, opening the way to multi-millennial simulations. Apart from obtaining better computing performance, one aim of setting up IPSL-CM5A2 was to overcome the cold bias in global surface air temperature (t2m) depicted in IPSL-CM5A. We present the tuning strategy used to overcome this bias, as well as the main characteristics (including biases) of the pre-industrial climate simulated by IPSL-CM5A2. Lastly, we briefly present paleoclimate simulations run with this model, for the Holocene and for deeper timescales in the Cenozoic, for which the particular continental configuration was handled by a new design of the ocean tripolar grid.
Large-core single-mode rib SU8 waveguide using solvent-assisted microcontact molding.
Huang, Cheng-Sheng; Wang, Wei-Chih
2008-09-01
This paper describes a novel fabrication technique for constructing a polymer-based large-core single-mode rib waveguide. A negative-tone SU8 photoresist with high optical transmission over a large wavelength range and stable mechanical properties was used as the waveguide material. The waveguide was constructed using a polydimethylsiloxane stamp combined with a solvent-assisted microcontact molding technique. The effects of four different process conditions on the final pattern's geometry were investigated. Optical simulations were performed using beam propagation method software. Single-mode beam propagation was observed at the output of the simulated waveguide as well as at the output of the fabricated waveguide, observed through microscope imaging.
Yang, Jie; Weng, Wenguo; Wang, Faming; Song, Guowen
2017-05-01
This paper aims to integrate a human thermoregulatory model with a clothing model to predict core and skin temperatures. The human thermoregulatory model, consisting of an active system and a passive system, was used to determine the thermoregulation and heat exchanges within the body. The clothing model simulated heat and moisture transfer from the human skin to the environment through the microenvironment and fabric. In this clothing model, the air gap between skin and clothing, as well as clothing properties such as thickness, thermal conductivity, density, porosity, and tortuosity, were taken into consideration. The simulated core and mean skin temperatures were compared to published experimental results of subject tests at three ambient temperatures: 20 °C, 30 °C, and 40 °C. Although a lower signal-to-noise ratio was observed, the developed model performed well in predicting core temperatures, with a maximum difference between simulations and measurements of no more than 0.43 °C. Generally, the current model predicted the mean skin temperatures with reasonable accuracy. It could be applied to predict human physiological responses and assess thermal comfort and heat stress. Copyright © 2017 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Veneziani, Carmela
Two sets of simulations were performed within this allocation: 1) a 12-year fully-coupled experiment in preindustrial conditions, using the CICE4 version of the sea-ice model; 2) a set of multi-decadal ocean-ice-only experiments, forced with CORE-I atmospheric fields and using the CICE5 version of the sea-ice model. Results from simulation 1) are presented in Figures 1-3, and specific results from a simulation in 2) with tracer releases are presented in Figure 4.
NASA Technical Reports Server (NTRS)
Fasching, W. A.
1980-01-01
The improved single shank high pressure turbine design was evaluated in component tests consisting of performance, heat transfer and mechanical tests, and in core engine tests. The instrumented core engine test verified the thermal, mechanical, and aeromechanical characteristics of the improved turbine design. An endurance test subjected the improved single shank turbine to 1000 simulated flight cycles, the equivalent of approximately 3000 hours of typical airline service. Initial back-to-back engine tests demonstrated an improvement in cruise SFC of 1.3% and a reduction in exhaust gas temperature of 10°C. An additional improvement of 0.3% in cruise SFC and 6°C in EGT is projected for long service engines.
Optimization design of turbo-expander gas bearing for a 500W helium refrigerator
NASA Astrophysics Data System (ADS)
Li, S. S.; Fu, B.; Zhang, Q. Y.
2017-12-01
The turbo-expander is the core machinery of a helium refrigerator, and the gas bearing that supports it is the core technology driving turbo-expander design. Sound design and performance study of the gas bearing are essential to ensure the stability of the turbo-expander. In this paper, numerical simulation is used to analyze the performance of the gas bearing of a 500 W helium refrigerator turbine, and an optimization design of the gas bearing has been completed. The results of the gas bearing optimization also provide guidance for the manufacturing process. Finally, turbine experiments verify that the gas bearing performs well and ensures stable operation of the turbine.
Challenge toward the prediction of typhoon behaviour and downpour
NASA Astrophysics Data System (ADS)
Takahashi, K.; Onishi, R.; Baba, Y.; Kida, S.; Matsuda, K.; Goto, K.; Fuchigami, H.
2013-08-01
Mechanisms of interaction among phenomena at different scales play important roles in weather and climate forecasting. The Multi-Scale Simulator for the Geoenvironment (MSSG), which deals with multi-scale multi-physics phenomena, is a coupled non-hydrostatic atmosphere-ocean model designed to run efficiently on the Earth Simulator. We present simulation results with the world's highest horizontal resolution of 1.9 km for the entire globe, as well as regional heavy-rain simulations with 1 km horizontal resolution and 5 m horizontal/vertical resolution for an urban area. To gain high performance by exploiting the system capabilities, we apply performance evaluation metrics introduced in previous studies that incorporate the effects of the data caching mechanism between CPU and memory. With a code optimization guideline based on such metrics, we demonstrate that MSSG can achieve an excellent peak performance ratio of 32.2% on the Earth Simulator, with single-core performance found to be key to a reduced time-to-solution.
Formation of massive, dense cores by cloud-cloud collisions
NASA Astrophysics Data System (ADS)
Takahira, Ken; Shima, Kazuhiro; Habe, Asao; Tasker, Elizabeth J.
2018-03-01
We performed sub-parsec (~0.014 pc) scale simulations of cloud-cloud collisions of two idealized turbulent molecular clouds (MCs) with different masses in the range of (0.76-2.67) × 10^4 M_⊙ and with collision speeds of 5-30 km s^-1. These parameters are larger than in Takahira, Tasker, and Habe (2014, ApJ, 792, 63), whose simulations of colliding systems showed a partial gaseous arc morphology that supports the NANTEN observations of objects identified as colliding MCs. Gas clumps with density greater than 10^-20 g cm^-3 were identified as pre-stellar cores and tracked through the simulation to investigate the effects of the mass of the colliding clouds and of the collision speed on the resulting core population. Our results demonstrate that the properties of the smaller cloud are more important for the outcome of cloud-cloud collisions. The mass function of the formed cores can be approximated by a power-law relation with an index γ = -1.6 in slower cloud-cloud collisions (v ~ 5 km s^-1), in good agreement with observations of MCs. A faster relative speed increases the number of cores formed in the early stage of the collision and shortens the gas accretion phase of cores in the shocked region, leading to the suppression of core growth. A bending point appears in the high-mass part of the core mass function, and the bending-point mass decreases with increasing collision speed for the same combination of colliding clouds. Above the bending-point mass, the core mass function can be approximated by a power law with γ = -2 to -3, similar to the power index of the massive part of the observed stellar initial mass function. We discuss the implications of our results for massive-star formation in our Galaxy.
Multiple Days of Heat Exposure on Firefighters' Work Performance and Physiology.
Larsen, Brianna; Snow, Rod; Vincent, Grace; Tran, Jacqueline; Wolkow, Alexander; Aisbett, Brad
2015-01-01
This study assessed the accumulated effect of ambient heat on the performance of, and physiological and perceptual responses to, intermittent, simulated wildfire fighting tasks over three consecutive days. Firefighters (n = 36) were matched and allocated to either the CON (19°C) or HOT (33°C) condition. They performed three days of intermittent, self-paced simulated firefighting work, interspersed with physiological testing. Task repetitions were counted (and converted to distance or area) to determine work performance. Participants were asked to rate their perceived exertion and thermal sensation after each task. Heart rate, core temperature (Tc), and skin temperature (Tsk) were recorded continuously throughout the simulation. Fluids were consumed ad libitum. Urine volume was measured throughout, and urine specific gravity (USG) analysed, to estimate hydration. All food and fluid consumption was recorded. There was no difference in work output between experimental conditions; however, significant variation in performance responses between individuals was observed. All measures of thermal stress were elevated in the HOT condition, with core and skin temperature reaching, on average, 0.24 ± 0.08°C and 2.81 ± 0.20°C higher than in the CON group. Participants doubled their fluid intake in the HOT condition, and this was reflected in the USG scores, with the HOT participants recording significantly lower values. Heart rate was comparable between conditions at nearly all time points; however, the peak heart rate reached in each circuit was 7 ± 3% higher in the CON trial. Likewise, RPE was slightly elevated in the CON trial for the majority of tasks. Participants' work output was comparable between the CON and HOT conditions, but the performance change over time varied significantly between individuals. It is likely that the increased fluid replacement in the heat, in concert with frequent rest breaks and task rotation, assisted with the regulation of physiological responses (e.g., heart rate, core temperature).
Computational multicore on two-layer 1D shallow water equations for erodible dambreak
NASA Astrophysics Data System (ADS)
Simanjuntak, C. A.; Bagustara, B. A. R. H.; Gunawan, P. H.
2018-03-01
The simulation of an erodible dambreak using the two-layer shallow water equations and the SCHR scheme is elaborated in this paper. The results show that the two-layer SWE model is in good agreement with the experimental data obtained at the Université catholique de Louvain (Louvain-la-Neuve). Moreover, results for a parallel algorithm on multicore architectures are given, as shown in the sketch below. They show that Computer I, with an Intel(R) Core(TM) i5-2500 quad-core processor, has the best performance in accelerating the computational time, while Computer III, with an AMD A6-5200 APU quad-core processor, is observed to have higher speedup and efficiency. The speedup and efficiency of Computer III with 3200 grid points are 3.72 and 92.9%, respectively.
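As a rough illustration of the multicore pattern evaluated above, the sketch below parallelizes one explicit update of the 1D shallow water equations with OpenMP. It uses a plain Lax-Friedrichs flux for a single layer; the paper's two-layer SCHR scheme is more elaborate, but the loop-level parallelism is of the same kind.

```cpp
// Illustrative OpenMP parallelization of a 1D shallow-water update step
// (single-layer Lax-Friedrichs, not the paper's two-layer SCHR scheme).
#include <omp.h>
#include <vector>

const double g = 9.81;  // gravitational acceleration

// Conserved variables: h (depth), q = h*u (discharge).
void step(std::vector<double>& h, std::vector<double>& q,
          double dx, double dt) {
    const int n = (int)h.size();
    std::vector<double> hn(n), qn(n);
    #pragma omp parallel for
    for (int i = 1; i < n - 1; ++i) {
        // physical fluxes for mass and momentum
        auto fh = [](double h_, double q_) { return q_; };
        auto fq = [](double h_, double q_) {
            return q_ * q_ / h_ + 0.5 * g * h_ * h_;
        };
        hn[i] = 0.5 * (h[i - 1] + h[i + 1])
              - 0.5 * dt / dx * (fh(h[i + 1], q[i + 1]) - fh(h[i - 1], q[i - 1]));
        qn[i] = 0.5 * (q[i - 1] + q[i + 1])
              - 0.5 * dt / dx * (fq(h[i + 1], q[i + 1]) - fq(h[i - 1], q[i - 1]));
    }
    hn[0] = hn[1]; qn[0] = qn[1];              // simple outflow boundaries
    hn[n - 1] = hn[n - 2]; qn[n - 1] = qn[n - 2];
    h.swap(hn); q.swap(qn);
}
```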
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilke, Jeremiah J; Kenny, Joseph P.
2015-02-01
Discrete event simulation provides a powerful mechanism for designing and testing new extreme-scale programming models for high-performance computing. Rather than debug, run, and wait for results on an actual system, a design can first be iterated through a simulator. This is particularly useful when test beds cannot be used, e.g., to explore hardware or scales that do not yet exist or are inaccessible. Here we detail the macroscale components of the structural simulation toolkit (SST). Instead of depending on trace replay or state machines, the simulator is architected to execute real code on real software stacks. Our particular user-space threading framework allows massive scales to be simulated even on small clusters. The link between the discrete event core and the threading framework allows interesting performance metrics, such as call graphs, to be collected from a simulated run. Performance analysis via simulation can thus become an important phase in extreme-scale programming model and runtime system design via the SST macroscale components.
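The discrete event core named above boils down to a time-ordered event queue. The toy sketch below (illustrative only, far simpler than SST's engine) shows the essential mechanism: events carry a timestamp and an action, and actions may schedule further events, e.g. to model a message's network latency.

```cpp
// Minimal discrete-event core: events are (time, action) pairs popped in
// time order; an action may schedule further events.
#include <cstdio>
#include <functional>
#include <queue>
#include <vector>

struct Event {
    double time;
    std::function<void()> action;
    bool operator>(const Event& o) const { return time > o.time; }
};

class Simulator {
    std::priority_queue<Event, std::vector<Event>, std::greater<Event>> q;
    double now_ = 0.0;
public:
    double now() const { return now_; }
    void schedule(double delay, std::function<void()> a) {
        q.push({now_ + delay, std::move(a)});
    }
    void run() {
        while (!q.empty()) {
            Event e = q.top(); q.pop();
            now_ = e.time;   // simulated time jumps to the next event
            e.action();      // may schedule further events
        }
    }
};

int main() {
    Simulator sim;
    // Model a fake MPI message: send at t=0, arrive after 5 us of latency.
    sim.schedule(0.0, [&] {
        std::printf("t=%g: send\n", sim.now());
        sim.schedule(5e-6, [&] { std::printf("t=%g: recv\n", sim.now()); });
    });
    sim.run();
}
```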
On the possible use of the MASURCA reactor as a flexible, high-intensity, fast neutron beam facility
NASA Astrophysics Data System (ADS)
Dioni, Luca; Jacqmin, Robert; Sumini, Marco; Stout, Brian
2017-09-01
In recent work [1, 2], we have shown that the MASURCA research reactor could be used to deliver a fairly intense continuous fast neutron beam to an experimental room located next to the reactor core. Owing to MASURCA's favorable characteristics and diverse material inventories, the neutron beam intensity and spectrum can be further tailored to meet users' needs, which could be of interest for several applications. Monte Carlo simulations have been performed to characterize in detail the extracted neutron (and photon) beam entering the experimental room. These numerical simulations were done for two different bare cores: a uranium metallic core (~30% 235U enriched) and a plutonium oxide core (~25% Pu fraction, ~78% 239Pu). The results show that the distinctive resonance energy structures of the two core leakage spectra are preserved at the channel exit. As the experimental room is large enough to house a dedicated set of neutron spectrometry instruments, we have investigated several candidate neutron spectrum measurement techniques that could be implemented to guarantee well-defined, repeatable beam conditions for users. Our investigation also includes considerations regarding the gamma rays in the beams.
Critical Resolution and Physical Dependencies of Supernovae: Stars in Heat and Under Pressure
NASA Astrophysics Data System (ADS)
Vartanyan, David; Burrows, Adam Seth
2017-01-01
For over five decades, the mechanism of explosion in core-collapse supernovae has remained one of the last untoppled bastions in astrophysics, presenting both a technical and a physical problem. Motivated by advances in computation and nuclear physics and by the resilience of the core-collapse problem, collaborators Adam Burrows (Princeton), Joshua Dolence (LANL), and Aaron Skinner (LLNL) have developed FORNAX, a highly parallelizable multidimensional supernova simulation code featuring an explicit hydrodynamic and radiation-transfer solver. We present the results (Vartanyan et al. 2016, Burrows et al. 2016, both in preparation) of a sequence of two-dimensional axisymmetric simulations of core-collapse supernovae using FORNAX, probing both progenitor mass dependence and the effect of physical inputs on explosiveness in our study of the revival of the stalled shock via the neutrino heating mechanism. We also performed a resolution study, testing spatial and energy-group resolutions as well as compilation flags. We illustrate that, when the protoneutron star bounded by a stalled shock is close to the critical explosion condition (Burrows & Goshy 1993), small changes of order 10% in neutrino energies and luminosities can result in explosion, and that these effects couple nonlinearly. We show that many-body medium effects on neutrino-nucleon scattering, as well as inelastic neutrino-nucleon and neutrino-electron scattering, strongly favor earlier and more vigorous explosions by depositing energy in the gain region. Additionally, we probe the effects of a ray-by-ray+ transport solver (which does not include transverse velocity terms) employed by many groups and confirm that it artificially accelerates explosion (see also Skinner et al. 2016). In the coming year, we are gearing up for the first set of 3D simulations yet performed in the context of core-collapse supernovae employing 20 energy groups and one of the most complete nuclear physics modules in the field, with the ambitious goal of simulating supernova remnants like Cas A. The current environment for core-collapse supernova research provides invigorating optimism that a robust explosion mechanism is within reach on graduate-student lifetimes.
NASA Astrophysics Data System (ADS)
Rahmani, Farzin; Jeon, Jungmin; Jiang, Shan; Nouranian, Sasan
2018-05-01
Molecular dynamics (MD) simulations were performed to investigate the role of core volume fraction and number of fusing nanoparticles (NPs) on the melting and solidification of Cu/Al and Ti/Al bimetallic core/shell NPs during a superfast heating and slow cooling process, roughly mimicking the conditions of selective laser melting (SLM). One recent trend in the SLM process is the rapid prototyping of nanoscopically heterogeneous alloys, wherein the precious core metal maintains its particulate nature in the final manufactured part. With this potential application in focus, the current work reveals the fundamental role of the interface in the two-stage melting of the core/shell alloy NPs. For a two-NP system, the melting zone gets broader as the core volume fraction increases. This effect is more pronounced for the Ti/Al system than the Cu/Al system because of a larger difference between the melting temperatures of the shell and core metals in the former than the latter. In a larger six-NP system (more nanoscopically heterogeneous), the melting and solidification temperatures of the shell Al roughly coincide, irrespective of the heating or cooling rate, implying that in the SLM process, the part manufacturing time can be reduced due to solidification taking place at higher temperatures. The nanostructure evolution during the cooling of six-NP systems is further investigated.
High performance in silico virtual drug screening on many-core processors.
McIntosh-Smith, Simon; Price, James; Sessions, Richard B; Ibarra, Amaurys A
2015-05-01
Drug screening is an important part of the drug development pipeline for the pharmaceutical industry. Traditional, lab-based methods are increasingly being augmented with computational methods, ranging from simple molecular similarity searches through more complex pharmacophore matching to more computationally intensive approaches, such as molecular docking. The latter simulates the binding of drug molecules to their targets, typically protein molecules. In this work, we describe BUDE, the Bristol University Docking Engine, which has been ported to the OpenCL industry standard parallel programming language in order to exploit the performance of modern many-core processors. Our highly optimized OpenCL implementation of BUDE sustains 1.43 TFLOP/s on a single Nvidia GTX 680 GPU, or 46% of peak performance. BUDE also exploits OpenCL to deliver effective performance portability across a broad spectrum of different computer architectures from different vendors, including GPUs from Nvidia and AMD, Intel's Xeon Phi and multi-core CPUs with SIMD instruction sets.
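The docking hot loop described above is embarrassingly parallel over candidate poses, which is what makes it such a good fit for many-core devices. Below is an illustrative sketch of that pattern using OpenMP and a toy electrostatic score; it is not BUDE's actual energy function or its OpenCL kernel.

```cpp
// Data-parallel pose scoring: each work-item scores one ligand pose against
// all protein atoms. Shown with OpenMP; in an OpenCL port the same loop body
// would run as a kernel, one pose per work-item. Toy electrostatics only.
#include <omp.h>
#include <cmath>
#include <vector>

struct Atom { float x, y, z, charge; };

float scorePose(const std::vector<Atom>& protein,
                const std::vector<Atom>& pose) {
    float e = 0.0f;
    for (const Atom& p : protein)
        for (const Atom& l : pose) {
            float dx = p.x - l.x, dy = p.y - l.y, dz = p.z - l.z;
            float r = std::sqrt(dx * dx + dy * dy + dz * dz) + 1e-6f;
            e += p.charge * l.charge / r;  // stand-in for a real force field
        }
    return e;
}

void scoreAll(const std::vector<Atom>& protein,
              const std::vector<std::vector<Atom>>& poses,
              std::vector<float>& energies) {
    energies.resize(poses.size());
    #pragma omp parallel for            // one pose per thread/work-item
    for (long i = 0; i < (long)poses.size(); ++i)
        energies[i] = scorePose(protein, poses[i]);
}
```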
Advanced Dynamically Adaptive Algorithms for Stochastic Simulations on Extreme Scales
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiu, Dongbin
2017-03-03
The focus of the project is the development of mathematical methods and high-performance computational tools for stochastic simulations, with a particular emphasis on computations at extreme scales. The core of the project revolves around the design of highly efficient and scalable numerical algorithms that can adaptively and accurately resolve, in high-dimensional spaces, stochastic problems with limited smoothness, even those containing discontinuities.
The X-IFU end-to-end simulations performed for the TES array optimization exercise
NASA Astrophysics Data System (ADS)
Peille, Philippe; Wilms, J.; Brand, T.; Cobo, B.; Ceballos, M. T.; Dauser, T.; Smith, S. J.; Barret, D.; den Herder, J. W.; Piro, L.; Barcons, X.; Pointecouteau, E.; Bandler, S.; den Hartog, R.; de Plaa, J.
2015-09-01
The focal plane assembly of the Athena X-ray Integral Field Unit (X-IFU) includes as its baseline an array of ~4000 single-size calorimeters based on Transition Edge Sensors (TES). Other sensor array configurations could, however, be considered, combining TES of different properties (e.g., size). In an attempt to improve the X-IFU performance in terms of field of view, count rate performance, and even spectral resolution, two alternative TES array configurations to the baseline have been simulated, each combining a small and a large pixel array. With the X-IFU end-to-end simulator, a sub-sample of the Athena core science goals, selected by the X-IFU science team as potentially driving the optimal TES array configuration, has been simulated so that the results can be scientifically assessed and compared. In this contribution, we describe the simulation set-up for the various array configurations and highlight some of the results of the simulated test cases.
Monte Carlo Analysis of the Battery-Type High Temperature Gas Cooled Reactor
NASA Astrophysics Data System (ADS)
Grodzki, Marcin; Darnowski, Piotr; Niewiński, Grzegorz
2017-12-01
The paper presents a neutronic analysis of a battery-type 20 MWth high-temperature gas-cooled reactor. The developed reactor model is based on publicly available data for an 'early design' variant of the U-battery. The investigated core is a battery-type small modular reactor: a graphite-moderated, uranium-fueled, prismatic, helium-cooled high-temperature gas-cooled reactor with a graphite reflector. Two alternative core designs were investigated: the first has a central reflector and 30×4 prismatic fuel blocks, and the second has no central reflector and 37×4 blocks. The SERPENT Monte Carlo reactor physics code, with ENDF and JEFF nuclear data libraries, was applied. Several static criticality calculations were performed and compared with available reference results. The analysis covered single-assembly models and full-core simulations for two geometry models: homogeneous and heterogeneous (explicit). A sensitivity analysis with respect to the reflector graphite density was performed. Acceptable agreement between the calculations and the reference design was obtained. All calculations were performed for the fresh core state.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rahnema, Farzad; Garimeela, Srinivas; Ougouag, Abderrafi
2013-11-29
This project will develop a 3D, advanced coarse mesh transport method (COMET-Hex) for steady-state and transient analyses in advanced very high-temperature reactors (VHTRs). The project will lead to a coupled neutronics and thermal-hydraulic (T/H) core simulation tool with fuel depletion capability. The computational tool will be developed in hexagonal geometry, based solely on transport theory without (spatial) homogenization in complicated 3D geometries. In addition to the hexagonal geometry extension, collaborators will concurrently develop three additional capabilities to increase the code's versatility as an advanced and robust core simulator for VHTRs. First, the project team will develop and implement a depletion method within the core simulator. Second, the team will develop an elementary (proof-of-concept) 1D time-dependent transport method for efficient transient analyses. The third capability will be a thermal-hydraulic method coupled to the neutronics transport module for VHTRs. Current advancements in reactor core design are pushing VHTRs toward greater core and fuel heterogeneity to pursue higher burn-ups, efficiently transmute used fuel, maximize energy production, and improve plant economics and safety. As a result, an accurate and efficient neutron transport capability, able to treat heterogeneous burnable poison effects, is highly desirable for predicting VHTR neutronics performance. This research project's primary objective is to advance the state of the art for reactor analysis.
Numerical evaluation of gas core length in free surface vortices
NASA Astrophysics Data System (ADS)
Cristofano, L.; Nobili, M.; Caruso, G.
2014-11-01
The formation and evolution of free surface vortices is an important topic for many hydraulic intakes, since strong whirlpools introduce swirl flow at the intake and can cause entrainment of floating matter and gas. Gas entrainment is a particular safety issue for sodium-cooled fast reactors, because the introduction of gas bubbles into the core causes dangerous reactivity fluctuations. In this paper, a numerical evaluation of the gas core length in free surface vortices is presented, following two different approaches. In the first, a prediction method developed by Sakai and co-workers is applied. This method is based on the Burgers vortex model and estimates the gas core length of a free surface vortex from two parameters calculated with single-phase CFD simulations: the circulation and the downward velocity gradient. The second approach consists of performing a two-phase CFD simulation of a free surface vortex in order to numerically reproduce the deformation of the gas-liquid interface. A mapped convergent mesh is used to reduce numerical error, and a VOF (Volume of Fluid) method is selected to track the gas-liquid interface. Two different turbulence models have been tested and analyzed. Experimental measurements of the gas core length of free surface vortices have been carried out using optical methods, and the numerical results have been compared with these measurements. The computational domain and the boundary conditions of the CFD simulations were set consistently with the experimental test conditions.
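For reference, the Burgers vortex model underlying the Sakai-type estimate has the standard closed form below; the circulation Γ and the axial (downward) velocity gradient a are exactly the two quantities extracted from the single-phase CFD solution, and ν is the kinematic viscosity. The surface-depression integral is a textbook radial-equilibrium result, quoted here as a sketch rather than the paper's exact formulation.

```latex
% Tangential velocity of a Burgers vortex:
u_\theta(r) = \frac{\Gamma}{2\pi r}\left[1 - \exp\!\left(-\frac{a\,r^{2}}{2\nu}\right)\right]
% Radial equilibrium, dp/dr = \rho u_\theta^2 / r, then gives the free-surface
% depression; its value on the axis (r = 0) estimates the gas core length:
h(r) = \frac{1}{g}\int_{r}^{\infty} \frac{u_\theta^{2}(r')}{r'}\,dr'
```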
NASA Astrophysics Data System (ADS)
Kuroda, Takami; Kotake, Kei; Hayama, Kazuhiro; Takiwaki, Tomoya
2017-12-01
We present results from general-relativistic (GR) three-dimensional (3D) core-collapse simulations with approximate neutrino transport for three nonrotating progenitors (11.2, 15, and 40 M⊙) using different nuclear equations of state (EOSs). We find that the combination of a progenitor's higher compactness at bounce and the use of a softer EOS leads to stronger activity of the standing accretion shock instability (SASI). We confirm previous predictions that the SASI produces characteristic time modulations in both the neutrino and gravitational-wave (GW) signals. By performing a correlation analysis of the SASI-modulated neutrino and GW signals, we find that the correlation becomes highest when we take into account the time-delay effect due to the advection of material from the neutrinosphere to the proto-neutron star core surface. Our results suggest that the correlation of the neutrino and GW signals, if detected, would provide a new signature of vigorous SASI activity in the supernova core, which can hardly be seen if neutrino-driven convection dominates over the SASI.
Three-dimensional investigations of the threading regime in a microfluidic flow-focusing channel
NASA Astrophysics Data System (ADS)
Gowda, Krishne; Brouzet, Christophe; Lefranc, Thibault; Soderberg, L. Daniel; Lundell, Fredrik
2017-11-01
We study the flow dynamics of the threading regime in a microfluidic flow-focusing channel through 3D numerical simulations and experiments. Making strong filaments from cellulose nano-fibrils (CNF) could potentially lead to new high-performance bio-based composites competing with conventional glass fibre composites. CNF filaments can be obtained through hydrodynamic alignment of dispersed CNF using the concept of flow-focusing, with the aligned structure locked by diffusion of ions resulting in a dispersion-gel transition. Flow-focusing typically refers to a microfluidic channel system where a core fluid is focused by two sheath fluids, thereby creating an extensional flow at the intersection. In this study, the threading regime corresponds to an extensional flow field in which the water sheath fluid stretches the dispersed CNF core fluid, leading to the formation of long threads. The experimental measurements are performed using optical coherence tomography (OCT), and the 3D numerical simulations with OpenFOAM. The prime focus is on the 3D characteristics of thread formation, such as the wetting length of the core fluid, the shape and aspect ratio of the thread, and the velocity field in the microfluidic channel.
Performance of the Cell processor for biomolecular simulations
NASA Astrophysics Data System (ADS)
De Fabritiis, G.
2007-06-01
The new Cell processor represents a turning point for compute-intensive applications. Here, I show that for molecular dynamics it is possible to reach an impressive sustained performance in excess of 30 Gflops, with a peak of 45 Gflops for the non-bonded force calculations, over one order of magnitude faster than a single core of a standard processor.
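The non-bonded force evaluation that dominates such MD workloads is, at its core, a pairwise loop like the sketch below: a scalar C++ illustration with a plain Lennard-Jones potential, not the author's Cell code. On the Cell, the same arithmetic is hand-vectorized across the SPEs, which is where the quoted Gflops come from.

```cpp
// Lennard-Jones non-bonded force loop with Newton's third law applied,
// U(r) = 4*eps*[(sigma/r)^12 - (sigma/r)^6], truncated at rcut.
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };

void ljForces(const std::vector<Vec3>& r, std::vector<Vec3>& f,
              double eps, double sigma, double rcut) {
    const double rc2 = rcut * rcut, s2 = sigma * sigma;
    const int n = (int)r.size();
    for (auto& fi : f) fi = {0, 0, 0};
    for (int i = 0; i < n; ++i)
        for (int j = i + 1; j < n; ++j) {
            double dx = r[i].x - r[j].x, dy = r[i].y - r[j].y,
                   dz = r[i].z - r[j].z;
            double r2 = dx * dx + dy * dy + dz * dz;
            if (r2 > rc2) continue;               // cutoff skips distant pairs
            double sr2 = s2 / r2, sr6 = sr2 * sr2 * sr2;
            // |F|/r = 24*eps*(2*(sigma/r)^12 - (sigma/r)^6) / r^2
            double fmag = 24.0 * eps * sr6 * (2.0 * sr6 - 1.0) / r2;
            f[i].x += fmag * dx; f[i].y += fmag * dy; f[i].z += fmag * dz;
            f[j].x -= fmag * dx; f[j].y -= fmag * dy; f[j].z -= fmag * dz;
        }
}
```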
NASA Technical Reports Server (NTRS)
Barad, Michael F.; Brehm, Christoph; Kiris, Cetin C.; Biswas, Rupak
2014-01-01
This paper presents one-of-a-kind MPI-parallel computational fluid dynamics simulations for the Stratospheric Observatory for Infrared Astronomy (SOFIA). SOFIA is an airborne, 2.5-meter infrared telescope mounted in an open cavity in the aft of a Boeing 747SP. These simulations focus on how the unsteady flow field inside and over the cavity interferes with the optical path and mounting of the telescope. A temporally fourth-order Runge-Kutta and spatially fifth-order WENO-5Z scheme was used to perform implicit large eddy simulations. An immersed boundary method provides automated gridding for complex geometries and natural coupling to a block-structured Cartesian adaptive mesh refinement (AMR) framework. Strong scaling studies using NASA's Pleiades supercomputer with up to 32,000 cores and 4 billion cells show excellent scaling. Dynamic load balancing based on execution time on individual AMR blocks addresses irregularities caused by the highly complex geometry. Limits to scaling beyond 32K cores are identified, and targeted code optimizations are discussed.
Scaling a Convection-Resolving RCM to Near-Global Scales
NASA Astrophysics Data System (ADS)
Leutwyler, D.; Fuhrer, O.; Chadha, T.; Kwasniewski, G.; Hoefler, T.; Lapillonne, X.; Lüthi, D.; Osuna, C.; Schar, C.; Schulthess, T. C.; Vogt, H.
2017-12-01
In recent years, the first decade-long kilometer-scale-resolution RCM simulations have been performed on continental-scale computational domains. However, the planet Earth is still an order of magnitude larger, and thus the computational implications of performing global climate simulations at this resolution are challenging. We explore the gap between the currently established RCM simulations and global simulations by scaling the GPU-accelerated version of the COSMO model to a near-global computational domain. To this end, the evolution of an idealized moist baroclinic wave has been simulated over the course of 10 days with a grid spacing of down to 930 m. The computational mesh employs 36'000 x 16'001 x 60 grid points and covers 98.4% of the planet's surface. The code shows perfect weak scaling up to 4'888 nodes of the Piz Daint supercomputer and yields 0.043 simulated years per day (SYPD), approximately one seventh of the 0.2-0.3 SYPD required to conduct AMIP-type simulations; at half the resolution (1.9 km), we observed 0.23 SYPD. Besides the formation of frontal precipitating systems containing embedded explicitly resolved convective motions, the simulations reveal a secondary instability that leads to cut-off warm-core cyclonic vortices in the cyclone's core once the grid spacing is refined to the kilometer scale. The explicit representation of embedded moist convection and of the previously unresolved instabilities exhibits physically different behavior in comparison to coarser-resolution simulations. The study demonstrates that global climate simulations at kilometer-scale resolution are imminent, and it serves as a baseline benchmark for global climate model applications and future exascale supercomputing systems.
NASA Astrophysics Data System (ADS)
Bhatia, Gurpreet Kaur; Sahijpal, Sandeep
2017-12-01
Numerical simulations are performed to understand the early thermal evolution and planetary-scale differentiation of icy bodies with radii in the range of 100-2500 km. These icy bodies include trans-Neptunian objects; minor icy planets (e.g., Ceres, Pluto); the icy satellites of Jupiter, Saturn, Uranus, and Neptune; and probably the icy-rocky cores of these planets. The decay energy of the radionuclides 26Al, 60Fe, 40K, 235U, 238U, and 232Th, along with impact-induced heating during accretion, was taken into account in thermally evolving these planetary bodies. The simulations were performed for a wide range of initial ice and rock (dust) mass fractions of the icy bodies, using three distinct accretion scenarios. The sinking of the rock mass fraction in primitive water oceans produced by substantial melting of ice could lead to planetary-scale differentiation, with the formation of a rocky core surrounded by a water ocean and an icy crust within the initial tens of millions of years of the solar system, provided the planetary bodies accreted prior to the substantial decay of 26Al. However, over the course of billions of years, the heat produced by 40K, 235U, 238U, and 232Th could have raised the temperature of the interiors of the icy bodies to the melting point of iron and silicates, thereby leading to the formation of an iron core. Our simulations indicate the presence of an iron core even at the center of icy bodies with radii ≥500 km for a range of ice mass fractions.
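A sketch of the standard radiogenic heat source used in thermal models of this kind (the paper's specific values and implementation are not reproduced here): each nuclide contributes an exponentially decaying specific heating rate, which enters the heat conduction equation as a volumetric source.

```latex
% Radiogenic heating per unit mass from nuclides i = {}^{26}\mathrm{Al},
% {}^{60}\mathrm{Fe}, {}^{40}\mathrm{K}, {}^{235}\mathrm{U}, {}^{238}\mathrm{U},
% {}^{232}\mathrm{Th}, with initial abundance X_i, specific decay heat H_i,
% and decay constant \lambda_i:
Q(t) = \sum_i X_i\, H_i\, e^{-\lambda_i t}
% This drives the thermal evolution through the conduction equation:
\rho c_p \frac{\partial T}{\partial t} = \nabla\!\cdot\!\left(k\,\nabla T\right) + \rho\, Q(t)
```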
Postcollapse Evolution of Globular Clusters
NASA Astrophysics Data System (ADS)
Makino, Junichiro
1996-11-01
A number of globular clusters appear to have undergone core collapse, in the sense that their predicted collapse times are much shorter than their current ages. Simulations with gas models and the Fokker-Planck approximation have shown that the central density of a globular cluster after collapse undergoes nonlinear oscillation with a large amplitude (gravothermal oscillation). However, the question of whether such an oscillation actually takes place in real N-body systems had remained unsolved, because an N-body simulation with sufficiently high resolution would have required computing resources of the order of several GFLOPS-yr. In the present paper, we report the results of such a simulation performed on a dedicated special-purpose computer, GRAPE-4. We have simulated the evolution of isolated point-mass systems with up to 32,768 particles; the largest number of particles reported previously is 10,000. We confirm that gravothermal oscillation takes place in an N-body system. The expansion phase shows all the signatures that are considered to be evidence of the gravothermal nature of the oscillation. At maximum expansion, the core radius is ~1% of the half-mass radius for the run with 32,768 particles. The maximum core size, r_c, depends on N as
NASA Astrophysics Data System (ADS)
Lin, Hui; Liu, Tianyu; Su, Lin; Bednarz, Bryan; Caracappa, Peter; Xu, X. George
2017-09-01
Monte Carlo (MC) simulation is well recognized as the most accurate method for radiation dose calculations. For radiotherapy applications, accurate modelling of the source term, i.e., the clinical linear accelerator, is critical to the simulation. The purpose of this paper is to perform source modelling and examine the accuracy and performance of the models on Intel Many Integrated Core coprocessors (aka Xeon Phi) and Nvidia GPUs using ARCHER, and to explore potential optimization methods. Phase-space-based source modelling has been implemented. Good agreement was found in a tomotherapy prostate patient case and a TrueBeam breast case. In terms of performance, the whole simulation for the prostate and breast plans took about 173 s and 73 s, respectively, to reach 1% statistical error.
NASA Astrophysics Data System (ADS)
Takeda, Takeshi; Maruyama, Yu; Watanabe, Tadashi; Nakamura, Hideo
Experiments simulating PWR intermediate-break loss-of-coolant accidents (IBLOCAs), with a 17% break at the hot leg or cold leg, were conducted in the OECD/NEA ROSA-2 Project using the Large Scale Test Facility (LSTF). In the hot leg IBLOCA test, core uncovery started simultaneously with the liquid level drop in the crossover leg downflow-side, before loop seal clearing (LSC) induced by steam condensation on accumulator coolant injected into the cold leg. Water remained on the upper core plate in the upper plenum due to counter-current flow limiting (CCFL) caused by significant upward steam flow from the core. In the cold leg IBLOCA test, core dryout took place due to a rapid liquid level drop in the core before LSC. Liquid accumulated in the upper plenum, the steam generator (SG) U-tube upflow-side, and the SG inlet plenum before the LSC due to CCFL by high-velocity vapor flow, causing an enhanced decrease in the core liquid level. RELAP5/MOD3.2.1.2 post-test analyses of the two LSTF experiments were performed employing the code's critical flow model with a discharge coefficient of 1.0. In the hot leg IBLOCA case, the cladding surface temperature of the simulated fuel rods was underpredicted due to overprediction of the core liquid level after core uncovery. In the cold leg IBLOCA case, the cladding surface temperature was also underpredicted, due to later core uncovery than in the experiment. This suggests that the code has remaining problems in properly predicting the primary coolant distribution.
Pressure of the hot gas in simulations of galaxy clusters
NASA Astrophysics Data System (ADS)
Planelles, S.; Fabjan, D.; Borgani, S.; Murante, G.; Rasia, E.; Biffi, V.; Truong, N.; Ragone-Figueroa, C.; Granato, G. L.; Dolag, K.; Pierpaoli, E.; Beck, A. M.; Steinborn, Lisa K.; Gaspari, M.
2017-06-01
We analyse the radial pressure profiles, the intracluster medium (ICM) clumping factor, and the Sunyaev-Zel'dovich (SZ) scaling relations of a sample of simulated galaxy clusters and groups identified in a set of hydrodynamical simulations based on an updated version of the TreePM-SPH GADGET-3 code. Three different sets of simulations are performed: the first assumes non-radiative physics; the others include, among other processes, active galactic nucleus (AGN) and/or stellar feedback. Our results are analysed as a function of redshift, ICM physics, cluster mass, and cluster cool-coreness or dynamical state. In general, the mean pressure profiles obtained for our sample of groups and clusters show good agreement with X-ray and SZ observations. Simulated cool-core (CC) and non-cool-core (NCC) clusters also show a good match with real data. We obtain in all cases a small (if any) redshift evolution of the pressure profiles of massive clusters, at least back to z = 1. We find that the clumpiness of gas density and pressure increases with the distance from the cluster centre and with dynamical activity. The inclusion of AGN feedback in our simulations generates values of the gas clumping (√C_ρ ≈ 1.2 at R_200) in good agreement with recent observational estimates. The simulated Y_SZ-M scaling relations are in good agreement with several observed samples, especially for massive clusters. As for the scatter of these relations, we obtain a clear dependence on the cluster dynamical state, whereas this distinction is not so evident when looking at the subsamples of CC and NCC clusters.
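For reference, the clumping factor quoted above is conventionally defined as the ratio below (a standard definition, not specific to this paper). Since X-ray emissivity scales as the square of the density, √C_ρ is the factor by which density inhomogeneities bias X-ray-derived gas densities upward, so √C_ρ ≈ 1.2 at R_200 corresponds to a bias of roughly 20%.

```latex
% Clumping factor of the gas density within a radial shell:
C_\rho \equiv \frac{\langle \rho^{2} \rangle}{\langle \rho \rangle^{2}} \;\ge\; 1
```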
ANNarchy: a code generation approach to neural simulations on parallel hardware
Vitay, Julien; Dinkelbach, Helge Ü.; Hamker, Fred H.
2015-01-01
Many modern neural simulators focus on the simulation of networks of spiking neurons on parallel hardware. Another important framework in computational neuroscience, rate-coded neural networks, is mostly difficult or impossible to implement using these simulators. We present here the ANNarchy (Artificial Neural Networks architect) neural simulator, which allows one to easily define and simulate rate-coded and spiking networks, as well as combinations of both. The Python interface has been designed to be close to the PyNN interface, while the definition of neuron and synapse models can be specified using an equation-oriented mathematical description similar to that of the Brian neural simulator. This information is used to generate C++ code that efficiently performs the simulation on the chosen parallel hardware (multi-core system or graphical processing unit). Several numerical methods are available to transform ordinary differential equations into efficient C++ code. We compare the parallel performance of the simulator to existing solutions. PMID:26283957
Thermal Characterization of a Simulated Fission Engine via Distributed Fiber Bragg Gratings
NASA Astrophysics Data System (ADS)
Duncan, Roger G.; Fielder, Robert S.; Seeley, Ryan J.; Kozikowski, Carrie L.; Raum, Matthew T.
2005-02-01
We report the use of distributed fiber Bragg gratings to monitor thermal conditions within a simulated nuclear reactor core located at the Early Flight Fission Test Facility of the NASA Marshall Space Flight Center. Distributed fiber-optic temperature measurements promise to add significant capability and advance the state of the art in high-temperature sensing. For the work reported herein, seven probes were constructed with ten sensors each, for a total of 70 sensor locations throughout the core. These discrete temperature sensors were monitored over a nine-hour period while the test article was heated to over 700 °C and cooled to ambient through two operational cycles. The available sensor density permits a significantly improved understanding of thermal effects within the simulated reactor. Fiber-optic sensor performance is shown to compare very favorably with co-located thermocouples where such co-location was feasible.
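The sensing principle behind such probes is the textbook Bragg relation below (standard relations, not parameters of the probes described above): each grating reflects at a wavelength set by the effective index and grating period, and the fractional wavelength shift is, to first order, linear in temperature, so each grating's shift reads out as a local temperature.

```latex
% Bragg reflection wavelength of a grating with effective index n_eff and
% period \Lambda:
\lambda_B = 2\, n_{\mathrm{eff}}\, \Lambda
% First-order temperature response, with \alpha the fiber's thermal expansion
% coefficient and \xi its thermo-optic coefficient:
\frac{\Delta\lambda_B}{\lambda_B} = \left(\alpha + \xi\right) \Delta T
```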
Physics-based multiscale coupling for full core nuclear reactor simulation
Gaston, Derek R.; Permann, Cody J.; Peterson, John W.; ...
2015-10-01
Numerical simulation of nuclear reactors is a key technology in the quest for improvements in the efficiency, safety, and reliability of both existing and future reactor designs. Historically, simulation of an entire reactor was accomplished by linking together multiple existing codes that each simulated a subset of the relevant multiphysics phenomena. Recent advances in the MOOSE (Multiphysics Object Oriented Simulation Environment) framework have enabled a new approach: multiple domain-specific applications, all built on the same software framework, are efficiently linked to create a cohesive application. This is accomplished with a flexible coupling capability that allows a variety of different data exchanges to occur simultaneously on high-performance parallel computational hardware. Examples based on the KAIST-3A benchmark core, as well as a simplified Westinghouse AP-1000 configuration, demonstrate the power of this new framework for tackling, in a coupled multiscale manner, crucial reactor phenomena such as CRUD-induced power shift and fuel shuffle. © 2014 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY-NC-SA license.
A Low-Complexity and High-Performance 2D Look-Up Table for LDPC Hardware Implementation
NASA Astrophysics Data System (ADS)
Chen, Jung-Chieh; Yang, Po-Hui; Lain, Jenn-Kaie; Chung, Tzu-Wen
In this paper, we propose a low-complexity, high-efficiency two-dimensional look-up table (2D LUT) for carrying out the sum-product algorithm in the decoding of low-density parity-check (LDPC) codes. Instead of employing adders for the core operation when updating check node messages, in the proposed scheme, the main term and correction factor of the core operation are successfully merged into a compact 2D LUT. Simulation results indicate that the proposed 2D LUT not only attains close-to-optimal bit error rate performance but also enjoys a low complexity advantage that is suitable for hardware implementation.
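The core check-node operation the 2D LUT replaces is the pairwise "box-plus" of two log-likelihood ratios, whose exact form is min(|a|,|b|) plus two logarithmic correction terms. The sketch below illustrates the idea of folding the main term and corrections into one quantized 2D table; the table size and quantization step are made-up values, and the scheme shown is a generic illustration rather than the authors' exact design.

```cpp
// Precompute the check-node core operation on LLR magnitudes a, b >= 0:
//   f(a,b) = min(a,b) + log(1+e^-(a+b)) - log(1+e^-|a-b|)
// into one quantized 2D table, so decoding needs only a lookup.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

const int    N    = 64;     // table entries per axis (hypothetical)
const double STEP = 0.125;  // LLR quantization step (hypothetical)

std::vector<float> lut;     // N x N, flattened row-major

void buildLut() {
    lut.resize(N * N);
    for (int i = 0; i < N; ++i)
        for (int j = 0; j < N; ++j) {
            double a = i * STEP, b = j * STEP;
            lut[i * N + j] = (float)(std::min(a, b)
                + std::log1p(std::exp(-(a + b)))
                - std::log1p(std::exp(-std::fabs(a - b))));
        }
}

// Check-node "box-plus" of two signed LLRs via the table: the magnitude
// comes from the LUT, the sign from the product of the input signs.
float boxPlus(float a, float b) {
    int i = std::min(N - 1, (int)(std::fabs(a) / STEP));
    int j = std::min(N - 1, (int)(std::fabs(b) / STEP));
    float mag = lut[i * N + j];
    return (a < 0) == (b < 0) ? mag : -mag;
}

int main() {
    buildLut();
    std::printf("boxPlus(2.0, -1.5) = %f\n", boxPlus(2.0f, -1.5f));
}
```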
Shan, Yong; Zhang, Jingzhou; Wang, Yameng
2014-11-01
To improve the performance of the afterburner for a turbofan engine, an innovative type of mixer, namely the chevron mixer, was considered to enhance mixing between the core flow and the bypass flow. Computational fluid dynamics (CFD) simulations investigated the aerodynamic performance and combustion characteristics of the chevron mixer inside a typical afterburner. Three types of mixer, namely CC (chevrons tilted into the core flow), CB (chevrons tilted into the bypass flow), and CA (chevrons tilted into the core flow and bypass flow alternately), were studied with respect to the aerodynamic performance of the mixing process. The chevron arrangement has a significant effect on the mixing characteristics, and the CA mode appears advantageous for generating stronger streamwise vortices with lower aerodynamic loss. Further investigations of the combustion characteristics were performed for the CA mode. The calculation results reveal that the local temperature distribution at the leading-edge section of the flame holder is improved under the action of streamwise vortices shed from the chevron mixers. Consequently, the combustion efficiency increased by 3.5% compared with a confluent mixer under the same fuel supply scheme.
NASA Astrophysics Data System (ADS)
Jackson, S. J.; Krevor, S. C.; Agada, S.
2017-12-01
A number of studies have demonstrated the prevalent impact that small-scale rock heterogeneity can have on larger-scale flow in multiphase flow systems, including petroleum production and CO2 sequestration. Larger-scale modeling has shown that this heterogeneity has a significant impact on fluid flow and is possibly a major source of inaccuracy in reservoir simulation. Yet no core analysis protocol has been developed that faithfully represents the impact of these heterogeneities on the flow functions used in modeling. Relative permeability is derived from core floods performed at conditions of high flow potential, in which the impact of capillary heterogeneity is suppressed. A more accurate representation would be obtained if measurements were made at flow conditions where the impact of capillary heterogeneity on flow is scaled to be representative of the reservoir system. This, however, is generally impractical due to laboratory constraints and the role of the orientation of the rock heterogeneity. We demonstrate a workflow of combined observations and simulations in which the impact of capillary heterogeneity is faithfully represented in the derivation of upscaled flow properties. Laboratory measurements that are a variation of conventional protocols are used to parameterize an accurate digital rock model for simulation. The relative permeability at the range of capillary numbers relevant to flow in the reservoir is derived primarily from numerical simulations of core floods that include capillary pressure heterogeneity. This allows flexibility in the orientation of the heterogeneity and in the range of flow rates considered. We demonstrate the approach, in which digital rock models have been developed alongside core flood observations, for three applications: (1) a Bentheimer sandstone with a simple axial heterogeneity, to demonstrate the validity and limitations of the approach; (2) a set of reservoir rocks from the Captain sandstone in the UK North Sea targeted for CO2 storage, for which the use of capillary pressure hysteresis is necessary; and (3) secondary CO2-EOR production of residual oil from a Berea sandstone with layered heterogeneities. In all cases, the incorporation of heterogeneity is shown to be key to deriving flow properties representative of the reservoir system.
NASA Astrophysics Data System (ADS)
Hickmott, Curtis W.
Cellular core tooling is a new technology capable of manufacturing complex, integrated, monolithic composite structures. This novel tooling method utilizes thermoplastic cellular cores as inner tooling. The semi-rigid nature of the cellular cores makes them convenient for lay-up, and under autoclave temperature and pressure they soften and expand, providing uniform compaction on all surfaces, including internal features such as ribs and spar tubes. This process has the capability of producing fully optimized aerospace structures by reducing or eliminating assembly with fasteners or bonded joints. The technology is studied here with the aim of evaluating its capabilities, advantages, and limitations in developing high-quality structures. The complex nature of these parts has led to the development of a model using the Finite Element Analysis (FEA) software Abaqus and the plug-in COMPRO Common Component Architecture (CCA) provided by Convergent Manufacturing Technologies. This model utilizes a "virtual autoclave" technique to simulate temperature profiles, resin flow paths, and ultimately deformation from residual stress. A model has been developed to simulate the temperature profile during the curing of composite parts made with the cellular core technology. While modeling of composites has been performed in the past, this project takes that existing knowledge and applies it to a new manufacturing method capable of building more complex parts, developing a model designed specifically for building large, complex components with a high degree of accuracy. The model development has been carried out in conjunction with experimental validation. A double box beam structure was chosen for analysis to determine the effects of the technology on internal ribs and joints. Double box beams were manufactured and sectioned into T-joints for characterization. Mechanical testing of the T-joints was performed using the T-joint pull-off test, and the results were compared to those for traditional tooling methods. Components made with the cellular core tooling method showed improved strength at the joints. It is expected that this knowledge will help optimize the processing of complex, integrated structures and benefit aerospace applications where lighter, structurally efficient components would be advantageous.
High-Fidelity Simulations of Electromagnetic Propagation and RF Communication Systems
2017-05-01
In addition to high-fidelity RF propagation modeling, lower-fidelity models, which are less computationally burdensome, are available via a C++ API... expensive to perform, requiring roughly one hour of computer time with 36 available cores and ray tracing performed by a single high-end GPU... (ERDC TR-17-2, Military Engineering Applied Research: High-Fidelity Simulations of Electromagnetic Propagation and RF Communication)
Multi-threaded ATLAS simulation on Intel Knights Landing processors
NASA Astrophysics Data System (ADS)
Farrell, Steven; Calafiura, Paolo; Leggett, Charles; Tsulaia, Vakhtang; Dotti, Andrea; ATLAS Collaboration
2017-10-01
The Knights Landing (KNL) release of the Intel Many Integrated Core (MIC) Xeon Phi line of processors is a potential game changer for HEP computing. With 72 cores and deep vector registers, the KNL cards promise significant performance benefits for highly-parallel, compute-heavy applications. Cori, the newest supercomputer at the National Energy Research Scientific Computing Center (NERSC), was delivered to its users in two phases, the first coming online at the end of 2015 and the second at the end of 2016. Cori Phase 2 is based on the KNL architecture and contains over 9000 compute nodes with 96 GB DDR4 memory. ATLAS simulation with the multithreaded Athena Framework (AthenaMT) is a good potential use-case for the KNL architecture and supercomputers like Cori. ATLAS simulation jobs have a high ratio of CPU computation to disk I/O and have been shown to scale well in multi-threading and across many nodes. In this paper we give an overview of the ATLAS simulation application, with details on its multi-threaded design. We then present a performance analysis of the application on KNL devices and compare it to a traditional x86 platform, to demonstrate the capabilities of the architecture and evaluate the benefits of utilizing KNL platforms like Cori for ATLAS production.
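The scaling claim above can be made concrete with Amdahl's law: on a 72-core KNL, even a small serial fraction caps the achievable thread speedup. A short illustrative sketch (the serial fractions are assumed, not measured ATLAS numbers):

```python
def amdahl_speedup(n_threads: int, serial_fraction: float) -> float:
    """Ideal speedup for a workload with a fixed serial fraction."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_threads)

# Illustrative only: a 2% serial fraction already limits a 72-core
# KNL to roughly 30x, far below the core count.
for f in (0.01, 0.02, 0.05):
    print(f"serial={f:.0%}: speedup on 72 cores = {amdahl_speedup(72, f):.1f}x")
```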
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gehin, Jess C; Godfrey, Andrew T; Evans, Thomas M
The Consortium for Advanced Simulation of Light Water Reactors (CASL) is developing a collection of methods and software products known as VERA, the Virtual Environment for Reactor Applications, including a core simulation capability called VERA-CS. A key milestone for this endeavor is to validate VERA against measurements from operating nuclear power reactors. The first step in validation against plant data is to determine the ability of VERA to accurately simulate the initial startup physics tests for Watts Bar Nuclear Power Station, Unit 1 (WBN1) cycle 1. VERA-CS calculations were performed with the Insilico code developed at ORNL, using cross section processing from the SCALE system and the transport capabilities of the Denovo transport code with the SPN method. The calculations were performed with ENDF/B-VII.0 cross sections in 252 groups (collapsed to 23 groups for the 3D transport solution). The key results of the comparison of calculations with measurements include initial criticality, critical control rod configurations, control rod worth, differential boron worth, and the isothermal temperature reactivity coefficient (ITC). The VERA results for these parameters show good agreement with measurements, with the exception of the ITC, which requires additional investigation. Results are also compared to those obtained with Monte Carlo methods and a current industry core simulator.
NASA Astrophysics Data System (ADS)
Matsui, H.; Buffett, B. A.
2017-12-01
The flow in the Earth's outer core is expected to span a vast range of length scales, from the geometry of the outer core down to the thickness of the boundary layers. Because of the limited spatial resolution of numerical simulations, sub-grid scale (SGS) modeling is required to represent the effects of the unresolved fields on the large-scale fields. We model the effects of sub-grid scale flow and magnetic field using a dynamic scale-similarity model. Four terms are introduced, for the momentum flux, heat flux, Lorentz force, and magnetic induction. The model was previously used in convection-driven dynamos in a rotating plane layer and in a spherical shell using finite element methods. In the present study, we perform large eddy simulations (LES) using the dynamic scale-similarity model. The scale-similarity model is implemented in Calypso, a numerical dynamo model based on a spherical harmonics expansion. To obtain the SGS terms, spatial filtering in the horizontal directions is done by taking the convolution of a Gaussian filter expressed in terms of a spherical harmonic expansion, following Jekeli (1981). A Gaussian filter is also applied in the radial direction. To verify the present model, we perform a fully resolved direct numerical simulation (DNS) with a spherical harmonics truncation of L = 255 as a reference, and we perform unresolved DNS and LES with the SGS model at coarser resolutions (L = 127, 84, and 63) using the same control parameters as the resolved DNS. We will discuss the verification results by comparison among these simulations, and the role of the small-scale fields in the large-scale fields through the SGS terms in the LES.
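For orientation, applying a Gaussian filter to a spherical-harmonic representation reduces to damping each coefficient by a degree-dependent weight. A minimal sketch using the smooth approximation w_l = exp(-l(l+1)/(l_f(l_f+1))) rather than the exact Jekeli (1981) recursion (the weight form and filter degree here are assumptions for illustration):

```python
import numpy as np

def gaussian_filter_sph(coeffs, lmax, l_filter):
    """Damp spherical-harmonic coefficients coeffs[(l, m)] by a
    degree-dependent Gaussian-like weight; larger l (smaller scales)
    are attenuated more strongly."""
    filtered = {}
    for l in range(lmax + 1):
        w = np.exp(-l * (l + 1) / (l_filter * (l_filter + 1.0)))
        for m in range(l + 1):
            filtered[(l, m)] = w * coeffs[(l, m)]
    return filtered

# Toy usage: random coefficients up to L = 63, filtered at l_f = 21.
rng = np.random.default_rng(0)
c = {(l, m): rng.normal() for l in range(64) for m in range(l + 1)}
c_bar = gaussian_filter_sph(c, lmax=63, l_filter=21)
```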
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mandelli, Diego; Prescott, Steven R; Smith, Curtis L
2011-07-01
In the Risk Informed Safety Margin Characterization (RISMC) approach we want to understand not just the frequency of an event like core damage, but how close we are (or are not) to key safety-related events and how we might increase our safety margins. The RISMC Pathway uses the probabilistic margin approach to quantify impacts to reliability and safety by coupling probabilistic (via stochastic simulation) and mechanistic (via physics models) approaches. This coupling takes place through the interchange of physical parameters and operational or accident scenarios. In this paper we apply the RISMC approach to evaluate the impact of a power uprate on a pressurized water reactor (PWR) for a tsunami-induced flooding test case. This analysis is performed using the RISMC toolkit: the RELAP-7 and RAVEN codes. RELAP-7 is the new generation of system analysis codes, responsible for simulating the thermal-hydraulic dynamics of PWR and boiling water reactor systems. RAVEN has two capabilities: to act as a controller of the RELAP-7 simulation (e.g., system activation) and to perform statistical analyses (e.g., run multiple RELAP-7 simulations where the sequencing/timing of events is varied according to a set of stochastic distributions). By using the RISMC toolkit, we can evaluate how the power uprate affects the system recovery measures needed to avoid core damage after the PWR loses all available AC power to tsunami-induced flooding. The simulation of the actual flooding is performed using a smoothed particle hydrodynamics code, NEUTRINO.
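A toy stand-in for the margin calculation described above (not the RELAP-7/RAVEN toolkit): sample stochastic event timings and estimate how the core-damage probability shifts when an uprate shortens the thermal margin. All distributions and margin values below are invented for illustration:

```python
import random

def core_damage_probability(margin_hours: float, n: int = 100_000) -> float:
    """Monte Carlo estimate: probability that AC power recovery
    (illustrative lognormal, median ~6 h) exceeds the thermal margin."""
    random.seed(42)   # same draws for both cases, for a fair comparison
    hits = 0
    for _ in range(n):
        t_recover = random.lognormvariate(1.8, 0.6)   # hours; assumed
        hits += t_recover > margin_hours
    return hits / n

# A power uprate shortens the time-to-core-damage margin; compare a
# nominal 10 h margin against an uprated 8 h margin (assumed values).
print("nominal:", core_damage_probability(10.0))
print("uprated:", core_damage_probability(8.0))
```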
Shift Verification and Validation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pandya, Tara M.; Evans, Thomas M.; Davidson, Gregory G
2016-09-07
This documentation outlines the verification and validation of Shift for the Consortium for Advanced Simulation of Light Water Reactors (CASL). Five main types of problems were used for validation: small criticality benchmark problems; full-core reactor benchmarks for light water reactors; fixed-source coupled neutron-photon dosimetry benchmarks; depletion/burnup benchmarks; and full-core reactor performance benchmarks. We compared Shift results to measured data and to results from other Monte Carlo radiation transport codes, and found very good agreement across a variety of comparison measures, including prediction of the critical eigenvalue, radial and axial pin power distributions, rod worth, leakage spectra, and nuclide inventories over a burn cycle. Based on this validation, we are confident in Shift's ability to provide reference results for CASL benchmarking.
Benchmark of Atucha-2 PHWR RELAP5-3D control rod model by Monte Carlo MCNP5 core calculation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pecchia, M.; D'Auria, F.; Mazzantini, O.
2012-07-01
Atucha-2 is a Siemens-designed PHWR reactor under construction in the Republic of Argentina. Its geometrical complexity and peculiarities require the adoption of advanced Monte Carlo codes to perform realistic neutronic simulations. Core models of the Atucha-2 PHWR were therefore developed using MCNP5. In this work a methodology was set up to collect the flux in the hexagonal mesh by which the Atucha-2 core is represented. The scope of this activity is to evaluate the effect of an obliquely inserted control rod on the neutron flux, in order to validate the RELAP5-3D{sup C}/NESTLE three-dimensional neutron kinetic coupled thermal-hydraulic model applied by GRNSPG/UNIPI for performing selected transients of Chapter 15 of the Atucha-2 FSAR.
Supercomputer simulations of structure formation in the Universe
NASA Astrophysics Data System (ADS)
Ishiyama, Tomoaki
2017-06-01
We describe the implementation and performance results of our massively parallel MPI/OpenMP hybrid TreePM code for large-scale cosmological N-body simulations. For domain decomposition, a recursive multi-section algorithm is used, and the sizes of the domains are automatically adjusted so that the total calculation time is the same for all processes. We developed a highly tuned gravity kernel for short-range forces and a novel communication algorithm for long-range forces. For a benchmark simulation with two trillion particles, the average performance on the full system of the K computer (82,944 nodes, 663,552 cores in total) is 5.8 Pflops, which corresponds to 55% of the peak speed.
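A simplified, 1-D illustration of the recursive multi-section idea: split a cost array into contiguous domains of near-equal total cost so that every process finishes at the same time. The real code partitions in multiple dimensions; this sketch and its cost model are assumptions:

```python
def multisection(costs, n_domains):
    """Recursively cut a list of per-cell costs into contiguous domains
    of near-equal total cost (assumes len(costs) >= n_domains)."""
    if n_domains == 1:
        return [costs]
    half = n_domains // 2
    target = sum(costs) * half / n_domains
    acc, cut = 0.0, 1
    for i, c in enumerate(costs):
        acc += c
        if acc >= target:
            cut = i + 1
            break
    # Keep both halves large enough to host their share of domains.
    cut = min(max(cut, half), len(costs) - (n_domains - half))
    return (multisection(costs[:cut], half) +
            multisection(costs[cut:], n_domains - half))

# Cells near a particle cluster (higher cost) get spread over more domains.
costs = [1.0] * 600 + [4.0] * 200 + [1.0] * 200
domains = multisection(costs, 8)
print([round(sum(d), 1) for d in domains])   # near-equal totals
```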
Earth observing system instrument pointing control modeling for polar orbiting platforms
NASA Technical Reports Server (NTRS)
Briggs, H. C.; Kia, T.; Mccabe, S. A.; Bell, C. E.
1987-01-01
An approach to instrument pointing control performance assessment for large multi-instrument platforms is described. First, instrument pointing requirements and reference platform control systems for the Eos Polar Platforms are reviewed. Performance modeling tools are then described, including NASTRAN models of two large platforms, a modal selection procedure utilizing a balanced realization method, and reduced-order platform models with core and instrument pointing control loops added. Time-history simulations of instrument pointing and stability performance in response to commanded slewing of adjacent instruments demonstrate the limits of tolerable slew activity. Simplified models of rigid body responses are also developed for comparison. Instrument pointing control methods required, in addition to the core platform control system, to meet instrument pointing requirements are considered.
Method for depleting BWRs using optimal control rod patterns
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taner, M.S.; Levine, S.H.; Hsiao, M.Y.
1991-01-01
Control rod (CR) programming is an essential core management activity for boiling water reactors (BWRs). After establishing a core reload design for a BWR, CR programming is performed to develop a sequence of exposure-dependent CR patterns that assure the safe and effective depletion of the core through a reactor cycle. A time-variant target power distribution approach has been assumed in this study. The authors have developed OCTOPUS to implement a new two-step method for designing semioptimal CR programs for BWRs. The optimization procedure of OCTOPUS is based on the method of approximation programming and uses the SIMULATE-E code for nucleonics calculations.
Applications of Ferro-Nanofluid on a Micro-Transformer
Tsai, Tsung-Han; Kuo, Long-Sheng; Chen, Ping-Hei; Lee, Da-sheng; Yang, Chin-Ting
2010-01-01
An on-chip transformer with a ferrofluid magnetic core has been developed and tested. The transformer consists of a solenoid-type coil and a magnetic core of ferrofluid, the former fabricated by MEMS technology and the latter by a chemical co-precipitation method. The performance of the MEMS transformer with a ferrofluid magnetic core was measured and simulated at frequencies ranging from 100 kHz to 100 MHz. Experimental results reveal that the presence of the ferrofluid increases the inductance of the coils and the coupling coefficient of the transformer; however, it also increases the resistance owing to the lag between the external magnetic field and the magnetization of the material. PMID:22163647
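The reported rise in coupling follows directly from the definition of the coupling coefficient, k = M/sqrt(L1*L2). A small sketch with invented inductance values (the abstract reports trends, not these numbers):

```python
import math

def coupling_coefficient(L1: float, L2: float, M: float) -> float:
    """k = M / sqrt(L1*L2) for a transformer with mutual inductance M."""
    return M / math.sqrt(L1 * L2)

# Hypothetical values: the ferrofluid's permeability raises both the
# self-inductances and, proportionally more, the mutual inductance.
print("air core:        k =", coupling_coefficient(2.0e-7, 2.0e-7, 1.0e-7))
print("ferrofluid core: k =", coupling_coefficient(3.2e-7, 3.2e-7, 2.2e-7))
```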
NASA Astrophysics Data System (ADS)
Krása, Antonín; Kochetkov, Anatoly; Baeten, Peter; Vittiglio, Guido; Wagemans, Jan; Bécares, Vicente
2017-09-01
VENUS-F is a fast, zero-power reactor with 30 wt.% metallic uranium fuel and solid lead as a coolant simulator. It serves as a mock-up of the MYRRHA reactor core. This paper describes integral experiments performed in two critical VENUS-F core configurations (with and without a graphite reflector). Discrepancies between the experiments and Monte Carlo calculations (MCNP5) of keff, the fission rate spatial distribution, and reactivity effects (lead void and fuel Doppler), depending on the nuclear data library used (JENDL-4.0, ENDF/B-VII.1, JEFF-3.1.2, 3.2, 3.3T2), are presented.
Behavior-aware cache hierarchy optimization for low-power multi-core embedded systems
NASA Astrophysics Data System (ADS)
Zhao, Huatao; Luo, Xiao; Zhu, Chen; Watanabe, Takahiro; Zhu, Tianbo
2017-07-01
In modern embedded systems, the increasing number of cores requires efficient cache hierarchies to ensure data throughput, but such cache hierarchies are restricted by their bloated size and by interference accesses, which lead to both performance degradation and wasted energy. In this paper, we propose a behavior-aware cache hierarchy (BACH) which optimally allocates multi-level cache resources to many cores and greatly improves the efficiency of the cache hierarchy, resulting in low energy consumption. BACH takes full advantage of explored application behaviors and runtime cache resource demands as the basis for cache allocation, so that the cache hierarchy can be optimally configured to meet the runtime demand. BACH was implemented on the GEM5 simulator. The experimental results show that the energy consumption of a three-level cache hierarchy can be reduced by between 5.29% and 27.94% compared with other key approaches, while the performance of the multi-core system even improves slightly once hardware overhead is accounted for.
NASA Astrophysics Data System (ADS)
Sima, Wenxia; Zou, Mi; Yang, Ming; Yang, Qing; Peng, Daixiao
2018-05-01
Amorphous alloy is increasingly widely used in the iron cores of power transformers due to its excellent low-loss performance. However, its potential harm to the power system during electromagnetic transients of the transformer has not been fully studied. This study develops a simulation model to analyze the effect of transformer iron core materials on ferroresonance. The model is based on the transformer π equivalent circuit. A flux linkage-current (ψ-i) Jiles-Atherton reactor model is developed in the Electromagnetic Transients Program-Alternative Transients Program (EMTP-ATP) and used to represent the magnetizing branches of the transformer model. Two ferroresonance cases are studied to compare the performance of grain-oriented Si-steel and amorphous alloy cores. The ferroresonance overvoltage and overcurrent are discussed under different system parameters. Results show that the amorphous alloy transformer generates higher voltages and currents than the grain-oriented Si-steel transformer, significantly compromising power system safety.
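For orientation, the Jiles-Atherton model referenced above describes hysteresis through an anhysteretic (Langevin) curve plus an irreversible magnetization ODE. A minimal sweep of the simplified irreversible form, with illustrative parameters (not fitted to Si-steel or to an amorphous alloy, and omitting the reversible component):

```python
import numpy as np

# Jiles-Atherton parameters: all values are illustrative assumptions.
Ms, a, alpha, k = 1.6e6, 1.1e3, 1.0e-4, 400.0

def m_anhysteretic(He):
    """Langevin anhysteretic magnetization, with small-argument limit."""
    x = He / a
    return Ms * (x / 3.0) if abs(x) < 1e-6 else Ms * (1.0 / np.tanh(x) - 1.0 / x)

def ja_loop(H_wave):
    """Explicit-Euler sweep of the simplified irreversible J-A ODE
    dM/dH = (Man - M) / (k*delta - alpha*(Man - M))."""
    M, out = 0.0, []
    for H0, H1 in zip(H_wave[:-1], H_wave[1:]):
        dH = H1 - H0
        delta = 1.0 if dH >= 0.0 else -1.0
        Man = m_anhysteretic(H0 + alpha * M)
        if delta * (Man - M) <= 0.0:
            slope = 0.0            # standard clamp against unphysical slopes
        else:
            slope = (Man - M) / (k * delta - alpha * (Man - M))
        M += dH * slope
        out.append(M)
    return np.array(out)

H = 2000.0 * np.sin(np.linspace(0.0, 4.0 * np.pi, 4000))
M = ja_loop(H)   # (H[1:], M) traces a hysteresis loop
```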
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mays, Brian; Jackson, R. Brian
2017-03-08
The project, Toward a Longer Life Core: Thermal Hydraulic CFD Simulations and Experimental Investigation of Deformed Fuel Assemblies, DOE project code DE-NE0008321, was a verification and validation project for flow and heat transfer through wire-wrapped simulated liquid metal fuel assemblies that included both experiments and computational fluid dynamics simulations of those experiments. This project was a two-year collaboration between AREVA, TerraPower, Argonne National Laboratory and Texas A&M University. Experiments were performed by AREVA and Texas A&M University. Numerical simulations of these experiments were performed by TerraPower and Argonne National Laboratory. Project management was performed by AREVA Federal Services. This first-of-a-kind project resulted in the production of both local point temperature measurements and local flow mixing experimental data, paired with numerical simulation benchmarking of the experiments. The project experiments included the largest wire-wrapped pin assembly Matched Index of Refraction (MIR) experiment in the world, the first known wire-wrapped assembly experiment with deformed duct geometries, and the largest numerical simulations ever produced for wire-wrapped bundles.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lell, R. M.; Morman, J. A.; Schaefer, R.W.
ZPR-6 Assembly 7 (ZPR-6/7) encompasses a series of experiments performed at the ZPR-6 facility at Argonne National Laboratory in 1970 and 1971 as part of the Demonstration Reactor Benchmark Program (Reference 1). Assembly 7 simulated a large sodium-cooled LMFBR with mixed oxide fuel, depleted uranium radial and axial blankets, and a core H/D near unity. ZPR-6/7 was designed to test fast reactor physics data and methods, so configurations in the Assembly 7 program were as simple as possible in terms of geometry and composition. ZPR-6/7 had a very uniform core assembled from small plates of depleted uranium, sodium, iron oxide, U{sub 3}O{sub 8} and Pu-U-Mo alloy loaded into stainless steel drawers. The steel drawers were placed in square stainless steel tubes in the two halves of a split table machine. ZPR-6/7 had a simple, symmetric core unit cell whose neutronic characteristics were dominated by plutonium and {sup 238}U. The core was surrounded by thick radial and axial regions of depleted uranium to simulate radial and axial blankets and to isolate the core from the surrounding room. The ZPR-6/7 program encompassed 139 separate core loadings, which include the initial approach to critical and all subsequent core loading changes required to perform specific experiments and measurements. In this context a loading refers to a particular configuration of fueled drawers, radial blanket drawers and experimental equipment (if present) in the matrix of steel tubes. Two principal core configurations were established. The uniform core (Loadings 1-84) had a relatively uniform core composition. The high {sup 240}Pu core (Loadings 85-139) was a variant on the uniform core. The plutonium in the Pu-U-Mo fuel plates in the uniform core contains 11% {sup 240}Pu. In the high {sup 240}Pu core, all Pu-U-Mo plates in the inner core region (central 61 matrix locations per half of the split table machine) were replaced by Pu-U-Mo plates containing 27% {sup 240}Pu in the plutonium component, to construct a central core zone with a composition closer to that of an LMFBR core with high burnup. The high {sup 240}Pu configuration was constructed for two reasons. First, the composition of the high {sup 240}Pu zone more closely matched the composition of LMFBR cores anticipated in design work in 1970. Second, comparison of measurements in the ZPR-6/7 uniform core with corresponding measurements in the high {sup 240}Pu zone provided an assessment of some of the effects of long-term {sup 240}Pu buildup in LMFBR cores. The uniform core version of ZPR-6/7 is evaluated in ZPR-LMFR-EXP-001; this document only addresses measurements in the high {sup 240}Pu core version of ZPR-6/7. Many types of measurements were performed as part of the ZPR-6/7 program. Measurements of criticality, sodium void worth, control rod worth and reaction rate distributions in the high {sup 240}Pu core configuration are evaluated here. For each category of measurements, the uncertainties are evaluated, and benchmark model data are provided.
2009-09-01
suffer the power and complexity requirements of a public key system. In [18], a simulation of the SHA-1 algorithm is performed on a Xilinx FPGA... 256 bits. Thus, the construction of a hash table would need 2^512 independent comparisons. It is known that hash collisions of the SHA-1 algorithm... SHA-1 algorithm for small-core FPGA design. Small-core FPGA design is the process by which a circuit is adapted to use the minimal amount of logic
Numerical simulation of the geodynamo reaches Earth's core dynamical regime
NASA Astrophysics Data System (ADS)
Aubert, J.; Gastine, T.; Fournier, A.
2016-12-01
Numerical simulations of the geodynamo have been successful at reproducing a number of static (field morphology) and kinematic (secular variation patterns, core surface flows and westward drift) features of Earth's magnetic field, making them a tool of choice for the analysis and retrieval of geophysical information on Earth's core. However, classical numerical models have been run in a parameter regime far from that of the real system, prompting the question of whether we do get "the right answers for the wrong reasons", i.e. whether the agreement between models and nature simply occurs by chance and without physical relevance in the dynamics. In this presentation, we show that classical models succeed in describing the geodynamo because their large-scale spatial structure is essentially invariant as one progresses along a well-chosen path in parameter space to Earth's core conditions. This path is constrained by the need to enforce the relevant force balance (MAC or Magneto-Archimedes-Coriolis) and preserve the ratio of the convective overturn and magnetic diffusion times. Numerical simulations performed along this path are shown to be spatially invariant at scales larger than that where the magnetic energy is ohmically dissipated. This property enables the definition of large-eddy simulations that show good agreement with direct numerical simulations in the range where both are feasible, and that can be computed at unprecedented values of the control parameters, such as an Ekman number E = 10^-8. Combining direct and large-eddy simulations, large-scale invariance is observed over half the logarithmic distance in parameter space between classical models and Earth. The conditions reached at this mid-point of the path are furthermore shown to be representative of the rapidly-rotating, asymptotic dynamical regime in which Earth's core resides, with a MAC force balance undisturbed by viscosity or inertia, the enforcement of a Taylor state and strong-field dynamo action. We conclude that numerical modelling has advanced to a stage where it is possible to use models correctly representing the statics, kinematics and now the dynamics of the geodynamo. This opens the way to a better analysis of the geomagnetic field in the time and space domains.
GPU-accelerated phase-field simulation of dendritic solidification in a binary alloy
NASA Astrophysics Data System (ADS)
Yamanaka, Akinori; Aoki, Takayuki; Ogawa, Satoi; Takaki, Tomohiro
2011-03-01
The phase-field simulation for dendritic solidification of a binary alloy has been accelerated by using a graphics processing unit (GPU). To perform the phase-field simulation of alloy solidification on a GPU, a program code was developed with the compute unified device architecture (CUDA). In this paper, the implementation technique of the phase-field model on the GPU is presented. We also evaluated the acceleration performance of the three-dimensional solidification simulation using a single NVIDIA TESLA C1060 GPU and the developed program code. The results showed that the GPU calculation for a 576^3 computational grid achieved a performance of 170 GFLOPS by utilizing the shared memory as a software-managed cache. Furthermore, the computation with the GPU is demonstrated to be 100 times faster than that with a single CPU core. From the obtained results, we confirmed the feasibility of realizing a real-time, fully three-dimensional phase-field simulation of microstructure evolution on a personal desktop computer.
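The stencil at the heart of such a phase-field kernel, which the GPU version tiles through shared memory, can be stated in a few lines of NumPy. This is a generic Allen-Cahn-type stand-in, not the paper's binary-alloy model; the grid size, coefficients and double-well term are assumptions:

```python
import numpy as np

def step_phase_field(phi, dt=1.0e-4, dx=1.0, eps2=1.0, w=1.0):
    """One explicit step: 7-point Laplacian plus a double-well source.
    A GPU kernel computes exactly this kind of neighbor stencil,
    staging tiles of phi in shared memory to cut global-memory traffic."""
    lap = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
           np.roll(phi, 1, 1) + np.roll(phi, -1, 1) +
           np.roll(phi, 1, 2) + np.roll(phi, -1, 2) - 6.0 * phi) / dx**2
    return phi + dt * (eps2 * lap - w * phi * (phi - 1.0) * (2.0 * phi - 1.0))

phi = np.random.rand(64, 64, 64)   # 64^3 toy grid; the paper used 576^3
for _ in range(100):
    phi = step_phase_field(phi)
```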
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simunovic, Srdjan
2015-02-16
CASL's modeling and simulation technology, the Virtual Environment for Reactor Applications (VERA), incorporates coupled physics and science-based models, state-of-the-art numerical methods, modern computational science, integrated uncertainty quantification (UQ), and validation against data from operating pressurized water reactors (PWRs), single-effect experiments, and integral tests. The computational simulation component of VERA is the VERA Core Simulator (VERA-CS). The core simulator is the specific collection of multi-physics computer codes used to model and deplete an LWR core over multiple cycles. The core simulator has a single common input file that drives all of the different physics codes. The parser code, VERAIn, converts VERA input into an XML file that is used as input to the different VERA codes.
Zhang, S.; Yuen, D.A.; Zhu, A.; Song, S.; George, D.L.
2011-01-01
We parallelized the GeoClaw code on a one-level grid using OpenMP in March 2011 to meet the urgent need of simulating near-shore tsunami waves from Tohoku 2011, and achieved over 75% of the potential speed-up on an eight-core Dell Precision T7500 workstation [1]. After submitting that work to SC11, the International Conference for High Performance Computing, we obtained an unreleased OpenMP version of GeoClaw from David George, who developed the GeoClaw code as part of his Ph.D. thesis. In this paper, we show the complementary characteristics of the two approaches used in parallelizing GeoClaw and the speed-up obtained by combining the advantages of the two individual approaches with adaptive mesh refinement (AMR), demonstrating the capability of running GeoClaw efficiently on many-core systems. We also show a novel simulation of the Tohoku 2011 tsunami waves inundating the Sendai airport and the Fukushima Nuclear Power Plants, in which the finest grid spacing of 20 meters is achieved through 4-level AMR. This simulation yields quite good predictions of the wave heights and travel times of the tsunami waves. © 2011 IEEE.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fan, Jiwen; Liu, Yi-Chin; Xu, Kuan-Man
2015-04-27
The ultimate goal of this study is to improve the representation of convective transport by cumulus parameterization in mesoscale and climate models. As Part I of the study, we perform extensive evaluations of cloud-resolving simulations of a squall line and of mesoscale convective complexes in mid-latitude continental and tropical regions using the Weather Research and Forecasting (WRF) model with spectral-bin microphysics (SBM) and with two double-moment bulk microphysics schemes: a modified Morrison (MOR) and Milbrandt and Yau (MY2). Compared with observations, SBM generally gives better simulations of precipitation, the vertical velocity of convective cores, and the vertically decreasing trend of radar reflectivity than MOR and MY2, and it will therefore be used for the analysis of the scale-dependence of eddy transport in Part II. The common features of the simulations for all convective systems are that (1) the model tends to overestimate convection intensity in the middle and upper troposphere, but SBM can alleviate much of the overestimation and reproduce the observed convection intensity well; (2) the model greatly overestimates radar reflectivity in convective cores (SBM predicts smaller radar reflectivity but does not remove the large overestimation); and (3) the model performs better for the mid-latitude convective systems than for the tropical system. The modeled mass fluxes of the mid-latitude systems are not sensitive to the microphysics schemes, but are very sensitive for the tropical case, indicating strong microphysical modification of convection. Cloud microphysical measurements of rain, snow and graupel in convective cores will be critically important for further elucidating issues within cloud microphysics schemes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pan, Kuo-Chuan; Liebendörfer, Matthias; Hempel, Matthias
2016-01-20
The neutrino mechanism of core-collapse supernovae is investigated via non-relativistic, two-dimensional (2D), neutrino radiation-hydrodynamic simulations. For the transport of electron-flavor neutrinos, we use the interaction rates defined by Bruenn and the isotropic diffusion source approximation (IDSA) scheme, which decomposes the transported particles into trapped-particle and streaming-particle components. Heavy neutrinos are described by a leakage scheme. Unlike the "ray-by-ray" approach in some other multidimensional supernova models, we use cylindrical coordinates and solve the trapped-particle component in multiple dimensions, improving the proto-neutron star resolution and the neutrino transport in the angular and temporal directions. We provide an IDSA verification by performing one-dimensional (1D) and 2D simulations with 15 and 20 M{sub ⊙} progenitors from Woosley et al. and discuss the difference between our IDSA results and those existing in the literature. Additionally, we perform Newtonian 1D and 2D simulations from prebounce core collapse to several hundred milliseconds postbounce with 11, 15, 21, and 27 M{sub ⊙} progenitors from Woosley et al. with the HS(DD2) equation of state. General-relativistic effects are neglected. We obtain robust explosions with diagnostic energies E{sub dia} ≳ 0.1-0.5 B (1 B ≡ 10{sup 51} erg) for all considered 2D models within approximately 100-300 ms after bounce and find that the explosions are mostly dominated by neutrino-driven convection, although standing accretion shock instabilities are observed as well. We also find that the level of electron deleptonization during collapse dramatically affects the postbounce evolution; e.g., neglecting neutrino-electron scattering during collapse will lead to a stronger explosion.
NASA Astrophysics Data System (ADS)
Bhatla, R.; Ghosh, Soumik; Mall, R. K.; Sinha, P.; Sarkar, Abhijit
2018-05-01
The establishment of Indian summer monsoon (ISM) rainfall passes through different phases and is not uniformly distributed over the Indian subcontinent. The enhancement and reduction in the daily rainfall anomaly over the Indian core monsoon region during the peak monsoon season (i.e., July and August) are commonly termed the 'active' and 'break' phases of the monsoon. The purpose of this study is to analyze REGional Climate Model (RegCM) results obtained using the most suitable convective parameterization scheme (CPS) to determine the active/break phases of the ISM. The model-simulated daily outgoing longwave radiation (OLR), mean sea level pressure (MSLP), and wind at 850 hPa, at a spatial resolution of 0.5° × 0.5°, are compared with NOAA, NCEP, and EIN15 data, respectively, over the South-Asia Co-Ordinated Regional Climate Downscaling EXperiment (CORDEX) region. Composites over 25 years (1986-2010) of OLR, MSLP, and wind at 850 hPa are considered from the start dates of the active/break phases up to the end dates of the active/break spells of the monsoon. A negative/positive OLR anomaly in the active/break phase is found in simulations with the Emanuel and Mix99 (Grell over land; Emanuel over ocean) CPSs over the core monsoon region as well as over the Monsoon Convergence Zone (MCZ) of India. The appearance of the monsoon trough over the core monsoon zone during the active phase and its shift towards the Himalayan foothills during the break phase are also depicted well. Because of the multi-cloud function over the oceanic region and the single cloud function over the land mass, the Mix99 CPS performs well in simulating the synoptic features during the phases of the monsoon.
Convective cooling in a pool-type research reactor
NASA Astrophysics Data System (ADS)
Sipaun, Susan; Usman, Shoaib
2016-01-01
A reactor produces heat arising from fission reactions in the nuclear core. In the Missouri University of Science and Technology research reactor (MSTR), this heat is removed by natural convection, with demineralised water as the coolant/moderator. Heat energy is transferred from the core into the coolant, and the heated water eventually evaporates from the open pool surface. A secondary cooling system was installed to actively remove excess heat arising from prolonged reactor operations. The nuclear core consists of uranium silicide aluminium dispersion fuel (U3Si2Al) in the form of rectangular plates. Gaps between the plates allow coolant to pass through and carry away heat. A study was carried out to map heat flow and to predict the system's performance via STAR-CCM+ simulation. The core was approximated as a porous medium with a porosity of 0.7027. The reactor is rated at 200 kW, and the total heat density is approximately 1.07 x 10^7 W m^-3. An MSTR model consisting of 20% of MSTR's nuclear core in a third of the reactor pool was developed. At 35% pump capacity, the simulation results for the MSTR model showed that water is drawn out of the pool at a rate of 1.28 kg s^-1 through the 4" pipe, with a predicted pool surface temperature not exceeding 30°C.
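The quoted figures imply a compact heated volume, which is worth making explicit; a back-of-the-envelope check in Python (the 10 K secondary-loop temperature rise is an assumed value, not from the abstract):

```python
P_core = 200.0e3        # rated power, W
q_vol = 1.07e7          # reported volumetric heat density, W/m^3

V_fuel = P_core / q_vol
print(f"implied heated volume: {V_fuel*1e3:.1f} L")   # ~18.7 L

# Secondary-loop energy balance at the simulated extraction rate:
m_dot, cp, dT = 1.28, 4186.0, 10.0   # kg/s, J/(kg*K), assumed 10 K rise
print(f"heat removable: {m_dot*cp*dT/1e3:.1f} kW")    # ~53.6 kW
```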
Thermal Modeling of the Injection of Standard and Thermally Insulated Cored Wire
NASA Astrophysics Data System (ADS)
Castro-Cedeno, E.-I.; Jardy, A.; Carré, A.; Gerardin, S.; Bellot, J. P.
2017-12-01
Cored wire injection is a widespread method used to perform alloying additions during ferrous and non-ferrous liquid metal treatment. The wire consists of a metal casing tightly wrapped around a core of material; the casing delays the release of the material as the wire is immersed into the melt. This method of addition offers advantages over bulk additions, such as higher repeatability and a higher yield of the cored material. Experimental and numerical work on the subject of alloy additions has been performed by several authors, mainly for spherical and cylindrical geometries. Surprisingly, this has not been the case for cored wire, for which reported experimental or numerical studies are scarce. This work presents a 1-D finite volume numerical model aimed at simulating the thermal phenomena that occur when the wire is injected into a liquid metal bath. It is currently being used as a design tool for the conception of new types of cored wire. A parametric study on the effect of injection velocity and steel casing thickness for an Al cored wire immersed into a steel melt at 1863 K (1590 °C) is presented. The standard single-casing wire is further compared against a wire with multiple casings. Numerical results show that over a certain range of injection velocities, the release of the core contents is delayed in the multiple-casing wire compared with a single-casing wire.
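A minimal explicit finite-volume sketch in the spirit of the model described above (a slab approximation with uniform, invented properties; the paper's geometry, coefficients, and phase-change physics are not reproduced here):

```python
import numpy as np

# Explicit 1-D finite-volume sketch of a cored wire heating up after
# immersion. All property values below are illustrative assumptions.
n, dx = 50, 1.0e-4                  # cells, cell size (m)
rho_cp = np.full(n, 3.8e6)          # volumetric heat capacity, J/(m^3 K)
lam = 40.0                          # conductivity, W/(m K), uniform here
T = np.full(n, 300.0)               # initial wire temperature, K
T_bath, h = 1863.0, 2.0e4           # melt temperature (K), assumed film coeff.
dt = 0.4 * rho_cp[0] * dx**2 / lam  # explicit stability limit

def step(T):
    flux = np.zeros(n + 1)                 # heat flux at cell faces, W/m^2
    flux[1:-1] = -lam * np.diff(T) / dx    # interior conduction
    flux[0] = 0.0                          # symmetry at the wire center
    flux[-1] = h * (T[-1] - T_bath)        # convective face at the melt
    return T - dt * np.diff(flux) / (rho_cp * dx)

for _ in range(2000):
    T = step(T)   # cell 0 (the core) lags the casing: the release delay
```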
NASA Astrophysics Data System (ADS)
Heinzeller, Dominikus; Duda, Michael G.; Kunstmann, Harald
2017-04-01
With strong financial and political support from national and international initiatives, exascale computing is projected for the end of this decade. Energy requirements and physical limitations imply the use of accelerators and scaling out to numbers of cores orders of magnitude larger than today to achieve this milestone. In order to fully exploit the capabilities of these exascale computing systems, existing applications need to undergo significant development. The Model for Prediction Across Scales (MPAS) is a novel set of Earth system simulation components and consists of an atmospheric core, an ocean core, a land-ice core and a sea-ice core. Its distinct features are the use of unstructured Voronoi meshes and C-grid discretisation to address the shortcomings, with respect to parallel scalability, numerical accuracy and physical consistency, of global models on regular grids and of limited-area models nested in a forcing data set. Here, we present work towards the application of the atmospheric core (MPAS-A) on current and future high performance computing systems for problems at extreme scale. In particular, we address the issue of massively parallel I/O by extending the model to support the highly scalable SIONlib library. Using global uniform meshes with a convection-permitting resolution of 2-3 km, we demonstrate the ability of MPAS-A to scale out to half a million cores while maintaining a high parallel efficiency. We also demonstrate the potential benefit of a hybrid parallelisation of the code (MPI/OpenMP) on the latest generation of Intel's Many Integrated Core Architecture, the Intel Xeon Phi Knights Landing.
Graphics Processing Unit Acceleration of Gyrokinetic Turbulence Simulations
NASA Astrophysics Data System (ADS)
Hause, Benjamin; Parker, Scott; Chen, Yang
2013-10-01
We find a substantial increase in on-node performance using Graphics Processing Unit (GPU) acceleration in gyrokinetic delta-f particle-in-cell simulation. Optimization is performed on a two-dimensional slab gyrokinetic particle simulation using the Portland Group Fortran compiler with the OpenACC compiler directives and CUDA Fortran. A mixed implementation of both OpenACC and CUDA is demonstrated; CUDA is required for optimizing the particle deposition algorithm. We have implemented the GPU acceleration on a third-generation Core i7 gaming PC with two NVIDIA GTX 680 GPUs. We find comparable, or better, acceleration relative to the NERSC DIRAC cluster with the NVIDIA Tesla C2050 computing processor. The Tesla C2050 is about 2.6 times more expensive than the GTX 580 gaming GPU. We also see enormous speedups (10x or more) on the Titan supercomputer at Oak Ridge with Kepler K20 GPUs. Results show speed-ups comparable to or better than those of OpenMP models utilizing multiple cores. The use of hybrid OpenACC, CUDA Fortran, and MPI models across many nodes will also be discussed, and optimization strategies will be presented. We will discuss progress on optimizing the comprehensive three-dimensional general geometry GEM code.
NASA Astrophysics Data System (ADS)
Lahouij, I.; Bucholz, E. W.; Vacher, B.; Sinnott, S. B.; Martin, J. M.; Dassenoy, F.
2012-09-01
Inorganic fullerene-like (IF) nanoparticles made of metal dichalcogenides have previously been recognized to be good friction modifiers and anti-wear additives under boundary lubrication conditions. The tribological performance of these particles appears to be a result of their size, structure and morphology, along with the test conditions. However, the very small scale of the IF nanoparticles makes distinguishing the properties which affect the lubrication mechanism exceedingly difficult. In this work, a high resolution transmission electron microscope equipped with a nanoindentation holder is used to manipulate individual hollow IF-WS2 nanoparticles and to investigate their responses to compression. Additional atomistic molecular dynamics (MD) simulations of similarly structured, individual hollow IF-MoS2 nanoparticles are performed for compression studies between molybdenum surfaces on their major and minor axis diameters. MD simulations of these structures allow for characterization of the influence of structural orientation on the mechanical behavior and nano-sheet exfoliation of hollow-core IF nanoparticles. The experimental and theoretical results for these similar nanoparticles are qualitatively compared.
NASA Technical Reports Server (NTRS)
Stoker, C. R.; Clarke, J. D. A.; Direito, S.; Foing, B.
2011-01-01
The DOMEX program is a NASA-MMAMA funded project featuring simulations of human crews on Mars, focused on science activities that involve collecting samples from the subsurface using both manual methods and robotic equipment and analyzing them in the field and post-mission. A crew simulating a human mission to Mars performed activities focused on subsurface science for 2 weeks in November 2009 at the Mars Desert Research Station near Hanksville, Utah, an important chemical and morphological Mars analog site. Activities performed included 1) a survey of the area to identify geologic provinces; 2) obtaining soil and rock samples from each province and characterizing their mineralogy, chemistry, and biology; 3) site selection and reconnaissance for a future drilling mission; 4) deployment and testing of the Mars Underground Mole, a percussive robotic soil sampling device; and 5) recording and analyzing how crew time was used to accomplish these tasks. This paper summarizes results from analysis of soil cores
Featured Image: The Simulated Collapse of a Core
NASA Astrophysics Data System (ADS)
Kohler, Susanna
2016-11-01
This stunning snapshot (click for a closer look!) is from a simulation of a core-collapse supernova. Despite having been studied for many decades, the mechanism driving the explosions of core-collapse supernovae is still an area of active research. Extremely complex simulations such as this one represent best efforts to include as many realistic physical processes as is currently computationally feasible. In this study led by Luke Roberts (a NASA Einstein Postdoctoral Fellow at Caltech at the time), a core-collapse supernova is modeled long-term in fully 3D simulations that include the effects of general relativity, radiation hydrodynamics, and even neutrino physics. The authors use these simulations to examine the evolution of a supernova after its core bounce. To read more about the team's findings (and see more awesome images from their simulations), check out the paper below! Citation: Luke F. Roberts et al 2016 ApJ 831 98. doi:10.3847/0004-637X/831/1/98
The effect of extreme ionization rates during the initial collapse of a molecular cloud core
NASA Astrophysics Data System (ADS)
Wurster, James; Bate, Matthew R.; Price, Daniel J.
2018-05-01
What cosmic ray ionization rate is required such that a non-ideal magnetohydrodynamics (MHD) simulation of a collapsing molecular cloud will follow the same evolutionary path as an ideal MHD simulation or as a purely hydrodynamics simulation? To investigate this question, we perform three-dimensional smoothed particle non-ideal MHD simulations of the gravitational collapse of rotating, one solar mass, magnetized molecular cloud cores, which include Ohmic resistivity, ambipolar diffusion, and the Hall effect. We assume a uniform grain size of ag = 0.1 μm, and our free parameter is the cosmic ray ionization rate, ζcr. We evolve our models, where possible, until they have produced a first hydrostatic core. Models with ζcr ≳ 10^-13 s^-1 are indistinguishable from ideal MHD models, and the evolution of the model with ζcr = 10^-14 s^-1 matches the evolution of the ideal MHD model within 1 per cent when considering maximum density, magnetic energy, and maximum magnetic field strength as a function of time; these results are independent of ag. Models with very low ionization rates (ζcr ≲ 10^-24 s^-1) are required to approach hydrodynamical collapse, and even lower ionization rates may be required for larger ag. Thus, it is possible to reproduce ideal MHD and purely hydrodynamical collapses using non-ideal MHD given an appropriate cosmic ray ionization rate. However, realistic cosmic ray ionization rates approach neither limit; thus, non-ideal MHD cannot be neglected in star formation simulations.
Toward a more efficient and scalable checkpoint/restart mechanism in the Community Atmosphere Model
NASA Astrophysics Data System (ADS)
Anantharaj, Valentine
2015-04-01
The number of cores (both CPU as well as accelerator) in large-scale systems has been increasing rapidly over the past several years. In 2008, there were only 5 systems in the Top500 list that had over 100,000 total cores (including accelerator cores), whereas the number of systems with such capability had jumped to 31 by Nov 2014. This growth, however, has also increased the risk of hardware failures, necessitating the implementation of fault tolerance mechanisms in applications. The checkpoint and restart (C/R) approach is commonly used to save the state of the application and restart at a later time, either after failure or to continue execution of experiments. The implementation of an efficient C/R mechanism will make it more affordable to output the necessary C/R files more frequently. The availability of larger systems (more nodes, memory and cores) has also facilitated the scaling of applications. Nowadays, it is common to conduct coupled global climate simulation experiments at 1 deg horizontal resolution (atmosphere), often requiring about 10^3 cores. At the same time, a few climate modeling teams that have access to a dedicated cluster and/or large-scale systems are involved in modeling experiments at 0.25 deg horizontal resolution (atmosphere) and 0.1 deg resolution for the ocean. These ultrascale configurations require on the order of 10^4 to 10^5 cores. It is not only necessary for the numerical algorithms to scale efficiently, but the input/output (I/O) mechanism must also scale accordingly. An ongoing series of ultrascale climate simulations, using the Titan supercomputer at the Oak Ridge Leadership Computing Facility (ORNL), is based on the spectral element dynamical core of the Community Atmosphere Model (CAM-SE), which is a component of the Community Earth System Model and the DOE Accelerated Climate Model for Energy (ACME). The CAM-SE dynamical core for a 0.25 deg configuration has been shown to scale efficiently across 100,000 CPU cores. At this scale, there is an increased risk that the simulation could be terminated due to hardware failures, resulting in a loss that could be as high as 10^5 to 10^6 Titan core-hours. Increasing the frequency of the output of C/R files could mitigate this loss, but at the cost of additional C/R overhead. We are testing a more efficient C/R mechanism in CAM-SE. Our early implementation has demonstrated a nearly 3x performance improvement for a 1 deg CAM-SE (with CAM5 physics and MOZART chemistry) configuration using nearly 10^3 cores. We are in the process of scaling our implementation to 10^5 cores. This would allow us to run ultrascale simulations with more sophisticated physics and chemistry options while making better utilization of resources.
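The trade-off described here, checkpoint overhead versus expected lost work, has a classic first-order answer in Young's formula, T_opt = sqrt(2*C*MTBF). A short sketch with invented costs (the abstract gives a ~3x C/R speedup but no absolute write times):

```python
import math

def optimal_checkpoint_interval(checkpoint_cost_s: float, mtbf_s: float) -> float:
    """Young's first-order approximation for the interval between
    checkpoints that minimizes overhead plus expected lost work."""
    return math.sqrt(2.0 * checkpoint_cost_s * mtbf_s)

# Illustrative: a 3x faster checkpoint write (600 s -> 200 s) lets the
# job checkpoint ~1.7x more often for the same total overhead.
mtbf = 24 * 3600.0   # assumed one failure per day at full scale
for cost in (600.0, 200.0):
    t = optimal_checkpoint_interval(cost, mtbf)
    print(f"write {cost:.0f} s -> checkpoint every {t/3600:.2f} h")
```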
Treangen, Todd J; Ondov, Brian D; Koren, Sergey; Phillippy, Adam M
2014-01-01
Whole-genome sequences are now available for many microbial species and clades; however, existing whole-genome alignment methods are limited in their ability to compare many sequences simultaneously. Here we present the Harvest suite of core-genome alignment and visualization tools for the rapid and simultaneous analysis of thousands of intraspecific microbial strains. Harvest includes Parsnp, a fast core-genome multi-aligner, and Gingr, a dynamic visual platform. Together they provide interactive core-genome alignments, variant calls, recombination detection, and phylogenetic trees. Using simulated and real data, we demonstrate that our approach exhibits unrivaled speed while maintaining the accuracy of existing methods. The Harvest suite is open-source and freely available from http://github.com/marbl/harvest.
A comparison of East Asian summer monsoon simulations from CAM3.1 with three dynamic cores
NASA Astrophysics Data System (ADS)
Wei, Ting; Wang, Lanning; Dong, Wenjie; Dong, Min; Zhang, Jingyong
2011-12-01
This paper examines the sensitivity of CAM3.1 simulations of East Asian summer monsoon (EASM) to the choice of dynamic cores using three long-term simulations, one with each of the following cores: the Eulerian spectral transform method (EUL), semi-Lagrangian scheme (SLD) and finite volume approach (FV). Our results indicate that the dynamic cores significantly influence the simulated fields not only through dynamics, such as wind, but also through physical processes, such as precipitation. Generally speaking, SLD is superior to EUL and FV in simulating the climatological features of EASM and its interannual variability. The SLD version of the CAM model partially reduces its known deficiency in simulating the climatological features of East Asian summer precipitation. The strength and position of simulated western Pacific subtropical high (WPSH) and its ridge line compare more favourably with observations in SLD and FV than in EUL. They contribute to the intensification of the south-easterly along the south of WPSH and the vertical motion through the troposphere around 30° N, where the subtropical rain belt exists. Additionally, SLD simulates the scope of the westerly jet core over East Asia more realistically than the other two dynamic cores do. Considerable systematic errors of the seasonal migration of monsoon rain belt and water vapour flux exist in all of the three versions of CAM3.1 model, although it captures the broad northward shift of convection, and the simulated results share similarities. The interannual variation of EASM is found to be more accurate in SLD simulation, which reasonably reproduces the leading combined patterns of precipitation and 850-hPa winds in East Asia, as well as the 2.5- and 10-year periods of Li-Zeng EASM index. These results emphasise the importance of dynamic cores for the EASM simulation as distinct from the simulation's sensitivity to the physical parameterisations.
Parallel simulation of tsunami inundation on a large-scale supercomputer
NASA Astrophysics Data System (ADS)
Oishi, Y.; Imamura, F.; Sugawara, D.
2013-12-01
An accurate prediction of tsunami inundation is important for disaster mitigation purposes. One approach is to approximate the tsunami wave source through an instant inversion analysis using real-time observation data (e.g., Tsushima et al., 2009) and then use the resulting wave source data in an instant tsunami inundation simulation. A bottleneck of this approach, however, is the large computational cost of the non-linear inundation simulation, and the computational power of recent massively parallel supercomputers is helpful for enabling faster-than-real-time execution of a tsunami inundation simulation. Parallel computers have become approximately 1000 times faster in 10 years (www.top500.org), so very fast parallel computers are expected to become more and more prevalent in the near future. It is therefore important to investigate how to conduct a tsunami simulation efficiently on parallel computers. In this study, we are targeting very fast tsunami inundation simulations on the K computer, currently the fastest Japanese supercomputer, which has a theoretical peak performance of 11.2 PFLOPS. One computing node of the K computer consists of one CPU with 8 cores that share memory, and the nodes are connected through a high-performance torus-mesh network. The K computer is designed for distributed-memory parallel computation, so we have developed a parallel tsunami model. Our model is based on the TUNAMI-N2 model of Tohoku University, which uses a leap-frog finite difference method. A grid nesting scheme is employed to apply high-resolution grids only in the coastal regions. To balance the computational load across CPUs in the parallelization, CPUs are first allocated to each nested layer in proportion to the number of grid points of that layer. Using the CPUs allocated to each layer, 1-D domain decomposition is performed within each layer. In the parallel computation, three types of communication are necessary: (1) communication with adjacent neighbours for the finite difference calculation, (2) communication between adjacent layers for the calculations that connect the layers, and (3) global communication to obtain the time step that satisfies the CFL condition over the whole domain. A preliminary test on the K computer showed that the parallel efficiency on 1024 cores was 57% relative to 64 cores. We estimate that the parallel efficiency can be considerably improved by applying a 2-D domain decomposition instead of the present 1-D decomposition in future work. The present parallel tsunami model was applied to the 2011 Great Tohoku tsunami. The coarsest resolution layer covers a 758 km × 1155 km region with a 405 m grid spacing. A nesting of five layers was used, with a resolution ratio of 1/3 between nested layers. The finest resolution region has 5 m resolution and covers most of the coastal area of Sendai city. To complete 2 hours of simulation time, the serial (non-parallel) computation took approximately 4 days on a workstation. The same simulation on 1024 cores of the K computer took 45 minutes, more than twice as fast as real time. This presentation discusses the updated parallel computational performance and the efficient use of the K computer, considering the characteristics of the tsunami inundation simulation model in relation to the characteristics and capabilities of the K computer.
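Communication type (3), the global CFL reduction, is the simplest of the three to sketch. The snippet below assumes mpi4py and a shallow-water wave-speed estimate (both assumptions; the actual model is a custom TUNAMI-N2-based code):

```python
import numpy as np
from mpi4py import MPI   # assumes an MPI environment is available

def global_cfl_dt(h_local, dx, g=9.81, cfl=0.8):
    """Each rank computes its local stable step from the shallow-water
    wave speed sqrt(g*h), then all ranks agree on the global minimum."""
    c_max = np.sqrt(g * np.maximum(h_local, 1.0e-6)).max()
    dt_local = cfl * dx / c_max
    return MPI.COMM_WORLD.allreduce(dt_local, op=MPI.MIN)
```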
Influence of deep defects on device performance of thin-film polycrystalline silicon solar cells
NASA Astrophysics Data System (ADS)
Fehr, M.; Simon, P.; Sontheimer, T.; Leendertz, C.; Gorka, B.; Schnegg, A.; Rech, B.; Lips, K.
2012-09-01
Employing quantitative electron-paramagnetic resonance analysis and numerical simulations, we investigate the performance of thin-film polycrystalline silicon solar cells as a function of defect density. We find that the open-circuit voltage is correlated to the density of defects, which we assign to coordination defects at grain boundaries and in dislocation cores. Numerical device simulations confirm the observed correlation and indicate that the device performance is limited by deep defects in the absorber bulk. Analyzing the defect density as a function of grain size indicates a high concentration of intra-grain defects. For large grains (>2 μm), we find that intra-grain defects dominate over grain boundary defects and limit the solar cell performance.
NASA Astrophysics Data System (ADS)
Hao, Y.; Smith, M. M.; Mason, H. E.; Carroll, S.
2015-12-01
It has long been appreciated that chemical interactions have a major effect on rock porosity and permeability evolution and may alter the behavior or performance of both natural and engineered reservoir systems. Such reaction-induced permeability evolution is of particular importance for geological CO2 sequestration and for storage associated with enhanced oil recovery. In this study we used a three-dimensional Darcy-scale reactive transport model to simulate CO2 core flood experiments in which CO2-equilibrated brine was injected into dolostone cores collected from the Arbuckle carbonate reservoir, Wellington, Kansas. Heterogeneous distributions of macropores, fractures, and mineral phases inside the cores were obtained from X-ray computed microtomography (XCMT) characterization data and then used to construct the initial macroscopic model properties, including porosity, permeability, and mineral compositions. The reactive transport simulations were performed using the Nonisothermal Unsaturated Flow and Transport (NUFT) code, and their results were compared with the experimental data. It was observed both experimentally and numerically that the dissolution fronts become unstable in highly heterogeneous and less permeable formations, leading to the development of highly porous flow paths, or wormholes. Our model results indicate that continuum-scale reactive transport models are able to adequately capture the evolution of distinct dissolution fronts as observed in carbonate rocks at the core scale. The impacts of rock heterogeneity, chemical kinetics and porosity-permeability relationships were also examined in this study. The numerical model developed in this study will not only help improve understanding of the coupled physical and chemical processes controlling carbonate dissolution, but also provide a useful basis for upscaling transport and reaction properties from the core scale to the field scale. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
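The porosity-permeability relationship mentioned above is a model choice; one common closure in continuum reactive-transport codes is the Kozeny-Carman form, shown here as an illustration (the study's actual relationship may differ):

```python
def kozeny_carman(k0: float, phi0: float, phi: float) -> float:
    """Kozeny-Carman update: permeability k from a reference state
    (k0, phi0) as dissolution changes the porosity to phi."""
    return k0 * (phi / phi0) ** 3 * ((1.0 - phi0) / (1.0 - phi)) ** 2

# Dissolution raising porosity 0.15 -> 0.25 boosts permeability ~6x,
# the kind of feedback that lets wormholes run away (values assumed).
print(kozeny_carman(1.0e-14, 0.15, 0.25) / 1.0e-14)
```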
Real-time haptic cutting of high-resolution soft tissues.
Wu, Jun; Westermann, Rüdiger; Dick, Christian
2014-01-01
We present our systematic efforts in advancing the computational performance of physically accurate soft tissue cutting simulation, which is at the core of surgery simulators in general. We demonstrate a real-time performance of 15 simulation frames per second for haptic soft tissue cutting of a deformable body at an effective resolution of 170,000 finite elements. This is achieved by the following innovative components: (1) a linked octree discretization of the deformable body, which allows for fast and robust topological modifications of the simulation domain, (2) a composite finite element formulation, which thoroughly reduces the number of simulation degrees of freedom and thus makes it possible to carefully balance simulation performance and accuracy, (3) a highly efficient geometric multigrid solver for solving the linear systems of equations arising from implicit time integration, (4) an efficient collision detection algorithm that effectively exploits the composition structure, and (5) a stable haptic rendering algorithm for computing the feedback forces. Considering that our method increases the finite element resolution for physically accurate real-time soft tissue cutting simulation by an order of magnitude, our technique has a high potential to significantly advance the realism of surgery simulators.
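The geometric multigrid solver of component (3) operates on the 3-D linked-octree discretization; as a rough illustration of the V-cycle structure only, here is a minimal 1-D Poisson version, assuming homogeneous Dirichlet boundaries and a grid of n = 2^k + 1 points. All names and details are ours, not the paper's.

    #include <stdlib.h>

    /* Gauss-Seidel sweeps for -u'' = f on a grid with spacing h. */
    static void smooth(double *u, const double *f, int n, double h, int sweeps)
    {
        for (int s = 0; s < sweeps; ++s)
            for (int i = 1; i < n - 1; ++i)
                u[i] = 0.5 * (u[i-1] + u[i+1] + h * h * f[i]);
    }

    /* One recursive V-cycle: pre-smooth, restrict residual, coarse-grid
       correction, prolongate, post-smooth. */
    static void vcycle(double *u, const double *f, int n, double h)
    {
        smooth(u, f, n, h, 3);                       /* pre-smoothing */
        if (n <= 3) return;                          /* coarsest level */
        int nc = (n + 1) / 2;
        double *r  = calloc(n,  sizeof *r);
        double *rc = calloc(nc, sizeof *rc);
        double *ec = calloc(nc, sizeof *ec);         /* zero initial guess */
        for (int i = 1; i < n - 1; ++i)              /* residual of -u'' = f */
            r[i] = f[i] + (u[i-1] - 2.0 * u[i] + u[i+1]) / (h * h);
        for (int i = 1; i < nc - 1; ++i)             /* full-weighting restriction */
            rc[i] = 0.25 * (r[2*i-1] + 2.0 * r[2*i] + r[2*i+1]);
        vcycle(ec, rc, nc, 2.0 * h);                 /* coarse-grid correction */
        for (int i = 1; i < nc - 1; ++i)             /* linear prolongation */
            u[2*i] += ec[i];
        for (int i = 1; i < nc; ++i)
            u[2*i-1] += 0.5 * (ec[i-1] + ec[i]);
        smooth(u, f, n, h, 3);                       /* post-smoothing */
        free(r); free(rc); free(ec);
    }

In the simulator, the same smooth-restrict-correct recursion would be applied to the composite finite element system, re-assembled whenever a cut changes the topology of the octree.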
A Review of Inflammatory Processes of the Breast with a Focus on Diagnosis in Core Biopsy Samples
D’Alfonso, Timothy M.; Ginter, Paula S.; Shin, Sandra J.
2015-01-01
Inflammatory and reactive lesions of the breast are relatively uncommon among benign breast lesions and can be the source of an abnormality on imaging. Such lesions can simulate a malignant process, based on both clinical and radiographic findings, and core biopsy is often performed to rule out malignancy. Furthermore, some inflammatory processes can mimic carcinoma or other malignancy microscopically, and vice versa. Diagnostic difficulty may arise due to the small and fragmented sample of a core biopsy. This review will focus on the pertinent clinical, radiographic, and histopathologic features of the more commonly encountered inflammatory lesions of the breast that can be characterized in a core biopsy sample. These include fat necrosis, mammary duct ectasia, granulomatous lobular mastitis, diabetic mastopathy, and abscess. The microscopic differential diagnoses for these lesions when seen in a core biopsy sample will be discussed. PMID:26095437
Effective delayed neutron fraction and prompt neutron lifetime of Tehran research reactor mixed-core
Lashkari, A.; Khalafi, H.; Kazeminejad, H.
2013-01-01
In this work, kinetic parameters of Tehran research reactor (TRR) mixed cores have been calculated. The mixed core configurations are made by replacement of the low enriched uranium control fuel elements with highly enriched uranium control fuel elements in the reference core. The MTR_PC package, a nuclear reactor analysis tool, is used to perform the analysis. Simulations were carried out to compute the effective delayed neutron fraction and prompt neutron lifetime. Calculation of kinetic parameters is necessary for reactivity and power excursion transient analysis. The results of this research show that the effective delayed neutron fraction decreases and the prompt neutron lifetime increases with fuel burn-up. Also, by increasing the number of highly enriched uranium control fuel elements in the reference core, the prompt neutron lifetime increases, but the effective delayed neutron fraction does not show any considerable change. PMID:24976672
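To see why these two parameters matter for transient analysis, consider the one-delayed-group point kinetics equations on which reactivity and power excursion calculations build. The sketch below integrates them for a small step reactivity insertion; all parameter values are illustrative, not TRR data.

    #include <stdio.h>

    /* One-delayed-group point kinetics, explicit Euler:
         dn/dt = ((rho - beta)/Lambda) n + lambda C
         dC/dt = (beta/Lambda) n - lambda C
       beta = effective delayed neutron fraction, Lambda = prompt neutron
       lifetime (s), lambda = precursor decay constant (1/s). */
    int main(void)
    {
        double beta = 0.007, Lambda = 4.0e-5, lambda = 0.08;
        double rho = 0.001;                       /* step reactivity insertion */
        double n = 1.0;                           /* relative power */
        double C = beta * n / (Lambda * lambda);  /* equilibrium precursors */
        double dt = 1.0e-6;
        for (double t = 0.0; t < 0.1; t += dt) {
            double dn = ((rho - beta) / Lambda) * n + lambda * C;
            double dC = (beta / Lambda) * n - lambda * C;
            n += dt * dn;
            C += dt * dC;
        }
        printf("relative power after 0.1 s: %g\n", n);
        return 0;
    }

A smaller beta makes the same reactivity insertion a larger fraction of a dollar, while a longer prompt lifetime Lambda slows the prompt response, which is why the opposite burn-up trends reported above both matter for transient behavior.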
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kontogeorgakos, D.; Derstine, K.; Wright, A.
2013-06-01
The purpose of the TREAT reactor is to generate large transient neutron pulses in test samples without over-heating the core, to simulate fuel assembly accident conditions. The power transients in the present HEU core are inherently self-limiting, such that the core prevents itself from overheating even in the event of a reactivity insertion accident. The objective of this study was to support the assessment of the feasibility of the TREAT core conversion based on the present reactor performance metrics and the technical specifications of the HEU core. The LEU fuel assembly studied had the same overall design, materials (UO2 particles finely dispersed in graphite) and impurities content as the HEU fuel assembly. The Monte Carlo N-Particle code (MCNP) and the point kinetics code TREKIN were used in the analyses.
Equilibrium cycle pin by pin transport depletion calculations with DeCART
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kochunas, B.; Downar, T.; Taiwo, T.
As the Advanced Fuel Cycle Initiative (AFCI) program has matured, it has become more important to utilize more advanced simulation methods. The work reported here was performed as part of the AFCI fellowship program to develop and demonstrate the capability of performing high fidelity equilibrium cycle calculations. As part of this work, a new multi-cycle analysis capability was implemented in the DeCART code, which included modifying the depletion modules to perform nuclide decay calculations, implementing an assembly shuffling pattern description, and modifying iteration schemes. During the work, stability issues were uncovered with respect to converging simultaneously the neutron flux, isotopics, and fluid density and temperature distributions in 3-D. Relaxation factors were implemented which considerably improved the stability of the convergence. To demonstrate the capability, two core designs were utilized, a reference UOX core and a CORAIL core. Full core equilibrium cycle calculations were performed on both cores and the discharge isotopics were compared. From this comparison it was noted that the improved modeling capability was not drastically different in its prediction of the discharge isotopics when compared to 2-D single assembly or 2-D core models. For fissile isotopes such as U-235, Pu-239, and Pu-241 the relative differences were 1.91%, 1.88%, and 0.59%, respectively. While this difference may not seem large, it translates to mass differences on the order of tens of grams per assembly, which may be significant for the purposes of accounting of special nuclear material. (authors)
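The relaxation factors mentioned above can be pictured as a simple blending of successive outer-iteration fields; a minimal sketch (names ours, not DeCART's):

    /* Under-relaxation between outer iterations: blend the newly computed
       field x_new into the previous iterate x. omega is the relaxation
       factor, 0 < omega <= 1; omega = 1 recovers plain fixed-point iteration. */
    void relax(double *x, const double *x_new, int n, double omega)
    {
        for (int i = 0; i < n; ++i)
            x[i] = x[i] + omega * (x_new[i] - x[i]);
    }

Applied to the flux, isotopic, density, and temperature fields between outer iterations, values of omega below 1 damp the oscillatory coupling that caused the reported stability issues, at the cost of more iterations to converge.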
Integrated fusion simulation with self-consistent core-pedestal coupling
Meneghini, O.; Snyder, P. B.; Smith, S. P.; ...
2016-04-20
Accurate prediction of fusion performance in present and future tokamaks requires taking into account the strong interplay between core transport, pedestal structure, current profile and plasma equilibrium. An integrated modeling workflow capable of calculating the steady-state self-consistent solution to this strongly-coupled problem has been developed. The workflow leverages state-of-the-art components for collisional and turbulent core transport, equilibrium and pedestal stability. Validation against DIII-D discharges shows that the workflow is capable of robustly predicting the kinetic profiles (electron and ion temperature and electron density) from the axis to the separatrix in good agreement with the experiments. An example application is presented, showing self-consistent optimization of the fusion performance of the 15 MA D-T ITER baseline scenario as a function of the pedestal density and ion effective charge Zeff.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sen, Ramazan Sonat; Hummel, Andrew John; Hiruta, Hikaru
Deterministic full core simulators require homogenized group constants covering the operating and transient conditions over the entire lifetime. Traditionally, the homogenized group constants are generated using a lattice physics code over an assembly or, in the case of prismatic high temperature reactors (HTR), a block. Strong absorbers that cause strong local depressions in the flux profile require special techniques during homogenization over a large volume; fuel blocks with burnable poisons and control rod blocks are examples of such cases. Over the past several decades, a tremendous number of studies have been performed to improve the accuracy of full-core calculations through the homogenization procedure. However, those studies were mostly performed for light water reactor (LWR) analyses and thus may not be directly applicable to advanced thermal reactors such as HTRs. This report presents the application of the SuPer-Homogenization correction method to a hypothetical HTR core.
2015-02-02
CHRISTOPHER CRUMBLY, MANAGER OF THE SPACECRAFT PAYLOAD INTEGRATION AND EVOLUTION OFFICE, GAVE VISITORS AN INSIDER'S PERSPECTIVE ON THE CORE STAGE SIMULATOR AT MARSHALL AND ITS IMPORTANCE TO DEVELOPMENT OF THE SPACE LAUNCH SYSTEM.
Development of the V4.2m5 and V5.0m0 Multigroup Cross Section Libraries for MPACT for PWR and BWR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Kang Seog; Clarno, Kevin T.; Gentry, Cole
2017-03-01
The MPACT neutronics module of the Consortium for Advanced Simulation of Light Water Reactors (CASL) core simulator is a 3-D whole core transport code being developed for the CASL toolset, Virtual Environment for Reactor Analysis (VERA). Key characteristics of the MPACT code include (1) a subgroup method for resonance self-shielding and (2) a whole-core transport solver with a 2-D/1-D synthesis method. The MPACT code requires a cross section library to support all of its core simulation capabilities; this library is among the components with the greatest influence on simulation accuracy.
NASA Astrophysics Data System (ADS)
Hofierka, Jaroslav; Lacko, Michal; Zubal, Stanislav
2017-10-01
In this paper, we describe the parallelization of three complex and computationally intensive modules of GRASS GIS using the OpenMP application programming interface for multi-core computers. These include the v.surf.rst module for spatial interpolation, the r.sun module for solar radiation modeling and the r.sim.water module for water flow simulation. We briefly describe the functionality of the modules and parallelization approaches used in the modules. Our approach includes the analysis of the module's functionality, identification of source code segments suitable for parallelization and proper application of OpenMP parallelization code to create efficient threads processing the subtasks. We document the efficiency of the solutions using the airborne laser scanning data representing land surface in the test area and derived high-resolution digital terrain model grids. We discuss the performance speed-up and parallelization efficiency depending on the number of processor threads. The study showed a substantial increase in computation speeds on a standard multi-core computer while maintaining the accuracy of results in comparison to the output from original modules. The presented parallelization approach showed the simplicity and efficiency of the parallelization of open-source GRASS GIS modules using OpenMP, leading to an increased performance of this geospatial software on standard multi-core computers.
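The parallelization pattern described, independent per-cell work distributed over threads, has the following general shape in OpenMP. This is a schematic sketch, not code from the GRASS modules; compute_cell is a hypothetical stand-in for the per-cell work (interpolation, solar radiation, or flow routing).

    #include <omp.h>

    /* Process a raster grid with a team of threads; each cell is computed
       independently, so the outer loop can be divided among threads. */
    void process_grid(double *out, const double *in, int rows, int cols,
                      double (*compute_cell)(const double *, int, int, int))
    {
        #pragma omp parallel for schedule(dynamic)
        for (int r = 0; r < rows; ++r)
            for (int c = 0; c < cols; ++c)
                out[r * cols + c] = compute_cell(in, rows, cols, r * cols + c);
    }

schedule(dynamic) helps when the per-cell cost varies (e.g., shadow casting in solar radiation modeling), at the price of some scheduling overhead; for uniform workloads a static schedule avoids that overhead.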
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, X. G.; Kim, Y. S.; Choi, K. Y.
2012-07-01
An SBO (station blackout) experiment named SBO-01 was performed at the full-pressure IET (Integral Effect Test) facility ATLAS (Advanced Test Loop for Accident Simulation), which is scaled down from the APR1400 (Advanced Power Reactor 1400 MWe). In this study, the transient of SBO-01 is discussed and subdivided into three phases: the SG fluid loss phase, the RCS fluid loss phase, and the core coolant depletion and core heatup phase. In addition, the typical phenomena in the SBO-01 test - SG dryout, natural circulation, core coolant boiling, pressurizer (PRZ) filling, core heat-up - are identified. Furthermore, the SBO-01 test is reproduced by a MARS code calculation with an ATLAS model which represents the ATLAS test facility. The experimental and calculated transients are then compared and discussed. The comparison reveals malfunctions of equipment: SG leakage through an SG MSSV and a measurement error of the loop flow meter. As the ATLAS model is validated against the experimental results, it can be further employed to investigate other possible SBO scenarios and to study the scaling distortions in the ATLAS. (authors)
Evaluation of Neutron Radiography Reactor LEU-Core Start-Up Measurements
Bess, John D.; Maddock, Thomas L.; Smolinski, Andrew T.; ...
2014-11-04
Benchmark models were developed to evaluate the cold-critical start-up measurements performed during the fresh core reload of the Neutron Radiography (NRAD) reactor with Low Enriched Uranium (LEU) fuel. Experiments include criticality, control-rod worth measurements, shutdown margin, and excess reactivity for four core loadings with 56, 60, 62, and 64 fuel elements. The worth of four graphite reflector block assemblies and an empty dry tube used for experiment irradiations were also measured and evaluated for the 60-fuel-element core configuration. Dominant uncertainties in the experimental keff come from uncertainties in the manganese content and impurities in the stainless steel fuel cladding as well as the 236U and erbium poison content in the fuel matrix. Calculations with MCNP5 and ENDF/B-VII.0 neutron nuclear data are approximately 1.4% (9σ) greater than the benchmark model eigenvalues, which is commonly seen in Monte Carlo simulations of other TRIGA reactors. Simulations of the worth measurements are within the 2σ uncertainty for most of the benchmark experiment worth values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.
NASA Astrophysics Data System (ADS)
Yoshida, M.
2015-12-01
An east-west hemispherically asymmetric structure for Earth's inner core has been suggested by various lines of seismological evidence, but its origin is not clearly understood. Here, to investigate the possibility of an "endogenic origin" for the degree-one thermal/mechanical structure of the inner core, I performed new numerical simulations of thermal convection in the growing inner core. A setup value that controls the viscosity contrast between the inner core boundary and the interior of the inner core, ΔηT, was taken as a free parameter. Results show that the degree-one structure only appeared for a limited range of ΔηT; such a scenario may be possible but is not considered probable for the real Earth. The degree-one structure may instead have been produced by an "exogenous factor", the planetary-scale thermal coupling among the lower mantle, the outer core, and the inner core, rather than by an endogenic factor due to internal rheological heterogeneity.
Thakral, Preston P.; Benoit, Roland G.; Schacter, Daniel L.
2017-01-01
Neuroimaging data indicate that episodic memory (i.e., remembering specific past experiences) and episodic simulation (i.e., imagining specific future experiences) are associated with enhanced activity in a common set of neural regions, often referred to as the core network. This network comprises the hippocampus, parahippocampal cortex, lateral and medial parietal cortex, lateral temporal cortex, and medial prefrontal cortex. Evidence for a core network has been taken as support for the idea that episodic memory and episodic simulation are supported by common processes. Much remains to be learned about how specific core network regions contribute to specific aspects of episodic simulation. Prior neuroimaging studies of episodic memory indicate that certain regions within the core network are differentially sensitive to the amount of information recollected (e.g., the left lateral parietal cortex). In addition, certain core network regions dissociate as a function of their timecourse of engagement during episodic memory (e.g., transient activity in the posterior hippocampus and sustained activity in the left lateral parietal cortex). In the current study, we assessed whether similar dissociations could be observed during episodic simulation. We found that the left lateral parietal cortex modulates as a function of the amount of simulated details. Of particular interest, while the hippocampus was insensitive to the amount of simulated details, we observed a temporal dissociation within the hippocampus: transient activity occurred in relatively posterior portions of the hippocampus and sustained activity occurred in anterior portions. Because the posterior hippocampal and lateral parietal findings parallel those observed previously during episodic memory, the present results add to the evidence that episodic memory and episodic simulation are supported by common processes. Critically, the present study also provides evidence that regions within the core network support dissociable processes. PMID:28324695
NASA Astrophysics Data System (ADS)
Yang, Bo; Tong, Yuting
2017-04-01
With the rapid development of the economy, logistics enterprises in China face a huge challenge: they generally lack core competitiveness, and their awareness of service innovation is not strong. Previous studies of the core competence of logistics enterprises have mainly taken a static perspective rather than exploring its dynamic evolution. The author therefore analyzes the influencing factors and the evolution process of the core competence of logistics enterprises, uses system dynamics to study the causal structure of this evolution, and constructs a system dynamics model of the evolution of the core competence of logistics enterprises that can be simulated with Vensim PLE. Effectiveness and sensitivity analyses indicate that the model fits the evolution process of the core competence of logistics enterprises, reveals the process and mechanism of this evolution, and provides management strategies for improving core competence. The construction and operation of the computer simulation model offers an effective method for studying the evolution of logistics enterprises' core competence.
Three-dimensional discrete element method simulation of core disking
NASA Astrophysics Data System (ADS)
Wu, Shunchuan; Wu, Haoyan; Kemeny, John
2018-04-01
The phenomenon of core disking is commonly seen in deep drilling of highly stressed regions in the Earth's crust. Given its close relationship with the in situ stress state, the presence and features of core disking can be used to interpret the stresses when traditional in situ stress measuring techniques are not available. The core disking process was simulated in this paper using the three-dimensional discrete element method software PFC3D (particle flow code). In particular, PFC3D is used to examine the evolution of fracture initiation, propagation and coalescence associated with core disking under various stress states. In this paper, four unresolved problems concerning core disking are investigated with a series of numerical simulations. These simulations also provide some verification of existing results by other researchers: (1) Core disking occurs when the maximum principal stress is about 6.5 times the tensile strength. (2) For most stress situations, core disking occurs from the outer surface, except for the thrust faulting stress regime, where the fractures were found to initiate from the inner part. (3) The anisotropy of the two horizontal principal stresses has an effect on the core disking morphology. (4) The thickness of core disk has a positive relationship with radial stress and a negative relationship with axial stresses.
Design and analysis of large-core single-mode windmill single crystal sapphire optical fiber
Cheng, Yujie; Hill, Cary; Liu, Bo; ...
2016-06-01
We present a large-core single-mode “windmill” single crystal sapphire optical fiber (SCSF) design, which exhibits single-mode operation by stripping off the higher-order modes (HOMs) while maintaining the fundamental mode. The “windmill” SCSF design was analyzed using the finite element analysis method, in which all the HOMs are leaky. The numerical simulation results show single-mode operation in the spectral range from 0.4 to 2 μm in the windmill SCSF, with an effective core diameter as large as 14 μm. Such a fiber is expected to improve the performance of many of the current sapphire fiber optic sensor structures.
The microwave properties of composites including lightweight core-shell ellipsoids
NASA Astrophysics Data System (ADS)
Yuan, Liming; Xu, Yonggang; Dai, Fei; Liao, Yi; Zhang, Deyuan
2016-12-01
In order to study the microwave properties of suspensions including lightweight core-shell ellipsoids, a calculation formula was obtained by substituting an equivalent ellipsoid for the original core-shell ellipsoid. Simulations for Fe-coated diatomite/paraffin suspensions were performed. The calculated results fitted the measured results very well when the inclusion concentration was no more than 15 vol%, but an obvious deviation appeared when the inclusion concentration reached 24 vol%. By comparison, the formula for less dilute suspensions was more suitable for calculating the electromagnetic parameters of suspensions, especially when the ratio between the electromagnetic parameters of the inclusion and those of the host medium was small.
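The equivalent-ellipsoid formula itself is not reproduced in the abstract; as a simplified stand-in for this class of mixing rules, the classical Maxwell-Garnett formula for spherical inclusions shows the structure. It is a dilute approximation, consistent with the abstract's finding that agreement degrades above roughly 15 vol%.

    #include <complex.h>

    /* Maxwell-Garnett effective permittivity for spherical inclusions of
       permittivity eps_i at volume fraction f in a host of permittivity
       eps_h (simplified stand-in; not the paper's equivalent-ellipsoid
       formula). */
    double complex maxwell_garnett(double complex eps_h, double complex eps_i,
                                   double f)
    {
        double complex d = eps_i - eps_h;
        return eps_h * (eps_i + 2.0 * eps_h + 2.0 * f * d)
                     / (eps_i + 2.0 * eps_h - f * d);
    }

For ellipsoids, the factors of 2 are replaced by terms involving the depolarization factors along each axis, and a core-shell inclusion is first reduced to an equivalent homogeneous particle, as the abstract describes.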
Multidimensional simulations of core-collapse supernovae with CHIMERA
NASA Astrophysics Data System (ADS)
Lentz, Eric J.; Bruenn, S. W.; Yakunin, K.; Endeve, E.; Blondin, J. M.; Harris, J. A.; Hix, W. R.; Marronetti, P.; Messer, O. B.; Mezzacappa, A.
2014-01-01
Core-collapse supernovae are driven by a multidimensional neutrino radiation hydrodynamic (RHD) engine, and full simulation requires at least axisymmetric (2D) and ultimately symmetry-free 3D RHD simulation. We present recent and ongoing work with our multidimensional RHD supernova code CHIMERA to understand the nature of the core-collapse explosion mechanism and its consequences. Recently completed simulations of 12-25 solar mass progenitors (Woosley & Heger 2007) in well resolved (0.7 degrees in latitude) 2D simulations exhibit robust explosions meeting the observationally expected explosion energy. We examine the role of hydrodynamic instabilities (standing accretion shock instability, neutrino driven convection, etc.) on the explosion dynamics and the development of the explosion energy. Ongoing 3D and 2D simulations examine the role that simulation resolution and the removal of the imposed axisymmetry have in the triggering and development of an explosion from stellar core collapse. Companion posters will explore the gravitational wave signals (Yakunin et al.) and nucleosynthesis (Harris et al.) of our simulations.
Gettman, Matthew T; Pereira, Claudio W; Lipsky, Katja; Wilson, Torrence; Arnold, Jacqueline J; Leibovich, Bradley C; Karnes, R Jeffrey; Dong, Yue
2009-03-01
Structured opportunities for learning communication, teamwork and laparoscopic principles are limited for urology residents. We evaluated and taught teamwork, communication and laparoscopic skills to urology residents in a simulated operating room. Scenarios related to laparoscopy (insufflator failure, carbon dioxide embolism) were developed using mannequins, urology residents and nurses. These scenarios were developed based on Accreditation Council for Graduate Medical Education core competencies and performed in a simulation center. Between the pretest scenario (insufflator failure) and the posttest scenario (carbon dioxide embolism), instruction was given on teamwork, communication and laparoscopic skills. A total of 19 urology residents took part in the training, which involved participation in at least 2 scenarios. Performance was evaluated using validated teamwork instruments, questionnaires and videotape analysis. Significant improvement was noted on validated teamwork instruments between scenarios based on resident (pretest 24, posttest 27, p = 0.01) and expert (pretest 16, posttest 25, p = 0.008) evaluation. Increased teamwork and team performance were also noted between scenarios on videotape analysis, with significant improvement for adherence to best practice (p = 0.01) and maintenance of positive rapport among team members (p = 0.02). Significant improvement in the setup of the laparoscopic procedure was observed (p = 0.01). Favorable face and content validity was noted for both scenarios. Teamwork, intraoperative communication and laparoscopic skills of urology residents improved during the high fidelity simulation course. Face and content validity of the individual sessions was favorable. In this study high fidelity simulation was effective for assessing and teaching Accreditation Council for Graduate Medical Education core competencies related to intraoperative communication, teamwork and laparoscopic skills.
SOLPS simulations of X-divertor in NSTX-U
NASA Astrophysics Data System (ADS)
Chen, Zhongping; Kotschenreuther, Mike; Mahajan, Swadesh
2017-10-01
The X-divertor (XD) geometry in NSTX-U has demonstrated, in SOLPS simulations, better performance than the standard divertor (SD) regarding detachment: achieving detachment at a lower upstream density and stabilizing the detachment front near the target. The benefit of such a localized front is that the power exhaust requirement can be satisfied without the radiation front encroaching on the core plasma. Our simulations also find that at similar states of detachment the XD outperforms the SD by reducing the heat fluxes to the target and maintaining higher upstream temperatures. These advantages are attributed to the unique geometric characteristic of the XD - poloidal flaring near the target. The detailed physical mechanisms behind the better XD performance found in the simulations will be examined. Work supported by US DOE under DE-FG02-04ER54742 and SC 0012956.
A dual-waveband dynamic IR scene projector based on DMD
NASA Astrophysics Data System (ADS)
Hu, Yu; Zheng, Ya-wei; Gao, Jiao-bo; Sun, Ke-feng; Li, Jun-na; Zhang, Lei; Zhang, Fang
2016-10-01
An infrared scene simulation system can simulate manifold objects and backgrounds to perform dynamic tests and to evaluate electro-optical (EO) detection systems in hardware-in-the-loop tests. The basic structure of a dual-waveband dynamic IR scene projector is introduced in this paper. The system's core device is an IR Digital Micro-mirror Device (DMD), and the radiant source is a miniature high-temperature IR plane blackbody. An IR collimation optical system whose transmission range includes 3-5 μm and 8-12 μm is designed as the projection optical system. Scene simulation software was developed with Visual C++ and Vega software tools, and a software flow chart is presented. The parameters and testing results of the system are given, and the system was applied with satisfactory performance in IR imaging simulation tests.
Computational Cosmology at the Bleeding Edge
NASA Astrophysics Data System (ADS)
Habib, Salman
2013-04-01
Large-area sky surveys are providing a wealth of cosmological information to address the mysteries of dark energy and dark matter. Observational probes based on tracking the formation of cosmic structure are essential to this effort, and rely crucially on N-body simulations that solve the Vlasov-Poisson equation in an expanding Universe. As statistical errors from survey observations continue to shrink, and cosmological probes increase in number and complexity, simulations are entering a new regime in their use as tools for scientific inference. Changes in supercomputer architectures provide another rationale for developing new parallel simulation and analysis capabilities that can scale to computational concurrency levels measured in the millions to billions. In this talk I will outline the motivations behind the development of the HACC (Hardware/Hybrid Accelerated Cosmology Code) extreme-scale cosmological simulation framework and describe its essential features. By exploiting a novel algorithmic structure that allows flexible tuning across diverse computer architectures, including accelerated and many-core systems, HACC has attained a performance of 14 PFlops on the IBM BG/Q Sequoia system at 69% of peak, using more than 1.5 million cores.
Parallel Adaptive High-Order CFD Simulations Characterizing SOFIA Cavity Acoustics
NASA Technical Reports Server (NTRS)
Barad, Michael F.; Brehm, Christoph; Kiris, Cetin C.; Biswas, Rupak
2016-01-01
This paper presents large-scale MPI-parallel computational fluid dynamics simulations for the Stratospheric Observatory for Infrared Astronomy (SOFIA). SOFIA is an airborne, 2.5-meter infrared telescope mounted in an open cavity in the aft fuselage of a Boeing 747SP. These simulations focus on how the unsteady flow field inside and over the cavity interferes with the optical path and mounting structure of the telescope. A temporally fourth-order accurate Runge-Kutta, and spatially fifth-order accurate WENO-5Z scheme was used to perform implicit large eddy simulations. An immersed boundary method provides automated gridding for complex geometries and natural coupling to a block-structured Cartesian adaptive mesh refinement framework. Strong scaling studies using NASA's Pleiades supercomputer with up to 32k CPU cores and 4 billion computational cells show excellent scaling. Dynamic load balancing based on execution time on individual AMR blocks addresses irregular numerical cost associated with blocks containing boundaries. Limits to scaling beyond 32k cores are identified, and targeted code optimizations are discussed.
Parallel Adaptive High-Order CFD Simulations Characterizing SOFIA Cavity Acoustics
NASA Technical Reports Server (NTRS)
Barad, Michael F.; Brehm, Christoph; Kiris, Cetin C.; Biswas, Rupak
2015-01-01
This paper presents large-scale MPI-parallel computational fluid dynamics simulations for the Stratospheric Observatory for Infrared Astronomy (SOFIA). SOFIA is an airborne, 2.5-meter infrared telescope mounted in an open cavity in the aft fuselage of a Boeing 747SP. These simulations focus on how the unsteady flow field inside and over the cavity interferes with the optical path and mounting structure of the telescope. A temporally fourth-order accurate Runge-Kutta, and a spatially fifth-order accurate WENO-5Z scheme were used to perform implicit large eddy simulations. An immersed boundary method provides automated gridding for complex geometries and natural coupling to a block-structured Cartesian adaptive mesh refinement framework. Strong scaling studies using NASA's Pleiades supercomputer with up to 32k CPU cores and 4 billion computational cells show excellent scaling. Dynamic load balancing based on execution time on individual AMR blocks addresses irregular numerical cost associated with blocks containing boundaries. Limits to scaling beyond 32k cores are identified, and targeted code optimizations are discussed.
Systematic void fraction studies with RELAP5, FRANCESCA and HECHAN
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stosic, Z.; Preusche, G.
1996-08-01
In enhancing the scope of standard thermal-hydraulic code applications beyond their capabilities, i.e. coupling with a one- and/or three-dimensional kinetics core model, the void fraction transferred from the thermal-hydraulics to the core model plays a determining role in the normal operating range and at high core flow, as the generated heat and axial power profiles are direct functions of the void distribution in the core. Hence, it is very important to know whether the void quality models in the programs to be coupled are compatible, to allow the interactive exchange of data based on these constitutive void-quality relations. The presented void fraction study is performed in order to give the basis for concluding whether a transient core simulation using the RELAP5 void fractions can calculate the axial power shapes adequately. To that end, the void fractions calculated with RELAP5 are compared with those calculated by the BWR safety code for licensing, FRANCESCA, and by the best-estimate model for pre- and post-dryout calculation in a BWR heated channel, HECHAN. In addition, a comparison with standard experimental void-quality benchmark tube data is performed for the HECHAN code.
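As an example of the constitutive void-quality relations at issue, a drift-flux (Zuber-Findlay) form is sketched below. This is illustrative of the family of correlations, not the specific models implemented in RELAP5, FRANCESCA, or HECHAN; parameter names follow common usage.

    /* Drift-flux void-quality relation:
         alpha = x / ( C0 * (x + (1-x) * rho_g/rho_f) + rho_g * vgj / G )
       x = flow quality, G = mass flux (kg/m^2 s), rho_g/rho_f = vapor and
       liquid densities, C0 = distribution parameter, vgj = drift velocity. */
    double void_fraction(double x, double G, double rho_g, double rho_f,
                         double C0, double vgj)
    {
        return x / (C0 * (x + (1.0 - x) * rho_g / rho_f) + rho_g * vgj / G);
    }

Two codes with different C0 and vgj choices return different void fractions for the same quality and mass flux, which is precisely the compatibility question this study addresses.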
Hollow-Core Photonic Band Gap Fibers for Particle Acceleration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Noble, Robert J.; Spencer, James E.; /SLAC
Photonic band gap (PBG) dielectric fibers with hollow cores are being studied both theoretically and experimentally for use as laser driven accelerator structures. The hollow core functions as both a longitudinal waveguide for the transverse-magnetic (TM) accelerating fields and a channel for the charged particles. The dielectric surrounding the core is permeated by a periodic array of smaller holes to confine the mode, forming a photonic crystal fiber in which modes exist in frequency pass-bands, separated by band gaps. The hollow core acts as a defect which breaks the crystal symmetry, and so-called defect, or trapped, modes having frequencies in the band gap will only propagate near the defect. We describe the design of 2-D hollow-core PBG fibers to support TM defect modes with high longitudinal fields and high characteristic impedance. Using as-built dimensions of industrially-made fibers, we perform a simulation analysis of the first prototype PBG fibers specifically designed to support speed-of-light TM modes.
Varshney, Rickul; Frenkiel, Saul; Nguyen, Lily H P; Young, Meredith; Del Maestro, Rolando; Zeitouni, Anthony; Tewfik, Marc A
2014-01-01
The technical challenges of endoscopic sinus surgery (ESS) and the high risk of complications support the development of alternative modalities to train residents in these procedures. Virtual reality simulation is becoming a useful tool for training the skills necessary for minimally invasive surgery; however, there are currently no ESS virtual reality simulators available with valid evidence supporting their use in resident education. Our aim was to develop a new rhinology simulator, as well as to define potential performance metrics for trainee assessment. The McGill simulator for endoscopic sinus surgery (MSESS), a new sinus surgery virtual reality simulator with haptic feedback, was developed (a collaboration between the McGill University Department of Otolaryngology-Head and Neck Surgery, the Montreal Neurologic Institute Simulation Lab, and the National Research Council of Canada). A panel of experts in education, performance assessment, rhinology, and skull base surgery convened to identify core technical abilities that would need to be taught by the simulator, as well as performance metrics to be developed and captured. The MSESS allows the user to perform basic sinus surgery skills, such as an ethmoidectomy and sphenoidotomy, through the use of endoscopic tools in a virtual nasal model. The performance metrics were developed by an expert panel and include measurements of safety, quality, and efficiency of the procedure. The MSESS incorporates novel technological advancements to create a realistic platform for trainees. To our knowledge, this is the first simulator to combine novel tools such as the endonasal wash and elaborate anatomic deformity with advanced performance metrics for ESS.
NASA Astrophysics Data System (ADS)
O’Connor, Evan P.; Couch, Sean M.
2018-02-01
We present results from simulations of core-collapse supernovae in FLASH using a newly implemented multidimensional neutrino transport scheme and a newly implemented general relativistic (GR) treatment of gravity. We use a two-moment method with an analytic closure (so-called M1 transport) for the neutrino transport. This transport is multienergy, multispecies, velocity dependent, and truly multidimensional, i.e., we do not assume the commonly used “ray-by-ray” approximation. Our GR gravity is implemented in our Newtonian hydrodynamics simulations via an effective relativistic potential that closely reproduces the GR structure of neutron stars and has been shown to match GR simulations of core collapse quite well. In axisymmetry, we simulate core-collapse supernovae with four different progenitor models in both Newtonian and GR gravity. We find that the more compact proto–neutron star structure realized in simulations with GR gravity gives higher neutrino luminosities and higher neutrino energies. These differences in turn give higher neutrino heating rates (upward of ∼20%–30% over the corresponding Newtonian gravity simulations) that increase the efficacy of the neutrino mechanism. Three of the four models successfully explode in the simulations assuming GREP gravity. In our Newtonian gravity simulations, two of the four models explode, but at times much later than observed in our GR gravity simulations. Our results, in both Newtonian and GR gravity, compare well with several other studies in the literature. These results conclusively show that the approximation of Newtonian gravity for simulating the core-collapse supernova central engine is not acceptable. We also simulate four additional models in GR gravity to highlight the growing disparity between parameterized 1D models of core-collapse supernovae and the current generation of 2D models.
Performance Evaluation of the Honeywell GG1308 Miniature Ring Laser Gyroscope
1993-01-01
The core of the facility is a Contraves-Goerz Model 57CD 2-axis motion simulator capable of highly precise position, rate, and acceleration control.
NASA Technical Reports Server (NTRS)
Kadambi, J. R.; Schneider, S. J.; Stewart, W. A.
1986-01-01
The natural circulation of a single phase fluid in a scale model of a pressurized water reactor system during a postulated degraded core accident is analyzed. The fluids utilized were water and SF6. The design of the reactor model and the similitude requirements are described. Four LDA tests were conducted: water with 28 kW of heat in the simulated core, with and without the participation of simulated steam generators; water with 28 kW of heat in the simulated core, with the participation of simulated steam generators and with cold upflow of 12 lbm/min from the lower plenum; and SF6 with 0.9 kW of heat in the simulated core and without the participation of the simulated steam generators. For the water tests, the velocity of the water in the center of the core increases with vertical height and continues to increase in the upper plenum. For SF6, it is observed that the velocities are an order of magnitude higher than those of water; however, the velocity patterns are similar.
NASA Astrophysics Data System (ADS)
Pradeep, K. R.; Thomas, A. M.; Basker, V. T.
2018-03-01
Structural health monitoring (SHM) is an essential component of futuristic civil, mechanical and aerospace structures. It detects damage in a system or gives warning about the degradation of a structure by evaluating performance parameters. This is achieved by the integration of sensors and actuators into the structure. This paper studies the damage detection process in a sandwich cantilever beam with integrated piezoelectric sensors and actuators. A possible skin-core debond at the root of the cantilever beam is simulated and compared with the undamaged case. The beam is actuated using piezoelectric actuators and performance differences are evaluated using polyvinylidene fluoride (PVDF) sensors. The methodology utilized is the voltage/strain response of the damaged versus undamaged beam against transient actuation. A finite element model of the piezo-beam is simulated in ANSYS using 8-noded coupled-field elements, whose nodal degrees of freedom are translations in the x and y directions and voltage. An aluminium sandwich beam with a length of 800 mm, a core thickness of 22.86 mm and a skin thickness of 0.3 mm is considered. The skin-core debond is simulated in the model as unmerged nodes. The reduction in the fundamental frequency of the damaged beam is found to be negligible, but the voltage response of the PVDF sensor under transient excitation shows a clearly visible change indicating the debond. A piezoelectric-based damage detection system is an effective tool for damage detection in aerospace and civil structural systems with inaccessible or critical locations, and it enables online monitoring since the power requirement is minimal.
NASA Astrophysics Data System (ADS)
Totz, Sonja; Eliseev, Alexey V.; Petri, Stefan; Flechsig, Michael; Caesar, Levke; Petoukhov, Vladimir; Coumou, Dim
2018-02-01
We present and validate a set of equations for representing the atmosphere's large-scale general circulation in an Earth system model of intermediate complexity (EMIC). These dynamical equations have been implemented in Aeolus 1.0, which is a statistical-dynamical atmosphere model (SDAM) and includes radiative transfer and cloud modules (Coumou et al., 2011; Eliseev et al., 2013). The statistical dynamical approach is computationally efficient and thus enables us to perform climate simulations at multimillennia timescales, which is a prime aim of our model development. Further, this computational efficiency enables us to scan large and high-dimensional parameter space to tune the model parameters, e.g., for sensitivity studies. Here, we present novel equations for the large-scale zonal-mean wind as well as those for planetary waves. Together with synoptic parameterization (as presented by Coumou et al., 2011), these form the mathematical description of the dynamical core of Aeolus 1.0. We optimize the dynamical core parameter values by tuning all relevant dynamical fields to ERA-Interim reanalysis data (1983-2009), forcing the dynamical core with prescribed surface temperature, surface humidity and cumulus cloud fraction. We test the model's performance in reproducing the seasonal cycle and the influence of the El Niño-Southern Oscillation (ENSO). We use a simulated annealing optimization algorithm, which approximates the global minimum of a high-dimensional function. With non-tuned parameter values, the model performs reasonably in terms of its representation of zonal-mean circulation, planetary waves and storm tracks. The simulated annealing optimization improves in particular the model's representation of the Northern Hemisphere jet stream and storm tracks as well as the Hadley circulation. The regions of high azonal wind velocities (planetary waves) are accurately captured for all validation experiments. The zonal-mean zonal wind and the integrated lower troposphere mass flux show good results in particular in the Northern Hemisphere. In the Southern Hemisphere, the model tends to produce too-weak zonal-mean zonal winds and a too-narrow Hadley circulation. We discuss possible reasons for these model biases as well as planned future model improvements and applications.
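A minimal sketch of the simulated annealing loop used for such parameter tuning, with a toy quadratic cost standing in for the model-to-reanalysis mismatch; all names and constants below are hypothetical, not Aeolus code.

    #include <math.h>
    #include <stdlib.h>

    /* Toy cost: sphere function; in practice this would run the dynamical
       core and measure the misfit against the reanalysis fields. */
    static double cost(const double *p, int n)
    {
        double s = 0.0;
        for (int i = 0; i < n; ++i) s += p[i] * p[i];
        return s;
    }

    static double urand(void) { return (double)rand() / RAND_MAX; }

    /* Simulated annealing: random local moves, Metropolis acceptance,
       geometric cooling from T down to Tmin with factor alpha < 1. */
    void anneal(double *p, int n, double T, double Tmin, double alpha)
    {
        double c = cost(p, n);
        for (; T > Tmin; T *= alpha) {
            for (int k = 0; k < 100; ++k) {
                int i = rand() % n;
                double old = p[i];
                p[i] += 0.1 * (2.0 * urand() - 1.0);   /* perturb one parameter */
                double cn = cost(p, n);
                if (cn <= c || urand() < exp((c - cn) / T))
                    c = cn;                            /* accept move */
                else
                    p[i] = old;                        /* reject, restore */
            }
        }
    }

Accepting occasional uphill moves at high temperature is what lets the method escape local minima of the high-dimensional cost surface before the cooling schedule freezes the search.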
Analysis of the return to power scenario following a LBLOCA in a PWR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Macian, R.; Tyler, T.N.; Mahaffy, J.H.
1995-09-01
The risk of reactivity accidents has been considered an important safety issue since the beginning of the nuclear power industry. In particular, several events leading to such scenarios for PWRs have been recognized and studied to assess the potential risk of fuel damage. The present paper analyzes one such event: the possible return to power during the reflooding phase following a LBLOCA. TRAC-PF1/MOD2 coupled with a three-dimensional neutronic model of the core based on the Nodal Expansion Method (NEM) was used to perform the analysis. The system computer model contains a detailed representation of a complete typical 4-loop PWR. Thus, the simulation can follow complex system interactions during reflooding, which may influence the neutronics feedback in the core. Analyses were made with core models based on cross sections generated by LEOPARD. A standard and a potentially more limiting case, with increased pressurizer and accumulator inventories, were run. In both simulations, the reactor reaches a stable state after the reflooding is completed. The lower core region, filled with cold water, generates enough power to boil part of the incoming liquid, thus preventing the core average liquid fraction from reaching a value high enough to cause a return to power. At the same time, the mass flow rate through the core is adequate to maintain the rod temperature well below the fuel damage limit.
Cardiovascular Deconditioning in Humans: Human Studies Core
NASA Technical Reports Server (NTRS)
Williams, Gordon
1999-01-01
Major cardiovascular problems, secondary to cardiovascular deconditioning, may occur on extended space missions. While it is generally assumed that the microgravity state is the primary cause of cardiovascular deconditioning, sleep deprivation and disruption of diurnal rhythms may also play an important role. Factors that could be modified by either or both of these perturbations include: autonomic function and short-term cardiovascular reflexes, vasoreactivity, circadian rhythm of cardiovascular hormones (specifically the renin-angiotensin system) and renal sodium handling and hormonal influences on that process, venous compliance, cardiac mass, and cardiac conduction processes. The purpose of the Human Studies Core is to provide the infrastructure to conduct human experiments which will allow for the assessment of the likely role of such factors in the space travel associated cardiovascular deconditioning process and to develop appropriate countermeasures. The Core takes advantage of a newly-created Intensive Physiologic Monitoring (IPM) Unit at the Brigham and Women's Hospital, Boston, MA, to perform these studies. The Core includes two general experimental protocols. The first protocol involves a head down tilt bed-rest study to simulate microgravity. The second protocol includes the addition of a disruption of circadian rhythms to the simulated microgravity environment. Before and after each of these environmental manipulations, the subjects will undergo acute stressors simulating changes in volume and/or stress, which could occur in space and on return to Earth. The subjects are maintained in a rigidly controlled environment with fixed light/dark cycles, activity pattern, and dietary intake of nutrients, fluids, ions and calories.
Towards Reconfigurable, Separable and Hard Real-Time Hybrid Simulation and Test Systems
NASA Astrophysics Data System (ADS)
Quartier, F.; Delatte, B.; Joubert, M.
2009-05-01
Formation flight needs several new technologies, new disciplines, new approaches and, above all, more concurrent engineering by more players. One of the problems to be addressed is more complex simulation and test systems that are easy to re-configure to include parts of the target hardware and that can provide sufficient power to handle simulation cores requiring one to two orders of magnitude more processing power than current technology provides. Critical technologies that are already addressed by CNES and Spacebel are study model reuse and simulator reconfigurability (Basiles), model portability (SMP2) and the federation of several simulators using HLA. Two more critical issues, both concerning time engineering and management, are addressed in ongoing R&D work by CNES and Spacebel and are covered by this paper. The first issue concerns separability (characterisation, identification and handling of separable subsystems) and its consequences for practical systems. Experiments on the Pleiades operational simulator have shown that precise simulation of instruments such as Doris and the Star Tracker can be added without significantly impacting overall performance. Improved time analysis leads to better system understanding and testability. The second issue concerns architectures for distributed hybrid simulator systems that provide hard real-time capabilities and can react with a relative time precision and jitter in the 10 to 50 µs range using mainstream PCs and mainstream operating systems. This opens a way to make smaller, economic hardware test systems that can be reconfigured into large hardware test systems without restarting development. Although such systems were considered next to impossible until now, distributed hard real-time systems are coming within reach when modern but mainstream electronics are used and when processor cores can be isolated and reserved for real-time tasks. This requires a complete rethinking of the overall system, but needs very few overall changes. Automated identification of potential parallel simulation capability might become possible in the not so distant future.
HACC: Simulating sky surveys on state-of-the-art supercomputing architectures
NASA Astrophysics Data System (ADS)
Habib, Salman; Pope, Adrian; Finkel, Hal; Frontiere, Nicholas; Heitmann, Katrin; Daniel, David; Fasel, Patricia; Morozov, Vitali; Zagaris, George; Peterka, Tom; Vishwanath, Venkatram; Lukić, Zarija; Sehrish, Saba; Liao, Wei-keng
2016-01-01
Current and future surveys of large-scale cosmic structure are associated with a massive and complex datastream to study, characterize, and ultimately understand the physics behind the two major components of the 'Dark Universe', dark energy and dark matter. In addition, the surveys also probe primordial perturbations and carry out fundamental measurements, such as determining the sum of neutrino masses. Large-scale simulations of structure formation in the Universe play a critical role in the interpretation of the data and extraction of the physics of interest. Just as survey instruments continue to grow in size and complexity, so do the supercomputers that enable these simulations. Here we report on HACC (Hardware/Hybrid Accelerated Cosmology Code), a recently developed and evolving cosmology N-body code framework, designed to run efficiently on diverse computing architectures and to scale to millions of cores and beyond. HACC can run on all current supercomputer architectures and supports a variety of programming models and algorithms. It has been demonstrated at scale on Cell- and GPU-accelerated systems, standard multi-core node clusters, and Blue Gene systems. HACC's design allows for ease of portability, and at the same time, high levels of sustained performance on the fastest supercomputers available. We present a description of the design philosophy of HACC, the underlying algorithms and code structure, and outline implementation details for several specific architectures. We show selected accuracy and performance results from some of the largest high resolution cosmological simulations so far performed, including benchmarks evolving more than 3.6 trillion particles.
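At the core of any such N-body framework is a time-symmetric particle update; a minimal kick-drift-kick leapfrog sketch is shown below. This is a generic illustration, not HACC's actual integrator, which additionally splits forces into long-range spectral and short-range parts; the types and names here are ours.

    /* Kick-drift-kick (leapfrog) step, as commonly used in cosmological
       N-body codes. accel() recomputes accelerations from positions and
       must also have been called once before the first step. */
    typedef struct { double x[3], v[3], a[3]; } Particle;

    void kdk_step(Particle *p, int n, double dt,
                  void (*accel)(Particle *, int))
    {
        for (int i = 0; i < n; ++i)               /* half kick */
            for (int d = 0; d < 3; ++d)
                p[i].v[d] += 0.5 * dt * p[i].a[d];
        for (int i = 0; i < n; ++i)               /* drift */
            for (int d = 0; d < 3; ++d)
                p[i].x[d] += dt * p[i].v[d];
        accel(p, n);                              /* new accelerations */
        for (int i = 0; i < n; ++i)               /* half kick */
            for (int d = 0; d < 3; ++d)
                p[i].v[d] += 0.5 * dt * p[i].a[d];
    }

The symplectic character of this splitting keeps long integrations stable, which matters when evolving trillions of particles over a Hubble time.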
Supplemental Thermal-Hydraulic Transient Analyses of BR2 in Support of Conversion to LEU Fuel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Licht, J.; Dionne, B.; Sikik, E.
2016-01-01
Belgian Reactor 2 (BR2) is a research and test reactor located in Mol, Belgium, and is primarily used for radioisotope production and materials testing. The Materials Management and Minimization (M3) Reactor Conversion Program of the National Nuclear Security Administration (NNSA) is supporting the conversion of the BR2 reactor from Highly Enriched Uranium (HEU) fuel to Low Enriched Uranium (LEU) fuel. The RELAP5/Mod 3.3 code has been used to perform transient thermal-hydraulic safety analyses of the BR2 reactor to support reactor conversion. A RELAP5 model of BR2 has been validated against select transient BR2 reactor experiments performed in 1963 by showing agreement with measured cladding temperatures. Following the validation, the RELAP5 model was then updated to represent the current use of the reactor, taking into account core configuration, neutronic parameters, trip settings, component changes, etc. Simulations of the 1963 experiments were repeated with this updated model to re-evaluate the boiling risks associated with the currently allowed maximum heat flux limit of 470 W/cm² and the temporary heat flux limit of 600 W/cm². This document provides analysis of additional transient simulations that are required as part of a modern BR2 safety analysis report (SAR). The additional simulations included in this report are the effect of pool temperature, reduced steady-state flow rate, in-pool loss of coolant accidents, and loss of external cooling. The simulations described in this document have been performed for both an HEU- and an LEU-fueled core.
Development of an Efficient CFD Model for Nuclear Thermal Thrust Chamber Assembly Design
NASA Technical Reports Server (NTRS)
Cheng, Gary; Ito, Yasushi; Ross, Doug; Chen, Yen-Sen; Wang, Ten-See
2007-01-01
The objective of this effort is to develop an efficient and accurate computational methodology to predict both detailed thermo-fluid environments and global characteristics of the internal ballistics for a hypothetical solid-core nuclear thermal thrust chamber assembly (NTTCA). Several numerical and multi-physics thermo-fluid models, such as real fluid, chemically reacting, turbulence, conjugate heat transfer, porosity, and power generation, were incorporated into an unstructured-grid, pressure-based computational fluid dynamics solver as the underlying computational methodology. The numerical simulations of the detailed thermo-fluid environment of a single flow element provide a mechanism to estimate the thermal stress and the possible occurrence of mid-section corrosion of the solid core. In addition, the numerical results of the detailed simulation were employed to fine-tune the porosity model to mimic the pressure drop and thermal load of the coolant flow through a single flow element. The tuned porosity model enables efficient simulation of the entire NTTCA system and evaluation of its performance during the design cycle.
Large-Scale Compute-Intensive Analysis via a Combined In-situ and Co-scheduling Workflow Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Messer, Bronson; Sewell, Christopher; Heitmann, Katrin
2015-01-01
Large-scale simulations can produce tens of terabytes of data per analysis cycle, complicating and limiting the efficiency of workflows. Traditionally, outputs are stored on the file system and analyzed in post-processing. With the rapidly increasing size and complexity of simulations, this approach faces an uncertain future. Trending techniques consist of performing the analysis in situ, utilizing the same resources as the simulation, and/or off-loading subsets of the data to a compute-intensive analysis system. We introduce an analysis framework developed for HACC, a cosmological N-body code, that uses both in situ and co-scheduling approaches for handling Petabyte-size outputs. An initial in situ step is used to reduce the amount of data to be analyzed, and to separate out the data-intensive tasks handled off-line. The analysis routines are implemented using the PISTON/VTK-m framework, allowing a single implementation of an algorithm that simultaneously targets a variety of GPU, multi-core, and many-core architectures.
Repeat work bouts increase thermal strain for Australian firefighters working in the heat
Walker, Anthony; Argus, Christos; Driller, Matthew; Rattray, Ben
2015-01-01
Background: Firefighters regularly re-enter fire scenes during long duration emergency events with limited rest between work bouts. It is unclear whether this practice is impacting on the safety of firefighters. Objectives: To evaluate the effects of multiple work bouts on firefighter physiology, strength, and cognitive performance when working in the heat. Methods: Seventy-seven urban firefighters completed two 20-minute simulated search and rescue tasks in a heat chamber (105 ± 5°C), separated by a 10-minute passive recovery. Core and skin temperature, rate of perceived exertion (RPE), thermal sensation (TS), grip strength, and cognitive changes between simulations were evaluated. Results: Significant increases in core temperature and perceptual responses along with declines in strength were observed following the second simulation. No differences for other measures were observed. Conclusions: A significant increase in thermal strain was observed when firefighters re-entered a hot working environment. We recommend that longer recovery periods or active cooling methods be employed prior to re-entry. PMID:25849044
Incorporation of a two metre long PET scanner in STIR
NASA Astrophysics Data System (ADS)
Tsoumpas, C.; Brain, C.; Dyke, T.; Gold, D.
2015-09-01
The Explorer project aims to investigate the potential benefits of a total-body 2 metre long PET scanner. The following investigation incorporates this scanner in the STIR library and demonstrates the capabilities and weaknesses of existing reconstruction (FBP and OSEM) and single scatter simulation algorithms. It was found that sensible images are reconstructed, but at the expense of high memory and processing time demands: FBP requires 4 hours on a single core; OSEM requires 2 hours per iteration if run in parallel on 15 cores of a high performance computer. The single scatter simulation algorithm shows that on a short scale, up to a fifth of the scanner length, the assumption that the scatter between direct rings is similar to the scatter between the oblique rings is approximately valid. However, for more extreme cases this assumption is no longer valid, which illustrates that consideration of the oblique rings within the single scatter simulation will be necessary if this scatter correction is the method of choice.
MUTILS - a set of efficient modeling tools for multi-core CPUs implemented in MEX
NASA Astrophysics Data System (ADS)
Krotkiewski, Marcin; Dabrowski, Marcin
2013-04-01
The need for computational performance is common in scientific applications, and in particular in numerical simulations, where high resolution models require efficient processing of large amounts of data. Especially in the context of geological problems, the need to increase the model resolution to resolve physical and geometrical complexities seems to have no limits. Alas, the performance of new generations of CPUs no longer improves simply through increased clock speeds. Current industrial trends are to increase the number of computational cores. As a result, parallel implementations are required in order to fully utilize the potential of new processors and to study more complex models. We target simulations on small to medium scale shared memory computers: from laptops and desktop PCs with ~8 CPU cores and up to tens of GB of memory, to high-end servers with ~50 CPU cores and hundreds of GB of memory. In this setting MATLAB is often the environment of choice for scientists who want to implement their own models with little effort. It is a useful general purpose mathematical software package, but due to its versatility some of its functionality is not as efficient as it could be. In particular, the challenges of modern multi-core architectures are not fully addressed. We have developed MILAMIN 2 - an efficient FEM modeling environment written in native MATLAB. Amongst others, MILAMIN provides functions to define model geometry, generate and convert structured and unstructured meshes (also through interfaces to external mesh generators), compute element and system matrices, apply boundary conditions, solve the system of linear equations, address non-linear and transient problems, and perform post-processing. MILAMIN strives to combine ease of code development with computational efficiency. Where possible, the code is optimized and/or parallelized within the MATLAB framework. Native MATLAB is augmented with the MUTILS library - a set of MEX functions that implement the computationally intensive, performance critical parts of the code, which we have identified to be bottlenecks. Here, we discuss the functionality and performance of the MUTILS library. Currently, it includes:
1. time and memory efficient assembly of sparse matrices for FEM simulations
2. parallel sparse matrix-vector product with optimizations specific to symmetric matrices and multiple degrees of freedom per node
3. parallel point-in-triangle and point-in-tetrahedron location for unstructured, adaptive 2D and 3D meshes (useful for 'marker in cell' type methods)
4. parallel FEM interpolation for 2D and 3D meshes of elements of different types and orders, and for different numbers of degrees of freedom per node
5. a stand-alone MEX implementation of the Conjugate Gradients iterative solver
6. an interface to METIS graph partitioning and a fast implementation of RCM reordering
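The triplet-based sparse assembly of item 1 is a standard technique; the sketch below illustrates the idea in Python/SciPy (an illustration only, not the MUTILS API, which is MATLAB/MEX; the names assemble and element_matrix are hypothetical):

    from scipy.sparse import coo_matrix

    def assemble(elems, n_dofs, element_matrix):
        # elems: (n_el, n_en) connectivity array; element_matrix(e) returns
        # the (n_en, n_en) stiffness matrix of element e.
        rows, cols, vals = [], [], []
        for e, conn in enumerate(elems):
            Ke = element_matrix(e)
            for a, A in enumerate(conn):
                for b, B in enumerate(conn):
                    rows.append(A); cols.append(B); vals.append(Ke[a, b])
        # Duplicate (row, col) triplets are summed on conversion to CSR.
        return coo_matrix((vals, (rows, cols)), shape=(n_dofs, n_dofs)).tocsr()

Collecting all triplets before building the matrix, rather than inserting entries into a sparse matrix one by one, is what makes this style of assembly time- and memory-efficient.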
Characterizing core-periphery structure of complex network by h-core and fingerprint curve
NASA Astrophysics Data System (ADS)
Li, Simon S.; Ye, Adam Y.; Qi, Eric P.; Stanley, H. Eugene; Ye, Fred Y.
2018-02-01
It is proposed that the core-periphery structure of complex networks can be simulated by h-cores and fingerprint curves. While the features of the core structure are characterized by the h-core, the features of the periphery structure are visualized by a rose or spiral curve, the fingerprint curve, linked to entire-network parameters. It is suggested that a complex network can be approached by h-core and rose curves as a first-order Fourier approach, where the core-periphery structure is characterized by five parameters: network h-index, network radius, degree power, network density, and average clustering coefficient. The simulation thus resembles a Fourier-like analysis.
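As a concrete reading of the network h-index parameter, the sketch below computes the largest h such that at least h nodes have degree at least h (a hedged analogy with the bibliometric h-index; the paper's exact definition may differ):

    def network_h_index(degrees):
        # degrees: iterable of node degrees of the network.
        degs = sorted(degrees, reverse=True)
        h = 0
        for i, d in enumerate(degs, start=1):
            if d >= i:
                h = i
            else:
                break
        return h

    # Example: degrees [5, 4, 4, 2, 1] give h = 3 (three nodes of degree >= 3);
    # the nodes of degree >= h would then form the h-core of the network.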
Reumann, Matthias; Fitch, Blake G; Rayshubskiy, Aleksandr; Pitman, Michael C; Rice, John J
2011-06-01
We present an orthogonal recursive bisection algorithm that hierarchically segments the anatomical model structure into subvolumes that are distributed to cores. The anatomy is derived from the Visible Human Project, with electrophysiology based on the FitzHugh-Nagumo (FHN) and ten Tusscher (TT04) models with monodomain diffusion. Benchmark simulations with up to 16,384 and 32,768 cores on IBM Blue Gene/P and L supercomputers for both the FHN and TT04 models show good load balancing with almost perfect speedup factors that are close to linear in the number of cores. Hence, strong scaling is demonstrated. With 32,768 cores, a 1000 ms simulation of a full heart beat requires about 6.5 min of wall clock time for the FHN model. For the largest machine partitions, the simulations execute at a rate of 0.548 s (BG/P) and 0.394 s (BG/L) of wall clock time per 1 ms of simulation time. To our knowledge, these simulations show strong scaling to substantially higher numbers of cores than reported previously for organ-level simulation of the heart, thus significantly reducing run times. The ability to reduce run times could play a critical role in enabling wider use of cardiac models in research and clinical applications.
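A minimal sketch of the geometric idea behind orthogonal recursive bisection follows (our illustration under simplifying assumptions: it cuts each box at its midpoint, whereas a production load balancer such as the one described would place the cut to equalize work across subvolumes):

    def orb(box, n_parts):
        # box = ((x0, x1), (y0, y1), (z0, z1)); returns n_parts subvolumes
        # obtained by recursively halving the longest axis.
        if n_parts == 1:
            return [box]
        axis = max(range(3), key=lambda a: box[a][1] - box[a][0])
        lo, hi = box[axis]
        mid = 0.5 * (lo + hi)
        left, right = list(box), list(box)
        left[axis] = (lo, mid)
        right[axis] = (mid, hi)
        half = n_parts // 2
        return orb(tuple(left), half) + orb(tuple(right), n_parts - half)

    # orb(((0, 1), (0, 1), (0, 1)), 8) yields eight octants, one per core.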
Diffusive Transport and Structural Properties of Liquid Iron Alloys at High Pressure
NASA Astrophysics Data System (ADS)
Posner, E.; Rubie, D. C.; Steinle-Neumann, G.; Frost, D. J.
2017-12-01
Diffusive transport properties of liquid iron alloys at high pressures (P) and temperatures (T) place important kinetic constraints on processes related to the origin and evolution of planetary cores. Earth's core composition is largely controlled by the extent of chemical equilibration achieved between liquid metal bodies and a silicate magma ocean during core formation, which can be estimated using chemical diffusion data. In order to estimate the time and length scales of metal-silicate chemical equilibration, we have measured chemical diffusion rates of Si, O and Cr in liquid iron over the P-T range of 1-18 GPa and 1873-2643 K using a multi-anvil apparatus. We have also performed first-principles molecular dynamics simulations of comparable binary liquid compositions, in addition to pure liquid Fe, over a much wider P-T range (1 bar-330 GPa, 2200-5500 K), in order both to validate the simulation results against experimental data at conditions accessible in the laboratory and to extend our dataset to conditions of the Earth's core. Over the entire P-T range studied using both methods, diffusion coefficients are described consistently and well by an exponential function of the homologous temperature. Si, Cr and Fe diffusivities of approximately 5 × 10⁻⁹ m² s⁻¹ are constant along the melting curve from ambient to core pressures, while oxygen diffusion is 2-3 times faster. Our results indicate that in order for the composition of the Earth's core to represent chemical equilibrium, impactor cores must have broken up into liquid droplets no larger than a few tens of cm. Structural properties, analyzed using partial radial distribution functions from the molecular dynamics simulations, reveal a pressure-induced structural change in liquid Fe0.96O0.04 at densities of 8 g cm⁻³, in agreement with previous experimental studies. For densities above 8 g cm⁻³, the liquid is essentially close packed with a local CsCl-like (B2) packing of Fe around O under conditions of the Earth's core.
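The homologous temperature relation invoked above is commonly written in the form below (a hedged reconstruction: the symbols D_0 and g are our labels for the fitted pre-exponential factor and a dimensionless constant, with T_m(P) the pressure-dependent melting temperature; the paper's exact notation may differ):

    D(P, T) = D_0 \exp\!\left( -g \, \frac{T_m(P)}{T} \right)

Along the melting curve, where T = T_m(P), this reduces to the constant D_0 e^{-g}, consistent with the reported near-constant Si, Cr and Fe diffusivities of about 5 × 10⁻⁹ m² s⁻¹ from ambient to core pressures.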
Digital core based transmitted ultrasonic wave simulation and velocity accuracy analysis
NASA Astrophysics Data System (ADS)
Zhu, Wei; Shan, Rui
2016-06-01
Transmitted ultrasonic wave simulation (TUWS) in a digital core is one of the important elements of digital rock physics and is used to study wave propagation in porous cores and calculate equivalent velocity. When simulating wave propagation in a 3D digital core, two additional layers are attached to the two surfaces perpendicular to the wave direction, and one planar wave source and two receiver arrays are properly installed. After source excitation, the two receivers record the incident and transmitted waves of the digital rock. The wave propagation velocity, which is taken as the velocity of the digital core, is computed from the picked peak-time difference between the two recorded waves. To evaluate the accuracy of TUWS, a digital core is fully saturated with gas, oil, and water to calculate the corresponding velocities. The velocities increase with decreasing wave frequencies in the simulation frequency band, which is considered to be the result of scattering. When the pore fluids are varied from gas to oil and finally to water, the velocity-variation characteristics between the different frequencies are similar, approximately following the variation law of velocities obtained from linear elastic statics simulation (LESS), although their absolute values are different. Although LESS is widely used, the results of this paper show that transmitted ultrasonic wave simulation has high relative precision.
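The velocity pick described above reduces to a one-line computation once the two traces are recorded; a minimal sketch under stated assumptions (peaks picked as the absolute maxima; the function name is ours):

    import numpy as np

    def core_velocity(incident, transmitted, dt, length):
        # incident/transmitted: sampled waveforms at the two receiver arrays;
        # dt: sample interval in s; length: propagation distance through the core in m.
        t_inc = np.argmax(np.abs(incident)) * dt
        t_trn = np.argmax(np.abs(transmitted)) * dt
        return length / (t_trn - t_inc)  # equivalent velocity of the digital core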
Performance investigation on DCSFCL considering different magnetic materials
NASA Astrophysics Data System (ADS)
Yuan, Jiaxin; Zhou, Hang; Zhong, Yongheng; Gan, Pengcheng; Gao, Yanhui; Muramatsu, Kazuhiro; Du, Zhiye; Chen, Baichao
2018-05-01
In order to protect high voltage direct current (HVDC) systems from the destructive consequences of fault currents, a novel concept for an HVDC system fault current limiter (DCSFCL) was proposed previously. Since the DCSFCL is based on saturable-core reactor theory, the iron core is key to its final performance. Therefore, three typical kinds of soft magnetic materials were chosen to determine their impact on the performance of the DCSFCL. The different characteristics of the materials were compared and their theoretical deductions carried out. Meanwhile, 3D models applying the three materials were built separately, and finite element analysis simulations were performed to compare the results and further verify the assumptions. It turns out that materials with a large saturation flux density Bs, like silicon steel, and a short demagnetization time, like ferrite, might be the best choice for the DCSFCL, which can be a future research direction for magnetic materials.
NASA Astrophysics Data System (ADS)
Obergaulinger, M.; Aloy, M. A.; Dimmelmeier, H.; Müller, E.
2006-10-01
We continue our investigations of the magnetorotational collapse of stellar cores by discussing simulations performed with a modified Newtonian gravitational potential that mimics general relativistic effects. The approximate TOV gravitational potential used in our simulations captures several basic features of fully relativistic simulations quite well. In particular, it is able to correctly reproduce the behavior of models that show a qualitative change both of the dynamics and the gravitational wave signal when switching from Newtonian to fully relativistic simulations. For models where the dynamics and gravitational wave signals are already captured qualitatively correctly by a Newtonian potential, the results of the Newtonian and the approximate TOV models differ quantitatively. The collapse proceeds to higher densities with the approximate TOV potential, allowing for a more efficient amplification of the magnetic field by differential rotation. The strength of the saturation fields (∼10¹⁵ G at the surface of the inner core) is a factor of two to three higher than in Newtonian gravity. Due to the more efficient field amplification, the influence of magnetic fields is considerably more pronounced than in the Newtonian case for some of the models. As in the Newtonian case, sufficiently strong magnetic fields slow down the core's rotation and trigger a secular contraction phase to higher densities. More clearly than in Newtonian models, the collapsed cores of these models exhibit two different kinds of shock generation. Due to magnetic braking, a first shock wave created during the initial centrifugal bounce at subnuclear densities does not suffice for ejecting any mass, and the temporarily stabilized core continues to collapse to supranuclear densities. Another stronger shock wave is generated during the second bounce as the core exceeds nuclear matter density. The gravitational wave signal of these models does not fit into the standard classification. Therefore, in the first paper of this series we introduced a new type of gravitational wave signal, which we call type IV or “magnetic type”. This signal type is more frequent for the approximate relativistic potential than for the Newtonian one. Most of our weak-field models are marginally detectable with the current LIGO interferometer for a source located at a distance of 10 kpc. Strongly magnetized models emit a substantial fraction of their GW power at very low frequencies. A flat spectrum between 10 Hz and ≲100 kHz denotes the generation of a jet-like hydromagnetic outflow.
Williams-Bell, F. Michael; Aisbett, Brad; Murphy, Bernadette A.; Larsen, Brianna
2017-01-01
Background: The severity of wildland fires is increasing due to continually hotter and drier summers. Firefighters are required to make life altering decisions on the fireground, which requires analytical thinking, problem solving, and situational awareness. This study aimed to determine the effects of very hot (45°C; HOT) conditions on cognitive function following periods of simulated wildfire suppression work when compared to a temperate environment (18°C; CON). Methods: Ten male volunteer firefighters intermittently performed a simulated fireground task for 3 h in both the CON and HOT environments, with cognitive function tests (paired associates learning and spatial span) assessed at baseline (cog 1) and during the final 20-min of each hour (cog 2, 3, and 4). Reaction time was also assessed at cog 1 and cog 4. Pre- and post- body mass were recorded, and core and skin temperature were measured continuously throughout the protocol. Results: There were no differences between the CON and HOT trials for any of the cognitive assessments, regardless of complexity. While core temperature reached 38.7°C in the HOT (compared to only 37.5°C in the CON; p < 0.01), core temperature declined during the cognitive assessments in both conditions (at a rate of −0.15 ± 0.20°C·hr⁻¹ and −0.63 ± 0.12°C·hr⁻¹ in the HOT and CON trial respectively). Firefighters also maintained their pre-exercise body mass in both conditions, indicating euhydration. Conclusions: It is likely that this maintenance of euhydration and the relative drop in core temperature experienced between physical work bouts was responsible for the preservation of firefighters' cognitive function in the present study. PMID:29114230
Modeling Core Collapse Supernovae
NASA Astrophysics Data System (ADS)
Mezzacappa, Anthony
2017-01-01
Core collapse supernovae, or the death throes of massive stars, are general relativistic, neutrino-magneto-hydrodynamic events. The core collapse supernova mechanism is still not in hand, though key components have been illuminated, and the potential for multiple mechanisms for different progenitors exists. Core collapse supernovae are the single most important source of elements in the Universe, and serve other critical roles in galactic chemical and thermal evolution, the birth of neutron stars, pulsars, and stellar mass black holes, the production of a subclass of gamma-ray bursts, and as potential cosmic laboratories for fundamental nuclear and particle physics. Given this, the so-called "supernova problem" is one of the most important unsolved problems in astrophysics. It has been fifty years since the first numerical simulations of core collapse supernovae were performed. Progress in the past decade, and especially within the past five years, has been exponential, yet much work remains. Spherically symmetric simulations over nearly four decades laid the foundation for this progress. Two-dimensional modeling that assumes axial symmetry is maturing. And three-dimensional modeling, while in its infancy, has begun in earnest. I will present some of the recent work from the "Oak Ridge" group, and will discuss this work in the context of the broader work by other researchers in the field. I will then point to future requirements and challenges. Connections with other experimental, observational, and theoretical efforts will be discussed, as well.
Convective cooling in a pool-type research reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sipaun, Susan, E-mail: susan@nm.gov.my; Usman, Shoaib, E-mail: usmans@mst.edu
2016-01-22
A reactor produces heat arising from fission reactions in the nuclear core. In the Missouri University of Science and Technology research reactor (MSTR), this heat is removed by natural convection, where the coolant/moderator is demineralised water. Heat energy is transferred from the core into the coolant, and the heated water eventually evaporates from the open pool surface. A secondary cooling system was installed to actively remove excess heat arising from prolonged reactor operations. The nuclear core consists of uranium silicide aluminium dispersion fuel (U₃Si₂Al) in the form of rectangular plates. Gaps between the plates allow coolant to pass through and carry away heat. A study was carried out to map out heat flow as well as to predict the system's performance via STAR-CCM+ simulation. The core was approximated as porous media with a porosity of 0.7027. The reactor is rated at 200 kW and the total heat density is approximately 1.07×10⁷ W m⁻³. An MSTR model consisting of 20% of MSTR's nuclear core in a third of the reactor pool was developed. At 35% pump capacity, the simulation results for the MSTR model showed that water is drawn out of the pool at a rate of 1.28 kg s⁻¹ from the 4" pipe, and predicted a pool surface temperature not exceeding 30°C.
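As a quick consistency check of the quoted figures (our back-of-envelope sketch, not part of the study), the rated power and total heat density imply the heated core volume:

    power = 200e3          # rated reactor power, W
    heat_density = 1.07e7  # total heat density, W m^-3
    volume = power / heat_density
    print(round(volume, 4))  # ~0.0187 m^3 of heated core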
Salko, Robert K.; Schmidt, Rodney C.; Avramova, Maria N.
2014-11-23
This study describes major improvements to the computational infrastructure of the CTF subchannel code so that full-core, pincell-resolved (i.e., one computational subchannel per real bundle flow channel) simulations can now be performed in much shorter run-times, either in stand-alone mode or as part of coupled-code multi-physics calculations. These improvements support the goals of the Department of Energy Consortium for Advanced Simulation of Light Water Reactors (CASL) Energy Innovation Hub to develop high-fidelity multi-physics simulation tools for nuclear energy design and analysis.
NASA Astrophysics Data System (ADS)
Zverev, V. V.; Izmozherov, I. M.; Filippov, B. N.
2018-02-01
Three-dimensional computer simulation of dynamic processes in a moving domain boundary separating domains in a soft magnetic uniaxial film with planar anisotropy is performed by numerical solution of the Landau-Lifshitz-Gilbert equations. The developed visualization methods are used to establish the connection between the motion of surface vortices and antivortices, singular (Bloch) points, and core lines of intrafilm vortex structures. A relation between the character of the magnetization dynamics and the film thickness is found. Analytical models of spatial vortex structures are constructed to imitate the topological properties of the structures observed in the micromagnetic simulations.
High-performance, scalable optical network-on-chip architectures
NASA Astrophysics Data System (ADS)
Tan, Xianfang
The rapid advance of technology enables a large number of processing cores to be integrated into a single chip, called a Chip Multiprocessor (CMP) or a Multiprocessor System-on-Chip (MPSoC). The on-chip interconnection network, which is the communication infrastructure for these processing cores, plays a central role in a many-core system. With the continuously increasing complexity of many-core systems, traditional metallic-wired electronic networks-on-chip (NoC) have become a bottleneck because of the unbearable latency of data transmission and extremely high on-chip energy consumption. Optical networks-on-chip (ONoC) have been proposed as a promising alternative paradigm to electronic NoC, with the benefits of optical signaling such as extremely high bandwidth, negligible latency, and low power consumption. This dissertation focuses on the design of high-performance and scalable ONoC architectures, and its contributions are highlighted as follows:
1. A micro-ring resonator (MRR)-based Generic Wavelength-routed Optical Router (GWOR) is proposed, along with a method for developing a GWOR of any size. GWOR is a scalable non-blocking ONoC architecture with a simple structure, low cost and high power efficiency compared to existing ONoC designs.
2. To expand the bandwidth and improve the fault tolerance of the GWOR, a redundant GWOR architecture is designed by cascading different types of GWORs into one network.
3. A redundant GWOR built with MRR-based comb switches is proposed. Comb switches can expand the bandwidth while keeping the topology of the GWOR unchanged by replacing the general MRRs with comb switches.
4. A butterfly fat tree (BFT)-based hybrid optoelectronic NoC (HONoC) architecture is developed, in which GWORs are used for global communication and electronic routers for local communication. The proposed HONoC uses fewer electronic routers and links than its electronic BFT-based NoC counterpart. It takes advantage of the GWOR for optical communication and of the BFT for non-uniform traffic and three-dimensional (3D) implementation.
5. A cycle-accurate NoC simulator is developed to evaluate the performance of the proposed HONoC architectures. It is a comprehensive platform that can simulate both electronic and optical NoCs. HONoC architectures of different sizes are evaluated in terms of throughput, latency and energy dissipation. Simulation results confirm that HONoC achieves good network performance with lower power consumption.
Parallel Agent-Based Simulations on Clusters of GPUs and Multi-Core Processors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aaby, Brandon G; Perumalla, Kalyan S; Seal, Sudip K
2010-01-01
An effective latency-hiding mechanism is presented in the parallelization of agent-based model simulations (ABMS) with millions of agents. The mechanism is designed to accommodate the hierarchical organization as well as heterogeneity of current state-of-the-art parallel computing platforms. We use it to explore the computation vs. communication trade-off continuum available with the deep computational and memory hierarchies of extant platforms and present a novel analytical model of the tradeoff. We describe our implementation and report preliminary performance results on two distinct parallel platforms suitable for ABMS: CUDA threads on multiple, networked graphical processing units (GPUs), and pthreads on multi-core processors. Message Passing Interface (MPI) is used for inter-GPU as well as inter-socket communication on a cluster of multiple GPUs and multi-core processors. Results indicate the benefits of our latency-hiding scheme, delivering as much as over 100-fold improvement in runtime for certain benchmark ABMS application scenarios with several million agents. This speed improvement is obtained on our system that is already two to three orders of magnitude faster on one GPU than an equivalent CPU-based execution in a popular simulator in Java. Thus, the overall execution of our current work is over four orders of magnitude faster when executed on multiple GPUs.
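The computation-versus-communication trade-off explored above can be illustrated with a toy overlap model (our sketch, not the paper's analytical model): with latency hiding, a step costs roughly the maximum of the compute and communication times rather than their sum.

    def step_time(t_compute, t_comm, overlap=True):
        # Idealized cost of one simulation step, in the same time units.
        return max(t_compute, t_comm) if overlap else t_compute + t_comm

    # Example: 10 ms of agent updates and 8 ms of halo exchange cost ~10 ms
    # per step when overlapped, versus 18 ms when serialized.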
HEAVY AND THERMAL OIL RECOVERY PRODUCTION MECHANISMS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anthony R. Kovscek
2003-04-01
This technical progress report describes work performed from January 1 through March 31, 2003 for the project ''Heavy and Thermal Oil Recovery Production Mechanisms,'' DE-FC26-00BC15311. In this project, a broad spectrum of research is undertaken related to thermal and heavy-oil recovery. The research tools and techniques span from pore-level imaging of multiphase fluid flow to definition of reservoir-scale features through streamline-based history matching techniques. During this period, previous analysis of experimental data regarding multidimensional imbibition to obtain shape factors appropriate for dual-porosity simulation was verified by comparison among analytic, dual-porosity simulation, and fine-grid simulation. We continued to study the mechanisms by which oil is produced from fractured porous media at high pressure and high temperature. Temperature has a beneficial effect on recovery and reduces residual oil saturation. A new experiment was conducted on diatomite core. Significantly, we show that elevated temperature induces fines release in sandstone cores and this behavior may be linked to wettability. Our work in the area of primary production of heavy oil continues with field cores and crude oil. On the topic of reservoir definition, work continued on developing techniques that integrate production history into reservoir models using streamline-based properties.
Inviscid and Viscous CFD Analysis of Booster Separation for the Space Launch System Vehicle
NASA Technical Reports Server (NTRS)
Dalle, Derek J.; Rogers, Stuart E.; Chan, William M.; Lee, Henry C.
2016-01-01
This paper presents details of Computational Fluid Dynamics (CFD) simulations of the Space Launch System during solid-rocket booster separation using the Cart3D inviscid and Overflow viscous CFD codes. The discussion addresses the use of multiple data sources of computational aerodynamics, experimental aerodynamics, and trajectory simulations for this critical phase of flight. Comparisons are shown between Cart3D simulations and a wind tunnel test performed at NASA Langley Research Center's Unitary Plan Wind Tunnel, and further comparisons are shown between Cart3D and viscous Overflow solutions for the flight vehicle. The Space Launch System (SLS) is a new exploration-class launch vehicle currently in development that includes two Solid Rocket Boosters (SRBs) modified from Space Shuttle hardware. These SRBs must separate from the SLS core during a phase of flight where aerodynamic loads are nontrivial. The main challenges for creating a separation aerodynamic database are the large number of independent variables (including orientation of the core, relative position and orientation of the boosters, and rocket thrust levels) and the complex flow caused by exhaust plumes of the booster separation motors (BSMs), which are small rockets designed to push the boosters away from the core by firing partially in the direction opposite to the motion of the vehicle.
Scalable nuclear density functional theory with Sky3D
NASA Astrophysics Data System (ADS)
Afibuzzaman, Md; Schuetrumpf, Bastian; Aktulga, Hasan Metin
2018-02-01
In nuclear astrophysics, quantum simulations of large inhomogeneous dense systems as they appear in the crusts of neutron stars present big challenges. The number of particles in a simulation with periodic boundary conditions is strongly limited due to the immense computational cost of the quantum methods. In this paper, we describe techniques for an efficient and scalable parallel implementation of Sky3D, a nuclear density functional theory solver that operates on an equidistant grid. Presented techniques allow Sky3D to achieve good scaling and high performance on a large number of cores, as demonstrated through detailed performance analysis on a Cray XC40 supercomputer.
A Virtual Rat for Simulating Environmental and Exertional Heat Stress
2014-10-02
unsuitable for accurately determining the spatiotemporal temperature distribution in the animal due to heat stress and for performing mechanistic analysis ... possible in the original experiments. Finally, we performed additional simulations using the virtual rat to facilitate comparative analysis of the ... capability of the virtual rat to account for the circadian rhythmicity in core temperatures during an increase in the external temperature from 22
A flooding induced station blackout analysis for a pressurized water reactor using the RISMC toolkit
Mandelli, Diego; Prescott, Steven; Smith, Curtis; ...
2015-05-17
In this paper we evaluate the impact of a power uprate on a pressurized water reactor (PWR) for a tsunami-induced flooding test case. This analysis is performed using the RISMC toolkit: the RELAP-7 and RAVEN codes. RELAP-7 is the new generation of system analysis codes that is responsible for simulating the thermal-hydraulic dynamics of PWR and boiling water reactor systems. RAVEN has two capabilities: to act as a controller of the RELAP-7 simulation (e.g., component/system activation) and to perform statistical analyses. In our case, the simulation of the flooding is performed by using an advanced smooth particle hydrodynamics code called NEUTRINO. The obtained results allow the user to investigate and quantify the impact of timing and sequencing of events on system safety. The impact of power uprate is determined in terms of both core damage probability and safety margins.
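The statistical side of such an analysis can be sketched as simple Monte Carlo over uncertain event timings (a hedged illustration only; RAVEN's actual sampling strategies are richer, and the flood-time distribution below is a placeholder):

    import random

    def core_damage_probability(run_scenario, n_samples=1000, seed=0):
        # run_scenario(flood_time_s) -> True if that run ends in core damage.
        rng = random.Random(seed)
        failures = sum(
            run_scenario(rng.uniform(0.0, 3600.0))  # placeholder flood-time model
            for _ in range(n_samples)
        )
        return failures / n_samples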
Simulating an Exploding Fission-Bomb Core
NASA Astrophysics Data System (ADS)
Reed, Cameron
2016-03-01
A time-dependent desktop-computer simulation of the core of an exploding fission bomb (nuclear weapon) has been developed. The simulation models a core comprising a mixture of two isotopes: a fissile one (such as U-235) and an inert one (such as U-238) that captures neutrons and removes them from circulation. The user sets the enrichment percentage and scattering and fission cross-sections of the fissile isotope, the capture cross-section of the inert isotope, the number of neutrons liberated per fission, the number of "initiator" neutrons, the radius of the core, and the neutron-reflection efficiency of a surrounding tamper. The simulation, which is predicated on ordinary kinematics, follows the three-dimensional motions and fates of neutrons as they travel through the core. Limitations of time and computer memory render it impossible to model a real-life core, but results of numerous runs clearly demonstrate the existence of a critical mass for a given set of parameters and the dramatic effects of enrichment and tamper efficiency on the growth (or decay) of the neutron population. The logic of the simulation will be described and results of typical runs will be presented and discussed.
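The per-neutron logic described above can be sketched as follows (our reconstruction of the general Monte Carlo technique, with placeholder parameters; the simulation described also models a reflecting tamper, which is omitted here):

    import math, random

    def neutron_fate(sigma_f, sigma_s, sigma_c, n_density, core_radius, rng=random):
        # Follow one neutron from the core centre until fission, capture or escape.
        # sigma_*: fission/scattering/capture cross-sections (m^2);
        # n_density: nuclei per m^3; core_radius in m.
        sigma_t = sigma_f + sigma_s + sigma_c
        mfp = 1.0 / (n_density * sigma_t)            # mean free path
        pos = [0.0, 0.0, 0.0]
        while True:
            d = -mfp * math.log(1.0 - rng.random())  # sampled free-flight distance
            cos_t = 2.0 * rng.random() - 1.0         # isotropic direction
            phi = 2.0 * math.pi * rng.random()
            sin_t = math.sqrt(1.0 - cos_t * cos_t)
            pos = [pos[0] + d * sin_t * math.cos(phi),
                   pos[1] + d * sin_t * math.sin(phi),
                   pos[2] + d * cos_t]
            if pos[0]**2 + pos[1]**2 + pos[2]**2 > core_radius**2:
                return "escape"                      # tamper reflection omitted
            u = rng.random() * sigma_t               # choose the interaction
            if u < sigma_f:
                return "fission"
            if u < sigma_f + sigma_c:
                return "capture"
            # otherwise the neutron scatters and the flight continues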
Modeling of carbonate reservoir variable secondary pore space based on CT images
NASA Astrophysics Data System (ADS)
Nie, X.; Nie, S.; Zhang, J.; Zhang, C.; Zhang, Z.
2017-12-01
Digital core technology has brought great convenience, and X-ray CT scanning is one of the most common ways to obtain 3D digital cores. However, it can only provide the original pore information of the samples being scanned, and the porosity of a scanned core cannot be modified. For numerical rock-physics simulations, a series of cores with variable porosities is needed to determine the relationship between physical properties and porosity. In carbonate rocks, the secondary pore space, including dissolution pores, caves, and natural fractures, is the key reservoir space, which makes the study of carbonate secondary porosity very important. To vary the porosity within one rock sample, several mathematical methods, chosen according to the physical and chemical properties of carbonate rocks, are applied to CT-scanned digital cores to simulate changes in the secondary pore space. We use the erosion and dilation operations of mathematical morphology to simulate the pore-space changes of dissolution pores and caves. We also use the fractional Brownian motion model to generate natural fractures with different widths and angles in digital cores to simulate fractured carbonate rocks. The morphological opening and closing operations are used to simulate the distribution of fluid in the pore space. The established 3D digital core models with different secondary porosities and water saturation states can be used in the study of physical-property numerical simulations of carbonate reservoir rocks.
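The erosion and dilation step described above maps directly onto standard image-morphology routines; a minimal sketch in Python/SciPy (an illustration of the idea; the function name is hypothetical):

    from scipy.ndimage import binary_dilation, binary_erosion

    def vary_porosity(pores, grow=True, iterations=1):
        # pores: 3D boolean array from the segmented CT image, True = pore voxel.
        # Dilation emulates further dissolution (higher porosity); erosion the reverse.
        op = binary_dilation if grow else binary_erosion
        out = op(pores, iterations=iterations)
        return out, out.mean()  # new pore image and its porosity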
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salko, Robert K; Sung, Yixing; Kucukboyaci, Vefa
The Virtual Environment for Reactor Applications core simulator (VERA-CS) being developed by the Consortium for the Advanced Simulation of Light Water Reactors (CASL) includes coupled neutronics, thermal-hydraulics, and fuel temperature components with an isotopic depletion capability. The neutronics capability employed is based on MPACT, a three-dimensional (3-D) whole core transport code. The thermal-hydraulics and fuel temperature models are provided by the COBRA-TF (CTF) subchannel code. As part of the CASL development program, the VERA-CS (MPACT/CTF) code system was applied to model and simulate reactor core response with respect to departure from nucleate boiling ratio (DNBR) at the limiting time step of a postulated pressurized water reactor (PWR) main steamline break (MSLB) event initiated at the hot zero power (HZP), either with offsite power available and the reactor coolant pumps in operation (high-flow case) or without offsite power where the reactor core is cooled through natural circulation (low-flow case). The VERA-CS simulation was based on core boundary conditions from the RETRAN-02 system transient calculations and STAR-CCM+ computational fluid dynamics (CFD) core inlet distribution calculations. The evaluation indicated that the VERA-CS code system is capable of modeling and simulating quasi-steady state reactor core response under the steamline break (SLB) accident condition, the results are insensitive to uncertainties in the inlet flow distributions from the CFD simulations, and the high-flow case is more DNB limiting than the low-flow case.
Impact of memory bottleneck on the performance of graphics processing units
NASA Astrophysics Data System (ADS)
Son, Dong Oh; Choi, Hong Jun; Kim, Jong Myon; Kim, Cheol Hong
2015-12-01
Recent graphics processing units (GPUs) can process general-purpose applications as well as graphics applications with the help of various user-friendly application programming interfaces (APIs) supported by GPU vendors. Unfortunately, utilizing the hardware resources of the GPU efficiently is a challenging problem, since the GPU architecture is totally different from the traditional CPU architecture. To solve this problem, many studies have focused on techniques for improving system performance using GPUs. In this work, we analyze GPU performance while varying GPU parameters such as the number of cores and the clock frequency. According to our simulations, GPU performance can be improved by 125.8% and 16.2% on average as the number of cores and the clock frequency increase, respectively. However, the performance saturates when memory bottlenecks occur due to huge data requests to the memory. The performance of GPUs can be improved further as the memory bottleneck is reduced by changing GPU parameters dynamically.
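The saturation effect reported above is the classic roofline behavior; a hedged one-line model (our illustration, not the paper's simulator):

    def attainable_gflops(peak_gflops, bandwidth_gbs, flops_per_byte):
        # Throughput is capped by memory bandwidth once arithmetic intensity is low.
        return min(peak_gflops, bandwidth_gbs * flops_per_byte)

    # Example (hypothetical numbers): a 1500 GFLOP/s GPU with 192 GB/s memory
    # running a 2 FLOP/byte kernel attains at most 384 GFLOP/s, so adding cores
    # or raising the clock no longer helps.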
Preparation macroconstants to simulate the core of VVER-1000 reactor
NASA Astrophysics Data System (ADS)
Seleznev, V. Y.
2017-01-01
A dynamic model is used in simulators of the VVER-1000 reactor for training operating staff and students. The DYNCO code is used to simulate the neutron-physical characteristics; it allows calculations of stationary, transient, and emergency processes in real time for different geometries of the reactor lattice [1]. To perform calculations with this code, macroconstants must be prepared for each fuel assembly (FA). One way of obtaining macroconstants is to use the WIMS code, which is based on the use of its own 69-group macroconstants library. This paper presents the results of FA calculations obtained with the WIMS code for the VVER-1000 reactor with different fuel and coolant parameters, as well as the method of selecting energy groups for the further calculation of macroconstants.
Graphics Processing Unit Acceleration of Gyrokinetic Turbulence Simulations
NASA Astrophysics Data System (ADS)
Hause, Benjamin; Parker, Scott
2012-10-01
We find a substantial increase in on-node performance using Graphics Processing Unit (GPU) acceleration in gyrokinetic delta-f particle-in-cell simulation. Optimization is performed on a two-dimensional slab gyrokinetic particle simulation using the Portland Group Fortran compiler with the GPU accelerator compiler directives. We have implemented the GPU acceleration on a Core i7 gaming PC with an NVIDIA GTX 580 GPU. We find comparable, or better, acceleration relative to the NERSC DIRAC cluster with the NVIDIA Tesla C2050 computing processor. The Tesla C2050 is about 2.6 times more expensive than the GTX 580 gaming GPU. Optimization strategies and comparisons between DIRAC and the gaming PC will be presented. We will also discuss progress on optimizing the comprehensive three-dimensional general geometry GEM code.
Design of a family of ring-core fibers for OAM transmission studies.
Brunet, Charles; Ung, Bora; Wang, Lixian; Messaddeq, Younès; LaRochelle, Sophie; Rusch, Leslie A
2015-04-20
We propose a family of ring-core fibers, designed for the transmission of OAM modes, that can be fabricated by drawing five different fibers from a single preform. This novel technique allows us to experimentally sweep design parameters and speed up the fiber design optimization process. Such a family of fibers could be used to examine system performance, but also facilitate understanding of parameter impact in the transition from design to fabrication. We present design parameters characterizing our fiber, and enumerate criteria to be satisfied. We determine targeted fiber dimensions and explain our strategy for examining a design family rather than a single fiber design. We simulate modal properties of the designed fibers, and compare the results with measurements performed on fabricated fibers.
Crystal MD: The massively parallel molecular dynamics software for metal with BCC structure
NASA Astrophysics Data System (ADS)
Hu, Changjun; Bai, He; He, Xinfu; Zhang, Boyao; Nie, Ningming; Wang, Xianmeng; Ren, Yingwen
2017-02-01
Material irradiation effects are among the most important issues for the use of nuclear power. However, the lack of high-throughput irradiation facilities and of knowledge of the evolution process has left these issues poorly understood. With the help of high-performance computing, we can gain a deeper understanding of materials at the micro level. In this paper, a new data structure is proposed for the massively parallel simulation of the evolution of metal materials in an irradiation environment. Based on the proposed data structure, we developed new molecular dynamics software named Crystal MD. Simulations with Crystal MD achieved over 90% parallel efficiency in test cases, and it takes more than 25% less memory on multi-core clusters than LAMMPS and IMD, two popular molecular dynamics simulation packages. Using Crystal MD, a two-trillion-particle simulation has been performed on the Tianhe-2 cluster.
Ferbonink, G F; Rodrigues, T S; Dos Santos, D P; Camargo, P H C; Albuquerque, R Q; Nome, R A
2018-05-29
In this study, we investigated hollow AgAu nanoparticles with the goal of improving our understanding of the composition-dependent catalytic activity of these nanoparticles. AgAu nanoparticles were synthesized via the galvanic replacement method with controlled size and nanoparticle compositions. We studied extinction spectra with UV-Vis spectroscopy and simulations based on Mie theory and the boundary element method, and ultrafast spectroscopy measurements to characterize decay constants and the overall energy transfer dynamics as a function of AgAu composition. Electron-phonon coupling times for each composition were obtained from pump-power dependent pump-probe transients. These spectroscopic studies showed how nanoscale surface segregation, hollow interiors and porosity affect the surface plasmon resonance wavelength and fundamental electron-phonon coupling times. Analysis of the spectroscopic data was used to correlate electron-phonon coupling times to AgAu composition, and thus to surface segregation and catalytic activity. We have performed all-atom molecular dynamics simulations of model hollow AgAu core-shell nanoparticles to characterize nanoparticle stability and equilibrium structures, besides providing atomic level views of nanoparticle surface segregation. Overall, the basic atomistic and electron-lattice dynamics of core-shell AgAu nanoparticles characterized here thus aid the mechanistic understanding and performance optimization of AgAu nanoparticle catalysts.
Scalable Analysis Methods and In Situ Infrastructure for Extreme Scale Knowledge Discovery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duque, Earl P.N.; Whitlock, Brad J.
High performance computers have for many years been on a trajectory that gives them extraordinary compute power with the addition of more and more compute cores. At the same time, other system parameters such as the amount of memory per core and bandwidth to storage have remained constant or have barely increased. This creates an imbalance in the computer, giving it the ability to compute a lot of data that it cannot reasonably save out due to time and storage constraints. While technologies have been invented to mitigate this problem (burst buffers, etc.), software has been adapting to employ in situ libraries which perform data analysis and visualization on simulation data while it is still resident in memory. This avoids the need to ever have to pay the costs of writing many terabytes of data files. Instead, in situ enables the creation of more concentrated data products such as statistics, plots, and data extracts, which are all far smaller than the full-sized volume data. With the increasing popularity of in situ, multiple in situ infrastructures have been created, each with its own mechanism for integrating with a simulation. To make it easier to instrument a simulation with multiple in situ infrastructures and include custom analysis algorithms, this project created the SENSEI framework.
Multi-scale imaging and elastic simulation of carbonates
NASA Astrophysics Data System (ADS)
Faisal, Titly Farhana; Awedalkarim, Ahmed; Jouini, Mohamed Soufiane; Jouiad, Mustapha; Chevalier, Sylvie; Sassi, Mohamed
2016-05-01
Digital Rock Physics (DRP) is an emerging technology that can be used to generate high quality, fast and cost effective special core analysis (SCAL) properties compared to conventional experimental and modeling techniques. The primary workflow of DRP consists of three elements: 1) image the rock sample using high resolution 3D scanning techniques (e.g. micro-CT, FIB/SEM), 2) process and digitize the images by segmenting the pore and matrix phases, and 3) simulate the desired physical properties of the rocks, such as elastic moduli and wave propagation velocities. A Finite Element Method based algorithm developed by Garboczi and Day [1], which discretizes the basic Hooke's law equation of linear elasticity and solves it numerically using a fast conjugate gradient solver, is used for mechanical and elastic property simulations. This elastic algorithm works directly on the digital images by treating each pixel as an element. The images are assumed to have a periodic constant-strain boundary condition. The bulk and shear moduli of the different phases are required inputs. For standard 1.5" diameter cores, however, the micro-CT scanning resolution (around 40 μm) does not reveal the smaller micro- and nano-pores beyond the resolution. This results in an unresolved "microporous" phase, the moduli of which are uncertain. Knackstedt et al. [2] assigned effective elastic moduli to the microporous phase based on self-consistent theory (which gives good estimates of velocities for well cemented granular media). Jouini et al. [3] segmented the core plug CT scan image into three phases and assumed that the microporous phase is represented by a sub-extracted micro plug (which was also scanned using micro-CT). Currently, elastic numerical simulations based on CT images alone largely overpredict the bulk, shear and Young's moduli when compared to laboratory acoustic tests of the same rocks. For greater accuracy of the numerical predictions, better estimates of the moduli inputs for this unresolved phase are important. In this work we take a multi-scale imaging approach by first extracting a smaller 0.5" core and scanning it at approximately 13 μm, then further extracting a 5 mm diameter core scanned at 5 μm. From this last scale, regions of interest (containing unresolved areas) are identified for scanning at higher resolutions using the Focused Ion Beam (FIB/SEM) scanning technique, reaching 50 nm resolution. Numerical simulation is run on such a small unresolved section to obtain a better estimate of the effective moduli, which is then used as input for simulations performed using CT images. Results are compared with experimental acoustic test moduli obtained at two scales: 1.5" and 0.5" diameter cores.
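Before running the full pixel-as-element simulation, cheap analytical bounds can sanity-check the moduli assigned to the unresolved phase; a hedged sketch (our addition, not the paper's method):

    def voigt_reuss_bulk(phi, k_micro, k_matrix):
        # phi: volume fraction of the unresolved microporous phase (from the image);
        # k_micro, k_matrix: bulk moduli of the two phases (k_micro must be > 0).
        voigt = phi * k_micro + (1.0 - phi) * k_matrix           # upper bound
        reuss = 1.0 / (phi / k_micro + (1.0 - phi) / k_matrix)   # lower bound
        return voigt, reuss

    # Any effective modulus from the FEM simulation should fall between the two.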
NASA Astrophysics Data System (ADS)
Turinsky, Paul J.; Kothe, Douglas B.
2016-05-01
The Consortium for the Advanced Simulation of Light Water Reactors (CASL), the first Energy Innovation Hub of the Department of Energy, was established in 2010 with the goal of providing modeling and simulation (M&S) capabilities that support and accelerate the improvement of nuclear energy's economic competitiveness and the reduction of spent nuclear fuel volume per unit energy, and all while assuring nuclear safety. To accomplish this requires advances in M&S capabilities in radiation transport, thermal-hydraulics, fuel performance and corrosion chemistry. To focus CASL's R&D, industry challenge problems have been defined, which equate with long standing issues of the nuclear power industry that M&S can assist in addressing. To date CASL has developed a multi-physics "core simulator" based upon pin-resolved radiation transport and subchannel (within fuel assembly) thermal-hydraulics, capitalizing on the capabilities of high performance computing. CASL's fuel performance M&S capability can also be optionally integrated into the core simulator, yielding a coupled multi-physics capability with untapped predictive potential. Material models have been developed to enhance predictive capabilities of fuel clad creep and growth, along with deeper understanding of zirconium alloy clad oxidation and hydrogen pickup. Understanding of corrosion chemistry (e.g., CRUD formation) has evolved at all scales: micro, meso and macro. CFD R&D has focused on improvement in closure models for subcooled boiling and bubbly flow, and the formulation of robust numerical solution algorithms. For multiphysics integration, several iterative acceleration methods have been assessed, illuminating areas where further research is needed. Finally, uncertainty quantification and data assimilation techniques, based upon sampling approaches, have been made more feasible for practicing nuclear engineers via R&D on dimensional reduction and biased sampling. Industry adoption of CASL's evolving M&S capabilities, which is in progress, will assist in addressing long-standing and future operational and safety challenges of the nuclear industry.
Finding the First Cosmic Explosions. II. Core-collapse Supernovae
NASA Astrophysics Data System (ADS)
Whalen, Daniel J.; Joggerst, Candace C.; Fryer, Chris L.; Stiavelli, Massimo; Heger, Alexander; Holz, Daniel E.
2013-05-01
Understanding the properties of Population III (Pop III) stars is prerequisite to elucidating the nature of primeval galaxies, the chemical enrichment and reionization of the early intergalactic medium, and the origin of supermassive black holes. While the primordial initial mass function (IMF) remains unknown, recent evidence from numerical simulations and stellar archaeology suggests that some Pop III stars may have had lower masses than previously thought, 15-50 M⊙ in addition to 50-500 M⊙. The detection of Pop III supernovae (SNe) by JWST, WFIRST, or the TMT could directly probe the primordial IMF for the first time. We present numerical simulations of 15-40 M⊙ Pop III core-collapse SNe performed with the Los Alamos radiation hydrodynamics code RAGE. We find that they will be visible in the earliest galaxies out to z ~ 10-15, tracing their star formation rates and in some cases revealing their positions on the sky. Since the central engines of Pop III and solar-metallicity core-collapse SNe are quite similar, future detection of any Type II SNe by next-generation NIR instruments will in general be limited to this epoch.
PhysiCell: An open source physics-based cell simulator for 3-D multicellular systems.
Ghaffarizadeh, Ahmadreza; Heiland, Randy; Friedman, Samuel H; Mumenthaler, Shannon M; Macklin, Paul
2018-02-01
Many multicellular systems problems can only be understood by studying how cells move, grow, divide, interact, and die. Tissue-scale dynamics emerge from systems of many interacting cells as they respond to and influence their microenvironment. The ideal "virtual laboratory" for such multicellular systems simulates both the biochemical microenvironment (the "stage") and many mechanically and biochemically interacting cells (the "players" upon the stage). PhysiCell, a physics-based multicellular simulator, is an open source agent-based simulator that provides both the stage and the players for studying many interacting cells in dynamic tissue microenvironments. It builds upon a multi-substrate biotransport solver to link cell phenotype to multiple diffusing substrates and signaling factors. It includes biologically-driven sub-models for cell cycling, apoptosis, necrosis, solid and fluid volume changes, mechanics, and motility "out of the box." The C++ code has minimal dependencies, making it simple to maintain and deploy across platforms. PhysiCell has been parallelized with OpenMP, and its performance scales linearly with the number of cells. Simulations up to 10^5-10^6 cells are feasible on quad-core desktop workstations; larger simulations are attainable on single HPC compute nodes. We demonstrate PhysiCell by simulating the impact of necrotic core biomechanics, 3-D geometry, and stochasticity on the dynamics of hanging drop tumor spheroids and ductal carcinoma in situ (DCIS) of the breast. We demonstrate stochastic motility, chemical and contact-based interaction of multiple cell types, and the extensibility of PhysiCell with examples in synthetic multicellular systems (a "cellular cargo delivery" system, with application to anti-cancer treatments), cancer heterogeneity, and cancer immunology. PhysiCell is a powerful multicellular systems simulator that will be continually improved with new capabilities and performance improvements. It also represents a significant independent code base for replicating results from other simulation platforms. The PhysiCell source code, examples, documentation, and support are available under the BSD license at http://PhysiCell.MathCancer.org and http://PhysiCell.sf.net.
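To make the agent-based paradigm concrete, here is a toy Python loop with the flavor of the cell rules PhysiCell ships (volume growth, division, stochastic death). It is not PhysiCell's C++ API, and every rate below is a made-up placeholder; PhysiCell additionally couples each cell to diffusing substrates, mechanics, and OpenMP parallelism.

```python
# Toy agent-based cell population: grow, divide, die. Rates are placeholders.
import random

class Cell:
    def __init__(self, volume=1.0):
        self.volume = volume

def step(cells, dt=0.1, growth_rate=0.05, death_prob=0.001, div_volume=2.0):
    """Advance every cell one time step: grow, maybe divide, maybe die."""
    next_cells = []
    for cell in cells:
        if random.random() < death_prob:     # stochastic death (apoptosis)
            continue
        cell.volume += growth_rate * dt * cell.volume   # exponential growth
        if cell.volume >= div_volume:        # symmetric division
            cell.volume /= 2.0
            next_cells.append(Cell(cell.volume))
        next_cells.append(cell)
    return next_cells

cells = [Cell()]
for _ in range(2000):
    cells = step(cells)
print(len(cells), "cells after 2000 steps")
```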
Fuel Cycle Performance of Thermal Spectrum Small Modular Reactors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Worrall, Andrew; Todosow, Michael
2016-01-01
Small modular reactors may offer potential benefits, such as enhanced operational flexibility. However, it is vital to understand the holistic impact of small modular reactors on the nuclear fuel cycle and fuel cycle performance. The focus of this paper is on the fuel cycle impacts of light water small modular reactors in a once-through fuel cycle with low-enriched uranium fuel. A key objective of this paper is to describe preliminary reactor core physics and fuel cycle analyses conducted in support of the U.S. Department of Energy Office of Nuclear Energy Fuel Cycle Options Campaign. Challenges with small modular reactors include increased neutron leakage, fewer assemblies in the core (and therefore fewer degrees of freedom in the core design), complex enrichment and burnable absorber loadings, full power operation with inserted control rods, the potential for frequent load-following operation, and shortened core height. Each of these will impact the achievable discharge burn-up in the reactor and the fuel cycle performance. This paper summarizes the results of an expert elicitation focused on developing a list of the factors relevant to small modular reactor fuel, core, and operation that will impact fuel cycle performance. Preliminary scoping analyses were performed using a regulatory-grade reactor core simulator. The hypothetical light water small modular reactor considered in these preliminary scoping studies is a cartridge type one-batch core with 4.9% enrichment. Some core parameters, such as the size of the reactor and general assembly layout, are similar to an example small modular reactor concept from industry. The high-level issues identified and preliminary scoping calculations in this paper are intended to inform on potential fuel cycle impacts of one-batch thermal spectrum SMRs. In particular, this paper highlights the impact of increased neutron leakage and reduced number of batches on the achievable burn-up of the reactor. Fuel cycle performance metrics for a small modular reactor are compared to a conventional three-batch light water reactor in the following areas: nuclear waste management, environmental impact, and resource utilization. Performance metrics for a small modular reactor are degraded for mass of spent nuclear fuel and high level waste disposed, mass of depleted uranium disposed, land use per energy generated, and carbon emission per energy generated.
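The burn-up penalty of a one-batch core can be illustrated with the classic linear reactivity model (a textbook approximation, not the regulatory-grade simulator used in the paper): for an N-batch reload scheme at fixed enrichment, the discharge burn-up is approximately

$$ B_d(N) = \frac{2N}{N+1}\, B_1 $$

where B_1 is the one-batch discharge burn-up. A three-batch core thus reaches about 1.5 B_1, while a one-batch cartridge core is limited to B_1, consistent with the degraded spent-fuel-mass metrics reported above.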
NASA Astrophysics Data System (ADS)
Needham, Perri J.; Bhuiyan, Ashraf; Walker, Ross C.
2016-04-01
We present an implementation of explicit solvent particle mesh Ewald (PME) classical molecular dynamics (MD) within the PMEMD molecular dynamics engine, which forms part of the AMBER v14 MD software package, that makes use of Intel Xeon Phi coprocessors by offloading portions of the PME direct summation and neighbor list build to the coprocessor. We refer to this implementation as pmemd MIC offload, and in this paper we present the technical details of the algorithm, including basic models for MPI and OpenMP configuration, and analyze the resultant performance. The algorithm provides the best performance improvement for large systems (>400,000 atoms), achieving a ∼35% performance improvement for satellite tobacco mosaic virus (1,067,095 atoms) when 2 Intel E5-2697 v2 processors (2 × 12 cores, 30M cache, 2.7 GHz) are coupled to an Intel Xeon Phi coprocessor (Model 7120P, 1.238/1.333 GHz, 61 cores). The implementation utilizes a two-fold decomposition strategy: spatial decomposition using an MPI library and thread-based decomposition using OpenMP. We also present compiler optimization settings that improve the performance on Intel Xeon processors, while retaining simulation accuracy.
Performance of fully-coupled algebraic multigrid preconditioners for large-scale VMS resistive MHD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, P. T.; Shadid, J. N.; Hu, J. J.
Here, we explore the current performance and scaling of a fully-implicit stabilized unstructured finite element (FE) variational multiscale (VMS) capability for large-scale simulations of 3D incompressible resistive magnetohydrodynamics (MHD). The large-scale linear systems that are generated by a Newton nonlinear solver approach are iteratively solved by preconditioned Krylov subspace methods. The efficiency of this approach is critically dependent on the scalability and performance of the algebraic multigrid preconditioner. Our study considers the performance of the numerical methods as recently implemented in the second-generation Trilinos implementation that is 64-bit compliant and is not limited by the 32-bit global identifiers of the original Epetra-based Trilinos. The study presents representative results for a Poisson problem on 1.6 million cores of an IBM Blue Gene/Q platform to demonstrate very large-scale parallel execution. Additionally, results for a more challenging steady-state MHD generator and a transient solution of a benchmark MHD turbulence calculation for the full resistive MHD system are also presented. These results are obtained on up to 131,000 cores of a Cray XC40 and one million cores of a BG/Q system.
Comparison of thunderstorm simulations from WRF-NMM and WRF-ARW models over East Indian Region.
Litta, A J; Mary Ididcula, Sumam; Mohanty, U C; Kiran Prasad, S
2012-01-01
Thunderstorms are typical mesoscale systems dominated by intense convection. Mesoscale models are essential for the accurate prediction of such high-impact weather events. In the present study, an attempt has been made to compare the simulated results of three thunderstorm events using the NMM and ARW model cores of the WRF system and to validate the model results against observations. Both models performed well in capturing stability indices, which are indicators of severe convective activity. Comparison of model-simulated radar reflectivity imagery with observations revealed that the NMM model simulated the propagation of the squall line well, while the squall line movement was slow in ARW. From the model-simulated spatial plots of cloud top temperature, we can see that the NMM model better captured the genesis, intensification, and propagation of the thunder squall than the ARW model. The statistical analysis of rainfall indicates the better performance of NMM over ARW. Comparison of model-simulated thunderstorm-affected parameters with observations showed that NMM performed better than ARW in capturing the sharp rise in humidity and drop in temperature. This suggests that the NMM model has the potential to provide unique and valuable information for severe thunderstorm forecasters over the east Indian region.
Finite element simulation of core inspection in helicopter rotor blades using guided waves.
Chakrapani, Sunil Kishore; Barnard, Daniel; Dayal, Vinay
2015-09-01
This paper extends work presented earlier on the inspection of helicopter rotor blades using guided Lamb modes by focusing on inspecting the spar-core bond. In particular, this research focuses on structures which employ high-stiffness, high-density core materials. Wave propagation in such structures deviates from the generic Lamb wave propagation in sandwich panels. To understand the various mode conversions, finite element models of a generalized helicopter rotor blade were created and subjected to transient analysis using a commercial finite element code, ANSYS. Numerical simulations showed that a Lamb wave excited in the spar section of the blade is converted into a Rayleigh wave, which travels across the spar-core section and mode-converts back into a Lamb wave. Dispersion of Rayleigh waves in a multi-layered half-space was also explored. Damage was modeled in the form of a notch in the core section to simulate a cracked core, and delamination was modeled between the spar and core material to simulate spar-core disbond. Mode conversions under these damaged conditions were examined numerically. The numerical models help in assessing the difficulty of using nondestructive evaluation for complex structures and also highlight the physics behind the mode conversions which occur at various discontinuities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsunoda, Hirokazu; Sato, Osamu; Okajima, Shigeaki
2002-07-01
In order to achieve fully automated reactor operation of the RAPID-L reactor, innovative reactivity control systems LEM, LIM, and LRM are equipped with lithium-6 as a liquid poison. Because lithium-6 has not been used as a neutron-absorbing material in conventional fast reactors, measurements of the reactivity worth of lithium-6 were performed at the Fast Critical Assembly (FCA) of the Japan Atomic Energy Research Institute (JAERI). The FCA core was composed of highly enriched uranium and stainless steel samples so as to simulate the core spectrum of RAPID-L. Samples of 95% enriched lithium-6 were inserted into the core parallel to the core axis for the measurement of the reactivity worth at each position. It was found that the measured reactivity worth in the core region agreed well with the value calculated by the method used for the core design of RAPID-L. Bias factors for the core design method were obtained by comparing experimental and calculated results. The factors were used to determine the number of LEMs and LIMs equipped in the core to achieve fully automated operation of RAPID-L. (authors)
NASA Astrophysics Data System (ADS)
Hill, C.
2008-12-01
Low cost graphics cards today use many, relatively simple, compute cores to deliver memory bandwidth of more than 100 GB/s and theoretical floating point performance of more than 500 GFlop/s. Right now this performance is, however, only accessible to highly parallel algorithm implementations that (i) can use a hundred or more, 32-bit floating point, concurrently executing cores, (ii) can work with graphics memory that resides on the graphics card side of the graphics bus and (iii) can be partially expressed in a language that can be compiled by a graphics programming tool. In this talk we describe our experiences implementing a complete, but relatively simple, time-dependent shallow-water equations simulation targeting a cluster of 30 computers each hosting one graphics card. The implementation takes into account the considerations (i), (ii) and (iii) listed previously. We code our algorithm as a series of numerical kernels. Each kernel is designed to be executed by multiple threads of a single process. Kernels are passed memory blocks to compute over, which can be persistent blocks of memory on a graphics card. Each kernel is individually implemented using the NVIDIA CUDA language but driven from a higher level supervisory code that is almost identical to a standard model driver. The supervisory code controls the overall simulation timestepping, but is written to minimize data transfer between main memory and graphics memory (a massive performance bottleneck on current systems). Using the recipe outlined we can boost the performance of our cluster by nearly an order of magnitude, relative to the same algorithm executing only on the cluster CPUs. Achieving this performance boost requires that many threads are available to each graphics processor for execution within each numerical kernel and that the simulation's working set of data can fit into the graphics card memory. As we describe, this puts interesting upper and lower bounds on the problem sizes for which this technology is currently most useful. However, many interesting problems fit within this envelope. Looking forward, we extrapolate our experience to estimate full-scale ocean model performance and applicability. Finally we describe preliminary hybrid mixed 32-bit and 64-bit experiments with graphics cards that support 64-bit arithmetic, albeit at lower performance.
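To make the kernel structure concrete, here is a minimal CPU sketch of one shallow-water update written with numpy rather than CUDA; the grid, the constants, and the forward-Euler, centered-difference scheme are illustrative choices, not the talk's discretization. In the GPU version each such kernel runs over arrays left resident in graphics memory, which is exactly the host-device traffic the supervisory code minimizes.

```python
# Minimal shallow-water "kernel" sketch on a doubly periodic grid.
# Forward-Euler with centered differences is only usable for short demos;
# production dynamical cores use stable time-stepping schemes.
import numpy as np

g, dx, dt = 9.81, 1.0e3, 1.0   # gravity (m/s^2), spacing (m), step (s): assumed

def step(h, u, v):
    """One explicit update of height h and velocities (u, v)."""
    dhdx = (np.roll(h, -1, 1) - np.roll(h, 1, 1)) / (2 * dx)
    dhdy = (np.roll(h, -1, 0) - np.roll(h, 1, 0)) / (2 * dx)
    un = u - dt * g * dhdx     # momentum: pressure-gradient force only
    vn = v - dt * g * dhdy
    div = (np.roll(u, -1, 1) - np.roll(u, 1, 1)
           + np.roll(v, -1, 0) - np.roll(v, 1, 0)) / (2 * dx)
    hn = h - dt * h * div      # simplified continuity (advection of h omitted)
    return hn, un, vn

n = 128
h = 100.0 + np.fromfunction(lambda i, j: np.exp(-((j - 64) ** 2) / 50.0), (n, n))
u, v = np.zeros((n, n)), np.zeros((n, n))
for _ in range(100):           # the supervisory loop: one kernel call per step
    h, u, v = step(h, u, v)
print(f"height range after 100 steps: {h.min():.2f}..{h.max():.2f} m")
```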
The Application of Simulated Experimental Teaching in International Trade Course
ERIC Educational Resources Information Center
Ma, Tao; Chen, Wen
2009-01-01
International Trade Practice is a professional basic course for specialty of International Economy and Trade. As the core of International Trade Practice, it is extremely related to foreign affairs and needs much practical experience. This paper puts forward some suggestions on how to improve the performance of teaching in order to educate the…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amadio, G.; et al.
An intensive R&D and programming effort is required to accomplish new challenges posed by future experimental high-energy particle physics (HEP) programs. The GeantV project aims to narrow the gap between the performance of the existing HEP detector simulation software and the ideal performance achievable, exploiting latest advances in computing technology. The project has developed a particle detector simulation prototype capable of transporting in parallel particles in complex geometries exploiting instruction level microparallelism (SIMD and SIMT), task-level parallelism (multithreading) and high-level parallelism (MPI), leveraging both the multi-core and the many-core opportunities. We present preliminary verification results concerning the electromagnetic (EM) physics models developed for parallel computing architectures within the GeantV project. In order to exploit the potential of vectorization and accelerators and to make the physics model effectively parallelizable, advanced sampling techniques have been implemented and tested. In this paper we introduce a set of automated statistical tests in order to verify the vectorized models by checking their consistency with the corresponding Geant4 models and to validate them against experimental data.
NASA Astrophysics Data System (ADS)
Peter, Daniel; Videau, Brice; Pouget, Kevin; Komatitsch, Dimitri
2015-04-01
Improving the resolution of tomographic images is crucial to answer important questions on the nature of Earth's subsurface structure and internal processes. Seismic tomography is the most prominent approach, where seismic signals from ground-motion records are used to infer physical properties of internal structures such as compressional- and shear-wave speeds, anisotropy and attenuation. Recent advances in regional- and global-scale seismic inversions move towards full-waveform inversions, which require accurate simulations of seismic wave propagation in complex 3D media, providing access to the full 3D seismic wavefields. However, these numerical simulations are computationally very expensive and need high-performance computing (HPC) facilities for further improving the current state of knowledge. During recent years, many-core architectures such as graphics processing units (GPUs) have been added to available large HPC systems. Such GPU-accelerated computing, together with advances in multi-core central processing units (CPUs), can greatly accelerate scientific applications. There are mainly two possible choices of language support for GPU cards: the CUDA programming environment and the OpenCL language standard. CUDA software development targets NVIDIA graphics cards, while OpenCL was adopted mainly by AMD graphics cards. In order to employ such hardware accelerators for seismic wave propagation simulations, we incorporated the code generation tool BOAST into the existing spectral-element code package SPECFEM3D_GLOBE. This allows us to use meta-programming of computational kernels and generate optimized source code for both CUDA and OpenCL languages, running simulations on either CUDA or OpenCL hardware accelerators. We show here applications of forward and adjoint seismic wave propagation on CUDA/OpenCL GPUs, validating results and comparing performance for different simulations and hardware usages.
Development of a New 47-Group Library for the CASL Neutronics Simulators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Kang Seog; Williams, Mark L; Wiarda, Dorothea
The CASL core simulator MPACT is under development for coupled neutronics and thermal-hydraulics simulation of pressurized light water reactors. The key characteristics of the MPACT code include a subgroup method for resonance self-shielding and a whole-core solver with a 1D/2D synthesis method. The ORNL AMPX/SCALE code packages have been significantly improved to support various intermediate resonance self-shielding approximations such as the subgroup and embedded self-shielding methods. New 47-group AMPX and MPACT libraries based on ENDF/B-VII.0, whose group structure comes from the HELIOS library, have been generated for the CASL core simulator MPACT. The new 47-group MPACT library includes all nuclear data required for static and transient core simulations. This study discusses a detailed procedure to generate the 47-group AMPX and MPACT libraries and benchmark results for the VERA progression problems.
Multithreaded Stochastic PDES for Reactions and Diffusions in Neurons.
Lin, Zhongwei; Tropper, Carl; Mcdougal, Robert A; Patoary, Mohammand Nazrul Ishlam; Lytton, William W; Yao, Yiping; Hines, Michael L
2017-07-01
Cells exhibit stochastic behavior when the number of molecules is small. Hence a stochastic reaction-diffusion simulator capable of working at scale can provide a more accurate view of molecular dynamics within the cell. This paper describes a parallel discrete event simulator, Neuron Time Warp-Multi Thread (NTW-MT), developed for the simulation of reaction diffusion models of neurons. To the best of our knowledge, this is the first parallel discrete event simulator oriented towards stochastic simulation of chemical reactions in a neuron. The simulator was developed as part of the NEURON project. NTW-MT is optimistic and thread-based, which attempts to capitalize on multi-core architectures used in high performance machines. It makes use of a multi-level queue for the pending event set and a single roll-back message in place of individual anti-messages to disperse contention and decrease the overhead of processing rollbacks. Global Virtual Time is computed asynchronously both within and among processes to get rid of the overhead for synchronizing threads. Memory usage is managed in order to avoid locking and unlocking when allocating and de-allocating memory and to maximize cache locality. We verified our simulator on a calcium buffer model. We examined its performance on a calcium wave model, comparing it to the performance of a process based optimistic simulator and a threaded simulator which uses a single priority queue for each thread. Our multi-threaded simulator is shown to achieve superior performance to these simulators. Finally, we demonstrated the scalability of our simulator on a larger CICR model and a more detailed CICR model.
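The core bookkeeping of any such discrete-event simulator is the pending event set. The sketch below shows that machinery in its simplest sequential form, using a single binary heap in Python; NTW-MT's multi-level queue, worker threads, rollback messages, and asynchronous GVT computation are deliberately omitted, so this is a scaffold for understanding rather than a reimplementation of the simulator.

```python
# Minimal sequential pending-event-set machinery for discrete-event simulation.
import heapq
import itertools

counter = itertools.count()   # tie-breaker so equal-time events never compare
pending = []                  # the pending event set (a min-heap on timestamp)

def schedule(time, action):
    heapq.heappush(pending, (time, next(counter), action))

def run(until):
    """Execute events in timestamp order up to the given virtual time."""
    now = 0.0
    while pending and pending[0][0] <= until:
        now, _, action = heapq.heappop(pending)
        action(now)
    return now

def diffuse(t):               # a stand-in for a diffusion or reaction event
    print(f"t={t:.2f}: diffusion event")
    schedule(t + 0.5, diffuse)   # events may schedule future events

schedule(0.0, diffuse)
run(until=2.0)
```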
NASA Astrophysics Data System (ADS)
Chang, Qiang; Herbst, Eric
2016-03-01
The recent discovery of methyl formate and dimethyl ether in the gas phase of cold cores with temperatures as low as 10 K challenges previous astrochemical models concerning the formation of complex organic molecules (COMs). The strong correlation between the abundances and distributions of methyl formate and dimethyl ether further shows that current astrochemical models may be missing important chemical processes in cold astronomical sources. We investigate a scenario in which COMs and the methoxy radical can be formed on dust grains via a so-called chain reaction mechanism, in a similar manner to CO2. A unified gas-grain microscopic-macroscopic Monte Carlo approach with both normal and interstitial sites for icy grain mantles is used to perform the chemical simulations. Reactive desorption with varying degrees of efficiency is included to enhance the nonthermal desorption of species formed on cold dust grains. In addition, varying degrees of efficiency for the surface formation of methoxy are also included. The observed abundances of a variety of organic molecules in cold cores can be reproduced in our models. The strong correlation between the abundances of methyl formate and dimethyl ether in cold cores can also be explained. Nondiffusive chemical reactions on dust grain surfaces may play a key role in the formation of some COMs.
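For readers unfamiliar with the underlying machinery, the sketch below shows a bare-bones Gillespie-style stochastic simulation of two competing reaction channels. The network and rate constants are invented placeholders for illustration only; the paper's unified microscopic-macroscopic Monte Carlo model tracks individual surface sites, interstitial sites, and desorption, which this sketch does not.

```python
# Bare-bones Gillespie stochastic simulation of two competing channels.
# Species, stoichiometries, and rates are invented placeholders.
import random

state = {"CO": 200, "OH": 50, "CO2": 0, "HCO": 0}
reactions = [  # (propensity function, population changes)
    (lambda s: 1.0e-3 * s["CO"] * s["OH"], {"CO": -1, "OH": -1, "CO2": +1}),
    (lambda s: 5.0e-4 * s["CO"],           {"CO": -1, "HCO": +1}),  # H assumed abundant
]

t = 0.0
while True:
    props = [rate(state) for rate, _ in reactions]
    total = sum(props)
    if total == 0.0:                        # no reaction can fire any more
        break
    t += random.expovariate(total)          # exponential waiting time
    pick = random.uniform(0.0, total)       # choose a channel by its rate
    for p, (_, delta) in zip(props, reactions):
        pick -= p
        if pick <= 0.0:
            for species, change in delta.items():
                state[species] += change
            break

print(f"finished at t = {t:.1f} with populations {state}")
```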
Molecular Dynamics Simulations of Star Polymeric Molecules with Diblock Arms, a Comparative Study.
Swope, William C; Carr, Amber C; Parker, Amanda J; Sly, Joseph; Miller, Robert D; Rice, Julia E
2012-10-09
We have performed all-atom explicit solvent molecular dynamics simulations of three different star polymeric systems in water, each star molecule consisting of 16 diblock copolymer arms bound to a small adamantane core. The arms of each system consist of an inner "hydrophobic" block (either polylactide, polyvalerolactone, or polyethylene) and an outer hydrophilic block (polyethylene oxide, PEO). These models exhibit unusual structure very close to the core (clearly an artifact of our model), which we believe becomes "normal" or bulk-like at relatively short distances from the core. We report on a number of temperature-dependent thermodynamic (structural/energetic) properties as well as kinetic properties. Our observations suggest that under physiological conditions, the hydrophobic regions of these systems may be solid and glassy, with only rare and shallow penetration by water, and that a sharp boundary exists between the hydrophobic cores and either the PEO or water. The PEO in these models is seen to be fully water-solvated at low temperatures but tends to phase separate from water as the temperature is increased, reminiscent of the lower critical solution temperature exhibited by PEO-water mixtures. Water penetration concentration and depth are composition and temperature dependent, with greater water penetration for the most ester-rich star polymer.
Nishima, Wataru; Miyashita, Naoyuki; Yamaguchi, Yoshiki; Sugita, Yuji; Re, Suyong
2012-07-26
The introduction of bisecting GlcNAc and core fucosylation in N-glycans is essential for fine functional regulation of glycoproteins. In this paper, the effect of these modifications on the conformational properties of N-glycans is examined at the atomic level by performing replica-exchange molecular dynamics (REMD) simulations. We simulate four biantennary complex-type N-glycans, namely, unmodified, two single-substituted with either bisecting GlcNAc or core fucose, and disubstituted forms. By using REMD as an enhanced sampling technique, five distinct conformers in solution, each of which is characterized by its local orientation of the Manα1-6Man glycosidic linkage, are observed for all four N-glycans. The chemical modifications significantly change their conformational equilibria. The number of major conformers is reduced from five to two and from five to four upon the introduction of bisecting GlcNAc and core fucosylation, respectively. The population change is attributed to specific inter-residue hydrogen bonds, including water-mediated ones. The experimental NMR data, including nuclear Overhauser enhancement and scalar J-coupling constants, are well reproduced taking the multiple conformers into account. Our structural model supports the concept of "conformer selection", which emphasizes the conformational flexibility of N-glycans in protein-glycan interactions.
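A note on the method: the replica-exchange move at the heart of REMD swaps configurations between two replicas i and j running at temperatures T_i and T_j, accepting the swap with the standard Metropolis criterion (a general property of the technique, stated here for context rather than taken from the paper):

$$ P_{\mathrm{acc}} = \min\left\{1,\; \exp\!\left[(\beta_i - \beta_j)(E_i - E_j)\right]\right\}, \qquad \beta = \frac{1}{k_B T} $$

where E_i and E_j are the potential energies of the two configurations. Excursions to high temperature let glycan conformers trapped behind barriers at low temperature escape, which is why REMD samples the five conformer basins more efficiently than a single long MD run.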
Measurement of neutron spectra in the experimental reactor LR-0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prenosil, Vaclav; Mravec, Filip; Veskrna, Martin
2015-07-01
The measurement of fast neutron fluxes is important in many areas of nuclear technology. It affects the stability of reactor structural components, the performance of the fuel, and the manner in which the fuel behaves. The experiments performed at the LR-0 reactor were in the past focused on the measurement of the neutron field far from the core, in a reactor pressure vessel simulator or in a biological shielding simulator. At present, measurements in regions closer to the core have become more important, especially measurements in structural components such as the reactor baffle. This importance increases with both reactor power increases and long-term operation. Another important task is the increasing need for measurements close to the fuel. Spectra near the fuel are targeted because of the planned measurements with FLIBE salt in FHR/MSR research, where one of the tasks is the measurement of the neutron spectra in it. In both types of experiments there is a strong demand for a high working count rate. The high count rate is caused mainly by the high gamma background and by the high fluxes. The fluxes in the core or in its vicinity are relatively high to ensure safe reactor operation. This requirement is met by the digital spectroscopic apparatus. All experiments were realized in the LR-0 reactor, an extremely flexible light-water zero-power research reactor operated by the Research Center Rez (Czech Republic). (authors)
Rater Training to Support High-Stakes Simulation-Based Assessments
Feldman, Moshe; Lazzara, Elizabeth H.; Vanderbilt, Allison A.; DiazGranados, Deborah
2013-01-01
Competency-based assessment and an emphasis on obtaining higher-level outcomes that reflect physicians’ ability to demonstrate their skills has created a need for more advanced assessment practices. Simulation-based assessments provide medical education planners with tools to better evaluate the 6 Accreditation Council for Graduate Medical Education (ACGME) and American Board of Medical Specialties (ABMS) core competencies by affording physicians opportunities to demonstrate their skills within a standardized and replicable testing environment, thus filling a gap in the current state of assessment for regulating the practice of medicine. Observational performance assessments derived from simulated clinical tasks and scenarios enable stronger inferences about the skill level a physician may possess, but also introduce the potential of rater errors into the assessment process. This article reviews the use of simulation-based assessments for certification, credentialing, initial licensure, and relicensing decisions and describes rater training strategies that may be used to reduce rater errors, increase rating accuracy, and enhance the validity of simulation-based observational performance assessments. PMID:23280532
NASA Technical Reports Server (NTRS)
Bragg-Sitton, Shannon M.; Hervol, David S.; Godfroy, Thomas J.
2009-01-01
A Direct Drive Gas-Cooled (DDG) reactor core simulator has been coupled to a Brayton Power Conversion Unit (BPCU) for integrated system testing at NASA Glenn Research Center (GRC) in Cleveland, OH. This is a closed-cycle system that incorporates an electrically heated reactor core module, turbo alternator, recuperator, and gas cooler. Nuclear fuel elements in the gas-cooled reactor design are replaced with electric resistance heaters to simulate the heat from nuclear fuel in the corresponding fast spectrum nuclear reactor. The thermodynamic transient behavior of the integrated system was the focus of this test series. In order to better mimic the integrated response of the nuclear-fueled system, a simulated reactivity feedback control loop was implemented. Core power was controlled by a point kinetics model in which the reactivity feedback was based on core temperature measurements; the neutron generation time and the temperature feedback coefficient are provided as model inputs. These dynamic system response tests demonstrate the overall capability of a non-nuclear test facility in assessing system integration issues and characterizing integrated system response times and response characteristics.
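The simulated feedback loop described above lends itself to a compact illustration. Below is a minimal sketch of point kinetics with one delayed-neutron group and a temperature-proportional reactivity term, the same structure as the control scheme in the abstract; all constants (delayed fraction, decay constant, generation time, feedback coefficient, and thermal parameters) are illustrative placeholders, not values from the GRC test series.

```python
# Point-kinetics sketch with one delayed-neutron group and temperature
# reactivity feedback; every constant below is an illustrative placeholder.
beta, decay, gen_time = 0.0065, 0.08, 1.0e-4    # delayed fraction, 1/s, Λ (s)
alpha_t, t_ref = -2.0e-5, 600.0                 # feedback coeff (1/K), ref T (K)
heat_cap, h_loss, t_cool = 5.0e4, 200.0, 400.0  # J/K, W/K, coolant temp (K)

def simulate(rho_ext, t_end=200.0, dt=1.0e-3):
    """Forward-Euler integration of power P, precursors C, temperature T;
    reactivity is the external insertion plus temperature feedback."""
    p, temp = 1.0e5, t_ref                      # initial power (W), temp (K)
    c = beta * p / (decay * gen_time)           # precursors in equilibrium
    for _ in range(int(t_end / dt)):
        rho = rho_ext + alpha_t * (temp - t_ref)        # the feedback loop
        dp = ((rho - beta) / gen_time) * p + decay * c
        dc = (beta / gen_time) * p - decay * c
        dtemp = (p - h_loss * (temp - t_cool)) / heat_cap
        p, c, temp = p + dt * dp, c + dt * dc, temp + dt * dtemp
    return p, temp

print(simulate(rho_ext=0.001))   # small insertion; feedback limits the excursion
```

With a negative feedback coefficient the power settles at the level where thermal balance drives the net reactivity back toward zero, mimicking the self-regulating response the non-nuclear test loop is designed to reproduce.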
Effects of Rifle Handling, Target Acquisition, and Trigger Control on Simulated Shooting Performance
2014-05-06
qualification task, and covers all of the training requirements listed in the Soldier's Manual of Common Tasks: Warrior Skills Level 1 handbook... allow for more direct and standardized training based on common Soldier errors. If discernible patterns in these core elements of marksmanship were... more than 50 percent of variance in marksmanship performance on a standard EST weapons qualification task for participants whose 3 Snellen acuity
Modeling Large Scale Circuits Using Massively Parallel Discrete-Event Simulation
2013-06-01
exascale levels of performance, the smallest elements of a single processor can greatly affect the entire computer system (e.g. its power consumption...grow to exascale levels of performance, the smallest elements of a single processor can greatly affect the entire computer system (e.g. its power...Warp Speed 10.0. 2.0 INTRODUCTION As supercomputer systems approach exascale , the core count will exceed 1024 and number of transistors used in
Virtual Design Method for Controlled Failure in Foldcore Sandwich Panels
NASA Astrophysics Data System (ADS)
Sturm, Ralf; Fischer, S.
2015-12-01
For certification, novel fuselage concepts have to demonstrate crashworthiness equivalent to that of the existing metal reference design. Due to the brittle failure behaviour of CFRP, this requirement can only be fulfilled by controlled progressive crash kinematics. Experiments showed that the failure of a twin-walled fuselage panel can be controlled by a local modification of the core's through-thickness compression strength. For folded cores, the required change in core properties can be introduced by a modification of the fold pattern. However, the complexity of folded cores requires a virtual design methodology for tailoring the fold pattern according to all static and crash-relevant requirements. In this context, a foldcore micromodel simulation method is presented to identify the structural response of twin-walled fuselage panels with folded cores under crash-relevant loading conditions. The simulations showed that a high degree of correlation is required before simulation can replace expensive testing. In the presented studies, the necessary correlation quality could only be obtained by including imperfections of the core material in the micromodel simulation approach.
NASA Astrophysics Data System (ADS)
Abeywickrama, Sandu; Furdek, Marija; Monti, Paolo; Wosinska, Lena; Wong, Elaine
2016-12-01
Core network survivability affects the reliability performance of telecommunication networks and remains one of the most important network design considerations. This paper critically examines the benefits arising from utilizing dual-homing in optical access networks to provide resource-efficient protection against link and node failures in the optical core segment. Four novel heuristic-based routing and wavelength assignment (RWA) algorithms that provide dedicated path protection in networks with dual-homing are proposed and studied. These algorithms protect against different failure scenarios (i.e., single link or node failures) and are implemented with different optimization objectives (i.e., minimization of wavelength usage and path length). Results obtained through simulations and comparison with baseline architectures indicate that exploiting a dual-homed architecture in the access segment can bring significant improvements in terms of core network resource usage, connection availability, and power consumption.
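A small illustration of the dedicated-path-protection idea these RWA heuristics build on: find two node-disjoint routes between a source and destination so that no single link or node failure can take down both. The sketch below uses networkx's max-flow based routine on a made-up six-node topology as a stand-in; the paper's own algorithms additionally handle dual-homed access attachment, wavelength assignment, and the two stated optimization objectives.

```python
# Two node-disjoint paths (working + dedicated protection) on a toy topology.
import networkx as nx
from networkx.algorithms.connectivity import node_disjoint_paths

G = nx.Graph()
G.add_edges_from([
    ("A", "B"), ("B", "C"), ("C", "F"),   # upper route
    ("A", "D"), ("D", "E"), ("E", "F"),   # lower route
    ("B", "E"),                           # a cross link
])

paths = list(node_disjoint_paths(G, "A", "F"))
working, protection = paths[0], paths[1]
print("working path:   ", working)
print("protection path:", protection)
```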
NASA Astrophysics Data System (ADS)
Li, Zebo; Trinkle, Dallas R.
2017-04-01
We use a continuum method informed by transport coefficients computed using self-consistent mean field theory to model vacancy-mediated diffusion of substitutional Si solutes in FCC Ni near an a/2 [11̄0](111) edge dislocation. We perform two sequential simulations: first under equilibrium boundary conditions and then under irradiation. The strain field around the dislocation induces heterogeneity and anisotropy in the defect transport properties and determines the steady-state vacancy and Si distributions. At equilibrium, both vacancies and Si solutes diffuse to form Cottrell atmospheres, with vacancies accumulating in the compressive region above the dislocation core while Si segregates to the tensile region below the core. Irradiation raises the bulk vacancy concentration, driving vacancies to flow into the dislocation core. The out-of-equilibrium vacancy fluxes drag Si atoms towards the core, causing segregation to the compressive region, despite Si being an oversized solute in Ni.
A Tissue Propagation Model for Validating Close-Proximity Biomedical Radiometer Measurements
NASA Technical Reports Server (NTRS)
Bonds, Q.; Herzig, P.; Weller, T.
2016-01-01
The propagation of thermally-generated electromagnetic emissions through stratified human tissue is studied herein using a non-coherent mathematical model. The model is developed to complement subsurface body temperature measurements performed using a close proximity microwave radiometer. The model takes into account losses and reflections as thermal emissions propagate through the body, before being emitted at the skin surface. The derivation is presented in four stages and applied to the human core phantom, a physical representation of a stomach volume of skin, muscle, and blood-fatty tissue. A drop in core body temperature is simulated via the human core phantom and the response of the propagation model is correlated to the radiometric measurement. The results are comparable, with differences on the order of 1.5 - 3%. Hence the plausibility of core body temperature extraction via close proximity radiometry is demonstrated, given that the electromagnetic characteristics of the stratified tissue layers are known.
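The non-coherent layered model described above can be sketched compactly: thermal emission propagates outward through stacked tissue layers, is attenuated by each layer's loss, gains each layer's own emission (Kirchhoff's law: emissivity equals absorptivity), and is partially reflected at each interface. The layer temperatures, losses, and reflection coefficients below are illustrative placeholders, not the phantom's measured properties.

```python
# Cascaded non-coherent brightness-temperature model for stratified tissue.
# All layer values are illustrative placeholders, deepest layer first.
# (physical temperature [K], power attenuation through layer [dB],
#  power reflection coefficient at the outward-facing interface)
layers = [
    (310.0, 3.0, 0.05),   # core / blood-fatty tissue (assumed)
    (309.0, 2.0, 0.02),   # muscle
    (305.0, 1.0, 0.30),   # skin, with skin-air interface mismatch
]

def brightness_temperature(layers):
    """Cascade emission outward: attenuate what arrives from below, add the
    layer's own emission, then apply the interface reflection loss."""
    tb = 0.0
    for temp, loss_db, refl in layers:
        trans = 10.0 ** (-loss_db / 10.0)        # layer power transmissivity
        tb = tb * trans + temp * (1.0 - trans)   # emissivity = absorptivity
        tb *= (1.0 - refl)                       # interface mismatch loss
    return tb

print(f"apparent brightness temperature: {brightness_temperature(layers):.1f} K")
```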
Core compressor exit stage study. 1: Aerodynamic and mechanical design
NASA Technical Reports Server (NTRS)
Burdsall, E. A.; Canal, E., Jr.; Lyons, K. A.
1979-01-01
The effect of aspect ratio on the performance of core compressor exit stages was demonstrated using two three-stage, highly loaded core compressors. Aspect ratio was identified as having a strong influence on compressor endwall loss. Both compressors simulated the last three stages of an advanced eight-stage core compressor and were designed with the same 0.915 hub/tip ratio, 4.30 kg/sec (9.47 lbm/sec) inlet corrected flow, and 167 m/sec (547 ft/sec) corrected mean wheel speed. The first compressor had an aspect ratio of 0.81 and an overall pressure ratio of 1.357 at a design adiabatic efficiency of 88.3% with an average diffusion factor of 0.529. The aspect ratio of the second compressor was 1.22 with an overall pressure ratio of 1.324 at a design adiabatic efficiency of 88.7% with an average diffusion factor of 0.491.
Simulation on reactor TRIGA Puspati core kinetics fueled with thorium (Th) based fuel element
NASA Astrophysics Data System (ADS)
Mohammed, Abdul Aziz; Pauzi, Anas Muhamad; Rahman, Shaik Mohmmed Haikhal Abdul; Zin, Muhamad Rawi Muhammad; Jamro, Rafhayudi; Idris, Faridah Mohamad
2016-01-01
In confronting the global energy requirement and the search for better technologies, there is a real case for widening the range of potential variations in the design of nuclear power plants. Smaller and simpler reactors are attractive, provided they can meet safety and security standards and non-proliferation issues. On the fuel cycle aspect, thorium fuel cycles produce much less plutonium and other radioactive transuranic elements than uranium fuel cycles. Although not fissile itself, Th-232 will absorb slow neutrons to produce uranium-233 (U-233), which is fissile. By introducing thorium, the number of highly enriched uranium fuel elements can be reduced while maintaining the core neutronic performance. This paper describes the core kinetics of a small research reactor core like TRIGA fueled with a Th-filled fuel element matrix using the general purpose Monte Carlo N-Particle (MCNP) code.
An approach to secure weather and climate models against hardware faults
NASA Astrophysics Data System (ADS)
Düben, Peter D.; Dawson, Andrew
2017-03-01
Enabling Earth System models to run efficiently on future supercomputers is a serious challenge for model development. Many publications study efficient parallelization to allow better scaling of performance on an increasing number of computing cores. However, one of the most alarming threats for weather and climate predictions on future high performance computing architectures is widely ignored: the presence of hardware faults that will frequently hit large applications as we approach exascale supercomputing. Changes in the structure of weather and climate models that would allow them to be resilient against hardware faults are hardly discussed in the model development community. In this paper, we present an approach to secure the dynamical core of weather and climate models against hardware faults using a backup system that stores coarse resolution copies of prognostic variables. Frequent checks of the model fields on the backup grid allow the detection of severe hardware faults, and prognostic variables that are changed by hardware faults on the model grid can be restored from the backup grid to continue model simulations with no significant delay. To justify the approach, we perform model simulations with a C-grid shallow water model in the presence of frequent hardware faults. As long as the backup system is used, simulations do not crash and a high level of model quality can be maintained. The overhead due to the backup system is reasonable and additional storage requirements are small. Runtime is increased by only 13 % for the shallow water model.
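The backup-grid mechanism is simple enough to sketch. Below is a minimal numpy illustration: keep a coarse copy of a prognostic field, periodically check the fine field against it, and restore from the coarse copy when a large mismatch signals a fault. The grid sizes, the 2x coarsening factor, the tolerance, and the nearest-neighbor refinement are illustrative choices, not the paper's implementation.

```python
# Backup-grid sketch: detect a silent hardware fault in a prognostic field
# and restore it from a stored coarse-resolution copy.
import numpy as np

def coarsen(field):
    """Average 2x2 blocks of the model grid onto the backup grid."""
    ny, nx = field.shape
    return field.reshape(ny // 2, 2, nx // 2, 2).mean(axis=(1, 3))

def check_and_restore(field, backup, tol=1.0):
    """Compare against the backup; on a large mismatch, rebuild the fine
    field from the coarse copy (nearest-neighbor refinement for brevity)."""
    if np.max(np.abs(coarsen(field) - backup)) > tol:
        return np.kron(backup, np.ones((2, 2))), True   # restored field
    return field, False

h = np.full((64, 64), 100.0)     # a prognostic variable, e.g. layer height
backup = coarsen(h)              # the stored coarse-resolution copy

h[10, 17] = 1.0e9                # inject a bit-flip-like hardware fault
h, restored = check_and_restore(h, backup)
print("fault detected and field restored:", restored)
```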
Tudek, John; Crandall, Dustin; Fuchs, Samantha; ...
2017-01-30
Three techniques to measure and understand the contact angle, θ, of a CO2/brine/rock system relevant to geologic carbon storage were performed with Mount Simon sandstone. Traditional sessile drop measurements of CO2/brine on the sample were conducted and a water-wet system was observed, as is expected. A novel series of measurements inside of a Mount Simon core, using a micro X-ray computed tomography imaging system with the ability to scan samples at elevated pressures, was used to examine the θ of residual bubbles of CO2. Within the sandstone core the matrix appeared to be neutrally wetting, with an average θ around 90°. A large standard deviation of θ (20.8°) within the core was also observed. To resolve this discrepancy between experimental measurements, a series of Lattice Boltzmann model simulations were performed with differing intrinsic θ values. The model results with θ = 80° were shown to match the core measurements closely, in both magnitude and variation. The small volume and complex geometry of the pore spaces in which CO2 was trapped is the most likely explanation of this discrepancy between measured values, though further work is warranted.
Confirmation of a realistic reactor model for BNCT dosimetry at the TRIGA Mainz
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ziegner, Markus, E-mail: Markus.Ziegner.fl@ait.ac.at; Schmitz, Tobias; Hampel, Gabriele
2014-11-01
Purpose: In order to build up a reliable dose monitoring system for boron neutron capture therapy (BNCT) applications at the TRIGA reactor in Mainz, a computer model of the entire reactor was established, simulating the radiation field by means of the Monte Carlo method. The impact of different source definition techniques was compared and the model was validated by experimental fluence and dose determinations. Methods: The depletion calculation code ORIGEN2 was used to compute the burn-up and relevant material composition of each burned fuel element from the day of first reactor operation to the current core. The material composition of the current core was used in an MCNP5 model of the initial core developed earlier. To perform calculations for the region outside the reactor core, the model was expanded to include the thermal column and compared with the previously established ATTILA model. Subsequently, the computational model was simplified in order to reduce the calculation time. Both simulation models were validated by experiments with different setups using alanine dosimetry and gold activation measurements with two different types of phantoms. Results: The MCNP5-simulated neutron spectrum and source strength are found to be in good agreement with the previous ATTILA model, whereas the photon production is much lower. Both MCNP5 simulation models predict all experimental dose values with an accuracy of about 5%. The simulations reveal that a Teflon environment favorably reduces the gamma dose component as compared to a polymethyl methacrylate phantom. Conclusions: A computer model for BNCT dosimetry was established, allowing the prediction of dosimetric quantities without further calibration and within a reasonable computation time for clinical applications. The good agreement between the MCNP5 simulations and experiments demonstrates that the ATTILA model overestimates the gamma dose contribution. The detailed model can be used for the planning of structural modifications in the thermal column irradiation channel or the use of irradiation sites other than the thermal column, e.g., the beam tubes.
Mitigating IASCC of Reactor Core Internals by Post-Irradiation Annealing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Was, Gary
This final report summarizes research performed between September 2012 and December 2016 with the objective of establishing the effectiveness of post-irradiation annealing (PIA) as an advanced mitigation strategy for irradiation-assisted stress corrosion cracking (IASCC). This was accomplished by using irradiated 304SS control blade material to conduct crack initiation and crack growth rate (CGR) experiments in a simulated BWR environment. The mechanism by which PIA affects IASCC susceptibility was also verified. The success of this project provides a foundation for the use of PIA as a mitigation strategy for core internal components in commercial reactors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ross, Kyle W.; Gauntt, Randall O.; Cardoni, Jeffrey N.
2013-11-01
Data, a brief description of key boundary conditions, and results of Sandia National Laboratories’ ongoing MELCOR analysis of the Fukushima Unit 2 accident are given for the reactor core isolation cooling (RCIC) system. Important assumptions and related boundary conditions in the current analysis additional to or different than what was assumed/imposed in the work of SAND2012-6173 are identified. This work is for the U.S. Department of Energy’s Nuclear Energy University Programs fiscal year 2014 Reactor Safety Technologies Research and Development Program RC-7: RCIC Performance under Severe Accident Conditions.
High-contrast grating hollow-core waveguide splitter applied to optical phased array
NASA Astrophysics Data System (ADS)
Zhao, Che; Xue, Ping; Zhang, Hanxing; Chen, Te; Peng, Chao; Hu, Weiwei
2014-11-01
A novel hollow-core waveguide (HW) Y-branch splitter based on a high-contrast grating (HCG) is presented. We calculated and designed the HCG-HW splitter using Rigorous Coupled Wave Analysis (RCWA). Finite-difference time-domain (FDTD) simulation shows that the splitter has a broad bandwidth and a branching loss as low as 0.23 dB. Fabrication is accomplished with a standard Silicon-On-Insulator (SOI) process. The experimental measurement results indicate good beam-splitting performance near the central wavelength λ = 1550 nm, with a total insertion loss of 7.0 dB.
Improved models of stellar core collapse and still no explosions: what is missing?
Buras, R; Rampp, M; Janka, H-Th; Kifonidis, K
2003-06-20
Two-dimensional hydrodynamic simulations of stellar core collapse are presented which, for the first time, were performed by solving the Boltzmann equation for the neutrino transport, including a state-of-the-art description of neutrino interactions. Stellar rotation is also taken into account. Although convection develops below the neutrinosphere and in the neutrino-heated region behind the supernova shock, the models do not explode. This suggests missing physics, possibly with respect to the nuclear equation of state and weak interactions in the subnuclear regime. However, it might also indicate a fundamental problem with the neutrino-driven explosion mechanism.
Low power test architecture for dynamic read destructive fault detection in SRAM
NASA Astrophysics Data System (ADS)
Takher, Vikram Singh; Choudhary, Rahul Raj
2018-06-01
Dynamic Read Destructive Faults (dRDF) are the outcome of resistive open defects in the core cells of static random-access memories (SRAMs). Sensitising a dRDF involves either performing multiple read operations on the core cell under test or creating a number of read equivalent stresses (RES). Though the creation of RES is preferred over performing multiple read operations, the cell dissipates more power during an RES than during a read or write operation. This paper focuses on reducing power dissipation by optimising the number of RESs required to sensitise the dRDF during the test mode of SRAM operation. A novel pre-charge architecture is proposed in order to reduce power dissipation by limiting the number of RESs to an optimised number of two. Simulation and analysis of the proposed low-power architecture show that reducing the number of RESs cuts power dissipation by up to 18.18%.
Optimization of burnable poison design for Pu incineration in fully fertile free PWR core
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fridman, E.; Shwageraus, E.; Galperin, A.
2006-07-01
The design challenges of fertile-free fuel (FFF) can be addressed by careful and elaborate use of burnable poisons (BP). A practical fully-FFF core design for a PWR has been reported in the past [1]. However, the burnable poison option used in that design resulted in a significant end-of-cycle reactivity penalty due to incomplete BP depletion. Consequently, excessive Pu loadings were required to maintain the target fuel cycle length, which in turn decreased the Pu burning efficiency. A systematic evaluation of commercially available BP materials in all configurations currently used in PWRs is the main objective of this work. The BP materials considered are boron, Gd, Er, and Hf. The BP geometries were based on Wet Annular Burnable Absorber (WABA), Integral Fuel Burnable Absorber (IFBA), and homogeneous poison/fuel mixtures. Several of the most promising combinations of BP designs were selected for full-core 3D simulation. All major core performance parameters for the analyzed cases are very close to those of a standard PWR with conventional UO2 fuel, including the possibility of reactivity control, power peaking factors, and cycle length. The MTC of all FFF cores at full-power conditions was, at all times, very close to that of the UO2 core. The Doppler coefficient of the FFF cores is also negative but somewhat lower in magnitude compared to the UO2 core. The soluble boron worth of the FFF cores was calculated to be lower than that of the UO2 core by about a factor of two, which still allows core reactivity control with acceptable soluble boron concentrations. The main conclusion of this work is that judicious application of burnable poisons to fertile-free fuel has the potential to produce a core design with performance characteristics close to those of the reference PWR core with conventional UO2 fuel. (authors)
NASA Technical Reports Server (NTRS)
Lee, Y. H.; Lamarque, J.-F.; Flanner, M. G.; Jiao, C.; Shindell, D. T.; Bernsten, T.; Bisiaux, M. M.; Cao, J.; Collins, W. J.; Curran, M.;
2013-01-01
As part of the Atmospheric Chemistry and Climate Model Intercomparison Project (ACCMIP), we evaluate the historical black carbon (BC) aerosols simulated by 8 ACCMIP models against observations including 12 ice core records, long-term surface mass concentrations, and recent Arctic BC snowpack measurements. We also estimate BC albedo forcing by performing additional simulations using offline models with prescribed meteorology from 1996-2000. We evaluate the vertical profile of BC snow concentrations from these offline simulations using the recent BC snowpack measurements. Despite using the same BC emissions, the global BC burden differs by approximately a factor of 3 among models due to differences in aerosol removal parameterizations and simulated meteorology: 34 Gg to 103 Gg in 1850 and 82 Gg to 315 Gg in 2000. However, the global BC burden from preindustrial to present-day increases by 2.5-3 times with little variation among models, roughly matching the 2.5-fold increase in total BC emissions during the same period. We find a large divergence among models at both Northern Hemisphere (NH) and Southern Hemisphere (SH) high latitude regions for BC burden and at SH high latitude regions for deposition fluxes. The ACCMIP simulations match the observed BC surface mass concentrations well in Europe and North America except at Ispra. However, the models fail to predict the Arctic BC seasonality due to severe underestimations during winter and spring. The simulated vertically resolved BC snow concentrations are, on average, within a factor of 2-3 of the BC snowpack measurements except for Greenland and the Arctic Ocean. For the ice core evaluation, models tend to adequately capture both the observed temporal trends and the magnitudes at Greenland sites. However, models fail to predict the decreasing trend of BC depositions/ice core concentrations from the 1950s to the 1970s in most Tibetan Plateau ice cores. The distinct temporal trend at the Tibetan Plateau ice cores indicates a strong influence from Western Europe, but the modeled BC increases in that period are consistent with the emission changes in Eastern Europe, the Middle East, South and East Asia. At the Alps site, the simulated BC suggests a strong influence from Europe, which agrees with the Alps ice core observations. At Zuoqiupu on the Tibetan Plateau, models successfully simulate the higher BC concentrations observed during the non-monsoon season compared to the monsoon season but overpredict BC in both seasons. Despite a large divergence in BC deposition at two Antarctic ice core sites, some models with a BC lifetime of less than 7 days are able to capture the observed concentrations. In 2000 relative to 1850, globally and annually averaged BC surface albedo forcing from the offline simulations ranges from 0.014 to 0.019 W m-2 among the ACCMIP models. Comparing offline and online BC albedo forcings computed by some of the same models, we find that the global annual mean can vary by up to a factor of two because of different aerosol models or different BC-snow parameterizations and snow cover. The spatial distributions of the offline BC albedo forcing in 2000 show especially high BC forcing (i.e., over 0.1 W m-2) over Manchuria, Karakoram, and most of the Former USSR. Models predict the highest global annual mean BC forcing in 1980 rather than 2000, mostly driven by the high fossil fuel and biofuel emissions in the Former USSR in 1980.
NASA Astrophysics Data System (ADS)
Liu, Tianyu; Wolfe, Noah; Lin, Hui; Zieb, Kris; Ji, Wei; Caracappa, Peter; Carothers, Christopher; Xu, X. George
2017-09-01
This paper contains two parts revolving around Monte Carlo transport simulation on Intel Many Integrated Core coprocessors (MIC, also known as Xeon Phi). (1) MCNP 6.1 was recompiled into multithreading (OpenMP) and multiprocessing (MPI) forms respectively without modification to the source code. The new codes were tested on a 60-core 5110P MIC. The test case was FS7ONNi, a radiation shielding problem used in MCNP's verification and validation suite. It was observed that both codes became slower on the MIC than on a 6-core X5650 CPU, by a factor of 4 for the MPI code and, abnormally, 20 for the OpenMP code, and both exhibited limited capability of strong scaling. (2) We have recently added a Constructive Solid Geometry (CSG) module to our ARCHER code to provide better support for geometry modelling in radiation shielding simulation. The functions of this module are frequently called in the particle random walk process. To identify the performance bottleneck we developed a CSG proxy application and profiled the code using the geometry data from FS7ONNi. The profiling data showed that the code was primarily memory latency bound on the MIC. This study suggests that despite the low initial porting effort, Monte Carlo codes do not naturally lend themselves to the MIC platform, just as with GPUs, and that the memory latency problem needs to be addressed in order to achieve a decent performance gain.
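The latency-bound behavior traces to the particle random-walk pattern itself: every step gathers cell indices and cross-section data addressed by the particle's current position, which defeats caches and prefetchers. A minimal Python sketch of that access pattern, using a toy one-dimensional forward-streaming shield with hypothetical cross sections (not MCNP or ARCHER, and with cell-boundary crossings deliberately simplified), is:

    import numpy as np

    rng = np.random.default_rng(42)
    cell_edges = np.array([0.0, 5.0, 10.0, 20.0])  # three slab cells, cm (toy values)
    sigma_t = np.array([0.30, 0.05, 0.20])         # total cross sections, 1/cm
    absorb_p = np.array([0.50, 0.20, 0.70])        # absorption probability per collision

    def history():
        """Follow one forward-streaming particle; return 1 if it leaks through."""
        x = 0.0
        while x < cell_edges[-1]:
            # Data-dependent gathers: cell lookup, then indexed cross-section reads.
            cell = np.searchsorted(cell_edges, x, side="right") - 1
            x += rng.exponential(1.0 / sigma_t[cell])  # flight length in current cell
            # Simplification: real trackers stop at cell boundaries; here a flight
            # that crosses a boundary keeps the starting cell's cross sections.
            if x < cell_edges[-1] and rng.random() < absorb_p[cell]:
                return 0  # absorbed at the collision site
        return 1          # leaked through the far surface

    n = 20000
    print("transmission estimate:", sum(history() for _ in range(n)) / n)

Each history makes short, unpredictable, indexed reads, which is the kind of access a latency-oriented core hides well and an in-order wide-vector core does not.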
DOE Office of Scientific and Technical Information (OSTI.GOV)
Druinsky, Alex; Ghysels, Pieter; Li, Xiaoye S.
In this paper, we study the performance of a two-level algebraic-multigrid algorithm, with a focus on the impact of the coarse-grid solver on performance. We consider two algorithms for solving the coarse-space systems: the preconditioned conjugate gradient method and a new robust HSS-embedded low-rank sparse-factorization algorithm. Our test data comes from the SPE Comparative Solution Project for oil-reservoir simulations. We contrast the performance of our code on one 12-core socket of a Cray XC30 machine with performance on a 60-core Intel Xeon Phi coprocessor. To obtain top performance, we optimized the code to take full advantage of fine-grained parallelism and made it thread-friendly for high thread counts. We also developed a bounds-and-bottlenecks performance model of the solver, which we used to guide the optimization effort, and carried out performance tuning in the solver's large parameter space. As a result, significant speedups were obtained on both machines.
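The two-level cycle itself is compact: smooth on the fine grid, restrict the residual, solve the coarse system (the step whose solver choice this paper examines), then prolong the correction and smooth again. A minimal sketch on a 1D Poisson model problem, with a dense direct solve standing in for the PCG or HSS-embedded coarse solver and with all sizes and sweep counts illustrative:

    import numpy as np

    n = 255                                    # fine-grid unknowns (odd for clean coarsening)
    h = 1.0 / (n + 1)
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    b = np.ones(n)

    def smooth(x, sweeps=3, omega=0.8):
        D = np.diag(A)
        for _ in range(sweeps):
            x = x + omega * (b - A @ x) / D    # damped Jacobi smoothing
        return x

    nc = (n - 1) // 2                          # coarse-grid unknowns
    P = np.zeros((n, nc))                      # linear-interpolation prolongation
    for j in range(nc):
        i = 2 * j + 1
        P[i - 1, j], P[i, j], P[i + 1, j] = 0.5, 1.0, 0.5
    R = 0.5 * P.T                              # restriction as scaled transpose
    Ac = R @ A @ P                             # Galerkin coarse-grid operator

    x = np.zeros(n)
    for cycle in range(10):
        x = smooth(x)                          # pre-smoothing
        ec = np.linalg.solve(Ac, R @ (b - A @ x))  # coarse solve (the solver under study)
        x = smooth(x + P @ ec)                 # correction + post-smoothing
        print(cycle, np.linalg.norm(b - A @ x))

The coarse solve is the only globally coupled step, which is why its cost and scalability dominate once the smoother is cheap and parallel.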
Ross, Matthew; Andersen, Amity; Fox, Zachary W; Zhang, Yu; Hong, Kiryong; Lee, Jae-Hyuk; Cordones, Amy; March, Anne Marie; Doumy, Gilles; Southworth, Stephen H; Marcus, Matthew A; Schoenlein, Robert W; Mukamel, Shaul; Govind, Niranjan; Khalil, Munira
2018-05-17
We present a joint experimental and computational study of the hexacyanoferrate aqueous complexes at equilibrium in the 250 meV to 7.15 keV regime. The experiments and the computations include the vibrational spectroscopy of the cyanide ligands, the valence electronic absorption spectra, and Fe 1s core hole spectra using element-specific-resonant X-ray absorption and emission techniques. Density functional theory-based quantum mechanics/molecular mechanics molecular dynamics simulations are performed to generate explicit solute-solvent configurations, which serve as inputs for the spectroscopy calculations of the experiments spanning the IR to X-ray wavelengths. The spectroscopy simulations are performed at the same level of theory across this large energy window, which allows for a systematic comparison of the effects of explicit solute-solvent interactions in the vibrational, valence electronic, and core-level spectra of hexacyanoferrate complexes in water. Although the spectroscopy of hexacyanoferrate complexes in solution has been the subject of several studies, most of the previous works have focused on a narrow energy window and have not accounted for explicit solute-solvent interactions in their spectroscopy simulations. In this work, we focus our analysis on identifying how the local solvation environment around the hexacyanoferrate complexes influences the intensity and line shape of specific spectroscopic features in the UV/vis, X-ray absorption, and valence-to-core X-ray emission spectra. The identification of these features and their relationship to solute-solvent interactions is important because hexacyanoferrate complexes serve as model systems for understanding the photochemistry and photophysics of a large class of Fe(II) and Fe(III) complexes in solution.
System-level protection and hardware Trojan detection using weighted voting.
Amin, Hany A M; Alkabani, Yousra; Selim, Gamal M I
2014-07-01
The problem of hardware Trojans is becoming more serious, especially with the spread of fabless design houses and design reuse. Hardware Trojans can be embedded on chip during manufacturing or in third party intellectual property cores (IPs) during the design process. Recent research detects Trojans embedded at manufacturing time by comparing the suspected chip with a golden chip that is fully trusted. However, Trojan detection in third party IP cores is more challenging than in other logic modules, especially since there is no golden chip. This paper proposes a new methodology to detect/prevent hardware Trojans in third party IP cores. The method works by gradually building trust in suspected IP cores by comparing the outputs of different untrusted implementations of the same IP core. Simulation results show that our method achieves a higher probability of Trojan detection than a naive implementation of simple voting on the outputs of different IP cores. In addition, experimental results show that the proposed method requires less hardware overhead than a simple voting technique achieving the same degree of security.
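The trust-building idea can be sketched independently of any particular IP: run redundant untrusted implementations on the same inputs, forward the weighted-majority output, and demote implementations that dissent, so that a Trojan that triggers in one vendor's core is gradually out-voted. The multiplicative weight update below is illustrative, not the paper's exact rule:

    from collections import defaultdict

    def weighted_vote(outputs, weights):
        """Return the output value with the largest total weight behind it."""
        tally = defaultdict(float)
        for out, w in zip(outputs, weights):
            tally[out] += w
        return max(tally, key=tally.get)

    def update_weights(outputs, weights, winner, beta=0.9):
        # Demote cores that disagreed with the weighted majority.
        return [w if out == winner else w * beta
                for out, w in zip(outputs, weights)]

    weights = [1.0, 1.0, 1.0]                     # three untrusted implementations of one IP
    stream = [(7, 7, 7), (7, 7, 0), (3, 3, 0)]    # per-cycle outputs (toy; core 2 misbehaves)
    for cycle, outputs in enumerate(stream):
        winner = weighted_vote(outputs, weights)  # value forwarded to the system
        weights = update_weights(outputs, weights, winner)
        print(cycle, winner, [round(w, 2) for w in weights])

Because the forwarded value follows the accumulated weights rather than a simple head count, a core with a history of disagreement loses influence even when the instantaneous vote is close.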
NASA Astrophysics Data System (ADS)
Crocker, N. A.; Tritz, K.; White, R. B.; Fredrickson, E. D.; Gorelenkov, N. N.; NSTX-U Team
2015-11-01
New simulation results demonstrate that high frequency compressional (CAE) and global (GAE) Alfvén eigenmodes cause radial convection of electrons, with implications for particle and energy confinement, as well as electric field formation in NSTX-U. Simulations of electron orbits in the presence of multiple experimentally determined CAEs and GAEs, using the gyro-center code ORBIT, have revealed substantial convective transport, in addition to the expected diffusion via orbit stochastization. These results advance understanding of anomalous core energy transport expected in high performance, beam-heated NSTX-U plasmas. The simulations make use of experimentally determined density perturbation (δn) amplitudes and mode structures obtained by inverting measurements from a 16-channel reflectometer array using a synthetic diagnostic. Combined with experimentally determined mode polarizations (i.e. CAE or GAE), the δn are used to estimate the ExB displacements for use in ORBIT. Preliminary comparison of the simulation results with transport modeling by TRANSP indicates that the convection is currently underestimated. Supported by US DOE Contracts DE-SC0011810, DE-FG02-99ER54527 & DE-AC02-09CH11466.
C-Mod MHD stability analysis with LHCD
NASA Astrophysics Data System (ADS)
Ebrahimi, Fatima; Bhattacharjee, A.; Delgado, L.; Scott, S.; Wilson, J. R.; Wallace, G. M.; Shiraiwa, S.; Mumgaard, R. T.
2016-10-01
In lower hybrid current drive (LHCD) experiments on Alcator C-Mod, sawtooth activity could be suppressed as the safety factor q on axis was raised above unity. However, in some of these experiments, after applying LHCD, the onset of MHD mode activity caused the current drive efficiency to drop significantly. Here, we study the stability of these experiments by performing MHD simulations using the NIMROD code, starting with experimental EFIT equilibria. First, consistent with the LHCD experiment showing no signature of MHD activity, MHD mode activity was also absent in the simulations. Second, for experiments with MHD mode activity, we find that a core n=1 reconnecting mode with dominant poloidal modes m=2,3 is unstable. This mode is a resistive current-driven mode, as its growth rate scales with a negative power of the Lundquist number in the simulations. In addition, with a further enhanced reversed-shear q profile in the simulations, a core double tearing mode is found to be unstable. This work is supported by U.S. DOE cooperative agreement DE-FC02-99ER54512 using the Alcator C-Mod tokamak, a DOE Office of Science user facility.
Coilable Crystalline Fiber (CCF) Lasers and their Scalability
2014-03-01
Three-dimensional modeling of direct-drive cryogenic implosions on OMEGA
Igumenshchev, I. V.; Goncharov, V. N.; Marshall, F. J.; ...
2016-05-04
The effects of large-scale (Legendre modes ≲10) laser-imposed nonuniformities in direct-drive cryogenic implosions on the OMEGA laser system are investigated using three-dimensional hydrodynamic simulations performed with the newly developed code ASTER. Sources of these nonuniformities include the illumination pattern produced by the 60 OMEGA laser beams, capsule offsets (~10 to 20 μm), and imperfect pointing, energy balance, and timing of the beams (with typical σ rms ~10 μm, 10%, and 5 ps, respectively). Two implosion designs using 26-kJ triple-picket laser pulses were studied: a nominal design, in which an 880-μm-diameter capsule is illuminated by same-diameter beams, and an "R75" design using a capsule of 900 μm in diameter and beams of 75% of this diameter. Simulations found that nonuniformities due to capsule offsets and beam imbalance have the largest effect on implosion performance. These nonuniformities lead to significant distortions of implosion cores, resulting in an incomplete stagnation. The shape of distorted cores is well represented by neutron images, but only loosely in x rays. Simulated neutron spectra from perturbed implosions show large directional variations and up to ~2 keV variation of the hot-spot temperature inferred from these spectra. The R75 design is more hydrodynamically efficient because of mitigation of crossed-beam energy transfer, but it also suffers more from the nonuniformities. Furthermore, simulations predict a performance advantage of this design over the nominal design when the target offset and beam imbalance σ rms are reduced to less than 5 μm and 5%, respectively.
Cool Core Clusters from Cosmological Simulations
NASA Astrophysics Data System (ADS)
Rasia, E.; Borgani, S.; Murante, G.; Planelles, S.; Beck, A. M.; Biffi, V.; Ragone-Figueroa, C.; Granato, G. L.; Steinborn, L. K.; Dolag, K.
2015-11-01
We present results obtained from a set of cosmological hydrodynamic simulations of galaxy clusters, aimed at comparing predictions with observational data on the diversity between cool-core (CC) and non-cool-core (NCC) clusters. Our simulations include the effects of stellar and active galactic nucleus (AGN) feedback and are based on an improved version of the smoothed particle hydrodynamics code GADGET-3, which ameliorates gas mixing and better captures gas-dynamical instabilities by including a suitable artificial thermal diffusion. In this Letter, we focus our analysis on the entropy profiles, the primary diagnostic we used to classify the degree of cool-coreness of clusters, and the iron profiles. In keeping with observations, our simulated clusters display a variety of behaviors in entropy profiles: they range from steadily decreasing profiles at small radii, characteristic of CC systems, to nearly flat core isentropic profiles, characteristic of NCC systems. Using observational criteria to distinguish between the two classes of objects, we find that they occur in similar proportions in both simulations and observations. Furthermore, we also find that simulated CC clusters have profiles of iron abundance that are steeper than those of NCC clusters, which is also in agreement with observational results. We show that the capability of our simulations to generate a realistic CC structure in the cluster population is due to AGN feedback and artificial thermal diffusion: their combined action allows us to naturally distribute the energy extracted from super-massive black holes and to compensate for the radiative losses of low-entropy gas with short cooling time residing in the cluster core.
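As a concrete illustration of the entropy-profile diagnostic, one can classify a cluster by the inner logarithmic slope of K(r): CC systems keep falling toward the center while NCC systems flatten onto an isentropic core. The threshold and profiles below are hypothetical (published criteria often use the central entropy value instead of a slope):

    import numpy as np

    def classify(r, K, r_core=0.1, slope_min=0.5):
        """Label a cluster CC/NCC from the log-log entropy slope inside r_core."""
        inner = r < r_core                     # radii in units of R500 (assumed)
        slope = np.polyfit(np.log(r[inner]), np.log(K[inner]), 1)[0]
        return ("CC" if slope > slope_min else "NCC"), slope

    r = np.logspace(-2, 0, 50)                 # r / R500
    K_cc = 80.0 * r**1.1                       # steadily decreasing entropy profile
    K_ncc = 150.0 * (1.0 + (r / 0.3)**1.1)     # nearly flat isentropic core
    for name, K in (("cluster A", K_cc), ("cluster B", K_ncc)):
        label, s = classify(r, K)
        print(name, label, round(s, 2))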
Evaluation of HFIR LEU Fuel Using the COMSOL Multiphysics Platform
DOE Office of Scientific and Technical Information (OSTI.GOV)
Primm, Trent; Ruggles, Arthur; Freels, James D
2009-03-01
A finite element computational approach to simulation of the High Flux Isotope Reactor (HFIR) core thermal-fluid behavior is developed. These models were developed to facilitate design of a low-enriched core for the HFIR, which will have different axial and radial flux profiles from the current HEU core and thus will require fuel and poison load optimization. This report outlines a stepwise implementation of this modeling approach using the commercial finite element code COMSOL, with an initial assessment of fuel, poison, and clad conduction modeling capability, followed by assessment of mating the fuel conduction models to a one-dimensional fluid model typical of legacy simulation techniques for the HFIR core. The model is then extended to fully couple 2-dimensional conduction in the fuel to a 2-dimensional thermo-fluid model of the coolant for a HFIR core cooling sub-channel, with additional assessment of simulation outcomes. Finally, 3-dimensional simulations of a fuel plate and cooling channel are presented.
Simulating the minimum core for hydrophobic collapse in globular proteins.
Tsai, J.; Gerstein, M.; Levitt, M.
1997-01-01
To investigate the nature of hydrophobic collapse considered to be the driving force in protein folding, we have simulated aqueous solutions of two model hydrophobic solutes, methane and isobutylene. Using a novel methodology for determining contacts, we can precisely follow hydrophobic aggregation as it proceeds through three stages: dispersed, transition, and collapsed. Theoretical modeling of the cluster formation observed by simulation indicates that this aggregation is cooperative and that the simulations favor the formation of a single cluster midway through the transition stage. This defines a minimum solute hydrophobic core volume. We compare this with protein hydrophobic core volumes determined from solved crystal structures. Our analysis shows that the solute core volume roughly estimates the minimum core size required for independent hydrophobic stabilization of a protein and defines a limiting concentration of nonpolar residues that can cause hydrophobic collapse. These results suggest that the physical forces driving aggregation of hydrophobic molecules in water are indeed responsible for protein folding. PMID:9416609
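Once a contact criterion is chosen, following the three aggregation stages reduces to grouping solutes into connected clusters frame by frame. The sketch below uses a plain distance cutoff with union-find as a stand-in for the paper's more precise contact methodology; positions and the cutoff are synthetic:

    import numpy as np

    def cluster_sizes(pos, cutoff=0.55):
        """Union-find clustering of solutes whose centers lie within `cutoff`."""
        n = len(pos)
        parent = list(range(n))
        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]   # path halving
                i = parent[i]
            return i
        for i in range(n):
            for j in range(i + 1, n):
                if np.linalg.norm(pos[i] - pos[j]) < cutoff:
                    parent[find(i)] = find(j)   # merge contacting solutes
        sizes = {}
        for i in range(n):
            root = find(i)
            sizes[root] = sizes.get(root, 0) + 1
        return sorted(sizes.values(), reverse=True)

    rng = np.random.default_rng(1)
    pos = rng.uniform(0.0, 3.0, size=(60, 3))    # nm; a dispersed-stage snapshot
    print("cluster sizes:", cluster_sizes(pos))  # one dominant cluster signals collapse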
NASA Astrophysics Data System (ADS)
Joglekar, Prasad; Shastry, Karthik; Satyal, Suman; Weiss, Alexander
2011-10-01
Time of Flight Positron Annihilation Induced Auger Electron Spectroscopy (T-O-F PAES) is a highly surface-selective analytical technique in which elemental identification is accomplished through a measurement of the flight-time distributions of Auger electrons resulting from the annihilation of core electrons by positrons. The SIMION charged-particle optics simulation software was used to model the trajectories of both the incident positrons and the outgoing electrons in our existing T-O-F PAES system, as well as in a new system currently under construction in our laboratory. The implications of these simulations for the instrument design and performance are discussed.
Comparison of a 3-D DEM simulation with MRI data
NASA Astrophysics Data System (ADS)
Ng, Tang-Tat; Wang, Changming
2001-04-01
This paper presents a comparison of a granular material studied experimentally and numerically. Simple shear tests were performed inside the magnetic core of magnetic resonance imaging (MRI) equipment. Spherical pharmaceutical pills were used as the granular material, with each pill's centre location determined by MRI. These centre locations in the initial assembly were then used as the initial configuration in the numerical simulation using the discrete element method. The contact properties between pharmaceutical pills used in the numerical simulation were obtained experimentally. The numerical prediction was compared with experimental data at both macroscopic and microscopic levels. Good agreement was found at both levels.
Abraham, Mark James; Murtola, Teemu; Schulz, Roland; ...
2015-07-15
GROMACS is one of the most widely used open-source and free software codes in chemistry, used primarily for dynamical simulations of biomolecules. It provides a rich set of calculation types, preparation and analysis tools. Several advanced techniques for free-energy calculations are supported. In version 5, it reaches new performance heights through several new and enhanced parallelization algorithms. These work on every level: SIMD registers inside cores, multithreading, heterogeneous CPU–GPU acceleration, state-of-the-art 3D domain decomposition, and ensemble-level parallelization through built-in replica exchange and the separate Copernicus framework. Finally, the latest best-in-class compressed trajectory storage format is supported.
NASA Astrophysics Data System (ADS)
Hamann, Ilse; Arnault, Joel; Bliefernicht, Jan; Klein, Cornelia; Heinzeller, Dominikus; Kunstmann, Harald
2014-05-01
Changing climate and hydro-meteorological boundary conditions are among the most severe challenges to Africa in the 21st century. In particular, West Africa faces an urgent need to develop effective adaptation and mitigation strategies to cope with negative impacts on humans and environment due to climate change, increased hydro-meteorological variability and land use changes. To help meet these challenges, the German Federal Ministry of Education and Research (BMBF) started an initiative with institutions in Germany and West African countries to establish together a West African Science Service Center on Climate Change and Adapted Land Use (WASCAL). This activity is accompanied by the establishment of trans-boundary observation networks, an interdisciplinary core research program and graduate research programs on climate change and related issues for strengthening the analytical capabilities of the Science Service Center. A key research activity of the WASCAL Competence Center is the provision of regional climate simulations in a fine spatio-temporal resolution for the core research sites of WASCAL for the present and the near future. The climate information is needed for subsequent local climate impact studies in agriculture, water resources and further socio-economic sectors. The simulation experiments are performed using regional climate models such as COSMO-CLM, RegCM and WRF, and statistical techniques for a further refinement of the projections. The core research sites of WASCAL are located in the Sudanian Savannah belt in Northern Ghana, Southern Burkina Faso and Northern Benin. The climate in this region is semi-arid with six rainy months. Due to the strong population growth in West Africa, many areas of the Sudanian Savannah have already been converted to farmland, since the majority of the people live directly or indirectly from the income produced in agriculture. The simulation experiments of the Competence Center and the Core Research Program are accompanied by the WASCAL Graduate Research Program on the West African Climate System (GRP-WACS), which provides ten scholarships per year for West African PhD students with a duration of three years. Present and future WASCAL PhD students will constitute one important user group of the Linux cluster that will be installed at the Competence Center in Ouagadougou, Burkina Faso. A key research activity of the WASCAL Core Research Program is the analysis of interactions between the land surface and the atmosphere, to investigate how land surface changes affect hydro-meteorological surface fluxes such as evapotranspiration. Since current land surface models of global and regional climate models neglect dominant lateral hydrological processes such as surface runoff, a novel land surface model is used, the NCAR Distributed Hydrological Modeling System (NDHMS). This model can be coupled to WRF (WRF-Hydro) to perform two-way coupled atmospheric-hydrological simulations for the watershed of interest. Hardware and network prerequisites include an HPC cluster, network switches, internal storage media, and Internet connectivity of sufficient bandwidth. Competences needed are HPC, storage, and visualization systems optimized for climate research, parallelization and optimization of climate models and workflows, and efficient management of very large data volumes.
Edge-core interaction of ITG turbulence in Tokamaks: Is the Tail Wagging the Dog?
NASA Astrophysics Data System (ADS)
Ku, S.; Chang, C. S.; Dif-Pradalier, G.; Diamond, P. H.
2010-11-01
A full-f XGC1 gyrokinetic simulation of ITG turbulence, together with the neoclassical dynamics without scale separation, has been performed for the whole-volume plasma in realistic diverted DIII-D geometry. The simulation revealed that the global structure of the turbulence and transport in tokamak plasmas results from a synergy between edge-driven inward propagation of turbulence intensity and core-driven outward heat transport. The global ion confinement and the ion temperature gradient then self-organize quickly on the turbulence-propagation time scale. This synergy results in inward-outward pulse scattering, leading to the spontaneous production of strong internal shear layers in which turbulent transport is almost suppressed over several radial correlation lengths. Co-existence of the edge turbulence source and the strong internal shear layer leads to radially increasing turbulence intensity and ion thermal transport profiles.
A Petascale Non-Hydrostatic Atmospheric Dynamical Core in the HOMME Framework
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tufo, Henry
The High-Order Method Modeling Environment (HOMME) is a framework for building scalable, conservative atmospheric models for climate simulation and general atmospheric-modeling applications. Its spatial discretizations are based on Spectral-Element (SE) and Discontinuous Galerkin (DG) methods. These are local methods employing high-order accurate spectral basis functions that have been shown to perform well on massively parallel supercomputers at any resolution and to scale particularly well at high resolutions. HOMME provides the framework upon which the CAM-SE community atmosphere model dynamical core is constructed. In its current incarnation, CAM-SE employs the hydrostatic primitive equations (PE) of motion, which limits its resolution to simulations coarser than 0.1° per grid cell. The primary objective of this project is to remove this resolution limitation by providing HOMME with the capabilities needed to build nonhydrostatic models that solve the compressible Euler/Navier-Stokes equations.
NASA Astrophysics Data System (ADS)
Rodrigues, Manuel J.; Fernandes, David E.; Silveirinha, Mário G.; Falcão, Gabriel
2018-01-01
This work introduces a parallel computing framework to characterize the propagation of electron waves in graphene-based nanostructures. The electron wave dynamics is modeled using both "microscopic" and effective medium formalisms, and the numerical solution of the two-dimensional massless Dirac equation is determined using a Finite-Difference Time-Domain scheme. The propagation of electron waves in graphene superlattices with localized scattering centers is studied, and the role of the symmetry of the microscopic potential in the electron velocity is discussed. The computational methodologies target the parallel capabilities of heterogeneous multi-core CPU and multi-GPU environments and are built with the OpenCL parallel programming framework, which provides a portable, vendor-agnostic and high-throughput solution. The proposed heterogeneous multi-GPU implementation achieves speedup ratios up to 75x when compared to multi-thread and multi-core CPU execution, reducing simulation times from several hours to a couple of minutes.
User's manual SIG: a general-purpose signal processing program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lager, D.; Azevedo, S.
1983-10-25
SIG is a general-purpose signal processing, analysis, and display program. Its main purpose is to perform manipulations on time- and frequency-domain signals. However, it has been designed to ultimately accommodate other representations for data such as multiplexed signals and complex matrices. Many of the basic operations one would perform on digitized data are contained in the core SIG package. Out of these core commands, more powerful signal processing algorithms may be built. Many different operations on time- and frequency-domain signals can be performed by SIG. They include operations on the samples of a signal, such as adding a scalar to each sample; operations on the entire signal, such as digital filtering; and operations on two or more signals, such as adding two signals. Signals may be simulated, such as a pulse train or a random waveform. Graphics operations display signals and spectra.
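A Python analogue makes the compositional idea concrete; SIG's actual command names and syntax differ, so the calls below are simply the corresponding operations from numpy/scipy:

    import numpy as np
    from scipy.signal import butter, lfilter

    fs = 1000.0                                  # sample rate, Hz (illustrative)
    t = np.arange(0.0, 1.0, 1.0 / fs)
    pulse_train = (np.sin(2 * np.pi * 5 * t) > 0.99).astype(float)  # simulated signal
    noise = np.random.default_rng(0).normal(0.0, 0.1, t.size)       # random waveform

    sig = pulse_train + noise                    # operation on two signals: add
    sig = sig + 0.5                              # operation on samples: add a scalar
    b, a = butter(4, 50.0 / (fs / 2), btype="low")
    sig = lfilter(b, a, sig)                     # operation on one signal: digital filter
    spectrum = np.abs(np.fft.rfft(sig))          # move to the frequency domain
    print(spectrum[:5])

Chaining small, well-defined operations like these is exactly how a core command set supports building more powerful processing algorithms on top.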
Performance Calculations for the ITER Core Imaging X-Ray Spectrometer (CIXS)
NASA Astrophysics Data System (ADS)
Hill, K. W.; Delgado-Aparicio, L.; Pablant, N.; Johnson, D.; Feder, R.; Klabacha, J.; Stratton, B.; Bitter, M.; Beiersdorfer, P.; Barnsley, R.; Bertschinger, G.; O'Mullane, M.; Lee, S. G.
2013-10-01
The US is providing a 1D imaging x-ray crystal spectrometer system as a primary diagnostic for measuring profiles of ion temperature (Ti) and toroidal flow velocity (v) in the ITER plasma core (r/a = 0-0.85). The diagnostic must provide high spectral resolution (E/ΔE > 5,000), spatial resolution of 10 cm, and time resolution of 10-100 ms, and must operate and survive in an environment having high neutron and gamma-ray fluxes. This work presents spectral simulations and tomographic inversions for obtaining local Ti and v, comparisons of the expected count rate profiles to the requirements, the degradation of performance due to the nuclear radiation background, and measurements of the rejection of nuclear background by detector pulse-height discrimination. This work was performed under the auspices of the DOE by PPPL under contract DE-AC02-09CH11466 and by LLNL under contract DE-AC52-07NA27344.
Das, Susobhan; Li, Jun; Hui, Rongqing
2015-01-01
Micro- and nano-structured electrodes have the potential to improve the performance of Li-ion batteries by increasing the surface area of the electrode and reducing the diffusion distance required by the charged carriers. We report the numerical simulation of lithium-ion batteries with the anode made of core-shell heterostructures of silicon-coated carbon nanofibers. We show that the energy capacity can be significantly improved by reducing the thickness of the silicon anode to a dimension comparable to or less than the Li-ion diffusion length inside silicon. The results of the simulation indicate that the contraction of the silicon electrode thickness during the battery discharge process, commonly found in experiments, also plays a major role in the increase of the energy capacity. PMID:28347120
Is there a Stobbs factor in atomic-resolution STEM-EELS mapping?
Xin, Huolin L; Dwyer, Christian; Muller, David A
2014-04-01
Recent work has convincingly argued that the Stobbs factor (the disagreement in contrast between simulated and experimental atomic-resolution images) in ADF-STEM imaging can be accounted for by including the incoherent source size in simulation. However, less progress has been made for atomic-resolution STEM-EELS mapping. Here we have performed carefully calibrated EELS mapping experiments of a [101] DyScO3 single-crystal specimen, allowing atomic-resolution EELS signals to be extracted on an absolute scale for a large range of thicknesses. By simultaneously recording the elastic signal, also on an absolute scale, and using it to characterize the source size, sample thickness, and inelastic mean free path, we eliminate all free parameters in the simulation of the core-loss signals. Coupled with double-channeling simulations that incorporate both core-loss inelastic scattering and dynamical elastic and thermal diffuse scattering, the present work enables a close scrutiny of the scattering physics in the inelastic channel. We found that by taking into account the effective source distribution determined from the ADF images, both the absolute signal and the contrast in atomic-resolution Dy-M5 maps can be closely reproduced by the double-channeling simulations. At lower energy losses, discrepancies are present in the Sc-L2,3 and Dy-N4,5 maps due to the energy-dependent spatial distribution of the background spectrum, core-hole effects, and omitted complexities in the final states. This work has demonstrated the possibility of using quantitative STEM-EELS for element-specific column-by-column atom counting at higher energy losses and for atomic-like final states, and has elucidated several possible improvements for future theoretical work.
AMR Studies of Star Formation: Simulations and Simulated Observations
NASA Astrophysics Data System (ADS)
Offner, Stella; McKee, C. F.; Klein, R. I.
2009-01-01
Molecular clouds are typically observed to be approximately virialized with gravitational and turbulent energy in balance, yielding a star formation rate of a few percent. The origin and characteristics of the observed supersonic turbulence are poorly understood, and without continued energy injection the turbulence is predicted to decay within a cloud dynamical time. Recent observations and analytic work have suggested a strong connection between the initial stellar mass function, the core mass function, and turbulence characteristics. The role of magnetic fields in determining core lifetimes, shapes, and kinematic properties remains hotly debated. Simulations are a formidable tool for studying the complex process of star formation and addressing these puzzles. I present my results modeling low-mass star formation using the ORION adaptive mesh refinement (AMR) code. I investigate the properties of forming cores and protostars in simulations in which the turbulence is driven to maintain virial balance and where it is allowed to decay. I will discuss simulated observations of cores in dust emission and in molecular tracers and compare to observations of local star-forming clouds. I will also present results from ORION cluster simulations including flux-limited diffusion radiative transfer and show that radiative feedback, even from low-mass stars, has a significant effect on core fragmentation, disk properties, and the IMF. Finally, I will discuss the new simulation frontier of AMR multigroup radiative transfer.
Two-Dimensional Diffusion Theory Analysis of Reactivity Effects of a Fuel-Plate-Removal Experiment
NASA Technical Reports Server (NTRS)
Gotsky, Edward R.; Cusick, James P.; Bogart, Donald
1959-01-01
Two-dimensional two-group diffusion calculations were performed on the NASA reactor simulator in order to evaluate the reactivity effects of fuel plates removed successively from the center experimental fuel element of a seven- by three-element core loading at the Oak Ridge Bulk Shielding Facility. The reactivity calculations were performed by two methods: in the first, the slowing-down properties of the experimental fuel element were represented by its infinite-media parameters; in the second, the finite size of the experimental fuel element was recognized, and the slowing-down properties of the surrounding core were attributed to this small region. The latter calculation method agreed very well with the experimental reactivity effects; the former method underestimated them.
CAM-SE: A scalable spectral element dynamical core for the Community Atmosphere Model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dennis, John; Edwards, Jim; Evans, Kate J
2012-01-01
The Community Atmosphere Model (CAM) version 5 includes a spectral element dynamical core option from NCAR's High-Order Method Modeling Environment. It is a continuous Galerkin spectral finite element method designed for fully unstructured quadrilateral meshes. The current configurations in CAM are based on the cubed-sphere grid. The main motivation for including a spectral element dynamical core is to improve the scalability of CAM by allowing quasi-uniform grids for the sphere that do not require polar filters. In addition, the approach provides other state-of-the-art capabilities such as improved conservation properties. Spectral elements are used for the horizontal discretization, while most other aspects of the dynamical core are a hybrid of well-tested techniques from CAM's finite volume and global spectral dynamical core options. Here we first give an overview of the spectral element dynamical core as used in CAM. We then give scalability and performance results from CAM running with three different dynamical core options within the Community Earth System Model, using a pre-industrial time-slice configuration. We focus on high resolution simulations of 1/4 degree, 1/8 degree, and T340 spectral truncation.
CMS readiness for multi-core workload scheduling
NASA Astrophysics Data System (ADS)
Perez-Calero Yzquierdo, A.; Balcas, J.; Hernandez, J.; Aftab Khan, F.; Letts, J.; Mason, D.; Verguilov, V.
2017-10-01
In the present run of the LHC, CMS data reconstruction and simulation algorithms benefit greatly from being executed as multiple threads running on several processor cores. The complexity of the Run 2 events requires parallelization of the code to reduce the memory-per- core footprint constraining serial execution programs, thus optimizing the exploitation of present multi-core processor architectures. The allocation of computing resources for multi-core tasks, however, becomes a complex problem in itself. The CMS workload submission infrastructure employs multi-slot partitionable pilots, built on HTCondor and GlideinWMS native features, to enable scheduling of single and multi-core jobs simultaneously. This provides a solution for the scheduling problem in a uniform way across grid sites running a diversity of gateways to compute resources and batch system technologies. This paper presents this strategy and the tools on which it has been implemented. The experience of managing multi-core resources at the Tier-0 and Tier-1 sites during 2015, along with the deployment phase to Tier-2 sites during early 2016 is reported. The process of performance monitoring and optimization to achieve efficient and flexible use of the resources is also described.
Performance of the MTR core with MOX fuel using the MCNP4C2 code.
Shaaban, Ismail; Albarhoum, Mohamad
2016-08-01
The MCNP4C2 code was used to simulate the MTR-22 MW research reactor and perform the neutronic analysis for a new fuel, namely a MOX (U3O8 & PuO2) fuel dispersed in an Al matrix, for One Neutronic Trap (ONT) and Three Neutronic Traps (TNTs) in its core. Its new characteristics were compared to its original characteristics based on the U3O8-Al fuel. Experimental data for the neutronic parameters, including criticality, relative to the MTR-22 MW reactor with the original U3O8-Al fuel at nominal power were used to validate the calculated values and were found acceptable. The achieved results seem to confirm that the use of MOX fuel in the MTR-22 MW will not degrade the safe operational conditions of the reactor. In addition, the use of MOX fuel in the MTR-22 MW core reduces the uranium fuel enrichment in (235)U and the amount of (235)U loaded in the core by about 34.84% and 15.21% for the ONT and TNTs cases, respectively.
Solving Navier-Stokes' equation using Castillo-Grone's mimetic difference operators on GPUs
NASA Astrophysics Data System (ADS)
Abouali, Mohammad; Castillo, Jose
2012-11-01
This paper discusses the performance and the accuracy of the Castillo-Grone (CG) mimetic difference operators in solving the Navier-Stokes equations in order to simulate oceanic and atmospheric flows. The implementation is further adapted to harness the power of the many computing cores available on Graphics Processing Units (GPUs), and the speedup is discussed.
Ejector subassembly for dual wall air drilling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kolle, J.J.
1996-09-01
The dry drilling system developed for the Yucca Mountain Site Characterization Project incorporates a surface vacuum system to prevent drilling air and cuttings from contaminating the borehole wall during coring operations. As the drilling depth increases, however, there is a potential for borehole contamination because of the limited volume of air which can be removed by the vacuum system. A feasibility analysis has shown that an ejector subassembly mounted in the drill string above the core barrel could significantly enhance the depth capacity of the dry drilling system. The ejector subassembly would use a portion of the air supplied to the core bit to maintain a vacuum on the hole bottom. The results of a design study, including performance testing of a laboratory-scale ejector simulator, are presented here.
Density-based cluster algorithms for the identification of core sets
NASA Astrophysics Data System (ADS)
Lemke, Oliver; Keller, Bettina G.
2016-10-01
The core-set approach is a discretization method for Markov state models of complex molecular dynamics. Core sets are disjoint metastable regions in the conformational space, which need to be known prior to the construction of the core-set model. We propose to use density-based cluster algorithms to identify the cores. We compare three different density-based cluster algorithms: the CNN, the DBSCAN, and the Jarvis-Patrick algorithm. While the core-set models based on the CNN and DBSCAN clustering are well-converged, constructing core-set models based on the Jarvis-Patrick clustering cannot be recommended. In a well-converged core-set model, the number of core sets is up to an order of magnitude smaller than the number of states in a conventional Markov state model with comparable approximation error. Moreover, using the density-based clustering one can extend the core-set method to systems which are not strongly metastable. This is important for the practical application of the core-set method because most biologically interesting systems are only marginally metastable. The key point is to perform a hierarchical density-based clustering while monitoring the structure of the metric matrix which appears in the core-set method. We test this approach on a molecular-dynamics simulation of a highly flexible 14-residue peptide. The resulting core-set models have a high spatial resolution and can distinguish between conformationally similar yet chemically different structures, such as register-shifted hairpin structures.
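The density-based step is easy to demonstrate: cluster the (projected) trajectory frames and treat dense regions as core sets, leaving sparse transition frames unassigned. The sketch below uses scikit-learn's DBSCAN on synthetic two-dimensional data standing in for projected MD frames; eps and min_samples are hypothetical and must be tuned per system:

    import numpy as np
    from sklearn.cluster import DBSCAN

    rng = np.random.default_rng(0)
    state_a = rng.normal([0.0, 0.0], 0.15, size=(400, 2))        # metastable basin A
    state_b = rng.normal([1.5, 1.0], 0.15, size=(400, 2))        # metastable basin B
    transit = rng.uniform([0.0, 0.0], [1.5, 1.0], size=(40, 2))  # sparse transition frames
    frames = np.vstack([state_a, state_b, transit])

    labels = DBSCAN(eps=0.12, min_samples=10).fit_predict(frames)
    for lab in sorted(set(labels)):
        tag = "unassigned (no core)" if lab == -1 else f"core set {lab}"
        print(tag, int((labels == lab).sum()))

Frames labeled -1 fall outside every core set, which is precisely the property the core-set construction needs: transitions are not forced into a state.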
NASA Astrophysics Data System (ADS)
Zhou, S.; Solana, J. R.
2018-03-01
Monte Carlo NVT simulations have been performed to obtain the thermodynamic and structural properties and the perturbation coefficients, up to third order in the inverse-temperature expansion of the Helmholtz free energy, of fluids with potential models proposed in the literature for diamond and wurtzite lattices. These data are used to analyze the performance of a coupling parameter series expansion (CPSE). The main findings are summarized as follows. (1) The CPSE provides accurate predictions of the first three coefficients in the inverse-temperature expansion of the Helmholtz free energy for the potential models considered, and the thermodynamic properties of these fluids are predicted more accurately when the CPSE is truncated at second or third order. (2) The Barker-Henderson (BH) recipe is appropriate for determining the effective hard-sphere diameter for strongly repulsive potential cores, but its performance worsens with increasing softness of the potential core. (3) For some thermodynamic properties the first-order CPSE works better for the diamond potential, whose tail is dominated by repulsive interactions, than for the wurtzite potential, whose tail is dominated by attractive interactions. However, the first-order CPSE provides unsatisfactory results for the excess internal energy and constant-volume excess heat capacity for the two potential models.
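For reference, the two ingredients named above can be written out explicitly; normalization conventions for the coefficients vary between authors, so this is one common form rather than necessarily the paper's:

\[
\frac{A - A_{\mathrm{ref}}}{N k_B T} = \sum_{n \ge 1} a_n \beta^n , \qquad \beta = \frac{1}{k_B T},
\]
\[
d_{\mathrm{BH}} = \int_0^{\sigma} \left[ 1 - e^{-\beta u(r)} \right] dr ,
\]

where A_ref is the hard-sphere reference free energy, a_n are the perturbation coefficients obtained from the simulations, u(r) is the repulsive core of the pair potential, and σ is the distance at which the potential crosses zero. The temperature dependence and the softness of u(r) enter d_BH through the integrand, which is why the BH recipe degrades as the core softens.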
Multi-Kepler GPU vs. multi-Intel MIC for spin systems simulations
NASA Astrophysics Data System (ADS)
Bernaschi, M.; Bisson, M.; Salvadore, F.
2014-10-01
We present and compare the performances of two many-core architectures, the Nvidia Kepler and the Intel MIC, both in a single system and in cluster configuration, for the simulation of spin systems. As a benchmark we consider the time required to update a single spin of the 3D Heisenberg spin glass model using the over-relaxation algorithm. We present data also for a traditional high-end multi-core architecture, the Intel Sandy Bridge. The results show that although on the two Intel architectures it is possible to use basically the same code, the performance of an Intel MIC changes dramatically depending on (apparently) minor details. Another issue is that to obtain a reasonable scalability with the Intel Phi coprocessor (Phi is the coprocessor that implements the MIC architecture) in a cluster configuration, it is necessary to use the so-called offload mode, which reduces the performance of the single system. As for the GPU, the Kepler architecture offers a clear advantage with respect to the previous Fermi architecture while maintaining exactly the same source code. Scalability of the multi-GPU implementation remains very good when using the CPU as a communication co-processor for the GPU. All source codes are provided for inspection and for double-checking the results.
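The over-relaxation move used as the benchmark reflects each spin about its local field, s' = 2 (s·h) h / |h|^2 - s, which conserves both |s| and the energy and needs no random numbers, making the update purely memory-bound. A numpy sketch for a simple-cubic lattice follows; it uses ferromagnetic couplings for brevity (the spin-glass benchmark only changes the local-field sum to use random couplings), and a checkerboard sweep so that half the lattice can be updated at once:

    import numpy as np

    L = 16
    rng = np.random.default_rng(0)
    s = rng.normal(size=(L, L, L, 3))
    s /= np.linalg.norm(s, axis=-1, keepdims=True)     # unit-length spins

    ix, iy, iz = np.indices((L, L, L))
    parity = (ix + iy + iz) % 2                        # checkerboard sublattices

    def local_field(s):
        h = np.zeros_like(s)
        for axis in range(3):                          # six neighbors, periodic BCs
            h += np.roll(s, 1, axis=axis) + np.roll(s, -1, axis=axis)
        return h

    def overrelax_sweep(s):
        for p in (0, 1):                               # neighbors lie on the other color
            h = local_field(s)
            dot = np.sum(s * h, axis=-1, keepdims=True)
            h2 = np.sum(h * h, axis=-1, keepdims=True)
            s = np.where((parity == p)[..., None], 2.0 * dot / h2 * h - s, s)
        return s

    def energy(s):
        return -0.5 * np.sum(s * local_field(s))       # J = 1; halve the double counting

    e0 = energy(s)
    for _ in range(5):
        s = overrelax_sweep(s)
    print("energy drift:", energy(s) - e0)             # ~0 up to round-off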
Analysis of Gravitational Signals from Core-Collapse Supernovae (CCSNe) using MatLab
NASA Astrophysics Data System (ADS)
Frere, Noah; Mezzacappa, Anthony; Yakunin, Konstantin
2017-01-01
When a massive star runs out of fuel, it collapses under its own weight and rebounds in a powerful supernova explosion, sending, among other things, ripples through space-time, known as gravitational waves (GWs). GWs can be detected by earth-based observatories, such as the Laser Interferometer Gravitational-Wave Observatory (LIGO). Observers must compare the data from GW detectors with theoretical waveforms in order to confirm that the detection of a GW signal from a particular source has occurred. GW predictions for core collapse supernovae (CCSNe) rely on computer simulations. The UTK/ORNL astrophysics group has performed such simulations. Here, I analyze the resulting waveforms, using Matlab, to generate their Fourier transforms, short-time Fourier transforms, energy spectra, evolution of frequencies, and frequency maxima. One product will be a Matlab interface for analyzing and comparing GW predictions based on data from future simulations. This interface will make it easier to analyze waveforms and to share the results with the GW astrophysics community. Funding provided by Department of Physics and Astronomy, University of Tennessee, Knoxville, TN 37996-1200, USA.
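The same analysis chain is straightforward to reproduce outside Matlab; the sketch below (Python, with a synthetic chirp standing in for a simulated core-collapse waveform) computes the Fourier transform, a short-time Fourier transform, and the frequency maxima over time:

    import numpy as np
    from scipy.signal import chirp, spectrogram

    fs = 16384.0                                # sample rate, Hz (illustrative)
    t = np.arange(0.0, 1.0, 1.0 / fs)
    h = chirp(t, f0=100.0, f1=1000.0, t1=1.0, method="quadratic") * np.exp(-2.0 * t)

    H = np.fft.rfft(h)                          # Fourier transform
    freqs = np.fft.rfftfreq(h.size, 1.0 / fs)
    energy_spectrum = np.abs(H) ** 2            # (unnormalized) energy spectrum

    f, tt, Sxx = spectrogram(h, fs=fs, nperseg=512, noverlap=384)  # short-time FT
    f_peak = f[np.argmax(Sxx, axis=0)]          # evolution of the dominant frequency
    print(f"peak frequency rises from {f_peak[0]:.0f} Hz to {f_peak[-1]:.0f} Hz")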
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, C. S.; Zhang, Hongbin
Uncertainty quantification and sensitivity analysis are important for nuclear reactor safety design and analysis. A 2x2 fuel assembly core design was developed and simulated by the Virtual Environment for Reactor Applications, Core Simulator (VERA-CS) coupled neutronics and thermal-hydraulics code under development by the Consortium for Advanced Simulation of Light Water Reactors (CASL). An approach to uncertainty quantification and sensitivity analysis with VERA-CS was developed, and a new toolkit was created to perform uncertainty quantification and sensitivity analysis with fourteen uncertain input parameters. Furthermore, the minimum departure from nucleate boiling ratio (MDNBR), maximum fuel center-line temperature, and maximum outer clad surface temperature were chosen as the figures of merit. Pearson, Spearman, and partial correlation coefficients were considered for all of the figures of merit in the sensitivity analysis, and coolant inlet temperature was consistently the most influential parameter. Parameters used as inputs to the critical heat flux calculation with the W-3 correlation were also shown to be among the most influential on the MDNBR, maximum fuel center-line temperature, and maximum outer clad surface temperature.
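The three correlation measures are cheap to compute once the sampled inputs and the resulting figures of merit are tabulated; partial correlation can be obtained as the Pearson correlation of residuals after regressing out the other inputs. The sketch below uses synthetic samples with hypothetical parameter names, not VERA-CS output:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    n = 500
    X = np.column_stack([
        rng.normal(565.0, 2.0, n),   # coolant inlet temperature, K (hypothetical)
        rng.normal(1.0, 0.02, n),    # power scaling factor
        rng.normal(1.0, 0.03, n),    # flow-rate scaling factor
    ])
    # Synthetic figure of merit standing in for MDNBR:
    y = -0.02 * X[:, 0] + 0.8 * X[:, 1] - 0.5 * X[:, 2] + rng.normal(0.0, 0.02, n)

    def partial_corr(x, y, Z):
        def residual(v):
            A = np.column_stack([np.ones(len(v)), Z])   # regress out the other inputs
            beta, *_ = np.linalg.lstsq(A, v, rcond=None)
            return v - A @ beta
        return stats.pearsonr(residual(x), residual(y))[0]

    for i, name in enumerate(["T_inlet", "power", "flow"]):
        Z = np.delete(X, i, axis=1)
        print(f"{name}: pearson={stats.pearsonr(X[:, i], y)[0]:+.2f} "
              f"spearman={stats.spearmanr(X[:, i], y)[0]:+.2f} "
              f"partial={partial_corr(X[:, i], y, Z):+.2f}")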
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chu, T.Y.; Bentz, J.H.; Bergeron, K.D.
1994-04-01
The possibility of achieving in-vessel core retention by flooding the reactor cavity (the "flooded cavity" concept) is an accident management concept currently under consideration for advanced light water reactors (ALWR), as well as for existing light water reactors (LWR). The CYBL (CYlindrical BoiLing) facility is specifically designed to perform large-scale confirmatory testing of the flooded cavity concept. CYBL has a tank-within-a-tank design; the inner 3.7 m diameter tank simulates the reactor vessel, and the outer tank simulates the reactor cavity. The energy deposition on the bottom head is simulated with an array of radiant heaters. The array can deliver a tailored heat flux distribution corresponding to that resulting from core melt convection. The present paper provides a detailed description of the capabilities of the facility, as well as results of recent experiments with heat fluxes in the range of interest for in-vessel retention in typical ALWRs. The paper concludes with a discussion of other experiments for flooded cavity applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liao, J.; Kucukboyaci, V. N.; Nguyen, L.
2012-07-01
The Westinghouse Small Modular Reactor (SMR) is an 800 MWt (> 225 MWe) integral pressurized water reactor (iPWR) with all primary components, including the steam generator and the pressurizer, located inside the reactor vessel. The reactor core is based on a partial-height 17x17 fuel assembly design used in the AP1000® reactor core. The Westinghouse SMR utilizes passive safety systems and proven components from the AP1000 plant design with a compact containment that houses the integral reactor vessel and the passive safety systems. A preliminary loss of coolant accident (LOCA) analysis of the Westinghouse SMR has been performed using the WCOBRA/TRAC-TF2 code, simulating a transient caused by a double-ended guillotine (DEG) break in the direct vessel injection (DVI) line. WCOBRA/TRAC-TF2 is a new-generation Westinghouse LOCA thermal-hydraulics code evolving from the US NRC licensed WCOBRA/TRAC code. It is designed to simulate PWR LOCA events from the smallest break size to the largest (DEG cold leg). A significant number of fluid dynamics models and heat transfer models were developed or improved in WCOBRA/TRAC-TF2. A large number of separate-effects and integral-effects tests were performed for a rigorous code assessment and validation. WCOBRA/TRAC-TF2 was introduced into the Westinghouse SMR design phase to assist a quick and robust passive cooling system design and to identify thermal-hydraulic phenomena for the development of the SMR Phenomena Identification Ranking Table (PIRT). The LOCA analysis of the Westinghouse SMR demonstrates that the DEG DVI break LOCA is mitigated by the injection and venting from the Westinghouse SMR passive safety systems without core heat-up, achieving long-term core cooling. (authors)
USDA-ARS?s Scientific Manuscript database
The objective of this simulation study is to determine which sampling method (Cozzini core sampler, core drill shaving, or N-60 surface excision) will best detect Shiga Toxin-producing Escherichia coli (STEC) at varying levels of contamination when present in the meat. 1000 simulated experiments...
PuReMD-GPU: A reactive molecular dynamics simulation package for GPUs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kylasa, S.B., E-mail: skylasa@purdue.edu; Aktulga, H.M., E-mail: hmaktulga@lbl.gov; Grama, A.Y., E-mail: ayg@cs.purdue.edu
2014-09-01
We present an efficient and highly accurate GP-GPU implementation of our community code, PuReMD, for reactive molecular dynamics simulations using the ReaxFF force field. PuReMD and its incorporation into LAMMPS (Reax/C) are used by a large number of research groups worldwide for simulating diverse systems, ranging from biomembranes to explosives (RDX), at an atomistic level of detail. The sub-femtosecond time-steps associated with ReaxFF strongly motivate significant improvements to per-timestep simulation time through effective use of GPUs. This paper presents, in detail, the design and implementation of PuReMD-GPU, which enables ReaxFF simulations on GPUs, as well as various performance optimization techniques we developed to obtain high performance on state-of-the-art hardware. Comprehensive experiments on model systems (bulk water and amorphous silica) are presented to quantify the performance improvements achieved by PuReMD-GPU and to verify its accuracy. In particular, our experiments show up to a 16× improvement in runtime compared to our highly optimized CPU-only single-core ReaxFF implementation. PuReMD-GPU is a unique production code, and is currently available on request from the authors.
A multilevel-skin neighbor list algorithm for molecular dynamics simulation
NASA Astrophysics Data System (ADS)
Zhang, Chenglong; Zhao, Mingcan; Hou, Chaofeng; Ge, Wei
2018-01-01
Searching for interaction pairs and organizing the interaction computations are important steps in molecular dynamics (MD) algorithms and are critical to the overall efficiency of the simulation. Neighbor lists are widely used for these steps: a thicker skin reduces the frequency of list updates, but this saving is offset by additional distance checks on the extra particle pairs. In this paper, we propose a new neighbor-list-based algorithm with a precisely designed multilevel skin which reduces unnecessary computation on inter-particle distances. The performance advantages over traditional methods are analyzed against the main simulation parameters on Intel CPUs and MICs (many integrated cores), and are clearly demonstrated. The algorithm can be generalized to various discrete simulations using neighbor lists.
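For readers unfamiliar with the baseline this improves on, the sketch below shows a conventional single-skin Verlet list: pairs are gathered within cutoff + skin, and the list is reused until some particle has moved half the skin. It is a minimal O(N²) illustration under those assumptions, not the paper's multilevel algorithm:

```python
# Minimal single-skin Verlet neighbor list (illustrative, O(N^2) build).
import numpy as np

def build_neighbor_list(pos, cutoff, skin):
    """Collect all pairs closer than cutoff + skin."""
    r_list = cutoff + skin
    pairs = []
    for i in range(len(pos) - 1):
        d = np.linalg.norm(pos[i + 1:] - pos[i], axis=1)
        pairs.extend((i, i + 1 + j) for j in np.nonzero(d < r_list)[0])
    return pairs

def needs_rebuild(pos, pos_at_build, skin):
    """Rebuild once any particle has moved half the skin since the build."""
    return np.linalg.norm(pos - pos_at_build, axis=1).max() > 0.5 * skin
```

The trade-off the abstract describes is visible here: a larger `skin` makes `needs_rebuild` fire less often but makes the list, and hence the per-step distance checks, longer.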
High-performance computational fluid dynamics: a custom-code approach
NASA Astrophysics Data System (ADS)
Fannon, James; Loiseau, Jean-Christophe; Valluri, Prashant; Bethune, Iain; Náraigh, Lennon Ó.
2016-07-01
We introduce a modified and simplified version of the pre-existing, fully parallelized, three-dimensional Navier-Stokes flow solver known as TPLS. We demonstrate how the simplified version can be used as a pedagogical tool for the study of computational fluid dynamics (CFD) and parallel computing. TPLS is at its heart a two-phase flow solver, and uses calls to a range of external libraries to accelerate its performance. However, in the present context we narrow the focus of the study to basic hydrodynamics and parallel computing techniques, and the code is therefore simplified and modified to simulate pressure-driven single-phase flow in a channel, using only relatively simple Fortran 90 code with MPI parallelization and no calls to any other external libraries. The modified code is analysed in order both to validate its accuracy and to investigate its scalability up to 1000 CPU cores. Simulations are performed for several benchmark cases in pressure-driven channel flow, including a turbulent simulation, wherein the turbulence is incorporated via the large-eddy simulation technique. The work may be of use to advanced undergraduate and graduate students as an introductory study in CFD, while also providing insight for those interested in more general aspects of high-performance computing.
MDGRAPE-4: a special-purpose computer system for molecular dynamics simulations.
Ohmura, Itta; Morimoto, Gentaro; Ohno, Yousuke; Hasegawa, Aki; Taiji, Makoto
2014-08-06
We are developing the MDGRAPE-4, a special-purpose computer system for molecular dynamics (MD) simulations. MDGRAPE-4 is designed to achieve strong scalability for protein MD simulations through the integration of general-purpose cores, dedicated pipelines, memory banks and network interfaces (NIFs) to create a system on chip (SoC). Each SoC has 64 dedicated pipelines that are used for non-bonded force calculations and run at 0.8 GHz. Additionally, it has 65 Tensilica Xtensa LX cores with single-precision floating-point units that are used for other calculations and run at 0.6 GHz. At peak performance levels, each SoC can evaluate 51.2 G interactions per second. It also has 1.8 MB of embedded shared memory banks and six network units with a peak bandwidth of 7.2 GB s⁻¹ for the three-dimensional torus network. The system consists of 512 (8×8×8) SoCs in total, mounted on 64 node modules with eight SoCs each. Optical transmitters/receivers are used for internode communication. The expected maximum power consumption is 50 kW. While the MDGRAPE-4 software is still being improved, we plan to run MD simulations on MDGRAPE-4 in 2014. The MDGRAPE-4 system will enable long-time molecular dynamics simulations of small systems. It is also useful for multiscale molecular simulations, where the particle simulation parts often become bottlenecks.
CHEMICAL AND PHYSICAL CHARACTERIZATION OF COLLAPSING LOW-MASS PRESTELLAR DENSE CORES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hincelin, U.; Commerçon, B.; Wakelam, V.
The first hydrostatic core, also called the first Larson core, is one of the first steps in low-mass star formation as predicted by theory. With recent and future high-performance telescopes, the details of these first phases are becoming accessible, and observations may confirm theory and even present new challenges for theoreticians. In this context, from a theoretical point of view, we study the chemical and physical evolution of the collapse of prestellar cores until the formation of the first Larson core, in order to better characterize this early phase in the star formation process. We couple a state-of-the-art hydrodynamical model with full gas-grain chemistry, using different assumptions for the magnetic field strength and orientation. We extract the different components of each collapsing core (i.e., the central core, the outflow, the disk, the pseudodisk, and the envelope) to highlight their specific physical and chemical characteristics. Each component often presents a specific physical history, as well as a specific chemical evolution, and based on some species the components can clearly be differentiated. The different core models can also be chemically differentiated. Our simulations suggest that some chemical species act as tracers of the different components of a collapsing prestellar dense core, and as tracers of the magnetic field characteristics of the core. From this result, we pinpoint promising key chemical species to be observed.
Advanced Fuels for LWRs: Fully-Ceramic Microencapsulated and Related Concepts FY 2012 Interim Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
R. Sonat Sen; Brian Boer; John D. Bess
2012-03-01
This report summarizes the progress in the Deep Burn project at Idaho National Laboratory during the first half of fiscal year 2012 (FY2012). The current focus of this work is on Fully-Ceramic Microencapsulated (FCM) fuel containing low-enriched uranium (LEU) uranium nitride (UN) fuel kernels. UO2 fuel kernels have not been ruled out and will be examined later in FY2012. Reactor physics calculations confirmed that FCM fuel containing 500 µm diameter kernels of UN fuel has a positive moderator temperature coefficient (MTC) with a conventional fuel pellet radius of 4.1 mm. The methodology was put into place and validated against MCNP to perform whole-core calculations using DONJON, which can interpolate cross sections from a library generated using DRAGON. Comparisons to MCNP were performed on the whole core to confirm the accuracy of the DRAGON/DONJON schemes. A thermal fluid coupling scheme was also developed and implemented with DONJON; it iterates between diffusion calculations and thermal fluid calculations in order to update fuel temperatures and cross sections in whole-core calculations. Now that the DRAGON/DONJON calculation capability is in place and has been validated against MCNP results, and a thermal-hydraulic capability has been implemented in the DONJON methodology, the work will proceed to more realistic reactor calculations. MTC calculations at the lattice level without the correct burnable poison are inadequate to guarantee zero or negative values in a realistic mode of operation. Using the DONJON calculation methodology described in this report, a startup core with enrichment zoning and burnable poisons will be designed. Larger fuel pins will be evaluated for their ability to (1) alleviate the problem of positive MTC and (2) increase reactivity-limited burnup. Once the critical boron concentration of the startup core is determined, the MTC will be calculated to verify a non-positive value. If the value is positive, the design will be changed to require less soluble boron by, for example, increasing the reactivity hold-down by burnable poisons. The whole-core analysis will then be repeated until an acceptable design is found. Calculations of departure from nucleate boiling ratio (DNBR) will be included in the safety evaluation as well. Once a startup core is shown to be viable, subsequent reloads will be simulated by shuffling fuel and introducing fresh fuel. The PASTA code has been updated with material properties of UN fuel from the literature and a model for the diffusion and release of volatile fission products from the SiC matrix material. Preliminary simulations have been performed for both normal conditions and elevated temperatures. These results indicate that the fuel performs well and that the SiC matrix retains fission products effectively. The path forward for fuel performance work includes improved modeling of metallic fission product release from the kernel. Results should be considered preliminary, and further validation is required.
High Performance Radiation Transport Simulations on TITAN
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, Christopher G; Davidson, Gregory G; Evans, Thomas M
2012-01-01
In this paper we describe the Denovo code system. Denovo solves the six-dimensional, steady-state, linear Boltzmann transport equation, which is of central importance to nuclear technology applications such as reactor core analysis (neutronics), radiation shielding, nuclear forensics and radiation detection. The code features multiple spatial differencing schemes, state-of-the-art linear solvers, the Koch-Baker-Alcouffe (KBA) parallel-wavefront sweep algorithm for inverting the transport operator, a new multilevel energy decomposition method scaling to hundreds of thousands of processing cores, and a modern, novel code architecture that supports straightforward integration of new features. In this paper we discuss the performance of Denovo on Titan, the 10-20 petaflop GPU-based system at ORNL. We describe algorithms and techniques used to exploit the capabilities of Titan's heterogeneous compute node architecture and the challenges of obtaining good parallel performance for this sparse hyperbolic PDE solver containing inherently sequential computations. Numerical results demonstrating Denovo performance on early Titan hardware are presented.
Performance Analysis of Scientific and Engineering Applications Using MPInside and TAU
NASA Technical Reports Server (NTRS)
Saini, Subhash; Mehrotra, Piyush; Taylor, Kenichi Jun Haeng; Shende, Sameer Suresh; Biswas, Rupak
2010-01-01
In this paper, we present performance analysis of two NASA applications using performance tools like Tuning and Analysis Utilities (TAU) and SGI MPInside. MITgcmUV and OVERFLOW are two production-quality applications used extensively by scientists and engineers at NASA. MITgcmUV is a global ocean simulation model, developed by the Estimating the Circulation and Climate of the Ocean (ECCO) Consortium, for solving the fluid equations of motion using the hydrostatic approximation. OVERFLOW is a general-purpose Navier-Stokes solver for computational fluid dynamics (CFD) problems. Using these tools, we analyze the MPI functions (MPI_Sendrecv, MPI_Bcast, MPI_Reduce, MPI_Allreduce, MPI_Barrier, etc.) with respect to message size of each rank, time consumed by each function, and how ranks communicate. MPI communication is further analyzed by studying the performance of MPI functions used in these two applications as a function of message size and number of cores. Finally, we present the compute time, communication time, and I/O time as a function of the number of cores.
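The per-function accounting such tools provide can be mimicked in miniature: wrap each communication call and accumulate call counts, elapsed time, and bytes moved per rank. The sketch below is a hypothetical illustration of that idea, not the MPInside or TAU API:

```python
# Toy per-function MPI cost accounting (illustrative; not the TAU/MPInside API).
import time
from collections import defaultdict

stats = defaultdict(lambda: {"calls": 0, "seconds": 0.0, "bytes": 0})

def timed(name, nbytes, fn, *args, **kwargs):
    """Run one communication call and record its cost under `name`."""
    t0 = time.perf_counter()
    result = fn(*args, **kwargs)
    rec = stats[name]
    rec["calls"] += 1
    rec["seconds"] += time.perf_counter() - t0
    rec["bytes"] += nbytes
    return result

# Hypothetical usage with mpi4py (assumed available):
#   timed("Allreduce", sendbuf.nbytes, comm.Allreduce, sendbuf, recvbuf)
# After the run, `stats` holds time and message volume per wrapped function,
# which is the raw material for message-size and scaling analyses like those above.
```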
Benoit, Roland G.; Schacter, Daniel L.
2015-01-01
It has been suggested that the simulation of hypothetical episodes and the recollection of past episodes are supported by fundamentally the same set of brain regions. The present article specifies this core network via Activation Likelihood Estimation (ALE). Specifically, a first meta-analysis revealed joint engagement of core network regions during episodic memory and episodic simulation. These include parts of the medial surface, the hippocampus and parahippocampal cortex within the medial temporal lobes, and the lateral temporal and inferior posterior parietal cortices on the lateral surface. Both capacities also jointly recruited additional regions such as parts of the bilateral dorsolateral prefrontal cortex. All of these core regions overlapped with the default network. Moreover, it has further been suggested that episodic simulation may require a stronger engagement of some of the core network’s nodes as well as the recruitment of additional brain regions supporting control functions. A second ALE meta-analysis indeed identified such regions that were consistently more strongly engaged during episodic simulation than episodic memory. These comprised the core-network clusters located in the left dorsolateral prefrontal cortex and posterior inferior parietal lobe and other structures distributed broadly across the default and fronto-parietal control networks. Together, the analyses determine the set of brain regions that allow us to experience past and hypothetical episodes, thus providing an important foundation for studying the regions’ specialized contributions and interactions. PMID:26142352
Parallelization of a Monte Carlo particle transport simulation code
NASA Astrophysics Data System (ADS)
Hadjidoukas, P.; Bousis, C.; Emfietzoglou, D.
2010-05-01
We have developed a high-performance version of the Monte Carlo particle transport simulation code MC4. The original application code, developed in Visual Basic for Applications (VBA) for Microsoft Excel, was first rewritten in the C programming language to improve code portability. Several pseudo-random number generators were also integrated and studied. The new MC4 version was then parallelized for shared- and distributed-memory multiprocessor systems using the Message Passing Interface. Two parallel pseudo-random number generator libraries (SPRNG and DCMT) have been seamlessly integrated. The performance speedup of parallel MC4 has been studied on a variety of parallel computing architectures, including an Intel Xeon server with 4 dual-core processors, a Sun cluster consisting of 16 nodes with 2 dual-core AMD Opteron processors each, and a 200-node dual-processor HP cluster. For large problem sizes, limited only by the physical memory of the multiprocessor server, the speedup results are almost linear on all systems. We have validated the parallel implementation against the serial VBA and C implementations using the same random number generator. Our experimental results on the transport and energy loss of electrons in a water medium show that the serial and parallel codes are equivalent in accuracy. The present improvements allow for the study of higher particle energies with more accurate physical models, and for improved statistics, since more particle tracks can be simulated in a given response time.
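A key ingredient in such a parallelization is that each rank draws from a statistically independent random stream, the role SPRNG and DCMT play above. The sketch below illustrates the same idea with NumPy's SeedSequence spawning (an assumption of this illustration, not the MC4 implementation):

```python
# Independent per-rank random streams for a parallel Monte Carlo tally.
import numpy as np

def make_rank_rng(base_seed, rank, n_ranks):
    """Spawn statistically independent child streams, one per rank."""
    children = np.random.SeedSequence(base_seed).spawn(n_ranks)
    return np.random.default_rng(children[rank])

# Emulate 4 ranks, each sampling its own particle histories.
n_ranks = 4
local_means = []
for rank in range(n_ranks):
    rng = make_rank_rng(base_seed=12345, rank=rank, n_ranks=n_ranks)
    path_lengths = rng.exponential(scale=1.0, size=10_000)  # toy free flights
    local_means.append(path_lengths.mean())

print(sum(local_means) / n_ranks)  # combined estimate (an MPI reduction in practice)
```

Because the child streams are independent by construction, the combined tally is statistically equivalent to a serial run with the same total number of histories.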
NASA Astrophysics Data System (ADS)
Stoekl, Alexander; Dorfi, Ernst
2014-05-01
In the early, embedded phase of the evolution of terrestrial planets, the planetary core accumulates gas from the circumstellar disk into a planetary envelope. This atmosphere is very significant for the further thermal evolution of the planet, since it forms an insulating layer around the rocky core. The disk-captured envelope is also the starting point for the atmospheric evolution, in which the atmosphere is modified by outgassing from the planetary core and by atmospheric mass loss once the planet is exposed to the radiation field of the host star. The final amount of persistent atmosphere around the evolved planet strongly characterizes the planet and is a key criterion for habitability. The established way to study disk-accumulated atmospheres is hydrostatic modeling, even though in many cases the assumption of stationarity is unlikely to be fulfilled. We present, for the first time, time-dependent radiation hydrodynamics simulations of the accumulation process and the interaction between the disk-nebula gas and the planetary core. The calculations were performed with the TAPIR-Code (short for The adaptive, implicit RHD-Code) in spherical symmetry, solving the equations of hydrodynamics, gray radiative transport, and convective energy transport. The models range from the surface of the solid core up to the Hill radius, where the planetary envelope merges into the surrounding protoplanetary disk. Our results show that the time-scale of gas capture and atmospheric growth strongly depends on the mass of the solid core. The amount of atmosphere accumulated during the lifetime of the protoplanetary disk (typically a few Myr) varies accordingly with the mass of the planet. Thus, a Mars-mass core will end up with about 10 bar of atmosphere, while for an Earth-mass core the surface pressure reaches several thousand bar. Even larger planets with several Earth masses quickly capture massive envelopes, which in turn become gravitationally unstable, leading to runaway accretion and the eventual formation of a gas planet.
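The hydrostatic baseline mentioned above is easy to sketch: for an isothermal ideal-gas envelope, integrate dP/dr = -G M ρ / r² outward from the core surface. All numbers below (core mass, temperature, surface pressure, integration range) are assumptions for illustration only, not values from the TAPIR models:

```python
# Hydrostatic, isothermal envelope sketch (illustrative parameter values).
G, k_B, m_H = 6.674e-11, 1.381e-23, 1.673e-27   # SI constants

M_core = 5.97e24         # assumed Earth-mass rocky core [kg]
T = 1000.0               # assumed isothermal envelope temperature [K]
mu = 2.3 * m_H           # mean molecular weight of an H2/He mixture [kg]
r, r_out = 6.4e6, 1.0e9  # core surface to an assumed outer (Hill-like) radius [m]
P, dr = 1.0e7, 1.0e4     # assumed surface pressure [Pa]; integration step [m]

while r < r_out and P > 1e-2:
    rho = P * mu / (k_B * T)           # ideal gas: rho = P mu / (k_B T)
    P -= G * M_core * rho / r**2 * dr  # hydrostatic balance, explicit Euler
    r += dr

print(f"pressure at r = {r:.2e} m: {P:.3e} Pa")
```

A time-dependent RHD calculation like the one in the abstract replaces this static balance with the full conservation equations, which matters precisely when the stationarity assumption fails.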
Bridging FPGA and GPU technologies for AO real-time control
NASA Astrophysics Data System (ADS)
Perret, Denis; Lainé, Maxime; Bernard, Julien; Gratadour, Damien; Sevin, Arnaud
2016-07-01
Our team has developed a common environment for high performance simulations and real-time control of AO systems based on the use of Graphics Processing Units in the context of the COMPASS project. Such a solution, based on the ability of the real-time core in the simulation to provide adequate computing performance, limits the cost of developing AO RTC systems and makes them more scalable. A code developed and validated in the context of the simulation may be injected directly into the system and tested on sky. Furthermore, the use of relatively low cost components also offers significant advantages for the system hardware platform. However, the use of GPUs in an AO loop comes with drawbacks: the traditional way of offloading computation from CPU to GPUs, involving multiple copies and unacceptable overhead in kernel launching, is not well suited in a real-time context. This application requires a solution enabling direct memory access (DMA) to the GPU memory from a third-party device, bypassing the operating system. This allows the device to communicate directly with the real-time core of the simulation, feeding it with the WFS camera pixel stream. We show that DMA between a custom FPGA-based frame-grabber and a computation unit (GPU, FPGA, or coprocessor such as Xeon Phi) across PCIe allows us to achieve latencies compatible with what will be needed on ELTs. As a fine-grained synchronization mechanism is not yet made available by GPU vendors, we propose the use of memory polling to avoid interrupt handling and CPU involvement. Network and vision protocols are handled by the FPGA-based Network Interface Card (NIC). We present the results we obtained on a complete AO loop using camera and deformable mirror simulators.
NASA Center for Climate Simulation (NCCS) Presentation
NASA Technical Reports Server (NTRS)
Webster, William P.
2012-01-01
The NASA Center for Climate Simulation (NCCS) offers integrated supercomputing, visualization, and data interaction technologies to enhance NASA's weather and climate prediction capabilities. It serves hundreds of users at NASA Goddard Space Flight Center, as well as other NASA centers, laboratories, and universities across the US. Over the past year, NCCS has continued expanding its data-centric computing environment to meet the increasingly data-intensive challenges of climate science. We doubled our Discover supercomputer's peak performance to more than 800 teraflops by adding 7,680 Intel Xeon Sandy Bridge processor-cores and, most recently, 240 Intel Xeon Phi Many Integrated Core (MIC) co-processors. A supercomputing-class analysis system named Dali gives users rapid access to their data on Discover and high-performance software, including the Ultra-scale Visualization Climate Data Analysis Tools (UV-CDAT), with interfaces from user desktops and a 17- by 6-foot visualization wall. NCCS also is exploring highly efficient climate data services and management with a new MapReduce/Hadoop cluster while augmenting its data distribution to the science community. Using NCCS resources, NASA completed its modeling contributions to the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report this summer as part of the ongoing Coupled Model Intercomparison Project Phase 5 (CMIP5). Ensembles of simulations run on Discover reached back to the year 1000 to test model accuracy and projected climate change through the year 2300 based on four different scenarios of greenhouse gases, aerosols, and land use. The data resulting from several thousand IPCC/CMIP5 simulations, as well as a variety of other simulation, reanalysis, and observation datasets, are available to scientists and decision makers through an enhanced NCCS Earth System Grid Federation Gateway. Worldwide downloads have totaled over 110 terabytes of data.
NASA Astrophysics Data System (ADS)
Francés, J.; Bleda, S.; Neipp, C.; Márquez, A.; Pascual, I.; Beléndez, A.
2013-03-01
The finite-difference time-domain (FDTD) method allows electromagnetic field distributions to be analyzed as functions of time and space. The method is applied to analyze holographic volume gratings (HVGs) for the near-field distribution at optical wavelengths. Usually, this application requires the simulation of wide areas, which implies more memory and processing time. In this work, we propose a specific implementation of the FDTD method including several add-ons for a precise simulation of optical diffractive elements. Values in the near-field region are computed considering the illumination of the grating by a plane wave for different angles of incidence, including absorbing boundaries as well. We compare the results obtained by FDTD with those obtained using a matrix method (MM) applied to diffraction gratings. In addition, we have developed two optimized versions of the algorithm, for CPU and GPU, in order to analyze the improvement of the new NVIDIA Fermi GPU architecture over a highly tuned multi-core CPU as a function of simulation size. In particular, the optimized CPU implementation takes advantage of the arithmetic and data-transfer streaming SIMD (single instruction multiple data) extensions (SSE) included explicitly in the code, and of multi-threading by means of OpenMP directives. A good agreement between the results obtained using the FDTD and MM methods is obtained, thus validating our methodology. Moreover, the performance of the GPU is compared to the SSE+OpenMP CPU implementation, and it is quantitatively determined that a highly optimized CPU program can be competitive for a wide range of simulation sizes, whereas GPU computing becomes more powerful for large-scale simulations.
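At the heart of any such implementation is the leapfrog update of the electric and magnetic fields on a staggered grid, which is the loop that SSE, OpenMP, or GPU versions parallelize. A minimal 1D sketch follows; the grid size, Courant number, and source are illustrative assumptions, not the paper's configuration:

```python
# Minimal 1D FDTD leapfrog update on a staggered (Yee) grid.
import numpy as np

nx, nt = 400, 1000
ez = np.zeros(nx)        # electric field at integer grid points
hy = np.zeros(nx - 1)    # magnetic field, staggered half a cell

for t in range(nt):
    hy += 0.5 * (ez[1:] - ez[:-1])           # H update (Courant number 0.5)
    ez[1:-1] += 0.5 * (hy[1:] - hy[:-1])     # E update on interior points
    ez[nx // 2] += np.exp(-((t - 30.0) / 10.0) ** 2)  # soft Gaussian source
```

An OpenMP or GPU version distributes exactly these two vectorized updates across threads, one time step at a time, which is why the method maps so naturally onto SIMD and many-core hardware.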
Palmer, Matthew S; Heigenhauser, George J F; Duong, MyLinh; Spriet, Lawrence L
2017-04-01
This study determined whether mild dehydration influenced skeletal muscle glycogen use, core temperature or performance during high-intensity, intermittent cycle-based exercise in ice hockey players vs. staying hydrated with water. Eight males (21.6 ± 0.4 yr, 183.5 ± 1.6 cm, 83.9 ± 3.7 kg, 50.2 ± 1.9 ml·kg⁻¹·min⁻¹) performed two trials separated by 7 days. The protocol consisted of 3 periods (P) containing 10 × 45-s cycling bouts at ~133% VO₂max, followed by 135 s of passive rest. Subjects drank no fluid and dehydrated during the protocol (NF), or maintained body mass by drinking WATER. Muscle biopsies were taken at rest, immediately before and after P3. Subjects were mildly dehydrated (-1.8% BM) at the end of P3 in the NF trial. There were no differences between the NF and WATER trials for glycogen use (P1+P2: 350.1 ± 31.9 vs. 413.2 ± 33.2; P3: 103.5 ± 16.2 vs. 131.5 ± 18.9 mmol·kg dm⁻¹), core temperature (P1: 37.8 ± 0.1 vs. 37.7 ± 0.1; P2: 38.2 ± 0.1 vs. 38.1 ± 0.1; P3: 38.3 ± 0.1 vs. 38.2 ± 0.1 °C) or performance (P1: 156.3 ± 7.8 vs. 154.4 ± 8.2; P2: 150.5 ± 7.8 vs. 152.4 ± 8.3; P3: 144.1 ± 8.7 vs. 148.4 ± 8.7 kJ). This study demonstrated that the typical dehydration experienced by ice hockey players (~1.8% BM loss) did not affect glycogen use, core temperature, or voluntary performance vs. staying hydrated by ingesting water during a cycle-based simulation of ice hockey exercise in a laboratory environment.
Vincent, Grace E; Ferguson, Sally; Larsen, Brianna; Ridgers, Nicola D; Snow, Rod; Aisbett, Brad
2018-04-06
This study examined the effects of sleep restriction on firefighters' physical task performance, physical activity, and physiological and perceived exertion during simulated hot wildfire conditions. 31 firefighters were randomly allocated to either the hot (n = 18, HOT; 33 °C, 8-h sleep opportunity) or hot and sleep restricted (n = 13, HOT + SR; 33 °C, 4-h sleep opportunity) condition. Intermittent, self-paced work circuits of six firefighting tasks were performed for 3 days. Firefighters self-reported ratings of perceived exertion. Heart rate, core temperature, and physical activity were measured continuously. Fluids were consumed ad libitum, and all food and fluids consumed were recorded. Urine volume and urine specific gravity (USG) were analysed, and sleep was assessed using polysomnography (PSG). There were no differences between the HOT and HOT + SR groups in firefighters' physical task performance, heart rate, core temperature, USG, or fluid intake. Ratings of perceived exertion were higher (p < 0.05) in the HOT + SR group for two of the six firefighting tasks. The HOT group spent approximately 7 min more undertaking moderate physical activity throughout the 2-h work circuits compared to the HOT + SR group. Two nights of sleep restriction did not influence firefighters' physical task performance or physiological responses during 3 days of simulated wildfire suppression. Further research is needed to explore firefighters' pacing strategies during real wildfire suppression.
Hybrid-optimization strategy for the communication of large-scale Kinetic Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Wu, Baodong; Li, Shigang; Zhang, Yunquan; Nie, Ningming
2017-02-01
The parallel Kinetic Monte Carlo (KMC) algorithm based on domain decomposition has been widely used in large-scale physical simulations. However, the communication overhead of the parallel KMC algorithm is significant, and it severely degrades overall performance and scalability. In this paper, we present a hybrid optimization strategy to reduce the communication overhead of parallel KMC simulations. We first propose a communication aggregation algorithm to reduce the total number of messages and eliminate communication redundancy. Then, we utilize shared memory to reduce the memory-copy overhead of intra-node communication. Finally, we optimize the communication scheduling using neighborhood collective operations. We demonstrate the scalability and high performance of our hybrid optimization strategy through both theoretical and experimental analysis. Results show that the optimized KMC algorithm exhibits better performance and scalability than the well-known open-source library SPPARKS. On a 32-node Xeon E5-2680 cluster (640 cores in total), the optimized algorithm reduces the communication time by 24.8% compared with SPPARKS.
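The first of those ideas, message aggregation, can be sketched briefly: rather than sending one message per boundary event, events destined for the same neighbor are buffered and flushed as a single message. The class below is an illustrative skeleton (the event contents and the send function are assumptions, not the SPPARKS or paper interfaces):

```python
# Illustrative message-aggregation buffer for a domain-decomposed simulation.
from collections import defaultdict

class AggregatingSender:
    def __init__(self, send_fn):
        self.buffers = defaultdict(list)  # destination rank -> pending events
        self.send_fn = send_fn            # e.g., comm.send from mpi4py

    def post(self, dest_rank, event):
        """Queue a small boundary event instead of sending it immediately."""
        self.buffers[dest_rank].append(event)

    def flush(self):
        """Send one combined message per neighbor, then reset the buffers."""
        for dest, events in self.buffers.items():
            self.send_fn(events, dest)
        self.buffers.clear()

# Hypothetical usage:
#   sender = AggregatingSender(lambda msg, dest: comm.send(msg, dest=dest))
#   sender.post(1, ("site", 42)); sender.post(1, ("site", 57)); sender.flush()
```

Fewer, larger messages amortize per-message latency, which is exactly the overhead the abstract reports cutting by 24.8%.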
VCSim3: a VR simulator for cardiovascular interventions.
Korzeniowski, Przemyslaw; White, Ruth J; Bello, Fernando
2018-01-01
Effective and safe performance of cardiovascular interventions requires excellent catheter/guidewire manipulation skills. These skills are currently gained mainly through an apprenticeship on real patients, which may not be safe or cost-effective. Computer simulation offers an alternative for core skills training. However, replicating the physical behaviour of real instruments navigated through blood vessels is a challenging task. We have developed VCSim3, a virtual reality simulator for cardiovascular interventions. The simulator leverages an inextensible Cosserat rod to model virtual catheters and guidewires. Their mechanical properties were optimized with respect to their real counterparts scanned in a silicone phantom using X-ray CT imaging. The instruments are manipulated via a VSP haptic device. Supporting features such as fluoroscopic visualization, contrast flow propagation, cardiac motion, balloon inflation, and stent deployment enable performing a complete angioplasty procedure. We present detailed results on the simulation accuracy of the virtual instruments, along with their computational performance. In addition, the results of a preliminary face and content validation study conducted with a group of 17 interventional radiologists are given. VR simulation of cardiovascular procedures can contribute to surgical training and improve the educational experience without putting patients at risk, raising ethical issues, or requiring expensive animal or cadaver facilities. VCSim3 is still a prototype, yet the initial results indicate that it provides promising foundations for further development.
Comparison of Thunderstorm Simulations from WRF-NMM and WRF-ARW Models over East Indian Region
Litta, A. J.; Mary Ididcula, Sumam; Mohanty, U. C.; Kiran Prasad, S.
2012-01-01
Thunderstorms are typical mesoscale systems dominated by intense convection, and mesoscale models are essential for the accurate prediction of such high-impact weather events. In the present study, an attempt has been made to compare the simulated results of three thunderstorm events using the NMM and ARW dynamical cores of the WRF system, and to validate the model results against observations. Both models performed well in capturing stability indices, which are indicators of severe convective activity. Comparison of model-simulated radar reflectivity imagery with observations revealed that the NMM model simulated the propagation of the squall line well, while the squall-line movement was slow in ARW. From the model-simulated spatial plots of cloud-top temperature, the NMM model better captured the genesis, intensification, and propagation of the thunder squall than the ARW model. The statistical analysis of rainfall indicates the better performance of NMM over ARW. Comparison of model-simulated thunderstorm-affected parameters with observations showed that NMM performed better than ARW in capturing the sharp rise in humidity and drop in temperature. This suggests that the NMM model has the potential to provide unique and valuable information for severe thunderstorm forecasters over the east Indian region. PMID:22645480
A Bandwidth-Optimized Multi-Core Architecture for Irregular Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Secchi, Simone; Tumeo, Antonino; Villa, Oreste
This paper presents an architecture template for next-generation high performance computing systems specifically targeted at irregular applications. We start from the observation that full-system interconnection and memory bandwidth are expected to grow by a factor of 10 in future generations. In order to keep up with such communication capacity, while still resorting to fine-grained multithreading as the main way to tolerate the unpredictable memory access latencies of irregular applications, we show how overall performance scaling can benefit from the multi-core paradigm. At the same time, we show how such an architecture template must be coupled with specific techniques in order to optimize bandwidth utilization and achieve maximum scalability. We propose a technique based on memory reference aggregation, together with the related hardware implementation, as one such optimization. We explore the proposed architecture template by focusing on the Cray XMT architecture and, using a dedicated simulation infrastructure, validate the performance of our template with two typical irregular applications. Our experimental results prove the benefits provided by both the multi-core approach and the bandwidth-optimizing reference aggregation technique.
Thermal behavior of cylindrical buckling restrained braces at elevated temperatures.
Talebi, Elnaz; Tahir, Mahmood Md; Zahmatkesh, Farshad; Yasreen, Airil; Mirza, Jahangir
2014-01-01
The primary focus of this investigation was to analyze sequentially coupled nonlinear thermal stress, using a three-dimensional model, in order to shed light on the behavior of Buckling Restrained Brace (BRB) elements with a circular cross section at elevated temperature. Such bracing systems comprise a cylindrical steel core encased in a strong concrete-filled steel hollow casing. A debonding agent was rubbed on the core's surface to avoid shear stress transfer to the restraining system. The numerical model was verified against analytical solutions developed by other researchers. The performance of the BRB system under seismic loading at ambient temperature has been well documented; however, its performance in case of fire has yet to be explored. This study showed that the failure of the brace may be attributed to material strength reduction and high compressive forces, both due to temperature rise. Furthermore, the limiting temperatures for the linear behavior of the steel casing and the concrete in the BRB element, for both the numerical and analytical simulations, were about 196°C and 225°C, respectively. Finally, it is concluded that the performance of the BRB at elevated temperatures was the same as that seen at room temperature; that is, the steel core yields prior to the restraining system.
Virtual machine-based simulation platform for mobile ad-hoc network-based cyber infrastructure
Yoginath, Srikanth B.; Perumalla, Kayla S.; Henz, Brian J.
2015-09-29
In modeling and simulating complex systems such as mobile ad-hoc networks (MANETs) in defense communications, it is a major challenge to reconcile multiple important considerations: the rapidity of unavoidable changes to the software (network layers and applications), the difficulty of modeling the critical, implementation-dependent behavioral effects, the need to sustain larger scale scenarios, and the desire for faster simulations. Here we present our approach in successfully reconciling them using a virtual time-synchronized virtual machine (VM)-based parallel execution framework that accurately lifts both the devices and the network communications to a virtual time plane while retaining full fidelity. At the core of our framework is a scheduling engine that operates at the level of a hypervisor scheduler, offering a unique ability to execute multi-core guest nodes over multi-core host nodes in an accurate, virtual time-synchronized manner. In contrast to other related approaches that suffer from either speed or accuracy issues, our framework provides MANET node-wise scalability, high fidelity of software behaviors, and time-ordering accuracy. The design and development of this framework is presented, and an actual implementation based on the widely used Xen hypervisor system is described. Benchmarks with synthetic and actual applications are used to identify the benefits of our approach. The time inaccuracy of traditional emulation methods is demonstrated, in comparison with the accurate execution of our framework, verified against theoretically correct results expected from analytical models of the same scenarios. In the largest high fidelity tests, we are able to perform virtual time-synchronized simulation of 64-node VM-based full-stack, actual software behaviors of MANETs containing a mix of static and mobile (unmanned airborne vehicle) nodes, hosted on a 32-core host, with full fidelity of unmodified ad-hoc routing protocols, unmodified application executables, and user-controllable physical layer effects including inter-device wireless signal strength, reachability, and connectivity.
Effects of Boron and Graphite Uncertainty in Fuel for TREAT Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vaughn, Kyle; Mausolff, Zander; Gonzalez, Esteban
Advanced modeling techniques and current computational capacity make full-core TREAT simulations possible; the goal of such simulations is to understand the pre-test core and minimize the number of required calibrations. However, simulating TREAT with a high degree of precision requires that the reactor materials and geometry also be modeled with a high degree of precision. This paper examines how uncertainty in the reported values of boron and graphite affects simulations of TREAT.
FINDING THE FIRST COSMIC EXPLOSIONS. II. CORE-COLLAPSE SUPERNOVAE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whalen, Daniel J.; Joggerst, Candace C.; Fryer, Chris L.
2013-05-01
Understanding the properties of Population III (Pop III) stars is a prerequisite to elucidating the nature of primeval galaxies, the chemical enrichment and reionization of the early intergalactic medium, and the origin of supermassive black holes. While the primordial initial mass function (IMF) remains unknown, recent evidence from numerical simulations and stellar archaeology suggests that some Pop III stars may have had lower masses than previously thought, 15-50 M☉ in addition to 50-500 M☉. The detection of Pop III supernovae (SNe) by JWST, WFIRST, or the TMT could directly probe the primordial IMF for the first time. We present numerical simulations of 15-40 M☉ Pop III core-collapse SNe performed with the Los Alamos radiation hydrodynamics code RAGE. We find that they will be visible in the earliest galaxies out to z ≈ 10-15, tracing their star formation rates and in some cases revealing their positions on the sky. Since the central engines of Pop III and solar-metallicity core-collapse SNe are quite similar, future detection of any Type II SNe by next-generation NIR instruments will in general be limited to this epoch.
Computational Analysis of a Pylon-Chevron Core Nozzle Interaction
NASA Technical Reports Server (NTRS)
Thomas, Russell H.; Kinzie, Kevin W.; Pao, S. Paul
2001-01-01
In typical engine installations, the pylon of an engine creates a flow disturbance that interacts with the engine exhaust flow. This interaction of the pylon with the exhaust flow from a dual-stream nozzle was studied computationally. The dual-stream nozzle simulates an engine with a bypass ratio of five. A total of five configurations were simulated, all at the take-off operating point. All computations were performed using the structured PAB3D code, which solves the steady, compressible, Reynolds-averaged Navier-Stokes equations. These configurations included a core nozzle with eight chevron noise-reduction devices built into the nozzle trailing edge. Baseline cases had no chevron devices and were run with and without a pylon. Cases with the chevrons were also studied with and without the pylon, and another case was run with the chevrons rotated relative to the pylon. The fan nozzle did not have chevron devices attached. Solutions showed that the effect of the pylon is to distort the round jet plume and to destroy the symmetrical lobed pattern created by the core chevrons. Several overall flow field quantities were calculated that might be used, in extensions of this work, to find flow field parameters that correlate with changes in noise.
Influence of shell thickness on thermal stability of bimetallic Al-Pd nanoparticles
NASA Astrophysics Data System (ADS)
Wen, John Z.; Nguyen, Ngoc Ha; Rawlins, John; Petre, Catalin F.; Ringuette, Sophie
2014-07-01
Aluminum-based bimetallic core-shell nanoparticles have shown promising applications in the civil and defense industries. This study addresses the thermal stability of aluminum-palladium (Al-Pd) core/shell nanoparticles with shell thicknesses of 5, 6, and 7 Å. Classical molecular dynamics (MD) simulations are performed in order to investigate the effects of the shell thickness on the ignition mechanism and subsequent energetic processes of these nanoparticles. Histograms of temperature change and structural evolution clearly show the inhibiting role of the Pd shell during ignition. While a nanoparticle with a thicker shell is more thermally stable, and hence requires more excess energy (stored as potential energy of the nanoparticle and provided through numerical heating) to initiate the thermite reaction, a higher adiabatic temperature can be produced from such a nanoparticle owing to its greater Pd content. The two-stage thermite reactions are discussed along with their activation energies, based on the energy balance during MD heating and production runs. Analysis of the simulation results reveals that the inner pressure of the core-shell nanoparticle increases with both temperature and the absorbed thermal energy during heating, which may result in a breakup of the Pd shell.
GPU accelerated simulations of 3D deterministic particle transport using discrete ordinates method
NASA Astrophysics Data System (ADS)
Gong, Chunye; Liu, Jie; Chi, Lihua; Huang, Haowei; Fang, Jingyue; Gong, Zhenghu
2011-07-01
The Graphics Processing Unit (GPU), originally developed for real-time, high-definition 3D graphics in computer games, now offers great capability for scientific applications. The basis of particle transport simulation is the time-dependent, multi-group, inhomogeneous Boltzmann transport equation. The numerical solution of the Boltzmann equation involves the discrete ordinates (Sn) method and the procedure of source iteration. In this paper, we present a GPU-accelerated simulation of one-energy-group, time-independent, deterministic discrete ordinates particle transport in 3D Cartesian geometry (Sweep3D). The performance of the GPU simulations is reported for runs with vacuum boundary conditions. The relative advantages and disadvantages of the GPU implementation, simulation on multiple GPUs, the programming effort, and code portability are also discussed. The results show that the overall performance speedup of one NVIDIA Tesla M2050 GPU ranges from 2.56 compared with one Intel Xeon X5670 chip to 8.14 compared with one Intel Core Q6600 chip with no flux fixup. The simulation with flux fixup on one M2050 is 1.23 times faster than on one X5670.
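The source-iteration/sweep structure named above is compact enough to sketch in one dimension: each iteration sweeps the spatial cells in the upwind direction for every discrete ordinate, then updates the scattering source from the new scalar flux. The slab problem below (step differencing, S8 Gauss-Legendre quadrature, illustrative cross sections) is a toy analogue of what Sweep3D does in 3D, not the Sweep3D code itself:

```python
# 1D slab discrete-ordinates (Sn) source iteration with step differencing.
import numpy as np

nx, dx = 100, 0.1
sigma_t, sigma_s, q = 1.0, 0.5, 1.0            # illustrative cross sections/source
mus, wts = np.polynomial.legendre.leggauss(8)  # S8 angular quadrature on (-1, 1)

phi = np.zeros(nx)                             # scalar flux
for iteration in range(200):
    src = 0.5 * (sigma_s * phi + q)            # isotropic source per direction
    phi_new = np.zeros(nx)
    for mu, w in zip(mus, wts):
        psi_in = 0.0                           # vacuum boundary
        cells = range(nx) if mu > 0 else range(nx - 1, -1, -1)
        for i in cells:                        # the upwind "sweep"
            psi = (src[i] * dx + abs(mu) * psi_in) / (abs(mu) + sigma_t * dx)
            phi_new[i] += w * psi
            psi_in = psi
    delta = np.max(np.abs(phi_new - phi))
    phi = phi_new
    if delta < 1e-8:
        break
```

The recurrence inside each sweep is inherently sequential along a direction, which is why GPU and KBA-style implementations instead exploit parallelism across ordinates and across the wavefront of cells.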
Graphics Processing Unit (GPU) Acceleration of the Goddard Earth Observing System Atmospheric Model
NASA Technical Reports Server (NTRS)
Putnam, William
2011-01-01
The Goddard Earth Observing System 5 (GEOS-5) is the atmospheric model used by the Global Modeling and Assimilation Office (GMAO) for a variety of applications, from long-term climate prediction at relatively coarse resolution, to data assimilation and numerical weather prediction, to very high-resolution cloud-resolving simulations. GEOS-5 is being ported to a graphics processing unit (GPU) cluster at the NASA Center for Climate Simulation (NCCS). By utilizing GPU co-processor technology, we expect to increase the throughput of GEOS-5 by at least an order of magnitude and accelerate the process of scientific exploration across all scales of global modeling, including: the large-scale, high-end application of non-hydrostatic, global, cloud-resolving modeling at 10- to 1-kilometer (km) global resolutions; intermediate-resolution seasonal climate and weather prediction at 50- to 25-km resolution on small clusters of GPUs; and long-range, coarse-resolution climate modeling, enabled on a small box of GPUs for the individual researcher. After being ported to the GPU cluster, the primary physics components and the dynamical core of GEOS-5 have demonstrated a potential speedup of 15-40 times over conventional processor cores. Performance improvements of this magnitude reduce the required scalability of 1-km, global, cloud-resolving models from an unfathomable 6 million cores to an attainable 200,000 GPU-enabled cores.
Reed, K. A.; Bacmeister, J. T.; Rosenbloom, N. A.; ...
2015-05-13
Our paper examines the impact of the dynamical core on the simulation of tropical cyclone (TC) frequency, distribution, and intensity. The dynamical core, the central fluid-flow component of any general circulation model (GCM), is often overlooked in the analysis of a model's ability to simulate TCs, compared to the impact of more commonly documented components (e.g., physical parameterizations). The Community Atmosphere Model version 5 is configured with multiple dynamics packages. This analysis demonstrates that the dynamical core has a significant impact on storm intensity and frequency, even in the presence of similar large-scale environments. In particular, the spectral element core produces stronger TCs and more hurricanes than the finite-volume core using very similar parameterization packages, despite the latter having a slightly more favorable TC environment. These results suggest that more detailed investigations into the impact of the GCM dynamical core on TC climatology are needed to fully understand these uncertainties. Key points: the impact of the GCM dynamical core is often overlooked in TC assessments; the CAM5 dynamical core has a significant impact on TC frequency and intensity; and a larger effort is needed to better understand this uncertainty.
Patient-specific core decompression surgery for early-stage ischemic necrosis of the femoral head
Wang, Wei; Hu, Wei; Yang, Pei; Dang, Xiao Qian; Li, Xiao Hui; Wang, Kun Zheng
2017-01-01
Introduction: Core decompression is an efficient treatment for early-stage ischemic necrosis of the femoral head. In conventional procedures, the pre-operative X-ray only shows one plane of the ischemic area, which often results in inaccurate drilling. This paper introduces a new method that uses computer-assisted technology and rapid prototyping to enhance drilling accuracy during core decompression surgeries, and presents a validation study of cadaveric tests. Methods: Twelve cadaveric human femurs were used to simulate early-stage ischemic necrosis. The core decompression target at the anterolateral femoral head was simulated using an embedded glass ball (target). Three positioning Kirschner wires were drilled into the top and bottom of the greater trochanter. The specimen was then subjected to computed tomography (CT). A CT image of the specimen was imported into the Mimics software to construct a three-dimensional model including the target. The best core decompression channel was then designed using the 3D model. A navigational template for the specimen was designed using the Pro/E software and manufactured by rapid prototyping technology to guide the drilling channel. The specimen-specific navigation template was installed on the specimen using the positioning Kirschner wires. Drilling was performed using a guide needle through the guiding hole on the template. The distance between the end point of the guide needle and the target was measured to validate the patient-specific surgical accuracy. Results: The average distance between the tip of the guide needle drilled through the guiding template and the target was 1.92±0.071 mm. Conclusions: Core decompression using a computer-rapid prototyping template is a reliable and accurate technique that could provide a new method of precision decompression for early-stage ischemic necrosis. PMID:28464029
Impact and Blast Resistance of Sandwich Plates
NASA Astrophysics Data System (ADS)
Dvorak, George J.; Bahei-El-Din, Yehia A.; Suvorov, Alexander P.
The response of conventional and modified sandwich plate designs is examined under static load, under impact by a rigid cylindrical or flat indenter, and during and after an exponential pressure impulse lasting 0.05 ms with a peak pressure of 100 MPa, simulating a nearby explosion. The conventional sandwich design consists of thin outer (loaded side) and inner facesheets made of carbon/epoxy fibrous laminates, separated by a thick layer of structural foam core. In the three modified designs, one or two thin ductile interlayers are inserted between the outer facesheet and the foam core. The materials selected for the interlayers are a hyperelastic, rate-independent polyurethane; a compression strain- and strain-rate-dependent, elastic-plastic polyurea; and an elastomeric foam. ABAQUS and LS-Dyna software were used in the various response simulations. Performance comparisons between the enhanced and conventional designs show that the modified designs provide much better protection against different damage modes under both load regimes. After impact, local facesheet deflection, core compression, and the energy release rate of delamination cracks, which may extend on hidden interfaces between facesheet and core, are all reduced. Under blast or impulse loads, reductions have been observed in the extent of core crushing, facesheet delaminations and vibration amplitudes, and in overall deflections. Similar reductions were found in the kinetic energy and in the stored and dissipated strain energy. Although strain rates as high as 10⁴ s⁻¹ are produced by the blast pressure, peak strains in the interlayers were too low to raise the flow stress in the polyurea to that in the polyurethane, where a possible rate-dependent response was neglected. Therefore, stiff polyurethane or hard rubber interlayer materials should be used for protection of sandwich plate foam cores against both impact and blast-induced damage.
Induction simulation of gas core nuclear engine
NASA Technical Reports Server (NTRS)
Poole, J. W.; Vogel, C. E.
1973-01-01
The design, construction, and operation of an induction-heated plasma device known as a combined principles simulator are discussed. This device incorporates the major design features of the gas core nuclear rocket engine, such as solid feed, propellant seeding, propellant injection through the walls, and a transpiration-cooled, choked-flow nozzle. Both argon and nitrogen were used as propellant simulants, and sodium was used as the fuel simulant. In addition, a number of experiments were conducted utilizing depleted uranium as the fuel. The test program revealed that satisfactory operation of this device can be accomplished over a range of operating conditions and provided additional data to confirm the validity of the gas core concept.
NASA Astrophysics Data System (ADS)
Fuhrer, Oliver; Chadha, Tarun; Hoefler, Torsten; Kwasniewski, Grzegorz; Lapillonne, Xavier; Leutwyler, David; Lüthi, Daniel; Osuna, Carlos; Schär, Christoph; Schulthess, Thomas C.; Vogt, Hannes
2018-05-01
The best hope for reducing long-standing global climate model biases is to increase resolution to the kilometer scale. Here we present results from an ultrahigh-resolution non-hydrostatic climate model for a near-global setup running on the full Piz Daint supercomputer on 4888 GPUs (graphics processing units). The dynamical core of the model has been completely rewritten using a domain-specific language (DSL) for performance portability across different hardware architectures. Physical parameterizations and diagnostics have been ported using compiler directives. To our knowledge this represents the first complete atmospheric model being run entirely on accelerators at this scale. At a grid spacing of 930 m (1.9 km), we achieve a simulation throughput of 0.043 (0.23) simulated years per day and an energy consumption of 596 MWh per simulated year. Furthermore, we propose a new memory usage efficiency (MUE) metric that considers how efficiently the memory bandwidth - the dominant bottleneck of climate codes - is being used.
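The quoted throughput and energy figures are linked by a simple relation: energy per simulated year equals average machine power times the wall-clock time needed to advance one simulated year. A minimal sketch, assuming (our assumption, not a figure reported above) a constant average power draw:

```python
# Back-of-envelope relation between throughput and energy:
#   energy per simulated year [MWh] = P [MW] * 24 / SYPD

def energy_per_simulated_year(power_mw, sypd):
    """Energy (MWh) needed to advance the model by one simulated year."""
    return power_mw * 24.0 / sypd

# Inverting the quoted figures (0.043 SYPD, 596 MWh/SY) implies an
# average draw of roughly 596 * 0.043 / 24 ~ 1.1 MW for the run.
implied_power_mw = 596 * 0.043 / 24
print(f"implied average power: {implied_power_mw:.2f} MW")
print(f"energy per SY: {energy_per_simulated_year(implied_power_mw, 0.043):.0f} MWh")
```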
IMPROVEMENTS IN THE THERMAL NEUTRON CALIBRATION UNIT, TNF2, AT LNMRI/IRD.
Astuto, A; Fernandes, S S; Patrão, K C S; Fonseca, E S; Pereira, W W; Lopes, R T
2018-02-21
The standard thermal neutron flux unit, TNF2, at the Brazilian National Ionizing Radiation Metrology Laboratory was rebuilt. Fluence is still achieved by moderating four 241Am-Be sources of 0.6 TBq each. The facility was again simulated and redesigned with a graphite core surrounded by paraffin-loaded graphite blocks. Simulations using the MCNPX code were performed for different geometric arrangements of moderator materials and neutron sources, and the resulting neutron fluence quality in terms of intensity, spectrum and cadmium ratio was evaluated. After this step, the system was assembled based on the simulation results, and measurements were performed both with instruments available at LNMRI/IRD and with simulated instruments. This work focuses on the characterization of a central chamber point and of external points around the TNF2 in terms of neutron spectrum, fluence and ambient dose equivalent, H*(10). The system was validated with measurements of spectra, fluence and H*(10) to ensure traceability.
NASA Astrophysics Data System (ADS)
Yang, Lin; Zhang, Feng; Wang, Cai-Zhuang; Ho, Kai-Ming; Travesset, Alex
2018-04-01
We present an implementation of EAM and FS interatomic potentials, which are widely used in simulating metallic systems, in HOOMD-blue, a software package designed to perform classical molecular dynamics simulations using GPU acceleration. We first discuss the details of our implementation and then report extensive benchmark tests. We demonstrate that single-precision floating point operations efficiently implemented on GPUs can produce sufficient accuracy when compared against double-precision codes, as demonstrated in test simulations calculating the glass-transition temperature of Cu64.5Zr35.5 and the pair correlation function g(r) of liquid Ni3Al. Our code scales well with the size of the simulated system on NVIDIA Tesla M40 and P100 GPUs. Compared with the popular LAMMPS code running on 32 cores of AMD Opteron 6220 processors, the GPU/CPU performance ratio can reach as high as 4.6. The source code can be accessed through the HOOMD-blue web page for free by any interested user.
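For readers unfamiliar with EAM/FS potentials, the energy functional being evaluated is E = Σᵢ F(ρᵢ) + ½ Σᵢ≠ⱼ φ(rᵢⱼ), with host density ρᵢ = Σⱼ≠ᵢ f(rᵢⱼ). A minimal reference sketch of that functional in plain Python follows; it is not the GPU implementation described above, and the callables stand in for the tabulated F, φ and f functions of a real potential file:

```python
import numpy as np

def eam_energy(positions, embed, pair, density, cutoff):
    """Total EAM energy: E = sum_i F(rho_i) + 1/2 sum_{i!=j} phi(r_ij)."""
    n = len(positions)
    energy = 0.0
    rho = np.zeros(n)
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(positions[i] - positions[j])
            if r < cutoff:
                energy += pair(r)     # pairwise repulsion phi(r_ij)
                rho[i] += density(r)  # host electron density at atom i
                rho[j] += density(r)  # ... and at atom j
    energy += embed(rho).sum()        # many-body embedding term F(rho_i)
    return energy
```

A production code replaces the O(N²) double loop with neighbor lists and interpolates F, φ and f from tables; the GPU implementation described above parallelizes exactly these per-pair and per-atom accumulations.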
NASA Astrophysics Data System (ADS)
Vatansever, Erol
2017-05-01
By means of the Monte Carlo simulation method with the Metropolis algorithm, we elucidate the thermal and magnetic phase transition behaviors of a ferrimagnetic core/shell nanocubic system driven by a time-dependent magnetic field. The particle core is composed of ferromagnetic spins, and it is surrounded by an antiferromagnetic shell. At the interface of the core/shell particle, we use antiferromagnetic spin-spin coupling. We simulate the nanoparticle using classical Heisenberg spins. After a detailed analysis, our Monte Carlo simulation results suggest that the present system exhibits unusual and interesting magnetic behaviors. For example, in the lower-temperature regions, an increase in the amplitude of the external field destroys the antiferromagnetism in the shell part of the nanoparticle, leading to a ground state with ferromagnetic character. Moreover, particular attention has been dedicated to the hysteresis behaviors of the system. For the first time, we show that frequency dispersions can be categorized into three groups at fixed temperature for finite core/shell systems, as in the case of conventional bulk systems under the influence of an oscillating magnetic field.
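A minimal sketch of the simulation kernel implied above: single-spin Metropolis updates for classical Heisenberg spins in a time-dependent field. For brevity it uses one uniform exchange constant J, whereas the study uses distinct ferromagnetic core, antiferromagnetic shell, and antiferromagnetic interface couplings:

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_sweep(spins, neighbors, J, h_t, T):
    """One Metropolis sweep for classical Heisenberg spins |S_i| = 1.

    spins: (N, 3) unit vectors; neighbors: list of neighbor-index lists;
    J: exchange coupling; h_t: external field vector at the current time.
    """
    n = len(spins)
    for _ in range(n):
        i = rng.integers(n)
        new = rng.normal(size=3)              # propose a fresh random direction
        new /= np.linalg.norm(new)
        local = J * spins[neighbors[i]].sum(axis=0) + h_t
        dE = -np.dot(new - spins[i], local)   # E_i = -S_i . (J sum_j S_j + h)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i] = new

# Tiny demo: 100 spins on a ring, sinusoidal drive sampled once per sweep.
n = 100
spins = rng.normal(size=(n, 3))
spins /= np.linalg.norm(spins, axis=1, keepdims=True)
neighbors = [[(i - 1) % n, (i + 1) % n] for i in range(n)]
for t in range(1000):
    h_t = np.array([0.0, 0.0, 2.0 * np.sin(2 * np.pi * t / 200)])
    metropolis_sweep(spins, neighbors, J=1.0, h_t=h_t, T=0.5)
```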
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harrison, Cyrus; Larsen, Matt; Brugger, Eric
Strawman is a system designed to explore the in situ visualization and analysis needs of simulation code teams running multi-physics calculations on many-core HPC architectures. It provides rendering pipelines that can leverage both many-core CPUs and GPUs to render images of simulation meshes.
Flow Analysis of a Gas Turbine Low- Pressure Subsystem
NASA Technical Reports Server (NTRS)
Veres, Joseph P.
1997-01-01
The NASA Lewis Research Center is coordinating a project to numerically simulate aerodynamic flow in the complete low-pressure subsystem (LPS) of a gas turbine engine. The numerical model solves the three-dimensional Navier-Stokes flow equations through all components within the low-pressure subsystem as well as the external flow around the engine nacelle. The Advanced Ducted Propfan Analysis Code (ADPAC), which is being developed jointly by Allison Engine Company and NASA, is the Navier-Stokes flow code used for the LPS simulation. The majority of the LPS project is being done under a NASA Lewis contract with Allison. Other contributors to the project are NYMA and the University of Toledo. For this project, the Energy Efficient Engine designed by GE Aircraft Engines is being modeled. This engine includes a low-pressure system and a high-pressure system. An inlet, a fan, a booster stage, a bypass duct, a lobed mixer, a low-pressure turbine, and a jet nozzle comprise the low-pressure subsystem. The tightly coupled flow analysis evaluates aerodynamic interactions between all components of the LPS. The high-pressure core of the engine is simulated with a one-dimensional thermodynamic cycle code in order to provide boundary conditions to the detailed LPS model. This core consists of a high-pressure compressor, a combustor, and a high-pressure turbine. The three-dimensional LPS flow model is coupled to the one-dimensional core engine model to provide a "hybrid" flow model of the complete Energy Efficient Engine. The resulting hybrid engine model evaluates the detailed interaction between the LPS components at design and off-design engine operating conditions while considering the lumped-parameter performance of the core engine.
Search for Thermal X-ray Features from the Crab nebula with Hitomi Soft X-ray Spectrometer
NASA Astrophysics Data System (ADS)
Tsujimoto, M.; Mori, K.; Lee, S.; Yamaguchi, H.; Tominaga, N.; Moriya, T.; Sato, T.; Bamba, A.
2017-10-01
The Crab nebula originates from a core-collapse supernova (SN) in 1054. It has an anomalously low observed ejecta mass for an Fe-core collapse SN. Intensive searches were made for an undetected massive shell to resolve this discrepancy. An alternative idea is that SN 1054 was an electron-capture (EC) explosion with a lower explosion energy than Fe-core collapse SNe. In the X-rays, imaging searches were performed for plasma emission from a shell in the Crab outskirts, but the extreme brightness of the nebula hampers access to its vicinity. We instead used a spectroscopic technique with the X-ray micro-calorimeter onboard Hitomi. We searched for emission or absorption features from the thermal plasma and set a new limit. We re-evaluated the existing data to claim that the X-ray plasma mass is < 1 M_{⊙} for a wide range of assumed parameters. We further performed hydrodynamic simulations for two SN models (Fe core versus EC) under two environments (uniform ISM versus progenitor wind). We found that the observed mass limit is compatible with both SN models if the environment has a low density of <0.03 cm^{-3} (Fe core) or <0.1 cm^{-3} (EC) for the uniform-density case, or <10^{14} g cm^{-1} for the wind density parameter in the wind environment.
NASA Astrophysics Data System (ADS)
Schmieschek, S.; Shamardin, L.; Frijters, S.; Krüger, T.; Schiller, U. D.; Harting, J.; Coveney, P. V.
2017-08-01
We introduce the lattice-Boltzmann code LB3D, version 7.1. Building on a parallel program and supporting tools which have enabled research utilising high performance computing resources for nearly two decades, LB3D version 7 provides a subset of the research code functionality as an open source project. Here, we describe the theoretical basis of the algorithm as well as computational aspects of the implementation. The software package is validated against simulations of meso-phases resulting from self-assembly in ternary fluid mixtures comprising immiscible and amphiphilic components such as water-oil-surfactant systems. The impact of the surfactant species on the dynamics of spinodal decomposition is tested, and a quantitative measurement of the permeability of a body-centred cubic (BCC) model porous medium for a simple binary mixture is described. Single-core performance and scaling behaviour of the code are reported for simulations on current supercomputer architectures.
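To make the algorithmic skeleton concrete, here is a minimal single-component BGK lattice-Boltzmann step in two dimensions (D2Q9). It only illustrates the collide-and-stream structure common to such codes; LB3D itself implements a D3Q19 multicomponent model with amphiphilic interactions:

```python
import numpy as np

# D2Q9 lattice velocities and weights.
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, u):
    """Maxwell-Boltzmann equilibrium truncated to second order in u."""
    cu = np.einsum('qd,xyd->xyq', c, u)
    usq = np.einsum('xyd,xyd->xy', u, u)[..., None]
    return rho[..., None] * w * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def step(f, tau=1.0):
    rho = f.sum(axis=-1)
    u = np.einsum('xyq,qd->xyd', f, c) / rho[..., None]
    f += (equilibrium(rho, u) - f) / tau           # BGK collision
    for q, (cx, cy) in enumerate(c):               # periodic streaming
        f[..., q] = np.roll(np.roll(f[..., q], cx, axis=0), cy, axis=1)
    return f

f = equilibrium(np.ones((32, 32)), np.zeros((32, 32, 2)))  # quiescent start
for _ in range(100):
    f = step(f)
```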
Biomedical Simulation: Evolution, Concepts, Challenges and Future Trends.
Sá-Couto, Carla; Patrão, Luís; Maio-Matos, Francisco; Pêgo, José Miguel
2016-12-30
Biomedical simulation is an effective educational complement for healthcare training, both at undergraduate and postgraduate levels. It enables knowledge, skills and attitudes to be acquired in a safe, educationally orientated and efficient manner. In this context, simulation provides skills and experience that facilitate the transfer of cognitive, psychomotor and communication competences, thus changing behavior and attitudes, and ultimately improving patient safety. Beyond the impact on individual and team performance, simulation provides an opportunity to study organizational failures and improve system performance. Over the last decades, simulation in healthcare has shown slow but steady growth, with visible maturation in the last ten years. The simulation community must continue to provide the core leadership in developing standards. There is a need for strategy and policy development to ensure its coordinated and cost-effective implementation, applied to patient safety. This paper reviews the evolution of biomedical simulation, including a review of Portuguese initiatives and nationwide programs. To level knowledge and standardize terminology, basic but essential concepts in clinical simulation are presented, together with some considerations on assessment, validation and reliability. The final sections discuss current challenges and future initiatives and strategies, crucial for the integration of simulation programs in the greater movement toward patient safety.
NASA Astrophysics Data System (ADS)
McClure, J. E.; Prins, J. F.; Miller, C. T.
2014-07-01
Multiphase flow implementations of the lattice Boltzmann method (LBM) are widely applied to the study of porous medium systems. In this work, we construct a new variant of the popular "color" LBM for two-phase flow in which a three-dimensional, 19-velocity (D3Q19) lattice is used to compute the momentum transport solution while a three-dimensional, seven-velocity (D3Q7) lattice is used to compute the mass transport solution. Based on this formulation, we implement a novel heterogeneous GPU-accelerated algorithm in which the mass transport solution is computed by multiple shared-memory CPU cores programmed using OpenMP while a concurrent solution of the momentum transport is performed using a GPU. The heterogeneous solution is demonstrated to provide a speedup of 2.6× compared to the multi-core CPU solution and 1.8× compared to the GPU-only solution, owing to concurrent utilization of both CPU and GPU bandwidths. Furthermore, we verify that the proposed formulation provides an accurate physical representation of multiphase flow processes and demonstrate that the approach can be applied to perform heterogeneous simulations of two-phase flow in porous media using a typical GPU-accelerated workstation.
Mix Model Comparison of Low Feed-Through Implosions
NASA Astrophysics Data System (ADS)
Pino, Jesse; MacLaren, S.; Greenough, J.; Casey, D.; Dewald, E.; Dittrich, T.; Khan, S.; Ma, T.; Sacks, R.; Salmonson, J.; Smalyuk, V.; Tipton, R.; Kyrala, G.
2016-10-01
The CD Mix campaign previously demonstrated the use of nuclear diagnostics to study the mix of separated reactants in plastic capsule implosions at the NIF. Recently, the separated reactants technique has been applied to the Two Shock (TS) implosion platform, which is designed to minimize this feed-through, isolate local mix at the gas-ablator interface, and produce core yields in good agreement with 1D clean simulations. The effects of both inner surface roughness and convergence ratio have been probed. The TT, DT, and DD neutron signals respectively give information about core gas performance, gas-shell atomic mix, and heating of the shell. In this talk, we describe efforts to model these implosions using high-resolution 2D ARES simulations. Various methods of interfacial mix will be considered, including the Reynolds-Averaged Navier-Stokes (RANS) KL method as well as a multicomponent enhanced diffusivity model with species, thermal, and pressure gradient terms. We also give predictions for an upcoming campaign to investigate mid-Z mixing by adding a Ge dopant to the CD layer. LLNL-ABS-697251. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Experimental performance of the regenerator for the Chrysler upgraded automotive gas turbine engine
NASA Technical Reports Server (NTRS)
Winter, J. M.; Nussle, R. C.
1982-01-01
Automobile gas turbine engine regenerator performance was studied in a regenerator test facility that provided a satisfactory simulation of the actual engine operating environment but with independent control of airflow and gas flow. Velocity and temperature distributions were measured immediately downstream of both the core high-pressure-side outlet and the core low-pressure-side outlet. For the original engine housing, the regenerator temperature effectiveness was 1 to 2 percent higher than the design value, and the heat transfer effectiveness was 2 to 4 percent lower than the design value over the range of test conditions simulating 50 to 100 percent of gas generator speed. Recalculating the design values to account for seal leakage decreased the design heat transfer effectiveness to values consistent with those measured herein. A baffle installed in the engine housing high-pressure-side inlet provided more uniform velocities out of the regenerator but did not improve the effectiveness. A housing designed to provide more uniform axial flow to the regenerator was also tested. Although temperature uniformity was improved, the effectiveness values were not improved. Nor did a 50-percent flow blockage (90-degree segment) applied to the high-pressure-side inlet change the effectiveness significantly.
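The temperature effectiveness quoted above follows the usual regenerator definition: the achieved air-side temperature rise divided by the maximum possible rise. A one-line sketch with illustrative temperatures (not test data):

```python
def temperature_effectiveness(t_air_in, t_air_out, t_gas_in):
    """Regenerator temperature effectiveness on the high-pressure (air) side:
    achieved temperature rise over the maximum possible rise."""
    return (t_air_out - t_air_in) / (t_gas_in - t_air_in)

# Illustrative values only: air heated from 500 K to 950 K by 1000 K gas.
print(temperature_effectiveness(t_air_in=500.0, t_air_out=950.0, t_gas_in=1000.0))  # 0.9
```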
NASA Astrophysics Data System (ADS)
Bergado, D. T.; Long, P. V.; Chaiyaput, S.; Balasubramaniam, A. S.
2018-04-01
Soft ground improvement techniques have become the most practical and popular methods to increase soil strength and stiffness and to reduce soil compressibility, including for the soft Bangkok clay. This paper focuses on the comparative performances of prefabricated vertical drains (PVD) using surcharge, vacuum and heat preloading, as well as the cement-admixed clay of the Deep Cement Mixing (DCM) and Stiffened DCM (SDCM) methods. The Vacuum-PVD can increase the horizontal coefficient of consolidation, Ch, resulting in a faster rate of settlement at the same magnitudes of settlement compared to conventional PVD. Several field methods of applying vacuum preloading are also compared. Moreover, the Thermal PVD and Thermal Vacuum PVD can further increase the coefficient of horizontal consolidation, Ch, with an associated reduction of kh/ks values by reducing the drainage retardation effects in the smear zone around the PVD, which results in faster rates of consolidation and higher magnitudes of settlement. Furthermore, the equivalent smear effect due to non-uniform consolidation is discussed in addition to the smear due to the mechanical installation of PVDs. In addition, a new kind of reinforced deep mixing method, namely the Stiffened Deep Cement Mixing (SDCM) pile, is introduced to improve flexural resistance, improve field quality control, and prevent unexpected failures of the Deep Cement Mixing (DCM) pile. The SDCM pile consists of a DCM pile reinforced with an inserted precast reinforced concrete (RC) core. A full-scale test embankment on soft clay improved by SDCM and DCM piles was also analysed. Numerical simulations using the 3D PLAXIS Foundation finite element software were performed to understand the behavior of SDCM and DCM piles. The simulation results indicated that the surface settlements decreased with increasing length of the RC cores and, to a lesser extent, with increasing sectional area of the RC cores in the SDCM piles. In addition, the lateral movements decreased with increasing length (beyond 4 m) and sectional area of the RC cores in the SDCM piles. The results of the numerical simulations agreed closely with the observed data and verified the parameters affecting the performance and behavior of both SDCM and DCM piles.
High-performance multiprocessor architecture for a 3-D lattice gas model
NASA Technical Reports Server (NTRS)
Lee, F.; Flynn, M.; Morf, M.
1991-01-01
The lattice gas method has recently emerged as a promising discrete particle simulation method in areas such as fluid dynamics. We present a very high-performance scalable multiprocessor architecture, called ALGE, proposed for the simulation of a realistic 3-D lattice gas model, Henon's 24-bit FCHC isometric model. Each of these VLSI processors is as powerful as a CRAY-2 for this application. ALGE is scalable in the sense that it achieves linear speedup for both fixed and increasing problem sizes with more processors. The core computation of a lattice gas model consists of many repetitions of two alternating phases: particle collision and propagation. Functional decomposition by symmetry group and virtual move are the respective keys to efficient implementation of collision and propagation.
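The two-phase structure described above is easy to illustrate with a toy boolean lattice gas. The sketch below uses the simple 4-velocity HPP model in 2-D rather than the 24-bit FCHC model, but alternates the same collision and propagation phases:

```python
import numpy as np

def collide(f):
    """Head-on pairs (E+W with no N/S, or N+S with no E/W) scatter 90 degrees;
    particle number and momentum are conserved."""
    e, w, n, s = f[..., 0], f[..., 1], f[..., 2], f[..., 3]
    ew = e & w & ~n & ~s
    ns = n & s & ~e & ~w
    f[..., 0] = (e & ~ew) | ns
    f[..., 1] = (w & ~ew) | ns
    f[..., 2] = (n & ~ns) | ew
    f[..., 3] = (s & ~ns) | ew
    return f

def propagate(f):
    """Each boolean channel moves one site along its lattice velocity."""
    f[..., 0] = np.roll(f[..., 0],  1, axis=1)  # east movers
    f[..., 1] = np.roll(f[..., 1], -1, axis=1)  # west movers
    f[..., 2] = np.roll(f[..., 2], -1, axis=0)  # north movers
    f[..., 3] = np.roll(f[..., 3],  1, axis=0)  # south movers
    return f

rng = np.random.default_rng(1)
f = rng.random((64, 64, 4)) < 0.3               # random boolean occupation
for _ in range(100):
    f = propagate(collide(f))                   # the two alternating phases
```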
NASA Astrophysics Data System (ADS)
Palmesi, P.; Abert, C.; Bruckner, F.; Suess, D.
2018-05-01
Fast stray field calculation is commonly considered of great importance for micromagnetic simulations, since it is the most time-consuming part of the simulation. The Fast Multipole Method (FMM) has displayed linear O(N) parallelization behavior on many cores. This article investigates the error of a recent FMM approach that approximates sources using linear, rather than constant, finite elements in the singular integral for calculating the stray field and the corresponding potential. Having measured performance in an earlier manuscript, here we investigate the convergence of the relative L2 error for several FMM simulation parameters. Various scenarios, either calculating the stray field directly or via the potential, are discussed.
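The error measure in question is the standard relative L2 norm between the FMM result and a reference (e.g. directly evaluated) field; a minimal sketch:

```python
import numpy as np

def relative_l2_error(field, reference):
    """Relative L2 error ||u - u_ref|| / ||u_ref|| between a computed
    field (e.g. an FMM stray field) and a reference solution."""
    field, reference = np.asarray(field), np.asarray(reference)
    return np.linalg.norm(field - reference) / np.linalg.norm(reference)

# A convergence study then sweeps an accuracy parameter p (hypothetical):
# errors = [relative_l2_error(fmm_field(p), direct_field) for p in range(2, 10)]
```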
NASA Astrophysics Data System (ADS)
Rudianto, Indra; Sudarmaji
2018-04-01
We present an implementation of the spectral-element method for simulation of two-dimensional elastic wave propagation in fully heterogeneous media. We have incorporated most of realistic geological features in the model, including surface topography, curved layer interfaces, and 2-D wave-speed heterogeneity. To accommodate such complexity, we use an unstructured quadrilateral meshing technique. Simulation was performed on a GPU cluster, which consists of 24 core processors Intel Xeon CPU and 4 NVIDIA Quadro graphics cards using CUDA and MPI implementation. We speed up the computation by a factor of about 5 compared to MPI only, and by a factor of about 40 compared to Serial implementation.
NASA Astrophysics Data System (ADS)
Tramm, John R.; Gunow, Geoffrey; He, Tim; Smith, Kord S.; Forget, Benoit; Siegel, Andrew R.
2016-05-01
In this study we present and analyze a formulation of the 3D Method of Characteristics (MOC) technique applied to the simulation of full core nuclear reactors. Key features of the algorithm include a task-based parallelism model that allows independent MOC tracks to be assigned to threads dynamically, ensuring load balancing, and a wide vectorizable inner loop that takes advantage of modern SIMD computer architectures. The algorithm is implemented in a set of highly optimized proxy applications in order to investigate its performance characteristics on CPU, GPU, and Intel Xeon Phi architectures. Speed, power, and hardware cost efficiencies are compared. Additionally, performance bottlenecks are identified for each architecture in order to determine the prospects for continued scalability of the algorithm on next generation HPC architectures.
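A toy illustration of the task-based model described above: independent tracks are pulled from a shared pool by worker threads dynamically, so a few long tracks cannot leave other workers idle. Here sweep_track is a hypothetical stand-in for the transport sweep along one characteristic, not the proxy-application code:

```python
import math
from concurrent.futures import ThreadPoolExecutor

def sweep_track(track):
    """Attenuate a unit angular flux along the track's (sigma_t, length)
    segments (illustrative physics only)."""
    flux = 1.0
    for sigma_t, length in track:
        flux *= math.exp(-sigma_t * length)
    return flux

# Tracks with very uneven work, the case dynamic assignment handles well.
tracks = [[(0.5, 0.1)] * n for n in (10, 1000, 50, 5)]
with ThreadPoolExecutor(max_workers=4) as pool:
    fluxes = list(pool.map(sweep_track, tracks))  # tasks dispatched dynamically
print(fluxes)
```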
MaMiCo: Software design for parallel molecular-continuum flow simulations
NASA Astrophysics Data System (ADS)
Neumann, Philipp; Flohr, Hanno; Arora, Rahul; Jarmatz, Piet; Tchipev, Nikola; Bungartz, Hans-Joachim
2016-03-01
The macro-micro-coupling tool (MaMiCo) was developed to ease the development of, and modularize, molecular-continuum simulations while retaining sequential and parallel performance. We demonstrate the functionality and performance of MaMiCo by coupling the spatially adaptive Lattice Boltzmann framework waLBerla with four molecular dynamics (MD) codes: the light-weight Lennard-Jones-based implementation SimpleMD, the node-level optimized software ls1 mardyn, and the community codes ESPResSo and LAMMPS. We detail the interface implementations required to connect each solver with MaMiCo. The coupling for each waLBerla-MD setup is validated in three-dimensional channel flow simulations which are solved by means of a state-based coupling method. We provide sequential and strong scaling measurements for the four molecular-continuum simulations. The overhead of MaMiCo amounts to 10%-20% of the total (MD) runtime. The measurements further show that scalability of the hybrid simulations is reached on up to 500 Intel SandyBridge cores and on more than 1000 AMD Bulldozer compute cores.
Gorshkov, Anton V; Kirillin, Mikhail Yu
2015-08-01
Over two decades, the Monte Carlo technique has become a gold standard in the simulation of light propagation in turbid media, including biotissues. Technological solutions provide further advances of this technique. The Intel Xeon Phi coprocessor is a new type of accelerator for highly parallel general-purpose computing, which allows execution of a wide range of applications without substantial code modification. We present a technical approach for porting our previously developed Monte Carlo (MC) code for simulation of light transport in tissues to the Intel Xeon Phi coprocessor. We show that employing the accelerator reduces the computational time of MC simulations, with a speed-up comparable to that of a GPU. We demonstrate the performance of the developed code for simulation of light transport in the human head and determination of the measurement volume in near-infrared spectroscopy brain sensing.
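The core of such an MC code is small, which is what makes it amenable to wide accelerators. A minimal photon-packet kernel (isotropic rescattering for brevity; real tissue codes sample the Henyey-Greenstein phase function and add boundaries and layered optics):

```python
import numpy as np

rng = np.random.default_rng(42)

def propagate_photon(mu_a, mu_s, w_min=1e-4):
    """Track one photon packet: sample a free path from the Beer-Lambert law,
    move, deposit the absorbed fraction of the weight, rescatter."""
    mu_t = mu_a + mu_s
    pos, direction, weight = np.zeros(3), np.array([0.0, 0.0, 1.0]), 1.0
    absorbed = 0.0
    while weight > w_min:
        step = -np.log(rng.random()) / mu_t   # sampled path length
        pos += step * direction
        dw = weight * mu_a / mu_t             # fraction absorbed at this event
        absorbed += dw
        weight -= dw
        v = rng.normal(size=3)                # isotropic new direction
        direction = v / np.linalg.norm(v)
    return absorbed

print(propagate_photon(mu_a=0.1, mu_s=10.0))
```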
Hofmann, Bjørn
2009-07-23
It is important to demonstrate the learning outcomes of simulation in technology-based practices, such as advanced health care. Although many studies show skills improvement and self-reported change to practice, there are few studies demonstrating patient outcomes and societal efficiency. The objective of this study is to investigate if and why simulation can be effective and efficient in a hi-tech health care setting. This is important in order to decide whether and how to design simulation scenarios and outcome studies. Core theoretical insights from Science and Technology Studies (STS) are applied to analyze the field of simulation in hi-tech health care education. In particular, a process-oriented framework in which technology is characterized by its devices, methods and organizational setting is applied. The analysis shows how advanced simulation can address core characteristics of technology beyond knowledge of a technology's functions. Simulation's ability to address skilful device handling as well as the purposive aspects of technology provides a potential for effective and efficient learning. However, as technology is also constituted by organizational aspects, such as technology status, disease status, and resource constraints, the success of simulation depends on whether these aspects can be integrated into the simulation setting as well. This represents a challenge for the future development of simulation and for demonstrating its effectiveness and efficiency. Assessing the outcome of simulation in education in hi-tech health care settings is worthwhile if core characteristics of medical technology are addressed. This challenges the traditional technical versus non-technical divide in simulation, as organizational aspects appear to be part of technology's core characteristics.
Integration of Weather Avoidance and Traffic Separation
NASA Technical Reports Server (NTRS)
Consiglio, Maria C.; Chamberlain, James P.; Wilson, Sara R.
2011-01-01
This paper describes a dynamic convective weather avoidance concept that compensates for weather motion uncertainties; the integration of this weather avoidance concept into a prototype 4-D trajectory-based Airborne Separation Assurance System (ASAS) application; and test results from a batch (non-piloted) simulation of the integrated application with high traffic densities and a dynamic convective weather model. The weather model can simulate a number of pseudo-random hazardous weather patterns, such as slow- or fast-moving cells and opening or closing weather gaps, and also allows for modeling of onboard weather radar limitations in range and azimuth. The weather avoidance concept employs nested "core" and "avoid" polygons around convective weather cells, and the simulations assess the effectiveness of various avoid polygon sizes in the presence of different weather patterns, using traffic scenarios representing approximately twice the current traffic density in en-route airspace. Results from the simulation experiment show that the weather avoidance concept is effective over a wide range of weather patterns and cell speeds. Avoid polygons that are only 2-3 miles larger than their core polygons are sufficient to account for weather uncertainties in almost all cases, and traffic separation performance does not appear to degrade with the addition of weather polygon avoidance. Additional "lessons learned" from the batch simulation study are discussed in the paper, along with insights for improving the weather avoidance concept.
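The nested-polygon construction lends itself to a simple geometric sketch: buffer the core weather polygon outward by the chosen margin to obtain the avoid polygon, then test trajectory points against it. A hedged illustration using the shapely library (arbitrary coordinates, 3-mile margin):

```python
from shapely.geometry import Polygon, Point

# "Core" polygon around a convective cell (coordinates are arbitrary);
# the "avoid" polygon grows it outward by a 3-mile uncertainty margin.
core = Polygon([(0, 0), (10, 0), (12, 6), (5, 10), (-2, 5)])
avoid = core.buffer(3.0)

def conflicts_with_weather(waypoint, avoid_polygon):
    """True if a trajectory waypoint penetrates the avoid region."""
    return avoid_polygon.contains(Point(waypoint))

print(conflicts_with_weather((11, 7), avoid))   # near the core edge -> True
print(conflicts_with_weather((30, 30), avoid))  # well clear -> False
```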
Using gaming simulation to evaluate bioterrorism and emergency readiness training.
Olson, Debra K; Scheller, Amy; Wey, Andrew
2014-01-01
The University of Minnesota: Simulations, Exercises and Effective Education: Preparedness and Emergency Response Learning Center uses simulations, which allow trainees to participate in realistic scenarios, to develop and evaluate competency. In a previous study, participants in Disaster in Franklin County: A Public Health Simulation demonstrated that prior bioterrorism and emergency readiness (BT/ER) training is significantly associated with better performance in a simulated emergency. We conducted a second analysis with a larger data set, remapping simulation questions to the Public Health Preparedness and Response Core Competency Model, Version 1.0. We performed an outcome evaluation of the impact of public health preparedness training. In particular, we compared individuals with significant BT/ER training to individuals without such training on the basis of performance in a simulated emergency. We grouped participants as group 1 (≥45 hours of BT/ER training) and group 2 (<45 hours). Dependent variables included the effectiveness of chosen responses within the gaming simulation, measured as the proportion of questions answered correctly by each participant. The relationship of effectiveness to significant BT/ER training was estimated using either multiple linear or logistic regression. For overall effectiveness, group 1 had 2% more correct decisions, on average, than group 2 (P < .001). Group 1 performed significantly better, on average, than group 2 for competency 1.1 (P = .001) and competency 2.3 (P < .001). However, group 1 was significantly worse than group 2 on competency 1.2. Results indicate that prior training is significantly associated with better performance in a simulated emergency using gaming technology. Effectiveness differed by competency, indicating that more training may be needed in certain competency areas. Next steps for enhancing the usefulness of simulations in training should go beyond asking whether the learner learned and include questions about the organizational factors that contribute to simulation effectiveness and the attributes of the simulation that encourage competency and capacity building.
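A sketch of the kind of analysis described, on synthetic data with illustrative variable names: each participant's correct answers out of the total are regressed on a training-group indicator with a binomial GLM, the logistic-regression route mentioned above:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
group1 = rng.integers(0, 2, n)            # 1 = >=45 h of BT/ER training
p_correct = 0.70 + 0.02 * group1          # group 1 answers ~2% more correctly
correct = rng.binomial(20, p_correct)     # 20 simulation questions each

# Binomial GLM on (successes, failures) counts; the group coefficient
# estimates the training effect on the log-odds of a correct decision.
X = sm.add_constant(group1)
model = sm.GLM(np.column_stack([correct, 20 - correct]), X,
               family=sm.families.Binomial()).fit()
print(model.summary())
```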
Wang, Haiyang; Yan, Xin; Li, Shuguang; An, Guowen; Zhang, Xuenan
2016-10-08
A refractive index sensor based on a dual-core photonic crystal fiber (PCF) with a hexagonal lattice is proposed. The effects of the geometrical parameters of the PCF on the performance of the sensor are investigated using the finite element method (FEM). The two fiber cores are separated by two air holes filled with the analyte, whose refractive index is in the range of 1.33-1.41. Numerical simulation results show that the sensitivity reaches up to 22,983 nm/RIU (refractive index unit) when the analyte refractive index is 1.41, and 21,679 nm/RIU at its lowest, when the analyte refractive index is 1.33. The proposed sensor has significant advantages in the field of biomolecule detection, as it provides a wide detection range with high sensitivity.
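The quoted figures follow the usual wavelength-interrogation definition, S = Δλ_peak/Δn in nm/RIU. A two-line sketch with illustrative numbers:

```python
def sensitivity_nm_per_riu(lam1_nm, lam2_nm, n1, n2):
    """Wavelength-interrogated sensitivity: resonance shift per unit
    change of analyte refractive index (nm/RIU)."""
    return (lam2_nm - lam1_nm) / (n2 - n1)

# Illustrative: a 2.2 nm peak shift for a 0.0001 RIU change -> 22,000 nm/RIU.
print(sensitivity_nm_per_riu(1550.0, 1552.2, 1.4100, 1.4101))
```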
Development of As-Se tapered suspended-core fibers for ultra-broadband mid-IR wavelength conversion
NASA Astrophysics Data System (ADS)
Anashkina, E. A.; Shiryaev, V. S.; Koptev, M. Y.; Stepanov, B. S.; Muravyev, S. V.
2018-01-01
We designed and developed tapered suspended-core fibers of high-purity As39Se61 glass for supercontinuum generation in the mid-IR with a standard fiber laser pump source at 2 μm. It was shown that microstructuring allows shifting the zero-dispersion wavelength below 2 μm in the fiber waist with a core diameter of about 1 μm. In this case, supercontinuum generation in the 1-10 μm range was obtained numerically with 150-fs, 100-pJ pump pulses at 2 μm. We also performed experiments on wavelength conversion of ultrashort optical pulses at 1.57 μm from an Er:fiber laser system in the manufactured As-Se tapered fibers. The measured broadened spectra were in good agreement with those simulated numerically.
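Supercontinuum modelling of this kind is usually done with a generalized nonlinear Schrödinger equation integrated by the split-step Fourier method. Below is a minimal sketch for the basic NLSE only (second-order dispersion plus Kerr nonlinearity, illustrative parameters); the authors' model would additionally include higher-order dispersion, Raman response, self-steepening and the z-dependent taper profile:

```python
import numpy as np

# Split-step Fourier integration of dA/dz = -i*beta2/2 d2A/dt2 + i*gamma|A|^2 A.
nt, t_win = 2**12, 10e-12                  # grid points, time window (s)
t = np.linspace(-t_win/2, t_win/2, nt, endpoint=False)
omega = 2*np.pi*np.fft.fftfreq(nt, t[1] - t[0])
beta2, gamma = -1e-26, 1.0                 # s^2/m (anomalous), 1/(W m); illustrative
dz, nz = 1e-3, 500                         # step size (m), number of steps

A = np.sqrt(100.0)/np.cosh(t/150e-15)      # 100 W peak, 150 fs sech pulse
half_disp = np.exp(0.5j*(beta2/2)*omega**2*dz)   # half-step dispersion operator
for _ in range(nz):
    A = np.fft.ifft(half_disp*np.fft.fft(A))     # dispersion, half step
    A *= np.exp(1j*gamma*np.abs(A)**2*dz)        # Kerr nonlinearity, full step
    A = np.fft.ifft(half_disp*np.fft.fft(A))     # dispersion, half step
spectrum = np.abs(np.fft.fftshift(np.fft.fft(A)))**2  # broadened output spectrum
```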