Parallelized direct execution simulation of message-passing parallel programs
NASA Technical Reports Server (NTRS)
Dickens, Phillip M.; Heidelberger, Philip; Nicol, David M.
1994-01-01
As massively parallel computers proliferate, there is growing interest in finding ways by which the performance of massively parallel codes can be efficiently predicted. This problem arises in diverse contexts such as parallelizing compilers, parallel performance monitoring, and parallel algorithm development. In this paper we describe one solution in which the application code is executed directly, while a discrete-event simulator models details of the presumed parallel machine, such as operating system and communication network behavior. Because this approach is computationally expensive, we are interested in its parallelization, specifically the parallelization of the discrete-event simulator. We describe methods suitable for parallelized direct execution simulation of message-passing parallel programs, and report on the performance of one such system, the Large Application Parallel Simulation Environment (LAPSE), which we have built on the Intel Paragon. On all codes measured to date, LAPSE predicts performance well, typically within 10 percent relative error. Depending on the nature of the application code, we have observed low slowdowns (relative to natively executing code) and high relative speedups using up to 64 processors.
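The core mechanism the abstract describes, direct execution of the application combined with a discrete-event model of the machine, can be sketched as follows. This is a minimal illustrative sketch, not LAPSE's actual interface; the fixed network latency, function name, and event format are all assumptions made for the example:

```python
import heapq

def simulate(sends, latency=5.0):
    """Toy discrete-event model of message delivery.

    sends: list of (send_time, src, dst) tuples recorded from the
    directly executed application. A stand-in machine model (here a
    fixed per-message latency) converts each send into a delivery
    event; a priority queue processes events in timestamp order, as
    a discrete-event simulator would.
    """
    queue = [(t + latency, src, dst) for (t, src, dst) in sends]
    heapq.heapify(queue)
    deliveries = []
    while queue:
        event = heapq.heappop(queue)  # earliest delivery first
        deliveries.append(event)
    return deliveries
```

A real system replaces the fixed latency with detailed operating system and network models, and parallelizes the event processing itself.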
On the suitability of the connection machine for direct particle simulation
NASA Technical Reports Server (NTRS)
Dagum, Leonard
1990-01-01
The algorithmic structure of the vectorizable Stanford particle simulation (SPS) method is examined and reformulated in data parallel form. Some of the SPS algorithms can be directly translated to data parallel form, but several of the vectorizable algorithms have no direct data parallel equivalent, requiring the development of new, strictly data parallel algorithms. In particular, a new sorting algorithm is developed to identify collision candidates in the simulation, and a master/slave algorithm is developed to minimize communication cost in large table look-ups. The method is validated through test calculations for thermal relaxation of a gas, shock wave profiles, and shock reflection from a stationary wall. A qualitative measure of the performance of the Connection Machine for direct particle simulation is provided. The massively parallel architecture of the Connection Machine is found quite suitable for this type of calculation. However, there are difficulties in taking full advantage of this architecture because of the lack of a broad based tradition of data parallel programming. An important outcome of this work has been new data parallel algorithms that are specifically of use for direct particle simulation but that also expand the data parallel vocabulary.
A direct-execution parallel architecture for the Advanced Continuous Simulation Language (ACSL)
NASA Technical Reports Server (NTRS)
Carroll, Chester C.; Owen, Jeffrey E.
1988-01-01
A direct-execution parallel architecture for the Advanced Continuous Simulation Language (ACSL) is presented which overcomes the traditional disadvantages of simulations executed on a digital computer. The incorporation of parallel processing allows simulations to be mapped onto a digital computer in the same inherently parallel manner as they are currently mapped onto an analog computer. The direct-execution format maximizes the efficiency of the executed code, since the need for a high level language compiler is eliminated. Resolution is greatly increased over that which is available with an analog computer, without the sacrifice in execution speed normally expected with digital computer simulations. Although this report covers all aspects of the new architecture, key emphasis is placed on the processing element configuration and the microprogramming of the ACSL constructs. The execution times for all ACSL constructs are computed using a model of a processing element based on the AMD 29000 CPU and the AMD 29027 FPU. The increase in execution speed provided by parallel processing is exemplified by comparing the derived execution times of two ACSL programs with the execution times for the same programs executed on a similar sequential architecture.
Parallelization of sequential Gaussian, indicator and direct simulation algorithms
NASA Astrophysics Data System (ADS)
Nunes, Ruben; Almeida, José A.
2010-08-01
Improving the performance and robustness of algorithms on new high-performance parallel computing architectures is a key issue in efficiently performing 2D and 3D studies with large amounts of data. In geostatistics, sequential simulation algorithms are good candidates for parallelization. When compared with other computational applications in geosciences (such as fluid flow simulators), sequential simulation software is not extremely computationally intensive, but parallelization can make it more efficient and creates alternatives for its integration in inverse modelling approaches. This paper describes the implementation and benchmarking of parallel versions of the three classic sequential simulation algorithms: direct sequential simulation (DSS), sequential indicator simulation (SIS) and sequential Gaussian simulation (SGS). For this purpose, GSLIB was used as the source, but the entire code was extensively modified to accommodate the parallelization approach and was rewritten in the C programming language. The paper also explains the parallelization strategy and the main modifications in detail. Regarding the integration of secondary information, the DSS algorithm is able to perform simple kriging with local means, kriging with an external drift and collocated cokriging with both local and global correlations. SIS includes a local correction of probabilities. Finally, a brief comparison of simulation results using one, two and four processors is presented. All performance tests were carried out on 2D soil data samples. The source code is completely open source and easy to read. It should be noted that the code is only fully compatible with Microsoft Visual C and should be adapted for other systems/compilers.
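The sequential simulation family the abstract refers to shares one structure: visit grid nodes along a random path and draw each value from a distribution conditioned on data and previously simulated nodes. A heavily simplified 1-D sketch of sequential Gaussian simulation is shown below; the single-neighbor conditioning and the exponential correlation model are assumptions made for brevity, where real codes (such as the modified GSLIB described above) solve a full kriging system over a search neighborhood:

```python
import math
import random

def sgs_1d(n, rng, corr=0.5, sill=1.0):
    """Toy 1-D sequential Gaussian simulation.

    Visits n nodes along a random path. Each draw is conditioned on
    the nearest already-simulated node using a simple-kriging weight
    w = corr**lag; the conditional variance shrinks accordingly.
    """
    path = list(range(n))
    rng.shuffle(path)                  # random visiting order
    values = {}
    for node in path:
        done = sorted(values)
        if not done:
            mean, var = 0.0, sill      # unconditional draw
        else:
            nearest = min(done, key=lambda j: abs(j - node))
            w = corr ** abs(nearest - node)   # kriging weight
            mean = w * values[nearest]
            var = sill * (1.0 - w * w)        # conditional variance
        values[node] = rng.gauss(mean, math.sqrt(var))
    return [values[i] for i in range(n)]
```

The node-by-node data dependence along the random path is exactly what makes parallelizing these algorithms nontrivial.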
Parallel spatial direct numerical simulations on the Intel iPSC/860 hypercube
NASA Technical Reports Server (NTRS)
Joslin, Ronald D.; Zubair, Mohammad
1993-01-01
The implementation and performance of a parallel spatial direct numerical simulation (PSDNS) approach on the Intel iPSC/860 hypercube is documented. The direct numerical simulation approach is used to compute spatially evolving disturbances associated with the laminar-to-turbulent transition in boundary-layer flows. The feasibility of using the PSDNS on the hypercube to perform transition studies is examined. The results indicate that the direct numerical simulation approach can effectively be parallelized on a distributed-memory parallel machine. By increasing the number of processors, nearly ideal linear speedups are achieved with nonoptimized routines; slower than linear speedups are achieved with optimized (machine-dependent library) routines. This slower than linear speedup results because the fast Fourier transform (FFT) routine dominates the computational cost and itself achieves less than ideal speedups. However, with the machine-dependent routines, the total computational cost decreases by a factor of 4 to 5 compared with standard FORTRAN routines. The computational cost increases linearly with spanwise, wall-normal, and streamwise grid refinements. The hypercube with 32 processors was estimated to require approximately twice the amount of Cray supercomputer single-processor time to complete a comparable simulation; however, it is estimated that a subgrid-scale model, which reduces the required number of grid points and turns the computation into a large-eddy simulation (PSLES), would reduce the computational cost and memory requirements by a factor of 10 over the PSDNS. This PSLES implementation would enable transition simulations on the hypercube at a reasonable computational cost.
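The sub-linear speedup attributed to the dominant FFT routine is the standard Amdahl-style argument, which can be made concrete with a small calculation. The fractions and speedup values below are illustrative, not measurements from the paper:

```python
def overall_speedup(frac_fft, fft_speedup, p):
    """Overall speedup when one phase (the FFT) scales imperfectly.

    frac_fft:    fraction of serial runtime spent in the FFT
    fft_speedup: speedup the FFT phase achieves on p processors
    The remaining (1 - frac_fft) of the work is assumed to scale
    ideally on p processors.
    """
    return 1.0 / (frac_fft / fft_speedup + (1.0 - frac_fft) / p)

# Example: if 60% of the time is FFT and the FFT only speeds up
# 16x on 32 processors, the code as a whole speeds up 20x, not 32x.
```

This is why a single poorly scaling kernel caps the speedup of the whole simulation even when every other routine parallelizes ideally.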
Influence of equilibrium shear flow in the parallel magnetic direction on edge localized mode crash
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, Y.; Xiong, Y. Y.; Chen, S. Y., E-mail: sychen531@163.com
2016-04-15
The influence of the parallel shear flow on the evolution of peeling-ballooning (P-B) modes is studied with the BOUT++ four-field code in this paper. The parallel shear flow has different effects in linear and nonlinear simulations. In the linear simulations, the growth rate of the edge localized mode (ELM) can be increased by the Kelvin-Helmholtz term, which can be caused by the parallel shear flow. In the nonlinear simulations, the results accord with the linear simulations in the linear phase. However, the ELM size is reduced by the parallel shear flow at the beginning of the turbulence phase, which is recognized as the P-B filaments' structure. Then, during the turbulence phase, the ELM size is decreased by the shear flow.
Efficient Parallel Algorithm For Direct Numerical Simulation of Turbulent Flows
NASA Technical Reports Server (NTRS)
Moitra, Stuti; Gatski, Thomas B.
1997-01-01
A distributed algorithm for a high-order-accurate finite-difference approach to the direct numerical simulation (DNS) of transition and turbulence in compressible flows is described. This work has two major objectives. The first objective is to demonstrate that parallel and distributed-memory machines can be successfully and efficiently used to solve computationally intensive and input/output intensive algorithms of the DNS class. The second objective is to show that the computational complexity involved in solving the tridiagonal systems inherent in the DNS algorithm can be reduced by algorithm innovations that obviate the need to use a parallelized tridiagonal solver.
An intelligent processing environment for real-time simulation
NASA Technical Reports Server (NTRS)
Carroll, Chester C.; Wells, Buren Earl, Jr.
1988-01-01
The development of a highly efficient and thus truly intelligent processing environment for real-time general purpose simulation of continuous systems is described. Such an environment can be created by mapping the simulation process directly onto the University of Alabama's OPERA architecture. To facilitate this effort, the field of continuous simulation is explored, highlighting areas in which efficiency can be improved. Areas in which parallel processing can be applied are also identified, and several general OPERA-type hardware configurations that support improved simulation are investigated. Three direct execution parallel processing environments are introduced, each of which greatly improves efficiency by exploiting distinct areas of the simulation process. These suggested environments are candidate architectures around which a highly intelligent real-time simulation configuration can be developed.
Constitutive Model Calibration via Autonomous Multiaxial Experimentation (Postprint)
2016-09-17
test machine. Experimental data is reduced and finite element simulations are conducted in parallel with the test based on experimental strain conditions. Optimization methods... be used directly in finite element simulations of more complex geometries. Keywords: Axial/torsional experimentation • Plasticity • Constitutive model
Scalability of Parallel Spatial Direct Numerical Simulations on Intel Hypercube and IBM SP1 and SP2
NASA Technical Reports Server (NTRS)
Joslin, Ronald D.; Hanebutte, Ulf R.; Zubair, Mohammad
1995-01-01
The implementation and performance of a parallel spatial direct numerical simulation (PSDNS) approach on the Intel iPSC/860 hypercube and IBM SP1 and SP2 parallel computers is documented. Spatially evolving disturbances associated with the laminar-to-turbulent transition in boundary-layer flows are computed with the PSDNS code. The feasibility of using the PSDNS to perform transition studies on these computers is examined. The results indicate that the PSDNS approach can effectively be parallelized on a distributed-memory parallel machine by remapping the distributed data structure during the course of the calculation. Scalability information is provided so that computational costs can be estimated as the number of grid points changes; these estimates match the actual costs. By increasing the number of processors, slower than linear speedups are achieved with optimized (machine-dependent library) routines. This slower than linear speedup results because the computational cost is dominated by the FFT routine, which yields less than ideal speedups. By using appropriate compile options and optimized library routines on the SP1, the serial code achieves 52-56 Mflops on a single node of the SP1 (45 percent of theoretical peak performance). The actual performance of the PSDNS code on the SP1 is evaluated with a "real world" simulation that consists of 1.7 million grid points. One time step of this simulation is calculated on eight nodes of the SP1 in the same time as required by a Cray Y/MP supercomputer. For the same simulation, 32 nodes of the SP1 and SP2 are required to reach the performance of a Cray C-90. A 32-node SP1 (SP2) configuration is 2.9 (4.6) times faster than a Cray Y/MP for this simulation, while the hypercube is roughly 2 times slower than the Y/MP for this application. KEY WORDS: Spatial direct numerical simulations; incompressible viscous flows; spectral methods; finite differences; parallel computing.
NASA Technical Reports Server (NTRS)
Campbell, David; Wysong, Ingrid; Kaplan, Carolyn; Mott, David; Wadsworth, Dean; VanGilder, Douglas
2000-01-01
An AFRL/NRL team has recently been selected to develop a scalable, parallel, reacting, multidimensional (SUPREM) Direct Simulation Monte Carlo (DSMC) code for the DoD user community under the High Performance Computing Modernization Office (HPCMO) Common High Performance Computing Software Support Initiative (CHSSI). This paper will introduce the JANNAF Exhaust Plume community to this three-year development effort and present the overall goals, schedule, and current status of this new code.
Wave Turning and Flow Angle in the E-Region Ionosphere
NASA Astrophysics Data System (ADS)
Young, M.; Oppenheim, M. M.; Dimant, Y. S.
2016-12-01
This work presents results of particle-in-cell (PIC) simulations of Farley-Buneman (FB) turbulence at various altitudes in the high-latitude E-region ionosphere. In that region, the FB instability regularly produces meter-scale plasma irregularities. VHF radars observe coherent echoes via Bragg scatter from wave fronts parallel or anti-parallel to the radar line of sight (LoS) but do not necessarily measure the mean direction of wave propagation. Haldoupis (1984) conducted a study of diffuse radar aurora and found that the spectral width of back-scattered power depends critically on the angle between the radar LoS and the true flow direction, called the flow angle. Knowledge of the flow angle will allow researchers to better interpret observations of coherent back-scatter. Experiments designed to observe meter-scale irregularities in the E-region ionosphere created by the FB instability typically assume that the predominant flow direction is the E×B direction. However, linear theory of Dimant and Oppenheim (2004) showed that FB waves should turn away from E×B, and particle-in-cell simulations by Oppenheim and Dimant (2013) support the theory. The present study comprises a quantitative analysis of back-scattered power, flow velocity, and spectral width as functions of the flow angle. It also demonstrates that the mean direction of meter-scale wave propagation may differ from the E×B direction by tens of degrees. The analysis includes 2-D and 3-D simulations at a range of altitudes in the auroral ionosphere. Comparison between 2-D and 3-D simulations illustrates the relative importance to the irregularity spectrum of a small but finite component in the direction parallel to B. Previous work has shown this small parallel component to be important to turbulent electron heating and nonlinear transport.
NASA Technical Reports Server (NTRS)
Morgan, Philip E.
2004-01-01
This final report contains reports of research related to the tasks "Scalable High Performance Computing: Direct and Large-Eddy Turbulent Flow Simulations Using Massively Parallel Computers" and "Develop High-Performance Time-Domain Computational Electromagnetics Capability for RCS Prediction, Wave Propagation in Dispersive Media, and Dual-Use Applications." The discussion of Scalable High Performance Computing reports on three objectives: validate, assess the scalability of, and apply two parallel flow solvers for three-dimensional Navier-Stokes flows; develop and validate a high-order parallel solver for direct numerical simulation (DNS) and large eddy simulation (LES) problems; and investigate and develop a high-order Reynolds-averaged Navier-Stokes turbulence model. The discussion of High-Performance Time-Domain Computational Electromagnetics reports on five objectives: enhancement of an electromagnetics code (CHARGE) to effectively model antenna problems; application of lessons learned in the high-order/spectral solution of swirling 3D jets to the electromagnetics project; transition of a high-order fluids code, FDL3DI, to solving Maxwell's equations using compact differencing; development and demonstration of improved radiation absorbing boundary conditions for high-order CEM; and extension of the high-order CEM solver to address variable material properties. The report also contains a review of work done by the systems engineer.
A three-dimensional spectral algorithm for simulations of transition and turbulence
NASA Technical Reports Server (NTRS)
Zang, T. A.; Hussaini, M. Y.
1985-01-01
A spectral algorithm for simulating three-dimensional, incompressible, parallel shear flows is described. It applies to the channel, to the parallel boundary layer, and to other shear flows with one wall-bounded and two periodic directions. Representative applications to the channel and to the heated boundary layer are presented.
NASA Technical Reports Server (NTRS)
Hanebutte, Ulf R.; Joslin, Ronald D.; Zubair, Mohammad
1994-01-01
The implementation and the performance of a parallel spatial direct numerical simulation (PSDNS) code are reported for the IBM SP1 supercomputer. The spatially evolving disturbances that are associated with the laminar-to-turbulent transition in three-dimensional boundary-layer flows are computed with the PSDNS code. By remapping the distributed data structure during the course of the calculation, optimized serial library routines can be utilized that substantially increase the computational performance. Although the remapping incurs a high communication penalty, the parallel efficiency of the code remains above 40% for all performed calculations. By using appropriate compile options and optimized library routines, the serial code achieves 52-56 Mflops on a single node of the SP1 (45% of theoretical peak performance). The actual performance of the PSDNS code on the SP1 is evaluated with a 'real world' simulation that consists of 1.7 million grid points. One time step of this simulation is calculated on eight nodes of the SP1 in the same time as required by a Cray Y/MP for the same simulation. The scalability information provides estimated computational costs that match the actual costs relative to changes in the number of grid points.
NASA Astrophysics Data System (ADS)
Kum, Oyeon; Dickson, Brad M.; Stuart, Steven J.; Uberuaga, Blas P.; Voter, Arthur F.
2004-11-01
Parallel replica dynamics simulation methods appropriate for the simulation of chemical reactions in molecular systems with many conformational degrees of freedom have been developed and applied to study the microsecond-scale pyrolysis of n-hexadecane in the temperature range of 2100-2500 K. The algorithm uses a transition detection scheme that is based on molecular topology, rather than energetic basins. This algorithm allows efficient parallelization of small systems even when using more processors than particles (in contrast to more traditional parallelization algorithms), and even when there are frequent conformational transitions (in contrast to previous implementations of the parallel replica algorithm). The parallel efficiency for pyrolysis initiation reactions was over 90% on 61 processors for this 50-atom system. The parallel replica dynamics technique results in reaction probabilities that are statistically indistinguishable from those obtained from direct molecular dynamics, under conditions where both are feasible, but allows simulations at temperatures as much as 1000 K lower than direct molecular dynamics simulations. The rate of initiation displayed Arrhenius behavior over the entire temperature range, with an activation energy and frequency factor of Ea = 79.7 kcal/mol and log10(A/s^-1) = 14.8, respectively, in reasonable agreement with experiment and empirical kinetic models. Several interesting unimolecular reaction mechanisms were observed in simulations of the chain propagation reactions above 2000 K, which are not included in most coarse-grained kinetic models. More studies are needed in order to determine whether these mechanisms are experimentally relevant, or specific to the potential energy surface used.
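The quoted Arrhenius fit can be evaluated directly to see why these pyrolysis initiation events sit on the microsecond scale at the simulated temperatures. A minimal sketch using the parameters reported above (the function name and unit choices are the example's own):

```python
import math

R_KCAL = 1.987e-3  # gas constant in kcal/(mol*K)

def arrhenius(temp_k, ea=79.7, log10_a=14.8):
    """Initiation rate k = A * exp(-Ea / (R*T)) using the fitted
    parameters quoted in the abstract (Ea in kcal/mol, A in 1/s)."""
    return 10.0 ** log10_a * math.exp(-ea / (R_KCAL * temp_k))

# At 2100 K this gives a rate of a few 1e6 per second, i.e. a mean
# initiation time well under a microsecond per molecule, consistent
# with "microsecond-scale pyrolysis" at the low end of the range.
```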
ERIC Educational Resources Information Center
Gil, Arturo; Peidró, Adrián; Reinoso, Óscar; Marín, José María
2017-01-01
This paper presents a tool, LABEL, oriented to the teaching of parallel robotics. The application, organized as a set of tools developed using Easy Java Simulations, enables the study of the kinematics of parallel robotics. A set of classical parallel structures was implemented such that LABEL can solve the inverse and direct kinematic problem of…
STOCHSIMGPU: parallel stochastic simulation for the Systems Biology Toolbox 2 for MATLAB.
Klingbeil, Guido; Erban, Radek; Giles, Mike; Maini, Philip K
2011-04-15
The importance of stochasticity in biological systems is becoming increasingly recognized and the computational cost of biologically realistic stochastic simulations urgently requires development of efficient software. We present a new software tool STOCHSIMGPU that exploits graphics processing units (GPUs) for parallel stochastic simulations of biological/chemical reaction systems and show that significant gains in efficiency can be made. It is integrated into MATLAB and works with the Systems Biology Toolbox 2 (SBTOOLBOX2) for MATLAB. The GPU-based parallel implementation of the Gillespie stochastic simulation algorithm (SSA), the logarithmic direct method (LDM) and the next reaction method (NRM) is approximately 85 times faster than the sequential implementation of the NRM on a central processing unit (CPU). Using our software does not require any changes to the user's models, since it acts as a direct replacement of the stochastic simulation software of the SBTOOLBOX2. The software is open source under the GPL v3 and available at http://www.maths.ox.ac.uk/cmb/STOCHSIMGPU. The web site also contains supplementary information. klingbeil@maths.ox.ac.uk Supplementary data are available at Bioinformatics online.
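The serial algorithm that STOCHSIMGPU parallelizes across GPU realizations is Gillespie's direct method, which the abstract names alongside its LDM and NRM variants. A self-contained sketch of the direct method follows; the reaction representation (propensity function plus state-change vector) is this example's own convention, not the SBTOOLBOX2 model format:

```python
import random

def gillespie_direct(reactions, x0, t_end, rng):
    """Gillespie's direct method (stochastic simulation algorithm).

    reactions: list of (propensity_fn, state_change) pairs, where
    propensity_fn(x) gives the reaction's propensity in state x and
    state_change is added to x when the reaction fires.
    Returns the final time and state.
    """
    t, x = 0.0, list(x0)
    while t < t_end:
        props = [fn(x) for fn, _ in reactions]
        total = sum(props)
        if total == 0.0:
            break                          # no reaction can fire
        t += rng.expovariate(total)        # exponential waiting time
        r, acc = rng.random() * total, 0.0
        for (fn, change), a in zip(reactions, props):
            acc += a
            if r < acc:                    # pick reaction proportionally
                x = [xi + ci for xi, ci in zip(x, change)]
                break
    return t, x
```

Each realization is an independent sample path, which is why running many of them in parallel on a GPU gives near-linear gains.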
Surrogates for numerical simulations; optimization of eddy-promoter heat exchangers
NASA Technical Reports Server (NTRS)
Patera, Anthony T.
1993-01-01
Although the advent of fast and inexpensive parallel computers has rendered numerous previously intractable calculations feasible, many numerical simulations remain too resource-intensive to be directly inserted in engineering optimization efforts. An attractive alternative to direct insertion considers models for computational systems: the expensive simulation is evoked only to construct and validate a simplified input-output model; this simplified input-output model then serves as a simulation surrogate in subsequent engineering optimization studies. A simple 'Bayesian-validated' statistical framework for the construction, validation, and purposive application of static computer simulation surrogates is presented. As an example, dissipation-transport optimization of laminar-flow eddy-promoter heat exchangers is considered: parallel spectral element Navier-Stokes calculations serve to construct and validate surrogates for the flowrate and Nusselt number; these surrogates then represent the originating Navier-Stokes equations in the ensuing design process.
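The construct-then-validate workflow the abstract describes can be sketched in miniature: tabulate the expensive simulation at a few design points, build a cheap input-output surrogate, then check it against fresh random evaluations. The piecewise-linear surrogate and the plain maximum-error check below are simplifying assumptions, not the paper's Bayesian validation machinery:

```python
import random

def build_surrogate(expensive, grid):
    """Tabulate the expensive simulation on a coarse 1-D grid and
    return a piecewise-linear surrogate model."""
    vals = [expensive(x) for x in grid]
    def surrogate(x):
        for (x0, y0), (x1, y1) in zip(zip(grid, vals),
                                      zip(grid[1:], vals[1:])):
            if x0 <= x <= x1:
                return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
        raise ValueError("outside surrogate range")
    return surrogate

def validate(expensive, surrogate, rng, lo, hi, n=50):
    """Validation step: maximum surrogate error observed on n random
    test points, an empirical stand-in for a statistical error bound."""
    return max(abs(expensive(x) - surrogate(x))
               for x in (rng.uniform(lo, hi) for _ in range(n)))
```

Once validated, only the cheap surrogate is called inside the optimization loop; the expensive simulation is never invoked again.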
A transient FETI methodology for large-scale parallel implicit computations in structural mechanics
NASA Technical Reports Server (NTRS)
Farhat, Charbel; Crivelli, Luis; Roux, Francois-Xavier
1992-01-01
Explicit codes are often used to simulate the nonlinear dynamics of large-scale structural systems, even for low frequency response, because the storage and CPU requirements entailed by the repeated factorizations traditionally found in implicit codes rapidly overwhelm the available computing resources. With the advent of parallel processing, this trend is accelerating because explicit schemes are also easier to parallelize than implicit ones. However, the time step restriction imposed by the Courant stability condition on all explicit schemes cannot yet -- and perhaps will never -- be offset by the speed of parallel hardware. Therefore, it is essential to develop efficient and robust alternatives to direct methods that are also amenable to massively parallel processing because implicit codes using unconditionally stable time-integration algorithms are computationally more efficient when simulating low-frequency dynamics. Here we present a domain decomposition method for implicit schemes that requires significantly less storage than factorization algorithms, that is several times faster than other popular direct and iterative methods, that can be easily implemented on both shared and local memory parallel processors, and that is both computationally and communication-wise efficient. The proposed transient domain decomposition method is an extension of the method of Finite Element Tearing and Interconnecting (FETI) developed by Farhat and Roux for the solution of static problems. Serial and parallel performance results on the CRAY Y-MP/8 and the iPSC-860/128 systems are reported and analyzed for realistic structural dynamics problems. These results establish the superiority of the FETI method over both the serial/parallel conjugate gradient algorithm with diagonal scaling and the serial/parallel direct method, and contrast the computational power of the iPSC-860/128 parallel processor with that of the CRAY Y-MP/8 system.
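One of the baselines the abstract compares FETI against is the conjugate gradient algorithm with diagonal scaling. For reference, that baseline looks like the following; this is a generic dense-matrix sketch of Jacobi-preconditioned CG, not the paper's parallel implementation:

```python
def diag_scaled_cg(A, b, tol=1e-10, max_iter=200):
    """Conjugate gradients with diagonal scaling (Jacobi
    preconditioning) for a symmetric positive definite system A x = b.
    A is a dense matrix given as a list of rows."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                                   # residual b - A*0
    z = [r[i] / A[i][i] for i in range(n)]     # apply diag(A)^{-1}
    p = z[:]
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(max_iter):
        Ap = [sum(aij * pj for aij, pj in zip(row, p)) for row in A]
        alpha = rz / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = [r[i] / A[i][i] for i in range(n)]
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x
```

FETI improves on this by decomposing the mesh into subdomains and iterating only on the interface unknowns, which is what makes it both faster and naturally parallel.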
NASA Technical Reports Server (NTRS)
Miller, R. H.; Gombosi, T. I.; Gary, S. P.; Winske, D.
1991-01-01
The direction of propagation of low frequency magnetic fluctuations generated by cometary ion pick-up is examined by means of 1D electromagnetic hybrid simulations. The newborn ions are injected at a constant rate, and the helicity and direction of propagation of magnetic fluctuations are explored for cometary ion injection angles of 0 and 90 deg relative to the solar wind magnetic field. The parameter eta represents the relative contribution of wave energy propagating in the direction away from the comet, parallel to the beam. For small (quasi-parallel) injection angles eta was found to be of order unity, while for larger (quasi-perpendicular) angles eta was found to be of order 0.5.
NASA Astrophysics Data System (ADS)
Marx, Alain; Lütjens, Hinrich
2017-03-01
A hybrid MPI/OpenMP parallel version of the XTOR-2F code [Lütjens and Luciani, J. Comput. Phys. 229 (2010) 8130] solving the two-fluid MHD equations in full tokamak geometry by means of an iterative Newton-Krylov matrix-free method has been developed. The present work shows that the code has been parallelized significantly despite the numerical profile of the problem solved by XTOR-2F, i.e. a discretization with pseudo-spectral representations in all angular directions, the stiffness of the two-fluid stability problem in tokamaks, and the use of a direct LU decomposition to invert the physical pre-conditioner at every Krylov iteration of the solver. The execution time of the parallelized version is an order of magnitude smaller than that of the sequential one for low resolution cases, with an increasing speedup as the discretization mesh is refined. Moreover, it makes it possible to perform simulations at higher resolutions, previously out of reach because of memory limitations.
Scalable direct Vlasov solver with discontinuous Galerkin method on unstructured mesh.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, J.; Ostroumov, P. N.; Mustapha, B.
2010-12-01
This paper presents the development of parallel direct Vlasov solvers using the discontinuous Galerkin (DG) method for beam and plasma simulations in four dimensions. Both physical and velocity spaces are two dimensional (2P2V) with unstructured mesh. Contrary to the standard particle-in-cell (PIC) approach for kinetic space plasma simulations, i.e., solving the Vlasov-Maxwell equations, a direct method is used in this paper. There are several benefits to solving a Vlasov equation directly, such as avoiding the noise associated with a finite number of particles and the capability to capture fine structure in the plasma. The most challenging part of a direct Vlasov solver comes from the higher dimensionality, as the computational cost increases as N^(2d), where d is the dimension of the physical space. Recently, thanks to the fast development of supercomputers, direct solution in higher dimensions has become more realistic. Many efforts were previously made to solve Vlasov equations in low dimensions; now more interest has focused on higher dimensions. Different numerical methods have been tried so far, such as the finite difference method, the Fourier spectral method, the finite volume method, and the spectral element method. This paper builds on our previous efforts to use the DG method. The DG method has proven very successful in solving Maxwell's equations, and this paper is our first effort to apply the DG method to Vlasov equations. DG has several advantages, such as a local mass matrix, strong stability, and easy parallelization, which are particularly suitable for Vlasov equations. Domain decomposition in high dimensions has been used for parallelization, including a highly scalable parallel two-dimensional Poisson solver. Benchmark results have been shown and simulation results will be reported.
NASA Astrophysics Data System (ADS)
Sun, Jicheng; Gao, Xinliang; Lu, Quanming; Chen, Lunjin; Liu, Xu; Wang, Xueyi; Tao, Xin; Wang, Shui
2017-05-01
In this paper, we use a 1-D particle-in-cell (PIC) simulation model consisting of three species, cold electrons, cold ions, and an energetic ion ring, to investigate the spectral structure of magnetosonic waves excited by ring distribution protons in the Earth's magnetosphere, and the dynamics of charged particles during the excitation of magnetosonic waves. As the wave normal angle decreases, the spectral range of the excited magnetosonic waves becomes broader, with the upper frequency limit extending beyond the lower hybrid resonant frequency, and the discrete spectra tend to merge into a continuous one. This dependence on wave normal angle is consistent with linear theory. The effects of magnetosonic waves on the background cold plasma populations also vary with wave normal angle. For exactly perpendicular magnetosonic waves (parallel wave number k|| = 0), there is no energization in the parallel direction for either background cold protons or electrons, due to the negligible fluctuating electric field component in the parallel direction. In contrast, the perpendicular energization of the background plasma is rather significant: cold protons follow unmagnetized motion, while cold electrons follow drift motion due to the wave electric fields. For magnetosonic waves with a finite k||, there exists a nonnegligible parallel fluctuating electric field, leading to a significant and rapid energization in the parallel direction for cold electrons. These cold electrons can also be efficiently energized in the perpendicular direction through interaction with the magnetosonic wave fields in the perpendicular direction. However, cold protons can only be heated in the perpendicular direction, which is likely caused by higher-order resonances with magnetosonic waves. The potential impacts of magnetosonic waves on the energization of the background cold plasmas in the Earth's inner magnetosphere are also discussed in this paper.
Formation of Electrostatic Potential Drops in the Auroral Zone
NASA Technical Reports Server (NTRS)
Schriver, D.; Ashour-Abdalla, M.; Richard, R. L.
2001-01-01
In order to examine the self-consistent formation of large-scale quasi-static parallel electric fields in the auroral zone on a micro/meso scale, a particle-in-cell simulation has been developed. The code resolves electron Debye length scales so that electron micro-processes are included, and a variable grid scheme is used such that the overall length of the simulation domain is on the order of an Earth radius along the magnetic field. The simulation is electrostatic and includes the magnetic mirror force, as well as two plasma populations: a cold, dense ionospheric plasma and a warm, tenuous magnetospheric plasma. To study the formation of parallel electric fields in the auroral zone, different magnetospheric ion and electron inflow boundary conditions are used to drive the system. It has been found that for conditions in the primary (upward) current region, an upward-directed quasi-static electric field can form across the system due to magnetic mirroring of the magnetospheric ions and electrons at different altitudes. For conditions in the return (downward) current region, it is shown that a quasi-static parallel electric field forms in the opposite sense of that in the primary current region, i.e., the parallel electric field is directed earthward. The conditions under which these different electric fields can form are discussed using satellite observations and numerical simulations.
GROMACS 4: Algorithms for Highly Efficient, Load-Balanced, and Scalable Molecular Simulation.
Hess, Berk; Kutzner, Carsten; van der Spoel, David; Lindahl, Erik
2008-03-01
Molecular simulation is an extremely useful, but computationally very expensive, tool for studies of chemical and biomolecular systems. Here, we present a new implementation of our molecular simulation toolkit GROMACS which now both achieves extremely high performance on single processors, from algorithmic optimizations and hand-coded routines, and simultaneously scales very well on parallel machines. The code encompasses a minimal-communication domain decomposition algorithm, full dynamic load balancing, a state-of-the-art parallel constraint solver, and efficient virtual site algorithms that allow removal of hydrogen atom degrees of freedom to enable integration time steps up to 5 fs for atomistic simulations, also in parallel. To improve the scaling properties of the common particle mesh Ewald electrostatics algorithm, we have in addition used a Multiple-Program, Multiple-Data approach, with separate node domains responsible for direct- and reciprocal-space interactions. Not only does this combination of algorithms enable extremely long simulations of large systems, but it also provides excellent simulation performance on quite modest numbers of standard cluster nodes.
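The Multiple-Program, Multiple-Data idea described above can be sketched as a rank-to-role assignment: most ranks handle direct-space (particle-particle) work while a dedicated subset handles reciprocal-space (PME) work. The helper below is a hypothetical illustration only; GROMACS selects its PME node count with its own heuristics, and the function name and fraction are assumptions.

```python
def assign_mpmd_roles(n_ranks, pme_fraction=0.25):
    """Split ranks into two programs: a dedicated subset does the
    reciprocal-space (PME) work, the rest do direct-space (PP) work."""
    n_pme = max(1, round(n_ranks * pme_fraction))
    return ["PME"] * n_pme + ["PP"] * (n_ranks - n_pme)

roles = assign_mpmd_roles(8)
print(roles.count("PME"), roles.count("PP"))  # 2 6
```

Dedicating ranks this way keeps the all-to-all communication of the reciprocal-space solve confined to a small group, which is the scaling benefit the abstract refers to.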
NASA Astrophysics Data System (ADS)
Vivoni, Enrique R.; Mascaro, Giuseppe; Mniszewski, Susan; Fasel, Patricia; Springer, Everett P.; Ivanov, Valeriy Y.; Bras, Rafael L.
2011-10-01
A major challenge in the use of fully-distributed hydrologic models has been the lack of computational capabilities for high-resolution, long-term simulations in large river basins. In this study, we present the parallel model implementation and real-world hydrologic assessment of the Triangulated Irregular Network (TIN)-based Real-time Integrated Basin Simulator (tRIBS). Our parallelization approach is based on the decomposition of a complex watershed using the channel network as a directed graph. The resulting sub-basin partitioning divides effort among processors and handles hydrologic exchanges across boundaries. Through numerical experiments in a set of nested basins, we quantify parallel performance relative to serial runs for a range of processors, simulation complexities and lengths, and sub-basin partitioning methods, while accounting for inter-run variability on a parallel computing system. In contrast to serial simulations, the parallel model speed-up depends on the variability of hydrologic processes. Load balancing significantly improves parallel speed-up, with proportionally faster runs as simulation complexity (domain resolution and channel network extent) increases. The best strategy for large river basins is to combine a balanced partitioning with an extended channel network, with potential savings through a lower TIN resolution. Based on these advances, a wider range of applications for fully-distributed hydrologic models is now possible. This is illustrated through a set of ensemble forecasts that account for precipitation uncertainty derived from a statistical downscaling model.
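The decomposition described above treats the channel network as a directed graph in which every reach drains to exactly one downstream neighbor. A minimal sketch of the graph traversal that underlies such sub-basin partitioning, accumulating each node's upstream contributing area in topological order, is shown below; the actual tRIBS partitioner is considerably more involved, and the function and variable names here are illustrative assumptions.

```python
from collections import defaultdict, deque

def upstream_accumulation(downstream, local_area):
    """Accumulate contributing area over a channel network.

    downstream maps each node to the node it drains into (None at the
    outlet); local_area is each node's own drainage area.  Processing
    nodes in topological order (headwaters first) yields each node's
    total upstream area, a natural weight for balancing sub-basin load.
    """
    indeg = defaultdict(int)
    for node, dst in downstream.items():
        if dst is not None:
            indeg[dst] += 1
    total = dict(local_area)
    ready = deque(n for n in downstream if indeg[n] == 0)
    while ready:
        node = ready.popleft()
        dst = downstream[node]
        if dst is not None:
            total[dst] += total[node]
            indeg[dst] -= 1
            if indeg[dst] == 0:
                ready.append(dst)
    return total

# Tiny network: headwaters a and b drain into c, which drains to outlet d.
net = {"a": "c", "b": "c", "c": "d", "d": None}
area = {"a": 1.0, "b": 2.0, "c": 1.0, "d": 0.5}
acc = upstream_accumulation(net, area)
print(acc["d"])  # 4.5: the outlet sees the whole basin
```

Cutting the graph at nodes of comparable accumulated weight yields sub-basins of similar computational effort, with hydrologic exchange needed only across the cut edges.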
Parallelizing Timed Petri Net simulations
NASA Technical Reports Server (NTRS)
Nicol, David M.
1993-01-01
The possibility of using parallel processing to accelerate the simulation of Timed Petri Nets (TPN's) was studied. It was recognized that complex system development tools often transform system descriptions into TPN's or TPN-like models, which are then simulated to obtain information about system behavior. Viewed this way, it was important that the parallelization of TPN's be as automatic as possible, to admit the possibility of the parallelization being embedded in the system design tool. Later years of the grant were devoted to examining the problem of joint performance and reliability analysis, to explore whether both types of analysis could be accomplished within a single framework. In this final report, the results of our studies are summarized. We believe that the problem of parallelizing TPN's automatically for MIMD architectures has been almost completely solved for a large and important class of problems. Our initial investigations into joint performance/reliability analysis were two-fold: it was shown that Monte Carlo simulation, with importance sampling, offers promise of joint analysis in the context of a single tool, and methods were developed for the parallel simulation of general Continuous Time Markov Chains, a model framework within which joint performance/reliability models can be cast. However, much more work is needed to determine the scope and generality of these approaches. The results obtained in our two studies, future directions for this type of work, and a list of publications are included.
Control of parallel manipulators using force feedback
NASA Technical Reports Server (NTRS)
Nanua, Prabjot
1994-01-01
Two control schemes are compared for parallel robotic mechanisms actuated by hydraulic cylinders. One scheme, the 'rate based scheme', uses only position and rate information for feedback. The second scheme, the 'force based scheme', also feeds back force information. The force control scheme is shown to improve the response over the rate control scheme. It is a simple constant-gain control scheme better suited to parallel mechanisms, and it can be easily modified to account for the dynamic forces on the end effector. This paper presents the results of a computer simulation of both the rate and force control schemes. The gains in the force based scheme can be individually adjusted in all three directions, whereas an adjustment in just one direction of the rate based scheme directly affects the other two directions.
An Investigation of the Flow Physics of Acoustic Liners by Direct Numerical Simulation
NASA Technical Reports Server (NTRS)
Watson, Willie R. (Technical Monitor); Tam, Christopher
2004-01-01
This report concentrates on the effort and status of work done on three-dimensional (3-D) simulation of a multi-hole resonator in an impedance tube. This work is coordinated with a parallel experimental effort to be carried out at the NASA Langley Research Center. The outline of this report is as follows: 1. Preliminary consideration; 2. Computation model; 3. Mesh design and parallel computing; 4. Visualization; 5. Status of computer code development.
Hierarchical parallel computer architecture defined by computational multidisciplinary mechanics
NASA Technical Reports Server (NTRS)
Padovan, Joe; Gute, Doug; Johnson, Keith
1989-01-01
The goal is to develop an architecture for parallel processors enabling optimal handling of multi-disciplinary computation of fluid-solid simulations employing finite element and difference schemes. The goals, philosophical and modeling directions, static and dynamic poly trees, example problems, interpolative reduction, and the impact on solvers are shown in viewgraph form.
Parallel DSMC Solution of Three-Dimensional Flow Over a Finite Flat Plate
NASA Technical Reports Server (NTRS)
Nance, Robert P.; Wilmoth, Richard G.; Moon, Bongki; Hassan, H. A.; Saltz, Joel
1994-01-01
This paper describes a parallel implementation of the direct simulation Monte Carlo (DSMC) method. Runtime library support is used for scheduling and execution of communication between nodes, and domain decomposition is performed dynamically to maintain a good load balance. Performance tests are conducted using the code to evaluate various remapping and remapping-interval policies, and it is shown that a one-dimensional chain-partitioning method works best for the problems considered. The parallel code is then used to simulate the Mach 20 nitrogen flow over a finite-thickness flat plate. It is shown that the parallel algorithm produces results which compare well with experimental data. Moreover, it yields significantly faster execution times than the scalar code, as well as very good load-balance characteristics.
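The one-dimensional chain-partitioning policy favored in the abstract above amounts to splitting an ordered chain of cell workloads into contiguous chunks so that the heaviest chunk (the processor bottleneck) is minimized. A standard way to solve this, sketched below under the assumption of integer weights, is to binary-search the chunk capacity with a greedy feasibility check; the DSMC code's actual remapping machinery is of course richer than this.

```python
def chain_partition(weights, k):
    """Minimum achievable bottleneck when splitting a 1-D chain of
    cell weights into at most k contiguous chunks (one per processor).
    Binary-search the capacity; greedily check how many chunks it needs."""
    def chunks_needed(cap):
        count, load = 1, 0
        for w in weights:
            if load + w > cap:       # start a new chunk
                count, load = count + 1, 0
            load += w
        return count

    lo, hi = max(weights), sum(weights)
    while lo < hi:
        mid = (lo + hi) // 2
        if chunks_needed(mid) <= k:
            hi = mid                 # feasible: try a smaller bottleneck
        else:
            lo = mid + 1
    return lo

print(chain_partition([1, 2, 3, 4, 5], 2))  # 9  ([1,2,3] | [4,5])
```

Re-running this partitioner at each remapping interval with the latest per-cell particle counts is the essence of the dynamic load balancing the paper evaluates.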
The firehose instability during multiple reconnection in the Earth's magnetotail
NASA Astrophysics Data System (ADS)
Alexandrova, Alexandra; Divin, Andrey; Retino, Alessandro; Deca, Jan; Catapano, Filomena; Cozzani, Giulia
2017-04-01
We found unique events in the Cluster spacecraft observations of the Earth's magnetotail which correspond to the case of multiple reconnection sites. An ion temperature anisotropy, with ions more energized in the direction parallel to the magnetic field than in the perpendicular direction, is observed in the region of dynamical interaction between two active X-lines. The magnetic field and plasma parameters associated with the anisotropy correspond to the firehose instability conditions. We discuss possible scenarios for the development of the firehose instability in multiple reconnection by comparing the observations with numerical simulations. Conventional particle-in-cell simulations of 2D magnetic reconnection starting from Harris equilibria are performed using the implicit PIC code iPIC3D [Markidis, 2010]. At earlier stages the evolution creates fronts which push the weakly magnetized current sheet plasma away from the X-line. Fronts accelerate and reflect particles, producing parallel ion beams and increasing parallel ion temperature ahead of the front. If multiple X-lines are present, then counterstreaming ion beams appear inside the original current sheet between colliding reconnection jet fronts. For large enough parallel ion pressure anisotropy, the firehose-like mode is excited inside the original current sheet with a flapping-like appearance along the X GSM direction but not the Y GSM (current) direction. One should note that our simulations do not include the Bz magnetic field component (normal to the current sheet), hence ion beams cannot escape into the lobes and the whole region between two colliding fronts is unstable to the firehose-like instability. In the Earth's magnetotail such a configuration likely occurs when two active X-lines are close enough to each other, similar to a few cases we found in the Cluster observations.
Particle simulation of plasmas on the massively parallel processor
NASA Technical Reports Server (NTRS)
Gledhill, I. M. A.; Storey, L. R. O.
1987-01-01
Particle simulations, in which collective phenomena in plasmas are studied by following the self consistent motions of many discrete particles, involve several highly repetitive sets of calculations that are readily adaptable to SIMD parallel processing. A fully electromagnetic, relativistic plasma simulation for the massively parallel processor is described. The particle motions are followed in 2 1/2 dimensions on a 128 x 128 grid, with periodic boundary conditions. The two dimensional simulation space is mapped directly onto the processor network; a Fast Fourier Transform is used to solve the field equations. Particle data are stored according to an Eulerian scheme, i.e., the information associated with each particle is moved from one local memory to another as the particle moves across the spatial grid. The method is applied to the study of the nonlinear development of the whistler instability in a magnetospheric plasma model, with an anisotropic electron temperature. The wave distribution function is included as a new diagnostic to allow simulation results to be compared with satellite observations.
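The Eulerian storage scheme described above keeps each particle's record in the local memory of the grid cell that currently contains it, handing the record to a neighboring cell's memory when the particle crosses a boundary. A serial, one-dimensional sketch of that bookkeeping is given below; the names and the tuple layout are illustrative assumptions, and the real MPP code performs the transfers between processor memories rather than Python lists.

```python
def push_particles(cells, dt, ncells):
    """Eulerian storage: each grid cell owns the records of the
    particles currently inside it.  After the position update, every
    particle is filed in the bucket of its new cell, which stands in
    for the local-memory-to-local-memory transfer on the MPP."""
    new_cells = [[] for _ in range(ncells)]
    for bucket in cells:
        for x, v in bucket:
            x = (x + v * dt) % ncells          # periodic boundaries
            new_cells[int(x)].append((x, v))
    return new_cells

# Two particles on a 3-cell periodic grid.
cells = [[(0.5, 1.0)], [(1.2, -2.0)], []]
cells = push_particles(cells, dt=0.5, ncells=3)
print([len(b) for b in cells])  # particle count is conserved
```

The advantage on a SIMD machine is locality: the field values a particle needs are always held by the same processor that holds the particle's record.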
Parallel 3D Multi-Stage Simulation of a Turbofan Engine
NASA Technical Reports Server (NTRS)
Turner, Mark G.; Topp, David A.
1998-01-01
A 3D multistage simulation of each component of a modern GE turbofan engine has been made. An axisymmetric view of this engine is presented in the document. The components include a fan, a booster rig, a high-pressure compressor rig, a high-pressure turbine rig, and a low-pressure turbine rig. In the near future, all components will be run in a single calculation for a solution of 49 blade rows. The simulation exploits two levels of parallelism: each blade row is run in parallel, and each blade row grid is decomposed into several domains that are also run in parallel. Twenty processors are used for the 4-blade-row analysis. The average-passage approach developed by John Adamczyk at NASA Lewis Research Center has been further developed and parallelized; this is APNASA Version A. It is a Navier-Stokes solver using a 4-stage explicit Runge-Kutta time-marching scheme with variable time steps and residual smoothing for convergence acceleration. It has an implicit K-E turbulence model which uses an ADI solver to factor the matrix. Between 50 and 100 explicit time steps are solved before a blade row body force is calculated and exchanged with the other blade rows; this outer iteration has been coined a "flip." Efforts have been made to make the solver linearly scalable with the number of blade rows. Enough flips are run (between 50 and 200) that the solution in the entire machine is no longer changing. The K-E equations are generally solved every other explicit time step. One of the key requirements in the development of the parallel code was to make the parallel solution exactly (bit for bit) match the serial solution. This has helped isolate many small parallel bugs and guarantee that the parallelization was done correctly. The domain decomposition is done only in the axial direction, since the number of points axially is much larger than in the other two directions. The code uses MPI for message passing. The parallel speed-up of the solver portion (excluding I/O and the body-force calculation) is reported for a grid with 227 points axially.
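Speed-up measurements of the kind mentioned above reduce to two standard metrics, sketched here with purely hypothetical timings (the numbers below are not taken from the report):

```python
def speedup(t_serial, t_parallel):
    """Parallel speed-up: how many times faster the parallel run is."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, nprocs):
    """Parallel efficiency: achieved speed-up per processor used."""
    return speedup(t_serial, t_parallel) / nprocs

# e.g. a solver taking 400 s serially and 25 s on 20 processors
print(speedup(400.0, 25.0), efficiency(400.0, 25.0, 20))  # 16.0 0.8
```

Excluding I/O and the body-force exchange, as the report does, isolates the part of the code whose scaling these metrics are meant to characterize.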
Integrated parallel reception, excitation, and shimming (iPRES).
Han, Hui; Song, Allen W; Truong, Trong-Kha
2013-07-01
To develop a new concept for a hardware platform that enables integrated parallel reception, excitation, and shimming. This concept uses a single coil array rather than separate arrays for parallel excitation/reception and B0 shimming. It relies on a novel design that allows a radiofrequency current (for excitation/reception) and a direct current (for B0 shimming) to coexist independently in the same coil. Proof-of-concept B0 shimming experiments were performed with a two-coil array in a phantom, whereas B0 shimming simulations were performed with a 48-coil array in the human brain. Our experiments show that individually optimized direct currents applied in each coil can reduce the B0 root-mean-square error by 62-81% and minimize distortions in echo-planar images. The simulations show that dynamic shimming with the 48-coil integrated parallel reception, excitation, and shimming array can reduce the B0 root-mean-square error in the prefrontal and temporal regions by 66-79% as compared with static second-order spherical harmonic shimming and by 12-23% as compared with dynamic shimming with a 48-coil conventional shim array. Our results demonstrate the feasibility of the integrated parallel reception, excitation, and shimming concept to perform parallel excitation/reception and B0 shimming with a unified coil system as well as its promise for in vivo applications. Copyright © 2013 Wiley Periodicals, Inc.
Numerical study of the interaction between a head fire and a backfire propagating in grassland.
Dominique Morvan; Sofiane Meradji; William Mell
2011-01-01
One of the objectives of this paper was to simulate numerically the interaction between two line fires ignited in a grassland, on a flat terrain, perpendicularly to the wind direction, in such a way that the two fire fronts (a head fire and a backfire) propagated in opposite directions parallel to the wind. The numerical simulations were conducted in 3-D using the new...
Micromagnetic simulation of anisotropic grain boundary diffusion for sintered Nd-Fe-B magnets
NASA Astrophysics Data System (ADS)
Li, W.; Zhou, Q.; Zhao, L. Z.; Wang, Q. X.; Zhong, X. C.; Liu, Z. W.
2018-04-01
A systematic investigation of the anisotropic grain boundary diffusion in sintered Nd-Fe-B magnets is carried out by micromagnetic simulation. The results indicate that the critical reason for the anisotropic diffusion effect is not the difference in the amount of Dy diffused along different directions but the macroscopic demagnetizing field. Diffusion parallel to the easy axis from both pole surfaces of the magnet can increase the nucleation fields in the two major regions with large macroscopic demagnetizing fields, where reverse domains can nucleate easily. As a consequence, grain boundary diffusion along directions parallel to the easy axis from the two pole surfaces is more effective at improving the coercivity of the magnets than diffusion along other directions. It is also found that only a limited diffusion depth is required to enhance the coercivity. The present result is in good agreement with recent experimental findings.
Raytracing and Direct-Drive Targets
NASA Astrophysics Data System (ADS)
Schmitt, Andrew J.; Bates, Jason; Fyfe, David; Eimerl, David
2013-10-01
Accurate simulation of the effects of laser imprinting and drive asymmetries in directly driven targets requires the ability to distinguish between raytrace noise and the intensity structure produced by the spatial and temporal incoherence of optical smoothing. We have developed and implemented a smoother raytrace algorithm for our mpi-parallel radiation hydrodynamics code, FAST3D. The underlying approach is to connect the rays into either sheets (in 2D) or volume-enclosing chunks (in 3D) so that the absorbed energy distribution continuously covers the propagation area illuminated by the laser. We will describe the status and show the different scalings encountered in 2D and 3D problems as the computational size, parallelization strategy, and number of rays is varied. Finally, we show results using the method in current NIKE experimental target simulations and in proposed symmetric and polar direct-drive target designs. Supported by US DoE/NNSA.
Longitudinal train dynamics: an overview
NASA Astrophysics Data System (ADS)
Wu, Qing; Spiryagin, Maksym; Cole, Colin
2016-12-01
This paper discusses the evolution of longitudinal train dynamics (LTD) simulations, which covers numerical solvers, vehicle connection systems, air brake systems, wagon dumper systems and locomotives, resistance forces and gravitational components, vehicle in-train instabilities, and computing schemes. A number of potential research topics are suggested, such as modelling of friction, polymer, and transition characteristics for vehicle connection simulations, studies of wagon dumping operations, proper modelling of vehicle in-train instabilities, and computing schemes for LTD simulations. Evidence shows that LTD simulations have evolved with computing capabilities. Currently, advanced component models that directly describe the working principles of the operation of air brake systems, vehicle connection systems, and traction systems are available. Parallel computing is a good solution to combine and simulate all these advanced models. Parallel computing can also be used to conduct three-dimensional long train dynamics simulations.
GAPD: a GPU-accelerated atom-based polychromatic diffraction simulation code.
E, J C; Wang, L; Chen, S; Zhang, Y Y; Luo, S N
2018-03-01
GAPD, a graphics-processing-unit (GPU)-accelerated atom-based polychromatic diffraction simulation code for direct, kinematics-based, simulations of X-ray/electron diffraction of large-scale atomic systems with mono-/polychromatic beams and arbitrary plane detector geometries, is presented. This code implements GPU parallel computation via both real- and reciprocal-space decompositions. With GAPD, direct simulations are performed of the reciprocal lattice node of ultralarge systems (∼5 billion atoms) and diffraction patterns of single-crystal and polycrystalline configurations with mono- and polychromatic X-ray beams (including synchrotron undulator sources), and validation, benchmark and application cases are presented.
A Parallel, Finite-Volume Algorithm for Large-Eddy Simulation of Turbulent Flows
NASA Technical Reports Server (NTRS)
Bui, Trong T.
1999-01-01
A parallel, finite-volume algorithm has been developed for large-eddy simulation (LES) of compressible turbulent flows. This algorithm includes piecewise linear least-square reconstruction, trilinear finite-element interpolation, Roe flux-difference splitting, and second-order MacCormack time marching. Parallel implementation is done using the message-passing programming model. In this paper, the numerical algorithm is described. To validate the numerical method for turbulence simulation, LES of fully developed turbulent flow in a square duct is performed for a Reynolds number of 320 based on the average friction velocity and the hydraulic diameter of the duct. Direct numerical simulation (DNS) results are available for this test case, and the accuracy of this algorithm for turbulence simulations can be ascertained by comparing the LES solutions with the DNS results. The effects of grid resolution, upwind numerical dissipation, and subgrid-scale dissipation on the accuracy of the LES are examined. Comparison with DNS results shows that the standard Roe flux-difference splitting dissipation adversely affects the accuracy of the turbulence simulation. For accurate turbulence simulations, only 3-5 percent of the standard Roe flux-difference splitting dissipation is needed.
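The finding above, that only 3-5 percent of the standard Roe flux-difference splitting dissipation should be retained for accurate LES, can be illustrated on the scalar linear advection equation u_t + a u_x = 0. The sketch below is a deliberately simplified scalar analogue with an assumed scaling parameter eps; the actual algorithm applies the scaling within the full compressible Navier-Stokes flux.

```python
def roe_flux_1d(uL, uR, a, eps=1.0):
    """Roe-type interface flux for linear advection u_t + a u_x = 0.
    eps scales the upwind (flux-difference splitting) dissipation:
    eps = 1 recovers the standard Roe flux, while values near
    0.03-0.05 mimic the reduced dissipation the abstract recommends."""
    central = 0.5 * a * (uL + uR)
    dissipation = 0.5 * eps * abs(a) * (uR - uL)
    return central - dissipation

# With full dissipation and a > 0, the flux reduces to pure upwinding:
print(roe_flux_1d(2.0, 4.0, a=1.0, eps=1.0))   # 2.0  (= a * uL)
print(roe_flux_1d(2.0, 4.0, a=1.0, eps=0.05))  # nearly central
```

The trade-off the paper examines is visible even here: the dissipative term stabilizes the scheme but damps the resolved turbulent fluctuations, so for LES it is reduced to the minimum needed for stability.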
Effect of strain field on displacement cascade in tungsten studied by molecular dynamics simulation
NASA Astrophysics Data System (ADS)
Wang, D.; Gao, N.; Wang, Z. G.; Gao, X.; He, W. H.; Cui, M. H.; Pang, L. L.; Zhu, Y. B.
2016-10-01
Using atomistic methods, the coupling effect of strain field and displacement cascade in body-centered cubic (BCC) tungsten is directly simulated by molecular dynamics (MD) at different temperatures. The values of the hydrostatic and uniaxial (parallel or perpendicular to the primary knock-on atom (PKA) direction) strains range from -2% to 2%, and the temperature from 100 to 1000 K. Because of the annealing effect, the influence of strain on radiation damage at low temperature is shown to be more significant than at high temperature. When the cascade proceeds under hydrostatic strain, the Frenkel pair (FP) production, the fraction of defects in clusters, and the average defect cluster size all increase in the tensile state and decrease in the compressive state. Under uniaxial strain, the effect of strain parallel to the PKA direction is smaller than that of hydrostatic strain, while the effect of strain perpendicular to the PKA direction is negligible. Under uniaxial strain along the <1 1 1> direction, SIAs and SIA clusters are observed to orient along the strain direction in the tensile state, and uniaxial compressive strain perpendicular to <1 1 1> leads to a similar preferred orientation. All these results indicate that, under irradiation, the tensile state should be avoided for materials used in nuclear power plants.
NASA Astrophysics Data System (ADS)
Leamy, Michael J.; Springer, Adam C.
In this research we report parallel implementation of a Cellular Automata-based simulation tool for computing elastodynamic response on complex, two-dimensional domains. Elastodynamic simulation using Cellular Automata (CA) has recently been presented as an alternative, inherently object-oriented technique for accurately and efficiently computing linear and nonlinear wave propagation in arbitrarily-shaped geometries. The local, autonomous nature of the method should lead to straightforward and efficient parallelization. We address this notion on symmetric multiprocessor (SMP) hardware using a Java-based object-oriented CA code implementing triangular state machines (i.e., automata) and the MPI bindings written in Java (MPJ Express). We use MPJ Express to reconfigure our existing CA code to distribute a domain's automata to the cores present on a dual quad-core shared-memory system (eight total processors). We note that this message passing parallelization strategy is directly applicable to clustered computing, which will be the focus of follow-on research. Results on the shared memory platform indicate nearly ideal, linear speed-up. We conclude that the CA-based elastodynamic simulator is easily configured to run in parallel, and yields excellent speed-up on SMP hardware.
NASA Astrophysics Data System (ADS)
Wei, Xiaohui; Li, Weishan; Tian, Hailong; Li, Hongliang; Xu, Haixiao; Xu, Tianfu
2015-07-01
The numerical simulation of multiphase flow and reactive transport in porous media for complex subsurface problems is a computationally intensive application. To meet increasing computational requirements, this paper presents a parallel computing method and architecture. Derived from TOUGHREACT, a well-established code for simulating subsurface multiphase flow and reactive transport problems, we developed THC-MP, a high-performance code for massively parallel computers that greatly extends the computational capability of the original code. The domain decomposition method was applied to the coupled numerical computing procedure in THC-MP. We designed the distributed data structure and implemented the data initialization, the exchange between computing nodes, and the core solving module using hybrid parallel iterative and direct solvers. Numerical accuracy of THC-MP was verified on a CO2 injection-induced reactive transport problem by comparing the results obtained from parallel computing with those from sequential computing (the original code). Execution efficiency and code scalability were examined through field-scale carbon sequestration applications on a multicore cluster. The results successfully demonstrate the enhanced performance of THC-MP on parallel computing facilities.
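The domain decomposition with inter-node data exchange described above can be reduced to a minimal serial sketch: split a 1-D explicit diffusion solve across two "ranks" that exchange one ghost cell per step, and check that the stitched result matches the single-domain run exactly. This is an assumption-laden toy (the real code solves coupled multiphase reactive transport with implicit solvers over MPI), but the ghost-exchange pattern is the same.

```python
def diffuse(u, nu=0.25):
    """One explicit diffusion step with fixed boundary values."""
    return [u[0]] + [u[i] + nu * (u[i-1] - 2*u[i] + u[i+1])
                     for i in range(1, len(u) - 1)] + [u[-1]]

def diffuse_decomposed(u, steps, nu=0.25):
    """Same solve on two subdomains: each step, the ranks exchange one
    ghost cell across the interface (a stand-in for the MPI message),
    then update their own interiors from the pre-step values."""
    half = len(u) // 2
    left, right = u[:half], u[half:]
    for _ in range(steps):
        lg, rg = right[0], left[-1]            # ghost-cell exchange
        left = diffuse(left + [lg], nu)[:-1]   # drop the ghost afterwards
        right = diffuse([rg] + right, nu)[1:]
        # (both updates used the same pre-step data, as in a real halo swap)
    return left + right

u0 = [0.0, 0.0, 1.0, 1.0, 0.0, 0.0]
serial = u0
for _ in range(3):
    serial = diffuse(serial)
print(diffuse_decomposed(u0, 3) == serial)  # True: bit-for-bit agreement
```

Matching the sequential run exactly, as in the verification step the abstract describes, is possible here because each subdomain performs arithmetically identical updates once the ghost values are in place.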
Parallel distributed, reciprocal Monte Carlo radiation in coupled, large eddy combustion simulations
NASA Astrophysics Data System (ADS)
Hunsaker, Isaac L.
Radiation is the dominant mode of heat transfer in high temperature combustion environments. Radiative heat transfer affects the gas and particle phases, including all the associated combustion chemistry. The radiative properties are in turn affected by the turbulent flow field. This bi-directional coupling of radiation-turbulence interactions poses a major challenge in creating parallel-capable, high-fidelity combustion simulations. In this work, a new model was developed in which reciprocal Monte Carlo radiation was coupled with a turbulent large-eddy simulation combustion model. A technique wherein domain patches are stitched together was implemented to allow for scalable parallelism. The combustion model runs in parallel on a decomposed domain; the radiation model runs in parallel on a recomposed domain. The recomposed domain is stored on each processor after information sharing of the decomposed domain is handled via the message passing interface. Verification and validation testing of the new radiation model were favorable. Strong scaling analyses were performed on the Ember cluster and the Titan cluster for the CPU-radiation model and GPU-radiation model, respectively. The model demonstrated strong scaling to over 1,700 and 16,000 processing cores on Ember and Titan, respectively.
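The patch-stitching step described above, where each processor rebuilds the full (recomposed) domain from every rank's decomposed piece before the radiation sweep, can be sketched as a gather-and-place operation. The function below is a serial illustration under assumed names; in the actual model the patches arrive via MPI rather than a Python list.

```python
def recompose(patches, shape):
    """Stitch per-rank domain patches back into the full field.
    Each patch carries its global (row, col) offset, mimicking the
    all-gather that rebuilds the recomposed domain on every processor
    before the Monte Carlo radiation sweep."""
    nx, ny = shape
    field = [[None] * ny for _ in range(nx)]
    for (i0, j0), block in patches:
        for di, row in enumerate(block):
            for dj, val in enumerate(row):
                field[i0 + di][j0 + dj] = val
    return field

# Two ranks, each owning one half of a 2 x 4 domain:
patches = [((0, 0), [[1, 2], [5, 6]]),
           ((0, 2), [[3, 4], [7, 8]])]
print(recompose(patches, (2, 4)))  # [[1, 2, 3, 4], [5, 6, 7, 8]]
```

Holding the whole recomposed field on every rank trades memory for communication: rays in the Monte Carlo sweep can then traverse the entire domain without further messages.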
Mode structure symmetry breaking of energetic particle driven beta-induced Alfvén eigenmode
NASA Astrophysics Data System (ADS)
Lu, Z. X.; Wang, X.; Lauber, Ph.; Zonca, F.
2018-01-01
The mode structure symmetry breaking of energetic particle driven Beta-induced Alfvén Eigenmode (BAE) is studied based on global theory and simulation. The weak coupling formula gives a reasonable estimate of the local eigenvalue compared with global hybrid simulation using XHMGC. The non-perturbative effect of energetic particles on global mode structure symmetry breaking in radial and parallel (along B) directions is demonstrated. With the contribution from energetic particles, two dimensional (radial and poloidal) BAE mode structures with symmetric/asymmetric tails are produced using an analytical model. It is demonstrated that the symmetry breaking in radial and parallel directions is intimately connected. The effects of mode structure symmetry breaking on nonlinear physics, energetic particle transport, and the possible insight for experimental studies are discussed.
Predicting Flows of Rarefied Gases
NASA Technical Reports Server (NTRS)
LeBeau, Gerald J.; Wilmoth, Richard G.
2005-01-01
DSMC Analysis Code (DAC) is a flexible, highly automated, easy-to-use computer program for predicting flows of rarefied gases -- especially flows of upper-atmospheric, propulsion, and vented gases impinging on spacecraft surfaces. DAC implements the direct simulation Monte Carlo (DSMC) method, which is widely recognized as standard for simulating flows at densities so low that the continuum-based equations of computational fluid dynamics are invalid. DAC enables users to model complex surface shapes and boundary conditions quickly and easily. The discretization of a flow field into computational grids is automated, thereby relieving the user of a traditionally time-consuming task while ensuring (1) appropriate refinement of grids throughout the computational domain, (2) determination of optimal settings for temporal discretization and other simulation parameters, and (3) satisfaction of the fundamental constraints of the method. In so doing, DAC ensures an accurate and efficient simulation. In addition, DAC can utilize parallel processing to reduce computation time. The domain decomposition needed for parallel processing is completely automated, and the software employs a dynamic load-balancing mechanism to ensure optimal parallel efficiency throughout the simulation.
Spatial data analytics on heterogeneous multi- and many-core parallel architectures using python
Laura, Jason R.; Rey, Sergio J.
2017-01-01
Parallel vector spatial analysis concerns the application of parallel computational methods to facilitate vector-based spatial analysis. The history of parallel computation in spatial analysis is reviewed, and this work is placed into the broader context of high-performance computing (HPC) and parallelization research. The rise of cyber infrastructure and its manifestation in spatial analysis as CyberGIScience is seen as a main driver of renewed interest in parallel computation in the spatial sciences. Key problems in spatial analysis that have been the focus of parallel computing are covered. Chief among these are spatial optimization problems, computational geometric problems including polygonization and spatial contiguity detection, the use of Monte Carlo Markov chain simulation in spatial statistics, and parallel implementations of spatial econometric methods. Future directions for research on parallelization in computational spatial analysis are outlined.
NASA Astrophysics Data System (ADS)
Palmesi, P.; Abert, C.; Bruckner, F.; Suess, D.
2018-05-01
Fast stray-field calculation is of great importance for micromagnetic simulations, since it is the most time-consuming part of the simulation. The Fast Multipole Method (FMM) has displayed linear O(N) parallelization behavior on many cores. This article investigates the error of a recent FMM approach approximating sources using linear—instead of constant—finite elements in the singular integral for calculating the stray field and the corresponding potential. Having measured performance in an earlier manuscript, here we investigate the convergence of the relative L2 error for several FMM simulation parameters. Scenarios calculating the stray field either directly or via the potential are discussed.
Large-eddy simulations of compressible convection on massively parallel computers. [stellar physics
NASA Technical Reports Server (NTRS)
Xie, Xin; Toomre, Juri
1993-01-01
We report preliminary implementation of the large-eddy simulation (LES) technique in 2D simulations of compressible convection carried out on the CM-2 massively parallel computer. The convective flow fields in our simulations possess structures similar to those found in a number of direct simulations, with roll-like flows coherent across the entire depth of the layer that spans several density scale heights. Our detailed assessment of the effects of various subgrid scale (SGS) terms reveals that they may affect the gross character of convection. Yet, somewhat surprisingly, we find that our LES solutions, and another in which the SGS terms are turned off, only show modest differences. The resulting 2D flows realized here are rather laminar in character, and achieving substantial turbulence may require stronger forcing and less dissipation.
Generating performance portable geoscientific simulation code with Firedrake (Invited)
NASA Astrophysics Data System (ADS)
Ham, D. A.; Bercea, G.; Cotter, C. J.; Kelly, P. H.; Loriant, N.; Luporini, F.; McRae, A. T.; Mitchell, L.; Rathgeber, F.
2013-12-01
This presentation will demonstrate how a change in simulation programming paradigm can be exploited to deliver sophisticated simulation capability which is far easier to programme than conventional models, is capable of exploiting different emerging parallel hardware, and is tailored to the specific needs of geoscientific simulation. Geoscientific simulation represents a grand-challenge computational task: many of the largest computers in the world are devoted to this field, and scientists' requirements for resolution and complexity are far from sated. However, single-thread performance has stalled, and sometimes even decreased, over the last decade, and has been replaced by ever more parallel systems: both conventional multicore CPUs and the emerging world of accelerators. At the same time, the need of scientists to couple ever more complex dynamics and parametrisations into their models makes the model development task vastly more complex. The conventional approach of writing code in low-level languages such as Fortran or C/C++ and then hand-coding parallelism for different platforms by adding library calls and directives forces the intermingling of the numerical code with its implementation. This results in an almost impossible set of skill requirements for developers, who must simultaneously be domain science experts, numericists, software engineers and parallelisation specialists. Even more critically, it requires code to be essentially rewritten for each emerging hardware platform. Since new platforms emerge constantly, and since code owners do not usually control the procurement of the supercomputers on which they must run, this represents an unsustainable development load. The Firedrake system, conversely, offers the developer the opportunity to write PDE discretisations in the high-level mathematical language UFL from the FEniCS project (http://fenicsproject.org).
Non-PDE model components, such as parametrisations, can be written as short C kernels operating locally on the underlying mesh, with no explicit parallelism. The executable code is then generated in C, CUDA or OpenCL and executed in parallel on the target architecture. The system also offers features of special relevance to the geosciences. In particular, the large scale separation between the vertical and horizontal directions in many geoscientific processes can be exploited to offer the flexibility of unstructured meshes in the horizontal direction, without the performance penalty usually associated with those methods.
Petascale turbulence simulation using a highly parallel fast multipole method on GPUs
NASA Astrophysics Data System (ADS)
Yokota, Rio; Barba, L. A.; Narumi, Tetsu; Yasuoka, Kenji
2013-03-01
This paper reports large-scale direct numerical simulations of homogeneous-isotropic fluid turbulence, achieving sustained performance of 1.08 petaflop/s on GPU hardware using single precision. The simulations use a vortex particle method to solve the Navier-Stokes equations, with a highly parallel fast multipole method (FMM) as numerical engine, and match the current record in mesh size for this application, a cube of 4096³ computational points solved with a spectral method. The standard numerical approach used in this field is the pseudo-spectral method, relying on the FFT algorithm as the numerical engine. The particle-based simulations presented in this paper quantitatively match the kinetic energy spectrum obtained with a pseudo-spectral method, using a trusted code. In terms of parallel performance, weak scaling results show the FMM-based vortex method achieving 74% parallel efficiency on 4096 processes (one GPU per MPI process, 3 GPUs per node of the TSUBAME-2.0 system). The FFT-based spectral method is able to achieve just 14% parallel efficiency on the same number of MPI processes (using only CPU cores), due to the all-to-all communication pattern of the FFT algorithm. The calculation time for one time step was 108 s for the vortex method and 154 s for the spectral method, under these conditions. Computing with 69 billion particles, this work exceeds by an order of magnitude the largest vortex-method calculations to date.
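Weak-scaling parallel efficiency, as quoted above (74% on 4096 processes), is the ratio of the single-process time to the N-process time at fixed work per process. A minimal sketch; the 80 s single-process baseline is an illustrative assumption, not a figure from the paper:

```python
def weak_scaling_efficiency(t_ref: float, t_n: float) -> float:
    """Weak-scaling efficiency: under ideal scaling, the time per step
    stays constant as processes (and total work) grow together."""
    return t_ref / t_n

# Assumed 80 s/step on one process vs the abstract's 108 s/step at scale
eff = weak_scaling_efficiency(80.0, 108.0)
print(f"{eff:.0%}")  # about 74%
```

Under that assumed baseline the 108 s/step figure corresponds to roughly 74% efficiency, matching the number quoted for the FMM-based vortex method.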
Computational methods and software systems for dynamics and control of large space structures
NASA Technical Reports Server (NTRS)
Park, K. C.; Felippa, C. A.; Farhat, C.; Pramono, E.
1990-01-01
Two key areas of crucial importance to the computer-based simulation of large space structures are discussed. The first area involves multibody dynamics (MBD) of flexible space structures, with applications directed to deployment, construction, and maneuvering. The second area deals with advanced software systems, with emphasis on parallel processing. The latest research thrust in the second area involves massively parallel computers.
Dip and anisotropy effects on flow using a vertically skewed model grid.
Hoaglund, John R; Pollard, David
2003-01-01
Darcy flow equations relating vertical and bedding-parallel flow to vertical and bedding-parallel gradient components are derived for a skewed Cartesian grid in a vertical plane, correcting for structural dip given the principal hydraulic conductivities in bedding-parallel and bedding-orthogonal directions. Incorrect-minus-correct flow error results are presented for ranges of structural dip (0° ≤ θ ≤ 90°) and gradient directions (0° ≤ φ ≤ 360°). The equations can be coded into ground water models (e.g., MODFLOW) that can use a skewed Cartesian coordinate system to simulate flow in structural terrain with deformed bedding planes. Models modified with these equations will require input arrays of strike and dip, and a solver that can handle off-diagonal hydraulic conductivity terms.
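The off-diagonal conductivity terms mentioned above arise from rotating the principal-axis conductivity tensor through the dip angle. A minimal 2D sketch of that rotation (the function name and example values are illustrative, not taken from the paper or from MODFLOW):

```python
import math

def dipped_conductivity(k_par: float, k_perp: float, dip_deg: float):
    """Rotate a principal-axis hydraulic conductivity tensor by the dip angle.

    k_par / k_perp are bedding-parallel and bedding-orthogonal conductivities.
    Returns [[Kxx, Kxz], [Kzx, Kzz]] in horizontal/vertical coordinates;
    the off-diagonal term is nonzero whenever 0 < dip < 90 degrees.
    """
    t = math.radians(dip_deg)
    c, s = math.cos(t), math.sin(t)
    kxx = k_par * c * c + k_perp * s * s
    kzz = k_par * s * s + k_perp * c * c
    kxz = (k_par - k_perp) * s * c  # vanishes at dip = 0 or 90 degrees
    return [[kxx, kxz], [kxz, kzz]]

# Example: 10:1 anisotropy, 30-degree structural dip
K = dipped_conductivity(10.0, 1.0, 30.0)
```

Feeding such a tensor to a solver is what requires the off-diagonal support noted in the abstract: horizontal gradients then drive vertical flow and vice versa.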
NASA Astrophysics Data System (ADS)
Sloan, Gregory James
The direct numerical simulation (DNS) offers the most accurate approach to modeling the behavior of a physical system, but carries an enormous computation cost. There exists a need for an accurate DNS to model the coupled solid-fluid system seen in targeted drug delivery (TDD), nanofluid thermal energy storage (TES), as well as other fields where experiments are necessary, but experiment design may be costly. A parallel DNS can greatly reduce the large computation times required, while providing the same results and functionality as its serial counterpart. A D2Q9 lattice Boltzmann method approach was implemented to solve the fluid phase. The use of domain decomposition with message passing interface (MPI) parallelism resulted in an algorithm that exhibits super-linear scaling in testing, which may be attributed to the caching effect. Decreased performance on a per-node basis for a fixed number of processes confirms this observation. A multiscale approach was implemented to model the behavior of nanoparticles submerged in a viscous fluid, and used to examine the mechanisms that promote or inhibit clustering. Parallelization of this model using a master-worker algorithm with MPI gives less-than-linear speedup for a fixed number of particles and varying number of processes. This is due to the inherent inefficiency of the master-worker approach. Lastly, these separate simulations are combined, and two-way coupling is implemented between the solid and fluid.
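For flavour, the D2Q9 lattice Boltzmann method evolves nine distribution functions per cell toward a second-order Maxwellian equilibrium. A minimal single-cell sketch of that equilibrium using the standard D2Q9 weights (the dissertation's MPI domain decomposition and streaming/collision machinery are omitted):

```python
# Standard D2Q9 lattice: rest velocity, 4 axis and 4 diagonal directions
W = [4/9] + [1/9] * 4 + [1/36] * 4
C = [(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1),
     (1, 1), (-1, 1), (-1, -1), (1, -1)]

def equilibrium(rho: float, ux: float, uy: float) -> list:
    """Second-order Maxwellian equilibrium distributions for one cell."""
    usq = ux * ux + uy * uy
    feq = []
    for w, (cx, cy) in zip(W, C):
        cu = cx * ux + cy * uy
        feq.append(w * rho * (1 + 3 * cu + 4.5 * cu * cu - 1.5 * usq))
    return feq

f = equilibrium(1.0, 0.1, 0.0)
```

The zeroth and first moments of the returned distributions recover the macroscopic density and momentum, which is the consistency condition the collision step relies on.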
NASA Astrophysics Data System (ADS)
Maitarad, Amphawan; Poomsuk, Nattawee; Vilaivan, Chotima; Vilaivan, Tirayut; Siriwong, Khatcharin
2018-04-01
Suitable conformations for peptide nucleic acid (PNA) self-hybrids with (2′R,4′R)- and (2′R,4′S)-prolyl-(1S,2S)-2-aminocyclopentanecarboxylic acid backbones (namely, acpcPNA and epi-acpcPNA, respectively) were investigated based on molecular dynamics simulations. The results revealed that hybridization of the acpcPNA was observed only in the parallel direction, with a conformation close to the P-type structure. In contrast, self-hybrids of the epi-acpcPNA were formed in the antiparallel and parallel directions; the antiparallel duplex adopted the B-form conformation, and the parallel duplex was between B- and P-forms. The calculated binding energies and the experimental data indicate that the antiparallel epi-acpcPNA self-hybrid was more stable than the parallel duplex.
Ion dynamics at supercritical quasi-parallel shocks: Hybrid simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Su Yanqing; Lu Quanming; Gao Xinliang
2012-09-15
By separating the incident ions into directly transmitted, downstream thermalized, and diffuse ions, we perform one-dimensional (1D) hybrid simulations to investigate ion dynamics at a supercritical quasi-parallel shock. In the simulations, the angle between the upstream magnetic field and the shock normal direction is θBn = 30°, and the Alfvén Mach number is MA ≈ 5.5. The shock exhibits a periodic reformation process. The ion reflection occurs at the beginning of the reformation cycle. Part of the reflected ions is trapped between the old and new shock fronts for an extended time period. These particles eventually form superthermal diffuse ions after they escape to the upstream of the new shock front at the end of the reformation cycle. The other reflected ions may return to the shock immediately or be trapped between the old and new shock fronts for a short time period. When the amplitude of the new shock front exceeds that of the old shock front and the reformation cycle is finished, these ions become thermalized ions in the downstream. No noticeable heating can be found in the directly transmitted ions. The relevance of our simulations to the satellite observations is also discussed in the paper.
High-order finite difference formulations for the incompressible Navier-Stokes equations on the CM-5
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tafti, D.
1995-12-01
The paper describes the features and implementation of a general purpose high-order accurate finite difference computer program for direct and large-eddy simulations of turbulence on the CM-5 in the data parallel mode. Benchmarking studies for a direct simulation of turbulent channel flow are discussed. Performance of up to 8.8 GFLOPS is obtained for the high-order formulations on 512 processing nodes of the CM-5. The execution time for a simulation with 24 million nodes in a domain with two periodic directions is in the range of 0.2 μs/time-step/degree of freedom on 512 processing nodes of the CM-5.
Particle Acceleration, Magnetic Field Generation in Relativistic Shocks
NASA Technical Reports Server (NTRS)
Nishikawa, Ken-Ichi; Hardee, P.; Hededal, C. B.; Richardson, G.; Sol, H.; Preece, R.; Fishman, G. J.
2005-01-01
Shock acceleration is a ubiquitous phenomenon in astrophysical plasmas. Plasma waves and their associated instabilities (e.g., the Buneman instability, two-stream instability, and the Weibel instability) created in the shocks are responsible for particle (electron, positron, and ion) acceleration. Using a 3-D relativistic electromagnetic particle (REMP) code, we have investigated particle acceleration associated with a relativistic jet front propagating through an ambient plasma with and without initial magnetic fields. We find only small differences in the results between no ambient and weak ambient parallel magnetic fields. Simulations show that the Weibel instability created in the collisionless shock front accelerates particles perpendicular and parallel to the jet propagation direction. New simulations with an ambient perpendicular magnetic field show the strong interaction between the relativistic jet and the magnetic fields. The magnetic fields are piled up by the jet and the jet electrons are bent, which creates currents and displacement currents. At the nonlinear stage, the magnetic fields are reversed by the current and reconnection may take place. Due to these dynamics the jet and ambient electrons are strongly accelerated in both parallel and perpendicular directions.
Particle Acceleration, Magnetic Field Generation, and Emission in Relativistic Shocks
NASA Technical Reports Server (NTRS)
Nishikawa, Ken-IchiI.; Hededal, C.; Hardee, P.; Richardson, G.; Preece, R.; Sol, H.; Fishman, G.
2004-01-01
Shock acceleration is a ubiquitous phenomenon in astrophysical plasmas. Plasma waves and their associated instabilities (e.g., the Buneman instability, two-stream instability, and the Weibel instability) created in the shocks are responsible for particle (electron, positron, and ion) acceleration. Using a 3-D relativistic electromagnetic particle (REMP) code, we have investigated particle acceleration associated with a relativistic jet front propagating through an ambient plasma with and without initial magnetic fields. We find only small differences in the results between no ambient and weak ambient parallel magnetic fields. Simulations show that the Weibel instability created in the collisionless shock front accelerates particles perpendicular and parallel to the jet propagation direction. New simulations with an ambient perpendicular magnetic field show the strong interaction between the relativistic jet and the magnetic fields. The magnetic fields are piled up by the jet and the jet electrons are bent, which creates currents and displacement currents. At the nonlinear stage, the magnetic fields are reversed by the current and reconnection may take place. Due to these dynamics the jet and ambient electrons are strongly accelerated in both parallel and perpendicular directions.
NASA Astrophysics Data System (ADS)
Wendel, D. E.; Olson, D. K.; Hesse, M.; Karimabadi, H.; Daughton, W. S.
2013-12-01
We investigate the distribution of parallel electric fields and their relationship to the location and rate of magnetic reconnection of a large particle-in-cell simulation of 3D turbulent magnetic reconnection with open boundary conditions. The simulation's guide field geometry inhibits the formation of topological features such as separators and null points. Therefore, we derive the location of potential changes in magnetic connectivity by finding the field lines that experience a large relative change between their endpoints, i.e., the quasi-separatrix layer. We find a correspondence between the locus of changes in magnetic connectivity, or the quasi-separatrix layer, and the map of large gradients in the integrated parallel electric field (or quasi-potential). Furthermore, we compare the distribution of parallel electric fields along field lines with the reconnection rate. We find the reconnection rate is controlled by only the low-amplitude, zeroth and first-order trends in the parallel electric field, while the contribution from high amplitude parallel fluctuations, such as electron holes, is negligible. The results impact the determination of reconnection sites within models of 3D turbulent reconnection as well as the inference of reconnection rates from in situ spacecraft measurements. It is difficult through direct observation to isolate the locus of the reconnection parallel electric field amidst the large amplitude fluctuations. However, we demonstrate that a positive slope of the partial sum of the parallel electric field along the field line as a function of field line length indicates where reconnection is occurring along the field line.
Closely Spaced Independent Parallel Runway Simulation.
1984-10-01
facility consists of the Central Computer Facility, the Controller Laboratory, and the Simulator Pilot Complex. CENTRAL COMPUTER FACILITY. The Central... Computer Facility consists of a group of mainframes, minicomputers, and associated peripherals which host the operational and data acquisition...in the Controller Laboratory and convert their verbal directives into a keyboard entry which is transmitted to the Central Computer Complex, where
[Series: Medical Applications of the PHITS Code (2): Acceleration by Parallel Computing].
Furuta, Takuya; Sato, Tatsuhiko
2015-01-01
Time-consuming Monte Carlo dose calculation has become feasible owing to the development of computer technology. However, the recent gains come chiefly from the emergence of multi-core high-performance computers; parallel computing is therefore key to achieving good performance of software programs. The Monte Carlo simulation code PHITS contains two parallel computing functions: distributed-memory parallelization using protocols of the message passing interface (MPI) and shared-memory parallelization using open multi-processing (OpenMP) directives. Users can choose between the two functions according to their needs. This paper explains the two functions, with their advantages and disadvantages, and provides test applications showing their performance on a typical multi-core high-performance workstation.
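The distributed-memory style of parallel Monte Carlo described here amounts to giving each rank an independent random stream and a batch of histories, then reducing the small per-rank tallies. A hedged sketch using Python's multiprocessing in place of MPI (a π estimate stands in for a dose tally; PHITS is a Fortran code and none of these names are its API):

```python
import random
from multiprocessing import Pool

def run_histories(args) -> int:
    """Track one rank's batch of Monte Carlo 'histories' with its own seed."""
    seed, n = args
    rng = random.Random(seed)
    # Count darts landing inside the unit quarter-circle
    return sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(n))

def parallel_pi(n_workers: int = 4, histories_per_worker: int = 100_000) -> float:
    # Each worker gets an independent seed and batch; only the small
    # integer tallies are combined at the end (like an MPI reduce).
    jobs = [(seed, histories_per_worker) for seed in range(n_workers)]
    with Pool(n_workers) as pool:
        hits = sum(pool.map(run_histories, jobs))
    return 4.0 * hits / (n_workers * histories_per_worker)

if __name__ == "__main__":
    print(parallel_pi())
```

The same tally-splitting idea underlies the MPI function mentioned in the abstract, while the OpenMP function instead shares one memory image among threads.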
Massively parallel algorithms for trace-driven cache simulations
NASA Technical Reports Server (NTRS)
Nicol, David M.; Greenberg, Albert G.; Lubachevsky, Boris D.
1991-01-01
Trace-driven cache simulation is central to computer design. A trace is a very long sequence of reference lines from main memory. At the t-th instant, reference x_t is hashed into a set of cache locations, the contents of which are then compared with x_t. If at the t-th instant x_t is not present in the cache, it is said to be a miss, and is loaded into the cache set, possibly forcing the replacement of some other memory line, and making x_t present for the (t+1)-st instant. The problem of parallel simulation of a subtrace of N references directed to a C-line cache set is considered, with the aim of determining which references are misses and related statistics. A simulation method is presented for the Least Recently Used (LRU) policy which, regardless of the set size C, runs in time O(log N) using N processors on the exclusive-read, exclusive-write (EREW) parallel model. A simpler LRU simulation algorithm is given that runs in O(C log N) time using N/log N processors. Timings are presented of the second algorithm's implementation on the MasPar MP-1, a machine with 16384 processors. A broad class of reference-based line replacement policies is considered, which includes LRU as well as the Least Frequently Used and Random replacement policies. A simulation method is presented for any such policy that, on any trace of length N directed to a C-line set, runs in O(C log N) time with high probability using N processors on the EREW model. The algorithms are simple, have very little space overhead, and are well suited for SIMD implementation.
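For reference, the sequential version of the LRU set simulation that the paper parallelizes can be written in a few lines; the O(log N) EREW algorithm itself is the paper's contribution and is not reproduced here.

```python
from collections import OrderedDict

def lru_misses(trace, c: int):
    """Simulate a fully associative C-line cache set under LRU.

    Returns one boolean per reference: True if that reference missed.
    """
    cache = OrderedDict()          # keys kept in recency order, oldest first
    misses = []
    for x in trace:
        if x in cache:
            cache.move_to_end(x)   # hit: x becomes most recently used
            misses.append(False)
        else:
            misses.append(True)    # miss: load x, possibly evicting LRU line
            cache[x] = None
            if len(cache) > c:
                cache.popitem(last=False)
    return misses

# With C = 3: references 1, 2, 3 miss cold, 1 hits, 4 evicts line 2,
# so the final reference to 2 misses again.
flags = lru_misses([1, 2, 3, 1, 4, 2], 3)
```

This serial loop is inherently sequential in t, which is exactly why a sublinear-time parallel formulation is nontrivial.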
High Performance Input/Output for Parallel Computer Systems
NASA Technical Reports Server (NTRS)
Ligon, W. B.
1996-01-01
The goal of our project is to study the I/O characteristics of parallel applications used in Earth Science data processing systems such as Regional Data Centers (RDCs) or EOSDIS. Our approach is to study the runtime behavior of typical programs and the effect of key parameters of the I/O subsystem both under simulation and with direct experimentation on parallel systems. Our three year activity has focused on two items: developing a test bed that facilitates experimentation with parallel I/O, and studying representative programs from the Earth science data processing application domain. The Parallel Virtual File System (PVFS) has been developed for use on a number of platforms including the Tiger Parallel Architecture Workbench (TPAW) simulator, the Intel Paragon, a cluster of DEC Alpha workstations, and the Beowulf system (at CESDIS). PVFS provides considerable flexibility in configuring I/O in a UNIX-like environment. Access to key performance parameters facilitates experimentation. We have studied several key applications from levels 1, 2, and 3 of the typical RDC processing scenario including instrument calibration and navigation, image classification, and numerical modeling codes. We have also considered large-scale scientific database codes used to organize image data.
Long-range interactions and parallel scalability in molecular simulations
NASA Astrophysics Data System (ADS)
Patra, Michael; Hyvönen, Marja T.; Falck, Emma; Sabouri-Ghomi, Mohsen; Vattulainen, Ilpo; Karttunen, Mikko
2007-01-01
Typical biomolecular systems such as cellular membranes, DNA, and protein complexes are highly charged. Thus, efficient and accurate treatment of electrostatic interactions is of great importance in computational modeling of such systems. We have employed the GROMACS simulation package to perform extensive benchmarking of different commonly used electrostatic schemes on a range of computer architectures (Pentium-4, IBM Power 4, and Apple/IBM G5) for single-processor and parallel performance up to 8 nodes. We have also tested the scalability on four different networks: Infiniband, GigaBit Ethernet, Fast Ethernet, and nearly uniform memory architecture, i.e. communication between CPUs is possible by directly reading from or writing to other CPUs' local memory. It turns out that the particle-mesh Ewald method (PME) performs surprisingly well and offers competitive performance unless parallel runs on PC hardware with older network infrastructure are needed. Lipid bilayers of sizes 128, 512 and 2048 lipid molecules were used as the test systems representing typical cases encountered in biomolecular simulations. Our results enable an accurate prediction of computational speed on most current computing systems, both for serial and parallel runs. These results should be helpful in, for example, choosing the most suitable configuration for a small departmental computer cluster.
NASA Astrophysics Data System (ADS)
Wang, J.; Pu, Z. Y.; Fu, S. Y.; Wang, X. G.; Xiao, C. J.; Dunlop, M. W.; Wei, Y.; Bogdanova, Y. V.; Zong, Q. G.; Xie, L.
2011-05-01
Previous theoretical and simulation studies have suggested that anti-parallel and component reconnection can occur simultaneously on the dayside magnetopause. Certain observations have also been reported to support a global, conjoined pattern of magnetic reconnection. Here, we show direct evidence for the conjunction of anti-parallel and component reconnection using coordinated observations of Double Star TC-1 and Cluster under the same IMF condition on 6 April 2004. The global reconnection X-line configuration constructed is in good agreement with the “S-shape” model.
LDRD project final report : hybrid AI/cognitive tactical behavior framework for LVC.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Djordjevich, Donna D.; Xavier, Patrick Gordon; Brannon, Nathan Gregory
This Laboratory Directed Research and Development (LDRD) project sought to develop technology that enhances scenario construction speed, entity behavior robustness, and scalability in Live-Virtual-Constructive (LVC) simulation. We investigated issues in both simulation architecture and behavior modeling. We developed path-planning technology that improves the ability to express intent in the planning task while still permitting an efficient search algorithm. An LVC simulation demonstrated how this enables 'one-click' layout of squad tactical paths, as well as dynamic re-planning for simulated squads and for real and simulated mobile robots. We identified human response latencies that can be exploited in parallel/distributed architectures. We did an experimental study to determine where parallelization would be productive in Umbra-based force-on-force (FOF) simulations. We developed and implemented a data-driven simulation composition approach that solves entity class hierarchy issues and supports assurance of simulation fairness. Finally, we proposed a flexible framework to enable integration of multiple behavior modeling components that model working memory phenomena with different degrees of sophistication.
Molecular-dynamics simulations of self-assembled monolayers (SAM) on parallel computers
NASA Astrophysics Data System (ADS)
Vemparala, Satyavani
The purpose of this dissertation is to investigate the properties of self-assembled monolayers, particularly alkanethiols and poly(ethylene glycol)-terminated alkanethiols. These simulations are based on realistic interatomic potentials and require scalable and portable multiresolution algorithms implemented on parallel computers. Large-scale molecular dynamics simulations of self-assembled alkanethiol monolayer systems have been carried out using an all-atom model involving a million atoms to investigate their structural properties as a function of temperature, lattice spacing and molecular chain-length. Results show that the alkanethiol chains tilt from the surface normal by a collective angle of 25° along the next-nearest-neighbor direction at 300 K. At 350 K the system transforms to a disordered phase characterized by small tilt angle, flexible tilt direction, and random distribution of backbone planes. With increasing lattice spacing, a, the tilt angle increases rapidly from a nearly zero value at a = 4.7 Å to as high as 34° at a = 5.3 Å at 300 K. We also studied the effect of end groups on the tilt structure of SAM films. We characterized the system with respect to temperature, the alkane chain length, lattice spacing, and the length of the end group. We found that the gauche defects were predominant only in the tails, and the gauche defects increased with the temperature and number of EG units. The effect of an electric field on the structure of a poly(ethylene glycol) (PEG)-terminated alkanethiol self-assembled monolayer (SAM) on gold has been studied using a parallel molecular dynamics method. An applied electric field triggers a conformational transition from all-trans to a mostly gauche conformation. The polarity of the electric field has a significant effect on the surface structure of PEG, leading to a profound effect on the hydrophilicity of the surface.
The electric field applied anti-parallel to the surface normal causes a reversible transition to an ordered state in which the oxygen atoms are exposed. On the other hand, an electric field applied in a direction parallel to the surface normal introduces considerable disorder in the system and the oxygen atoms are buried inside.
GROMACS 4.5: a high-throughput and highly parallel open source molecular simulation toolkit
Pronk, Sander; Páll, Szilárd; Schulz, Roland; Larsson, Per; Bjelkmar, Pär; Apostolov, Rossen; Shirts, Michael R.; Smith, Jeremy C.; Kasson, Peter M.; van der Spoel, David; Hess, Berk; Lindahl, Erik
2013-01-01
Motivation: Molecular simulation has historically been a low-throughput technique, but faster computers and increasing amounts of genomic and structural data are changing this by enabling large-scale automated simulation of, for instance, many conformers or mutants of biomolecules with or without a range of ligands. At the same time, advances in performance and scaling now make it possible to model complex biomolecular interaction and function in a manner directly testable by experiment. These applications share a need for fast and efficient software that can be deployed on massive scale in clusters, web servers, distributed computing or cloud resources. Results: Here, we present a range of new simulation algorithms and features developed during the past 4 years, leading up to the GROMACS 4.5 software package. The software now automatically handles wide classes of biomolecules, such as proteins, nucleic acids and lipids, and comes with all commonly used force fields for these molecules built-in. GROMACS supports several implicit solvent models, as well as new free-energy algorithms, and the software now uses multithreading for efficient parallelization even on low-end systems, including Windows-based workstations. Together with hand-tuned assembly kernels and state-of-the-art parallelization, this provides extremely high performance and cost efficiency for high-throughput as well as massively parallel simulations. Availability: GROMACS is an open source and free software available from http://www.gromacs.org. Contact: erik.lindahl@scilifelab.se Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23407358
Optimized Hypervisor Scheduler for Parallel Discrete Event Simulations on Virtual Machine Platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoginath, Srikanth B; Perumalla, Kalyan S
2013-01-01
With the advent of virtual machine (VM)-based platforms for parallel computing, it is now possible to execute parallel discrete event simulations (PDES) over multiple virtual machines, in contrast to executing in native mode directly over hardware as has traditionally been done over the past decades. While mature VM-based parallel systems now offer new, compelling benefits such as serviceability, dynamic reconfigurability and overall cost effectiveness, the runtime performance of parallel applications can be significantly affected. In particular, most VM-based platforms are optimized for general workloads, but PDES execution exhibits unique dynamics significantly different from other workloads. Here we first present results from experiments that highlight the gross deterioration of the runtime performance of VM-based PDES simulations when executed using traditional VM schedulers, quantitatively showing the bad scaling properties of the scheduler as the number of VMs is increased. The mismatch is fundamental in nature in the sense that any fairness-based VM scheduler implementation would exhibit it with PDES runs. We also present a new scheduler optimized specifically for PDES applications, and describe its design and implementation. Experimental results obtained from running PDES benchmarks (PHOLD and vehicular traffic simulations) over VMs show over an order of magnitude improvement in the run time of the PDES-optimized scheduler relative to the regular VM scheduler, with an over 20-fold reduction in the run time of simulations using up to 64 VMs. The observations and results are timely in the context of emerging systems such as cloud platforms and VM-based high performance computing installations, highlighting to the community the need for PDES-specific support, and the feasibility of significantly reducing the runtime overhead for scalable PDES on VM platforms.
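At the core of any PDES engine is a timestamped future event list processed in time order. A toy sequential sketch with a PHOLD-like self-rescheduling workload (PHOLD is the benchmark named above; this loop is illustrative only and is not the paper's hypervisor scheduler):

```python
import heapq

def run_des(initial_events, handler, end_time):
    """Minimal sequential discrete-event loop, the building block of PDES.

    Events are (time, id, payload) tuples kept in a min-heap; the handler
    processes one event and returns any new events to schedule.
    """
    fel = list(initial_events)          # future event list
    heapq.heapify(fel)
    processed = []
    while fel and fel[0][0] <= end_time:
        t, eid, payload = heapq.heappop(fel)
        processed.append((t, eid))
        for ev in handler(t, eid, payload):
            heapq.heappush(fel, ev)
    return processed

# PHOLD-like toy workload: each event reschedules itself one time unit later
def handler(t, eid, payload):
    return [(t + 1.0, eid, payload)] if t + 1.0 <= 10.0 else []

log = run_des([(0.0, 0, None), (0.5, 1, None)], handler, 10.0)
```

A parallel version partitions the event list across logical processes, which is exactly the workload whose timing assumptions a fairness-based VM scheduler disrupts.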
Parallel Anisotropic Tetrahedral Adaptation
NASA Technical Reports Server (NTRS)
Park, Michael A.; Darmofal, David L.
2008-01-01
An adaptive method that robustly produces high aspect ratio tetrahedra to a general 3D metric specification without introducing hybrid semi-structured regions is presented. The elemental operators and higher-level logic are described with their respective domain-decomposed parallelizations. An anisotropic tetrahedral grid adaptation scheme is demonstrated for 1000:1 stretching for a simple cube geometry. This form of adaptation is applicable to more complex domain boundaries via a cut-cell approach, as demonstrated by a parallel 3D supersonic simulation of a complex fighter aircraft. To avoid the assumptions and approximations required to form a metric to specify adaptation, an approach is introduced that directly evaluates interpolation error. The grid is adapted to reduce and equidistribute this interpolation error without the use of an intervening anisotropic metric. Direct interpolation error adaptation is illustrated for 1D and 3D domains.
Implementation of unsteady sampling procedures for the parallel direct simulation Monte Carlo method
NASA Astrophysics Data System (ADS)
Cave, H. M.; Tseng, K.-C.; Wu, J.-S.; Jermy, M. C.; Huang, J.-C.; Krumdieck, S. P.
2008-06-01
An unsteady sampling routine for a general parallel direct simulation Monte Carlo method called PDSC is introduced, allowing the simulation of time-dependent flow problems in the near continuum range. A post-processing procedure called the DSMC rapid ensemble averaging method (DREAM) is developed to improve the statistical scatter in the results while minimising both memory and simulation time. This method builds an ensemble average of repeated runs over a small number of sampling intervals prior to the sampling point of interest by restarting the flow using either a Maxwellian distribution based on macroscopic properties for near equilibrium flows (DREAM-I) or the instantaneous particle data output by the original unsteady sampling of PDSC for strongly non-equilibrium flows (DREAM-II). The method is validated by simulating shock tube flow and the development of simple Couette flow. Unsteady PDSC is found to accurately predict the flow field in both cases with significantly reduced run-times relative to the single processor code, and DREAM greatly reduces the statistical scatter in the results while maintaining accurate particle velocity distributions. Simulations are then conducted of two applications involving the interaction of shocks over wedges. The results of these simulations are compared to experimental data and simulations from the literature where these are available. In general, it was found that 10 ensembled runs of DREAM processing could reduce the statistical uncertainty in the raw PDSC data by 2.5-3.3 times, based on the limited number of cases in the present study.
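The DREAM idea of averaging an ensemble of repeated runs to suppress statistical scatter can be sketched in miniature as follows (a toy scalar "run" stands in for a DSMC sampling interval; all names are illustrative):

```python
import random

def noisy_run(true_value, scatter, rng):
    # One toy "simulation run": the true value plus statistical scatter.
    return true_value + rng.gauss(0.0, scatter)

def ensemble_average(true_value, scatter, n_runs, seed=0):
    # DREAM-style post-processing in miniature: averaging n_runs
    # independent repeats shrinks the scatter roughly as 1/sqrt(n_runs).
    rng = random.Random(seed)
    runs = [noisy_run(true_value, scatter, rng) for _ in range(n_runs)]
    return sum(runs) / n_runs
```

With 10 repeats, the scatter drops by about sqrt(10) ≈ 3.2, consistent with the 2.5-3.3 times reduction reported in the abstract.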
PLASMA TURBULENCE AND KINETIC INSTABILITIES AT ION SCALES IN THE EXPANDING SOLAR WIND
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hellinger, Petr; Trávnícek, Pavel M.; Matteini, Lorenzo
The relationship between decaying strong turbulence and kinetic instabilities in a slowly expanding plasma is investigated using two-dimensional (2D) hybrid expanding box simulations. We impose an initial ambient magnetic field perpendicular to the simulation box, and we start with a spectrum of large-scale, linearly polarized, random-phase Alfvénic fluctuations that have energy equipartition between kinetic and magnetic fluctuations and vanishing correlation between the two fields. A turbulent cascade rapidly develops; magnetic field fluctuations exhibit a power-law spectrum at large scales and a steeper spectrum at ion scales. The turbulent cascade leads to an overall anisotropic proton heating: protons are heated in the perpendicular direction and, initially, also in the parallel direction. The imposed expansion leads to the generation of a large parallel proton temperature anisotropy, which is at later stages partly reduced by turbulence. The turbulent heating is not sufficient to overcome the expansion-driven perpendicular cooling, and the system eventually drives the oblique firehose instability in the form of localized nonlinear wave packets which efficiently reduce the parallel temperature anisotropy. This work demonstrates that kinetic instabilities may coexist with strong plasma turbulence even in a constrained 2D regime.
Integrated Parallel Reception, Excitation, and Shimming (iPRES)
Han, Hui; Song, Allen W.; Truong, Trong-Kha
2013-01-01
Purpose To develop a new concept for a hardware platform that enables integrated parallel reception, excitation, and shimming (iPRES). Theory This concept uses a single coil array rather than separate arrays for parallel excitation/reception and B0 shimming. It relies on a novel design that allows a radiofrequency current (for excitation/reception) and a direct current (for B0 shimming) to coexist independently in the same coil. Methods Proof-of-concept B0 shimming experiments were performed with a two-coil array in a phantom, whereas B0 shimming simulations were performed with a 48-coil array in the human brain. Results Our experiments show that individually optimized direct currents applied in each coil can reduce the B0 root-mean-square error by 62–81% and minimize distortions in echo-planar images. The simulations show that dynamic shimming with the 48-coil iPRES array can reduce the B0 root-mean-square error in the prefrontal and temporal regions by 66–79% as compared to static 2nd-order spherical harmonic shimming and by 12–23% as compared to dynamic shimming with a 48-coil conventional shim array. Conclusion Our results demonstrate the feasibility of the iPRES concept to perform parallel excitation/reception and B0 shimming with a unified coil system as well as its promise for in vivo applications. PMID:23629974
Besozzi, Daniela; Pescini, Dario; Mauri, Giancarlo
2014-01-01
Tau-leaping is a stochastic simulation algorithm that efficiently reconstructs the temporal evolution of biological systems, modeled according to the stochastic formulation of chemical kinetics. The analysis of dynamical properties of these systems in physiological and perturbed conditions usually requires the execution of a large number of simulations, leading to high computational costs. Since each simulation can be executed independently of the others, a massive parallelization of tau-leaping can bring relevant reductions in the overall running time. The emerging field of General Purpose Graphics Processing Units (GPGPU) provides power-efficient high-performance computing at a relatively low cost. In this work we introduce cuTauLeaping, a stochastic simulator of biological systems that makes use of GPGPU computing to execute multiple parallel tau-leaping simulations, fully exploiting Nvidia's Fermi GPU architecture. We show how a considerable computational speedup is achieved on the GPU by partitioning the execution of tau-leaping into multiple separated phases, and we describe how to avoid some implementation pitfalls related to the scarcity of memory resources on the GPU streaming multiprocessors. Our results show that cuTauLeaping largely outperforms the CPU-based tau-leaping implementation when the number of parallel simulations increases, with a break-even point depending directly on the size of the biological system and on the complexity of its emergent dynamics. In particular, cuTauLeaping is exploited to investigate the probability distribution of bistable states in the Schlögl model, and to carry out a bidimensional parameter sweep analysis to study the oscillatory regimes of the Ras/cAMP/PKA pathway in S. cerevisiae. PMID:24663957
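A serial reference version of the basic (non-adaptive) tau-leaping update is easy to state; massive parallelization as in cuTauLeaping amounts to running many independent instances of this loop with different seeds. A hedged sketch (the Poisson sampler and the birth-death example are generic illustrations, not code from cuTauLeaping):

```python
import math
import random

def poisson(lam, rng):
    # Knuth's method; adequate for the small per-leap means used here.
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def tau_leap(x, rates, stoich, propensity, tau, steps, seed=0):
    # Basic tau-leaping: in each leap of length tau, reaction channel j
    # fires a Poisson-distributed number of times with mean a_j(x) * tau.
    rng = random.Random(seed)
    for _ in range(steps):
        a = propensity(x, rates)
        fires = [poisson(aj * tau, rng) for aj in a]
        x = [xi + sum(v[i] * n for v, n in zip(stoich, fires))
             for i, xi in enumerate(x)]
        x = [max(0, xi) for xi in x]  # naive guard against negative counts
    return x
```

For a birth-death system with birth rate 10 and per-molecule death rate 0.1, the population fluctuates around the steady-state mean of 100.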
Limits to high-speed simulations of spiking neural networks using general-purpose computers.
Zenke, Friedemann; Gerstner, Wulfram
2014-01-01
To understand how the central nervous system performs computations using recurrent neuronal circuitry, simulations have become an indispensable tool for theoretical neuroscience. To study neuronal circuits and their ability to self-organize, increasing attention has been directed toward synaptic plasticity. In particular spike-timing-dependent plasticity (STDP) creates specific demands for simulations of spiking neural networks. On the one hand a high temporal resolution is required to capture the millisecond timescale of typical STDP windows. On the other hand network simulations have to evolve over hours up to days, to capture the timescale of long-term plasticity. To do this efficiently, fast simulation speed is the crucial ingredient rather than large neuron numbers. Using different medium-sized network models consisting of several thousands of neurons and off-the-shelf hardware, we compare the simulation speed of the simulators: Brian, NEST and Neuron as well as our own simulator Auryn. Our results show that real-time simulations of different plastic network models are possible in parallel simulations in which numerical precision is not a primary concern. Even so, the speed-up margin of parallelism is limited and boosting simulation speeds beyond one tenth of real-time is difficult. By profiling simulation code we show that the run times of typical plastic network simulations encounter a hard boundary. This limit is partly due to latencies in the inter-process communications and thus cannot be overcome by increased parallelism. Overall, these results show that to study plasticity in medium-sized spiking neural networks, adequate simulation tools are readily available which run efficiently on small clusters. However, to run simulations substantially faster than real-time, special hardware is a prerequisite.
Particle-in-cell simulations of magnetically driven reconnection using laser-powered capacitor coils
NASA Astrophysics Data System (ADS)
Huang, Kai; Lu, Quanming; Gao, Lan; Ji, Hantao; Wang, Xueyi; Fan, Feibin
2018-05-01
In this paper, we propose an experimental scheme to realize magnetically driven reconnection. Two laser beams are focused on a capacitor-coil target, driving strong currents in two parallel circular coils. Magnetic reconnection occurs between the two magnetic bubbles created by the currents in the two parallel circular coils. A two-dimensional particle-in-cell simulation model in cylindrical coordinates is used to investigate such a process, and the simulations are performed in the (r, z) plane. The results show that as the currents in the two coils increase, the associated magnetic bubbles expand and a current sheet is formed between the two bubbles. Magnetic reconnection occurs when the current sheet is sufficiently thin. A quadrupole structure of the magnetic field in the θ direction (Bθ) is generated in the diffusion region, and a strong electron current along the r direction (Jer) is also formed due to the existence of the high-speed electron flow away from the X line in the center of the outflow region. Because the X line is a circle along the θ direction, the convergence of the plasma flow around r = 0 leads to an asymmetry of Jer and Bθ between the two outflow regions of magnetic reconnection.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Easy, L., E-mail: le590@york.ac.uk; CCFE, Culham Science Centre, Abingdon OX14 3DB; Militello, F.
2016-01-15
The propagation of filaments in the Scrape Off Layer (SOL) of tokamaks largely determines the plasma profiles in the region. In a conduction limited SOL, parallel temperature gradients are expected, such that the resistance to parallel currents is greater at the target than further upstream. Since the perpendicular motion of an isolated filament is largely determined by the balance of currents that flow through it, this may be expected to affect filament transport. 3D simulations have thus been used to study the influence of enhanced parallel resistivity on the dynamics of filaments. Filaments with the smallest perpendicular length scales, which were inertially limited at low resistivity (meaning that polarization rather than parallel currents determines their radial velocities), were unaffected by resistivity. For larger filaments, faster velocities were produced at higher resistivities due to two mechanisms. First, parallel currents were reduced and polarization currents were enhanced, meaning that the inertial regime extended to larger filaments; second, a potential difference formed along the parallel direction so that higher potentials were produced in the region of the filament for the same amount of current to flow into the sheath. These results indicate that broader SOL profiles could be produced at higher resistivities.
Qiu, Gongzhe
2017-01-01
Due to the symmetry of conventional periodic-permanent-magnet electromagnetic acoustic transducers (PPM EMATs), two shear (SH) waves are generated and propagate simultaneously in opposite directions, which makes signal recognition and interpretation complicated. Thus, this work presents a new SH-wave PPM EMAT design that rotates the parallel line sources to focus the wave beam in a single direction. The theoretical model of distributed line sources was deduced first, and the effects of parameters such as the inner coil width, the adjacent line-source spacing, and the angle between parallel line sources on SH-wave focusing and directivity were studied with the help of 3D FEM. Employing the proposed PPM EMATs, experiments were carried out to verify the reliability of the FEM simulation. The results indicate that rotating the parallel line sources can strengthen the wave on the closing side of the line sources, and that decreasing the inner coil width and the adjacent line-source spacing can improve the amplitude and directivity of the signals excited by the transducers. Compared with traditional PPM EMATs, both the unidirectional excitation capacity and the directivity of the proposed PPM EMATs are improved significantly. PMID:29186790
Pronk, Sander; Pouya, Iman; Lundborg, Magnus; Rotskoff, Grant; Wesén, Björn; Kasson, Peter M; Lindahl, Erik
2015-06-09
Computational chemistry and other simulation fields are critically dependent on computing resources, but few problems scale efficiently to the hundreds of thousands of processors available in current supercomputers, particularly for molecular dynamics. This has turned into a bottleneck as new hardware generations primarily provide more processing units rather than making individual units much faster, which simulation applications are addressing by increasingly focusing on sampling with algorithms such as free-energy perturbation, Markov state modeling, metadynamics, or milestoning. All of these rely on combining results from multiple simulations into a single observation. They are potentially powerful approaches that aim to predict experimental observables directly, but this comes at the expense of added complexity in selecting sampling strategies and keeping track of dozens to thousands of simulations and their dependencies. Here, we describe how the distributed execution framework Copernicus allows the expression of such algorithms in generic workflows: dataflow programs. Because dataflow algorithms explicitly state the dependencies of each constituent part, algorithms only need to be described at a conceptual level, after which the execution is maximally parallel. The fully automated execution facilitates the optimization of these algorithms with adaptive sampling, where undersampled regions are automatically detected and targeted without user intervention. We show how several such algorithms can be formulated for computational chemistry problems, and how they are executed efficiently with many loosely coupled simulations using either distributed or parallel resources with Copernicus.
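The core of a dataflow workflow engine, in which declared dependencies determine which tasks are ready to run concurrently, can be sketched in a few lines (this is a generic illustration of the dataflow idea, not the Copernicus API):

```python
def dataflow_run(tasks, deps):
    # tasks: name -> zero-argument callable producing that task's result.
    # deps:  name -> set of prerequisite task names.
    # Executes in "waves": every task whose dependencies are satisfied is
    # ready at once, so each wave could run maximally parallel.
    done, results, waves = set(), {}, []
    while len(done) < len(tasks):
        ready = [t for t in tasks if t not in done and deps.get(t, set()) <= done]
        if not ready:
            raise ValueError("cyclic or unsatisfiable dependencies")
        waves.append(sorted(ready))
        for t in ready:
            results[t] = tasks[t]()  # a real engine would dispatch these concurrently
            done.add(t)
    return results, waves
```

In an adaptive-sampling workflow, a final analysis task would inspect the results of the simulation tasks and emit a new wave of targeted simulations, closing the loop without user intervention.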
Direct Machining of Low-Loss THz Waveguide Components With an RF Choke.
Lewis, Samantha M; Nanni, Emilio A; Temkin, Richard J
2014-12-01
We present results for the successful fabrication of low-loss THz metallic waveguide components using direct machining with a CNC end mill. The approach uses a split-block machining process with the addition of an RF choke running parallel to the waveguide. The choke greatly reduces coupling to the parasitic mode of the parallel-plate waveguide produced by the split-block. This method has demonstrated loss as low as 0.2 dB/cm at 280 GHz for a copper WR-3 waveguide. It has also been used in the fabrication of 3 and 10 dB directional couplers in brass, demonstrating excellent agreement with design simulations from 240-260 GHz. The method may be adapted to structures with features on the order of 200 μm.
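For scale, the choke and waveguide features at these frequencies can be related to the free-space wavelength; RF chokes of this kind are typically quarter-wave structures. The calculation below is a generic back-of-envelope estimate, not a dimension taken from the paper:

```python
def quarter_wave_depth_um(freq_hz):
    # Free-space quarter wavelength, in micrometers, at a given frequency.
    c = 299_792_458.0  # speed of light in vacuum, m/s
    return c / (4.0 * freq_hz) * 1e6
```

At 280 GHz this gives roughly 268 μm, the same order as the 200 μm feature sizes the authors cite as machinable.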
NASA Astrophysics Data System (ADS)
Engdahl, N. B.
2016-12-01
Mixing rates in porous media have been a heavily researched topic in recent years, covering analytic, random, and structured fields. However, there are some persistent assumptions and common features of these models that raise questions about the generality of the results. One of these commonalities is the orientation of the flow field with respect to the heterogeneity structure: the two are almost always defined to be parallel to each other when there is an elongated axis of permeability correlation. Given the vastly different tortuosities for flow parallel to bedding and flow transverse to bedding, this assumption of parallel orientation may have significant effects on reaction rates when natural flows deviate from this assumed setting. This study investigates the role of orientation on mixing and reaction rates in multi-scale, 3D heterogeneous porous media with varying degrees of anisotropy in the correlation structure. Ten realizations of a small flow field, with three anisotropy levels, were simulated for flow parallel and transverse to bedding. Transport was simulated in each model with an advective-diffusive random walk, and reactions were simulated using the chemical Langevin equation. The reaction system is a vertically segregated, transverse mixing problem between two mobile reactants. The results show that different transport behaviors and reaction rates are obtained by simply rotating the direction of flow relative to bedding, even when the net flux in both directions is the same. This kind of behavior was observed for three different weightings of the initial condition: 1) uniform, 2) flux-based, and 3) travel-time based. The different schemes resulted in 20-50% more mass formation in the transverse direction than in the longitudinal direction. The greatest variability in mass was observed for the flux weights, and these were proportionate to the level of anisotropy.
The implications of this study are that flux or travel-time weights do not provide any guarantee of a fair comparison in this kind of mixing scenario and that the role of directional tendencies on reaction rates can be significant. Further, it may be necessary to include anisotropy in future upscaled models to create robust methods that give representative reaction rates for any flow direction relative to geologic bedding.
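The advective-diffusive random walk used for transport above has a compact standard form: each particle drifts with the local velocity and takes a Gaussian jump of variance 2·D·Δt per step. A minimal 1D sketch under idealized assumptions (uniform velocity, constant diffusion coefficient; not the study's 3D multi-scale fields):

```python
import math
import random

def random_walk(n_particles, v, D, dt, steps, seed=0):
    # x_{n+1} = x_n + v*dt + sqrt(2*D*dt) * xi, with xi ~ N(0, 1).
    rng = random.Random(seed)
    xs = [0.0] * n_particles
    for _ in range(steps):
        xs = [x + v * dt + math.sqrt(2.0 * D * dt) * rng.gauss(0.0, 1.0)
              for x in xs]
    return xs
```

After time T = steps*dt the particle cloud has mean v*T and variance 2*D*T; anisotropy and flow orientation enter through how v and D vary in space.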
Effect of alignment of easy axes on dynamic magnetization of immobilized magnetic nanoparticles
NASA Astrophysics Data System (ADS)
Yoshida, Takashi; Matsugi, Yuki; Tsujimura, Naotaka; Sasayama, Teruyoshi; Enpuku, Keiji; Viereck, Thilo; Schilling, Meinhard; Ludwig, Frank
2017-04-01
In some biomedical applications of magnetic nanoparticles (MNPs), the particles are physically immobilized. In this study, we explore the effect of the alignment of the magnetic easy axes on the dynamic magnetization of immobilized MNPs under an AC excitation field. We prepared three immobilized MNP samples: (1) a sample in which the easy axes are randomly oriented, (2) a parallel-aligned sample in which the easy axes are parallel to the AC field, and (3) an orthogonally aligned sample in which the easy axes are perpendicular to the AC field. First, we show that the parallel-aligned sample has the largest hysteresis in the magnetization curve and the largest harmonic magnetization spectra, followed by the randomly oriented and orthogonally aligned samples. For example, a 1.6-fold increase was observed in the area of the hysteresis loop of the parallel-aligned sample compared to that of the randomly oriented sample. To discuss the experimental results quantitatively, we perform a numerical simulation based on a Fokker-Planck equation, in which probability distributions for the directions of the easy axes are taken into account to model the prepared MNP samples. We obtained quantitative agreement between experiment and simulation. These results indicate that the dynamic magnetization of immobilized MNPs is significantly affected by the alignment of the easy axes.
Multiphase three-dimensional direct numerical simulation of a rotating impeller with code Blue
NASA Astrophysics Data System (ADS)
Kahouadji, Lyes; Shin, Seungwon; Chergui, Jalel; Juric, Damir; Craster, Richard V.; Matar, Omar K.
2017-11-01
The flow driven by a rotating impeller inside an open fixed cylindrical cavity is simulated using code Blue, a solver for massively-parallel simulations of fully three-dimensional multiphase flows. The impeller is composed of four blades at a 45° inclination, all attached to a central hub and tube stem. In Blue, solid forms are constructed through the definition of immersed objects via a distance function that accounts for the object's interaction with the flow for both single and two-phase flows. We use a moving frame technique for imposing translation and/or rotation. The variation of the Reynolds number, the clearance, and the tank aspect ratio are considered, and we highlight the importance of the confinement ratio (blade radius versus tank radius) in the mixing process. Blue uses a domain decomposition strategy for parallelization with MPI. The fluid interface solver is based on a parallel implementation of a hybrid front-tracking/level-set method designed for complex interfacial topological changes. Parallel GMRES and multigrid iterative solvers are applied to the linear systems arising from the implicit solution for the fluid velocities and pressure in the presence of strong density and viscosity discontinuities across fluid phases. EPSRC, UK, MEMPHIS program Grant (EP/K003976/1), RAEng Research Chair (OKM).
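Defining immersed solids through a distance function, as described above, reduces to evaluating a signed distance at each grid point: negative inside the object, positive in the fluid. For a spherical object the function is one line (a generic illustration, not code Blue's implementation):

```python
def sphere_distance(p, center, radius):
    # Signed distance to a sphere: < 0 inside the solid, > 0 in the fluid,
    # = 0 on the immersed boundary.
    d = sum((pi - ci) ** 2 for pi, ci in zip(p, center)) ** 0.5
    return d - radius
```

A solver can then mask solid regions with `sphere_distance(p, c, r) < 0` and apply the pseudo body-forces there; more complex shapes such as an impeller can combine such primitives, e.g. by taking minima of distances.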
NASA Astrophysics Data System (ADS)
Han, Qiguo; Zhu, Kai; Shi, Wenming; Wu, Kuayu; Chen, Kai
2018-02-01
In order to solve the problem of low voltage ride through (LVRT) for the variable-frequency drives (VFDs) of major auxiliary equipment in thermal power plants, a scheme is put forward in which a supercapacitor is paralleled into the DC link of the VFD; furthermore, two solutions, direct parallel support and voltage-boost parallel support of the supercapacitor, are proposed. The capacitor values for the relevant motor loads are calculated according to the law of energy conservation, and they are verified by Matlab simulation. Finally, a test prototype is set up, and the test results prove the feasibility of the proposed schemes.
Simplified Parallel Domain Traversal
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erickson III, David J
2011-01-01
Many data-intensive scientific analysis techniques require global domain traversal, which over the years has been a bottleneck for efficient parallelization across distributed-memory architectures. Inspired by MapReduce and other simplified parallel programming approaches, we have designed DStep, a flexible system that greatly simplifies efficient parallelization of domain traversal techniques at scale. In order to deliver both simplicity to users as well as scalability on HPC platforms, we introduce a novel two-tiered communication architecture for managing and exploiting asynchronous communication loads. We also integrate our design with advanced parallel I/O techniques that operate directly on native simulation output. We demonstrate DStep by performing teleconnection analysis across ensemble runs of terascale atmospheric CO₂ and climate data, and we show scalability results on up to 65,536 IBM BlueGene/P cores.
Multilevel summation method for electrostatic force evaluation.
Hardy, David J; Wu, Zhe; Phillips, James C; Stone, John E; Skeel, Robert D; Schulten, Klaus
2015-02-10
The multilevel summation method (MSM) offers an efficient algorithm utilizing convolution for evaluating long-range forces arising in molecular dynamics simulations. Shifting the balance of computation and communication, MSM provides key advantages over the ubiquitous particle–mesh Ewald (PME) method, offering better scaling on parallel computers and permitting more modeling flexibility, with support not only for periodic systems, as in PME, but also for semiperiodic and nonperiodic systems. The version of MSM available in the simulation program NAMD is described, and its performance and accuracy are compared with the PME method. The accuracy feasible for MSM in practical applications reproduces PME results for water property calculations of density, diffusion constant, dielectric constant, surface tension, radial distribution function, and distance-dependent Kirkwood factor, even though the numerical accuracy of PME is higher than that of MSM. Excellent agreement between MSM and PME is also found for interface potentials of air–water and membrane–water interfaces, where long-range Coulombic interactions are crucial. Applications also demonstrate the suitability of MSM for systems with semiperiodic and nonperiodic boundaries. For this purpose, simulations have been performed with periodic boundaries along directions parallel to a membrane surface but not along the surface normal, yielding membrane pore formation induced by an imbalance of charge across the membrane. Using a similar semiperiodic boundary condition, ion conduction through a graphene nanopore driven by an ion gradient has been simulated. Furthermore, proteins have been simulated inside a single spherical water droplet. Finally, parallel scalability results show the ability of MSM to outperform PME when scaling a system of modest size (less than 100 K atoms) to over a thousand processors, demonstrating the suitability of MSM for large-scale parallel simulation.
Parallelization and automatic data distribution for nuclear reactor simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liebrock, L.M.
1997-07-01
Detailed attempts at realistic nuclear reactor simulations currently take many times real time to execute on high performance workstations. Even the fastest sequential machine cannot run these simulations fast enough to ensure that the best corrective measure is used during a nuclear accident to prevent a minor malfunction from becoming a major catastrophe. Since sequential computers have nearly reached the speed of light barrier, these simulations will have to be run in parallel to make significant improvements in speed. In physical reactor plants, parallelism abounds. Fluids flow, controls change, and reactions occur in parallel, with only adjacent components directly affecting each other. These do not occur in the sequentialized manner, with global instantaneous effects, that is often used in simulators. Development of parallel algorithms that more closely approximate the real-world operation of a reactor may, in addition to speeding up the simulations, actually improve the accuracy and reliability of the predictions generated. Three types of parallel architecture (shared memory machines, distributed memory multicomputers, and distributed networks) are briefly reviewed as targets for parallelization of nuclear reactor simulation. Various parallelization models (loop-based model, shared memory model, functional model, data parallel model, and a combined functional and data parallel model) are discussed along with their advantages and disadvantages for nuclear reactor simulation. A variety of tools are introduced for each of the models. Emphasis is placed on the data parallel model as the primary focus for two-phase flow simulation. Tools to support data parallel programming for multiple component applications and special parallelization considerations are also discussed.
A scalable parallel black oil simulator on distributed memory parallel computers
NASA Astrophysics Data System (ADS)
Wang, Kun; Liu, Hui; Chen, Zhangxin
2015-11-01
This paper presents our work on developing a parallel black oil simulator for distributed memory computers based on our in-house parallel platform. The parallel simulator is designed to overcome the performance issues of common simulators that are implemented for personal computers and workstations. The finite difference method is applied to discretize the black oil model. In addition, some advanced techniques are employed to strengthen the robustness and parallel scalability of the simulator, including an inexact Newton method, matrix decoupling methods, and algebraic multigrid methods. A new multi-stage preconditioner is proposed to accelerate the solution of linear systems from the Newton methods. Numerical experiments show that our simulator is scalable and efficient, and is capable of simulating extremely large-scale black oil problems with tens of millions of grid blocks using thousands of MPI processes on parallel computers.
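An inexact Newton method, as mentioned above, solves each Newton linear system J(x)s = -F(x) only approximately, to a relative tolerance (the forcing term), since the iterative inner solver need not converge fully for the outer iteration to converge. A small 2x2 sketch, with a plain steepest-descent inner solver standing in for the preconditioned Krylov and multigrid solvers such simulators actually use (all names are illustrative):

```python
def solve_inexact(J, b, eta, max_inner=500):
    # Crude iterative inner solve of J s = b (2x2 here): steepest descent
    # on the normal equations, stopped at relative residual eta, which is
    # the "inexact" part of an inexact Newton method.
    s = [0.0, 0.0]
    nb = (b[0] ** 2 + b[1] ** 2) ** 0.5
    for _ in range(max_inner):
        r = [J[0][0] * s[0] + J[0][1] * s[1] - b[0],
             J[1][0] * s[0] + J[1][1] * s[1] - b[1]]
        if (r[0] ** 2 + r[1] ** 2) ** 0.5 <= eta * nb:
            break
        g = [J[0][0] * r[0] + J[1][0] * r[1],  # g = J^T r, the gradient
             J[0][1] * r[0] + J[1][1] * r[1]]
        Jg = [J[0][0] * g[0] + J[0][1] * g[1],
              J[1][0] * g[0] + J[1][1] * g[1]]
        denom = Jg[0] ** 2 + Jg[1] ** 2
        if denom == 0.0:
            break
        alpha = (g[0] ** 2 + g[1] ** 2) / denom  # exact line search step
        s = [s[0] - alpha * g[0], s[1] - alpha * g[1]]
    return s

def inexact_newton(F, J, x, eta=0.1, tol=1e-8, max_outer=50):
    # Outer Newton loop: each step solves J(x) s = -F(x) only approximately.
    for _ in range(max_outer):
        fx = F(x)
        if (fx[0] ** 2 + fx[1] ** 2) ** 0.5 < tol:
            break
        s = solve_inexact(J(x), [-fx[0], -fx[1]], eta)
        x = [x[0] + s[0], x[1] + s[1]]
    return x
```

With the forcing term eta = 0.1, each outer step cuts the residual by roughly an order of magnitude once near the root, trading cheaper inner solves for a few extra outer iterations.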
NASA Astrophysics Data System (ADS)
Han, Fengshan; Wu, Xinli; Li, Xia; Zhu, Dekang
2018-02-01
Zonal disintegration has been observed in the rock surrounding deep mining roadways. It seriously affects the safety of mining and underground engineering and may lead to disasters. In the rock mass surrounding deep mining roadways, tectonic stresses dominate and the horizontal stress is much greater than the vertical stress; when the direction of the maximum principal stress is parallel to the roadway axis, this is the main cause of the zonal disintegration phenomenon. ABAQUS software is used to systematically simulate the three-dimensional formation process of roadway rupture, and the study shows that when the maximum principal stress in deep underground mining is directed along the roadway axis, the zonal disintegration phenomenon is successfully reproduced. The numerical simulation shows that ABAQUS can reproduce both the zonal disintegration phenomenon and the formation process of damage in the surrounding rock, which has important practical engineering significance.
Sliding mode control of direct coupled interleaved boost converter for fuel cell
NASA Astrophysics Data System (ADS)
Wang, W. Y.; Ding, Y. H.; Ke, X.; Ma, X.
2017-12-01
A three-phase direct coupled interleaved boost converter (TP-DIBC) is recommended in this paper. This converter exhibits a small unbalance in current sharing among its branches. An adaptive sliding mode control (SMC) law is designed for the TP-DIBC with two aims: 1) to reduce the output voltage and inductor current ripple and regulate the output voltage tightly, and 2) to share the total current carried by the direct coupled interleaved boost converter (DIBC) equally among the parallel branches. The efficacy and robustness of the proposed TP-DIBC and adaptive SMC are confirmed via computer simulations using Matlab SimPowerSystems tools. The simulation results are in line with expectations.
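The sliding-mode idea (switch the converter from the sign of a sliding surface) can be sketched on a single averaged boost converter regulated in current mode. All parameter values below are illustrative assumptions, not the TP-DIBC model or the adaptive law from the paper:

```python
# Bang-bang (sliding mode) current control of an averaged boost converter.
# Hypothetical parameters: 12 V input boosted to a 24 V target on a 10 ohm load.
Vin, v_ref, R = 12.0, 24.0, 10.0      # input voltage (V), target (V), load (ohm)
L_ind, C_cap = 1e-3, 1e-4             # inductance (H), capacitance (F)
i_ref = v_ref**2 / (R * Vin)          # current reference from power balance
dt, i, v = 1e-6, 0.0, Vin             # Euler step, inductor current, output voltage

for _ in range(50_000):               # 50 ms of simulated time
    s = i - i_ref                     # sliding surface: inductor-current error
    d = 1.0 if s < 0 else 0.0         # switch on (charge inductor) while i < i_ref
    i += dt * (Vin - (1 - d) * v) / L_ind
    v += dt * ((1 - d) * i - v / R) / C_cap
```

On the sliding surface the current chatters around `i_ref` and the output voltage settles at the target fixed by the power balance; the adaptive law of the paper additionally shapes this switching to balance the parallel branches.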
A parallel direct-forcing fictitious domain method for simulating microswimmers
NASA Astrophysics Data System (ADS)
Gao, Tong; Lin, Zhaowu
2017-11-01
We present a 3D parallel direct-forcing fictitious domain method for simulating swimming micro-organisms at small Reynolds numbers. We treat the motile micro-swimmers as spherical rigid particles using the ``Squirmer'' model. The particle dynamics are solved on moving Lagrangian meshes that overlie a fixed Eulerian mesh for solving the fluid motion, and the momentum exchange between the two phases is resolved by distributing pseudo body-forces over the particle interior regions, which constrain the background fictitious fluids to follow the particle movement. While the solid and fluid subproblems are solved separately, no inner iterations are required to enforce numerical convergence. We demonstrate the accuracy and robustness of the method by comparing our results with existing analytical and numerical studies for various cases of single-particle dynamics and particle-particle interactions. We also perform a series of numerical explorations to obtain statistical and rheological measurements to characterize the dynamics and structures of Squirmer suspensions. NSF DMS 1619960.
Simulation of an array-based neural net model
NASA Technical Reports Server (NTRS)
Barnden, John A.
1987-01-01
Research in cognitive science suggests that much of cognition involves the rapid manipulation of complex data structures. However, it is very unclear how this could be realized in neural networks or connectionist systems. A core question is: how could the interconnectivity of items in an abstract-level data structure be neurally encoded? The answer appeals mainly to positional relationships between activity patterns within neural arrays, rather than directly to neural connections in the traditional way. The new method was initially devised to account for abstract symbolic data structures, but it also supports cognitively useful spatial analogue, image-like representations. As the neural model is based on massive, uniform, parallel computations over 2D arrays, the massively parallel processor is a convenient tool for simulation work, although there are complications in using the machine to the fullest advantage. An MPP Pascal simulation program for a small pilot version of the model is running.
A parallel algorithm for switch-level timing simulation on a hypercube multiprocessor
NASA Technical Reports Server (NTRS)
Rao, Hariprasad Nannapaneni
1989-01-01
The parallel approach to speeding up simulation is studied, specifically the simulation of digital LSI MOS circuitry on the Intel iPSC/2 hypercube. The simulation algorithm is based on RSIM, an event driven switch-level simulator that incorporates a linear transistor model for simulating digital MOS circuits. Parallel processing techniques based on the concepts of Virtual Time and rollback are utilized so that portions of the circuit may be simulated on separate processors, in parallel for as large an increase in speed as possible. A partitioning algorithm is also developed in order to subdivide the circuit for parallel processing.
NASA Technical Reports Server (NTRS)
Nicol, David; Fujimoto, Richard
1992-01-01
This paper surveys topics that presently define the state of the art in parallel simulation. Included in the tutorial are discussions on new protocols, mathematical performance analysis, time parallelism, hardware support for parallel simulation, load balancing algorithms, and dynamic memory management for optimistic synchronization.
Simulation of 2D Kinetic Effects in Plasmas using the Grid Based Continuum Code LOKI
NASA Astrophysics Data System (ADS)
Banks, Jeffrey; Berger, Richard; Chapman, Tom; Brunner, Stephan
2016-10-01
Kinetic simulation of multi-dimensional plasma waves through direct discretization of the Vlasov equation is a useful tool to study many physical interactions and is particularly attractive for situations where minimal fluctuation levels are desired, for instance, when measuring growth rates of plasma wave instabilities. However, direct discretization of phase space can be computationally expensive, and as a result there are few examples of published results using Vlasov codes in more than a single configuration space dimension. In an effort to fill this gap we have developed the Eulerian-based kinetic code LOKI that evolves the Vlasov-Poisson system in 2+2-dimensional phase space. The code is designed to reduce the cost of phase-space computation by using fully 4th order accurate conservative finite differencing, while retaining excellent parallel scalability that efficiently uses large scale computing resources. In this poster I will discuss the algorithms used in the code as well as some aspects of their parallel implementation using MPI. I will also overview simulation results of basic plasma wave instabilities relevant to laser plasma interaction, which have been obtained using the code.
Chen, Mingjun; Zheng, Ting; Wu, Chunya; Xing, Cheng
2014-09-01
The early adsorption stages of collagen onto a non-hydroxylated nano-grooved rutile surface were studied using molecular dynamics and steered MD simulations. On the basis of the rutile (110) plane, two kinds of models were adopted: a single groove and parallel grooves along the [1-11] crystal orientation with various widths. Collagens were initially oriented parallel or perpendicular to the groove direction, respectively, in order to investigate the influence of groove width on collagen adsorption. The simulation results suggest that surface grooves exert a strong effect on collagen adsorption: when the collagen was parallel to the groove direction, adsorption was favored if the groove width matched the dimension of the collagen well, but adsorption strength decreased as the groove width expanded. When the collagen was perpendicular to the groove orientation, it was difficult for the collagen to bend and insert into the grooves during free adsorption. The steered MD results reveal that more energy was consumed for the collagen to insert into narrower grooves, which may be interpreted as a strong barrier to adsorption. We believe that adsorption is favored when an appropriate match between the dimension of the collagen and the groove width is achieved. Copyright © 2014 Elsevier B.V. All rights reserved.
Improved interior wall detection using designated dictionaries in compressive urban sensing problems
NASA Astrophysics Data System (ADS)
Lagunas, Eva; Amin, Moeness G.; Ahmad, Fauzia; Nájar, Montse
2013-05-01
In this paper, we address sparsity-based imaging of building interior structures for through-the-wall radar imaging and urban sensing applications. The proposed approach utilizes information about common building construction practices to form an appropriate sparse representation of the building layout. With a ground based SAR system, and considering that interior walls are either parallel or perpendicular to the exterior walls, the antenna at each position would receive reflections from the walls parallel to the radar's scan direction as well as from the corners between two meeting walls. We propose a two-step approach for wall detection and localization. In the first step, a dictionary of possible wall locations is used to recover the positions of both interior and exterior walls that are parallel to the scan direction. A follow-on step uses a dictionary of possible corner reflectors to locate wall-wall junctions along the detected wall segments, thereby determining the true wall extents and detecting walls perpendicular to the scan direction. The utility of the proposed approach is demonstrated using simulated data.
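The first recovery step can be illustrated with a toy dictionary of candidate wall responses and a greedy sparse solver. The template shape, scene, and `omp` helper below are hypothetical stand-ins for illustration, not the authors' dictionaries or solver:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dictionary: column i holds the (normalized) radar response of a
# wall at range bin i, modeled here as a shifted Gaussian-like template.
n_bins, n_pos = 64, 64
template = np.exp(-0.5 * (np.arange(-5, 6) / 1.5) ** 2)
D = np.zeros((n_bins, n_pos))
for i in range(n_pos):
    lo, hi = max(0, i - 5), min(n_bins, i + 6)
    D[lo:hi, i] = template[lo - (i - 5): lo - (i - 5) + hi - lo]
D /= np.linalg.norm(D, axis=0)

# Synthetic scene: two walls parallel to the scan direction, plus noise.
true_walls = [12, 40]
y = D[:, true_walls].sum(axis=1) + 0.01 * rng.standard_normal(n_bins)

def omp(D, y, k):
    """Greedy sparse recovery (orthogonal matching pursuit, k atoms)."""
    support, r = [], y.copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ r))))  # best-matching atom
        x, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        r = y - D[:, support] @ x                         # update residual
    return sorted(support)

detected = omp(D, y, k=2)
```

The follow-on corner-dictionary step of the paper would then search only along the detected wall segments.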
Synchronization Of Parallel Discrete Event Simulations
NASA Technical Reports Server (NTRS)
Steinman, Jeffrey S.
1992-01-01
Adaptive, parallel, discrete-event-simulation-synchronization algorithm, Breathing Time Buckets, developed in Synchronous Parallel Environment for Emulation and Discrete Event Simulation (SPEEDES) operating system. Algorithm allows parallel simulations to process events optimistically in fluctuating time cycles that naturally adapt while simulation in progress. Combines best of optimistic and conservative synchronization strategies while avoiding major disadvantages of each. Well suited for modeling communication networks, for large-scale war games, for simulated flights of aircraft, for simulations of computer equipment, for mathematical modeling, for interactive engineering simulations, and for depictions of flows of information.
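The event-horizon idea at the heart of Breathing Time Buckets can be sketched in a much simplified form. This toy assumes each message's timestamp exceeds its source event's, and it stops at the horizon rather than processing past it and rolling back, unlike the real SPEEDES algorithm:

```python
import heapq

def breathing_time_bucket(pending, generate):
    """Process one adaptive time cycle of a toy optimistic simulation.

    pending:  list of (timestamp, event) tuples.
    generate: callable(event_tuple) -> list of new (timestamp, event) messages,
              each timestamped strictly later than its source event (the
              causality assumption this sketch relies on).

    Events run in timestamp order, but only while they precede the event
    horizon: the earliest timestamp of any message generated so far in this
    cycle. The horizon only shrinks, so every committed event stays safe.
    """
    heapq.heapify(pending)
    horizon = float("inf")
    committed, produced = [], []
    while pending and pending[0][0] < horizon:
        ev = heapq.heappop(pending)
        committed.append(ev)
        for msg in generate(ev):            # optimistic execution
            horizon = min(horizon, msg[0])  # the horizon can only shrink
            produced.append(msg)
    for msg in produced:                    # messages feed the next cycle
        heapq.heappush(pending, msg)
    return committed, pending

# One cycle on a toy queue: each event at time t emits one message at t + 4.
committed, pending = breathing_time_bucket(
    [(3, "b"), (1, "a"), (10, "c")],
    lambda ev: [(ev[0] + 4, ev[1] + "!")],
)
```

Here events at t=1 and t=3 commit, but the t=10 event waits: the message emitted at t=5 defines the horizon, so the cycle "breathes" to exactly the span that is safe.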
Direct numerical simulation of instabilities in parallel flow with spherical roughness elements
NASA Technical Reports Server (NTRS)
Deanna, R. G.
1992-01-01
Results from a direct numerical simulation of laminar flow over a flat surface with spherical roughness elements using a spectral-element method are given. The numerical simulation approximates roughness as a cellular pattern of identical spheres protruding from a smooth wall. Periodic boundary conditions on the domain's horizontal faces simulate an infinite array of roughness elements extending in the streamwise and spanwise directions, which implies the parallel-flow assumption, and results in a closed domain. A body force, designed to yield the horizontal Blasius velocity in the absence of roughness, sustains the flow. Instabilities above a critical Reynolds number reveal negligible oscillations in the recirculation regions behind each sphere and in the free stream, high-amplitude oscillations in the layer directly above the spheres, and a mean profile with an inflection point near the sphere's crest. The inflection point yields an unstable layer above the roughness (where U''(y) is less than 0) and a stable region within the roughness (where U''(y) is greater than 0). Evidently, the instability begins when the low-momentum or wake region behind an element, being the region most affected by disturbances (purely numerical in this case), goes unstable and moves. In incompressible flow with periodic boundaries, this motion sends disturbances to all regions of the domain. In the unstable layer just above the inflection point, the disturbances grow while being carried downstream with a propagation speed equal to the local mean velocity; they do not grow amid the low energy region near the roughness patch. The most amplified disturbance eventually arrives at the next roughness element downstream, perturbing its wake and inducing a global response at a frequency governed by the streamwise spacing between spheres and the mean velocity of the most amplified layer.
Effect of cavitation in high-pressure direct injection
NASA Astrophysics Data System (ADS)
Aboulhasanzadeh, Bahman; Johnsen, Eric
2015-11-01
As we move toward higher pressures for Gasoline Direct Injection and Diesel Direct Injection, cavitation has become an important issue. To better understand the effect of cavitation on the nozzle flow and primary atomization, we use a high-order accurate Discontinuous Galerkin approach using multi-GPU parallelism to simulate the compressible flow inside and outside the nozzle. Phase change is included using the six-equations model. We investigate the effect of nozzle geometry on cavitation inside the injector and on primary atomization outside the nozzle.
Seng, Frederick; Yang, Zhenchao; King, Rex; Shumway, LeGrand; Stan, Nikola; Hammond, Alec; Warnick, Karl F; Schultz, Stephen
2017-06-10
This work introduces a passive dipole antenna integrated into the packaging of a slab-coupled optical sensor to enhance the directional sensitivity of electro-optic electric field measurements parallel to the fiber axis. Using the passive integrated dipole antenna described in this work, a sensor that can typically only sense fields transverse to the fiber direction is able to sense a 1.25 kV/m field along the fiber direction with a gain of 17.5. This is verified through simulation and experiment.
Atomistic simulation of frictional anisotropy on quasicrystal approximant surfaces
Ye, Zhijiang; Martini, Ashlie; Thiel, Patricia; ...
2016-06-23
J. Y. Park et al. [Science 309, 1354 (2005)] have reported eight times greater atomic-scale friction in the periodic than in the quasiperiodic direction on the twofold face of a decagonal Al-Ni-Co quasicrystal. Here we present results of molecular-dynamics simulations intended to elucidate mechanisms behind this giant frictional anisotropy. Simulations of a bare atomic-force-microscope tip on several model substrates and under a variety of conditions failed to reproduce experimental results. On the other hand, including the experimental passivation of the tip with chains of hexadecane thiol, we reproduce qualitatively the experimental anisotropy in friction, finding evidence for entrainment of the organic chains in surface furrows parallel to the periodic direction.
Direct numerical simulation of turbulent flow in a rotating square duct
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dai, Yi-Jun; Huang, Wei-Xi, E-mail: hwx@tsinghua.edu.cn; Xu, Chun-Xiao
A fully developed turbulent flow in a rotating straight square duct is simulated by direct numerical simulations at Re{sub τ} = 300 and 0 ≤ Ro{sub τ} ≤ 40. The rotating axis is parallel to two opposite walls of the duct and normal to the main flow. Variations of the turbulence statistics with the rotation rate are presented, and a comparison with the rotating turbulent channel flow is discussed. Rich secondary flow patterns in the cross section are observed by varying the rotation rate. The appearance of a pair of additional vortices above the pressure wall is carefully examined, and the underlying mechanism is explained according to the budget analysis of the mean momentum equations.
Turbulent statistics in flow field due to interaction of two plane parallel jets
NASA Astrophysics Data System (ADS)
Bisoi, Mukul; Das, Manab Kumar; Roy, Subhransu; Patel, Devendra Kumar
2017-12-01
Turbulent characteristics of flow fields due to the interaction of two plane parallel jets separated by the jet width distance are studied. Numerical simulation is carried out by large eddy simulation with a dynamic Smagorinsky model for the sub-grid scale stresses. The energy spectra are observed to follow the -5/3 power law for the inertial sub-range. A proper orthogonal decomposition study indicates that energy-carrying large coherent structures are present close to the nozzle exit. It is shown that these coherent structures interact with each other and finally disintegrate into smaller vortices further downstream. The turbulent fluctuations in the longitudinal and lateral directions are shown to follow a similarity. The mean flow at the same time also maintains a close similarity. Prandtl's mixing length, the Taylor microscale, and the Kolmogorov length scales are shown along the lateral direction for different downstream locations. The autocorrelation in the longitudinal and transverse directions is seen to follow a similarity profile. By plotting the probability density function, the skewness and the flatness (kurtosis) are analyzed. The Reynolds stress anisotropy tensor is calculated, and the anisotropy invariant map known as Lumley's triangle is presented and analyzed.
Magnus-induced ratchet effects for skyrmions interacting with asymmetric substrates
NASA Astrophysics Data System (ADS)
Reichhardt, C.; Ray, D.; Olson Reichhardt, C. J.
2015-07-01
We show using numerical simulations that pronounced ratchet effects can occur for ac driven skyrmions moving over asymmetric quasi-one-dimensional substrates. We find a new type of ratchet effect called a Magnus-induced transverse ratchet that arises when the ac driving force is applied perpendicular rather than parallel to the asymmetry direction of the substrate. This transverse ratchet effect only occurs when the Magnus term is finite, and the threshold ac amplitude needed to induce it decreases as the Magnus term becomes more prominent. Ratcheting skyrmions follow ordered orbits in which the net displacement parallel to the substrate asymmetry direction is quantized. Skyrmion ratchets represent a new ac current-based method for controlling skyrmion positions and motion for spintronic applications.
NASA Astrophysics Data System (ADS)
Lu, San; Artemyev, A. V.; Angelopoulos, V.
2017-11-01
Magnetotail current sheet thinning is a distinctive feature of substorm growth phase, during which magnetic energy is stored in the magnetospheric lobes. Investigation of charged particle dynamics in such thinning current sheets is believed to be important for understanding the substorm energy storage and the current sheet destabilization responsible for substorm expansion phase onset. We use Time History of Events and Macroscale Interactions during Substorms (THEMIS) B and C observations in 2008 and 2009 at 18 - 25 RE to show that during magnetotail current sheet thinning, the electron temperature decreases (cooling), and the parallel temperature decreases faster than the perpendicular temperature, leading to a decrease of the initially strong electron temperature anisotropy (isotropization). This isotropization cannot be explained by pure adiabatic cooling or by pitch angle scattering. We use test particle simulations to explore the mechanism responsible for the cooling and isotropization. We find that during the thinning, a fast decrease of a parallel electric field (directed toward the Earth) can speed up the electron parallel cooling, causing it to exceed the rate of perpendicular cooling, and thus lead to isotropization, consistent with observation. If the parallel electric field is too small or does not change fast enough, the electron parallel cooling is slower than the perpendicular cooling, so the parallel electron anisotropy grows, contrary to observation. The same isotropization can also be accomplished by an increasing parallel electric field directed toward the equatorial plane. Our study reveals the existence of a large-scale parallel electric field, which plays an important role in magnetotail particle dynamics during the current sheet thinning process.
Scalable Molecular Dynamics with NAMD
Phillips, James C.; Braun, Rosemary; Wang, Wei; Gumbart, James; Tajkhorshid, Emad; Villa, Elizabeth; Chipot, Christophe; Skeel, Robert D.; Kalé, Laxmikant; Schulten, Klaus
2008-01-01
NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD scales to hundreds of processors on high-end parallel platforms, as well as tens of processors on low-cost commodity clusters, and also runs on individual desktop and laptop computers. NAMD works with AMBER and CHARMM potential functions, parameters, and file formats. This paper, directed to novices as well as experts, first introduces concepts and methods used in the NAMD program, describing the classical molecular dynamics force field, equations of motion, and integration methods along with the efficient electrostatics evaluation algorithms employed and temperature and pressure controls used. Features for steering the simulation across barriers and for calculating both alchemical and conformational free energy differences are presented. The motivations for and a roadmap to the internal design of NAMD, implemented in C++ and based on Charm++ parallel objects, are outlined. The factors affecting the serial and parallel performance of a simulation are discussed. Next, typical NAMD use is illustrated with representative applications to a small, a medium, and a large biomolecular system, highlighting particular features of NAMD, e.g., the Tcl scripting language. Finally, the paper provides a list of the key features of NAMD and discusses the benefits of combining NAMD with the molecular graphics/sequence analysis software VMD and the grid computing/collaboratory software BioCoRE. NAMD is distributed free of charge with source code at www.ks.uiuc.edu. PMID:16222654
Towards a parallel collisionless shock in LAPD
NASA Astrophysics Data System (ADS)
Weidl, M. S.; Heuer, P.; Schaeffer, D.; Dorst, R.; Winske, D.; Constantin, C.; Niemann, C.
2017-09-01
Using a high-energy laser to produce a super-Alfvénic carbon-ion beam in a strongly magnetized helium plasma, we expect to be able to observe the formation of a collisionless parallel shock inside the Large Plasma Device. We compare early magnetic-field measurements of the resonant right-hand instability with analytical predictions and find excellent agreement. Hybrid simulations show that the carbon ions couple to the background plasma and compress it, although so far the background ions are mainly accelerated perpendicular to the mean-field direction.
Visualization of Octree Adaptive Mesh Refinement (AMR) in Astrophysical Simulations
NASA Astrophysics Data System (ADS)
Labadens, M.; Chapon, D.; Pomaréde, D.; Teyssier, R.
2012-09-01
Computer simulations are important in current cosmological research. These simulations run in parallel on thousands of processors and produce huge amounts of data. Adaptive mesh refinement is used to reduce the computing cost while keeping good numerical accuracy in regions of interest. RAMSES is a cosmological code developed by the Commissariat à l'énergie atomique et aux énergies alternatives (English: Atomic Energy and Alternative Energies Commission) which uses octree adaptive mesh refinement. Compared to grid-based AMR, octree AMR has the advantage of fitting the adaptive resolution of the grid very precisely to the local problem complexity. However, this specific octree data type needs specific software to be visualized, as generic visualization tools work on Cartesian grid data. This is why the PYMSES software has also been developed by our team. It relies on the Python scripting language to ensure modular and easy access to explore these specific data. In order to take advantage of the high-performance computer that runs the RAMSES simulation, it also uses MPI and multiprocessing to run some parallel code. We present our PYMSES software in more detail, with some performance benchmarks. PYMSES currently has two visualization techniques which work directly on the AMR. The first is a splatting technique, and the second is a custom ray-tracing technique. Both have their own advantages and drawbacks. We have also compared two parallel programming techniques: the Python multiprocessing library versus the use of MPI runs. The load-balancing strategy has to be smartly defined in order to achieve a good speedup in our computation. Results obtained with this software are illustrated in the context of a massive, 9000-processor parallel simulation of a Milky Way-like galaxy.
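The splatting technique can be caricatured on a 2D quadtree: each AMR cell is painted over the block of pixels it covers, coarse cells over large blocks and refined cells over small ones. The flat cell list below is a made-up stand-in for PYMSES's actual AMR data structures:

```python
import numpy as np

def splat(cells, img_size):
    """Deposit AMR cells onto a uniform 2D image.

    cells: list of (level, ix, iy, value), where (ix, iy) indexes the cell on
           the 2**level x 2**level grid of its refinement level.
    A level-L cell covers img_size / 2**L pixels per side, so coarse cells are
    splatted over large pixel blocks and refined cells over small ones.
    """
    img = np.zeros((img_size, img_size))
    for level, ix, iy, value in cells:
        w = img_size >> level          # pixel width of a cell at this level
        img[ix * w:(ix + 1) * w, iy * w:(iy + 1) * w] = value
    return img

# A tiny quadtree: three coarse (level-1) cells, with the fourth quadrant
# refined into four level-2 cells.
cells = [(1, 0, 0, 1.0), (1, 0, 1, 2.0), (1, 1, 0, 3.0),
         (2, 2, 2, 4.0), (2, 2, 3, 5.0), (2, 3, 2, 6.0), (2, 3, 3, 7.0)]
image = splat(cells, img_size=8)
```

A parallel version (multiprocessing or MPI, as compared in the paper) would partition the cell list across workers and sum the partial images, which is where the load-balancing strategy matters.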
Hybrid simulations of a parallel collisionless shock in the large plasma device
Weidl, Martin S.; Winske, Dan; Jenko, Frank; ...
2016-12-01
We present two-dimensional hybrid kinetic/magnetohydrodynamic simulations of planned laser-ablation experiments in the Large Plasma Device (LAPD). Our results, based on parameters which have been validated in previous experiments, show that a parallel collisionless shock can begin forming within the available space. Carbon-debris ions that stream along the magnetic-field direction with a blow-off speed of four times the Alfvén velocity excite strong magnetic fluctuations, eventually transferring part of their kinetic energy to the surrounding hydrogen ions. This acceleration and compression of the background plasma creates a shock front, which satisfies the Rankine-Hugoniot conditions and can therefore propagate on its own. Furthermore, we analyze the upstream turbulence and show that it is dominated by the right-hand resonant instability.
A practical guide to replica-exchange Wang-Landau simulations
NASA Astrophysics Data System (ADS)
Vogel, Thomas; Li, Ying Wai; Landau, David P.
2018-04-01
This paper is based on a series of tutorial lectures about the replica-exchange Wang-Landau (REWL) method given at the IX Brazilian Meeting on Simulational Physics (BMSP 2017). It provides a practical guide for the implementation of the method. A complete example code for a model system is available online. In this paper, we discuss the main parallel features of this code after a brief introduction to the REWL algorithm. The tutorial section is mainly directed at users who have written a single-walker Wang–Landau program already but might have just taken their first steps in parallel programming using the Message Passing Interface (MPI). In the last section, we answer “frequently asked questions” from users about the implementation of REWL for different scientific problems.
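As a reference point for readers starting from a single-walker program, the basic Wang-Landau loop can be written in a few lines for a toy model whose density of states is known exactly: N independent two-state spins, whose density of states is the binomial coefficients. The parameters and flatness criterion below are illustrative choices, not the paper's example code:

```python
import math
import random

random.seed(1)
N = 12                     # spins; energy E = number of "up" spins, 0..N
ln_g = [0.0] * (N + 1)     # running estimate of ln(density of states)
ln_f = 1.0                 # modification factor, reduced on each flat histogram
spins = [0] * N
E = 0

while ln_f > 1e-4:
    hist = [0] * (N + 1)
    while True:
        for _ in range(1000):
            i = random.randrange(N)
            E_new = E + (1 - 2 * spins[i])       # a flip changes E by +/- 1
            # accept with probability min(1, g(E)/g(E_new))
            if math.log(random.random()) < ln_g[E] - ln_g[E_new]:
                spins[i] ^= 1
                E = E_new
            ln_g[E] += ln_f                      # update the visited level
            hist[E] += 1
        if min(hist) > 0.8 * sum(hist) / (N + 1):  # 80% flatness criterion
            break
    ln_f /= 2.0                                  # halve ln f and start over

# Normalize so that ln_g[0] = ln C(N,0) = 0 and compare with the exact values.
ln_g = [x - ln_g[0] for x in ln_g]
exact = [math.log(math.comb(N, k)) for k in range(N + 1)]
```

The REWL method of the paper splits the energy range into overlapping windows, runs one such walker per MPI rank, and exchanges configurations between neighboring windows.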
Shifted Periodic Boundary Conditions in the Large-Scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) Software
Weingarten, N. Scott; Larentzos, James P.
2015-08-01
Pressure gradients fail to predict diffusio-osmosis
NASA Astrophysics Data System (ADS)
Liu, Yawei; Ganti, Raman; Frenkel, Daan
2018-05-01
We present numerical simulations of diffusio-osmotic flow, i.e. the fluid flow generated by a concentration gradient along a solid-fluid interface. In our study, we compare a number of distinct approaches that have been proposed for computing such flows and compare them with a reference calculation based on direct, non-equilibrium molecular dynamics simulations. As alternatives, we consider schemes that compute diffusio-osmotic flow from the gradient of the chemical potentials of the constituent species and from the gradient of the component of the pressure tensor parallel to the interface. We find that the approach based on treating chemical potential gradients as external forces acting on various species agrees with the direct simulations, thereby supporting the approach of Marbach et al (2017 J. Chem. Phys. 146 194701). In contrast, an approach based on computing the gradients of the microscopic pressure tensor does not reproduce the direct non-equilibrium results.
Measures of three-dimensional anisotropy and intermittency in strong Alfvénic turbulence
NASA Astrophysics Data System (ADS)
Mallet, A.; Schekochihin, A. A.; Chandran, B. D. G.; Chen, C. H. K.; Horbury, T. S.; Wicks, R. T.; Greenan, C. C.
2016-06-01
We measure the local anisotropy of numerically simulated strong Alfvénic turbulence with respect to two local, physically relevant directions: along the local mean magnetic field and along the local direction of one of the fluctuating Elsasser fields. We find significant scaling anisotropy with respect to both these directions: the fluctuations are `ribbon-like' - statistically, they are elongated along both the mean magnetic field and the fluctuating field. The latter form of anisotropy is due to scale-dependent alignment of the fluctuating fields. The intermittent scalings of the nth-order conditional structure functions in the direction perpendicular to both the local mean field and the fluctuations agree well with the theory of Chandran, Schekochihin & Mallet, while the parallel scalings are consistent with those implied by the critical-balance conjecture. We quantify the relationship between the perpendicular scalings and those in the fluctuation and parallel directions, and find that the scaling exponent of the perpendicular anisotropy (i.e. of the aspect ratio of the Alfvénic structures in the plane perpendicular to the mean magnetic field) depends on the amplitude of the fluctuations. This is shown to be equivalent to the anticorrelation of fluctuation amplitude and alignment at each scale. The dependence of the anisotropy on amplitude is shown to be more significant for the anisotropy between the perpendicular and fluctuation-direction scales than it is between the perpendicular and parallel scales.
Shared Memory Parallelism for 3D Cartesian Discrete Ordinates Solver
NASA Astrophysics Data System (ADS)
Moustafa, Salli; Dutka-Malen, Ivan; Plagne, Laurent; Ponçot, Angélique; Ramet, Pierre
2014-06-01
This paper describes the design and the performance of DOMINO, a 3D Cartesian SN solver that implements two nested levels of parallelism (multicore+SIMD) on shared memory computation nodes. DOMINO is written in C++, a multi-paradigm programming language that enables the use of powerful and generic parallel programming tools such as Intel TBB and Eigen. These two libraries allow us to combine multi-thread parallelism with vector operations in an efficient and yet portable way. As a result, DOMINO can exploit the full power of modern multi-core processors and is able to tackle very large simulations, which usually require large HPC clusters, using a single computing node. For example, DOMINO solves a 3D full core PWR eigenvalue problem involving 26 energy groups, 288 angular directions (S16), 46 × 10⁶ spatial cells and 1 × 10¹² DoFs within 11 hours on a single 32-core SMP node. This represents a sustained performance of 235 GFlops and 40.74% of the SMP node peak performance for the DOMINO sweep implementation. The very high Flops/Watt ratio of DOMINO makes it a very interesting building block for a future many-node nuclear simulation tool.
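The two nested levels of parallelism can be caricatured in a high-level sketch: threads over independent angular directions, with the per-cell work vectorized over all energy groups at once. This is a schematic stand-in in Python (threads plus NumPy vector operations), not the DOMINO source, which is C++ on Intel TBB and Eigen; the quadrature and cross-section values are made up:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

n_cells, n_groups, sigma_t, dx = 200, 26, 1.0, 0.1
source = np.ones((n_cells, n_groups))       # flat external source, all groups

def sweep(mu):
    """Step-differenced 1D transport sweep for one direction mu > 0.

    The cell loop is sequential by nature (each cell needs its upstream
    neighbor), but the work inside each cell is vectorized over all energy
    groups at once -- the analogue of the SIMD level.
    """
    psi = np.zeros((n_cells, n_groups))
    inflow = np.zeros(n_groups)
    for i in range(n_cells):
        psi[i] = (source[i] + mu / dx * inflow) / (sigma_t + mu / dx)
        inflow = psi[i]
    return psi

directions = [0.2, 0.5, 0.8, 1.0]           # stand-in for an S_N quadrature
with ThreadPoolExecutor() as pool:          # thread level: one task per direction
    fluxes = list(pool.map(sweep, directions))
scalar_flux = sum(fluxes) / len(fluxes)
```

Deep in the slab the angular flux relaxes to source/sigma_t regardless of direction, which gives a quick sanity check on the sweep.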
DOE Office of Scientific and Technical Information (OSTI.GOV)
Popovich, P.; Carter, T. A.; Friedman, B.
Numerical simulation of plasma turbulence in the Large Plasma Device (LAPD) [W. Gekelman, H. Pfister, Z. Lucky et al., Rev. Sci. Instrum. 62, 2875 (1991)] is presented. The model, implemented in the BOUndary Turbulence code [M. Umansky, X. Xu, B. Dudson et al., Contrib. Plasma Phys. 180, 887 (2009)], includes three-dimensional (3D) collisional fluid equations for plasma density, electron parallel momentum, and current continuity, and also includes the effects of ion-neutral collisions. In nonlinear simulations using measured LAPD density profiles but assuming a constant temperature profile for simplicity, self-consistent evolution of instabilities and nonlinearly generated zonal flows results in a saturated turbulent state. Comparisons of these simulations with measurements in LAPD plasmas reveal good qualitative and reasonable quantitative agreement, in particular in the frequency spectrum, spatial correlation, and amplitude probability distribution function of density fluctuations. For comparison with LAPD measurements, the plasma density profile in simulations is maintained either by direct azimuthal averaging on each time step, or by adding a particle source/sink function. The inferred source/sink values are consistent with the estimated ionization source and parallel losses in LAPD. These simulations lay the groundwork for a more comprehensive effort to test fluid turbulence simulation against LAPD data.
NASA Astrophysics Data System (ADS)
Maneva, Yana; Poedts, Stefaan
2017-04-01
The electromagnetic fluctuations in the solar wind represent a zoo of plasma waves with different properties, whose wavelengths range from the largest fluid scales to the smallest dissipation scales. By nature the power spectrum of the magnetic fluctuations is anisotropic, with different spectral slopes in the parallel and perpendicular directions with respect to the background magnetic field. Furthermore, the magnetic field power spectra steepen as one moves from the inertial to the dissipation range, and we observe multiple spectral breaks with different slopes in the parallel and perpendicular directions at the ion scales and beyond. The turbulent dissipation of magnetic field fluctuations at the sub-ion scales is believed to go into local ion heating and acceleration, so that the spectral breaks are typically associated with particle energization. The gained energy can take the form of anisotropic heating, formation of non-thermal features in the particle velocity distribution functions, and redistribution of the differential acceleration between the different ion populations. To study the relation between the evolution of the anisotropic turbulent spectra and the particle heating at the ion and sub-ion scales we perform a series of 2.5D hybrid simulations in a collisionless drifting proton-alpha plasma. We neglect the fast electron dynamics and treat the electrons as an isothermal fluid, whereas the protons and a minor population of alpha particles are evolved in a fully kinetic manner. We start with a given wave spectrum and study the evolution of the magnetic field spectral slopes as a function of the parallel and perpendicular wavenumbers. Simultaneously, we track the particle response and the energy exchange between the parallel and perpendicular scales. We observe anisotropic behavior of the turbulent power spectra, with steeper slopes along the dominant energy-containing direction.
This means that for parallel and quasi-parallel waves the spectral slope is steeper in the parallel direction, whereas for highly oblique waves the dissipation occurs predominantly in the perpendicular direction and the spectral slopes are steeper across the background magnetic field. The value of the spectral slopes depends on the angle of propagation, the spectral range, and the plasma properties. In general, the dissipation is stronger at small scales and the corresponding spectral slopes there are steeper. For parallel and quasi-parallel propagation the prevailing energy cascade remains along the magnetic field, whereas for initially isotropic oblique turbulence the cascade develops mainly in the perpendicular direction.
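Spectral slopes of the kind discussed above are typically obtained by a least-squares power-law fit in log-log space. A minimal sketch of that procedure (the function name and synthetic Kolmogorov-like spectrum are illustrative, not taken from the paper):

```python
import numpy as np

def spectral_slope(k, power):
    """Fit a power law P(k) ~ k**alpha by least squares in log-log space."""
    logk, logp = np.log(k), np.log(power)
    # np.polyfit returns coefficients highest degree first; the degree-1
    # coefficient of the best-fit line is the spectral index alpha
    alpha, _ = np.polyfit(logk, logp, 1)
    return alpha

# Example: a synthetic spectrum with the Kolmogorov slope -5/3
k = np.logspace(-1, 1, 50)
p = k ** (-5.0 / 3.0)
print(round(spectral_slope(k, p), 3))  # -1.667
```

Fitting separately over parallel and perpendicular wavenumber bins, and separately above and below each spectral break, yields the anisotropic slopes described in the abstract.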
NASA Astrophysics Data System (ADS)
Reber, J. E.; Schmalholz, S. M.; Burg, J.-P.
2010-10-01
Two orthogonal sets of veins, both orthogonal to bedding, form chocolate tablet structures on the limbs of folded quartzwackes of Carboniferous turbidites in SW Portugal. Structural observations suggest that (1) mode 1 fractures transverse to the fold axes formed while fold amplitudes were small and limbs were under layer-subparallel compression and (2) mode 1 fractures parallel to the fold axes formed while fold amplitudes were large and limbs were brought to be under layer-subparallel tension. We performed two- and three-dimensional numerical simulations investigating the evolution of stress orientations during viscous folding to test whether and how these two successive sets of fractures were related to folding. We employed ellipses and ellipsoids for the visualization and quantification of the local stress field. The numerical simulations show a change in the orientation of the local σ1 direction by almost 90° with respect to the bedding plane in the fold limbs. The coeval σ3 direction rotates from parallel to the fold axis at low fold amplitudes to orthogonal to the fold axis at high fold amplitudes. The stress orientation changes faster in multilayers than in single layers. The numerical simulations are consistent with the observations and provide a mechanical interpretation for the formation of the chocolate tablet structures through consecutive sets of fractures on rotating limbs of folded competent layers.
Automated problem scheduling and reduction of synchronization delay effects
NASA Technical Reports Server (NTRS)
Saltz, Joel H.
1987-01-01
It is anticipated that in order to make effective use of many future high performance architectures, programs will have to exhibit at least a medium grained parallelism. A framework is presented for partitioning very sparse triangular systems of linear equations that is designed to produce favorable performance results on a wide variety of parallel architectures. Efficient methods for solving these systems are of interest because (1) they provide a useful model problem for exploring heuristics for the aggregation, mapping and scheduling of relatively fine grained computations whose data dependencies are specified by directed acyclic graphs, and (2) such methods can find direct application in the development of parallel algorithms for scientific computation. Simple expressions are derived that describe how to schedule computational work with varying degrees of granularity. The Encore Multimax was used as a hardware simulator to investigate the performance effects of using the partitioning techniques presented in shared memory architectures with varying relative synchronization costs.
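A common way to expose the parallelism in a sparse triangular solve is level scheduling: assign each unknown a "level" so that it depends only on unknowns in earlier levels, and solve all unknowns within a level concurrently. A minimal sketch of that general idea (not the paper's specific partitioning heuristics):

```python
# Level scheduling for a sparse lower-triangular system L x = b:
# level[i] = 1 + max(level[j]) over nonzeros L[i][j] with j < i.
# Unknowns sharing a level have no mutual dependencies and can be
# solved in parallel; the levels form a schedule over the DAG.

def levels(rows):
    """rows[i] is the set of column indices j < i with L[i][j] != 0."""
    level = {}
    for i in sorted(rows):
        level[i] = 1 + max((level[j] for j in rows[i]), default=0)
    return level

# Example dependency structure (a directed acyclic graph):
# x0 and x1 are independent; x2 needs x0; x3 needs x1 and x2.
rows = {0: set(), 1: set(), 2: {0}, 3: {1, 2}}
print(levels(rows))  # {0: 1, 1: 1, 2: 2, 3: 3}
```

Aggregating several levels (or several unknowns within a level) into one task is how the granularity knob mentioned in the abstract would be turned.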
Monte Carlo Methodology Serves Up a Software Success
NASA Technical Reports Server (NTRS)
2003-01-01
Widely used for the modeling of gas flows through the computation of the motion and collisions of representative molecules, the Direct Simulation Monte Carlo method has become the gold standard for producing research and engineering predictions in the field of rarefied gas dynamics. Direct Simulation Monte Carlo was first introduced in the early 1960s by Dr. Graeme Bird, a professor at the University of Sydney, Australia. It has since proved to be a valuable tool to the aerospace and defense industries in providing design and operational support data, as well as flight data analysis. In 2002, NASA brought to the forefront a software product that maintains the same basic physics formulation of Dr. Bird's method, but provides effective modeling of complex, three-dimensional, real vehicle simulations and parallel processing capabilities to handle additional computational requirements, especially in areas where computational fluid dynamics (CFD) is not applicable. NASA's Direct Simulation Monte Carlo Analysis Code (DAC) software package is now considered the Agency's premier high-fidelity simulation tool for predicting vehicle aerodynamics and aerothermodynamic environments in rarefied, or low-density, gas flows.
Multi-Species Fluxes for the Parallel Quiet Direct Simulation (QDS) Method
NASA Astrophysics Data System (ADS)
Cave, H. M.; Lim, C.-W.; Jermy, M. C.; Krumdieck, S. P.; Smith, M. R.; Lin, Y.-J.; Wu, J.-S.
2011-05-01
Fluxes of multiple species are implemented in the Quiet Direct Simulation (QDS) scheme for gas flows. Each molecular species streams independently, and all species are brought to local equilibrium at the end of each time step. The multi-species scheme is compared to a DSMC simulation on a test case of a Mach 20 flow of a xenon/helium mixture over a forward-facing step. Depletion of the heavier species in the bow shock and the near-wall layer is seen. The multi-species QDS code is then used to model the flow in a pulsed-pressure chemical vapour deposition reactor set up for carbon film deposition. The injected gas is a mixture of methane and hydrogen, and the temporal development of the spatial distribution of methane over the substrate is tracked.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alred, Erik J.; Scheele, Emily G.; Berhanu, Workalemahu M.
Recent experiments indicate a connection between the structure of amyloid aggregates and their cytotoxicity as related to neurodegenerative diseases. Of particular interest is the Iowa Mutant, which causes early onset of Alzheimer's disease. While wild-type Amyloid β-peptides form only parallel beta-sheet aggregates, the mutant also forms meta-stable antiparallel beta sheets. Since these structural variations may cause the difference in the pathological effects of the two Aβ-peptides, we have studied in silico the relative stability of the wild type and Iowa mutant in both parallel and antiparallel forms. We compare regular molecular dynamics simulations with ones in which the viscosity of the samples is reduced, which, we show, leads to higher sampling efficiency. By analyzing and comparing these four sets of all-atom molecular dynamics simulations, we probe the role of the various factors that could lead to the structural differences. Our analysis indicates that the parallel forms of both wild type and Iowa mutant aggregates are stable, while the antiparallel aggregates are meta-stable for the Iowa mutant and not stable for the wild type. The differences result from the direct alignment of hydrophobic interactions in the in-register parallel oligomers, making them more stable than the antiparallel aggregates. The slightly higher thermodynamic stability of the Iowa mutant fibril-like oligomers in its parallel organization over that in antiparallel form is supported by previous experimental measurements showing slow inter-conversion of antiparallel aggregates into parallel ones. Knowledge of the mechanism that selects between parallel and antiparallel conformations and determines their relative stability may open new avenues for the development of therapies targeting familial forms of early-onset Alzheimer's disease.
PARALLEL PERTURBATION MODEL FOR CYCLE TO CYCLE VARIABILITY PPM4CCV
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ameen, Muhsin Mohammed; Som, Sibendu
This code consists of a Fortran 90 implementation of the parallel perturbation model to compute cyclic variability in spark ignition (SI) engines. Cycle-to-cycle variability (CCV) is known to be detrimental to SI engine operation, resulting in partial burn and knock and an overall reduction in the reliability of the engine. Numerical prediction of CCV in SI engines is extremely challenging for two key reasons: (i) high-fidelity methods such as large eddy simulation (LES) are required to accurately capture the in-cylinder turbulent flow field, and (ii) CCV is experienced over long timescales and hence the simulations need to be performed for hundreds of consecutive cycles. In the new technique, the strategy is to perform multiple parallel simulations, each of which encompasses 2-3 cycles, by effectively perturbing simulation parameters such as the initial and boundary conditions. The PPM4CCV code is a pre-processing code and can be coupled with any engine CFD code. PPM4CCV was coupled with the Converge CFD code, and a 10-fold speedup was demonstrated over conventional multi-cycle LES in predicting the CCV for a motored engine. Recently, the model is also being applied to fired engines, including port fuel injected (PFI) and direct injection spark ignition engines, and the preliminary results are very encouraging.
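The core idea, replacing one long multi-cycle run with many short parallel runs started from perturbed conditions, can be sketched as follows (the parameter names and perturbation magnitude are illustrative assumptions, not PPM4CCV's actual inputs):

```python
import numpy as np

def perturbed_ensemble(base_conditions, n_runs, rel_sigma=0.01, seed=0):
    """Generate initial/boundary-condition sets for n_runs parallel short
    simulations by applying small random multiplicative perturbations."""
    rng = np.random.default_rng(seed)
    names = sorted(base_conditions)
    base = np.array([base_conditions[k] for k in names])
    # One perturbed parameter vector per parallel run (2-3 cycles each)
    factors = 1.0 + rel_sigma * rng.standard_normal((n_runs, base.size))
    return [dict(zip(names, base * f)) for f in factors]

# Illustrative, hypothetical engine-like parameters
base = {"intake_pressure_kPa": 95.0, "swirl_ratio": 1.3, "wall_temp_K": 400.0}
runs = perturbed_ensemble(base, n_runs=8)
print(len(runs), sorted(runs[0]))
```

Each generated condition set would seed an independent CFD run, so the ensemble statistics of CCV are gathered in wall-clock time closer to that of a single short simulation.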
Simulation Exploration through Immersive Parallel Planes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brunhart-Lupo, Nicholas J; Bush, Brian W; Gruchalla, Kenny M
We present a visualization-driven simulation system that tightly couples systems dynamics simulations with an immersive virtual environment to allow analysts to rapidly develop and test hypotheses in a high-dimensional parameter space. To accomplish this, we generalize the two-dimensional parallel-coordinates statistical graphic as an immersive 'parallel-planes' visualization for multivariate time series emitted by simulations running in parallel with the visualization. In contrast to traditional parallel coordinates, which map the multivariate dimensions onto coordinate axes represented by a series of parallel lines, we map pairs of the multivariate dimensions onto a series of parallel rectangles. As in the case of parallel coordinates, each individual observation in the dataset is mapped to a polyline whose vertices coincide with its coordinate values. Regions of the rectangles can be 'brushed' to highlight and select observations of interest; a 'slider' control allows the user to filter the observations by their time coordinate. In an immersive virtual environment, users interact with the parallel planes using a joystick that can select regions on the planes, manipulate selections, and filter time. The brushing and selection actions are used both to explore existing data and to launch additional simulations corresponding to the visually selected portions of the input parameter space. As soon as the new simulations complete, their resulting observations are displayed in the virtual environment. This tight feedback loop between simulation and immersive analytics accelerates users' realization of insights about the simulation and its output.
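The brushing interaction described above reduces, at its core, to an interval query per axis: an observation's polyline passes through the brushed region exactly when its value on that axis lies in the brushed interval. A minimal sketch (the axis names and data are illustrative):

```python
# Parallel-coordinates style brushing: each observation is a polyline through
# its coordinate values; brushing an interval on one axis selects the
# observations whose polylines pass through that interval.

def brush(observations, axis, lo, hi):
    """Return indices of observations whose value on `axis` lies in [lo, hi]."""
    return [i for i, obs in enumerate(observations) if lo <= obs[axis] <= hi]

data = [
    {"time": 0.0, "demand": 5.0, "price": 1.2},
    {"time": 1.0, "demand": 7.5, "price": 0.9},
    {"time": 2.0, "demand": 6.0, "price": 1.1},
]
# Select observations with demand between 5.5 and 8
print(brush(data, "demand", 5.5, 8.0))  # [1, 2]
```

Intersecting the index sets from several brushes gives the compound selection that would be handed back to the simulation launcher as a region of input-parameter space.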
A path-level exact parallelization strategy for sequential simulation
NASA Astrophysics Data System (ADS)
Peredo, Oscar F.; Baeza, Daniel; Ortiz, Julián M.; Herrero, José R.
2018-01-01
Sequential Simulation is a well-known method in geostatistical modelling. Following the Bayesian approach for simulation of conditionally dependent random events, the Sequential Indicator Simulation (SIS) method draws simulated values for K categories (categorical case) or classes defined by K different thresholds (continuous case). Similarly, the Sequential Gaussian Simulation (SGS) method draws simulated values from a multivariate Gaussian field. In this work, a path-level approach to parallelize the SIS and SGS methods is presented. A first stage of re-arrangement of the simulation path is performed, followed by a second stage of parallel simulation for non-conflicting nodes. A key advantage of the proposed parallelization method is that it generates realizations identical to those of the original non-parallelized methods. Case studies are presented using two sequential simulation codes from GSLIB: SISIM and SGSIM. Execution time and speedup results are shown for large-scale domains, with many categories and maximum kriging neighbours in each case, achieving high speedups in the best scenarios using 16 threads of execution on a single machine.
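The second stage amounts to batching path nodes whose search neighborhoods do not overlap, so the nodes in one batch can be simulated concurrently. A greedy sketch of that grouping idea on a 1-D path (the neighborhood radius is hypothetical, and the real method additionally preserves the conditioning order so that realizations match the sequential run exactly):

```python
# Greedily split a simulation path into batches of "non-conflicting" nodes:
# two nodes conflict if their search neighborhoods overlap, i.e. if they
# are closer than 2 * radius on this simplified 1-D grid.

def batch_path(path, radius):
    batches = []
    for node in path:
        for batch in batches:
            if all(abs(node - other) >= 2 * radius for other in batch):
                batch.append(node)  # no conflict with this batch
                break
        else:
            batches.append([node])  # conflicts everywhere: start a new batch
    return batches

path = [3, 10, 4, 21, 11, 30]
print(batch_path(path, radius=3))  # [[3, 10, 21, 30], [4, 11]]
```

Larger batches mean more nodes simulated per parallel step, which is where the reported speedups on 16 threads would come from.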
Magnus-induced ratchet effects for skyrmions interacting with asymmetric substrates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reichhardt, C.; Ray, D.; Reichhardt, C. J. Olson
2015-07-31
We show using numerical simulations that pronounced ratchet effects can occur for ac driven skyrmions moving over asymmetric quasi-one-dimensional substrates. We find a new type of ratchet effect, called a Magnus-induced transverse ratchet, that arises when the ac driving force is applied perpendicular rather than parallel to the asymmetry direction of the substrate. This transverse ratchet effect only occurs when the Magnus term is finite, and the threshold ac amplitude needed to induce it decreases as the Magnus term becomes more prominent. Ratcheting skyrmions follow ordered orbits in which the net displacement parallel to the substrate asymmetry direction is quantized. As a result, skyrmion ratchets represent a new ac current-based method for controlling skyrmion positions and motion for spintronic applications.
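Overdamped skyrmion dynamics with a Magnus term are commonly written as α_d v + α_m ẑ×v = F, which couples motion perpendicular to the drive back into the asymmetry direction. A toy integration sketch (the sawtooth-like substrate force, coefficients, and drive parameters are illustrative, not the paper's model):

```python
import numpy as np

def integrate_skyrmion(alpha_d, alpha_m, n_steps=20000, dt=0.01,
                       ac_amp=0.5, ac_freq=0.5):
    """Overdamped dynamics alpha_d*v + alpha_m*(z x v) = F, solved for v
    at each step. The substrate is asymmetric along x; the ac drive is
    applied along y, perpendicular to the asymmetry direction."""
    # With z x v = (-v_y, v_x), the mobility matrix inverts a 2x2 system
    inv = np.linalg.inv(np.array([[alpha_d, -alpha_m],
                                  [alpha_m,  alpha_d]]))
    r = np.zeros(2)
    traj = [r.copy()]
    for n in range(n_steps):
        t = n * dt
        # Asymmetric periodic substrate force along x (illustrative form)
        f_sub = -np.sin(2 * np.pi * r[0]) - 0.25 * np.sin(4 * np.pi * r[0])
        force = np.array([f_sub, ac_amp * np.cos(2 * np.pi * ac_freq * t)])
        r = r + dt * (inv @ force)
        traj.append(r.copy())
    return np.array(traj)

traj = integrate_skyrmion(alpha_d=1.0, alpha_m=2.0)
print(traj.shape)  # (20001, 2)
```

With alpha_m = 0 the off-diagonal coupling vanishes and the perpendicular drive cannot sample the substrate asymmetry, which is the qualitative reason the transverse ratchet requires a finite Magnus term.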
Robertson, Benjamin D; Vadakkeveedu, Siddarth; Sawicki, Gregory S
2017-05-24
We present a novel biorobotic framework comprised of a biological muscle-tendon unit (MTU) mechanically coupled to a feedback controlled robotic environment simulation that mimics in vivo inertial/gravitational loading and mechanical assistance from a parallel elastic exoskeleton. Using this system, we applied select combinations of biological muscle activation (modulated with rate-coded direct neural stimulation) and parallel elastic assistance (applied via closed-loop mechanical environment simulation) hypothesized to mimic human behavior based on previously published modeling studies. These conditions resulted in constant system-level force-length dynamics (i.e., stiffness), reduced biological loads, increased muscle excursion, and constant muscle average positive power output-all consistent with laboratory experiments on intact humans during exoskeleton assisted hopping. Mechanical assistance led to reduced estimated metabolic cost and MTU apparent efficiency, but increased apparent efficiency for the MTU+Exo system as a whole. Findings from this study suggest that the increased natural resonant frequency of the artificially stiffened MTU+Exo system, along with invariant movement frequencies, may underlie observed limits on the benefits of exoskeleton assistance. Our novel approach demonstrates that it is possible to capture the salient features of human locomotion with exoskeleton assistance in an isolated muscle-tendon preparation, and introduces a powerful new tool for detailed, direct examination of how assistive devices affect muscle-level neuromechanics and energetics. Copyright © 2017 Elsevier Ltd. All rights reserved.
Frerichs, H.; Schmitz, Oliver; Evans, Todd; ...
2015-07-13
High resolution plasma transport simulations with the EMC3-EIRENE code have been performed to address the parallel plasma flow structure in the boundary of a poloidal divertor configuration with non-axisymmetric perturbations at DIII-D. Simulation results show that a checkerboard pattern of flows with alternating direction is generated inside the separatrix. This pattern is aligned with the position of the main resonances (i.e. where the safety factor is equal to rational values q = m/n for a perturbation field with base mode number n): m pairs of alternating forward and backward flow channels exist for each resonance. The poloidal oscillations are aligned with the subharmonic Melnikov function, which indicates that the plasma flow is generated by parallel pressure gradients along perturbed field lines. Lastly, an additional scrape-off layer-like domain is introduced by the perturbed separatrix, which guides field lines from the interior to the divertor targets, resulting in an enhanced outward flow that is consistent with the experimentally observed particle pump-out effect. However, while the lobe structure of the perturbed separatrix is very well reflected in the temperature profile, the same lobes can appear to be smaller in the flow profile due to a competition between high upstream pressure and downstream particle sources driving flows in opposite directions.
Numerical simulation of h-adaptive immersed boundary method for freely falling disks
NASA Astrophysics Data System (ADS)
Zhang, Pan; Xia, Zhenhua; Cai, Qingdong
2018-05-01
In this work, a freely falling disk with aspect ratio 1/10 is directly simulated using an adaptive numerical model implemented on the parallel computation framework JASMIN. The adaptive numerical model is a combination of the h-adaptive mesh refinement technique and the implicit immersed boundary method (IBM). Our numerical results agree well with the experimental results in all six degrees of freedom of the disk. Furthermore, vortex structures very similar to those observed in the experiment were also obtained.
Optimal estimates of free energies from multistate nonequilibrium work data.
Maragakis, Paul; Spichty, Martin; Karplus, Martin
2006-03-17
We derive the optimal estimates of the free energies of an arbitrary number of thermodynamic states from nonequilibrium work measurements; the work data are collected from forward and reverse switching processes and obey a fluctuation theorem. The maximum likelihood formulation properly reweights all pathways contributing to a free energy difference and is directly applicable to simulations and experiments. We demonstrate dramatic gains in efficiency by combining the analysis with parallel tempering simulations for alchemical mutations of model amino acids.
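For two states, the optimal maximum likelihood estimate reduces to Bennett's acceptance ratio (BAR), solvable self-consistently. A sketch assuming equal numbers of forward and reverse work samples and β = 1; the synthetic Gaussian work data are constructed to satisfy the Crooks fluctuation theorem with ΔF = 1:

```python
import numpy as np

def fermi(x):
    return 1.0 / (1.0 + np.exp(x))

def bar(w_f, w_r, lo=-50.0, hi=50.0, tol=1e-8):
    """Bennett acceptance ratio estimate of dF (beta = 1, n_F = n_R):
    solve sum fermi(w_f - dF) = sum fermi(w_r + dF) by bisection.
    g(dF) below is monotonically increasing, so bisection converges."""
    def g(df):
        return np.sum(fermi(w_f - df)) - np.sum(fermi(w_r + df))
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Gaussian work data obeying Crooks with dF = 1 and sigma^2 = 2:
# forward W ~ N(dF + sigma^2/2, sigma^2), reverse W ~ N(-dF + sigma^2/2, sigma^2)
rng = np.random.default_rng(1)
n = 50000
sigma = np.sqrt(2.0)
w_f = (1.0 + 1.0) + sigma * rng.standard_normal(n)
w_r = (-1.0 + 1.0) + sigma * rng.standard_normal(n)
print(round(bar(w_f, w_r), 2))  # close to 1.0
```

The multistate estimator in the paper generalizes this same reweighting to an arbitrary number of states; the two-state sketch only illustrates the self-consistency condition.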
A compositional reservoir simulator on distributed memory parallel computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rame, M.; Delshad, M.
1995-12-31
This paper presents the application of distributed memory parallel computers to field scale reservoir simulations using a parallel version of UTCHEM, The University of Texas Chemical Flooding Simulator. The model is a general purpose, highly vectorized chemical compositional simulator that can simulate a wide range of displacement processes at both field and laboratory scales. The original simulator was modified to run on both distributed memory parallel machines (Intel iPSC/860 and Delta, Connection Machine 5, Kendall Square 1 and 2, and CRAY T3D) and a cluster of workstations. A domain decomposition approach has been taken towards parallelization of the code. A portion of the discrete reservoir model is assigned to each processor by a set-up routine that attempts a data layout as even as possible from the load-balance standpoint. Each of these subdomains is extended so that data can be shared between adjacent processors for stencil computation. The added routines that make parallel execution possible are written in a modular fashion that makes porting to new parallel platforms straightforward. Results of the distributed memory computing performance of the parallel simulator are presented for field scale applications such as tracer floods and polymer floods. A comparison of the wall-clock times for the same problems on a vector supercomputer is also presented.
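Domain decomposition with extended (ghost-cell) subdomains, as described above, can be sketched serially for a 1-D diffusion stencil; the split into `n_sub` blocks stands in for the per-processor subdomains, and no MPI is used in this illustration:

```python
import numpy as np

def diffuse_decomposed(u, n_sub, n_steps, c=0.25):
    """Explicit 1-D diffusion with the domain split into n_sub blocks.
    Before each step, every block receives one ghost cell from each
    neighbour (fixed boundary values of 0 at the global ends)."""
    blocks = np.array_split(u.copy(), n_sub)
    for _ in range(n_steps):
        new_blocks = []
        for i, b in enumerate(blocks):
            left = blocks[i - 1][-1] if i > 0 else 0.0
            right = blocks[i + 1][0] if i < n_sub - 1 else 0.0
            ext = np.concatenate(([left], b, [right]))  # halo-extended block
            new_blocks.append(b + c * (ext[:-2] - 2 * b + ext[2:]))
        blocks = new_blocks
    return np.concatenate(blocks)

u0 = np.zeros(64)
u0[32] = 1.0
# The decomposed result matches the single-domain (n_sub=1) computation
print(np.allclose(diffuse_decomposed(u0, 4, 50), diffuse_decomposed(u0, 1, 50)))  # True
```

The key property, identical results regardless of the number of subdomains, is exactly what the halo exchange buys: each block sees the neighbour data its stencil needs before updating.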
Application of computational physics within Northrop
NASA Technical Reports Server (NTRS)
George, M. W.; Ling, R. T.; Mangus, J. F.; Thompkins, W. T.
1987-01-01
An overview of Northrop programs in computational physics is presented. These programs depend on access to today's supercomputers, such as the Numerical Aerodynamic Simulator (NAS), and future growth on the continuing evolution of computational engines. Descriptions here are concentrated on the following areas: computational fluid dynamics (CFD), computational electromagnetics (CEM), computer architectures, and expert systems. Current efforts and future directions in these areas are presented. The impact of advances in the CFD area is described, and parallels are drawn to analogous developments in CEM. The relationship between advances in these areas and the development of advanced (parallel) architectures and expert systems is also presented.
NASA Astrophysics Data System (ADS)
Ovaysi, S.; Piri, M.
2009-12-01
We present a three-dimensional fully dynamic parallel particle-based model for direct pore-level simulation of incompressible viscous fluid flow in disordered porous media. The model was developed from scratch and is capable of simulating flow directly in three-dimensional high-resolution microtomography images of naturally occurring or man-made porous systems. It reads the images as input, where the positions of the solid walls are given. The entire medium, i.e., solid and fluid, is then discretized using particles. The model is based on the Moving Particle Semi-implicit (MPS) technique, which we modify in order to improve its stability. The model handles highly irregular fluid-solid boundaries effectively. It takes into account viscous pressure drop in addition to gravity forces. It conserves mass and can automatically detect any false connectivity with fluid particles in the neighboring pores and throats. It includes a sophisticated algorithm to automatically split and merge particles to maintain hydraulic connectivity of extremely narrow conduits. Furthermore, it uses novel methods to handle particle inconsistencies and open boundaries. To handle the computational load, we present a fully parallel version of the model that runs on distributed memory computer clusters and exhibits excellent scalability. The model is used to simulate unsteady-state flow problems under different conditions, starting from straight noncircular capillary tubes with different cross-sectional shapes, i.e., circular/elliptical, square/rectangular and triangular cross-sections. We compare the predicted dimensionless hydraulic conductances with the data available in the literature and observe an excellent agreement. We then test the scalability of our parallel model with two samples of an artificial sandstone, samples A and B, with different volumes and different distributions (non-uniform and uniform) of solid particles among the processors.
An excellent linear scalability is obtained for sample B that has more uniform distribution of solid particles leading to a superior load balancing. The model is then used to simulate fluid flow directly in REV size three-dimensional x-ray images of a naturally occurring sandstone. We analyze the quality and consistency of the predicted flow behavior and calculate absolute permeability, which compares well with the available network modeling and Lattice-Boltzmann permeabilities available in the literature for the same sandstone. We show that the model conserves mass very well and is stable computationally even at very narrow fluid conduits. The transient- and the steady-state fluid flow patterns are presented as well as the steady-state flow rates to compute absolute permeability. Furthermore, we discuss the vital role of our adaptive particle resolution scheme in preserving the original pore connectivity of the samples and their narrow channels through splitting and merging of fluid particles.
Chen, Weiliang; De Schutter, Erik
2017-01-01
Stochastic, spatial reaction-diffusion simulations have been widely used in systems biology and computational neuroscience. However, the increasing scale and complexity of models and morphologies have exceeded the capacity of any serial implementation. This led to the development of parallel solutions that benefit from the boost in performance of modern supercomputers. In this paper, we describe an MPI-based, parallel operator-splitting implementation for stochastic spatial reaction-diffusion simulations with irregular tetrahedral meshes. The performance of our implementation is first examined and analyzed with simulations of a simple model. We then demonstrate its application to real-world research by simulating the reaction-diffusion components of a published calcium burst model in both Purkinje neuron sub-branch and full dendrite morphologies. Simulation results indicate that our implementation is capable of achieving super-linear speedup for balanced loading simulations with reasonable molecule density and mesh quality. In the best scenario, a parallel simulation with 2,000 processes runs more than 3,600 times faster than its serial SSA counterpart, and achieves more than 20-fold speedup relative to parallel simulation with 100 processes. In a more realistic scenario with dynamic calcium influx and data recording, the parallel simulation with 1,000 processes and no load balancing is still 500 times faster than the conventional serial SSA simulation. PMID:28239346
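The operator-splitting scheme referenced above alternates a reaction update and a diffusion update within each time step; in the MPI implementation, each rank would handle its local mesh elements and exchange diffusing molecules with neighbours. A serial toy sketch of the splitting itself (deterministic rates on a 1-D chain of voxels, not the stochastic SSA operators used by the actual implementation):

```python
import numpy as np

def step_split(conc, dt, k_decay=0.5, d=0.2):
    """One operator-splitting step on a 1-D chain of well-mixed voxels:
    first the reaction operator, then the diffusion operator."""
    # Reaction operator: exact first-order decay over dt
    conc = conc * np.exp(-k_decay * dt)
    # Diffusion operator: explicit exchange between adjacent voxels;
    # the fluxes cancel pairwise, so diffusion conserves total mass
    flux = d * dt * np.diff(conc)
    conc[:-1] += flux
    conc[1:] -= flux
    return conc

c = np.zeros(10)
c[0] = 100.0  # all molecules start in the first voxel
for _ in range(200):
    c = step_split(c, dt=0.1)
print(round(float(c.sum()), 5))  # total mass decays only via the reaction
```

Because diffusion conserves mass exactly, the total after 200 steps equals the initial mass times the accumulated decay factor, which is one easy correctness check on a splitting scheme.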
LAMMPS framework for dynamic bonding and an application modeling DNA
NASA Astrophysics Data System (ADS)
Svaneborg, Carsten
2012-08-01
We have extended the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) to support directional bonds and dynamic bonding. The framework supports stochastic formation of new bonds, breakage of existing bonds, and conversion between bond types. Bond formation can be controlled to limit the maximal functionality of a bead with respect to various bond types. Concomitant with the bond dynamics, angular and dihedral interactions are dynamically introduced between newly connected triplets and quartets of beads, where the interaction type is determined from the local pattern of bead and bond types. When breaking bonds, all angular and dihedral interactions involving broken bonds are removed. The framework allows chemical reactions to be modeled, and we use it to simulate a simplistic, coarse-grained DNA model. The resulting DNA dynamics illustrates the power of the present framework.
Catalogue identifier: AEME_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEME_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU General Public Licence
No. of lines in distributed program, including test data, etc.: 2 243 491
No. of bytes in distributed program, including test data, etc.: 771
Distribution format: tar.gz
Programming language: C++
Computer: Single and multiple core servers
Operating system: Linux/Unix/Windows
Has the code been vectorized or parallelized?: Yes. The code has been parallelized by the use of MPI directives.
RAM: 1 Gb
Classification: 16.11, 16.12
Nature of problem: Simulating coarse-grain models capable of chemistry, e.g. DNA hybridization dynamics.
Solution method: Extending LAMMPS to handle dynamic bonding and directional bonds.
Unusual features: Allows bonds to be created and broken while angular and dihedral interactions are kept consistent.
Additional comments: The distribution file for this program is approximately 36 Mbytes and therefore is not delivered directly when download or E-mail is requested. Instead an html file giving details of how the program can be obtained is sent.
Running time: Hours to days. The examples provided in the distribution take just seconds to run.
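The bond-formation rule with a functionality cap can be sketched as follows (the 1-D distance cutoff, formation probability, and data layout are illustrative assumptions, not LAMMPS input syntax or internals):

```python
import random

def form_bonds(positions, bonds, max_functionality, cutoff, p_form, rng):
    """Stochastically create bonds between beads closer than `cutoff`,
    never exceeding max_functionality bonds per bead."""
    degree = {i: 0 for i in range(len(positions))}
    for i, j in bonds:
        degree[i] += 1
        degree[j] += 1
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            if (i, j) in bonds:
                continue  # already bonded
            if degree[i] >= max_functionality or degree[j] >= max_functionality:
                continue  # functionality cap reached
            dist = abs(positions[i] - positions[j])  # 1-D positions for brevity
            if dist < cutoff and rng.random() < p_form:
                bonds.add((i, j))
                degree[i] += 1
                degree[j] += 1
    return bonds

rng = random.Random(42)
positions = [0.0, 0.3, 0.5, 2.0, 2.2]
bonds = form_bonds(positions, set(), max_functionality=1, cutoff=0.6,
                   p_form=1.0, rng=rng)
print(sorted(bonds))  # [(0, 1), (3, 4)]
```

In the actual framework, each such topology change would also trigger insertion (or removal) of the matching angular and dihedral interactions, which this sketch omits.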
Implementation of Parallel Dynamic Simulation on Shared-Memory vs. Distributed-Memory Environments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Shuangshuang; Chen, Yousu; Wu, Di
2015-12-09
Power system dynamic simulation computes the system response to a sequence of large disturbances, such as sudden changes in generation or load, or a network short circuit followed by protective branch switching operations. It consists of a large set of differential and algebraic equations, which is computationally intensive and challenging to solve using a single-processor based dynamic simulation solution. High-performance computing (HPC) based parallel computing is a very promising technology to speed up the computation and facilitate the simulation process. This paper presents two different parallel implementations of power grid dynamic simulation using Open Multi-processing (OpenMP) on a shared-memory platform, and Message Passing Interface (MPI) on distributed-memory clusters, respectively. The differences between the parallel simulation algorithms and architectures of the two HPC technologies are illustrated, and their performances for running parallel dynamic simulation are compared and demonstrated.
Three dimensional simulations of viscous folding in diverging microchannels
NASA Astrophysics Data System (ADS)
Xu, Bingrui; Chergui, Jalel; Shin, Seungwon; Juric, Damir
2016-11-01
Three-dimensional simulations of the viscous folding in diverging microchannels reported by Cubaud and Mason are performed using the parallel code BLUE for multi-phase flows. The more viscous liquid L1 is injected into the channel from the center inlet, and the less viscous liquid L2 from two side inlets. Liquid L1 takes the form of a thin filament due to hydrodynamic focusing in the long channel that leads to the diverging region. The thread then becomes unstable to a folding instability, due to the longitudinal compressive stress applied to it by the diverging flow of liquid L2. We performed a parameter study in which the flow rate ratio, the viscosity ratio, the Reynolds number, and the shape of the channel were varied relative to a reference model. In our simulations, the cross section of the thread produced by focusing is elliptical rather than circular. The initial folding axis can be either parallel or perpendicular to the narrow dimension of the chamber. In the former case, the folding slowly transforms via twisting to perpendicular folding, or it may remain parallel. The direction of folding onset is determined by the velocity profile and the elliptical shape of the thread cross section in the channel that feeds the diverging part of the cell.
Limits on the Efficiency of Event-Based Algorithms for Monte Carlo Neutron Transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romano, Paul K.; Siegel, Andrew R.
2017-07-01
The traditional form of parallelism in Monte Carlo particle transport simulations, wherein each individual particle history is considered a unit of work, does not lend itself well to data-level parallelism. Event-based algorithms, which were originally used for simulations on vector processors, may offer a path toward better utilizing data-level parallelism in modern computer architectures. In this study, a simple model is developed for estimating the efficiency of the event-based particle transport algorithm under two sets of assumptions. Data collected from simulations of four reactor problems using OpenMC were then used in conjunction with the models to calculate the speedup due to vectorization as a function of the size of the particle bank and the vector width. When each event type is assumed to have constant execution time, the achievable speedup is directly related to the particle bank size. We observed that the bank size generally needs to be at least 20 times greater than the vector size to achieve a vector efficiency greater than 90%. Lastly, when the execution times for events are allowed to vary, the vector speedup is also limited by differences in execution time for events being carried out in a single event-iteration.
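A deliberately crude occupancy model (our own assumption for illustration, not necessarily the model developed in the paper) shows why the particle bank must be much larger than the vector width: as the bank drains, partially filled vectors waste lanes.

```python
from math import ceil

def vector_efficiency(bank_size, width):
    """Toy model (illustrative assumption): the bank drains from
    `bank_size` live particles down to 1, and an event-iteration with
    n live particles costs ceil(n / width) full vector passes.
    Efficiency = useful lanes / total lanes issued."""
    useful = sum(range(1, bank_size + 1))
    issued = width * sum(ceil(n / width) for n in range(1, bank_size + 1))
    return useful / issued

# Efficiency improves as the bank-size-to-vector-width ratio grows.
assert vector_efficiency(160, 8) > vector_efficiency(40, 8)
assert vector_efficiency(1600, 8) > 0.99
```

Even in this simplified picture the efficiency approaches 1 only when the bank holds many vectors' worth of particles, consistent in spirit with the paper's observation that large bank-to-vector ratios are needed for high vector efficiency.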
Parallel tempering for the traveling salesman problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Percus, Allon; Wang, Richard; Hyman, Jeffrey
We explore the potential of parallel tempering as a combinatorial optimization method, applying it to the traveling salesman problem. We compare simulation results of parallel tempering with a benchmark implementation of simulated annealing, and study how different choices of parameters affect the relative performance of the two methods. We find that a straightforward implementation of parallel tempering can outperform simulated annealing in several crucial respects. When parameters are chosen appropriately, both methods yield close approximations to the actual minimum distance for an instance with 200 nodes. However, parallel tempering yields more consistently accurate results when a series of independent simulations are performed. Our results suggest that parallel tempering might offer a simple but powerful alternative to simulated annealing for combinatorial optimization problems.
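As a rough illustration of the method (our own minimal sketch, not the authors' benchmark implementation; the function name and parameter choices below are ours), parallel tempering for the TSP can be written with one replica per temperature, 2-opt proposals within each replica, and Metropolis swap attempts between adjacent temperatures.

```python
import math, random

def tour_length(tour, pts):
    # Total length of a closed tour over the given city coordinates.
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def parallel_tempering_tsp(pts, temps=(0.1, 0.5, 2.0), sweeps=200, seed=0):
    """Minimal parallel tempering sketch: 2-opt moves per replica,
    plus swap attempts between neighbouring temperatures."""
    rng = random.Random(seed)
    n = len(pts)
    reps = [list(range(n)) for _ in temps]
    for r in reps:
        rng.shuffle(r)
    energy = [tour_length(r, pts) for r in reps]
    for _ in range(sweeps):
        for k, T in enumerate(temps):
            # 2-opt proposal: reverse a random segment of the tour.
            i, j = sorted(rng.sample(range(n), 2))
            cand = reps[k][:i] + reps[k][i:j + 1][::-1] + reps[k][j + 1:]
            e = tour_length(cand, pts)
            if e < energy[k] or rng.random() < math.exp((energy[k] - e) / T):
                reps[k], energy[k] = cand, e
        # Swap attempt between a random adjacent temperature pair,
        # accepted with the standard exchange probability.
        k = rng.randrange(len(temps) - 1)
        d = (1 / temps[k] - 1 / temps[k + 1]) * (energy[k] - energy[k + 1])
        if d >= 0 or rng.random() < math.exp(d):
            reps[k], reps[k + 1] = reps[k + 1], reps[k]
            energy[k], energy[k + 1] = energy[k + 1], energy[k]
    best = min(range(len(temps)), key=lambda k: energy[k])
    return reps[best], energy[best]

rng = random.Random(1)
pts = [(rng.random(), rng.random()) for _ in range(20)]
tour, e = parallel_tempering_tsp(pts)
assert sorted(tour) == list(range(20))  # result is a valid tour
```

The hot replicas explore coarsely while the cold replica refines; the swaps let good configurations migrate toward low temperature, which is the mechanism the abstract credits for the method's consistency.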
NASA Technical Reports Server (NTRS)
Loeb, N. G.; Varnai, Tamas; Winker, David M.
1998-01-01
Recent observational studies have shown that satellite retrievals of cloud optical depth based on plane-parallel model theory suffer from systematic biases that depend on viewing geometry, even when observations are restricted to overcast marine stratus layers, arguably the closest to plane-parallel in nature. At moderate to low sun elevations, the plane-parallel model significantly overestimates the reflectance dependence on view angle in the forward-scattering direction but shows a similar dependence in the backscattering direction. Theoretical simulations show that the likely cause of this discrepancy is that the plane-parallel model assumption does not account for subpixel-scale variations in cloud-top height (i.e., "cloud bumps"). Monte Carlo simulations comparing 1D model radiances to radiances from overcast cloud fields with 1) cloud-top height variations but constant cloud volume extinction; 2) flat tops but horizontal variations in cloud volume extinction; and 3) variations in both cloud-top height and cloud extinction are performed over an approximately 4 km x 4 km domain (roughly the size of an individual GAC AVHRR pixel). The comparisons show that when cloud-top height variations are included, departures from 1D theory are remarkably similar (qualitatively) to those obtained observationally. In contrast, when clouds are assumed flat and only cloud extinction is variable, reflectance differences are much smaller and do not show any view-angle dependence. When both cloud-top height and cloud extinction variations are included, however, large increases in cloud extinction variability can enhance reflectance differences.
The reason 3D-1D reflectance differences are more sensitive to cloud-top height variations in the forward-scattering direction (at moderate to low sun elevations) is that photons leaving the cloud field in that direction experience fewer scattering events (low-order scattering) and are restricted to the topmost portions of the cloud. While reflectance deviations from 1D theory are much larger for bumpy clouds than for flat clouds with variable cloud extinction, differences in cloud albedo are comparable for these two cases.
Next Generation Extended Lagrangian Quantum-based Molecular Dynamics
NASA Astrophysics Data System (ADS)
Negre, Christian
2017-06-01
A new framework for extended Lagrangian first-principles molecular dynamics simulations is presented, which overcomes shortcomings of regular, direct Born-Oppenheimer molecular dynamics, while maintaining important advantages of the unified extended Lagrangian formulation of density functional theory pioneered by Car and Parrinello three decades ago. The new framework allows, for the first time, energy-conserving, linear-scaling Born-Oppenheimer molecular dynamics simulations, which is necessary to study larger and more realistic systems over longer simulation times than previously possible. Expensive self-consistent-field optimizations are avoided and the normal integration time steps of regular, direct Born-Oppenheimer molecular dynamics can be used. Linear-scaling electronic structure theory is presented using a graph-based approach that is ideal for parallel calculations on hybrid computer platforms. For the first time, quantum-based Born-Oppenheimer molecular dynamics simulation is becoming a practically feasible approach in simulations of more than 100,000 atoms, representing a competitive alternative to classical polarizable force field methods. In collaboration with: Anders Niklasson, Los Alamos National Laboratory.
NASA Astrophysics Data System (ADS)
Rambalakos, Andreas
Current federal aviation regulations in the United States and around the world mandate the need for aircraft structures to meet damage tolerance requirements throughout the service life. These requirements imply that the damaged aircraft structure must maintain adequate residual strength in order to sustain its integrity, which is accomplished through a continuous inspection program. The multifold objective of this research is to develop a methodology based on a direct Monte Carlo simulation process and to assess the reliability of aircraft structures. Initially, the structure is modeled as a parallel system with active redundancy, comprised of elements with uncorrelated (statistically independent) strengths and subjected to an equal load distribution. Closed-form expressions for the system capacity cumulative distribution function (CDF) are developed by expanding the current expression for the capacity CDF of a parallel system comprising three elements to a parallel system comprising up to six elements. These newly developed expressions are used to check the accuracy of the implementation of a Monte Carlo simulation algorithm to determine the probability of failure of a parallel system comprising an arbitrary number of statistically independent elements. The second objective of this work is to compute the probability of failure of a fuselage skin lap joint under static load conditions through a Monte Carlo simulation scheme by utilizing the residual strength of the fasteners subjected to various initial load distributions and then subjected to a new unequal load distribution resulting from subsequent fastener sequential failures. The final and main objective of this thesis is to present a methodology for computing the resulting gradual deterioration of the reliability of an aircraft structural component by employing a direct Monte Carlo simulation approach.
The uncertainties associated with the time to crack initiation, the probability of crack detection, the exponent in the crack propagation rate (Paris equation) and the yield strength of the elements are considered in the analytical model. The structural component is assumed to consist of a prescribed number of elements. This Monte Carlo simulation methodology is used to determine the required non-periodic inspections so that the reliability of the structural component will not fall below a prescribed minimum level. A sensitivity analysis is conducted to determine the effect of three key parameters on the specification of the non-periodic inspection intervals: namely a parameter associated with the time to crack initiation, the applied nominal stress fluctuation and the minimum acceptable reliability level.
Acceleration of low-energy ions at parallel shocks with a focused transport model
Zuo, Pingbing; Zhang, Ming; Rassoul, Hamid K.
2013-04-10
Here, we present a test particle simulation of the injection and acceleration of low-energy suprathermal particles by parallel shocks with a focused transport model. The focused transport equation contains all the necessary physics of shock acceleration, but avoids the limitation of diffusive shock acceleration (DSA) theory that requires a small pitch angle anisotropy. This simulation verifies that particles with speeds of a fraction of to a few times the shock speed can indeed be directly injected and accelerated into the DSA regime by parallel shocks. At higher energies, starting from a few times the shock speed, the energy spectrum of accelerated particles is a power law with the same spectral index as the solution of standard DSA theory, although the particles are highly anisotropic in the upstream region. The intensity, however, is different from that predicted by DSA theory, indicating a different level of injection efficiency. It is found that the shock strength, the injection speed, and the intensity of an electric cross-shock potential (CSP) jump can affect the injection efficiency of the low-energy particles. A stronger shock has a higher injection efficiency. In addition, if the speed of injected particles is above a few times the shock speed, the produced power-law spectrum is consistent with the prediction of standard DSA theory in both its intensity and spectral index, with an injection efficiency of 1. CSP can increase the injection efficiency through direct particle reflection back upstream, but it has little effect on the energetic particle acceleration once the speed of injected particles is beyond a few times the shock speed. This test particle simulation proves that focused transport theory is an extension of DSA theory with the capability of predicting the efficiency of particle injection.
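For reference, the standard steady-state DSA spectral index that the simulation recovers depends only on the shock compression ratio r = u1/u2:

```latex
f(p) \;\propto\; p^{-\gamma}, \qquad \gamma \;=\; \frac{3r}{r-1},
```

so a strong shock (r = 4) gives the familiar gamma = 4 momentum power law, independent of the injection details; the focused transport calculation supplies the normalization (injection efficiency) that standard DSA theory leaves unspecified.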
A parallel direct numerical simulation of dust particles in a turbulent flow
NASA Astrophysics Data System (ADS)
Nguyen, H. V.; Yokota, R.; Stenchikov, G.; Kocurek, G.
2012-04-01
Due to their effects on radiation transport, aerosols play an important role in the global climate. Mineral dust aerosol is a predominant natural aerosol in the desert and semi-desert regions of the Middle East and North Africa (MENA). The Arabian Peninsula is one of the three predominant source regions on the planet "exporting" dust to almost the entire world. Mineral dust aerosols make up about 50% of the tropospheric aerosol mass and therefore produce a significant impact on the Earth's climate and the atmospheric environment, especially in the MENA region, which is characterized by frequent dust storms and large aerosol generation. Understanding the mechanisms of dust emission, transport and deposition is therefore essential for correctly representing dust in numerical climate prediction. In this study we present results of numerical simulations of dust particles in a turbulent flow to study the interaction between dust and the atmosphere. Homogeneous and passive dust particles in the boundary layer are entrained and advected under the influence of a turbulent flow. Currently no interactions between particles are included. Turbulence is resolved through direct numerical simulation using a parallel incompressible Navier-Stokes flow solver. Model output provides information on particle trajectories, turbulent transport of dust and effects of gravity on dust motion, which will be used to compare with the wind tunnel experiments at the University of Texas at Austin. Results of testing of parallel efficiency and scalability are provided. Future versions of the model will include air-particle momentum exchange, varying particle sizes and saltation effects. The results will be used for interpreting wind tunnel and field experiments and for improvement of dust generation parameterizations in meteorological models.
Mineral lineation produced by 3-D rotation of rigid inclusions in confined viscous simple shear
NASA Astrophysics Data System (ADS)
Marques, Fernando O.
2016-08-01
The solid-state flow of rocks commonly produces a parallel arrangement of elongate minerals with their longest axes coincident with the direction of flow-a mineral lineation. However, this does not conform to Jeffery's theory of the rotation of rigid ellipsoidal inclusions (REIs) in viscous simple shear, because rigid inclusions rotate continuously with applied shear. In 2-dimensional (2-D) flow, the REI's greatest axis (e1) is already in the shear direction; therefore, the problem is to find mechanisms that can prevent the rotation of the REI about one axis, the vorticity axis. In 3-D flow, the problem is to find a mechanism that can make e1 rotate towards the shear direction, and so generate a mineral lineation by rigid rotation about two axes. 3-D analogue and numerical modelling was used to test the effects of confinement on REI rotation and, for narrow channels (shear zone thickness over inclusion's least axis, Wr < 2), the results show that: (1) the rotational behaviour deviates greatly from Jeffery's model; (2) inclusions with aspect ratio Ar (greatest over least principal axis, e1/e3) > 1 can rotate backwards from an initial orientation with e1 parallel to the shear plane, in great contrast to Jeffery's model; (3) back rotation is limited because inclusions reach a stable equilibrium orientation; (4) most importantly, and in contrast to Jeffery's model and to the 2-D simulations, in 3-D the confined REI gradually rotated about an axis orthogonal to the shear plane towards an orientation with e1 parallel to the shear direction, thus producing a lineation parallel to the shear direction. The modelling results lead to the conclusion that confined simple shear can be responsible for the mineral alignment (lineation) observed in ductile shear zones.
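For context, the unconfined behaviour from which the confined results deviate is Jeffery's classical solution: a rigid spheroid of aspect ratio r_e in unbounded simple shear at shear rate gamma-dot never stops rotating, completing a closed (Jeffery) orbit with period

```latex
T \;=\; \frac{2\pi}{\dot{\gamma}}\left(r_e + \frac{1}{r_e}\right),
```

so an elongate inclusion merely slows near the shear direction but always tumbles onward, which is why unconfined simple shear alone cannot lock e1 into a stable lineation.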
Pesce, Lorenzo L.; Lee, Hyong C.; Hereld, Mark; ...
2013-01-01
Our limited understanding of the relationship between the behavior of individual neurons and large neuronal networks is an important limitation in current epilepsy research and may be one of the main causes of our inadequate ability to treat it. Addressing this problem directly via experiments is impossibly complex; thus, we have been developing and studying medium-large-scale simulations of detailed neuronal networks to guide us. Flexibility in the connection schemas and a complete description of the cortical tissue seem necessary for this purpose. In this paper we examine some of the basic issues encountered in these multiscale simulations. We have determined the detailed behavior of two such simulators on parallel computer systems. The observed memory and computation-time scaling behavior for a distributed memory implementation were very good over the range studied, both in terms of network sizes (2,000 to 400,000 neurons) and processor pool sizes (1 to 256 processors). Our simulations required between a few megabytes and about 150 gigabytes of RAM and lasted between a few minutes and about a week, well within the capability of most multinode clusters. Therefore, simulations of epileptic seizures on networks with millions of cells should be feasible on current supercomputers.
NASA Astrophysics Data System (ADS)
Beld, E.; Seevinck, P. R.; Lagendijk, J. J. W.; Viergever, M. A.; Moerland, M. A.
2016-09-01
In the process of developing a robotic MRI-guided high-dose-rate (HDR) prostate brachytherapy treatment, the influence of the MRI scanner’s magnetic field on the dose distribution needs to be investigated. A magnetic field causes a deflection of electrons in the plane perpendicular to the magnetic field, and it leads to less lateral scattering along the direction parallel with the magnetic field. Monte Carlo simulations were carried out to determine the influence of the magnetic field on the electron behavior and on the total dose distribution around an Ir-192 source. Furthermore, the influence of air pockets being present near the source was studied. The Monte Carlo package Geant4 was utilized for the simulations. The simulated geometries consisted of a simplified point source inside a water phantom. Magnetic field strengths of 0 T, 1.5 T, 3 T, and 7 T were considered. The simulation results demonstrated that the dose distribution was nearly unaffected by the magnetic field for all investigated magnetic field strengths. Evidence was found that, from a dose perspective, the HDR prostate brachytherapy treatment using Ir-192 can be performed safely inside the MRI scanner. No need was found to account for the magnetic field during treatment planning. Nevertheless, the presence of air pockets in close vicinity to the source, particularly along the direction parallel with the magnetic field, appeared to be an important point for consideration.
Decision-directed detector for overlapping PCM/NRZ signals.
NASA Technical Reports Server (NTRS)
Wang, C. D.; Noack, T. L.
1973-01-01
A decision-directed (DD) technique for the detection of overlapping PCM/NRZ signals in the presence of white Gaussian noise is investigated. The performance of the DD detector is represented by probability of error Pe versus input signal-to-noise ratio (SNR). To examine how much improvement in performance can be achieved with this technique, Pe's with and without DD feedback are evaluated in parallel. Further, analytical results are compared with those found by Monte Carlo simulations. The results are in good agreement.
System Level Applications of Adaptive Computing (SLAAC)
2003-11-01
saved clock cycles, as the computation cycle time was directly proportional to the number of bitplanes in the image. The simulation was undertaken in... [Figure 3: PPI algorithm architecture] ...parallel processing of data. The total throughput in these extended architectures is directly proportional to the amount of resources (CLB slices
A Cartesian cut cell method for rarefied flow simulations around moving obstacles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dechristé, G., E-mail: Guillaume.Dechriste@math.u-bordeaux1.fr; CNRS, IMB, UMR 5251, F-33400 Talence; Mieussens, L., E-mail: Luc.Mieussens@math.u-bordeaux1.fr
2016-06-01
For accurate simulations of rarefied gas flows around moving obstacles, we propose a cut cell method on Cartesian grids: it allows exact conservation and accurate treatment of boundary conditions. Our approach is designed to treat Cartesian cells and various kinds of cut cells by the same algorithm, with no need to identify the specific shape of each cut cell. This makes the implementation quite simple, and allows a direct extension to 3D problems. Such simulations are also made possible by using an adaptive mesh refinement technique and a hybrid parallel implementation. This is illustrated by several test cases, including a 3D unsteady simulation of the Crookes radiometer.
Hybrid Parallel-Slant Hole Collimators for SPECT Imaging
NASA Astrophysics Data System (ADS)
Bai, Chuanyong; Shao, Ling; Ye, Jinghan; Durbin, M.; Petrillo, M.
2004-06-01
We propose a new collimator geometry, the hybrid parallel-slant (HPS) hole geometry, to improve sensitivity for SPECT imaging with large field of view (LFOV) gamma cameras. A HPS collimator has one segment with parallel holes and one or more segments with slant holes. The collimator can be mounted on a conventional SPECT LFOV system that uses parallel-beam collimators, and no additional detector or collimator motion is required for data acquisition. The parallel segment of the collimator allows for the acquisition of a complete data set of the organs-of-interest and the slant segments provide additional data. In this work, simulation studies of an MCAT phantom were performed with a HPS collimator with one slant segment. The slant direction points from patient head to patient feet with a slant angle of 30°. We simulated 64 projection views over 180° with the modeling of nonuniform attenuation effect, and then reconstructed images using an MLEM algorithm that incorporated the hybrid geometry. It was shown that sensitivity to the cardiac region of the phantom was increased by approximately 50% when using the HPS collimator compared with a parallel-hole collimator. No visible artifacts were observed in the myocardium and the signal-to-noise ratio (SNR) of the myocardium walls was improved. Compared with collimators with other geometries, using a HPS collimator has the following advantages: (a) significant sensitivity increase; (b) a complete data set obtained from the parallel segment that allows for artifact-free image reconstruction; and (c) no additional collimator or detector motion. This work demonstrates the potential value of hybrid geometry in collimator design for LFOV SPECT imaging.
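The MLEM reconstruction referred to above uses the standard maximum-likelihood expectation-maximization update; incorporating the hybrid geometry (and the nonuniform attenuation) amounts to building it into the system matrix a_ij, the probability that a photon emitted in voxel j is recorded in projection bin i:

```latex
\lambda_j^{(n+1)} \;=\; \frac{\lambda_j^{(n)}}{\sum_i a_{ij}}
\sum_i a_{ij}\,\frac{y_i}{\sum_k a_{ik}\,\lambda_k^{(n)}},
```

where y_i are the measured counts and lambda_j the estimated activity; the parallel and slant segments simply contribute different rows of a_ij.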
Stability of Iowa mutant and wild type Aβ-peptide aggregates
NASA Astrophysics Data System (ADS)
Alred, Erik J.; Scheele, Emily G.; Berhanu, Workalemahu M.; Hansmann, Ulrich H. E.
2014-11-01
Recent experiments indicate a connection between the structure of amyloid aggregates and their cytotoxicity as related to neurodegenerative diseases. Of particular interest is the Iowa Mutant, which causes early-onset of Alzheimer's disease. While wild-type Amyloid β-peptides form only parallel beta-sheet aggregates, the mutant also forms meta-stable antiparallel beta sheets. Since these structural variations may cause the difference in the pathological effects of the two Aβ-peptides, we have studied in silico the relative stability of the wild type and Iowa mutant in both parallel and antiparallel forms. We compare regular molecular dynamics simulations with such where the viscosity of the samples is reduced, which, we show, leads to higher sampling efficiency. By analyzing and comparing these four sets of all-atom molecular dynamics simulations, we probe the role of the various factors that could lead to the structural differences. Our analysis indicates that the parallel forms of both wild type and Iowa mutant aggregates are stable, while the antiparallel aggregates are meta-stable for the Iowa mutant and not stable for the wild type. The differences result from the direct alignment of hydrophobic interactions in the in-register parallel oligomers, making them more stable than the antiparallel aggregates. The slightly higher thermodynamic stability of the Iowa mutant fibril-like oligomers in its parallel organization over that in antiparallel form is supported by previous experimental measurements showing slow inter-conversion of antiparallel aggregates into parallel ones. Knowledge of the mechanism that selects between parallel and antiparallel conformations and determines their relative stability may open new avenues for the development of therapies targeting familial forms of early-onset Alzheimer's disease.
NASA Astrophysics Data System (ADS)
Bolis, A.; Cantwell, C. D.; Moxey, D.; Serson, D.; Sherwin, S. J.
2016-09-01
A hybrid parallelisation technique for distributed memory systems is investigated for a coupled Fourier-spectral/hp element discretisation of domains characterised by geometric homogeneity in one or more directions. The performance of the approach is mathematically modelled in terms of operation counts and communication costs to identify the most efficient parameter choices. The model is calibrated to target a specific hardware platform, after which it is shown to accurately predict the performance in the hybrid regime. The method is applied to modelling turbulent flow using the incompressible Navier-Stokes equations in an axisymmetric pipe and square channel. The hybrid method extends the practical limitations of the discretisation, allowing greater parallelism and reduced wall times. Performance is shown to continue to scale when both parallelisation strategies are used.
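The kind of performance model described, balancing per-process compute against communication overhead to choose the split between the two levels of parallelism, can be sketched as follows. The cost terms and constants here are purely illustrative assumptions, not the calibrated model of the paper.

```python
def hybrid_time(n_elem, n_fourier, p_elem, p_fourier,
                t_flop=1e-9, t_msg=1e-6):
    """Crude cost model (illustrative assumptions only): compute time
    scales with the work assigned to each process, while each level of
    parallelism contributes a communication term that grows with its
    process count."""
    compute = (n_elem / p_elem) * (n_fourier / p_fourier) * t_flop
    comm = (p_elem + p_fourier) * t_msg
    return compute + comm

# With a fixed total core count, sweep the split between elemental and
# Fourier parallelism and pick the cheapest combination.
P = 64
splits = [(p, P // p) for p in (1, 2, 4, 8, 16, 32, 64)]
best = min(splits, key=lambda s: hybrid_time(4096, 1024, *s))
assert hybrid_time(4096, 1024, *best) <= hybrid_time(4096, 1024, 64, 1)
```

The point of such a model is exactly what the abstract describes: once calibrated to a platform, it lets one choose the hybrid split analytically instead of by exhaustive benchmarking.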
NASA Astrophysics Data System (ADS)
Lourderaj, Upakarasamy; Sun, Rui; Kohale, Swapnil C.; Barnes, George L.; de Jong, Wibe A.; Windus, Theresa L.; Hase, William L.
2014-03-01
The interface for VENUS and NWChem, and the resulting software package for direct dynamics simulations are described. The coupling of the two codes is considered to be a tight coupling since the two codes are compiled and linked together and act as one executable with data being passed between the two codes through routine calls. The advantages of this type of coupling are discussed. The interface has been designed to have as little interference as possible with the core codes of both VENUS and NWChem. VENUS is the code that propagates the direct dynamics trajectories and, therefore, is the program that drives the overall execution of VENUS/NWChem. VENUS has remained an essentially sequential code, which uses the highly parallel structure of NWChem. Subroutines of the interface that accomplish the data transmission and communication between the two computer programs are described. Recent examples of the use of VENUS/NWChem for direct dynamics simulations are summarized.
NASA Technical Reports Server (NTRS)
Stupl, Jan; Faber, Nicolas; Foster, Cyrus; Yang, Fan Yang; Nelson, Bron; Aziz, Jonathan; Nuttall, Andrew; Henze, Chris; Levit, Creon
2014-01-01
This paper provides an updated efficiency analysis of the LightForce space debris collision avoidance scheme. LightForce aims to prevent collisions on warning by utilizing photon pressure from ground based, commercial off the shelf lasers. Past research has shown that a few ground-based systems consisting of 10 kilowatt class lasers directed by 1.5 meter telescopes with adaptive optics could lower the expected number of collisions in Low Earth Orbit (LEO) by an order of magnitude. Our simulation approach utilizes the entire Two Line Element (TLE) catalogue in LEO for a given day as initial input. Least-squares fitting of a TLE time series is used for an improved orbit estimate. We then calculate the probability of collision for all LEO objects in the catalogue for a time step of the simulation. The conjunctions that exceed a threshold probability of collision are then engaged by a simulated network of laser ground stations. After those engagements, the perturbed orbits are used to re-assess the probability of collision and evaluate the efficiency of the system. This paper describes new simulations with three updated aspects: 1) By utilizing a highly parallel simulation approach employing hundreds of processors, we have extended our analysis to a much broader dataset. The simulation time is extended to one year. 2) We analyze not only the efficiency of LightForce on conjunctions that naturally occur, but also take into account conjunctions caused by orbit perturbations due to LightForce engagements. 3) We use a new simulation approach that is regularly updating the LightForce engagement strategy, as it would be during actual operations. In this paper we present our simulation approach to parallelize the efficiency analysis, its computational performance and the resulting expected efficiency of the LightForce collision avoidance system. 
Results indicate that utilizing a network of four LightForce stations with 20 kilowatt lasers, 85% of all conjunctions with a probability of collision Pc > 10^-6 can be mitigated.
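The screening step described above, flagging conjunctions whose probability of collision exceeds a threshold and handing the worst cases to the laser network, can be sketched as follows. The `Conjunction` type, field names, and threshold handling are illustrative assumptions, not the LightForce implementation.

```python
# Hypothetical sketch of the conjunction-screening step: flag conjunctions
# whose estimated probability of collision (Pc) exceeds a threshold, worst
# first, for simulated laser engagement. All names are illustrative.
from dataclasses import dataclass

PC_THRESHOLD = 1e-6  # engagement threshold on probability of collision


@dataclass
class Conjunction:
    obj_a: str  # catalogue IDs of the two objects (illustrative)
    obj_b: str
    pc: float   # estimated probability of collision for this conjunction


def select_engagements(conjunctions):
    """Return conjunctions whose Pc exceeds the threshold, worst first."""
    flagged = [c for c in conjunctions if c.pc > PC_THRESHOLD]
    return sorted(flagged, key=lambda c: c.pc, reverse=True)


events = [
    Conjunction("25544", "40697", 3e-5),
    Conjunction("12345", "67890", 2e-8),
    Conjunction("11111", "22222", 9e-6),
]
worst = select_engagements(events)
```

In the paper's pipeline each flagged conjunction would then be engaged, the perturbed orbits re-propagated, and Pc re-assessed; the sketch covers only the selection step.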
Correlation Analysis between Spin, Velocity Shear, and Vorticity of Baryonic and Dark Matter Halos
NASA Astrophysics Data System (ADS)
Liu, Li-li
2017-04-01
Based on cosmological hydrodynamic simulations, we investigate the correlations between the spin, velocity shear, and vorticity in dark matter halos, as well as the relationship between the baryonic matter and the dark matter. We find that (1) the difference between the vorticity of baryonic matter and that of dark matter is evident on scales of < 0.2 h-1 Mpc; (2) the vorticity of baryonic matter exhibits a stronger correlation with the velocity shear tensor than the vorticity of dark matter does; and (3) the spin direction of small-mass dark matter halos tends to be parallel to the direction of their host filaments, while that of massive dark matter halos tends to be perpendicular to it, and the strength of this correlation depends on the size of the simulation box and on the simulation accuracy. These factors may complicate the relationship between the spins of dark matter halos and those of galaxies, and affect the correlation between galaxy spins and the nearby large-scale structures.
Parallel Signal Processing and System Simulation using aCe
NASA Technical Reports Server (NTRS)
Dorband, John E.; Aburdene, Maurice F.
2003-01-01
Recently, networked and cluster computation have become very popular for both signal processing and system simulation. The aCe language is ideally suited for parallel signal processing applications and system simulation since it allows the programmer to explicitly express the computations that can be performed concurrently. In addition, this new C-based parallel language for architecture-adaptive programming allows programmers to implement algorithms and system simulation applications on parallel architectures with the assurance that future parallel architectures will be able to run their applications with a minimum of modification. In this paper, we focus on some fundamental features of aCe and present a signal processing application (FFT).
NASA Astrophysics Data System (ADS)
Yan, Hui; Wang, K. G.; Jones, Jim E.
2016-06-01
A parallel algorithm for large-scale three-dimensional phase-field simulations of phase coarsening is developed and implemented on high-performance architectures. From the large-scale simulations, a new kinetics of phase coarsening in the region of ultrahigh volume fraction is found. The parallel implementation is capable of harnessing the greater computer power available from high-performance architectures. The parallelized code enables an increase in three-dimensional simulation system size up to a 512^3 grid cube. Through the parallelized code, practical runtime can be achieved for three-dimensional large-scale simulations, and the statistical significance of the results from these high-resolution parallel simulations is greatly improved over that obtainable from serial simulations. A detailed performance analysis on speed-up and scalability is presented, showing good scalability which improves with increasing problem size. In addition, a model for prediction of runtime is developed, which shows good agreement with actual run time from numerical tests.
NASA Technical Reports Server (NTRS)
Menon, R. G.; Kurdila, A. J.
1992-01-01
This paper presents a concurrent methodology to simulate the dynamics of flexible multibody systems with a large number of degrees of freedom. A general class of open-loop structures is treated and a redundant coordinate formulation is adopted. A range space method is used in which the constraint forces are calculated using a preconditioned conjugate gradient method. By using a preconditioner motivated by the regular ordering of the directed graph of the structures, it is shown that the method is of order N in the total number of coordinates of the system. The overall formulation has the advantage that it permits fine-grained parallelization and does not rely on system topology to induce concurrency. It can be efficiently implemented on the present generation of parallel computers with a large number of processors. Validation of the method is presented via numerical simulations of space structures incorporating a large number of flexible degrees of freedom.
Parallelization of the Coupled Earthquake Model
NASA Technical Reports Server (NTRS)
Block, Gary; Li, P. Peggy; Song, Yuhe T.
2007-01-01
This Web-based tsunami simulation system allows users to remotely run a model on JPL's supercomputers for a given undersea earthquake. At the time of this reporting, predicting tsunamis over the Internet had never been done before. This new code directly couples the earthquake model and the ocean model on parallel computers and improves simulation speed. Seismometers can only detect information from earthquakes; they cannot detect whether or not a tsunami may occur as a result of the earthquake. When earthquake-tsunami models are coupled with the improved computational speed of modern, high-performance computers and constrained by remotely sensed data, they are able to provide early warnings for those coastal regions at risk. The software is capable of testing NASA's satellite observations of tsunamis. It has been successfully tested for several historical tsunamis, has passed all alpha and beta testing, and is well documented for users.
Suppressing correlations in massively parallel simulations of lattice models
NASA Astrophysics Data System (ADS)
Kelling, Jeffrey; Ódor, Géza; Gemming, Sibylle
2017-11-01
For lattice Monte Carlo simulations parallelization is crucial to make studies of large systems and long simulation time feasible, while sequential simulations remain the gold-standard for correlation-free dynamics. Here, various domain decomposition schemes are compared, concluding with one which delivers virtually correlation-free simulations on GPUs. Extensive simulations of the octahedron model for 2 + 1 dimensional Kardar-Parisi-Zhang surface growth, which is very sensitive to correlation in the site-selection dynamics, were performed to show self-consistency of the parallel runs and agreement with the sequential algorithm. We present a GPU implementation providing a speedup of about 30 × over a parallel CPU implementation on a single socket and at least 180 × with respect to the sequential reference.
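The idea of a correlation-aware domain decomposition can be illustrated with a minimal checkerboard scheme, in which all sites of one sublattice parity may be updated concurrently because no two of them share a neighbour. This is a generic NumPy sketch with a toy update rule, not the paper's GPU scheme for the octahedron model.

```python
# Minimal checkerboard (two-sublattice) decomposition sketch for a lattice
# Monte Carlo sweep. Sites with the same (i + j) parity have no common
# neighbours, so they can be updated in parallel; the vectorised update
# below mimics what independent GPU threads would do. The flip rule is a toy.
import numpy as np

rng = np.random.default_rng(0)
L = 8
lattice = rng.integers(0, 2, size=(L, L))  # toy binary lattice


def sweep_parity(lattice, parity):
    """Update all sites whose (i + j) parity matches `parity`."""
    i, j = np.indices(lattice.shape)
    mask = (i + j) % 2 == parity
    # toy update rule: flip each selected site with probability 1/2
    flips = rng.integers(0, 2, size=lattice.shape).astype(bool)
    lattice[mask & flips] ^= 1
    return lattice


for parity in (0, 1):  # one full sweep = both sublattices
    lattice = sweep_parity(lattice, parity)
```

As the abstract notes, such schemes are not automatically correlation-free: the fixed update order of the sublattices can itself introduce correlations, which is what the paper's comparison of decomposition schemes addresses.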
SPEEDES - A multiple-synchronization environment for parallel discrete-event simulation
NASA Technical Reports Server (NTRS)
Steinman, Jeff S.
1992-01-01
Synchronous Parallel Environment for Emulation and Discrete-Event Simulation (SPEEDES) is a unified parallel simulation environment. It supports multiple-synchronization protocols without requiring users to recompile their code. When a SPEEDES simulation runs on one node, all the extra parallel overhead is removed automatically at run time. When the same executable runs in parallel, the user preselects the synchronization algorithm from a list of options. SPEEDES currently runs on UNIX networks and on the California Institute of Technology/Jet Propulsion Laboratory Mark III Hypercube. SPEEDES also supports interactive simulations. Featured in the SPEEDES environment is a new parallel synchronization approach called Breathing Time Buckets. This algorithm uses some of the conservative techniques found in Time Bucket synchronization, along with the optimism that characterizes the Time Warp approach. A mathematical model derived from first principles predicts the performance of Breathing Time Buckets. Along with the Breathing Time Buckets algorithm, this paper discusses the rules for processing events in SPEEDES, describes the implementation of various other synchronization protocols supported by SPEEDES, describes some new ones for the future, discusses interactive simulations, and then gives some performance results.
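The core idea behind Breathing Time Buckets, processing events optimistically but committing only those that fall below the "event horizon" (the earliest timestamp of any message generated during the current cycle), can be sketched as follows. This is an assumption-labelled toy, not SPEEDES source code.

```python
# Sketch of one Breathing Time Buckets cycle: pop events optimistically in
# timestamp order, track the earliest timestamp of any message they would
# generate (the event horizon), and commit only events below that horizon.
# The interface (process_cycle, generate) is illustrative.
import heapq


def process_cycle(pending, generate):
    """pending: list of (timestamp, event) tuples.
    generate(event): returns timestamps of messages the event would send."""
    heapq.heapify(pending)
    horizon = float("inf")
    processed = []
    while pending and pending[0][0] < horizon:
        t, ev = heapq.heappop(pending)
        processed.append((t, ev))
        for msg_t in generate(ev):
            horizon = min(horizon, msg_t)
    # only events strictly before the horizon are safe to commit
    committed = [p for p in processed if p[0] < horizon]
    return committed, horizon
```

In the real algorithm the horizon is agreed globally across processors each cycle; events processed beyond it are rolled back (the "optimism" borrowed from Time Warp), while the cycle structure resembles Time Bucket synchronization.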
NASA Technical Reports Server (NTRS)
Eidson, T. M.; Erlebacher, G.
1994-01-01
While parallel computers offer significant computational performance, it is generally necessary to evaluate several programming strategies. Two programming strategies for a fairly common problem, a periodic tridiagonal solver, are developed and evaluated. Simple model calculations as well as timing results are presented to evaluate the various strategies. The particular tridiagonal solver evaluated is used in many computational fluid dynamic simulation codes. The feature that makes this algorithm unique is that these simulation codes usually require simultaneous solutions for multiple right-hand-sides (RHS) of the system of equations. Each RHS solution is independent and thus can be computed in parallel. Thus a Gaussian elimination type algorithm can be used in a parallel computation and the more complicated approaches such as cyclic reduction are not required. The two strategies are a transpose strategy and a distributed solver strategy. For the transpose strategy, the data is moved so that a subset of all the RHS problems is solved on each of the several processors. This usually requires significant data movement between processor memories across a network. The second strategy instead passes the data across processor boundaries in a chained manner. This usually requires significantly less data movement. An approach to accomplish this second strategy in a near-perfect load-balanced manner is developed. In addition, an algorithm is shown to directly transform a sequential Gaussian elimination type algorithm into the parallel chained, load-balanced algorithm.
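The transpose strategy reduces to an embarrassingly parallel collection of ordinary tridiagonal solves once each worker owns complete RHS columns. A minimal sketch of the per-worker kernel, a Thomas (Gaussian elimination) solve vectorised over many right-hand sides, is given below; a non-periodic variant is shown for clarity, whereas the paper's solver is periodic and distributed.

```python
# Thomas-algorithm solve for a tridiagonal system with many independent
# right-hand sides at once. a = subdiagonal (a[0] unused), b = diagonal,
# c = superdiagonal (c[-1] unused), d = (n, m) array of m RHS columns.
# Non-periodic sketch of the per-processor kernel in a transpose strategy.
import numpy as np


def thomas_many_rhs(a, b, c, d):
    n, m = d.shape
    cp = np.empty(n)
    dp = np.empty((n, m))
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):            # forward elimination
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.empty((n, m))
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):   # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

Because each RHS column is independent, the inner arithmetic vectorises over all columns, which is exactly the property the transpose strategy exploits after redistributing the data.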
Population annealing with weighted averages: A Monte Carlo method for rough free-energy landscapes
NASA Astrophysics Data System (ADS)
Machta, J.
2010-08-01
The population annealing algorithm introduced by Hukushima and Iba is described. Population annealing combines simulated annealing and Boltzmann weighted differential reproduction within a population of replicas to sample equilibrium states. Population annealing gives direct access to the free energy. It is shown that unbiased measurements of observables can be obtained by weighted averages over many runs with weight factors related to the free-energy estimate from the run. Population annealing is well suited to parallelization and may be a useful alternative to parallel tempering for systems with rough free-energy landscapes such as spin glasses. The method is demonstrated for spin glasses.
Collisionless dissipation processes in quasi-parallel shocks. [in solar wind
NASA Technical Reports Server (NTRS)
Quest, K. B.; Forslund, D. W.; Brackbill, J. U.; Lee, K.
1983-01-01
The evolution of collisionless, quasi-parallel shocks (the angle between the shock normal and the upstream magnetic field being less than 45 deg) is examined using two-dimensional particle simulations. Reflected ions upstream from the shock are observed with average guiding-center velocity and gyrational energy which agree well with the prediction of simple specular reflection. Strong ion heating through the shock ramp is apparently caused by large-amplitude whistler turbulence. A flux of suprathermal electrons is also observed along the magnetic field direction. Much stronger ion heating than electron heating occurs in the shock. The relevance of this work to the earth's bow shock is discussed.
Random number generators for large-scale parallel Monte Carlo simulations on FPGA
NASA Astrophysics Data System (ADS)
Lin, Y.; Wang, F.; Liu, B.
2018-05-01
Through parallelization, field programmable gate arrays (FPGAs) can achieve unprecedented speeds in large-scale parallel Monte Carlo (LPMC) simulations. FPGAs present both new constraints and new opportunities for the implementation of random number generators (RNGs), which are key elements of any Monte Carlo (MC) simulation system. Using empirical and application-based tests, this study evaluates all four RNGs used in previous FPGA-based MC studies, together with newly proposed FPGA implementations of two well-known high-quality RNGs suitable for LPMC studies on FPGA. One of the newly proposed FPGA implementations, a parallel version of the additive lagged Fibonacci generator (Parallel ALFG), is found to be the best among the evaluated RNGs in fulfilling the needs of LPMC simulations on FPGA.
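For reference, the recurrence behind an additive lagged Fibonacci generator is x_n = (x_{n-r} + x_{n-s}) mod 2^m. The sketch below uses the common lags (r, s) = (55, 24) and m = 32; the lags, seeding rule, and class interface are illustrative assumptions, and the paper's Parallel ALFG additionally assigns distinct, non-overlapping cycles to each hardware lane.

```python
# Minimal additive lagged Fibonacci generator (ALFG) sketch:
#   x_n = (x_{n-55} + x_{n-24}) mod 2^32
# implemented with a circular buffer of the last 55 values. Parameters
# are the common (55, 24) lags; at least one seed word must be odd for
# the generator to reach its maximal period.
class ALFG:
    R, S = 55, 24
    MASK = (1 << 32) - 1

    def __init__(self, seed_state):
        assert len(seed_state) == self.R
        assert any(x & 1 for x in seed_state)  # need at least one odd word
        self.state = list(seed_state)
        self.i = 0  # index of the oldest value, x_{n-55}

    def next(self):
        j = self.i
        # state[j] holds x_{n-55}; state[(j - 24) % 55] holds x_{n-24}
        v = (self.state[j] + self.state[(j - self.S) % self.R]) & self.MASK
        self.state[j] = v            # overwrite oldest with x_n
        self.i = (j + 1) % self.R
        return v
```

ALFGs map well to FPGAs because each output needs only one addition and two buffer reads, with no multiplication or division.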
A sweep algorithm for massively parallel simulation of circuit-switched networks
NASA Technical Reports Server (NTRS)
Gaujal, Bruno; Greenberg, Albert G.; Nicol, David M.
1992-01-01
A new massively parallel algorithm is presented for simulating large asymmetric circuit-switched networks, controlled by a randomized-routing policy that includes trunk-reservation. A single instruction multiple data (SIMD) implementation is described, and corresponding experiments on a 16384 processor MasPar parallel computer are reported. A multiple instruction multiple data (MIMD) implementation is also described, and corresponding experiments on an Intel IPSC/860 parallel computer, using 16 processors, are reported. By exploiting parallelism, our algorithm increases the possible execution rate of such complex simulations by as much as an order of magnitude.
Dissipation and particle energization in moderate to low beta turbulent plasma via PIC simulations
NASA Astrophysics Data System (ADS)
Makwana, Kirit; Li, Hui; Guo, Fan; Li, Xiaocan
2017-05-01
We simulate decaying turbulence in electron-positron pair plasmas using a fully-kinetic particle-in-cell (PIC) code. We run two simulations with moderate-to-low plasma β (the ratio of thermal pressure to magnetic pressure). The energy decay rate is found to be similar in both cases. The perpendicular wave-number spectrum of magnetic energy shows a slope between k⊥^(-1.3) and k⊥^(-1.1), where the perpendicular (⊥) and parallel (∥) directions are defined with respect to the magnetic field. The particle kinetic energy distribution function shows the formation of a non-thermal feature in the case of lower plasma β, with a slope close to E^(-1). The correlation between thin turbulent current sheets and Ohmic heating by the dot product of electric field (E) and current density (J) is investigated. Heating by the parallel E∥ · J∥ term dominates the perpendicular E⊥ · J⊥ term. Regions of strong E∥ · J∥ are spatially well-correlated with regions of intense current sheets, which also appear correlated with regions of strong E∥ in the low β simulation, suggesting an important role of magnetic reconnection in the dissipation of low β plasma turbulence.
Interpretations of the impact of cross-field drifts on divertor flows in DIII-D with UEDGE
Jaervinen, Aaro E.; Allen, Steve L.; Groth, Mathias; ...
2017-01-27
Simulations using the multi-fluid code UEDGE indicate that, in low confinement (L-mode) plasmas in DIII-D, recycling driven flows dominate poloidal particle flows in the divertor, whereas E×B drift flows dominate the radial particle flows. In contrast, in high confinement (H-mode) conditions E×B drift flows dominate both poloidal and radial particle flows in the divertor. UEDGE indicates that the toroidal C2+ flow velocities in the divertor plasma are entrained within 30% to the background deuterium flow in both L- and H-mode plasmas in the plasma region where the CIII 465 nm emission is measured. Therefore, UEDGE indicates that the Carbon Doppler Coherence Imaging System (CIS), measuring the toroidal velocity of the C2+ ions, can provide insight into the deuterium flows in the divertor. Parallel-to-B velocity dominates the toroidal divertor flow; the direct drift impact is less than 1%. The toroidal divertor flow is predicted to reverse when the magnetic field is reversed. This is explained by the parallel-B flow towards the nearest divertor plate corresponding to opposite toroidal directions in opposite toroidal field configurations. Due to strong poloidal E×B flows in H-mode, net poloidal particle transport can be in the opposite direction to the poloidal component of the parallel-B plasma flow.
Relation of Parallel Discrete Event Simulation algorithms with physical models
NASA Astrophysics Data System (ADS)
Shchur, L. N.; Shchur, L. V.
2015-09-01
We extend the concept of local simulation times in parallel discrete event simulation (PDES) in order to take into account the architecture of current hardware and software in high-performance computing. We briefly review previous research on the mapping of PDES onto physical problems, and emphasise how physical results may help to predict the behaviour of parallel algorithms.
Parallel Simulation of Subsonic Fluid Dynamics on a Cluster of Workstations.
1994-11-01
This report describes the parallel simulation of subsonic fluid dynamics on a cluster of workstations, for example, the flow of air inside wind musical instruments. Typical simulations achieve 80% parallel efficiency (speedup/processors) using 20 HP-Apollo workstations.
The Acceleration of Thermal Protons and Minor Ions at a Quasi-Parallel Interplanetary Shock
NASA Astrophysics Data System (ADS)
Giacalone, J.; Lario, D.; Lepri, S. T.
2017-12-01
We compare the results from self-consistent hybrid simulations (kinetic ions, massless fluid electrons) and spacecraft observations of a strong, quasi-parallel interplanetary shock that crossed the Advanced Composition Explorer (ACE) on DOY 94, 2001. In our simulations, the un-shocked plasma-frame ion distributions are Maxwellian. Our simulations include protons and minor ions (alphas, 3He++, and C5+). The interplanetary shock crossed both the ACE and the Wind spacecraft, and was associated with significant increases in the flux of > 50 keV/nuc ions. Our simulation uses parameters (ion densities, magnetic field strength, Mach number, etc.) consistent with those observed. Acceleration of the ions by the shock, in a manner similar to that expected from diffusive shock acceleration theory, leads to a high-energy tail in the distribution of the post-shock plasma for all ions we considered. The simulated distributions are directly compared to those observed by ACE/SWICS, EPAM, and ULEIS, and Wind/STICS and 3DP, covering the energy range from below the thermal peak to the suprathermal tail. We conclude from our study that the solar wind is the most significant source of the high-energy ions for this event. Our results have important implications for the physics of the so-called `injection problem', which will be discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Küchlin, Stephan, E-mail: kuechlin@ifd.mavt.ethz.ch; Jenny, Patrick
2017-01-01
A major challenge for the conventional Direct Simulation Monte Carlo (DSMC) technique lies in the fact that its computational cost becomes prohibitive in the near continuum regime, where the Knudsen number (Kn)—characterizing the degree of rarefaction—becomes small. In contrast, the Fokker–Planck (FP) based particle Monte Carlo scheme allows for computationally efficient simulations of rarefied gas flows in the low and intermediate Kn regime. The Fokker–Planck collision operator—instead of performing binary collisions employed by the DSMC method—integrates continuous stochastic processes for the phase space evolution in time. This allows for time step and grid cell sizes larger than the respective collisional scales required by DSMC. Dynamically switching between the FP and the DSMC collision operators in each computational cell is the basis of the combined FP-DSMC method, which has been proven successful in simulating flows covering the whole Kn range. Until recently, this algorithm had only been applied to two-dimensional test cases. In this contribution, we present the first general purpose implementation of the combined FP-DSMC method. Utilizing both shared- and distributed-memory parallelization, this implementation provides the capability for simulations involving many particles and complex geometries by exploiting state of the art computer cluster technologies.
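The per-cell operator switch at the heart of the combined FP-DSMC method can be sketched in a few lines. The switching criterion and threshold below are illustrative assumptions, not the criterion used in the paper.

```python
# Hedged sketch of the per-cell operator choice in a combined FP-DSMC
# solver: cells whose local Knudsen number (mean free path over cell size)
# is small use the cheaper Fokker-Planck operator; strongly rarefied cells
# fall back to binary-collision DSMC. Threshold value is illustrative.
def choose_operator(mean_free_path, cell_size, threshold=1.0):
    """Return which collision operator to apply in this cell."""
    kn_local = mean_free_path / cell_size
    return "FP" if kn_local < threshold else "DSMC"
```

Because the choice is made cell by cell each step, a single simulation can span the whole Kn range, with DSMC applied only where its collisional resolution requirements are actually affordable.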
NASA Technical Reports Server (NTRS)
Shu, Chi-Wang
1992-01-01
The nonlinear stability of compact schemes for shock calculations is investigated. In recent years compact schemes have been used in various numerical simulations, including direct numerical simulation of turbulence. However, to apply them to problems containing shocks, one has to resolve the problems of spurious numerical oscillation and nonlinear instability. A framework to apply nonlinear limiting to a local mean is introduced. The resulting scheme can be proven total variation (1D) or maximum norm (multi-D) stable and produces good numerical results in the test cases. The result is summarized in the preprint entitled 'Nonlinearly Stable Compact Schemes for Shock Calculations', which was submitted to SIAM Journal on Numerical Analysis. Research was continued on issues related to two and three dimensional essentially non-oscillatory (ENO) schemes. The main research topics include: parallel implementation of ENO schemes on Connection Machines; boundary conditions; shock interaction with hydrogen bubbles, a preparation for the full combustion simulation; and direct numerical simulation of compressible sheared turbulence.
A Computer Simulation of the System-Wide Effects of Parallel-Offset Route Maneuvers
NASA Technical Reports Server (NTRS)
Lauderdale, Todd A.; Santiago, Confesor; Pankok, Carl
2010-01-01
Most aircraft managed by air-traffic controllers in the National Airspace System are capable of flying parallel-offset routes. This paper presents the results of two related studies on the effects of increased use of offset routes as a conflict resolution maneuver. The first study analyzes offset routes in the context of all standard resolution types which air-traffic controllers currently use. This study shows that by utilizing parallel-offset route maneuvers, significant system-wide savings in delay due to conflict resolution of up to 30% are possible. It also shows that most offset resolutions replace horizontal-vectoring resolutions. The second study builds on the results of the first and directly compares offset resolutions and standard horizontal-vectoring maneuvers to determine that in-trail conflicts are often more efficiently resolved by offset maneuvers.
Conceptual design of a hybrid parallel mechanism for mask exchanging of TMT
NASA Astrophysics Data System (ADS)
Wang, Jianping; Zhou, Hongfei; Li, Kexuan; Zhou, Zengxiang; Zhai, Chao
2015-10-01
The mask exchange system is an important part of the Multi-Object Broadband Imaging Echellette (MOBIE) on the Thirty Meter Telescope (TMT). To solve the problem of stiffness changing with the gravity vector of the mask exchange system in the MOBIE, the hybrid parallel mechanism design method was introduced into this research. By using the high stiffness and precision of a parallel structure, combined with the large moving range of a serial structure, a conceptual design of a hybrid parallel mask exchange system based on a 3-RPS parallel mechanism was presented. According to the position requirements of the MOBIE, the SolidWorks structure model of the hybrid parallel mask exchange robot was established and an appropriate installation position, without interference with the related components and light path in the MOBIE of TMT, was analyzed. Simulation results in SolidWorks suggested that the 3-RPS parallel platform had good stiffness properties in different gravity vector directions. Furthermore, through the study of the mechanism theory, the inverse kinematics solution of the 3-RPS parallel platform was calculated and the mathematical relationship between the attitude angle of the moving platform and the angle of the ball-hinges on the moving platform was established, in order to analyze the attitude adjustment ability of the hybrid parallel mask exchange robot. The proposed conceptual design has guiding significance for the design of the mask exchange system of the MOBIE on TMT.
NASA Astrophysics Data System (ADS)
Sourbier, Florent; Operto, Stéphane; Virieux, Jean; Amestoy, Patrick; L'Excellent, Jean-Yves
2009-03-01
This is the first paper in a two-part series that describes a massively parallel code that performs 2D frequency-domain full-waveform inversion of wide-aperture seismic data for imaging complex structures. Full-waveform inversion methods, namely quantitative seismic imaging methods based on the resolution of the full wave equation, are computationally expensive. Therefore, designing efficient algorithms which take advantage of parallel computing facilities is critical for the appraisal of these approaches when applied to representative case studies and for further improvements. Full-waveform modelling requires the resolution of a large sparse system of linear equations which is performed with the massively parallel direct solver MUMPS for efficient multiple-shot simulations. Efficiency of the multiple-shot solution phase (forward/backward substitutions) is improved by using the BLAS3 library. The inverse problem relies on a classic local optimization approach implemented with a gradient method. The direct solver returns the multiple-shot wavefield solutions distributed over the processors according to a domain decomposition driven by the distribution of the LU factors. The domain decomposition of the wavefield solutions is used to compute in parallel the gradient of the objective function and the diagonal Hessian, this latter providing a suitable scaling of the gradient. The algorithm allows one to test different strategies for multiscale frequency inversion ranging from successive mono-frequency inversion to simultaneous multifrequency inversion. These different inversion strategies will be illustrated in the following companion paper. The parallel efficiency and the scalability of the code will also be quantified.
Xyce parallel electronic simulator users guide, version 6.1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keiter, Eric R; Mei, Ting; Russo, Thomas V.
This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors), including support for most popular parallel and serial computers; a differential-algebraic-equation (DAE) formulation, which better isolates the device model package from solver algorithms, allowing one to develop new types of analysis without requiring the implementation of analysis-specific device models; device models that are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only); and object-oriented code design and implementation using modern coding practices. Xyce is a parallel code in the most general sense of the phrase (a message-passing parallel implementation), which allows it to run efficiently on a wide range of computing platforms, including serial, shared-memory, and distributed-memory parallel platforms. Attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows.
Xyce parallel electronic simulator users' guide, Version 6.0.1.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keiter, Eric R; Mei, Ting; Russo, Thomas V.
This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors), including support for most popular parallel and serial computers; a differential-algebraic-equation (DAE) formulation, which better isolates the device model package from solver algorithms, allowing one to develop new types of analysis without requiring the implementation of analysis-specific device models; device models that are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only); and object-oriented code design and implementation using modern coding practices. Xyce is a parallel code in the most general sense of the phrase (a message-passing parallel implementation), which allows it to run efficiently on a wide range of computing platforms, including serial, shared-memory, and distributed-memory parallel platforms. Attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows.
Xyce parallel electronic simulator users guide, version 6.0.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keiter, Eric R; Mei, Ting; Russo, Thomas V.
This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors), including support for most popular parallel and serial computers; a differential-algebraic-equation (DAE) formulation, which better isolates the device model package from solver algorithms, allowing one to develop new types of analysis without requiring the implementation of analysis-specific device models; device models that are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only); and object-oriented code design and implementation using modern coding practices. Xyce is a parallel code in the most general sense of the phrase (a message-passing parallel implementation), which allows it to run efficiently on a wide range of computing platforms, including serial, shared-memory, and distributed-memory parallel platforms. Attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows.
Data parallel sorting for particle simulation
NASA Technical Reports Server (NTRS)
Dagum, Leonardo
1992-01-01
Sorting on a parallel architecture is a communications intensive event which can incur a high penalty in applications where it is required. In the case of particle simulation, only integer sorting is necessary, and sequential implementations easily attain the minimum performance bound of O(N) for N particles. Parallel implementations, however, have to cope with the parallel sorting problem which, in addition to incurring a heavy communications cost, can make the minimum performance bound difficult to attain. This paper demonstrates how the sorting problem in a particle simulation can be reduced to a merging problem, and describes an efficient data parallel algorithm to solve this merging problem in a particle simulation. The new algorithm is shown to be optimal under conditions usual for particle simulation, and its fieldwise implementation on the Connection Machine is analyzed in detail. The new algorithm is about four times faster than a fieldwise implementation of radix sort on the Connection Machine.
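The reason integer sorting meets the O(N) bound in particle simulation is that particles only need to be grouped by cell index, so a count-then-scatter pass suffices. The sequential sketch below illustrates that baseline; the paper's contribution is a data-parallel, merge-based equivalent, which this sketch does not attempt to reproduce.

```python
# Sequential counting-sort sketch for grouping particles by cell index in
# O(N): count particles per cell, prefix-sum the counts to get each cell's
# start offset, then scatter particle indices into place. Stable within
# each cell. Illustrative baseline, not the paper's data-parallel algorithm.
import numpy as np


def sort_particles_by_cell(cell_ids, n_cells):
    counts = np.bincount(cell_ids, minlength=n_cells)
    starts = np.concatenate(([0], np.cumsum(counts)[:-1]))  # exclusive scan
    order = np.empty(len(cell_ids), dtype=int)
    cursor = starts.copy()
    for p, c in enumerate(cell_ids):   # scatter pass, O(N)
        order[cursor[c]] = p
        cursor[c] += 1
    return order  # particle indices grouped by cell
```

The sequential scatter loop is exactly the step that is hard to parallelise directly (concurrent writes through `cursor`), which is what motivates reformulating the problem as a merge.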
NASA Astrophysics Data System (ADS)
Yuan, G.; Wang, D. H.
2017-03-01
Multi-directional and multi-degree-of-freedom (multi-DOF) vibration energy harvesting have attracted increasing research interest in recent years. In this paper, the principle of a piezoelectric six-DOF vibration energy harvester based on a parallel mechanism is proposed to convert the energy of six-DOF vibration into single-DOF vibrations of the limbs on the energy harvester and into output voltages. The dynamic model of the piezoelectric six-DOF vibration energy harvester is established to estimate the vibrations of the limbs. On this basis, a Stewart-type piezoelectric six-DOF vibration energy harvester is developed and explored. In order to validate the established dynamic model and the analysis results, a simulation model of the Stewart-type piezoelectric six-DOF vibration energy harvester is built in SimMechanics and tested with different vibration excitations, and some preliminary experiments are carried out. The results show that the vibration of the limbs on the piezoelectric six-DOF vibration energy harvester can be estimated by the established dynamic model. The developed Stewart-type piezoelectric six-DOF vibration energy harvester can harvest the energy of multi-directional linear vibration and multi-axis rotating vibration, with resonance frequencies of 17 Hz, 25 Hz, and 47 Hz. Moreover, the resonance frequencies of the developed harvester are not affected by changes in the direction of the vibration excitation.
NASA Astrophysics Data System (ADS)
Bao, J.; Liu, D.; Lin, Z.
2017-10-01
A conservative scheme of drift kinetic electrons for gyrokinetic simulations of kinetic-magnetohydrodynamic processes in toroidal plasmas has been formulated and verified. Both the vector potential and the electron perturbed distribution function are decomposed into an adiabatic part with an analytic solution and a non-adiabatic part solved numerically. The adiabatic parallel electric field is solved directly from the electron adiabatic response, resulting in a high degree of accuracy. The consistency between the electrostatic potential and the parallel vector potential is enforced by using the electron continuity equation. Since particles are only used to calculate the non-adiabatic response, which is used to calculate the non-adiabatic vector potential through Ohm's law, the conservative scheme minimizes the electron particle noise and mitigates the cancellation problem. Linear dispersion relations of the kinetic Alfvén wave and the collisionless tearing mode in cylindrical geometry have been verified in gyrokinetic toroidal code simulations, which show that the perpendicular grid size can be larger than the electron collisionless skin depth when the mode wavelength is longer than the electron skin depth.
NASA Astrophysics Data System (ADS)
Li, Zongchao; Chen, Xueliang; Gao, Mengtan; Jiang, Han; Li, Tiefei
2017-03-01
Earthquake engineering parameters are very important in the engineering field, especially in anti-seismic design and earthquake disaster prevention. In this study, we focus on simulating earthquake engineering parameters by the empirical Green's function method. The simulated earthquake (MJMA 6.5) occurred in Kyushu, Japan, in 1997. Horizontal ground motion is separated into fault-parallel and fault-normal components in order to assess the characteristics of these two direction components. The broadband frequency range of the ground motion simulation is 0.1 to 20 Hz. By comparing observed and synthetic parameters, we analyzed the distribution characteristics of the earthquake engineering parameters; the simulated waveforms show high similarity with the observed waveforms. We found the following. (1) Near-field PGA attenuates rapidly in all directions, with strip-like radiation patterns in the fault-parallel component and circular radiation patterns in the fault-normal component; PGV shows good agreement between observed and synthetic records, but has different distribution characteristics in the two components. (2) Rupture direction and terrain have a large influence on the 90% significant duration. (3) Arias intensity attenuates with increasing epicentral distance, and observed values agree well with synthetic values. (4) The predominant period differs markedly across parts of Kyushu in the fault-normal component; it is affected greatly by site conditions. (5) Most parameters have good reference value where the hypocentral distance is less than 35 km. (6) The GOF values of all these parameters are generally higher than 45, which indicates a good result according to Olsen's classification criterion, although not all parameters fit well. Given these synthetic ground motion parameters, seismic hazard analysis and earthquake disaster analysis can be conducted in future urban planning.
NASA Astrophysics Data System (ADS)
Engel, V.; Cheng, Y.; Stieglitz, M.
2009-12-01
Pattern formation in vegetated communities reflects the underlying mechanisms governing resource utilization and distribution across the landscape. An example of a patterned ecosystem is the Florida Everglades, which is characterized by parallel and slightly elevated peat "ridges" separated by deeper water "slough" communities (R&S). Ridges are dominated by sawgrass (Cladium jamaicense). These patterns are thought to be aligned with and develop in response to the historic surface water flow direction, though the precise mechanisms which lead to their formation are poorly understood. Over the years this R&S habitat has degraded in areas where the natural flow regime, hydroperiod, and water depths have been impacted by human development. Managing and restoring this habitat has been an objective of the U.S. Federal and Florida State governments since the Comprehensive Everglades Restoration Plan (CERP) was authorized in 2000. It is imperative, however, to develop a mechanistic understanding of ridge-slough formation before the potential benefits of hydrologic forecasts associated with CERP can be evaluated. Recently, Cheng et al. (see Cheng et al., session NG14) employed a simple 2D advection-diffusion model developed by Rietkerk et al. (2004) to describe, for the first time, the formation of parallel stripes from hydrologic interactions. To simulate parallel stripes, Cheng et al. retained the basic equations of the Rietkerk model but allowed for constant advection of water and nutrient in one direction to simulate slope conditions, with evapotranspiration-driven advection of water and nutrient perpendicular to the downhill flow direction. We employ this modeling framework and parameterize the model with Everglades field data to simulate ridge-slough formation. In this model, the relatively higher rates of evapotranspiration on the ridges compared to the sloughs create hydraulic gradients which carry dissolved nutrients from the sloughs to the faster growing ridges.
With time, the patches aggregate and spread laterally in the direction of the downhill flow. The characteristic wavelengths and spatial patterning of the ridge-slough habitat found in the historic Everglades is reproduced by the model. Nutrient distributions across the landscape and across the ridge-slough interfaces also match observations. Perturbations to the system are modeled in the form of altered hydraulic gradients and nutrient input functions, similar to actual stressors on the system. Under the altered conditions, a loss of patterning in the habitat is observed, in some cases leading to ridge expansion into the sloughs, and in others leading to a complete loss of vegetation pattern. Simulations indicate that the hydrologic changes required to regenerate coherence in the ridge slough patterns in degraded areas are different from those in which the system originally formed. Plant-nutrient interactions and the overall nutrient status are shown to be a major determinant in how the system will respond to hydrologic changes associated with CERP.
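The transport backbone of a Rietkerk-type model of this kind is an advection-diffusion equation for the nutrient field. A single explicit finite-difference step for the 1D case, dn/dt = D d²n/dx² - v dn/dx, is sketched below; the periodic boundary, upwind scheme, and coefficient names are illustrative assumptions, not the authors' setup:

```python
import numpy as np

# One explicit Euler step for 1D advection-diffusion on a periodic grid.
# D: diffusion coefficient, v: downhill advection speed (v > 0 assumed
# for the upwind difference), dx: grid spacing, dt: time step.
def advect_diffuse_step(n, D, v, dx, dt):
    lap = (np.roll(n, -1) - 2.0 * n + np.roll(n, 1)) / dx**2   # central 2nd derivative
    adv = (n - np.roll(n, 1)) / dx                             # upwind 1st derivative
    return n + dt * (D * lap - v * adv)
```

Both difference operators telescope over a periodic domain, so the step conserves the total nutrient mass, a useful sanity check for this family of models.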
A Peridynamic Approach for Nanoscratch Simulation of the Cement Mortar
NASA Astrophysics Data System (ADS)
Zhao, Jingjing; Zhang, Qing; Lu, Guangda; Chen, Depeng
2018-03-01
The present study develops a peridynamic approach for simulating the nanoscratch procedure on the cement mortar interface. In this approach, the cement and sand are considered as discrete particles with certain mechanical properties on the nanoscale. The interaction force functions for the different components of the interface are represented by combining the van der Waals force and the peridynamic force. Nanoscratch procedures with the indenter moving either parallel or perpendicular to the interface are simulated in this paper. The simulation results show the damage evolution processes and the final damage distributions of the cement mortar under different scratching speeds and depths of the indenter, indicating that the interface between cement and sand is a weak area.
Deformation, crystal preferred orientations, and seismic anisotropy in the Earth's D″ layer
NASA Astrophysics Data System (ADS)
Tommasi, Andréa; Goryaeva, Alexandra; Carrez, Philippe; Cordier, Patrick; Mainprice, David
2018-06-01
We use a forward multiscale model that couples atomistic modeling of intracrystalline plasticity mechanisms (dislocation glide ± twinning) in MgSiO3 post-perovskite (PPv) and periclase (MgO) at lower mantle pressures and temperatures to polycrystal plasticity simulations to predict crystal preferred orientations (CPO) development and seismic anisotropy in D″. We model the CPO evolution in aggregates of 70% PPv and 30% MgO submitted to simple shear, axial shortening, and along corner-flow streamlines, which simulate changes in flow orientation similar to those expected at the transition between a downwelling and flow parallel to the core-mantle boundary (CMB) within D″ or between CMB-parallel flow and upwelling at the borders of the large low shear wave velocity provinces (LLSVP) in the lowermost mantle. Axial shortening results in alignment of PPv [010] axes with the shortening direction. Simple shear produces PPv CPO with a monoclinic symmetry that rapidly rotates towards parallelism between the dominant [100](010) slip system and the macroscopic shear. These predictions differ from MgSiO3 post-perovskite textures formed in diamond-anvil cell experiments, but agree with those obtained in simple shear and compression experiments using CaIrO3 post-perovskite. Development of CPO in PPv and MgO results in seismic anisotropy in D″. For shear parallel to the CMB, at low strain, the inclination of ScS, Sdiff, and SKKS fast polarizations and delay times vary depending on the propagation direction. At moderate and high shear strains, all S-waves are polarized nearly horizontally. Downwelling flow produces Sdiff, ScS, and SKKS fast polarization directions and birefringence that vary gradually as a function of the back-azimuth from nearly parallel to inclined by up to 70° to CMB and from null to ∼5%. 
Change in the flow to shear parallel to the CMB results in dispersion of the CPO, weakening of the anisotropy, and strong azimuthal variation of the S-wave splitting up to 250 km from the corner. Transition from horizontal shear to upwelling also produces weakening of the CPO and complex seismic anisotropy patterns, with dominantly inclined fast ScS and SKKS polarizations, over most of the upwelling path. Models that take into account twinning in PPv explain most observations of seismic anisotropy in D″, but heterogeneity of the flow at scales <1000 km is needed to comply with the seismological evidence for low apparent birefringence in D″.
A new deadlock resolution protocol and message matching algorithm for the extreme-scale simulator
Engelmann, Christian; Naughton, III, Thomas J.
2016-03-22
Investigating the performance of parallel applications at scale on future high-performance computing (HPC) architectures and the performance impact of different HPC architecture choices is an important component of HPC hardware/software co-design. The Extreme-scale Simulator (xSim) is a simulation toolkit for investigating the performance of parallel applications at scale. xSim scales to millions of simulated Message Passing Interface (MPI) processes. The overhead introduced by a simulation tool is an important performance and productivity aspect. This paper documents two improvements to xSim: (1) a new deadlock resolution protocol to reduce the parallel discrete event simulation overhead and (2) a new simulated MPI message matching algorithm to reduce the oversubscription management overhead. The results clearly show a significant performance improvement. The simulation overhead for running the NAS Parallel Benchmark suite was reduced from 102% to 0% for the embarrassingly parallel (EP) benchmark and from 1,020% to 238% for the conjugate gradient (CG) benchmark. xSim offers a highly accurate simulation mode for better tracking of injected MPI process failures. Furthermore, with highly accurate simulation, the overhead was reduced from 3,332% to 204% for EP and from 37,511% to 13,808% for CG.
A Systems Approach to Scalable Transportation Network Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perumalla, Kalyan S
2006-01-01
Emerging needs in transportation network modeling and simulation are raising new challenges with respect to scalability of network size and vehicular traffic intensity, speed of simulation for simulation-based optimization, and fidelity of vehicular behavior for accurate capture of event phenomena. Parallel execution is warranted to sustain the required detail, size and speed. However, few parallel simulators exist for such applications, partly due to the challenges underlying their development. Moreover, many simulators are based on time-stepped models, which can be computationally inefficient for the purposes of modeling evacuation traffic. Here an approach is presented to designing a simulator with memory and speed efficiency as the goals from the outset, and, specifically, scalability via parallel execution. The design makes use of discrete event modeling techniques as well as parallel simulation methods. Our simulator, called SCATTER, is being developed, incorporating such design considerations. Preliminary performance results are presented on benchmark road networks, showing scalability to one million vehicles simulated on one processor.
ANNarchy: a code generation approach to neural simulations on parallel hardware
Vitay, Julien; Dinkelbach, Helge Ü.; Hamker, Fred H.
2015-01-01
Many modern neural simulators focus on the simulation of networks of spiking neurons on parallel hardware. Another important framework in computational neuroscience, rate-coded neural networks, is mostly difficult or impossible to implement using these simulators. We present here the ANNarchy (Artificial Neural Networks architect) neural simulator, which allows one to easily define and simulate rate-coded and spiking networks, as well as combinations of both. The interface in Python has been designed to be close to the PyNN interface, while the definition of neuron and synapse models can be specified using an equation-oriented mathematical description similar to the Brian neural simulator. This information is used to generate C++ code that will efficiently perform the simulation on the chosen parallel hardware (multi-core system or graphical processing unit). Several numerical methods are available to transform ordinary differential equations into efficient C++ code. We compare the parallel performance of the simulator to existing solutions. PMID:26283957
Xyce parallel electronic simulator : users' guide.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mei, Ting; Rankin, Eric Lamont; Thornquist, Heidi K.
2011-05-01
This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: (1) capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors), including support for most popular parallel and serial computers; (2) improved performance for all numerical kernels (e.g., time integrator, nonlinear and linear solvers) through state-of-the-art algorithms and novel techniques; (3) device models which are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only); and (4) object-oriented code design and implementation using modern coding practices that ensure that the Xyce Parallel Electronic Simulator will be maintainable and extensible far into the future. Xyce is a parallel code in the most general sense of the phrase, a message-passing parallel implementation, which allows it to run efficiently on the widest possible number of computing platforms. These include serial, shared-memory and distributed-memory parallel as well as heterogeneous platforms. Careful attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows. The development of Xyce provides a platform for computational research and development aimed specifically at the needs of the Laboratory. With Xyce, Sandia has an 'in-house' capability with which both new electrical (e.g., device model development) and algorithmic (e.g., faster time-integration methods, parallel solver algorithms) research and development can be performed. As a result, Xyce is a unique electrical simulation capability, designed to meet the unique needs of the laboratory.
NASA Technical Reports Server (NTRS)
Dagum, Leonardo
1989-01-01
The data parallel implementation of a particle simulation for hypersonic rarefied flow described by Dagum associates a single parallel data element with each particle in the simulation. The simulated space is divided into discrete regions called cells containing a variable and constantly changing number of particles. The implementation requires a global sort of the parallel data elements so as to arrange them in an order that allows immediate access to the information associated with cells in the simulation. Described here is a very fast algorithm for performing the necessary ranking of the parallel data elements. The performance of the new algorithm is compared with that of the microcoded instruction for ranking on the Connection Machine.
GPAW - massively parallel electronic structure calculations with Python-based software.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Enkovaara, J.; Romero, N.; Shende, S.
2011-01-01
Electronic structure calculations are a widely used tool in materials science and a large consumer of supercomputing resources. Traditionally, the software packages for these kinds of simulations have been implemented in compiled languages, where Fortran in its different versions has been the most popular choice. While dynamic, interpreted languages, such as Python, can increase the efficiency of the programmer, they cannot compete directly with the raw performance of compiled languages. However, by using an interpreted language together with a compiled language, it is possible to have most of the productivity-enhancing features together with good numerical performance. We have used this approach in implementing the electronic structure simulation software GPAW using the combination of the Python and C programming languages. While the chosen approach works well in standard workstations and Unix environments, massively parallel supercomputing systems can present some challenges in porting, debugging and profiling the software. In this paper we describe some details of the implementation and discuss the advantages and challenges of the combined Python/C approach. We show that despite the challenges it is possible to obtain good numerical performance and good parallel scalability with Python-based software.
Protons and alpha particles in the expanding solar wind: Hybrid simulations
NASA Astrophysics Data System (ADS)
Hellinger, Petr; Trávníček, Pavel M.
2013-09-01
We present results of a two-dimensional hybrid expanding box simulation of a plasma system with three ion populations, beam and core protons, and alpha particles (and fluid electrons), drifting with respect to each other. The expansion with a strictly radial magnetic field leads to a decrease of the ion perpendicular to parallel temperature ratios as well as to an increase of the ratio between the ion relative velocities and the local Alfvén velocity, creating free energy for many different instabilities. The system is most of the time marginally stable with respect to kinetic instabilities, mainly due to the ion relative velocities; these instabilities determine the system evolution, counteracting some effects of the expansion. Nonlinear evolution of these instabilities leads to large modifications of the ion velocity distribution functions. The beam protons and alpha particles are decelerated with respect to the core protons, and all the populations are cooled in the parallel direction and heated in the perpendicular one. On the macroscopic level, the kinetic instabilities cause large departures of the system evolution from the double adiabatic prediction and lead to perpendicular heating and parallel cooling rates which are comparable to the heating rates estimated from the Helios observations.
NASA Technical Reports Server (NTRS)
Kopasakis, George; Connolly, Joseph W.; Cheng, Larry
2015-01-01
This paper covers the development of stage-by-stage and parallel flow path compressor modeling approaches for a Variable Cycle Engine. The stage-by-stage compressor modeling approach is an extension of a technique for lumped volume dynamics and performance characteristic modeling. It was developed to improve the accuracy of axial compressor dynamics over lumped volume dynamics modeling. The stage-by-stage compressor model presented here is formulated into a parallel flow path model that includes both axial and rotational dynamics. This is done to enable the study of compressor and propulsion system dynamic performance under flow distortion conditions. The approaches utilized here are generic and should be applicable to the modeling of any axial flow compressor design for accurate time domain simulations. The objective of this work is as follows: given the parameters describing the conditions of atmospheric disturbances, and utilizing the derived formulations, directly compute the transfer function poles and zeros describing these disturbances for acoustic velocity, temperature, pressure, and density. Time domain simulations of representative atmospheric turbulence can then be developed by utilizing these computed transfer functions together with the disturbance frequencies of interest.
Symplectic molecular dynamics simulations on specially designed parallel computers.
Borstnik, Urban; Janezic, Dusanka
2005-01-01
We have developed a computer program for molecular dynamics (MD) simulation that implements the Split Integration Symplectic Method (SISM) and is designed to run on specialized parallel computers. The MD integration is performed by the SISM, which analytically treats high-frequency vibrational motion and thus enables the use of longer simulation time steps. The low-frequency motion is treated numerically on specially designed parallel computers, which decreases the computational time of each simulation time step. The combination of these approaches means that fewer steps are needed and each step takes less time, enabling fast MD simulations. We study the computational performance of MD simulation of molecular systems on specialized computers and provide a comparison to standard personal computers. The combination of the SISM with two specialized parallel computers is an effective way to increase the speed of MD simulations up to 16-fold over a single PC processor.
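The split-integration idea can be illustrated on a single degree of freedom (a toy sketch under stated assumptions, not the SISM implementation): the stiff harmonic part of the force is propagated analytically as an exact phase-space rotation, and only the slow residual force is integrated numerically, which is what permits the longer time step:

```python
import math

# One step of a symplectic splitting: numeric half-kick from the slow
# force, exact analytic rotation for the fast harmonic part (frequency
# omega), then another slow half-kick. All names are illustrative.
def sism_like_step(x, v, dt, omega, slow_force, mass=1.0):
    v += 0.5 * dt * slow_force(x) / mass                       # slow half-kick (numeric)
    c, s = math.cos(omega * dt), math.sin(omega * dt)
    x, v = x * c + (v / omega) * s, v * c - omega * x * s      # fast part (analytic)
    v += 0.5 * dt * slow_force(x) / mass                       # slow half-kick (numeric)
    return x, v
```

Because the harmonic part is solved exactly rather than resolved by many small numeric steps, the step size is limited by the slow force alone; with no slow force at all, the step conserves the oscillator energy to round-off for any dt.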
Parallel discrete-event simulation of FCFS stochastic queueing networks
NASA Technical Reports Server (NTRS)
Nicol, David M.
1988-01-01
Physical systems are inherently parallel. Intuition suggests that simulations of these systems may be amenable to parallel execution. The parallel execution of a discrete-event simulation requires careful synchronization of processes in order to ensure the execution's correctness; this synchronization can degrade performance. Largely negative results were recently reported in a study which used a well-known synchronization method on queueing network simulations. Discussed here is a synchronization method (appointments) which has proven itself to be effective on simulations of FCFS queueing networks. The key concept behind appointments is the provision of lookahead. Lookahead is a prediction of a processor's future behavior, based on an analysis of the processor's simulation state. It is shown how lookahead can be computed for FCFS queueing network simulations; performance data are given that demonstrate the method's effectiveness under moderate to heavy loads; and performance tradeoffs between the quality of lookahead and the cost of computing it are discussed.
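The lookahead computation can be sketched for a single FCFS server (illustrative names and a simplified setting, not the paper's implementation): because FCFS service times can be presampled before the corresponding arrivals occur, the processor can promise its neighbors that no departure will carry a timestamp below a computable bound:

```python
# Lower bound on the timestamp of the next not-yet-scheduled departure
# from an FCFS server. 'now' is the current simulation time, 'server_free_at'
# is when the server finishes its committed work, and the service times of
# future jobs have been presampled (FCFS lets us draw them in advance).
def fcfs_lookahead(now, server_free_at, presampled_service_times):
    start = max(now, server_free_at)        # earliest a new job could begin service
    return start + min(presampled_service_times)
```

The gap between `now` and this bound is the lookahead; the larger it is, the less often neighboring processors must block waiting for a synchronization guarantee.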
Parallelization of Lower-Upper Symmetric Gauss-Seidel Method for Chemically Reacting Flow
NASA Technical Reports Server (NTRS)
Yoon, Seokkwan; Jost, Gabriele; Chang, Sherry
2005-01-01
Development of technologies for exploration of the solar system has revived an interest in computational simulation of chemically reacting flows, since planetary probe vehicles exhibit non-equilibrium phenomena during atmospheric entry of a planet or a moon as well as during reentry to the Earth. Stability in combustion is essential for new propulsion systems. Numerical solution of real-gas flows often increases computational work by an order of magnitude compared to perfect gas flow, partly because of the increased complexity of the equations to solve. Recently, as part of Project Columbia, NASA has integrated a cluster of interconnected SGI Altix systems to provide a ten-fold increase over current supercomputing capacity, which includes an SGI Origin system. Both the new and existing machines are based on a cache-coherent non-uniform memory access architecture. The Lower-Upper Symmetric Gauss-Seidel (LU-SGS) relaxation method has been implemented into both perfect and real gas flow codes, including the Real-Gas Aerodynamic Simulator (RGAS). However, the vectorized RGAS code runs inefficiently on cache-based shared-memory machines such as the SGI systems. Parallelization of a Gauss-Seidel method is nontrivial due to its sequential nature. The LU-SGS method has been vectorized on an oblique plane in the INS3D-LU code, which has been one of the base codes for the NAS Parallel Benchmarks. The oblique plane has been called a hyperplane by computer scientists. It is straightforward to parallelize a Gauss-Seidel method by partitioning the hyperplanes once they are formed. Another way of parallelizing is to schedule processors like a pipeline using software. Both hyperplane and pipeline methods have been implemented using OpenMP directives. The present paper reports the performance of the parallelized RGAS code on SGI Origin and Altix systems.
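The hyperplane idea is easiest to see in two dimensions. In a lexicographic Gauss-Seidel sweep of the 5-point Laplacian, point (i, j) depends on the already-updated values at (i-1, j) and (i, j-1), so all points on an anti-diagonal i + j = k are mutually independent and the inner loop over each hyperplane could run concurrently. A serial sketch of that ordering (illustrative, not the RGAS or INS3D-LU code):

```python
import numpy as np

# Gauss-Seidel sweep of the 5-point Laplacian in hyperplane (wavefront)
# order: hyperplane k holds the interior points with i + j = k. Within a
# hyperplane the updates are independent, so the inner i-loop is the
# candidate for an OpenMP-style parallel loop.
def gauss_seidel_hyperplane_sweep(u, f, h):
    n, m = u.shape
    for k in range(2, (n - 2) + (m - 2) + 1):          # sweep hyperplanes in order
        i_lo, i_hi = max(1, k - (m - 2)), min(n - 2, k - 1)
        for i in range(i_lo, i_hi + 1):                # independent: parallelizable
            j = k - i
            u[i, j] = 0.25 * (u[i - 1, j] + u[i + 1, j] +
                              u[i, j - 1] + u[i, j + 1] + h * h * f[i, j])
    return u
```

Because the hyperplane order respects exactly the same data dependencies, this sweep produces bit-identical results to the lexicographic sweep; only the schedule changes.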
Progress in Unsteady Turbopump Flow Simulations
NASA Technical Reports Server (NTRS)
Kiris, Cetin C.; Chan, William; Kwak, Dochan; Williams, Robert
2002-01-01
This viewgraph presentation discusses unsteady flow simulations for a turbopump intended for a reusable launch vehicle (RLV). The simulation process makes use of computational grids and parallel processing. The architecture of the parallel computers used is discussed, as is the scripting of turbopump simulations.
NASA Astrophysics Data System (ADS)
Byun, Hye Suk; El-Naggar, Mohamed Y.; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya
2017-10-01
Kinetic Monte Carlo (KMC) simulations are used to study long-time dynamics of a wide variety of systems. Unfortunately, the conventional KMC algorithm is not scalable to larger systems, since its time scale is inversely proportional to the simulated system size. A promising approach to resolving this issue is the synchronous parallel KMC (SPKMC) algorithm, which makes the time scale size-independent. This paper introduces a formal derivation of the SPKMC algorithm based on local transition-state and time-dependent Hartree approximations, as well as its scalable parallel implementation based on a dual linked-list cell method. The resulting algorithm has achieved a weak-scaling parallel efficiency of 0.935 on 1024 Intel Xeon processors for simulating biological electron transfer dynamics in a 4.2 billion-heme system, as well as decent strong-scaling parallel efficiency. The parallel code has been used to simulate a lattice of cytochrome complexes on a bacterial-membrane nanowire, and it is broadly applicable to other problems such as computational synthesis of new materials.
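The scalability problem that SPKMC addresses is visible in one step of the conventional residence-time KMC algorithm, sketched below under generic assumptions (illustrative, not the authors' code): the stochastic time advance is inversely proportional to the total event rate, which grows with system size.

```python
import math
import random

# One conventional KMC step: pick an event with probability proportional
# to its rate, then advance time by dt = -ln(r) / R_total. As more sites
# are added, R_total grows and dt shrinks, which is why the conventional
# algorithm's time scale is inversely proportional to system size.
def kmc_step(rates, rng=random):
    r_total = sum(rates)
    u = rng.random() * r_total
    acc, chosen = 0.0, len(rates) - 1
    for i, r in enumerate(rates):           # linear search for the chosen event
        acc += r
        if u < acc:
            chosen = i
            break
    dt = -math.log(rng.random()) / r_total
    return chosen, dt
```

Making the time scale size-independent, as the SPKMC formulation does, therefore requires letting spatial domains advance their own clocks in parallel rather than serializing every event through one global rate sum.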
A hybrid parallel framework for the cellular Potts model simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Yi; He, Kejing; Dong, Shoubin
2009-01-01
The Cellular Potts Model (CPM) has been widely used for biological simulations. However, most current implementations are either sequential or approximated, and cannot be used for large-scale, complex 3D simulations. In this paper we present a hybrid parallel framework for CPM simulations. The time-consuming PDE solving, cell division, and cell reaction operations are distributed to clusters using the Message Passing Interface (MPI). The Monte Carlo lattice update is parallelized on shared-memory SMP systems using OpenMP. Because the Monte Carlo lattice update is much faster than the PDE solving and SMP systems are more and more common, this hybrid approach achieves good performance and high accuracy at the same time. Based on the parallel Cellular Potts Model, we studied avascular tumor growth using a multiscale model. The application and performance analysis show that the hybrid parallel framework is quite efficient. The hybrid parallel CPM can be used for large-scale simulation (~10^8 sites) of the complex collective behavior of numerous cells (~10^6).
Two-dimensional symmetry breaking of fluid density distribution in closed nanoslits.
Berim, Gersh O; Ruckenstein, Eli
2008-01-14
Stable and metastable fluid density distributions (FDDs) in a closed nanoslit between two identical parallel solid walls have been identified on the basis of a nonlocal canonical ensemble density functional theory. Similar to Monte Carlo simulations, periodicity of the FDD in one of the lateral (parallel to the wall surfaces) directions, denoted as the x direction, was assumed. In the other lateral direction, the y direction, the FDD was considered uniform. It was found that depending on the average fluid density in the slit, both uniform and nonuniform FDDs in the x direction can occur. The uniform FDDs are either symmetric or asymmetric about the middle plane between the walls, the latter being the consequence of a symmetry breaking across the slit. The nonuniform FDDs in the x direction occur either in the form of a bump on a thin liquid film covering the walls or as a liquid bridge between those walls, and provide symmetry breaking in the x direction. For small and large average densities, the stable state is uniform in the x direction and is symmetric about the middle plane between the walls. In the intermediate range of the average density, and depending on the length L(x) of the FDD period, the stable state can be represented either by an FDD which is uniform in the x direction and asymmetric about the middle of the slit (small values of L(x)), or by a bump- or bridgelike FDD for intermediate and large values of L(x), respectively. These results are in agreement with the Monte Carlo simulations performed earlier by other authors. Because the free energy of the stable state decreases monotonically with increasing L(x), one can conclude that the real period is very large (infinite) and that for the values of the parameters employed, a single bridge of finite length over the entire slit is generated.
Hairpin vortices in turbulent boundary layers
NASA Astrophysics Data System (ADS)
Eitel-Amor, G.; Örlü, R.; Schlatter, P.; Flores, O.
2015-02-01
The present work presents a number of parallel and spatially developing simulations of boundary layers to address the question of whether hairpin vortices are a dominant feature of near-wall turbulence, and what role they play during transition. In the first part, the parent-offspring regeneration mechanism is investigated in parallel (temporal) simulations of a single hairpin vortex introduced in a mean shear flow corresponding to either turbulent channels or boundary layers (Reτ ≲ 590). The effect of a turbulent background superimposed on the mean flow is considered by using an eddy viscosity computed from resolved simulations. Tracking the vortical structure downstream, it is found that secondary hairpins are created only shortly after initialization, with all rotational structures decaying at later times. For hairpins in a clean (laminar) environment, the decay is relatively slow, while hairpins in weak turbulent environments (10% of νt) dissipate after a couple of eddy turnover times. In the second part, the role of hairpin vortices in laminar-turbulent transition is studied using simulations of spatial boundary layers tripped by hairpin vortices. These vortices are generated by means of specific volumetric forces representing an ejection event, creating a synthetic turbulent boundary layer initially dominated by hairpin-like vortices. These hairpins are advected towards the wake region of the boundary layer, while a sinusoidal instability of the streaks near the wall results in the rapid development of a turbulent boundary layer. For Reθ > 400, the boundary layer is fully developed, with no evidence of hairpin vortices reaching into the wall region. The results from both the parallel and spatial simulations strongly suggest that the regeneration process is rather short-lived and may not be sustained once a turbulent background has developed.
From the transitional flow simulations, it is conjectured that the forest of hairpins reported in earlier direct numerical simulation studies is a remnant of the transitional boundary layer and may not be connected to the dynamics of fully developed wall-bounded turbulence.
Parallel discrete event simulation: A shared memory approach
NASA Technical Reports Server (NTRS)
Reed, Daniel A.; Malony, Allen D.; Mccredie, Bradley D.
1987-01-01
With traditional event list techniques, evaluating a detailed discrete event simulation model can often require hours or even days of computation time. Parallel simulation mimics the interacting servers and queues of a real system by assigning each simulated entity to a processor. By eliminating the event list and maintaining only sufficient synchronization to ensure causality, parallel simulation can potentially provide speedups that are linear in the number of processors. A set of shared memory experiments is presented using the Chandy-Misra distributed simulation algorithm to simulate networks of queues. Parameters include queueing network topology and routing probabilities, number of processors, and assignment of network nodes to processors. These experiments show that Chandy-Misra distributed simulation is a questionable alternative to sequential simulation of most queueing network models.
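The Chandy-Misra scheme mentioned above replaces the global event list with per-channel synchronization; null messages carrying a lookahead bound keep the logical processes from deadlocking. This toy Python sketch (two logical processes in a ring, a fixed service delay as the lookahead; real implementations send null messages only when blocked, not after every event) shows the mechanics:

```python
from collections import deque

LOOKAHEAD = 1.0  # minimum service delay; lower-bounds future message timestamps

def run_two_lps(end_time):
    """Two logical processes (LPs) in a ring under conservative (Chandy-Misra
    style) synchronization. Each LP forwards a job to the other after the
    service delay; whenever it consumes a null message it emits a new null,
    so both clocks always advance and the ring never deadlocks."""
    chan = {0: deque(), 1: deque()}   # chan[i] holds messages INTO LP i
    clock = [0.0, 0.0]                # local simulation clocks
    processed = [0, 0]                # jobs completed per LP
    chan[0].append((0.0, "job"))      # seed one real job into LP 0
    chan[1].append((0.0, "null"))     # initial null so LP 1 can advance
    while min(clock) < end_time:
        for lp in (0, 1):
            other = 1 - lp
            if not chan[lp]:
                continue              # channel empty: this LP would block
            ts, kind = chan[lp].popleft()
            clock[lp] = max(clock[lp], ts)
            if kind == "job":
                processed[lp] += 1
                chan[other].append((clock[lp] + LOOKAHEAD, "job"))
            else:
                # null message: advances the receiver's clock, propagates bound
                chan[other].append((clock[lp] + LOOKAHEAD, "null"))
    return processed
```

Because each LP only processes a message once its timestamp bound is known, causality holds without any global event list, which is the property the experiments above exploit.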
Eom, Seung-Hyun; Seo, Yunsik; Lim, Sungjoon
2015-01-01
In this paper, we propose a paper-based pattern switchable antenna system using inkjet-printing technology for bi-direction sensor applications. The proposed antenna system is composed of two directional bow-tie antennas and a switching network. The switching network consists of a single-pole-double-throw (SPDT) switch and a balun element. A double-sided parallel-strip line (DSPSL) is employed to convert the unbalanced microstrip mode to the balanced strip mode. Two directional bow-tie antennas have different radiation patterns because of the different orientation of the reflectors and antennas. It is demonstrated from electromagnetic (EM) simulation and measurement that the radiation patterns of the proposed antenna are successfully switched by the SPDT switch. PMID:26690443
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rutland, Christopher J.
2009-04-26
The Terascale High-Fidelity Simulations of Turbulent Combustion (TSTC) project is a multi-university collaborative effort to develop a high-fidelity turbulent reacting flow simulation capability utilizing terascale, massively parallel computer technology. The main paradigm of the approach is direct numerical simulation (DNS) featuring the highest temporal and spatial accuracy, allowing quantitative observations of the fine-scale physics found in turbulent reacting flows as well as providing a useful tool for development of sub-models needed in device-level simulations. Under this component of the TSTC program the simulation code named S3D, developed and shared with coworkers at Sandia National Laboratories, has been enhanced with new numerical algorithms and physical models to provide predictive capabilities for turbulent liquid fuel spray dynamics. Major accomplishments include improved fundamental understanding of mixing and auto-ignition in multi-phase turbulent reactant mixtures and turbulent fuel injection spray jets.
NWChem: A comprehensive and scalable open-source solution for large scale molecular simulations
NASA Astrophysics Data System (ADS)
Valiev, M.; Bylaska, E. J.; Govind, N.; Kowalski, K.; Straatsma, T. P.; Van Dam, H. J. J.; Wang, D.; Nieplocha, J.; Apra, E.; Windus, T. L.; de Jong, W. A.
2010-09-01
The latest release of NWChem delivers an open-source computational chemistry package with extensive capabilities for large scale simulations of chemical and biological systems. Utilizing a common computational framework, diverse theoretical descriptions can be used to provide the best solution for a given scientific problem. Scalable parallel implementations and modular software design enable efficient utilization of current computational architectures. This paper provides an overview of NWChem focusing primarily on the core theoretical modules provided by the code and their parallel performance.
Program summary
Program title: NWChem
Catalogue identifier: AEGI_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGI_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Open Source Educational Community License
No. of lines in distributed program, including test data, etc.: 11 709 543
No. of bytes in distributed program, including test data, etc.: 680 696 106
Distribution format: tar.gz
Programming language: Fortran 77, C
Computer: all Linux based workstations and parallel supercomputers, Windows and Apple machines
Operating system: Linux, OS X, Windows
Has the code been vectorised or parallelized?: Code is parallelized
Classification: 2.1, 2.2, 3, 7.3, 7.7, 16.1, 16.2, 16.3, 16.10, 16.13
Nature of problem: Large-scale atomistic simulations of chemical and biological systems require efficient and reliable methods for ground and excited solutions of the many-electron Hamiltonian, analysis of the potential energy surface, and dynamics.
Solution method: Ground and excited solutions of the many-electron Hamiltonian are obtained utilizing density functional theory, many-body perturbation theory, and coupled cluster expansions. These solutions, or a combination thereof with classical descriptions, are then used to analyze the potential energy surface and perform dynamical simulations.
Additional comments: Full documentation is provided in the distribution file. This includes an INSTALL file giving details of how to build the package. A set of test runs is provided in the examples directory. The distribution file for this program is over 90 Mbytes and therefore is not delivered directly when a download or Email is requested. Instead an html file giving details of how the program can be obtained is sent.
Running time: Running time depends on the size of the chemical system, the complexity of the method, the number of CPUs, and the computational task. It ranges from several seconds for serial DFT energy calculations on a few atoms to several hours for parallel coupled cluster energy calculations on tens of atoms or ab-initio molecular dynamics simulations on hundreds of atoms.
Multiresolution molecular mechanics: Implementation and efficiency
NASA Astrophysics Data System (ADS)
Biyikli, Emre; To, Albert C.
2017-01-01
Atomistic/continuum coupling methods combine accurate atomistic methods and efficient continuum methods to simulate the behavior of highly ordered crystalline systems. Coupled methods utilize the advantages of both approaches to simulate systems at a lower computational cost, while retaining the accuracy associated with atomistic methods. Many concurrent atomistic/continuum coupling methods have been proposed in the past; however, their true computational efficiency has not been demonstrated. The present work presents an efficient implementation of a concurrent coupling method called the Multiresolution Molecular Mechanics (MMM) for serial, parallel, and adaptive analysis. First, we present the features of the software implemented along with the associated technologies. The scalability of the software implementation is demonstrated, and the competing effects of multiscale modeling and parallelization are discussed. Then, the algorithms contributing to the efficiency of the software are presented. These include algorithms for eliminating latent ghost atoms from calculations and measurement-based dynamic balancing of parallel workload. The efficiency improvements made by these algorithms are demonstrated by benchmark tests. The efficiency of the software is found to be on par with LAMMPS, a state-of-the-art Molecular Dynamics (MD) simulation code, when performing full atomistic simulations. Speed-up of the MMM method is shown to be directly proportional to the reduction of the number of the atoms visited in force computation. Finally, an adaptive MMM analysis on a nanoindentation problem, containing over a million atoms, is performed, yielding an improvement of 6.3-8.5 times in efficiency, over the full atomistic MD method. For the first time, the efficiency of a concurrent atomistic/continuum coupling method is comprehensively investigated and demonstrated.
Wakefield Simulation of CLIC PETS Structure Using Parallel 3D Finite Element Time-Domain Solver T3P
DOE Office of Scientific and Technical Information (OSTI.GOV)
Candel, A.; Kabel, A.; Lee, L.
In recent years, SLAC's Advanced Computations Department (ACD) has developed the parallel 3D Finite Element electromagnetic time-domain code T3P. Higher-order Finite Element methods on conformal unstructured meshes and massively parallel processing allow unprecedented simulation accuracy for wakefield computations and simulations of transient effects in realistic accelerator structures. Applications include simulation of wakefield damping in the Compact Linear Collider (CLIC) power extraction and transfer structure (PETS).
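An explicit time-domain field update of the kind T3P performs can be illustrated, by analogy only, with a 1D leapfrog scheme for the scalar wave equation; T3P itself uses higher-order finite elements on unstructured meshes, so this Python sketch shares nothing with its implementation beyond the explicit-time-stepping idea:

```python
def leapfrog_wave(n, steps, c=1.0, dx=1.0, dt=0.5):
    """Explicit leapfrog update for the 1D scalar wave equation with fixed
    (zero) boundaries; stable while c*dt/dx <= 1 (the CFL condition)."""
    u = [0.0] * n
    u[n // 2] = 1.0          # initial displacement pulse, zero initial velocity
    u_prev = list(u)
    r2 = (c * dt / dx) ** 2
    for _ in range(steps):
        u_next = [0.0] * n   # endpoints stay pinned at zero
        for i in range(1, n - 1):
            u_next[i] = 2.0 * u[i] - u_prev[i] + r2 * (u[i + 1] - 2.0 * u[i] + u[i - 1])
        u_prev, u = u, u_next
    return u
```

The symmetric stencil and fixed boundaries keep a centered pulse symmetric as it propagates, which is a convenient sanity check for any explicit scheme of this type.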
Parallel Multi-cycle LES of an Optical Pent-roof DISI Engine Under Motored Operating Conditions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Dam, Noah; Sjöberg, Magnus; Zeng, Wei
The use of Large-eddy Simulations (LES) has increased due to their ability to resolve the turbulent fluctuations of engine flows and capture the resulting cycle-to-cycle variability. One drawback of LES, however, is the requirement to run multiple engine cycles to obtain the necessary cycle statistics for full validation. The standard method of obtaining the cycles, running a single simulation through many engine cycles sequentially, can take a long time to complete. Recently, a new strategy has been proposed by our research group to reduce the time needed to simulate the many engine cycles by running individual engine cycle simulations in parallel. With modern large computing systems this has the potential to reduce the time needed for a full set of simulated engine cycles to finish by up to an order of magnitude. In this paper, the Parallel Perturbation Methodology (PPM) is used to simulate up to 35 engine cycles of an optically accessible, pent-roof Direct-injection Spark-ignition (DISI) engine at two different motored engine operating conditions, one throttled and one un-throttled. Comparisons are made against corresponding sequential-cycle simulations to verify the similarity of results using either methodology. Mean results from the PPM approach are very similar to sequential-cycle results, with less than 0.5% difference in pressure and a magnitude structure index (MSI) of 0.95. Differences in cycle-to-cycle variability (CCV) predictions are larger, but close to the statistical uncertainty in the measurement for the number of cycles simulated. PPM LES results were also compared against experimental data. Mean quantities such as pressure or mean velocities were typically matched to within 5-10%. Pressure CCVs were under-predicted, mostly due to the lack of any perturbations in the pressure boundary conditions between cycles. Velocity CCVs for the simulations had the same average magnitude as experiments, but the experimental data showed greater spatial variation in the root-mean-square (RMS). Conversely, circular standard deviation results showed greater repeatability of the flow directionality and swirl vortex positioning than the simulations.
Xyce Parallel Electronic Simulator Users' Guide Version 6.8
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keiter, Eric R.; Aadithya, Karthik Venkatraman; Mei, Ting
This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state of the art in the following areas: the capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors), including support for most popular parallel and serial computers; a differential-algebraic-equation (DAE) formulation, which better isolates the device model package from solver algorithms and allows one to develop new types of analysis without requiring the implementation of analysis-specific device models; device models that are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only); and object-oriented code design and implementation using modern coding practices. Xyce is a parallel code in the most general sense of the phrase: a message-passing parallel implementation, which allows it to run efficiently on a wide range of computing platforms, including serial, shared-memory, and distributed-memory parallel platforms. Attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows.
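The implicit time integration a SPICE-style simulator applies to its DAE system can be illustrated on the smallest possible example: a series RC circuit stepped with backward Euler. This Python sketch is illustrative only and says nothing about Xyce's actual solver stack:

```python
def backward_euler_rc(r, c, v_src, dt, steps):
    """Backward-Euler time stepping of a series RC circuit driven by a DC
    source: C dv/dt = (v_src - v) / R. Each step solves an implicit equation
    (trivially linear here), just as a SPICE-style simulator solves its full
    nonlinear DAE system at every time point."""
    v = 0.0
    history = []
    a = dt / (r * c)
    for _ in range(steps):
        v = (v + a * v_src) / (1.0 + a)  # closed-form solution of the implicit step
        history.append(v)
    return history
```

The capacitor voltage rises monotonically toward the source value, and the implicit step remains stable for any dt, which is why implicit DAE formulations are the default for stiff circuit problems.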
Perspectives on the Future of CFD
NASA Technical Reports Server (NTRS)
Kwak, Dochan
2000-01-01
This viewgraph presentation gives an overview of the future of computational fluid dynamics (CFD), a field that has pioneered flow simulation. CFD has progressed along with computing power: numerical methods have advanced as CPU and memory capacities have increased. Complex configurations are now routinely computed, and direct numerical simulations (DNS) and large eddy simulations (LES) are used to study turbulence. As computing resources shifted to parallel and distributed platforms, computer science aspects such as scalability (algorithmic and implementation), portability, and transparent coding have advanced. Examples of potential future (or current) challenges include risk assessment, limitations of heuristic models, and the development of CFD and information technology (IT) tools.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sierra Thermal /Fluid Team
The SIERRA Low Mach Module: Fuego along with the SIERRA Participating Media Radiation Module: Syrinx, henceforth referred to as Fuego and Syrinx, respectively, are the key elements of the ASCI fire environment simulation project. The fire environment simulation project is directed at characterizing both open large-scale pool fires and building enclosure fires. Fuego represents the turbulent, buoyantly-driven incompressible flow, heat transfer, mass transfer, combustion, soot, and absorption coefficient model portion of the simulation software. Syrinx represents the participating-media thermal radiation mechanics. This project is an integral part of the SIERRA multi-mechanics software development project. Fuego depends heavily upon the core architecture developments provided by SIERRA for massively parallel computing, solution adaptivity, and mechanics coupling on unstructured grids.
Particle-in-cell simulations of collisionless magnetic reconnection with a non-uniform guide field
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilson, F., E-mail: fw237@st-andrews.ac.uk; Neukirch, T., E-mail: tn3@st-andrews.ac.uk; Harrison, M. G.
Results are presented of a first study of collisionless magnetic reconnection starting from a recently found exact nonlinear force-free Vlasov–Maxwell equilibrium. The initial state has a Harris sheet magnetic field profile in one direction and a non-uniform guide field in a second direction, resulting in a spatially constant magnetic field strength as well as a constant initial plasma density and plasma pressure. It is found that the reconnection process initially resembles guide field reconnection, but that a gradual transition to anti-parallel reconnection happens as the system evolves. The time evolution of a number of plasma parameters is investigated, and the results are compared with simulations starting from a Harris sheet equilibrium and a Harris sheet plus constant guide field equilibrium.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Franci, Luca; INFN-Sezione di Firenze, Via G. Sansone 1, I-50019 Sesto F.no; Hellinger, Petr, E-mail: petr.hellinger@asu.cas.cz
2016-03-25
Proton temperature anisotropies between the directions parallel and perpendicular to the mean magnetic field are usually observed in the solar wind plasma. Here, we employ a high-resolution hybrid particle-in-cell simulation in order to investigate the relation between spatial properties of the proton temperature and the peaks in the current density and in the flow vorticity. Our results indicate that, although regions where the proton temperature is enhanced and temperature anisotropies are larger correspond approximately to regions where many thin current sheets form, no firm quantitative evidence supports the idea of a direct causality between the two phenomena. On the other hand, quite a clear correlation between the behavior of the proton temperature and the out-of-plane vorticity is obtained.
The effect of thermomechanical processing on second phase particle redistribution in U-10 wt%Mo
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, Xiaohua; Wang, Xiaowo; Joshi, Vineet V.
2018-03-01
The multi-pass hot-rolling process of an annealed uranium-10 wt% molybdenum coupon was studied by plane-strain compression finite element modeling. A two-point correlation function (2PCF) was used to analyze the carbide particle distribution after each rolling reduction. The hot-rolling simulation results show that UC particles aligned along grain boundaries rotate during rolling until they are parallel to the rolling direction, forming the stringer-like distributions typically observed in rolled products that contain inclusions. 2PCF analysis of the simulation shows that the interparticle spacing shrinks along the normal direction (ND). The number of major peaks of the 2PCF along the ND decreases after large reductions. The locations of the major peaks indicate the inter-stringer distances.
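A two-point correlation function of the kind used above can be sketched as a histogram of pair separations projected onto one axis. The following Python toy is illustrative only; the bin width, the pair-count normalization, and the axis convention are assumptions, not the paper's method:

```python
def two_point_correlation(points, axis, bin_width, n_bins):
    """Histogram of pair separations projected onto one axis (e.g. 0 = rolling
    direction, 1 = normal direction), normalized by the total pair count."""
    counts = [0] * n_bins
    n = len(points)
    for a in range(n):
        for b in range(a + 1, n):
            d = abs(points[a][axis] - points[b][axis])
            k = int(d / bin_width)
            if k < n_bins:
                counts[k] += 1
    pairs = n * (n - 1) / 2
    return [c / pairs for c in counts]
```

Peaks in such a histogram along the normal direction correspond to characteristic inter-stringer spacings, which is how the peak locations in the abstract are read.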
Testing New Programming Paradigms with NAS Parallel Benchmarks
NASA Technical Reports Server (NTRS)
Jin, H.; Frumkin, M.; Schultz, M.; Yan, J.
2000-01-01
Over the past decade, high performance computing has evolved rapidly, not only in hardware architectures but also in the increasing complexity of real applications. Technologies have been developed that aim to scale up to thousands of processors on both distributed and shared memory systems. Developing parallel programs on these computers is always a challenging task. Today, writing parallel programs with message passing (e.g. MPI) is the most popular way of achieving scalability and high performance. However, writing message passing programs is difficult and error prone. In recent years, new efforts have been made to define new parallel programming paradigms. The best examples are HPF (based on data parallelism) and OpenMP (based on shared memory parallelism). Both provide simple and clear extensions to sequential programs, greatly simplifying the tedious tasks encountered in writing message passing programs. HPF is independent of the memory hierarchy; however, due to the immaturity of compiler technology its performance is still questionable. Although the use of parallel compiler directives is not new, OpenMP offers a portable solution in the shared-memory domain. Another important development is the tremendous progress in the internet and its associated technology. Although still in its infancy, Java promises portability in a heterogeneous environment and offers the possibility to "compile once and run anywhere." To test these new technologies, we implemented new parallel versions of the NAS Parallel Benchmarks (NPBs) with HPF and OpenMP directives, and extended the work with Java and Java threads. The purpose of this study is to examine the effectiveness of alternative programming paradigms. The NPBs consist of five kernels and three simulated applications that mimic the computation and data movement of large scale computational fluid dynamics (CFD) applications. We started with the serial version included in NPB2.3.
Optimization of memory and cache usage was applied to several benchmarks, noticeably BT and SP, resulting in better sequential performance. To overcome the lack of an HPF performance model and to guide the development of the HPF codes, we employed an empirical performance model for several primitives found in the benchmarks. We encountered a few limitations of HPF, such as the lack of support for the "REDISTRIBUTION" directive and no easy way to handle irregular computation. The parallelization with OpenMP directives was done at the outermost loop level to achieve the largest granularity. The performance of six HPF and OpenMP benchmarks is compared with their MPI counterparts for the Class-A problem size in the figure on the next page. These results were obtained on an SGI Origin2000 (195 MHz) with the MIPSpro-f77 compiler 7.2.1 for the OpenMP and MPI codes and the PGI pghpf-2.4.3 compiler with MPI interface for the HPF programs.
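The outermost-loop strategy described above gives each thread one full outer iteration of work. As an analogy only (the NPBs are Fortran codes parallelized with OpenMP directives, not Python), a thread pool mapped over independent outer iterations expresses the same granularity choice:

```python
from concurrent.futures import ThreadPoolExecutor

def smooth_row(row):
    """Work for one outer-loop iteration: 3-point smoothing of a single row,
    independent of every other row (no loop-carried dependence)."""
    inner = [(row[i - 1] + row[i] + row[i + 1]) / 3.0 for i in range(1, len(row) - 1)]
    return [row[0]] + inner + [row[-1]]

def smooth_parallel(grid, workers=4):
    """Parallelize the outermost loop, one task per row -- the granularity an
    OpenMP parallel-do directive on the outer loop would choose."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(smooth_row, grid))
```

Choosing the outermost loop maximizes the work per scheduling event, which is exactly the granularity argument made for the OpenMP versions of the benchmarks.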
A hybrid algorithm for parallel molecular dynamics simulations
NASA Astrophysics Data System (ADS)
Mangiardi, Chris M.; Meyer, R.
2017-10-01
This article describes algorithms for the hybrid parallelization and SIMD vectorization of molecular dynamics simulations with short-range forces. The parallelization method combines domain decomposition with a thread-based parallelization approach. The goal of the work is to enable efficient simulations of very large (tens of millions of atoms) and inhomogeneous systems on many-core processors with hundreds or thousands of cores and SIMD units with large vector sizes. In order to test the efficiency of the method, simulations of a variety of configurations with up to 74 million atoms have been performed. Results are shown that were obtained on multi-core systems with Sandy Bridge and Haswell processors as well as systems with Xeon Phi many-core processors.
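Short-range force evaluation of the kind discussed above typically rests on cell lists: particles are binned into cells no smaller than the cutoff, so only neighboring cells need to be searched. This serial Python sketch shows the core data structure only; the paper's method additionally layers domain decomposition, threads, and SIMD on top of it:

```python
def build_cells(positions, box, cutoff):
    """Bin particles into square cells of side >= cutoff, so every pair within
    the cutoff lies in the same or an adjacent cell (periodic box)."""
    n_cells = max(1, int(box // cutoff))
    side = box / n_cells
    cells = {}
    for idx, (x, y) in enumerate(positions):
        key = (int(x / side) % n_cells, int(y / side) % n_cells)
        cells.setdefault(key, []).append(idx)
    return cells, n_cells

def neighbor_pairs(positions, box, cutoff):
    """All index pairs closer than the cutoff in a periodic box, via cell lists."""
    cells, n_cells = build_cells(positions, box, cutoff)
    found = set()
    for (cx, cy), members in cells.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                other = cells.get(((cx + dx) % n_cells, (cy + dy) % n_cells), [])
                for a in members:
                    for b in other:
                        if a < b:
                            rx = abs(positions[a][0] - positions[b][0]) % box
                            ry = abs(positions[a][1] - positions[b][1]) % box
                            rx = min(rx, box - rx)  # minimum-image convention
                            ry = min(ry, box - ry)
                            if rx * rx + ry * ry < cutoff * cutoff:
                                found.add((a, b))
    return sorted(found)
```

Because only the 9 surrounding cells are searched per particle, the pair search scales linearly with particle count at fixed density, which is what makes simulations of tens of millions of atoms tractable.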
Evolution of Kelvin-Helmholtz instability at Venus in the presence of the parallel magnetic field
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, H. Y.; Key Laboratory of Planetary Sciences, Chinese Academy of Sciences, Nanjing 210008; Cao, J. B.
2015-06-15
Two-dimensional MHD simulations were performed to study the evolution of the Kelvin-Helmholtz (KH) instability at the Venusian ionopause in response to the strong flow shear in the presence of an in-plane magnetic field parallel to the flow direction. The physical behavior of the KH instability as well as the triggering and occurrence conditions for highly rolled-up vortices are characterized through several physical parameters, including the Alfvén Mach number on the upper side of the layer, the density ratio, and the ratio of parallel magnetic fields between the two sides of the layer. Using these parameters, the simulations show that both the high density ratio and the parallel magnetic field component across the boundary layer play a role in stabilizing the instability. In the high density ratio case, the amount of total magnetic energy in the final quasi-steady state is much greater than in the initial state, which is clearly different from the case with low density ratio. We particularly investigate the nonlinear development of the case that has a high density ratio and uniform magnetic field. Before the instability saturates, a single magnetic island is formed and evolves into two quasi-steady islands in the nonlinear phase. A quasi-steady pattern eventually forms and is embedded within a uniform magnetic field and a broadened boundary layer. The estimation of loss rates of ions from Venus indicates that the stabilizing effect of the parallel magnetic field component on the KH instability becomes strong in the case of high density ratio.
NASA Astrophysics Data System (ADS)
Diama, A.; Matthies, B.; Herwig, K. W.; Hansen, F. Y.; Criswell, L.; Mo, H.; Bai, M.; Taub, H.
2009-08-01
We present evidence from neutron diffraction measurements and molecular dynamics (MD) simulations of three different monolayer phases of the intermediate-length alkanes tetracosane (n-C24H50, denoted C24) and dotriacontane (n-C32H66, denoted C32) adsorbed on a graphite basal-plane surface. Our measurements indicate that the two monolayer films differ principally in the transition temperatures between phases. At the lowest temperatures, both C24 and C32 form a crystalline monolayer phase with a rectangular-centered (RC) structure. The two sublattices of the RC structure each consist of parallel rows of molecules in their all-trans conformation aligned with their long axis parallel to the surface, forming so-called lamellas of width approximately equal to the all-trans length of the molecule. The RC structure is uniaxially commensurate with the graphite surface in its [110] direction such that the distance between molecular rows in a lamella is 4.26 Å = √3 a_g, where a_g = 2.46 Å is the lattice constant of the graphite basal plane. Molecules in adjacent rows of a lamella alternate in orientation between the carbon skeletal plane being parallel and perpendicular to the graphite surface. Upon heating, the crystalline monolayers transform to a "smectic" phase in which the inter-row spacing within a lamella expands by ~10% and the molecules are predominantly oriented with the carbon skeletal plane parallel to the graphite surface. In the smectic phase, the MD simulations show evidence of broadening of the lamella boundaries as a result of molecules diffusing parallel to their long axis. At still higher temperatures, they indicate that the introduction of gauche defects into the alkane chains drives a melting transition to a monolayer fluid phase, as reported previously.
He, Dalong; Wang, Yao; Song, Silong; Liu, Song; Deng, Yuan
2017-12-27
Designing composites with an ordered filler arrangement yields anisotropic performance, with greatly enhanced properties along a specific direction, and is a powerful tool for optimizing the physical properties of composites. Well-aligned core-shell SiC@SiO2 whiskers in a poly(vinylidene fluoride) (PVDF) matrix have been achieved via a modified spinning approach. Because of the high aspect ratio of the SiC whiskers, strong anisotropy and significant enhancement in dielectric constant were observed, with a permittivity of 854 along the parallel direction versus 71 along the perpendicular direction at 20 vol % SiC@SiO2 loading, while little increase in dielectric loss was found owing to the highly insulating SiO2 shell. The anisotropic dielectric behavior of the composite is well understood macroscopically as originating from the anisotropic intensity of interfacial polarization, based on an equivalent circuit model of two parallel RC circuits connected in series. Furthermore, finite element simulations of the three-dimensional distribution of local electric field, polarization, and leakage current density in oriented SiC@SiO2/PVDF composites under different applied electric field directions unambiguously revealed that aligned core-shell SiC@SiO2 whiskers with a high aspect ratio significantly improve dielectric performance. Importantly, the thermal conductivity of the composite was synchronously enhanced more than 7-fold compared to that of the PVDF matrix along the parallel direction at 20 vol % SiC@SiO2 whisker loading. This study highlights an effective strategy for achieving excellent comprehensive properties in high-k dielectrics.
Nonvolatile “AND,” “OR,” and “NOT” Boolean logic gates based on phase-change memory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Y.; Zhong, Y. P.; Deng, Y. F.
2013-12-21
Electronic devices or circuits that can implement both logic and memory functions are regarded as the building blocks for future massively parallel computing beyond the von Neumann architecture. Here we propose phase-change memory (PCM)-based nonvolatile logic gates capable of AND, OR, and NOT Boolean logic operations, verified in SPICE simulations and circuit experiments. The logic operations are performed in parallel, and the results can be stored directly in the states of the logic gates, facilitating the combination of computing and memory in the same circuit. These results are encouraging for ultralow-power and high-speed nonvolatile logic circuit design based on novel memory devices.
A parallel simulated annealing algorithm for standard cell placement on a hypercube computer
NASA Technical Reports Server (NTRS)
Jones, Mark Howard
1987-01-01
A parallel version of a simulated annealing algorithm is presented which is targeted to run on a hypercube computer. A strategy for mapping the cells in a two-dimensional area of a chip onto processors in an n-dimensional hypercube is proposed such that both small and large distance moves can be applied. Two types of moves are allowed: cell exchanges and cell displacements. The computation of the cost function in parallel among all the processors in the hypercube is described, along with a distributed data structure that needs to be stored in the hypercube to support parallel cost evaluation. A novel tree broadcasting strategy is used extensively in the algorithm for updating cell locations in the parallel environment. Studies on the performance of the algorithm on example industrial circuits show that it is faster and gives better final placement results than uniprocessor simulated annealing algorithms. An improved uniprocessor algorithm is proposed, based on the improved results obtained from parallelization of the simulated annealing algorithm.
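The annealing loop at the heart of such placement algorithms is compact enough to sketch. Below is a minimal serial sketch of Metropolis-style annealing applied to a toy one-row cell placement; the cost function, move set, and cooling schedule are illustrative assumptions, not the paper's hypercube-parallel algorithm.

```python
import math
import random

def anneal(cost, state, propose, t0=10.0, alpha=0.995, steps=4000, seed=1):
    """Generic simulated annealing with Metropolis acceptance and geometric cooling."""
    rng = random.Random(seed)
    t = t0
    cur, cur_c = state, cost(state)
    best, best_c = cur, cur_c
    for _ in range(steps):
        cand = propose(cur, rng)
        c = cost(cand)
        # Always accept downhill moves; accept uphill moves with prob exp(-dC/T).
        if c <= cur_c or rng.random() < math.exp((cur_c - c) / t):
            cur, cur_c = cand, c
            if c < best_c:
                best, best_c = cand, c
        t *= alpha  # geometric cooling
    return best, best_c

# Toy problem: order six cells in a row to minimize total net length.
nets = [(0, 5), (1, 2), (3, 4), (0, 3)]

def wirelength(order):
    pos = {cell: slot for slot, cell in enumerate(order)}
    return sum(abs(pos[a] - pos[b]) for a, b in nets)

def swap_two(order, rng):
    i, j = rng.sample(range(len(order)), 2)
    out = list(order)
    out[i], out[j] = out[j], out[i]
    return tuple(out)

placement, length = anneal(wirelength, tuple(range(6)), swap_two)
```

A parallel variant in the spirit of the paper would evaluate proposed moves on different hypercube nodes and broadcast accepted cell locations.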
Methods of parallel computation applied on granular simulations
NASA Astrophysics Data System (ADS)
Martins, Gustavo H. B.; Atman, Allbens P. F.
2017-06-01
Every year, parallel computing becomes cheaper and more accessible. As a consequence, its applications have spread across all research areas. Granular materials are a promising area for parallel computing. To support this statement, we study the impact of parallel computing on simulations of the BNE (Brazil Nut Effect). This effect is the remarkable rise of an intruder confined in a granular medium when it is vertically shaken against gravity. By means of DEM (Discrete Element Method) simulations, we study the code performance, testing different methods to improve clock time. A comparison between serial and parallel algorithms, using OpenMP®, is also shown. The best improvement was obtained by optimizing the function that finds contacts using Verlet cells.
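The contact-search optimization described above can be illustrated with a serial cell-list sketch (a simplification of the Verlet-cell idea; the 2D equal-disk geometry and cell size are assumptions for illustration): particles are binned into cells one contact distance wide, so contact candidates come only from a cell and its eight neighbours instead of all N² pairs.

```python
from collections import defaultdict
from itertools import combinations

def contacts_cell_list(points, radius):
    """Pairs of 2D disks (equal radius) that overlap, found via a uniform cell grid.
    Cell side = one contact distance, so candidates lie in the 3x3 neighbourhood."""
    h = 2.0 * radius  # contact distance for equal disks
    cells = defaultdict(list)
    for idx, (x, y) in enumerate(points):
        cells[(int(x // h), int(y // h))].append(idx)
    pairs = set()
    for (cx, cy) in list(cells):
        # Candidates: this cell plus its eight neighbours.
        cand = []
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                cand.extend(cells.get((cx + dx, cy + dy), ()))
        for i, j in combinations(sorted(set(cand)), 2):
            (x1, y1), (x2, y2) = points[i], points[j]
            if (x1 - x2) ** 2 + (y1 - y2) ** 2 < h * h:
                pairs.add((i, j))
    return pairs
```

In an OpenMP-style parallelization, the outer loop over cells is the natural work-sharing unit, since each cell's candidate search is independent.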
Turbomachinery CFD on parallel computers
NASA Technical Reports Server (NTRS)
Blech, Richard A.; Milner, Edward J.; Quealy, Angela; Townsend, Scott E.
1992-01-01
The role of multistage turbomachinery simulation in the development of propulsion system models is discussed. Particularly, the need for simulations with higher fidelity and faster turnaround time is highlighted. It is shown how such fast simulations can be used in engineering-oriented environments. The use of parallel processing to achieve the required turnaround times is discussed. Current work by several researchers in this area is summarized. Parallel turbomachinery CFD research at the NASA Lewis Research Center is then highlighted. These efforts are focused on implementing the average-passage turbomachinery model on MIMD, distributed memory parallel computers. Performance results are given for inviscid, single blade row and viscous, multistage applications on several parallel computers, including networked workstations.
Massively parallel multicanonical simulations
NASA Astrophysics Data System (ADS)
Gross, Jonathan; Zierenberg, Johannes; Weigel, Martin; Janke, Wolfhard
2018-03-01
Generalized-ensemble Monte Carlo simulations such as the multicanonical method and similar techniques are among the most efficient approaches for simulations of systems undergoing discontinuous phase transitions or with rugged free-energy landscapes. As Markov chain methods, they are inherently serial computationally. It was demonstrated recently, however, that a combination of independent simulations that communicate weight updates at variable intervals allows for the efficient utilization of parallel computational resources for multicanonical simulations. Implementing this approach for the many-thread architecture provided by current generations of graphics processing units (GPUs), we show how it can be efficiently employed with of the order of 10⁴ parallel walkers and beyond, thus constituting a versatile tool for Monte Carlo simulations in the era of massively parallel computing. We provide the fully documented source code for the approach applied to the paradigmatic example of the two-dimensional Ising model as a starting point and reference for practitioners in the field.
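The core of the scheme, independent walkers sampling with a shared weight function and pooling their histograms at update time, can be sketched serially. This toy uses a three-level system with known degeneracies in place of the Ising model, and the simplest flattening update ln W ← ln W − ln H; all parameter values are illustrative, not the paper's.

```python
import math
import random

def run_walker(logw, levels, degeneracy, steps, seed):
    """One independent walker: Metropolis sampling with weights exp(logw[E])."""
    rng = random.Random(seed)
    e = levels[0]
    hist = {lvl: 0 for lvl in levels}
    for _ in range(steps):
        # Propose a level with probability proportional to its degeneracy.
        e2 = rng.choices(levels, weights=[degeneracy[lvl] for lvl in levels])[0]
        if rng.random() < math.exp(logw[e2] - logw[e]):
            e = e2
        hist[e] += 1
    return hist

def muca_update(logw, hists):
    """Merge the walkers' histograms and flatten: ln W <- ln W - ln H."""
    merged = {e: sum(h[e] for h in hists) for e in logw}
    return {e: logw[e] - math.log(merged[e]) for e in merged if merged[e] > 0}

levels = (0, 1, 2)
degeneracy = {0: 1, 1: 3, 2: 6}
logw = {e: 0.0 for e in levels}
# Four "parallel" walkers run independently, then their histograms are merged.
hists = [run_walker(logw, levels, degeneracy, 20000, seed) for seed in range(4)]
logw = muca_update(logw, hists)
```

After one update, frequently visited (high-degeneracy) levels receive the most negative weights, which is what flattens the combined histogram on the next iteration.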
Solar wind interaction with Venus and Mars in a parallel hybrid code
NASA Astrophysics Data System (ADS)
Jarvinen, Riku; Sandroos, Arto
2013-04-01
We discuss the development and applications of a new parallel hybrid simulation, in which ions are treated as particles and electrons as a charge-neutralizing fluid, for the interaction between the solar wind and Venus and Mars. The new simulation code under construction is based on the algorithm of the sequential global planetary hybrid model developed at the Finnish Meteorological Institute (FMI) and on the Corsair parallel simulation platform, also developed at the FMI. The FMI's sequential hybrid model has been used for studies of the plasma interactions of several unmagnetized and weakly magnetized celestial bodies for more than a decade. In particular, the model has been used to interpret in situ particle and magnetic field observations from the plasma environments of Mars, Venus, and Titan. Further, Corsair is an open source MPI (Message Passing Interface) particle and mesh simulation platform, aimed mainly at simulations of diffusive shock acceleration in the solar corona and interplanetary space, but which is now also being extended to global planetary hybrid simulations. In this presentation we discuss the challenges and strategies of parallelizing a legacy simulation code, as well as possible applications and prospects of a scalable parallel hybrid model for the solar wind interactions of Venus and Mars.
Simulation of dispersion in layered coastal aquifer systems
Reilly, T.E.
1990-01-01
A density-dependent solute-transport formulation is used to examine ground-water flow in layered coastal aquifers. The numerical experiments indicate that although the transition zone may be thought of as an impermeable 'sharp' interface with freshwater flow parallel to the transition zone in homogeneous aquifers, this is not the case for layered systems. Freshwater can discharge through the transition zone in the confining units. Further, for the best simulation of layered coastal aquifer systems, either a flow-direction-dependent dispersion formulation is required, or the dispersivities must change spatially to reflect the tight thin confining unit.
Scalable parallel distance field construction for large-scale applications
Yu, Hongfeng; Xie, Jinrong; Ma, Kwan-Liu; ...
2015-10-01
Computing distance fields is fundamental to many scientific and engineering applications. Distance fields can be used to direct analysis and reduce data. In this paper, we present a highly scalable method for computing 3D distance fields on massively parallel distributed-memory machines. A new distributed spatial data structure, named parallel distance tree, is introduced to manage the level sets of data and facilitate surface tracking over time, resulting in significantly reduced computation and communication costs for calculating the distance to the surface of interest from any spatial locations. Our method supports several data types and distance metrics from real-world applications. We demonstrate its efficiency and scalability on state-of-the-art supercomputers using both large-scale volume datasets and surface models. We also demonstrate in-situ distance field computation on dynamic turbulent flame surfaces for a petascale combustion simulation. In conclusion, our work greatly extends the usability of distance fields for demanding applications.
Scalable Parallel Distance Field Construction for Large-Scale Applications.
Yu, Hongfeng; Xie, Jinrong; Ma, Kwan-Liu; Kolla, Hemanth; Chen, Jacqueline H
2015-10-01
Computing distance fields is fundamental to many scientific and engineering applications. Distance fields can be used to direct analysis and reduce data. In this paper, we present a highly scalable method for computing 3D distance fields on massively parallel distributed-memory machines. A new distributed spatial data structure, named parallel distance tree, is introduced to manage the level sets of data and facilitate surface tracking over time, resulting in significantly reduced computation and communication costs for calculating the distance to the surface of interest from any spatial locations. Our method supports several data types and distance metrics from real-world applications. We demonstrate its efficiency and scalability on state-of-the-art supercomputers using both large-scale volume datasets and surface models. We also demonstrate in-situ distance field computation on dynamic turbulent flame surfaces for a petascale combustion simulation. Our work greatly extends the usability of distance fields for demanding applications.
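The cheapest instance of a distance field, a multi-source breadth-first sweep outward from the cells of interest on a uniform grid, is easy to sketch serially. The parallel distance tree in the paper is far more elaborate; this only illustrates what a distance field is, under a Manhattan metric on a small 2D grid.

```python
from collections import deque

def distance_field(shape, sources):
    """Manhattan-distance field from a set of seed cells via multi-source BFS."""
    nx, ny = shape
    INF = float("inf")
    dist = [[INF] * ny for _ in range(nx)]
    queue = deque()
    for (x, y) in sources:
        dist[x][y] = 0          # seed cells are at distance zero
        queue.append((x, y))
    while queue:
        x, y = queue.popleft()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            u, v = x + dx, y + dy
            if 0 <= u < nx and 0 <= v < ny and dist[u][v] == INF:
                dist[u][v] = dist[x][y] + 1  # one step farther from the surface
                queue.append((u, v))
    return dist
```

Scaling this up is exactly where the paper's contribution lies: on a distributed grid, each rank owns a block of cells, and the frontier must be exchanged across block boundaries each sweep.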
Acoustic simulation in architecture with parallel algorithm
NASA Astrophysics Data System (ADS)
Li, Xiaohong; Zhang, Xinrong; Li, Dan
2004-03-01
To address the complexity of architectural environments and the need for real-time simulation of architectural acoustics, a parallel radiosity algorithm was developed. The distribution of sound energy in the scene is solved with this method. The impulse responses between sources and receivers at each frequency segment, calculated with multiple processes, are then combined into the whole frequency response. The numerical experiment shows that the parallel algorithm can improve the acoustic simulation efficiency for complex scenes.
2008-02-09
Massively parallel simulator of optical coherence tomography of inhomogeneous turbid media.
Malektaji, Siavash; Lima, Ivan T; Escobar I, Mauricio R; Sherif, Sherif S
2017-10-01
An accurate and practical simulator for Optical Coherence Tomography (OCT) could be an important tool to study the underlying physical phenomena in OCT, such as multiple light scattering. Recently, many researchers have investigated simulation of OCT of turbid media, e.g., tissue, using Monte Carlo methods. The main drawback of these earlier simulators is the long computational time required to produce accurate results. We developed a massively parallel simulator of OCT of inhomogeneous turbid media that obtains both Class I diffusive reflectivity, due to ballistic and quasi-ballistic scattered photons, and Class II diffusive reflectivity, due to multiply scattered photons. This Monte Carlo-based simulator is implemented on graphics processing units (GPUs), using the Compute Unified Device Architecture (CUDA) platform and programming model, to exploit the parallel nature of the propagation of photons in tissue. It models an arbitrarily shaped sample medium as a tetrahedron-based mesh and uses an advanced importance sampling scheme. This new simulator speeds up simulations of OCT of inhomogeneous turbid media by about two orders of magnitude. To demonstrate this result, we have compared the computation times of our new parallel simulator and its serial counterpart using two samples of inhomogeneous turbid media. We have shown that our parallel implementation reduced the simulation time of OCT of the first sample medium from 407 min to 92 min by using a single GPU card, to 12 min by using 8 GPU cards, and to 7 min by using 16 GPU cards. For the second sample medium, the OCT simulation time was reduced from 209 h to 35.6 h by using a single GPU card, to 4.65 h by using 8 GPU cards, and to only 2 h by using 16 GPU cards. Therefore, our new parallel simulator is considerably more practical to use than its central processing unit (CPU)-based counterpart.
Our new parallel OCT simulator could be a practical tool to study the different physical phenomena underlying OCT, or to design OCT systems with improved performance. Copyright © 2017 Elsevier B.V. All rights reserved.
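The parallel-friendly kernel of such simulators, each photon's random walk being independent of every other photon's, can be sketched in a drastically simplified 1D form. The semi-infinite slab, isotropic scattering, and weight cutoff below are illustrative assumptions; the tetrahedral mesh, importance sampling, and Class I/II separation of the actual simulator are not modelled.

```python
import math
import random

def diffuse_reflectance(mu_a, mu_s, n_photons=20000, seed=7):
    """Crude 1D Monte Carlo: fraction of launched photon weight that re-emerges
    from a semi-infinite scattering/absorbing slab occupying z > 0."""
    rng = random.Random(seed)
    mu_t = mu_a + mu_s            # total interaction coefficient
    albedo = mu_s / mu_t          # surviving weight fraction per interaction
    reflected = 0.0
    for _ in range(n_photons):
        z, uz, w = 0.0, 1.0, 1.0  # depth, direction cosine, photon weight
        while w > 1e-4:           # terminate very light photons
            step = -math.log(1.0 - rng.random()) / mu_t  # exponential free path
            z += uz * step
            if z < 0.0:           # photon escaped back through the surface
                reflected += w
                break
            w *= albedo           # deposit the absorbed fraction of the weight
            uz = rng.uniform(-1.0, 1.0)  # isotropic re-direction (z cosine)
    return reflected / n_photons
```

Because photons never interact with each other, mapping one photon (or a batch) per GPU thread is what yields the near-linear speedups reported above.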
NASA Technical Reports Server (NTRS)
Hsieh, Shang-Hsien
1993-01-01
The principal objective of this research is to develop, test, and implement coarse-grained, parallel-processing strategies for nonlinear dynamic simulations of practical structural problems. There are contributions to four main areas: finite element modeling and analysis of rotational dynamics, numerical algorithms for parallel nonlinear solutions, automatic partitioning techniques to effect load-balancing among processors, and an integrated parallel analysis system.
Enhanced sampling techniques in biomolecular simulations.
Spiwok, Vojtech; Sucur, Zoran; Hosek, Petr
2015-11-01
Biomolecular simulations are routinely used in biochemistry and molecular biology research; however, they often fail to match the expectations of their impact on the pharmaceutical and biotech industries. This is caused by the fact that a vast amount of computer time is required to simulate short episodes from the life of biomolecules. Several approaches have been developed to overcome this obstacle, including the application of massively parallel and special-purpose computers or non-conventional hardware. Methodological approaches are represented by coarse-grained models and enhanced sampling techniques. These techniques can show how the studied system behaves on long time scales on the basis of relatively short simulations. This review presents an overview of new simulation approaches, the theory behind enhanced sampling methods, and success stories of their applications with a direct impact on biotechnology or drug design. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Stepanova, Larisa; Bronnikov, Sergej
2018-03-01
The crack growth direction angles in an isotropic linear elastic plane with a central crack under mixed-mode loading conditions are found for the full range of the mixity parameter. Two fracture criteria of traditional linear fracture mechanics (the maximum tangential stress and minimum strain energy density criteria) are used. Atomistic simulations of the central crack growth process in an infinite plane medium under mixed-mode loading are performed using LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator), a classical molecular dynamics code. The interatomic potential used in this investigation is an Embedded Atom Method (EAM) potential. Plane specimens with an initial central crack were subjected to mixed-mode loadings. The simulation cell contains 400,000 atoms. The crack propagation direction angles under different values of the mixity parameter, from pure tensile loading to pure shear loading, and over a wide range of temperatures (from 0.1 K to 800 K), are obtained and analyzed. It is shown that the crack propagation direction angles obtained by the molecular dynamics method coincide with those given by the multi-parameter fracture criteria based on the strain energy density and the multi-parameter description of the crack-tip fields.
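The maximum tangential stress criterion mentioned above has a closed-form kink angle, which makes it straightforward to tabulate against MD results. A sketch of the standard MTS formula (this is the classical one-parameter criterion, not the paper's multi-parameter extension; signs follow the usual convention where positive K_II gives a negative kink angle):

```python
import math

def mts_kink_angle(k1, k2):
    """Crack deflection angle (radians) from the maximum tangential stress
    criterion, i.e. the root of K_I sin(t) + K_II (3 cos(t) - 1) = 0 that
    maximises the tangential stress. Pure mode I (k2 = 0) gives no kink."""
    if k2 == 0.0:
        return 0.0
    return 2.0 * math.atan((k1 - math.sqrt(k1 * k1 + 8.0 * k2 * k2)) / (4.0 * k2))
```

The two classical limits are recovered: no deflection for pure mode I, and about ∓70.5° for pure mode II.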
Crashworthiness simulations with DYNA3D
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schauer, D.A.; Hoover, C.G.; Kay, G.J.
1996-04-01
Current progress in parallel algorithm research and applications in vehicle crash simulation is described for the explicit, finite element algorithms in DYNA3D. Problem partitioning methods and parallel algorithms for contact at material interfaces are the two challenging algorithm research problems that are addressed. Two prototype parallel contact algorithms have been developed for treating the cases of local and arbitrary contact. Demonstration problems for local contact are crashworthiness simulations with 222 locally defined contact surfaces and a vehicle/barrier collision modeled with arbitrary contact. A simulation of crash tests conducted for a vehicle impacting a U-channel small sign post embedded in soil has been run on both the serial and parallel versions of DYNA3D. A significant reduction in computational time has been observed when running these problems on the parallel version. However, to achieve maximum efficiency, complex problems must be appropriately partitioned, especially when contact dominates the computation.
pWeb: A High-Performance, Parallel-Computing Framework for Web-Browser-Based Medical Simulation.
Halic, Tansel; Ahn, Woojin; De, Suvranu
2014-01-01
This work presents pWeb, a new language and compiler for parallelization of client-side compute-intensive web applications such as surgical simulations. The recently introduced HTML5 standard has enabled creating unprecedented applications on the web. The low performance of the web browser, however, remains the bottleneck for computationally intensive applications, including visualization of complex scenes, real-time physical simulations, and image processing, compared to native applications. The new proposed language is built upon web workers for multithreaded programming in HTML5. The language provides fundamental functionalities of parallel programming languages as well as the fork/join parallel model, which is not supported by web workers. The language compiler automatically generates an equivalent parallel script that complies with the HTML5 standard. A case study on realistic rendering for surgical simulations demonstrates enhanced performance with a compact set of instructions.
n-body simulations using message passing parallel computers.
NASA Astrophysics Data System (ADS)
Grama, A. Y.; Kumar, V.; Sameh, A.
The authors present new parallel formulations of the Barnes-Hut method for n-body simulations on message passing computers. These parallel formulations partition the domain efficiently incurring minimal communication overhead. This is in contrast to existing schemes that are based on sorting a large number of keys or on the use of global data structures. The new formulations are augmented by alternate communication strategies which serve to minimize communication overhead. The impact of these communication strategies is experimentally studied. The authors report on experimental results obtained from an astrophysical simulation on an nCUBE2 parallel computer.
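The serial core of the Barnes-Hut method, a quadtree whose internal cells stand in for their bodies whenever they subtend a small enough angle, fits in a short sketch (2D, G = 1, opening criterion s/d < θ; the message-passing domain partitioning that is the paper's actual subject is not shown, and all names are illustrative):

```python
import math

class Cell:
    """Quadtree node accumulating total mass and mass-weighted position."""
    def __init__(self, cx, cy, half):
        self.cx, self.cy, self.half = cx, cy, half   # centre and half side length
        self.n, self.m, self.sx, self.sy = 0, 0.0, 0.0, 0.0
        self.body, self.kids = None, None

    def insert(self, x, y, m):
        self.n += 1
        self.m += m
        self.sx += m * x
        self.sy += m * y
        if self.n == 1:          # first body: store it directly
            self.body = (x, y, m)
            return
        if self.kids is None:    # second body: subdivide, push the old body down
            self.kids = {}
            old, self.body = self.body, None
            self._push(*old)
        self._push(x, y, m)

    def _push(self, x, y, m):
        q = (x > self.cx, y > self.cy)
        if q not in self.kids:
            h = self.half / 2.0
            self.kids[q] = Cell(self.cx + (h if q[0] else -h),
                                self.cy + (h if q[1] else -h), h)
        self.kids[q].insert(x, y, m)

def accel(cell, x, y, theta=0.5):
    """Acceleration at (x, y); a cell of size s at distance d is treated as a
    point mass at its centre of mass when s/d < theta (G = 1)."""
    if cell is None or cell.n == 0:
        return 0.0, 0.0
    dx = cell.sx / cell.m - x
    dy = cell.sy / cell.m - y
    d = math.hypot(dx, dy)
    if cell.n == 1 or 2.0 * cell.half < theta * d:
        if d < 1e-9:             # the query body itself: skip self-interaction
            return 0.0, 0.0
        f = cell.m / d ** 3
        return f * dx, f * dy
    ax = ay = 0.0
    for kid in cell.kids.values():
        kx, ky = accel(kid, x, y, theta)
        ax += kx
        ay += ky
    return ax, ay

bodies = [(0.0, 0.0, 1.0), (1.0, 0.0, 1.0), (0.0, 1.0, 1.0),
          (5.0, 5.0, 2.0), (6.0, 5.0, 1.0)]
root = Cell(3.0, 3.0, 4.0)  # square [-1, 7]^2 containing all bodies
for b in bodies:
    root.insert(*b)
ax, ay = accel(root, 0.0, 0.0)
```

Setting θ = 0 forces full recursion to the leaves and reproduces the direct O(N²) sum, which is a convenient correctness check.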
A conservative approach to parallelizing the Sharks World simulation
NASA Technical Reports Server (NTRS)
Nicol, David M.; Riffe, Scott E.
1990-01-01
Parallelizing a benchmark problem for parallel simulation, the Sharks World, is described. The solution described is conservative, in the sense that no state information is saved and no 'rollbacks' occur. The approach illustrates both the principal advantage and the principal disadvantage of conservative parallel simulation. The advantage is that, by exploiting lookahead, an approach was found that dramatically improves the serial execution time and also achieves excellent speedups. The disadvantage is that if the model rules are changed in such a way that the lookahead is destroyed, it is difficult to modify the solution to accommodate the changes.
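The conservative discipline reduces to a simple safety rule: a logical process may execute any event with timestamp below the minimum neighbour clock plus the lookahead, since nothing earlier can still arrive. A toy two-process sketch (event queues are plain timestamp lists; this illustrates the rule itself, not the Sharks World model):

```python
import heapq

def safe_bound(clocks, me, lookahead):
    """Earliest timestamp LP `me` could still receive from any other LP."""
    return min(c for i, c in enumerate(clocks) if i != me) + lookahead

def conservative_round(queues, clocks, lookahead):
    """One round: each LP processes all pending events below its safe bound."""
    executed = []
    for i, q in enumerate(queues):
        bound = safe_bound(clocks, i, lookahead)
        while q and q[0] <= bound:
            t = heapq.heappop(q)
            clocks[i] = t           # LP clock advances to the event time
            executed.append((i, t))
    return executed

queues = [[1, 4, 9], [2, 3, 10]]    # pending event timestamps per LP
for q in queues:
    heapq.heapify(q)
clocks = [0, 0]
first = conservative_round(queues, clocks, lookahead=5)
second = conservative_round(queues, clocks, lookahead=5)
```

Note that the second round executes nothing: with the remaining events just beyond the safe bounds, progress stalls, which is exactly why destroying lookahead (or shrinking it) cripples a conservative solution.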
Terascale direct numerical simulations of turbulent combustion using S3D
NASA Astrophysics Data System (ADS)
Chen, J. H.; Choudhary, A.; de Supinski, B.; DeVries, M.; Hawkes, E. R.; Klasky, S.; Liao, W. K.; Ma, K. L.; Mellor-Crummey, J.; Podhorszki, N.; Sankaran, R.; Shende, S.; Yoo, C. S.
2009-01-01
Computational science is paramount to the understanding of underlying processes in internal combustion engines of the future that will utilize non-petroleum-based alternative fuels, including carbon-neutral biofuels, and burn in new combustion regimes that will attain high efficiency while minimizing emissions of particulates and nitrogen oxides. Next-generation engines will likely operate at higher pressures, with greater amounts of dilution and utilize alternative fuels that exhibit a wide range of chemical and physical properties. Therefore, there is a significant role for high-fidelity simulations, direct numerical simulations (DNS), specifically designed to capture key turbulence-chemistry interactions in these relatively uncharted combustion regimes, and in particular, that can discriminate the effects of differences in fuel properties. In DNS, all of the relevant turbulence and flame scales are resolved numerically using high-order accurate numerical algorithms. As a consequence terascale DNS are computationally intensive, require massive amounts of computing power and generate tens of terabytes of data. Recent results from terascale DNS of turbulent flames are presented here, illustrating its role in elucidating flame stabilization mechanisms in a lifted turbulent hydrogen/air jet flame in a hot air coflow, and the flame structure of a fuel-lean turbulent premixed jet flame. Computing at this scale requires close collaborations between computer and combustion scientists to provide optimized scaleable algorithms and software for terascale simulations, efficient collective parallel I/O, tools for volume visualization of multiscale, multivariate data and automating the combustion workflow. The enabling computer science, applied to combustion science, is also required in many other terascale physics and engineering simulations. 
In particular, performance monitoring is used to identify the performance of key kernels in the DNS code, S3D and especially memory intensive loops in the code. Through the careful application of loop transformations, data reuse in cache is exploited thereby reducing memory bandwidth needs, and hence, improving S3D's nodal performance. To enhance collective parallel I/O in S3D, an MPI-I/O caching design is used to construct a two-stage write-behind method for improving the performance of write-only operations. The simulations generate tens of terabytes of data requiring analysis. Interactive exploration of the simulation data is enabled by multivariate time-varying volume visualization. The visualization highlights spatial and temporal correlations between multiple reactive scalar fields using an intuitive user interface based on parallel coordinates and time histogram. Finally, an automated combustion workflow is designed using Kepler to manage large-scale data movement, data morphing, and archival and to provide a graphical display of run-time diagnostics.
SIERRA - A 3-D device simulator for reliability modeling
NASA Astrophysics Data System (ADS)
Chern, Jue-Hsien; Arledge, Lawrence A., Jr.; Yang, Ping; Maeda, John T.
1989-05-01
SIERRA is a three-dimensional general-purpose semiconductor-device simulation program which serves as a foundation for investigating integrated-circuit (IC) device and reliability issues. This program solves the Poisson and continuity equations in silicon under dc, transient, and small-signal conditions. Executing on a vector/parallel minisupercomputer, SIERRA utilizes a matrix solver based on an incomplete LU (ILU) preconditioned conjugate gradient squared (CGS) method. The ILU-CGS method provides a good compromise between memory size and convergence rate. The authors have observed a 5x to 7x speedup over standard direct methods in simulations of transient problems containing highly coupled Poisson and continuity equations such as those found in reliability-oriented simulations. The application of SIERRA to parasitic CMOS latchup and dynamic random-access memory single-event-upset studies is described.
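The CGS iteration referred to above can be sketched in dense, unpreconditioned form (the ILU preconditioner is omitted; the 2x2 test system below is an arbitrary stand-in for the nonsymmetric systems that arise from the coupled Poisson/continuity equations):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def matvec(mat, v):
    return [dot(row, v) for row in mat]

def cgs(mat, b, iters=25, tol=1e-10):
    """Unpreconditioned conjugate gradient squared (Sonneveld) for Ax = b,
    starting from x = 0."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                         # residual b - A*0
    r_hat = r[:]                     # fixed shadow residual
    rho_old = 1.0
    p = u = q = [0.0] * n
    for it in range(iters):
        if max(abs(ri) for ri in r) < tol:
            break                    # converged
        rho = dot(r_hat, r)
        if it == 0:
            u = r[:]
            p = u[:]
        else:
            beta = rho / rho_old
            u = [ri + beta * qi for ri, qi in zip(r, q)]
            p = [ui + beta * (qi + beta * pi) for ui, qi, pi in zip(u, q, p)]
        v = matvec(mat, p)
        alpha = rho / dot(r_hat, v)
        q = [ui - alpha * vi for ui, vi in zip(u, v)]
        uq = [ui + qi for ui, qi in zip(u, q)]
        x = [xi + alpha * si for xi, si in zip(x, uq)]
        w = matvec(mat, uq)
        r = [ri - alpha * wi for ri, wi in zip(r, w)]
        rho_old = rho
    return x
```

Unlike plain CG, CGS needs no transpose products, which is one reason squared/bi-conjugate variants suit the nonsymmetric matrices of device simulation.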
Horsch, Martin; Vrabec, Jadran; Bernreuther, Martin; Grottel, Sebastian; Reina, Guido; Wix, Andrea; Schaber, Karlheinz; Hasse, Hans
2008-04-28
Molecular dynamics (MD) simulation is applied to the condensation process of supersaturated vapors of methane, ethane, and carbon dioxide. Simulations of systems with up to 10⁶ particles were conducted with a massively parallel MD program. This leads to reliable statistics and makes nucleation rates down to the order of 10³⁰ m⁻³ s⁻¹ accessible to the direct simulation approach. Simulation results are compared to the classical nucleation theory (CNT) as well as the modification of Laaksonen, Ford, and Kulmala (LFK), which introduces a size dependence of the specific surface energy. CNT describes the nucleation of ethane and carbon dioxide excellently over the entire studied temperature range, whereas LFK provides a better approach to methane at low temperatures.
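The CNT quantities being tested are simple to state: for supersaturation S, the critical radius is r* = 2σv/(k_B T ln S) and the barrier is ΔG* = 16πσ³v²/[3(k_B T ln S)²], equivalently (4π/3)σr*². A sketch with illustrative parameter values (not the paper's fluids or state points):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def cnt(sigma, v_mol, temp, supersat):
    """Critical radius (m) and nucleation barrier (J) from classical nucleation
    theory for a supersaturated vapour.
    sigma: planar surface tension (J/m^2); v_mol: molecular volume (m^3);
    supersat: p / p_sat, must exceed 1."""
    x = K_B * temp * math.log(supersat)       # k_B T ln S
    r_star = 2.0 * sigma * v_mol / x
    dg_star = 16.0 * math.pi * sigma ** 3 * v_mol ** 2 / (3.0 * x * x)
    return r_star, dg_star

# Illustrative numbers loosely of the right order for a small cryogenic molecule.
r_star, dg_star = cnt(sigma=0.02, v_mol=6.0e-29, temp=120.0, supersat=10.0)
```

The rate then follows as J = J₀ exp(−ΔG*/k_B T); the LFK modification replaces the constant σ with a size-dependent surface energy.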
Aperture-based antihydrogen gravity experiment: Parallel plate geometry
NASA Astrophysics Data System (ADS)
Rocha, J. R.; Hedlof, R. M.; Ordonez, C. A.
2013-10-01
An analytical model and a Monte Carlo simulation are presented of an experiment that could be used to determine the direction of the acceleration of antihydrogen due to gravity. The experiment would rely on methods developed by existing antihydrogen research collaborations. The configuration consists of two circular, parallel plates that have an axis of symmetry directed away from the center of the earth. The plates are separated by a small vertical distance, and include one or more pairs of circular barriers that protrude from the upper and lower plates, thereby forming an aperture between the plates. Antihydrogen annihilations that occur just beyond each barrier, within a "shadow" region, are asymmetric on the upper plate relative to the lower plate. The probability for such annihilations is determined for a point, line and spheroidal source of antihydrogen. The production of 100,000 antiatoms is predicted to be necessary for the aperture-based experiment to indicate the direction of free fall acceleration of antimatter, provided that antihydrogen is produced within a sufficiently small antiproton plasma at a temperature of 4 K.
AC losses in horizontally parallel HTS tapes for possible wireless power transfer applications
NASA Astrophysics Data System (ADS)
Shen, Boyang; Geng, Jianzhao; Zhang, Xiuchang; Fu, Lin; Li, Chao; Zhang, Heng; Dong, Qihuan; Ma, Jun; Gawith, James; Coombs, T. A.
2017-12-01
This paper presents the concept of using horizontally parallel HTS tapes, with an AC loss study and an investigation of possible wireless power transfer (WPT) applications. An example of three parallel HTS tapes was proposed, whose AC losses were studied both experimentally, using the electrical method, and in simulation, using a 2D H-formulation on the FEM platform of COMSOL Multiphysics. The electromagnetic induction around the three parallel tapes was monitored in the COMSOL simulation. The electromagnetic induction and AC losses generated by a conventional three-turn coil were simulated as well, and then compared to the case of three parallel tapes carrying the same AC transport current. The analysis demonstrates that HTS parallel tapes could potentially be used in wireless power transfer systems, with lower total AC losses than conventional HTS coils.
Dissipation and particle energization in moderate to low beta turbulent plasma via PIC simulations
Makwana, Kirit; Li, Hui; Guo, Fan; ...
2017-05-30
Here, we simulate decaying turbulence in electron-positron pair plasmas using a fully kinetic particle-in-cell (PIC) code. We run two simulations with moderate-to-low plasma β (the ratio of thermal pressure to magnetic pressure). The energy decay rate is found to be similar in both cases. The perpendicular wave-number spectrum of magnetic energy shows a slope between k⊥^(-1.3) and k⊥^(-1.1), where the perpendicular (⊥) and parallel (∥) directions are defined with respect to the magnetic field. The particle kinetic energy distribution function shows the formation of a non-thermal feature in the case of lower plasma β, with a slope close to E^(-1). The correlation between thin turbulent current sheets and Ohmic heating by the dot product of the electric field (E) and current density (J) is investigated. In heating, the parallel E∥·J∥ term dominates the perpendicular E⊥·J⊥ term. Regions of strong E∥·J∥ are spatially well correlated with regions of intense current sheets, which also appear correlated with regions of strong E∥ in the low-β simulation, suggesting an important role of magnetic reconnection in the dissipation of low-β plasma turbulence.
NASA Astrophysics Data System (ADS)
Li, Xinhua; Song, Zhenyu; Zhan, Yongjie; Wu, Qiongzhi
2009-12-01
Since system capacity is severely limited, reducing multiple access interference (MAI) is necessary in the multiuser direct-sequence code division multiple access (DS-CDMA) system used in the telecommunication terminals' data-transfer link. In this paper, after reviewing various multiuser detection schemes, we adopt an adaptive multistage parallel interference cancellation (PIC) structure in the demodulator, based on the least mean square (LMS) algorithm, to eliminate the MAI. Neither a training sequence nor a pilot signal is needed in the proposed scheme, and its implementation complexity can be greatly reduced by an approximate LMS algorithm. The algorithm and its FPGA implementation are then derived. Simulation results show that the proposed adaptive PIC outperforms some existing interference cancellation methods in AWGN channels. The hardware setup of the multiuser demodulator is described, and experimental results demonstrate large performance gains over the conventional single-user demodulator.
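The abstract does not give the filter equations, but the core of any LMS-based canceller is the stochastic-gradient weight update. A minimal sketch with hypothetical signals and step size (not the paper's multistage FPGA design): an adaptive filter estimates the interference in the received signal from a correlated reference and subtracts it.

```python
import numpy as np

def lms_cancel(d, x, mu=0.05, order=4):
    """Adaptive interference cancellation: estimate the interference in d
    from the reference x with an LMS filter and subtract it."""
    w = np.zeros(order)
    e = np.zeros(len(d))
    for n in range(order, len(d)):
        xv = x[n - order:n][::-1]   # most recent reference samples first
        y = w @ xv                  # current interference estimate
        e[n] = d[n] - y             # canceller output (= LMS error)
        w += 2 * mu * e[n] * xv     # stochastic-gradient weight update
    return e

rng = np.random.default_rng(0)
x = rng.standard_normal(4000)          # reference correlated with the interferer
s = 0.1 * rng.standard_normal(4000)    # weak desired signal
d = s.copy()
d[1:] += 0.8 * x[:-1]                  # interference: delayed, scaled reference
e = lms_cancel(d, x)                   # after convergence, e approximates s
```

Because the update is driven by the output error itself, no training sequence is needed, which mirrors the blind operation claimed in the abstract.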
New aspects of whistler waves driven by an electron beam studied by a 3-D electromagnetic code
NASA Technical Reports Server (NTRS)
Nishikawa, Ken-Ichi; Buneman, Oscar; Neubert, Torsten
1994-01-01
We have restudied electron beam driven whistler waves with a 3-D electromagnetic particle code. The simulation results show electromagnetic whistler wave emissions and electrostatic beam modes like those observed in the Spacelab 2 electron beam experiment. It has been suggested in the past that the spatial bunching of beam electrons associated with the beam mode may directly generate whistler waves. However, the simulation results indicate several inconsistencies with this picture: (1) whistler waves continue to be generated even after the beam mode space charge modulation loses its coherence, (2) the parallel (to the background magnetic field) wavelength of the whistler wave is longer than that of the beam instability, and (3) the parallel phase velocity of the whistler wave is smaller than that of the beam mode. The complex structure of the whistler waves in the vicinity of the beam suggests that the transverse motion (gyration) of the beam and background electrons is also involved in the generation of whistler waves.
Electromagnetically induced disintegration and polarization plane rotation of laser pulses
NASA Astrophysics Data System (ADS)
Parshkov, Oleg M.; Budyak, Victoria V.; Kochetkova, Anastasia E.
2017-04-01
Numerical simulation results are presented for the disintegration of linearly polarized short probe pulses under electromagnetically induced transparency in a counterintuitively ordered, superposed, linearly polarized control field. It is shown that this disintegration occurs if the linear polarizations of the interacting pulses are neither parallel nor mutually perpendicular. In the case of a weak input probe field, the polarization of one probe pulse in the medium is parallel, whereas the polarization of the other probe pulse is perpendicular, to the polarization direction of the input control radiation. The effect is analogous to the one that takes place when a short laser pulse propagates along the main axes of a biaxial crystal, owing to the difference in the group velocities of the normal modes. The essential difference between probe-pulse disintegration and the linear process in a biaxial crystal is that the probe pulse preserves linear polarization at all stages of propagation. The numerical simulation is performed for a scheme of degenerate quantum transitions between the 3P0, 3P01 and 3P2 energy levels of the 208Pb isotope.
Computer-Aided Parallelizer and Optimizer
NASA Technical Reports Server (NTRS)
Jin, Haoqiang
2011-01-01
The Computer-Aided Parallelizer and Optimizer (CAPO) automates the insertion of compiler directives (see figure) to facilitate parallel processing on Shared Memory Parallel (SMP) machines. While CAPO currently is integrated seamlessly into CAPTools (developed at the University of Greenwich, now marketed as ParaWise), CAPO was independently developed at Ames Research Center as one of the components for the Legacy Code Modernization (LCM) project. The current version takes serial FORTRAN programs, performs interprocedural data dependence analysis, and generates OpenMP directives. Due to the widely supported OpenMP standard, the generated OpenMP codes have the potential to run on a wide range of SMP machines. CAPO relies on accurate interprocedural data dependence information currently provided by CAPTools. Compiler directives are generated through identification of parallel loops in the outermost level, construction of parallel regions around parallel loops and optimization of parallel regions, and insertion of directives with automatic identification of private, reduction, induction, and shared variables. Attempts also have been made to identify potential pipeline parallelism (implemented with point-to-point synchronization). Although directives are generated automatically, user interaction with the tool is still important for producing good parallel codes. A comprehensive graphical user interface is included for users to interact with the parallelization process.
Xyce™ Parallel Electronic Simulator Users' Guide, Version 6.5.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keiter, Eric R.; Aadithya, Karthik V.; Mei, Ting
This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors), including support for most popular parallel and serial computers; a differential-algebraic-equation (DAE) formulation, which better isolates the device model package from solver algorithms and allows one to develop new types of analysis without requiring the implementation of analysis-specific device models; device models that are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only); and object-oriented code design and implementation using modern coding practices. Xyce is a parallel code in the most general sense of the phrase, a message-passing parallel implementation, which allows it to run efficiently on a wide range of computing platforms, including serial, shared-memory and distributed-memory parallel platforms. Attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows. The information herein is subject to change without notice. Copyright © 2002-2016 Sandia Corporation. All rights reserved.
Petascale computation of multi-physics seismic simulations
NASA Astrophysics Data System (ADS)
Gabriel, Alice-Agnes; Madden, Elizabeth H.; Ulrich, Thomas; Wollherr, Stephanie; Duru, Kenneth C.
2017-04-01
Capturing the observed complexity of earthquake sources in concurrence with seismic wave propagation simulations is an inherently multi-scale, multi-physics problem. In this presentation, we present simulations of earthquake scenarios resolving high-detail dynamic rupture evolution and high frequency ground motion. The simulations combine a multitude of representations of model complexity, such as non-linear fault friction, thermal and fluid effects, heterogeneous fault stress and fault strength initial conditions, fault curvature and roughness, on- and off-fault non-elastic failure to capture dynamic rupture behavior at the source; and seismic wave attenuation, 3D subsurface structure and bathymetry impacting seismic wave propagation. Performing such scenarios at the necessary spatio-temporal resolution requires highly optimized and massively parallel simulation tools which can efficiently exploit HPC facilities. Our up to multi-PetaFLOP simulations are performed with SeisSol (www.seissol.org), an open-source software package based on an ADER-Discontinuous Galerkin (DG) scheme solving the seismic wave equations in velocity-stress formulation in elastic, viscoelastic, and viscoplastic media with high-order accuracy in time and space. Our flux-based implementation of frictional failure remains free of spurious oscillations. Tetrahedral unstructured meshes allow for complicated model geometry. SeisSol has been optimized on all software levels, including: assembler-level DG kernels which obtain 50% peak performance on some of the largest supercomputers worldwide; an overlapping MPI-OpenMP parallelization shadowing the multiphysics computations; usage of local time stepping; parallel input and output schemes and direct interfaces to community standard data formats. All these factors help to minimise the time-to-solution.
The results presented highlight the fact that modern numerical methods and hardware-aware optimization for modern supercomputers are essential to further our understanding of earthquake source physics and complement both physics-based ground motion research and empirical approaches in seismic hazard analysis. Lastly, we will conclude with an outlook on future exascale ADER-DG solvers for seismological applications.
NASA Astrophysics Data System (ADS)
Montoliu, C.; Ferrando, N.; Gosálvez, M. A.; Cerdá, J.; Colom, R. J.
2013-10-01
The use of atomistic methods, such as the Continuous Cellular Automaton (CCA), is currently regarded as a computationally efficient and experimentally accurate approach for the simulation of anisotropic etching of various substrates in the manufacture of Micro-electro-mechanical Systems (MEMS). However, when the features of the chemical process are modified, a time-consuming calibration process needs to be used to transform the new macroscopic etch rates into a corresponding set of atomistic rates. Furthermore, changing the substrate requires a labor-intensive effort to reclassify most atomistic neighborhoods. In this context, the Level Set (LS) method provides an alternative approach where the macroscopic forces affecting the front evolution are directly applied at the discrete level, thus avoiding the need for reclassification and/or calibration. Correspondingly, we present a fully-operational Sparse Field Method (SFM) implementation of the LS approach, discussing in detail the algorithm and providing a thorough characterization of the computational cost and simulation accuracy, including a comparison to the performance of the most recent CCA model. We conclude that the SFM implementation achieves accuracy similar to that of the CCA method, with fewer fluctuations in the etch front, while requiring roughly 4 times less memory. Although SFM can be up to 2 times slower than CCA for the simulation of anisotropic etchants, it can also be up to 10 times faster than CCA for isotropic etchants. In addition, we present a parallel, GPU-based implementation (gSFM) and compare it to an optimized, multicore CPU version (cSFM), demonstrating that the SFM algorithm can be successfully parallelized and the simulation times consequently reduced, while keeping the accuracy of the simulations.
Although modern multicore CPUs provide an acceptable option, the massively parallel architecture of modern GPUs is more suitable, as reflected by computational times for gSFM up to 7.4 times faster than for cSFM.
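The LS idea the paper builds on can be illustrated with a toy sketch (not the SFM implementation, which only updates a narrow band of active grid points): the front is the zero level of a function φ advected at the etch speed F, here for the simple isotropic case.

```python
import numpy as np

def level_set_step(phi, F, dt, h=1.0):
    """One explicit step of the level-set equation phi_t + F|grad phi| = 0
    (central differences; adequate for a smooth signed-distance phi)."""
    gx = (np.roll(phi, -1, axis=0) - np.roll(phi, 1, axis=0)) / (2 * h)
    gy = (np.roll(phi, -1, axis=1) - np.roll(phi, 1, axis=1)) / (2 * h)
    return phi - dt * F * np.hypot(gx, gy)

# Isotropic etch of a circular cavity of radius 10 at unit speed
n = 64
i, j = np.meshgrid(np.arange(n) - n // 2, np.arange(n) - n // 2, indexing="ij")
phi = np.hypot(i, j) - 10.0            # signed distance to the initial front
for _ in range(50):
    phi = level_set_step(phi, F=1.0, dt=0.1)
# after t = 5 the zero level sits near radius 15
```

An anisotropic etchant would replace the constant F with an orientation-dependent speed F(∇φ), which is precisely where the LS approach avoids the CCA recalibration step.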
Parallelisation study of a three-dimensional environmental flow model
NASA Astrophysics Data System (ADS)
O'Donncha, Fearghal; Ragnoli, Emanuele; Suits, Frank
2014-03-01
There are many simulation codes in the geosciences that are serial and cannot take advantage of the parallel computational resources commonly available today. One model important for our work in coastal ocean current modelling is EFDC, a Fortran 77 code configured for optimal deployment on vector computers. In order to take advantage of our cache-based, blade computing system we restructured EFDC from serial to parallel, thereby allowing us to run existing models more quickly, and to simulate larger and more detailed models that were previously impractical. Since the source code for EFDC is extensive and involves detailed computation, it is important to do such a port in a manner that limits changes to the files, while achieving the desired speedup. We describe a parallelisation strategy involving surgical changes to the source files to minimise error-prone alteration of the underlying computations, while allowing load-balanced domain decomposition for efficient execution on a commodity cluster. The use of conjugate gradient posed particular challenges, because its implicit non-local communication hinders standard domain partitioning schemes; a number of techniques to address this in a feasible, computationally efficient manner are discussed. The parallel implementation demonstrates good scalability in combination with a novel domain partitioning scheme that specifically handles mixed water/land regions commonly found in coastal simulations. The approach presented here represents a practical methodology to rejuvenate legacy code on a commodity blade cluster with reasonable effort; our solution has direct application to other similar codes in the geosciences.
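The load-balancing idea for mixed water/land domains can be sketched as follows (an illustrative greedy scheme, not the paper's actual algorithm): split the grid into contiguous strips so that each holds roughly the same number of wet cells, since dry land cells cost essentially nothing to compute.

```python
def partition_columns(water_per_col, nparts):
    """Split columns into nparts contiguous strips [(start, end), ...]
    with roughly equal totals of water cells."""
    total = sum(water_per_col)
    bounds, start, acc = [], 0, 0
    for idx, w in enumerate(water_per_col):
        acc += w
        # cut once the running total passes the next equal share
        if len(bounds) < nparts - 1 and acc >= total * (len(bounds) + 1) / nparts:
            bounds.append((start, idx + 1))
            start = idx + 1
    bounds.append((start, len(water_per_col)))
    return bounds
```

For example, `partition_columns([0, 0, 5, 5, 5, 5, 0, 0], 2)` cuts once the running count reaches half of the 20 water cells, yielding `[(0, 4), (4, 8)]`: the strips differ in land area but carry equal computational load.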
NASA Astrophysics Data System (ADS)
Maeda, T.; Furumura, T.; Noguchi, S.; Takemura, S.; Iwai, K.; Lee, S.; Sakai, S.; Shinohara, M.
2011-12-01
The fault rupture of the 2011 Tohoku (Mw 9.0) earthquake spread approximately 550 km by 260 km, with a long source rupture duration of ~200 s. For such a large earthquake with a complicated source rupture process, the radiation of seismic waves from the source rupture and the initiation of tsunami due to coseismic deformation are considered very complicated. In order to understand this complicated process of seismic waves, coseismic deformation and tsunami, we proposed a unified approach for total modeling of earthquake-induced phenomena in a single numerical scheme based on a finite-difference method simulation (Maeda and Furumura, 2011). This simulation model solves the equation of motion based on linear elastic theory, with equilibrium between quasi-static pressure and gravity in the water column. The height of the tsunami is obtained from this simulation as the vertical displacement of the ocean surface. In order to simulate seismic waves, ocean acoustics, coseismic deformation, and tsunami from the 2011 Tohoku earthquake, we assembled a high-resolution 3D heterogeneous subsurface structural model of northern Japan. The simulation area is 1200 km x 800 km and 120 km in depth, discretized with grid intervals of 1 km in the horizontal directions and 0.25 km in the vertical direction. We adopt the source-rupture model proposed by Lee et al. (2011), obtained by joint inversion of teleseismic, near-field strong motion, and coseismic deformation data. For conducting such a large-scale simulation, we fully parallelized our simulation code based on a domain-partitioning procedure, which achieved good speed-up on up to 8192 cores with a parallel efficiency of 99.839%. The simulation result clearly demonstrates the process by which the seismic wave radiates from the complicated source rupture over the fault plane and propagates in the heterogeneous structure of northern Japan.
The generation of tsunami from coseismic ground deformation at the sea floor, and its subsequent propagation, are also well demonstrated. The simulation shows that a very large slip of up to 40 m at the shallow plate boundary near the trench pushes up the sea floor as the source rupture propagates, and the highly elevated sea surface then gradually starts propagating as a tsunami under gravity. The simulated vertical-component displacement waveform matches very consistently the ocean-bottom pressure gauge record installed just above the source fault area (Maeda et al., 2011). Strong reverberation of ocean-acoustic waves between the sea surface and the sea bottom, particularly near the Japan Trench, is confirmed in the simulation long after the source rupture ends. Accordingly, long wavetrains of high-frequency ocean-acoustic waves develop and overlap the later tsunami waveforms, as found in the observations.
Automatic Generation of Directive-Based Parallel Programs for Shared Memory Parallel Systems
NASA Technical Reports Server (NTRS)
Jin, Hao-Qiang; Yan, Jerry; Frumkin, Michael
2000-01-01
The shared-memory programming model is a very effective way to achieve parallelism on shared memory parallel computers. As great progress has been made in hardware and software technologies, the performance of parallel programs with compiler directives has improved markedly. The introduction of OpenMP directives, the industrial standard for shared-memory programming, has minimized the issue of portability. Due to its ease of programming and its good performance, the technique has become very popular. In this study, we have extended CAPTools, a computer-aided parallelization toolkit, to automatically generate directive-based, OpenMP, parallel programs. We outline techniques used in the implementation of the tool and present test results on the NAS parallel benchmarks and ARC3D, a CFD application. This work demonstrates the great potential of using computer-aided tools to quickly port parallel programs and also achieve good performance.
Particle-in-cell simulations of the critical ionization velocity effect in finite size clouds
NASA Technical Reports Server (NTRS)
Moghaddam-Taaheri, E.; Lu, G.; Goertz, C. K.; Nishikawa, K. - I.
1994-01-01
The critical ionization velocity (CIV) mechanism in a finite size cloud is studied with a series of electrostatic particle-in-cell simulations. It is observed that an initial seed ionization, produced by non-CIV mechanisms, generates a cross-field ion beam which excites a modified beam-plasma instability (MBPI) with frequency in the range of the lower hybrid frequency. The excited waves accelerate electrons along the magnetic field up to the ion drift energy, which exceeds the ionization energy of the neutral atoms. The heated electrons in turn enhance the ion beam by electron-neutral impact ionization, which establishes a positive feedback loop maintaining the CIV process. It is also found that the efficiency of the CIV mechanism depends on the finite size of the gas cloud in the following ways: (1) Along the ambient magnetic field the finite size of the cloud, L (sub parallel), restricts the growth of the fastest growing mode, with a wavelength lambda (sub m parallel), of the MBPI. The parallel electron heating at wave saturation scales approximately as (L (sub parallel)/lambda (sub m parallel)) (exp 1/2). (2) Momentum coupling between the cloud and the ambient plasma via the Alfven waves occurs as a result of the finite size of the cloud in the direction perpendicular to both the ambient magnetic field and the neutral drift. This reduces exponentially with time the relative drift between the ambient plasma and the neutrals, on a timescale inversely proportional to the Alfven velocity. (3) The transverse charge separation field across the cloud was found to result in a modulation of the beam velocity, which reduces the parallel heating of electrons and increases the transverse acceleration of electrons. (4) Some energetic electrons are lost from the cloud along the magnetic field at a rate characterized by the acoustic velocity, instead of the electron thermal velocity.
The loss of energetic electrons from the cloud seems to be larger in the direction of plasma drift relative to the neutrals, where the loss rate is characterized by the neutral drift velocity. It is also shown that a factor of 4 increase in the ambient plasma density, increases the CIV ionization yield by almost 2 orders of magnitude at the end of a typical run. It is concluded that a larger ambient plasma density can result in a larger CIV yield because of (1) larger seed ion production by non-CIV mechanisms, (2) smaller Alfven velocity and hence weak momentum coupling, and (3) smaller ratio of the ion beam density to the ambient ion density, and therefore a weaker modulation of the beam velocity. The simulation results are used to interpret various chemical release experiments in space.
NASA Astrophysics Data System (ADS)
Valasek, Lukas; Glasa, Jan
2017-12-01
Current fire simulation systems are capable of utilizing the advantages of available high-performance computing (HPC) platforms to model fires efficiently in parallel. In this paper, the efficiency of a corridor fire simulation on an HPC computer cluster is discussed. The parallel MPI version of Fire Dynamics Simulator is used for testing the efficiency of selected strategies for allocating the computational resources of the cluster when using a greater number of computational cores. Simulation results indicate that if the number of cores used is not a multiple of the total number of cores per cluster node, there are allocation strategies which provide more efficient calculations.
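The allocation effect described above can be caricatured with a toy load-balance model (an assumption-laden sketch, not FDS's actual mesh-to-rank scheduler): assign equally sized meshes to MPI ranks round-robin and compare the mean load to the maximum, since the slowest rank sets the pace.

```python
def load_balance_efficiency(cells_per_mesh, nranks):
    """Round-robin mesh-to-rank assignment; efficiency = mean load / max load."""
    loads = [0] * nranks
    for m, cells in enumerate(cells_per_mesh):
        loads[m % nranks] += cells
    return (sum(loads) / nranks) / max(loads)

# 8 equal meshes: 4 ranks balance perfectly, 3 ranks do not
perfect = load_balance_efficiency([10] * 8, 4)   # 1.0
skewed = load_balance_efficiency([10] * 8, 3)    # 8/9, one rank idles
```

This is the arithmetic behind the observation that core counts which do not divide the work evenly leave some allocation strategies clearly ahead of others.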
Automatic Multilevel Parallelization Using OpenMP
NASA Technical Reports Server (NTRS)
Jin, Hao-Qiang; Jost, Gabriele; Yan, Jerry; Ayguade, Eduard; Gonzalez, Marc; Martorell, Xavier; Biegel, Bryan (Technical Monitor)
2002-01-01
In this paper we describe the extension of the CAPO (CAPtools (Computer Aided Parallelization Toolkit) OpenMP) parallelization support tool to support multilevel parallelism based on OpenMP directives. CAPO generates OpenMP directives with extensions supported by the NanosCompiler to allow for directive nesting and definition of thread groups. We report some results for several benchmark codes and one full application that have been parallelized using our system.
NASA Astrophysics Data System (ADS)
Li, Gen; Tang, Chun-An; Liang, Zheng-Zhao
2017-01-01
Multi-scale high-resolution modeling of the rock failure process is a powerful means in modern rock mechanics studies to reveal complex failure mechanisms and to evaluate engineering risks. However, multi-scale continuous modeling of rock, from deformation and damage to failure, places high requirements on the design, implementation scheme and computation capacity of the numerical software system. This study is aimed at developing a parallel finite element procedure, a parallel rock failure process analysis (RFPA) simulator capable of modeling the whole trans-scale failure process of rock. Based on the statistical meso-damage mechanical method, the RFPA simulator is able to construct heterogeneous rock models with multiple mechanical properties and to represent the trans-scale propagation of cracks, in which the stress and strain fields are solved for the damage evolution analysis of each representative volume element by the parallel finite element method (FEM) solver. This paper describes the theoretical basis of the approach and provides the details of the parallel implementation on a Windows-Linux interactive platform. A numerical model is built to test the parallel performance of the FEM solver. Numerical simulations are then carried out on a laboratory-scale uniaxial compression test, and on field-scale net fracture spacing and engineering-scale rock slope examples, respectively. The simulation results indicate that relatively high speedup and computation efficiency can be achieved by the parallel FEM solver with a reasonable boot process. In the laboratory-scale simulation, well-known physical phenomena, such as the macroscopic fracture pattern and stress-strain responses, can be reproduced. In the field-scale simulation, the formation process of net fracture spacing from initiation and propagation to saturation can be revealed completely. In the engineering-scale simulation, the whole progressive failure process of the rock slope can be well modeled.
It is shown that the parallel FE simulator developed in this study is an efficient tool for modeling the whole trans-scale failure process of rock from meso- to engineering-scale.
Computational Particle Dynamic Simulations on Multicore Processors (CPDMu) Final Report Phase I
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmalz, Mark S
2011-07-24
Statement of Problem - The Department of Energy has many legacy codes for simulation of computational particle dynamics and computational fluid dynamics applications that are designed to run on sequential processors and are not easily parallelized. Emerging high-performance computing architectures employ massively parallel multicore architectures (e.g., graphics processing units) to increase throughput. Parallelization of legacy simulation codes is a high priority, to achieve compatibility, efficiency, accuracy, and extensibility. General Statement of Solution - A legacy simulation application designed for implementation on mainly-sequential processors has been represented as a graph G. Mathematical transformations, applied to G, produce a graph representation G̲ for a high-performance architecture. Key computational and data movement kernels of the application were analyzed/optimized for parallel execution using the mapping G → G̲, which can be performed semi-automatically. This approach is widely applicable to many types of high-performance computing systems, such as graphics processing units or clusters comprised of nodes that contain one or more such units. Phase I Accomplishments - Phase I research decomposed/profiled computational particle dynamics simulation code for rocket fuel combustion into low and high computational cost regions (respectively, mainly sequential and mainly parallel kernels), with analysis of space and time complexity. Using the research team's expertise in algorithm-to-architecture mappings, the high-cost kernels were transformed, parallelized, and implemented on Nvidia Fermi GPUs. Measured speedups (GPU with respect to single-core CPU) were approximately 20-32X for realistic model parameters, without final optimization. Error analysis showed no loss of computational accuracy.
Commercial Applications and Other Benefits - The proposed research will constitute a breakthrough in solution of problems related to efficient parallel computation of particle and fluid dynamics simulations. These problems occur throughout DOE, military and commercial sectors: the potential payoff is high. We plan to license or sell the solution to contractors for military and domestic applications such as disaster simulation (aerodynamic and hydrodynamic), Government agencies (hydrological and environmental simulations), and medical applications (e.g., in tomographic image reconstruction). Keywords - High-performance Computing, Graphic Processing Unit, Fluid/Particle Simulation. Summary for Members of Congress - The Department of Energy has many simulation codes that must compute faster, to be effective. The Phase I research parallelized particle/fluid simulations for rocket combustion, for high-performance computing systems.
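The reported 20-32X kernel speedups translate into smaller overall gains governed by Amdahl's law, since the low-cost regions remain sequential. A one-line sketch of that bound (the fractions below are hypothetical, not measured from the Phase I code):

```python
def amdahl(parallel_fraction, kernel_speedup):
    """Overall speedup when only a fraction of the runtime is accelerated."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / kernel_speedup)

# e.g. if 95% of the runtime lives in GPU kernels sped up 25x,
# the whole application gains only ~11x, well below the kernel-level figure
overall = amdahl(0.95, 25.0)
```

This is why the profiling step that separates high- from low-cost regions matters: the sequential residue, not the kernels, ultimately caps the achievable speedup.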
NASA Astrophysics Data System (ADS)
Wang, Liping; Jiang, Yao; Li, Tiemin
2014-09-01
Parallel kinematic machines have drawn considerable attention and have been widely used in some special fields. However, high precision is still one of the challenges when they are used for advanced machine tools. One of the main reasons is that the kinematic chains of parallel kinematic machines are composed of elongated links that can easily suffer deformations, especially at high speeds and under heavy loads. A 3-RRR parallel kinematic machine is taken as a study object for investigating its accuracy with the consideration of the deformations of its links during the motion process. Based on the dynamic model constructed by the Newton-Euler method, all the inertia loads and constraint forces of the links are computed and their deformations are derived. Then the kinematic errors of the machine are derived with the consideration of the deformations of the links. Through further derivation, the accuracy of the machine is given in a simple explicit expression, which will be helpful to increase the calculating speed. The accuracy of this machine when following a selected circle path is simulated. The influences of magnitude of the maximum acceleration and external loads on the running accuracy of the machine are investigated. The results show that the external loads will deteriorate the accuracy of the machine tremendously when their direction coincides with the direction of the worst stiffness of the machine. The proposed method provides a solution for predicting the running accuracy of the parallel kinematic machines and can also be used in their design optimization as well as selection of suitable running parameters.
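The core quantity in the accuracy analysis above is the elastic deformation of each link under its computed constraint force. As a much-simplified illustration (axial loading of a single link only, with hypothetical numbers; the paper's Newton-Euler model also accounts for inertia loads and the full chain geometry):

```python
def axial_elongation(force, length, youngs_modulus, area):
    """delta = F*L/(E*A): elongation of a slender link under axial force."""
    return force * length / (youngs_modulus * area)

# hypothetical steel link: F = 1 kN, L = 0.5 m, E = 210 GPa, A = 1 cm^2
delta = axial_elongation(1e3, 0.5, 210e9, 1e-4)   # ~2.4e-5 m
```

Even this tens-of-micrometres figure exceeds the positioning tolerance of precision machine tools, which is why link deformation dominates the running accuracy at high accelerations and heavy loads.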
Program For Parallel Discrete-Event Simulation
NASA Technical Reports Server (NTRS)
Beckman, Brian C.; Blume, Leo R.; Geiselman, John S.; Presley, Matthew T.; Wedel, John J., Jr.; Bellenot, Steven F.; Diloreto, Michael; Hontalas, Philip J.; Reiher, Peter L.; Weiland, Frederick P.
1991-01-01
User does not have to add any special logic to aid in synchronization. Time Warp Operating System (TWOS) computer program is special-purpose operating system designed to support parallel discrete-event simulation. Complete implementation of Time Warp mechanism. Supports only simulations and other computations designed for virtual time. Time Warp Simulator (TWSIM) subdirectory contains sequential simulation engine interface-compatible with TWOS. TWOS and TWSIM written in, and support simulations in, C programming language.
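The heart of the Time Warp mechanism is optimistic execution with rollback: events are processed as they arrive, and a straggler (an event stamped in the local virtual past) forces the process to restore an earlier state and re-execute. A toy single-process sketch of that bookkeeping (TWOS itself is a distributed C system; the state and event types here are illustrative only):

```python
import copy

class OptimisticProcess:
    """Minimal Time-Warp-style process: logs state before each event and
    rolls back and replays when a straggler arrives."""

    def __init__(self):
        self.lvt = 0.0                 # local virtual time
        self.state = {"count": 0}
        self.log = []                  # (timestamp, event, state_before)

    def _execute(self, t, ev):
        self.log.append((t, ev, copy.deepcopy(self.state)))
        self.lvt = t
        self.state["count"] += ev

    def receive(self, t, ev):
        if t >= self.lvt:              # in timestamp order: just execute
            self._execute(t, ev)
            return
        replay = []                    # straggler: undo later-stamped events
        while self.log and self.log[-1][0] > t:
            rt, rev, before = self.log.pop()
            replay.append((rt, rev))
            self.state = before
        self.lvt = self.log[-1][0] if self.log else 0.0
        self._execute(t, ev)           # insert the straggler ...
        for rt, rev in reversed(replay):
            self._execute(rt, rev)     # ... then re-execute the undone events
```

Receiving events at virtual times 1 and 3 and then a straggler at 2 leaves the process in the same state as processing 1, 2, 3 in order, which is exactly the transparency the abstract claims: the user adds no synchronization logic.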
DynaSim: A MATLAB Toolbox for Neural Modeling and Simulation
Sherfey, Jason S.; Soplata, Austin E.; Ardid, Salva; Roberts, Erik A.; Stanley, David A.; Pittman-Polletta, Benjamin R.; Kopell, Nancy J.
2018-01-01
DynaSim is an open-source MATLAB/GNU Octave toolbox for rapid prototyping of neural models and batch simulation management. It is designed to speed up and simplify the process of generating, sharing, and exploring network models of neurons with one or more compartments. Models can be specified by equations directly (similar to XPP or the Brian simulator) or by lists of predefined or custom model components. The higher-level specification supports arbitrarily complex population models and networks of interconnected populations. DynaSim also includes a large set of features that simplify exploring model dynamics over parameter spaces, running simulations in parallel using both multicore processors and high-performance computer clusters, and analyzing and plotting large numbers of simulated data sets in parallel. It also includes a graphical user interface (DynaSim GUI) that supports full functionality without requiring user programming. The software has been implemented in MATLAB to enable advanced neural modeling using MATLAB, given its popularity and a growing interest in modeling neural systems. The design of DynaSim incorporates a novel schema for model specification to facilitate future interoperability with other specifications (e.g., NeuroML, SBML), simulators (e.g., NEURON, Brian, NEST), and web-based applications (e.g., Geppetto) outside MATLAB. DynaSim is freely available at http://dynasimtoolbox.org. This tool promises to reduce barriers for investigating dynamics in large neural models, facilitate collaborative modeling, and complement other tools being developed in the neuroinformatics community. PMID:29599715
Effective contaminant detection networks in uncertain groundwater flow fields.
Hudak, P F
2001-01-01
A mass transport simulation model tested seven contaminant detection-monitoring networks under a 40-degree range of groundwater flow directions. Each monitoring network contained five wells located 40 m from a rectangular landfill. The 40-m distance (lag) was measured in different directions, depending upon the strategy used to design a particular monitoring network. Lagging the wells parallel to the central flow path was more effective than alternative design strategies. Other strategies allowed higher percentages of leaks to migrate between monitoring wells. Results of this study suggest that centrally lagged groundwater monitoring networks perform most effectively in uncertain groundwater-flow fields.
Progress on 3-D ICF simulations and Ray-Traced Power Deposition Method
NASA Astrophysics Data System (ADS)
Schmitt, Andrew J.; Fyfe, David E.
2016-10-01
We have performed 3D simulations of Omega-scale and NIF-scale spherical direct-drive targets with the massively parallel
Direct Large-Scale N-Body Simulations of Planetesimal Dynamics
NASA Astrophysics Data System (ADS)
Richardson, Derek C.; Quinn, Thomas; Stadel, Joachim; Lake, George
2000-01-01
We describe a new direct numerical method for simulating planetesimal dynamics in which N ~ 10^6 or more bodies can be evolved simultaneously in three spatial dimensions over hundreds of dynamical times. This represents several orders of magnitude improvement in resolution over previous studies. The advance is made possible through modification of a stable and tested cosmological code optimized for massively parallel computers. However, owing to the excellent scalability and portability of the code, modest clusters of workstations can treat problems with N ~ 10^5 particles in a practical fashion. The code features algorithms for detection and resolution of collisions and takes into account the strong central force field and flattened Keplerian disk geometry of planetesimal systems. We demonstrate the range of problems that can be addressed by presenting simulations that illustrate oligarchic growth of protoplanets, planet formation in the presence of giant planet perturbations, the formation of the jovian moons, and orbital migration via planetesimal scattering. We also describe methods under development for increasing the timescale of the simulations by several orders of magnitude.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thornquist, Heidi K.; Fixel, Deborah A.; Fett, David Brian
The Xyce Parallel Electronic Simulator simulates electronic circuit behavior in DC, AC, HB, MPDE, and transient modes using standard analog (DAE) and/or device (PDE) device models, including several aging- and radiation-aware devices. It supports a variety of computing platforms, both serial and parallel. Lastly, it uses a variety of modern solution algorithms, including dynamic parallel load balancing and iterative solvers.
NASA Technical Reports Server (NTRS)
Bruno, John
1984-01-01
The results of an investigation into the feasibility of using the MPP for direct and large eddy simulations of the Navier-Stokes equations are presented. A major part of this study was devoted to the implementation of two of the standard numerical algorithms for CFD. These implementations were not run on the Massively Parallel Processor (MPP) since the machine delivered to NASA Goddard does not have sufficient capacity. Instead, a detailed implementation plan was designed, and from this plan were derived estimates of the time and space requirements of the algorithms on a suitably configured MPP. In addition, other issues related to the practical implementation of these algorithms on an MPP-like architecture were considered; namely, adaptive grid generation, zonal boundary conditions, the table lookup problem, and the software interface. Performance estimates show that the architectural components of the MPP, the Staging Memory and the Array Unit, appear to be well suited to the numerical algorithms of CFD. This, combined with the prospect of building a faster and larger MPP-like machine, holds the promise of achieving the sustained gigaflop rates that are required for numerical simulations in CFD.
Tutorial: Parallel Computing of Simulation Models for Risk Analysis.
Reilly, Allison C; Staid, Andrea; Gao, Michael; Guikema, Seth D
2016-10-01
Simulation models are widely used in risk analysis to study the effects of uncertainties on outcomes of interest in complex problems. Often, these models are computationally complex and time consuming to run. This latter point may be at odds with time-sensitive evaluations or may limit the number of parameters that are considered. In this article, we give an introductory tutorial focused on parallelizing simulation code to better leverage modern computing hardware, enabling risk analysts to better utilize simulation-based methods for quantifying uncertainty in practice. This article is aimed primarily at risk analysts who use simulation methods but do not yet utilize parallelization to decrease the computational burden of these models. The discussion is focused on conceptual aspects of embarrassingly parallel computer code and software considerations. Two complementary examples are shown using the languages MATLAB and R. A brief discussion of hardware considerations is located in the Appendix. © 2016 Society for Risk Analysis.
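The embarrassingly parallel pattern this tutorial describes (independent simulation replicates mapped across worker processes with no inter-process communication) can be sketched outside the article's MATLAB and R examples as well. The following Python sketch uses a toy risk model with made-up parameters, purely for illustration:

```python
# Hedged sketch of the embarrassingly parallel pattern from the tutorial,
# in Python rather than the article's MATLAB/R. The toy "risk model" and
# its parameters are invented for illustration only.
import random
from multiprocessing import Pool

def one_replicate(seed):
    """One Monte Carlo replicate; fully independent of all other replicates."""
    rng = random.Random(seed)
    # Toy loss model: sum of 1000 uncertain components, each ~N(1.0, 0.3).
    return sum(rng.gauss(1.0, 0.3) for _ in range(1000))

def run_parallel(n_replicates, n_workers=4):
    # Because replicates share no state, Pool.map distributes them freely.
    with Pool(n_workers) as pool:
        losses = pool.map(one_replicate, range(n_replicates))
    return sum(losses) / len(losses)  # mean simulated loss

if __name__ == "__main__":
    print(round(run_parallel(100), 1))  # mean loss, close to 1000
```

The design point is the one the tutorial stresses: parallel speedup here comes entirely from the independence of replicates, not from any communication-aware algorithm.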
NASA Astrophysics Data System (ADS)
Liakos, Anastasios; Malamataris, Nikolaos A.
2014-05-01
The topology and evolution of flow around a surface mounted cubical object in three dimensional channel flow is examined for low to moderate Reynolds numbers. Direct numerical simulations were performed via a home made parallel finite element code. The computational domain has been designed according to actual laboratory experiment conditions. Analysis of the results is performed using the three dimensional theory of separation. Our findings indicate that a tornado-like vortex by the side of the cube is present for all Reynolds numbers for which flow was simulated. A horseshoe vortex upstream from the cube was formed at Reynolds number approximately 1266. Pressure distributions are shown along with three dimensional images of the tornado-like vortex and the horseshoe vortex at selected Reynolds numbers. Finally, and in accordance to previous work, our results indicate that the upper limit for the Reynolds number for which steady state results are physically realizable is roughly 2000.
NASA Astrophysics Data System (ADS)
Goyal, Abheeti; Toschi, Federico; van der Schoot, Paul
2017-11-01
We study the morphological evolution and dynamics of phase separation of a multi-component mixture in a thin film constrained by a substrate. Specifically, we explore surface-directed spinodal decomposition of a multicomponent mixture numerically using free-energy lattice Boltzmann (LB) simulations. The distinguishing feature of this model over the Shan-Chen (SC) model is that we have explicit and independent control over the free energy functional and the equation of state (EoS) of the system. This vastly expands the range of physical systems that can be realistically simulated by LB simulations. We investigate the effect of composition, film thickness, and substrate wetting on the phase morphology and the mechanism of growth in the vicinity of the substrate. The phase morphology and averaged domain size near the substrate fluctuate greatly, in both the parallel and perpendicular directions, due to wetting of the substrate. Additionally, we describe how the model presented here can be extended to include an arbitrary number of fluid components.
NASA Astrophysics Data System (ADS)
Thompson, Derek S.; Keniley, Shane; Curreli, Davide; Henriquez, Miguel F.; Caron, David D.; Jemiolo, Andrew J.; McLaughlin, Jacob W.; Dufor, Mikal T.; Neal, Luke A.; Scime, Earl E.; Siddiqui, M. Umair
2017-10-01
We present progress toward the first paired 3D laser induced fluorescence measurements of ion and neutral velocity distribution functions (I/NVDFs) in a magnetized plasma boundary. These measurements are performed in the presheath region of an absorbing boundary immersed in a background magnetic field that is obliquely incident to the boundary surface (ψ = 74°). Parallel and perpendicular flow measurements demonstrate that cross-field ion flows occur and that ions within several gyro-radii of the surface are accelerated in the E × B direction. We present electrostatic probe measurements of electron temperature, plasma density, and electric potential in the same region. Ion, neutral, and electron measurements are compared to Boltzmann simulations, allowing direct comparison between measured and theoretical distribution functions in the boundary region. NSF PHYS 1360278.
Real-time electron dynamics for massively parallel excited-state simulations
NASA Astrophysics Data System (ADS)
Andrade, Xavier
The simulation of the real-time dynamics of electrons, based on time dependent density functional theory (TDDFT), is a powerful approach to study electronic excited states in molecular and crystalline systems. What makes the method attractive is its flexibility to simulate different kinds of phenomena beyond the linear-response regime, including strongly-perturbed electronic systems and non-adiabatic electron-ion dynamics. Electron-dynamics simulations are also attractive from a computational point of view. They can run efficiently on massively parallel architectures due to the low communication requirements. Our implementations of electron dynamics, based on the codes Octopus (real-space) and Qball (plane-waves), allow us to simulate systems composed of thousands of atoms and to obtain good parallel scaling up to 1.6 million processor cores. Due to the versatility of real-time electron dynamics and its parallel performance, we expect it to become the method of choice to apply the capabilities of exascale supercomputers for the simulation of electronic excited states.
Discontinuous Galerkin Methods and High-Speed Turbulent Flows
NASA Astrophysics Data System (ADS)
Atak, Muhammed; Larsson, Johan; Munz, Claus-Dieter
2014-11-01
Discontinuous Galerkin methods gain increasing importance within the CFD community as they combine arbitrary high order of accuracy in complex geometries with parallel efficiency. Particularly the discontinuous Galerkin spectral element method (DGSEM) is a promising candidate for both the direct numerical simulation (DNS) and large eddy simulation (LES) of turbulent flows due to its excellent scaling attributes. In this talk, we present a DNS of a compressible turbulent boundary layer along a flat plate at a free-stream Mach number of M = 2.67 and assess the computational efficiency of the DGSEM at performing high-fidelity simulations of both transitional and turbulent boundary layers. We compare the accuracy of the results as well as the computational performance to results using a high order finite difference method.
Schwerdtfeger, Peter; Smits, Odile; Pahl, Elke; Jerabek, Paul
2018-06-12
State-of-the-art relativistic coupled-cluster theory is used to construct many-body potentials for the rare gas element radon in order to determine its bulk properties, including the solid-to-liquid phase transition, from parallel tempering Monte Carlo simulations through either direct sampling of the bulk or a finite cluster approach. The calculated melting temperatures are 201(3) K from bulk simulations and 201(6) K from extrapolation of finite cluster values. This is in excellent agreement with the often debated (but widely cited) and only available value of 202 K, dating back to measurements by Gray and Ramsay in 1909. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
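Parallel tempering Monte Carlo, the sampling method named in the abstract above, periodically attempts to exchange replicas held at different temperatures. The swap is accepted with the standard Metropolis probability min(1, exp[(1/T_i − 1/T_j)(E_i − E_j)]) (with k_B = 1). This sketch shows only that generic acceptance step, not the paper's simulation; the energies and temperatures in the demo are invented:

```python
# Generic replica-swap acceptance step of parallel tempering Monte Carlo.
# This is a textbook sketch, not the paper's code; k_B is set to 1.
import math
import random

def swap_accept(E_i, E_j, T_i, T_j, rng=random):
    """Metropolis test for exchanging replicas at temperatures T_i and T_j."""
    delta = (1.0 / T_i - 1.0 / T_j) * (E_i - E_j)
    # Always accept when delta >= 0; otherwise accept with prob exp(delta).
    return delta >= 0 or rng.random() < math.exp(delta)

# Illustrative values: the hotter replica (T_j = 2) holds the lower energy,
# so delta = (1 - 0.5) * (10 - 5) = 2.5 > 0 and the swap is always accepted.
print(swap_accept(E_i=10.0, E_j=5.0, T_i=1.0, T_j=2.0))  # -> True
```

Swaps like this let a replica trapped in a local minimum at low temperature escape via the high-temperature chain, which is why the method is well suited to locating melting transitions.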
Reversible Parallel Discrete-Event Execution of Large-scale Epidemic Outbreak Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perumalla, Kalyan S; Seal, Sudip K
2010-01-01
The spatial scale, runtime speed and behavioral detail of epidemic outbreak simulations together require the use of large-scale parallel processing. In this paper, an optimistic parallel discrete event execution of a reaction-diffusion simulation model of epidemic outbreaks is presented, with an implementation over the µsik simulator. Rollback support is achieved with the development of a novel reversible model that combines reverse computation with a small amount of incremental state saving. Parallel speedup and other runtime performance metrics of the simulation are tested on a small (8,192-core) Blue Gene/P system, while scalability is demonstrated on 65,536 cores of a large Cray XT5 system. Scenarios representing large population sizes (up to several hundred million individuals in the largest case) are exercised.
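The rollback scheme described above, reverse computation combined with a small amount of incremental state saving, can be illustrated in miniature: invertible updates (addition) are undone arithmetically, while the non-invertible part (a running maximum) is restored from a value saved before each event. This is an invented toy, not the paper's model:

```python
# Toy illustration (not the paper's code) of reverse computation plus
# incremental state saving for optimistic parallel discrete-event rollback.
class ReversibleCell:
    def __init__(self):
        self.infected = 0       # invertible state: undone arithmetically
        self.peak = 0           # non-invertible state: needs saving
        self._saved_peaks = []  # incremental state-saving stack

    def forward(self, new_cases):
        # Save only what reverse computation cannot reconstruct.
        self._saved_peaks.append(self.peak)
        self.infected += new_cases
        self.peak = max(self.peak, self.infected)

    def reverse(self, new_cases):
        self.infected -= new_cases             # reverse computation
        self.peak = self._saved_peaks.pop()    # restore saved value

cell = ReversibleCell()
cell.forward(5)
cell.forward(3)
cell.reverse(3)  # rollback of the second event
print(cell.infected, cell.peak)  # -> 5 5
```

The memory saved per event is one integer rather than a full state snapshot, which is the point of combining the two techniques at scale.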
Large eddy simulations and direct numerical simulations of high speed turbulent reacting flows
NASA Technical Reports Server (NTRS)
Givi, P.; Frankel, S. H.; Adumitroaie, V.; Sabini, G.; Madnia, C. K.
1993-01-01
The primary objective of this research is to extend current capabilities of Large Eddy Simulations (LES) and Direct Numerical Simulations (DNS) for the computational analyses of high speed reacting flows. Our efforts in the first two years of this research have been concentrated on a priori investigations of single-point Probability Density Function (PDF) methods for providing subgrid closures in reacting turbulent flows. In the efforts initiated in the third year, our primary focus has been on performing actual LES by means of PDF methods. The approach is based on assumed PDF methods and we have performed extensive analysis of turbulent reacting flows by means of LES. This includes simulations of both three-dimensional (3D) isotropic compressible flows and two-dimensional reacting planar mixing layers. In addition to these LES analyses, some work is in progress to assess the extent of validity of our assumed PDF methods. This assessment is done by making detailed comparisons with recent laboratory data in predicting the rate of reactant conversion in parallel reacting shear flows. This report provides a summary of our achievements for the first six months of the third year of this program.
Analysis of a Twyman-Green detection system based on polarization interference
NASA Astrophysics Data System (ADS)
Huang, Yaolin; Wang, Min; Shao, Xiaoping; Kou, Yuanfeng
2018-02-01
The optical surface deviation of a lens directly affects the quality of the optical system. In order to detect the surface shape effectively and accurately, an on-line optical surface detection system based on polarization interference technology is designed and developed. The system is based on a Twyman-Green interference optical path combined with polarization interference measurement. Following a theoretical derivation of the optical path and simulation in the ZEMAX software, the experimental optical path is constructed, and parallel light is used as the source to test a concave lens. The relations among the size of the polarization splitting prism, the detected radius of curvature, and the lens aperture are derived, and a detection range is given.
Computational methods and software systems for dynamics and control of large space structures
NASA Technical Reports Server (NTRS)
Park, K. C.; Felippa, C. A.; Farhat, C.; Pramono, E.
1990-01-01
This final report on computational methods and software systems for dynamics and control of large space structures covers progress to date, projected developments in the final months of the grant, and conclusions. Pertinent reports and papers that have not appeared in scientific journals (or have not yet appeared in final form) are enclosed. The grant has supported research in two key areas of crucial importance to the computer-based simulation of large space structure. The first area involves multibody dynamics (MBD) of flexible space structures, with applications directed to deployment, construction, and maneuvering. The second area deals with advanced software systems, with emphasis on parallel processing. The latest research thrust in the second area, as reported here, involves massively parallel computers.
Hybrid simulations of radial transport driven by the Rayleigh-Taylor instability
NASA Astrophysics Data System (ADS)
Delamere, P. A.; Stauffer, B. H.; Ma, X.
2017-12-01
Plasma transport in the rapidly rotating giant magnetospheres is thought to involve a centrifugally-driven flux tube interchange instability, similar to the Rayleigh-Taylor (RT) instability. In three dimensions, the convective flow patterns associated with the RT instability can produce strong guide field reconnection, allowing plasma mass to move radially outward while conserving magnetic flux (Ma et al., 2016). We present a set of hybrid (kinetic ion / fluid electron) plasma simulations of the RT instability using high plasma beta conditions appropriate for Jupiter's inner and middle magnetosphere. A density gradient, combined with a centrifugal force, provides appropriate RT onset conditions. Pressure balance is achieved by initializing two ion populations: one with fixed temperature, but varying density, and the other with fixed density, but a temperature gradient that offsets the density gradient from the first population and the centrifugal force (effective gravity). We first analyze two-dimensional results for the plane perpendicular to the magnetic field by comparing growth rates as a function of wave vector following Huba et al. (1998). Prescribed perpendicular wave modes are seeded with an initial velocity perturbation. We then extend the model to three dimensions, introducing a stabilizing parallel wave vector. Boundary conditions in the parallel direction prohibit motion of the magnetic field line footprints to model the eigenmodes of the magnetodisc's resonant cavity. We again compare growth rates based on perpendicular wave number, but also on the parallel extent of the resonant cavity, which fixes the size of the largest parallel wavelength. Finally, we search for evidence of strong guide field magnetic reconnection within the domain by identifying areas with large parallel electric fields or changes in magnetic field topology.
NASA Technical Reports Server (NTRS)
West, Jeff; Yang, H. Q.
2014-01-01
There are many instances involving liquid/gas interfaces and their dynamics in the design of liquid engine powered rockets such as the Space Launch System (SLS). Some examples of these applications are: Propellant tank draining and slosh, subcritical condition injector analysis for gas generators, preburners and thrust chambers, water deluge mitigation for launch induced environments and even solid rocket motor liquid slag dynamics. Commercially available CFD programs simulating gas/liquid interfaces using the Volume of Fluid approach are currently limited in their parallel scalability. In 2010 for instance, an internal NASA/MSFC review of three commercial tools revealed that parallel scalability was seriously compromised at 8 cpus and no additional speedup was possible after 32 cpus. Other non-interface CFD applications at the time were demonstrating useful parallel scalability up to 4,096 processors or more. Based on this review, NASA/MSFC initiated an effort to implement a Volume of Fluid implementation within the unstructured mesh, pressure-based algorithm CFD program, Loci-STREAM. After verification was achieved by comparing results to the commercial CFD program CFD-Ace+, and validation by direct comparison with data, Loci-STREAM-VoF is now the production CFD tool for propellant slosh force and slosh damping rate simulations at NASA/MSFC. On these applications, good parallel scalability has been demonstrated for problem sizes of tens of millions of cells and thousands of cpu cores. Ongoing efforts are focused on the application of Loci-STREAM-VoF to predict the transient flow patterns of water on the SLS Mobile Launch Platform in order to support the phasing of water for launch environment mitigation so that detrimental effects on the vehicle are not realized.
Modulated heat pulse propagation and partial transport barriers in chaotic magnetic fields
del-Castillo-Negrete, Diego; Blazevski, Daniel
2016-04-01
Direct numerical simulations of the time dependent parallel heat transport equation modeling heat pulses driven by power modulation in 3-dimensional chaotic magnetic fields are presented. The numerical method is based on the Fourier formulation of a Lagrangian-Green's function method that provides an accurate and efficient technique for the solution of the parallel heat transport equation in the presence of harmonic power modulation. The numerical results presented provide conclusive evidence that even in the absence of magnetic flux surfaces, chaotic magnetic field configurations with intermediate levels of stochasticity exhibit transport barriers to modulated heat pulse propagation. In particular, high-order islands and remnants of destroyed flux surfaces (Cantori) act as partial barriers that slow down or even stop the propagation of heat waves at places where the magnetic field connection length exhibits a strong gradient. The key parameter is $$\gamma=\sqrt{\omega/2\chi_\parallel}$$, which determines the length scale, $$1/\gamma$$, of the heat wave penetration along the magnetic field line. For large perturbation frequencies, $$\omega \gg 1$$, or small parallel thermal conductivities, $$\chi_\parallel \ll 1$$, parallel heat transport is strongly damped and the magnetic field partial barriers act as robust barriers where the heat wave amplitude vanishes and its phase speed slows down to a halt. On the other hand, in the limit of small $$\gamma$$, parallel heat transport is largely unimpeded, global transport is observed, and the radial amplitude and phase speed of the heat wave remain finite. Results on modulated heat pulse propagation in fully stochastic fields and across magnetic islands are also presented. In qualitative agreement with recent experiments in LHD and DIII-D, it is shown that the elliptic (O) and hyperbolic (X) points of magnetic islands have a direct impact on the spatio-temporal dependence of the amplitude and the time delay of modulated heat pulses.
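The two limits discussed in the abstract above can be made concrete by evaluating the key parameter γ = sqrt(ω / (2 χ∥)) and the penetration length 1/γ. The numerical values of ω and χ∥ below are made up for illustration, in the same nondimensional units as the abstract:

```python
# Numerical illustration of gamma = sqrt(omega / (2 * chi_parallel)) from the
# abstract above; the omega and chi_parallel values are illustrative only.
import math

def gamma(omega, chi_parallel):
    return math.sqrt(omega / (2.0 * chi_parallel))

def penetration_length(omega, chi_parallel):
    # Length scale 1/gamma of heat-wave penetration along the field line.
    return 1.0 / gamma(omega, chi_parallel)

# High modulation frequency: short penetration, so partial barriers
# (high-order islands, Cantori) act as robust barriers.
print(round(penetration_length(omega=100.0, chi_parallel=1.0), 2))  # -> 0.14
# Low frequency: long penetration, so parallel transport is largely
# unimpeded and global transport is observed.
print(round(penetration_length(omega=0.01, chi_parallel=1.0), 1))   # -> 14.1
```

The four-orders-of-magnitude change in ω shifts the penetration length by a factor of 100, matching the scaling 1/γ ∝ ω^(-1/2).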
A NUMERICAL SIMULATION OF COSMIC RAY MODULATION NEAR THE HELIOPAUSE. II. SOME PHYSICAL INSIGHTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, Xi; Feng, Xueshang; Potgieter, Marius S.
Cosmic ray (CR) transport near the heliopause (HP) is studied using a hybrid transport model, with the parameters constrained by observations from the Voyager 1 spacecraft. We simulate the CR radial flux along different directions in the heliosphere. There is no well-defined thin layer between the solar wind region and the interstellar region along the tail and polar directions of the heliosphere. By analyzing the radial flux curve along the direction of Voyager 2, together with its trajectory information, the crossing time of the HP by Voyager 2 is predicted to be in 2017.14. We simulate the CR radial flux for different energy values along the direction of Voyager 1. We find that there is only a modest modulation region of about 10 au wide beyond the HP, so that Voyager 1 observing the Local Interstellar Spectra is justified in numerical modeling. We analyze the heliospheric exit information of pseudo-particles in our stochastic numerical (time-backward) method, conjecturing that they represent the behavior of CR particles, and we find that pseudo-particles that have been traced from the nose region exit in the tail region. This implies that many CR particles diffuse directly from the heliospheric tail region to the nose region near the HP. In addition, when pseudo-particles were traced from the Local Interstellar Medium (LISM), it is found that their exit location (entrance for real particles) from the simulation domain is along the prescribed Interstellar Magnetic Field direction. This indicates that parallel diffusion dominates CR particle transport in the LISM.
NASA Technical Reports Server (NTRS)
Venkatachari, Balaji Shankar; Streett, Craig L.; Chang, Chau-Lyan; Friedlander, David J.; Wang, Xiao-Yen; Chang, Sin-Chung
2016-01-01
Despite decades of development of unstructured mesh methods, high-fidelity time-accurate simulations are still predominantly carried out on structured or unstructured hexahedral meshes by using high-order finite-difference, weighted essentially non-oscillatory (WENO), or hybrid schemes formed by their combinations. In this work, the space-time conservation element solution element (CESE) method is used to simulate several flow problems, including supersonic jet/shock interaction and its impact on launch vehicle acoustics, and direct numerical simulations of turbulent flows using tetrahedral meshes. This paper provides a status report for the continuing development of the CESE numerical and software framework under the Revolutionary Computational Aerosciences (RCA) project. Solution accuracy and large-scale parallel performance of the numerical framework are assessed with the goal of providing a viable paradigm for future high-fidelity flow physics simulations.
Performance Analysis of an Actor-Based Distributed Simulation
NASA Technical Reports Server (NTRS)
Schoeffler, James D.
1998-01-01
Object-oriented design of simulation programs appears to be very attractive because of the natural association of components in the simulated system with objects. There is great potential in distributing the simulation across several computers for the purpose of parallel computation and its consequent handling of larger problems in less elapsed time. One approach to such a design is to use "actors", that is, active objects with their own thread of control. Because these objects execute concurrently, communication is via messages. This is in contrast to an object-oriented design using passive objects where communication between objects is via method calls (direct calls when they are in the same address space and remote procedure calls when they are in different address spaces or different machines). This paper describes a performance analysis program for the evaluation of a design for distributed simulations based upon actors.
Modelling and simulation of parallel triangular triple quantum dots (TTQD) by using SIMON 2.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fathany, Maulana Yusuf, E-mail: myfathany@gmail.com; Fuada, Syifaul, E-mail: fsyifaul@gmail.com; Lawu, Braham Lawas, E-mail: bram-labs@rocketmail.com
2016-04-19
This research presents a modeling analysis of Parallel Triple Quantum Dots (TQD) using SIMON (SIMulation Of Nano-structures). The Single Electron Transistor (SET) is used as the basic concept of the modeling. We design the structure of the parallel TQD in metal with a triangular geometry, called Triangular Triple Quantum Dots (TTQD), and simulate several scenarios with different parameters, such as different values of capacitance, various gate voltages, and different thermal conditions.
Parallel-Processing Test Bed For Simulation Software
NASA Technical Reports Server (NTRS)
Blech, Richard; Cole, Gary; Townsend, Scott
1996-01-01
Second-generation Hypercluster computing system is multiprocessor test bed for research on parallel algorithms for simulation in fluid dynamics, electromagnetics, chemistry, and other fields with large computational requirements but relatively low input/output requirements. Built from standard, off-the-shelf hardware readily upgraded as improved technology becomes available. System used for experiments with such parallel-processing concepts as message-passing algorithms, debugging software tools, and computational steering. First-generation Hypercluster system described in "Hypercluster Parallel Processor" (LEW-15283).
A parallel finite element simulator for ion transport through three-dimensional ion channel systems.
Tu, Bin; Chen, Minxin; Xie, Yan; Zhang, Linbo; Eisenberg, Bob; Lu, Benzhuo
2013-09-15
A parallel finite element simulator, ichannel, is developed for ion transport through three-dimensional ion channel systems that consist of protein and membrane. The coordinates of heavy atoms of the protein are taken from the Protein Data Bank and the membrane is represented as a slab. The simulator contains two components: a parallel adaptive finite element solver for a set of Poisson-Nernst-Planck (PNP) equations that describe the electrodiffusion process of ion transport, and a mesh generation tool chain for ion channel systems, which is an essential component for the finite element computations. The finite element method has advantages in modeling irregular geometries and complex boundary conditions. We have built a tool chain to get the surface and volume mesh for ion channel systems, which consists of a set of mesh generation tools. The adaptive finite element solver in our simulator is implemented using the parallel adaptive finite element package Parallel Hierarchical Grid (PHG) developed by one of the authors, which provides the capability of doing large scale parallel computations with high parallel efficiency and the flexibility of choosing high order elements to achieve high order accuracy. The simulator is applied to a real transmembrane protein, the gramicidin A (gA) channel protein, to calculate the electrostatic potential, ion concentrations and I-V curve, with which both primitive and transformed PNP equations are studied and their numerical performances are compared. To further validate the method, we also apply the simulator to two other ion channel systems, the voltage dependent anion channel (VDAC) and α-Hemolysin (α-HL). The simulation results agree well with Brownian dynamics (BD) simulation results and experimental results. Moreover, because ionic finite size effects can be included in the PNP model now, we also perform simulations using a size-modified PNP (SMPNP) model on VDAC and α-HL.
It is shown that the size effects in SMPNP can effectively lead to reduced current in the channel, and the results are closer to BD simulation results. Copyright © 2013 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Yang, Liping; Zhang, Lei; He, Jiansen; Tu, Chuanyi; Li, Shengtai; Wang, Xin; Wang, Linghua
2018-03-01
Multi-order structure functions in the solar wind are reported to display a monofractal scaling when sampled parallel to the local magnetic field and a multifractal scaling when measured perpendicularly. To what extent is this scaling anisotropy weakened when the turbulence amplitude is enhanced relative to the background magnetic field strength? In this study, based on two runs of magnetohydrodynamic (MHD) turbulence simulation with different relative levels of turbulence amplitude, we investigate and compare the scaling of multi-order magnetic structure functions and magnetic probability distribution functions (PDFs), as well as their dependence on the direction of the local field. The numerical results show that for the case of large-amplitude MHD turbulence, the multi-order structure functions display a multifractal scaling at all angles to the local magnetic field, with PDFs deviating significantly from the Gaussian distribution and a flatness larger than 3 at all angles. In contrast, for the case of small-amplitude MHD turbulence, the multi-order structure functions and PDFs have different features in the quasi-parallel and quasi-perpendicular directions: a monofractal scaling and Gaussian-like distribution in the former, and a conversion of a monofractal scaling and Gaussian-like distribution into a multifractal scaling and non-Gaussian tail distribution in the latter. These results hint that when intermittencies are abundant and intense, multifractal scaling in the structure functions can appear even in the quasi-parallel direction; otherwise, monofractal scaling in the structure functions remains even in the quasi-perpendicular direction.
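The scaling diagnostics described above can be sketched in a few lines; a minimal illustration (not the authors' code) computing multi-order structure functions S_q(l) = ⟨|b(x+l) − b(x)|^q⟩ and the flatness S_4/S_2², which equals 3 for Gaussian increments:

```python
import numpy as np

def structure_functions(b, lags, orders):
    """S_q(l) = <|b(x + l) - b(x)|^q> for each lag l and order q."""
    return {q: np.array([np.mean(np.abs(b[l:] - b[:-l]) ** q) for l in lags])
            for q in orders}

def flatness(b, lags):
    """Flatness (kurtosis of increments) S_4 / S_2^2; equals 3 for Gaussian increments."""
    s = structure_functions(b, lags, orders=(2, 4))
    return s[4] / s[2] ** 2

# Brownian-like test signal: its lag-1 increments are exactly Gaussian,
# so the flatness should be close to 3.
rng = np.random.default_rng(0)
b = np.cumsum(rng.standard_normal(200_000))
f = flatness(b, lags=[1])
```

For a monofractal signal the exponents ζ(q) fitted from S_q(l) ∝ l^ζ(q) are linear in q; curvature in ζ(q) signals multifractality, and a flatness growing above 3 at small lags signals intermittency.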
Automatic Multilevel Parallelization Using OpenMP
NASA Technical Reports Server (NTRS)
Jin, Hao-Qiang; Jost, Gabriele; Yan, Jerry; Ayguade, Eduard; Gonzalez, Marc; Martorell, Xavier; Biegel, Bryan (Technical Monitor)
2002-01-01
In this paper we describe the extension of the CAPO parallelization support tool to support multilevel parallelism based on OpenMP directives. CAPO generates OpenMP directives with extensions supported by the NanosCompiler to allow for directive nesting and definition of thread groups. We report first results for several benchmark codes and one full application that have been parallelized using our system.
NASA Astrophysics Data System (ADS)
Shoemaker, C. A.; Pang, M.; Akhtar, T.; Bindel, D.
2016-12-01
New parallel surrogate global optimization algorithms are developed and applied to objective functions that are expensive simulations (possibly with multiple local minima). The algorithms can be applied to most geophysical simulations, including those with nonlinear partial differential equations. The optimization does not require that the simulations themselves be parallelized. Asynchronous (and synchronous) parallel execution is available in the optimization toolbox "pySOT". The parallel algorithms are modified from their serial versions to eliminate fine-grained parallelism. The optimization is computed with the open-source software pySOT, a Surrogate Global Optimization Toolbox that allows the user to pick the type of surrogate (or ensembles), the search procedure on the surrogate, and the type of parallelism (synchronous or asynchronous). pySOT also allows the user to develop new algorithms by modifying parts of the code. In the applications here, the objective function takes up to 30 minutes for one simulation, and serial optimization can take over 200 hours. Results from the Yellowstone (NSF) and NCSS (Singapore) supercomputers are given for groundwater contaminant hydrology simulations with applications to model parameter estimation and decontamination management. All results are compared with alternatives. The first results are for optimization of pumping at many wells to reduce the cost of decontamination of groundwater at a superfund site. The optimization runs with up to 128 processors. Superlinear speedup is obtained for up to 16 processors, and efficiency with 64 processors is over 80%. Each evaluation of the objective function requires the solution of nonlinear partial differential equations to describe the impact of spatially distributed pumping and model parameters on model predictions for the spatial and temporal distribution of groundwater contaminants. The second application uses asynchronous parallel global optimization for groundwater quality model calibration. The time for a single objective function evaluation varies unpredictably, so asynchronous parallel evaluation is used to improve load balancing and efficiency. The third application (done at NCSS) incorporates new global surrogate multi-objective parallel search algorithms into pySOT and applies them to a large watershed calibration problem.
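The asynchronous strategy described above — refit the surrogate and launch a new evaluation the moment any worker finishes, rather than waiting for a whole batch — can be sketched generically. This is not the pySOT API; the objective, kernel width, and candidate-sampling rule are illustrative assumptions:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

def expensive_objective(x):
    # Stand-in for a long-running simulation (hypothetical test function).
    return float(np.sum((x - 0.3) ** 2))

def rbf_fit(X, y):
    # Interpolating Gaussian RBF surrogate: solve K w = y
    # (diagonal jitter keeps K positive definite).
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-10.0 * d2) + 1e-6 * np.eye(len(X))
    return np.linalg.solve(K, y)

def rbf_predict(X, w, xq):
    d2 = np.sum((xq[None, :] - X) ** 2, axis=-1)
    return float(np.exp(-10.0 * d2) @ w)

def async_surrogate_opt(dim=2, n_init=8, max_evals=40, workers=4, seed=1):
    rng = np.random.default_rng(seed)
    X, y = [], []
    with ThreadPoolExecutor(max_workers=workers) as pool:
        pending = {}
        for _ in range(n_init):
            x = rng.random(dim)
            pending[pool.submit(expensive_objective, x)] = x
        submitted = n_init
        while pending:
            # Asynchronous step: react as soon as ANY evaluation finishes.
            done, _ = wait(pending, return_when=FIRST_COMPLETED)
            for fut in done:
                X.append(pending.pop(fut))
                y.append(fut.result())
            # Refill idle workers immediately with surrogate-suggested points.
            while submitted < max_evals and len(pending) < workers:
                w = rbf_fit(np.array(X), np.array(y))
                cand = rng.random((200, dim))
                preds = [rbf_predict(np.array(X), w, c) for c in cand]
                xn = cand[int(np.argmin(preds))]
                pending[pool.submit(expensive_objective, xn)] = xn
                submitted += 1
    return np.array(X), np.array(y), min(y)
```

Because no worker ever idles waiting for the slowest member of a batch, this scheme improves load balancing exactly when evaluation times vary unpredictably, as in the calibration application above.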
2012-10-01
using the open-source code Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) (http://lammps.sandia.gov) (23). The commercial...parameters are proprietary and cannot be ported to the LAMMPS simulation code. In our molecular dynamics simulations at the atomistic resolution, we...IBI, iterative Boltzmann inversion; LAMMPS, Large-scale Atomic/Molecular Massively Parallel Simulator; MAPS, Materials Processes and Simulations; MS
DOE Office of Scientific and Technical Information (OSTI.GOV)
2015-10-20
The look-ahead dynamic simulation software system incorporates high-performance parallel computing technologies, significantly reduces the solution time for each transient simulation case, and brings dynamic simulation analysis into on-line applications to enable more transparency for better reliability and asset utilization. It takes a snapshot of the current power grid status, performs the system dynamic simulation in parallel, and outputs the transient response of the power system in real time.
Active control of lateral leakage in thin-ridge SOI waveguide structures
NASA Astrophysics Data System (ADS)
Dalvand, Naser; Nguyen, Thach G.; Tummidi, Ravi S.; Koch, Thomas L.; Mitchell, Arnan
2011-12-01
We report on the design and simulation of novel Silicon-On-Insulator waveguide structures which, when excited with TM guided light, emit TE-polarized radiation with controlled radiation characteristics [1]. The structures utilize parallel leaky waveguides of specific separations. The structures are simulated using a full-vector mode-matching approach which allows visualisation of the evolution of the propagating and radiating fields over the length of the waveguide structure. It is shown that radiation can be resonantly enhanced or suppressed in different directions depending on the choice of the phase of the excitation of the waveguide components. Steps toward practical demonstration are identified.
NASA Technical Reports Server (NTRS)
Ellison, D. C.; Jones, F. C.; Eichler, D.
1981-01-01
A collisionless quasi-parallel shock is simulated by Monte Carlo techniques. Scattering of particles of all velocities, from thermal to high energy, is assumed to occur such that the mean free path is directly proportional to velocity times the mass-to-charge ratio, and inversely proportional to the plasma density. The shock profile and velocity spectra are obtained, showing preferential acceleration of high-A/Z particles relative to protons. The inclusion of the back pressure of the scattering particles on the inflowing plasma produces a smoothing of the shock profile, which implies that the spectra are steeper than for a discontinuous shock.
A wavelet approach to binary blackholes with asynchronous multitasking
NASA Astrophysics Data System (ADS)
Lim, Hyun; Hirschmann, Eric; Neilsen, David; Anderson, Matthew; Debuhr, Jackson; Zhang, Bo
2016-03-01
Highly accurate simulations of binary black holes and neutron stars are needed to address a variety of interesting problems in relativistic astrophysics. We present a new method for solving the Einstein equations (BSSN formulation) using iterated interpolating wavelets. Wavelet coefficients provide a direct measure of the local approximation error for the solution and place collocation points that naturally adapt to features of the solution. Further, they exhibit exponential convergence on unevenly spaced collocation points. The parallel implementation of the wavelet simulation framework presented here deviates from conventional practice in combining multi-threading with a form of message-driven computation sometimes referred to as asynchronous multitasking.
High performance Python for direct numerical simulations of turbulent flows
NASA Astrophysics Data System (ADS)
Mortensen, Mikael; Langtangen, Hans Petter
2016-06-01
Direct Numerical Simulation (DNS) of the Navier-Stokes equations is an invaluable research tool in fluid dynamics. Still, there are few publicly available research codes and, due to the heavy number crunching implied, available codes are usually written in low-level languages such as C/C++ or Fortran. In this paper we describe a pure scientific Python pseudo-spectral DNS code that nearly matches the performance of C++ for thousands of processors and billions of unknowns. We also describe a version optimized through Cython, which is found to match the speed of C++. The solvers are written from scratch in Python, including the mesh, the MPI domain decomposition, and the temporal integrators. The solvers have been verified and benchmarked on the Shaheen supercomputer at the KAUST supercomputing laboratory, and we are able to show very good scaling up to several thousand cores. A very important part of the implementation is the mesh decomposition (we implement both slab and pencil decompositions) and the 3D parallel Fast Fourier Transforms (FFT). The mesh decomposition and FFT routines have been implemented in Python using serial FFT routines (either NumPy, pyFFTW or any other serial FFT module), NumPy array manipulations, and with MPI communications handled by MPI for Python (mpi4py). We show how we are able to execute a 3D parallel FFT in Python for a slab mesh decomposition using 4 lines of compact Python code, for which the parallel performance on Shaheen is found to be slightly better than similar routines provided through the FFTW library. For a pencil mesh decomposition, 7 lines of code are required to execute a transform.
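The slab-decomposed parallel FFT mentioned above can be illustrated serially. In the real mpi4py code the redistribution step is a `comm.Alltoall`; this sketch emulates it with a local concatenate/split so it runs on one process:

```python
import numpy as np

def rfftn_slab(u, P):
    """3D real FFT via slab decomposition across P 'processes' (emulated serially).

    Each rank owns N//P planes along axis 0, transforms its two local axes,
    then an all-to-all redistribution gives each rank complete pencils along
    axis 0 for the final 1D transform."""
    N = u.shape[0]
    slabs = [u[p * N // P:(p + 1) * N // P] for p in range(P)]
    # Step 1: local 2D real FFT over the two fully owned axes.
    slabs = [np.fft.rfft2(s, axes=(1, 2)) for s in slabs]
    # Step 2: redistribution (comm.Alltoall in the parallel code): each rank
    # trades its planes for N//P columns of axis 1, gaining all of axis 0.
    full = np.concatenate(slabs, axis=0)
    pencils = [full[:, p * N // P:(p + 1) * N // P, :] for p in range(P)]
    # Step 3: local 1D FFT along the now-complete axis 0.
    pencils = [np.fft.fft(q, axis=0) for q in pencils]
    return np.concatenate(pencils, axis=1)

rng = np.random.default_rng(0)
u = rng.standard_normal((8, 8, 8))
```

The result matches `np.fft.rfftn` because a 3D transform factors into independent 1D transforms along each axis, which is exactly what makes the slab (and pencil) decompositions possible.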
Impact of local diffusion on macroscopic dispersion in three-dimensional porous media
NASA Astrophysics Data System (ADS)
Dartois, Arthur; Beaudoin, Anthony; Huberson, Serge
2018-02-01
While macroscopic longitudinal and transverse dispersion in three-dimensional porous media has been simulated previously mostly under purely advective conditions, the impact of diffusion on macroscopic dispersion in 3D remains an open question. Furthermore, both in 2D and 3D, recurring difficulties have been encountered due to computer limitation or analytical approximation. In this work, we use the Lagrangian velocity covariance function and the temporal derivative of second-order moments to study the influence of diffusion on dispersion in highly heterogeneous 2D and 3D porous media. The first approach characterizes the correlation between the values of Eulerian velocity components sampled by particles undergoing diffusion at two times. The second approach allows the estimation of dispersion coefficients and the analysis of their behaviours as functions of diffusion. These two approaches allowed us to reach new results. The influence of diffusion on dispersion seems to be globally similar between highly heterogeneous 2D and 3D porous media. Diffusion induces a decrease in the dispersion in the direction parallel to the flow direction and an increase in the dispersion in the direction perpendicular to the flow direction. However, the amplification of these two effects with the permeability variance is clearly different between 2D and 3D. For the direction parallel to the flow direction, the amplification is more important in 3D than in 2D. It is reversed in the direction perpendicular to the flow direction.
Python Radiative Transfer Emission code (PyRaTE): non-LTE spectral lines simulations
NASA Astrophysics Data System (ADS)
Tritsis, A.; Yorke, H.; Tassis, K.
2018-05-01
We describe PyRaTE, a new, non-local thermodynamic equilibrium (non-LTE) line radiative transfer code developed specifically for post-processing astrochemical simulations. Population densities are estimated using the escape probability method. When computing the escape probability, the optical depth is calculated towards all directions, with density, molecular abundance, temperature and velocity variations all taken into account. A very easy-to-use interface, capable of importing data from simulation outputs produced with all major astrophysical codes, is also developed. The code is written in PYTHON using an "embarrassingly parallel" strategy and can handle all geometries and projection angles. We benchmark the code by comparing our results with those from RADEX (van der Tak et al. 2007) and against analytical solutions, and present case studies using hydrochemical simulations. The code will be released for public use.
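A minimal sketch of the escape probability method for a two-level system (illustrative only; PyRaTE's actual rate equations include background radiation and multi-level terms). The escape probability β(τ) = (1 − e^(−τ))/τ must be iterated to self-consistency because the opacity depends on the populations it helps determine:

```python
import numpy as np

def beta_escape(tau):
    """Escape probability for a homogeneous medium: (1 - exp(-tau)) / tau."""
    if tau < 1e-6:
        return 1.0 - 0.5 * tau        # Taylor limit, avoids 0/0
    return (1.0 - np.exp(-tau)) / tau

def two_level_populations(C_lu, C_ul, A_ul, tau0, tol=1e-12, max_iter=500):
    """Self-consistent n_u/n_l for a two-level system, background field neglected.

    The line opacity is assumed to scale with the lower-level fraction
    (toy scaling tau0 / (1 + r)), so beta and the population ratio are
    iterated to a fixed point."""
    r = 0.0                                   # n_u / n_l
    b = 1.0
    for _ in range(max_iter):
        b = beta_escape(tau0 / (1.0 + r))
        r_new = C_lu / (C_ul + b * A_ul)      # statistical equilibrium
        if abs(r_new - r) < tol:
            return r_new, b
        r = r_new
    return r, b

r_thin, b_thin = two_level_populations(C_lu=1.0, C_ul=2.0, A_ul=100.0, tau0=1e-8)
r_thick, b_thick = two_level_populations(C_lu=1.0, C_ul=2.0, A_ul=100.0, tau0=1e8)
```

The two limits behave as expected: optically thin gas radiates freely (β ≈ 1, radiative de-excitation dominates), while optically thick gas traps the photons (β → 0) and the ratio approaches the purely collisional value C_lu/C_ul.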
Comparison of DAC and MONACO DSMC Codes with Flat Plate Simulation
NASA Technical Reports Server (NTRS)
Padilla, Jose F.
2010-01-01
Various implementations of the direct simulation Monte Carlo (DSMC) method exist in academia, government and industry. By comparing implementations, deficiencies and merits of each can be discovered. This document reports comparisons between DSMC Analysis Code (DAC) and MONACO. DAC is NASA's standard DSMC production code and MONACO is a research DSMC code developed in academia. These codes have various differences; in particular, they employ distinct computational grid definitions. In this study, DAC and MONACO are compared by having each simulate a blunted flat plate wind tunnel test, using an identical volume mesh. Simulation expense and DSMC metrics are compared. In addition, flow results are compared with available laboratory data. Overall, this study revealed that both codes, excluding grid adaptation, performed similarly. For parallel processing, DAC was generally more efficient. As expected, code accuracy was mainly dependent on physical models employed.
NASA Technical Reports Server (NTRS)
Nishikawa, K.-I.; Ganguli, G.; Lee, Y. C.; Palmadesso, P. J.
1989-01-01
A spatially two-dimensional electrostatic PIC simulation code was used to study the stability of a plasma equilibrium characterized by a localized transverse dc electric field and a field-aligned drift for L << Lx, where Lx is the simulation length in the x direction and L is the scale length associated with the dc electric field. It is found that the dc electric field and the field-aligned current can together play a synergistic role to enable the excitation of electrostatic waves even when the threshold values of the field-aligned drift and the E x B drift are individually subcritical. The simulation results show that the growing ion waves are associated with small vortices in the linear stage, which evolve to the nonlinear stage dominated by larger vortices with lower frequencies.
NASA Astrophysics Data System (ADS)
Bylaska, Eric J.; Weare, Jonathan Q.; Weare, John H.
2013-08-01
Parallel in time simulation algorithms are presented and applied to conventional molecular dynamics (MD) and ab initio molecular dynamics (AIMD) models of realistic complexity. Assuming that a forward time integrator, f (e.g., the Verlet algorithm), is available to propagate the system from time t_i (trajectory positions and velocities x_i = (r_i, v_i)) to time t_{i+1} (x_{i+1}) by x_{i+1} = f_i(x_i), the dynamics problem spanning an interval from t_0 to t_M can be transformed into a root-finding problem, F(X) = [x_i - f(x_{i-1})]_{i=1,...,M} = 0, for the trajectory variables. The root-finding problem is solved using a variety of root-finding techniques, including quasi-Newton and preconditioned quasi-Newton schemes that are all unconditionally convergent. The algorithms are parallelized by assigning a processor to each time-step entry in the columns of F(X). The relation of this approach to other recently proposed parallel in time methods is discussed, and the effectiveness of various approaches to solving the root-finding problem is tested. We demonstrate that more efficient dynamical models based on simplified interactions or coarsened time-steps provide preconditioners for the root-finding problem. However, for MD and AIMD simulations, such preconditioners are not required to obtain reasonable convergence and their cost must be considered in the performance of the algorithm. The parallel in time algorithms developed are tested by applying them to MD and AIMD simulations of size and complexity similar to those encountered in present day applications. These include a 1000 Si atom MD simulation using Stillinger-Weber potentials, and a HCl + 4H2O AIMD simulation at the MP2 level. The maximum speedup (serial execution time/parallel execution time) obtained by parallelizing the Stillinger-Weber MD simulation was nearly 3.0. For the AIMD MP2 simulations, the algorithms achieved speedups of up to 14.3.
The parallel in time algorithms can be implemented in a distributed computing environment using very slow transmission control protocol/Internet protocol networks. Scripts written in Python that make calls to a precompiled quantum chemistry package (NWChem) are demonstrated to provide an actual speedup of 8.2 for a 2.5 ps AIMD simulation of HCl + 4H2O at the MP2/6-31G* level. Implemented in this way these algorithms can be used for long time high-level AIMD simulations at a modest cost using machines connected by very slow networks such as WiFi, or in different time zones connected by the Internet. The algorithms can also be used with programs that are already parallel. Using these algorithms, we are able to reduce the cost of a MP2/6-311++G(2d,2p) simulation that had reached its maximum possible speedup in the parallelization of the electronic structure calculation from 32 s/time step to 6.9 s/time step.
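The root-finding formulation can be illustrated with a toy propagator. Here a plain block-Jacobi fixed-point sweep replaces the paper's quasi-Newton solvers (it converges in at most M sweeps because the dependency structure is triangular), and each sweep's M propagator calls are independent, which is what gets distributed across processors:

```python
import numpy as np

def verlet_step(state, dt=0.01, k=1.0):
    """Velocity-Verlet step for a unit-mass harmonic oscillator: the propagator f."""
    r, v = state
    a = -k * r
    r_new = r + dt * v + 0.5 * dt * dt * a
    v_new = v + 0.5 * dt * (a - k * r_new)
    return np.array([r_new, v_new])

def sequential_trajectory(x0, M):
    """Ordinary serial time stepping: x_{i+1} = f(x_i)."""
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(M):
        xs.append(verlet_step(xs[-1]))
    return np.array(xs)

def parallel_in_time(x0, M, sweeps):
    """Solve F(X)_i = x_i - f(x_{i-1}) = 0 by block-Jacobi fixed-point sweeps.

    Within one sweep every call verlet_step(X[i]) reads only the previous
    iterate, so all M calls are independent: one processor per time step."""
    X = np.tile(np.asarray(x0, dtype=float), (M + 1, 1))   # crude constant initial guess
    for _ in range(sweeps):
        X[1:] = np.array([verlet_step(X[i]) for i in range(M)])
    return X

x0 = [1.0, 0.0]
seq = sequential_trajectory(x0, 50)
par = parallel_in_time(x0, 50, sweeps=50)
```

After s sweeps the first s time steps are exact, so M sweeps reproduce the serial trajectory; the quasi-Newton schemes in the paper exist precisely to converge in far fewer than M iterations.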
Xyce Parallel Electronic Simulator : users' guide, version 2.0.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoekstra, Robert John; Waters, Lon J.; Rankin, Eric Lamont
2004-06-01
This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator capable of simulating electrical circuits at a variety of abstraction levels. Primarily, Xyce has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors), including support for most popular parallel and serial computers; improved performance for all numerical kernels (e.g., time integrator, nonlinear and linear solvers) through state-of-the-art algorithms and novel techniques; device models which are specifically tailored to meet Sandia's needs, including many radiation-aware devices; a client-server or multi-tiered operating model wherein the numerical kernel can operate independently of the graphical user interface (GUI); and object-oriented code design and implementation using modern coding practices that ensure that the Xyce Parallel Electronic Simulator will be maintainable and extensible far into the future. Xyce is a parallel code in the most general sense of the phrase - a message-passing parallel implementation - which allows it to run efficiently on the widest possible number of computing platforms. These include serial, shared-memory and distributed-memory parallel as well as heterogeneous platforms. Careful attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows. One feature required by designers is the ability to add device models, many specific to the needs of Sandia, to the code.
To this end, the device package in the Xyce Parallel Electronic Simulator is designed to support a variety of device model inputs. These input formats include standard analytical models, behavioral models, look-up tables, and mesh-level PDE device models. Combined with this flexible interface is an architectural design that greatly simplifies the addition of circuit models. One of the most important features of Xyce is in providing a platform for computational research and development aimed specifically at the needs of the Laboratory. With Xyce, Sandia now has an 'in-house' capability with which both new electrical (e.g., device model development) and algorithmic (e.g., faster time-integration methods) research and development can be performed. Ultimately, these capabilities are migrated to end users.
NASA Technical Reports Server (NTRS)
Sohn, Andrew; Biswas, Rupak
1996-01-01
Solving the hard Satisfiability Problem is time consuming even for modest-sized problem instances. Solving the Random L-SAT Problem is especially difficult due to the ratio of clauses to variables. This report presents a parallel synchronous simulated annealing method for solving the Random L-SAT Problem on a large-scale distributed-memory multiprocessor. In particular, we use a parallel synchronous simulated annealing procedure, called Generalized Speculative Computation, which guarantees the same decision sequence as sequential simulated annealing. To demonstrate the performance of the parallel method, we have selected problem instances varying in size from 100-variables/425-clauses to 5000-variables/21,250-clauses. Experimental results on the AP1000 multiprocessor indicate that our approach can satisfy 99.9 percent of the clauses while giving almost a 70-fold speedup on 500 processors.
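A sequential simulated annealing baseline for random k-SAT can be sketched as follows (the Generalized Speculative Computation method parallelizes exactly this loop while reproducing its decision sequence; the temperature schedule here is an illustrative choice):

```python
import math
import random

def random_ksat(n_vars, n_clauses, k=3, seed=0):
    """Random k-SAT instance: each clause is a list of signed variable indices."""
    rng = random.Random(seed)
    return [[rng.randrange(1, n_vars + 1) * rng.choice((1, -1)) for _ in range(k)]
            for _ in range(n_clauses)]

def num_satisfied(clauses, assign):
    return sum(any((lit > 0) == assign[abs(lit)] for lit in clause)
               for clause in clauses)

def sat_annealing(clauses, n_vars, steps=4000, t0=2.0, cooling=0.999, seed=1):
    rng = random.Random(seed)
    assign = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
    cur = best = num_satisfied(clauses, assign)
    t = t0
    for _ in range(steps):
        v = rng.randrange(1, n_vars + 1)
        assign[v] = not assign[v]                      # propose one variable flip
        new = num_satisfied(clauses, assign)
        if new >= cur or rng.random() < math.exp((new - cur) / t):
            cur = new                                  # accept (Metropolis rule)
            best = max(best, cur)
        else:
            assign[v] = not assign[v]                  # reject: undo the flip
        t *= cooling
    return best

clauses = random_ksat(100, 425)   # the 100-variable/425-clause size from the abstract
best = sat_annealing(clauses, 100)
```

The speculative parallelization evaluates several tentative flips concurrently but commits them in the order the sequential chain would, which is why the decision sequence, and therefore the result, is unchanged.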
Efficient parallelization of analytic bond-order potentials for large-scale atomistic simulations
NASA Astrophysics Data System (ADS)
Teijeiro, C.; Hammerschmidt, T.; Drautz, R.; Sutmann, G.
2016-07-01
Analytic bond-order potentials (BOPs) provide a way to compute atomistic properties with controllable accuracy. For large-scale computations of heterogeneous compounds at the atomistic level, both the computational efficiency and memory demand of BOP implementations have to be optimized. Since the evaluation of BOPs is a local operation within a finite environment, the parallelization concepts known from short-range interacting particle simulations can be applied to improve the performance of these simulations. In this work, several efficient parallelization methods for BOPs that use three-dimensional domain decomposition schemes are described. The schemes are implemented into the bond-order potential code BOPfox, and their performance is measured in a series of benchmarks. Systems of up to several millions of atoms are simulated on a high performance computing system, and parallel scaling is demonstrated for up to thousands of processors.
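The locality that makes such parallelization possible rests on finding each atom's finite environment cheaply. A generic cell-list (linked-cell) neighbor search, not BOPfox's implementation, can be sketched as:

```python
import numpy as np
from collections import defaultdict
from itertools import product

def cell_list_pairs(pos, box, rcut):
    """All periodic pairs within rcut via linked cells: O(N) vs the O(N^2) scan.

    Cells are at least rcut wide, so every neighbor of a particle lies in its
    own or one of the 26 surrounding cells; this locality is also what lets a
    domain decomposition assign blocks of cells to different processors."""
    ncell = max(1, int(box // rcut))
    width = box / ncell
    cells = defaultdict(list)
    for i, p in enumerate(pos):
        cells[tuple((p // width).astype(int) % ncell)].append(i)
    pairs = set()
    for (cx, cy, cz), members in cells.items():
        for dx, dy, dz in product((-1, 0, 1), repeat=3):
            neigh = ((cx + dx) % ncell, (cy + dy) % ncell, (cz + dz) % ncell)
            for i in members:
                for j in cells.get(neigh, ()):
                    if i < j:
                        d = pos[i] - pos[j]
                        d -= box * np.round(d / box)   # minimum-image convention
                        if d @ d < rcut * rcut:
                            pairs.add((i, j))
    return pairs

def brute_pairs(pos, box, rcut):
    """Reference O(N^2) pair search used to verify the cell list."""
    out = set()
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            d = pos[i] - pos[j]
            d -= box * np.round(d / box)
            if d @ d < rcut * rcut:
                out.add((i, j))
    return out

rng = np.random.default_rng(2)
pos = rng.random((200, 3)) * 10.0
```

In a distributed setting, only the cells on the faces of each processor's subdomain need to be exchanged with neighbors, which is why communication stays local as the system grows.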
Parallel simulation of tsunami inundation on a large-scale supercomputer
NASA Astrophysics Data System (ADS)
Oishi, Y.; Imamura, F.; Sugawara, D.
2013-12-01
An accurate prediction of tsunami inundation is important for disaster mitigation purposes. One approach is to approximate the tsunami wave source through an instant inversion analysis using real-time observation data (e.g., Tsushima et al., 2009) and then use the resulting wave source data in an instant tsunami inundation simulation. However, a bottleneck of this approach is the large computational cost of the non-linear inundation simulation; the computational power of recent massively parallel supercomputers can enable faster-than-real-time execution of a tsunami inundation simulation. Parallel computers have become approximately 1000 times faster in 10 years (www.top500.org), so it is expected that very fast parallel computers will become more and more prevalent in the near future. Therefore, it is important to investigate how to efficiently conduct a tsunami simulation on parallel computers. In this study, we are targeting very fast tsunami inundation simulations on the K computer, currently the fastest Japanese supercomputer, which has a theoretical peak performance of 11.2 PFLOPS. One computing node of the K computer consists of 1 CPU with 8 cores that share memory, and the nodes are connected through a high-performance torus-mesh network. The K computer is designed for distributed-memory parallel computation, so we have developed a parallel tsunami model. Our model is based on the TUNAMI-N2 model of Tohoku University, which uses a leap-frog finite difference method. A grid nesting scheme is employed to apply high-resolution grids only at the coastal regions. To balance the computation load of each CPU in the parallelization, CPUs are first allocated to each nested layer in proportion to the number of grid points of the nested layer. Using the CPUs allocated to each layer, 1-D domain decomposition is performed on each layer.
In the parallel computation, three types of communication are necessary: (1) communication to adjacent neighbours for the finite difference calculation, (2) communication between adjacent layers for the calculations to connect each layer, and (3) global communication to obtain the time step which satisfies the CFL condition in the whole domain. A preliminary test on the K computer showed the parallel efficiency on 1024 cores was 57% relative to 64 cores. We estimate that the parallel efficiency will be considerably improved by applying a 2-D domain decomposition instead of the present 1-D domain decomposition in future work. The present parallel tsunami model was applied to the 2011 Great Tohoku tsunami. The coarsest resolution layer covers a 758 km × 1155 km region with a 405 m grid spacing. A nesting of five layers was used with the resolution ratio of 1/3 between nested layers. The finest resolution region has 5 m resolution and covers most of the coastal region of Sendai city. To complete 2 hours of simulation time, the serial (non-parallel) computation took approximately 4 days on a workstation. To complete the same simulation on 1024 cores of the K computer, it took 45 minutes which is more than two times faster than real-time. This presentation discusses the updated parallel computational performance and the efficient use of the K computer when considering the characteristics of the tsunami inundation simulation model in relation to the characteristics and capabilities of the K computer.
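The layer-proportional CPU allocation described above can be sketched with largest-remainder rounding (illustrative; the abstract specifies only proportionality, so the rounding rule and grid counts are assumptions):

```python
def allocate_cpus(grid_points, total_cpus):
    """Allocate CPUs to nested layers in proportion to their grid-point counts.

    Largest-remainder rounding; assumes total_cpus >= number of layers, and
    every layer receives at least one CPU."""
    total = sum(grid_points)
    quotas = [g * total_cpus / total for g in grid_points]
    alloc = [max(1, int(q)) for q in quotas]
    # Hand leftover CPUs to the layers with the largest fractional parts.
    by_frac = sorted(range(len(quotas)),
                     key=lambda i: quotas[i] - int(quotas[i]), reverse=True)
    k = 0
    while sum(alloc) < total_cpus:
        alloc[by_frac[k % len(alloc)]] += 1
        k += 1
    return alloc

# Five nested layers with decreasing (illustrative) grid counts, 1024 cores.
alloc = allocate_cpus([1_000_000, 300_000, 90_000, 27_000, 8_100], 1024)
```

Proportional allocation keeps the per-core work roughly equal across layers; the 1-D decomposition within each layer then splits that layer's rows among its own CPUs.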
Methodology of modeling and measuring computer architectures for plasma simulations
NASA Technical Reports Server (NTRS)
Wang, L. P. T.
1977-01-01
A brief introduction to plasma simulation using computers and the difficulties on currently available computers is given. Through the use of an analyzing and measuring methodology - SARA, the control flow and data flow of a particle simulation model REM2-1/2D are exemplified. After recursive refinements the total execution time may be greatly shortened and a fully parallel data flow can be obtained. From this data flow, a matched computer architecture or organization could be configured to achieve the computation bound of an application problem. A sequential type simulation model, an array/pipeline type simulation model, and a fully parallel simulation model of a code REM2-1/2D are proposed and analyzed. This methodology can be applied to other application problems which have implicitly parallel nature.
Massively parallel quantum computer simulator
NASA Astrophysics Data System (ADS)
De Raedt, K.; Michielsen, K.; De Raedt, H.; Trieu, B.; Arnold, G.; Richter, M.; Lippert, Th.; Watanabe, H.; Ito, N.
2007-01-01
We describe portable software to simulate universal quantum computers on massively parallel computers. We illustrate the use of the simulation software by running various quantum algorithms on different computer architectures, such as an IBM BlueGene/L, an IBM Regatta p690+, a Hitachi SR11000/J1, a Cray X1E, an SGI Altix 3700 and clusters of PCs running Windows XP. We study the performance of the software by simulating quantum computers containing up to 36 qubits, using up to 4096 processors and up to 1 TB of memory. Our results demonstrate that the simulator exhibits nearly ideal scaling as a function of the number of processors and suggest that the simulation software described in this paper may also serve as a benchmark for testing high-end parallel computers.
PENTACLE: Parallelized particle-particle particle-tree code for planet formation
NASA Astrophysics Data System (ADS)
Iwasawa, Masaki; Oshino, Shoichi; Fujii, Michiko S.; Hori, Yasunori
2017-10-01
We have newly developed a parallelized particle-particle particle-tree code for planet formation, PENTACLE, which is a parallelized hybrid N-body integrator executed on a CPU-based (super)computer. PENTACLE uses a fourth-order Hermite algorithm to calculate gravitational interactions between particles within a cut-off radius and a Barnes-Hut tree method for gravity from particles beyond. It also implements an open-source library designed for full automatic parallelization of particle simulations, FDPS (Framework for Developing Particle Simulator), to parallelize a Barnes-Hut tree algorithm for a memory-distributed supercomputer. These allow us to handle 1-10 million particles in a high-resolution N-body simulation on CPU clusters for collisional dynamics, including physical collisions in a planetesimal disc. In this paper, we show the performance and the accuracy of PENTACLE in terms of the cut-off radius R̃_cut and the time-step Δt. It turns out that the accuracy of a hybrid N-body simulation is controlled through Δt/R̃_cut, and Δt/R̃_cut ~ 0.1 is necessary to simulate accurately the accretion process of a planet for ≥10^6 yr. For all those interested in large-scale particle simulations, PENTACLE, customized for planet formation, will be freely available from https://github.com/PENTACLE-Team/PENTACLE under the MIT licence.
Numerical characteristics of quantum computer simulation
NASA Astrophysics Data System (ADS)
Chernyavskiy, A.; Khamitov, K.; Teplov, A.; Voevodin, V.; Voevodin, Vl.
2016-12-01
The simulation of quantum circuits is significantly important for the implementation of quantum information technologies. The main difficulty of such modeling is the exponential growth of dimensionality, thus the usage of modern high-performance parallel computations is relevant. As is well known, an arbitrary quantum computation in the circuit model can be done using only single- and two-qubit gates, and we analyze the computational structure and properties of the simulation of such gates. We investigate how the unique properties of quantum nature lead to the computational properties of the considered algorithms: quantum parallelism makes the simulation of quantum gates highly parallel, while, on the other hand, quantum entanglement leads to the problem of computational locality during simulation. We use the methodology of the AlgoWiki project (algowiki-project.org) to analyze the algorithm. This methodology consists of theoretical (sequential and parallel complexity, macro structure, and visual informational graph) and experimental (locality and memory access, scalability and more specific dynamic characteristics) parts. The experimental part was carried out using the petascale Lomonosov supercomputer (Moscow State University, Russia). We show that the simulation of quantum gates is a good basis for the research and testing of development methods for data-intensive parallel software, and the considered methodology of analysis can be successfully used for the improvement of algorithms in quantum information science.
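Applying a single- or two-qubit gate to an n-qubit state vector is the kernel such simulators parallelize; a minimal dense-state sketch (not the AlgoWiki reference implementation):

```python
import numpy as np

def apply_gate(state, gate, targets, n):
    """Apply a (2^k x 2^k) gate to the target qubits of an n-qubit state vector.

    The state is viewed as a rank-n tensor; the gate contracts only the target
    axes, and the same small update runs independently over all remaining
    amplitudes -- the quantum parallelism that makes the kernel highly parallel.
    Entanglement shows up as gates on distant qubits touching widely separated
    memory, which is the locality problem the abstract mentions."""
    k = len(targets)
    psi = np.moveaxis(state.reshape([2] * n), targets, list(range(k)))
    psi = gate @ psi.reshape(2 ** k, -1)
    psi = np.moveaxis(psi.reshape([2] * n), list(range(k)), targets)
    return psi.reshape(-1)

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

# Bell pair embedded in 3 qubits: H on qubit 0, then CNOT(control 0, target 1),
# starting from |000>. Qubit 0 is the leading tensor axis.
n = 3
state = np.zeros(2 ** n)
state[0] = 1.0
state = apply_gate(state, H, [0], n)
state = apply_gate(state, CNOT, [0, 1], n)
```

The final state is (|000⟩ + |110⟩)/√2, i.e. equal amplitudes at flat indices 0 and 6 with qubit 0 as the most significant bit.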
Visualization and Tracking of Parallel CFD Simulations
NASA Technical Reports Server (NTRS)
Vaziri, Arsi; Kremenetsky, Mark
1995-01-01
We describe a system for interactive visualization and tracking of a 3-D unsteady computational fluid dynamics (CFD) simulation on a parallel computer. CM/AVS, a distributed, parallel implementation of a visualization environment (AVS), runs on the CM-5 parallel supercomputer. A CFD solver is run as a CM/AVS module on the CM-5. Data communication between the solver, other parallel visualization modules, and a graphics workstation, which is running AVS, is handled by CM/AVS. Partitioning of the visualization task between the CM-5 and the workstation can be done interactively in the visual programming environment provided by AVS. Flow solver parameters can also be altered by programmable interactive widgets. This system partially removes the requirement of storing large solution files at frequent time steps, a characteristic of the traditional 'simulate → store → visualize' post-processing approach.
Lagardère, Louis; Lipparini, Filippo; Polack, Étienne; Stamm, Benjamin; Cancès, Éric; Schnieders, Michael; Ren, Pengyu; Maday, Yvon; Piquemal, Jean-Philip
2014-02-28
In this paper, we present a scalable and efficient implementation of point dipole-based polarizable force fields for molecular dynamics (MD) simulations with periodic boundary conditions (PBC). The Smooth Particle-Mesh Ewald (SPME) technique is combined with two optimal iterative strategies, namely, a preconditioned conjugate gradient solver and a Jacobi solver in conjunction with Direct Inversion in the Iterative Subspace (DIIS) for convergence acceleration, to solve the polarization equations. We show that both solvers exhibit very good parallel performance and overall very competitive timings for the energy-force computation needed to perform an MD step. Various tests on large systems are provided in the context of the polarizable AMOEBA force field as implemented in the newly developed Tinker-HP package, the first implementation of a polarizable model that makes large-scale, massively parallel experiments with PBC point dipole models possible. We show that using a large number of cores offers a significant acceleration of the overall process involving the iterative methods within the context of SPME, and a noticeable improvement in memory management gives access to very large systems (hundreds of thousands of atoms) as the algorithm naturally distributes the data across cores. Coupled with advanced MD techniques, gains ranging from 2 to 3 orders of magnitude in time are now possible compared to non-optimized, sequential implementations, opening new directions for polarizable molecular dynamics in periodic boundary conditions using massively parallel implementations.
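As a rough sketch of the first iterative strategy, a textbook preconditioned conjugate gradient solver for a generic symmetric positive definite system is shown below. The matrix `a` stands in for the polarization interaction matrix and `m_inv` for a simple (e.g. diagonal) preconditioner; none of this is Tinker-HP code.

```python
import numpy as np

def pcg(a, b, m_inv, tol=1e-8, max_iter=100):
    """Minimal preconditioned conjugate gradient for an SPD system a @ x = b,
    the same family of solver used (in far more optimized, parallel form)
    for the dipole polarization equations."""
    x = np.zeros_like(b)
    r = b - a @ x                 # initial residual
    z = m_inv @ r                 # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        ap = a @ p
        alpha = rz / (p @ ap)
        x += alpha * p
        r -= alpha * ap
        if np.linalg.norm(r) < tol:
            break
        z = m_inv @ r
        rz_new = r @ z
        p = z + (rz_new / rz) * p # new search direction
        rz = rz_new
    return x
```

In the MD setting each `a @ p` product corresponds to a field evaluation over all dipoles, which is why distributing that product across cores dominates the parallel design.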
Design of object-oriented distributed simulation classes
NASA Technical Reports Server (NTRS)
Schoeffler, James D. (Principal Investigator)
1995-01-01
Distributed simulation of aircraft engines as part of a computer aided design package is being developed by NASA Lewis Research Center for the aircraft industry. The project is called NPSS, an acronym for 'Numerical Propulsion Simulation System'. NPSS is a flexible object-oriented simulation of aircraft engines requiring high computing speed. It is desirable to run the simulation on a distributed computer system with multiple processors executing portions of the simulation in parallel. The purpose of this research was to investigate object-oriented structures such that individual objects could be distributed. The set of classes used in the simulation must be designed to facilitate parallel computation. Since the portions of the simulation carried out in parallel are not independent of one another, there is a need for communication among the parallel executing processors, which in turn implies a need for their synchronization. Communication and synchronization can lead to decreased throughput as parallel processors wait for data or synchronization signals from other processors. As a result of this research, the following have been accomplished. The design and implementation of a set of simulation classes which result in a distributed simulation control program have been completed. The design is based upon the MIT 'Actor' model of a concurrent object and uses 'connectors' to structure dynamic connections between simulation components. Connectors may be dynamically created according to the distribution of objects among machines at execution time without any programming changes. Measurements of the basic performance have been carried out, with the result that the communication overhead of the distributed design is swamped by the computation time of modules unless modules have very short execution times per iteration or time step. An analytical performance model based upon queuing network theory has been designed and implemented.
Its application to realistic configurations has not been carried out.
Partitioning and packing mathematical simulation models for calculation on parallel computers
NASA Technical Reports Server (NTRS)
Arpasi, D. J.; Milner, E. J.
1986-01-01
The development of multiprocessor simulations from a serial set of ordinary differential equations describing a physical system is described. Degrees of parallelism (i.e., coupling between the equations) and their impact on parallel processing are discussed. The problem of identifying computational parallelism within sets of closely coupled equations that require the exchange of current values of variables is described. A technique is presented for identifying this parallelism and for partitioning the equations for parallel solution on a multiprocessor. An algorithm which packs the equations into a minimum number of processors is also described. The results of the packing algorithm when applied to a turbojet engine model are presented in terms of processor utilization.
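A standard first-fit-decreasing heuristic gives the flavor of packing equation-evaluation costs into a minimum number of processors subject to a per-frame time budget. This sketch is only illustrative; it is not the packing algorithm of the paper.

```python
def pack(costs, capacity):
    """First-fit-decreasing bin packing: place each equation's evaluation
    cost (largest first) into the first processor with room, opening a new
    processor only when none fits. Returns a list of index groups, one per
    processor."""
    bins = []                                        # [remaining, [equation indices]]
    order = sorted(range(len(costs)), key=lambda i: -costs[i])
    for i in order:
        for b in bins:
            if costs[i] <= b[0]:                     # fits in this processor's budget
                b[0] -= costs[i]
                b[1].append(i)
                break
        else:                                        # no processor had room
            bins.append([capacity - costs[i], [i]])
    return [b[1] for b in bins]
```

For closely coupled equations, the real problem also constrains which equations may share a processor (to limit variable exchange), which a pure bin-packing heuristic ignores.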
Vectorization for Molecular Dynamics on Intel Xeon Phi Coprocessors
NASA Astrophysics Data System (ADS)
Yi, Hongsuk
2014-03-01
Many modern processors are capable of exploiting data-level parallelism through the use of single instruction multiple data (SIMD) execution. The new Intel Xeon Phi coprocessor supports 512-bit vector registers for high-performance computing. In this paper, we have developed a hierarchical parallelization scheme for accelerated molecular dynamics simulations with Tersoff potentials for covalently bonded solid crystals on Intel Xeon Phi coprocessor systems. The scheme exploits multiple levels of parallelism, combining tightly coupled thread-level and task-level parallelism with the 512-bit vector registers. The simulation results show that the parallel performance of the SIMD implementations on the Xeon Phi is clearly superior to that of the x86 CPU architecture.
NASA Astrophysics Data System (ADS)
Stepanova, L. V.
2017-12-01
Atomistic simulations of the central crack growth process in an infinite plane medium under mixed-mode loading using Large-Scale Atomic/Molecular Massively Parallel Simulator (LAMMPS), a classical molecular dynamics code, are performed. The inter-atomic potential used in this investigation is the Embedded Atom Method (EAM) potential. Plane specimens with an initial central crack are subjected to mixed-mode loadings. The simulation cell contains 400,000 atoms. The crack propagation direction angles under different values of the mixity parameter in a wide range of values from pure tensile loading to pure shear loading in a wide range of temperatures (from 0.1 K to 800 K) are obtained and analyzed. It is shown that the crack propagation direction angles obtained by molecular dynamics coincide with the crack propagation direction angles given by the multi-parameter fracture criteria based on the strain energy density and the multi-parameter description of the crack-tip fields. The multi-parameter fracture criteria are based on the multi-parameter stress field description taking into account the higher order terms of the Williams series expansion of the crack tip fields.
Acceleration of Radiance for Lighting Simulation by Using Parallel Computing with OpenCL
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zuo, Wangda; McNeil, Andrew; Wetter, Michael
2011-09-06
We report on the acceleration of annual daylighting simulations for fenestration systems in the Radiance ray-tracing program. The algorithm was optimized to reduce both the redundant data input/output operations and the floating-point operations. To further accelerate the simulation speed, the calculation for matrix multiplications was implemented using parallel computing on a graphics processing unit. We used OpenCL, which is a cross-platform parallel programming language. Numerical experiments show that the combination of the above measures can speed up the annual daylighting simulations 101.7 times or 28.6 times when the sky vector has 146 or 2306 elements, respectively.
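The core operation offloaded to the GPU is a single matrix-matrix product: a daylight-coefficient matrix multiplied by the sky vectors for every timestep of the year. The sketch below shows that structure with hypothetical sensor counts and random placeholder data (only the 146-element sky size comes from the abstract).

```python
import numpy as np

# Illustrative sizes: 100 sensor points, a 146-element sky discretization
# (as in the paper's smaller case), and 8760 hourly timesteps per year.
n_sensors, n_sky, n_hours = 100, 146, 8760
dc = np.random.rand(n_sensors, n_sky)     # daylight coefficients (placeholder values)
sky = np.random.rand(n_sky, n_hours)      # one sky vector per timestep
illum = dc @ sky                          # annual illuminance, all hours at once
```

Batching all 8760 sky vectors into one matrix product is what makes the calculation a good fit for an OpenCL kernel: it replaces thousands of small matrix-vector products with one large, regular computation.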
Streaming parallel GPU acceleration of large-scale filter-based spiking neural networks.
Slażyński, Leszek; Bohte, Sander
2012-01-01
The arrival of graphics processing unit (GPU) cards suitable for massively parallel computing promises affordable large-scale neural network simulation previously only available at supercomputing facilities. While the raw numbers suggest that GPUs may outperform CPUs by at least an order of magnitude, the challenge is to develop fine-grained parallel algorithms to fully exploit the particulars of GPUs. Computation in a neural network is inherently parallel and thus a natural match for GPU architectures: given inputs, the internal state for each neuron can be updated in parallel. We show that for filter-based spiking neurons, like the Spike Response Model, the additive nature of membrane potential dynamics enables additional update parallelism. This also reduces the accumulation of numerical errors when using single precision computation, the native precision of GPUs. We further show that optimizing simulation algorithms and data structures to the GPU's architecture has a large pay-off: for example, matching iterative neural updating to the memory architecture of the GPU speeds up this simulation step by a factor of three to five. With such optimizations, we can simulate in better-than-realtime plausible spiking neural networks of up to 50 000 neurons, processing over 35 million spiking events per second.
Dewaraja, Yuni K; Ljungberg, Michael; Majumdar, Amitava; Bose, Abhijit; Koral, Kenneth F
2002-02-01
This paper reports the implementation of the SIMIND Monte Carlo code on an IBM SP2 distributed memory parallel computer. Basic aspects of running Monte Carlo particle transport calculations on parallel architectures are described. Our parallelization is based on equally partitioning photons among the processors and uses the Message Passing Interface (MPI) library for interprocessor communication and the Scalable Parallel Random Number Generator (SPRNG) to generate uncorrelated random number streams. These parallelization techniques are also applicable to other distributed memory architectures. A linear increase in computing speed with the number of processors is demonstrated for up to 32 processors. This speed-up is especially significant in Single Photon Emission Computed Tomography (SPECT) simulations involving higher energy photon emitters, where explicit modeling of the phantom and collimator is required. For (131)I, the accuracy of the parallel code is demonstrated by comparing simulated and experimental SPECT images from a heart/thorax phantom. Clinically realistic SPECT simulations using the voxel-man phantom are carried out to assess scatter and attenuation correction.
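The equal partitioning of photon histories can be sketched in a few lines. This is illustrative Python; the real code distributes the work via MPI and gives each rank an uncorrelated SPRNG random-number stream.

```python
def partition_photons(n_photons, n_procs):
    """Equally partition photon histories across ranks: each rank gets
    n_photons // n_procs histories, and the first n_photons % n_procs
    ranks take one extra so the total is preserved."""
    base, extra = divmod(n_photons, n_procs)
    return [base + (r < extra) for r in range(n_procs)]
```

Because photon histories are independent, this embarrassingly parallel split is what yields the near-linear speedup reported for up to 32 processors.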
Dynamic load balance scheme for the DSMC algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Jin; Geng, Xiangren; Jiang, Dingwu
The direct simulation Monte Carlo (DSMC) algorithm, devised by Bird, has been used over a wide range of rarefied flow problems in the past 40 years. While DSMC is suitable for parallel implementation on powerful multi-processor architectures, it also introduces a large load imbalance across the processor array, even for small examples. The load imposed on a processor by a DSMC calculation is determined to a large extent by the total number of simulator particles upon it. Since most flows are impulsively started with an initial distribution of particles that is quite different from the steady state, the total number of simulator particles changes dramatically, and a load balance based upon the initial distribution of particles breaks down as the steady state of the flow is reached. The load imbalance and huge computational cost of DSMC have limited its application to rarefied or simple transitional flows. In this paper, by taking advantage of METIS, a software package for partitioning unstructured graphs, and taking the total number of simulator particles in each cell as a weight, repartitioning based upon the principle that each processor handles approximately an equal total number of simulator particles has been achieved. The computation must pause several times to renew the particle totals in each processor and repartition the whole domain again, so that the load balance across the processor array holds for the duration of the computation. The parallel efficiency can thereby be improved effectively. The benchmark solution of a cylinder submerged in hypersonic flow has been simulated numerically, as has hypersonic flow past a complex wing-body configuration. The results show that, in both cases, the computational time can be reduced by about 50%.
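A one-dimensional greedy stand-in conveys the weighting idea, with each cell's simulator-particle count as its weight; the paper itself uses METIS graph partitioning on the unstructured mesh rather than this simplified sweep.

```python
def repartition(cell_particles, n_procs):
    """Assign consecutive cells to processors, cutting whenever the running
    particle count reaches the per-processor target, so each processor ends
    up with roughly the same total number of simulator particles. Returns
    the processor id assigned to each cell."""
    target = sum(cell_particles) / n_procs
    parts, acc, p = [], 0.0, 0
    for w in cell_particles:
        if acc >= target and p < n_procs - 1:
            acc, p = 0.0, p + 1        # start filling the next processor
        parts.append(p)
        acc += w
    return parts
```

Re-running this assignment with the current particle counts, as the flow approaches steady state, is the essence of the dynamic load-balance scheme; a graph partitioner additionally minimizes the inter-processor communication cut.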
Komarov, Ivan; D'Souza, Roshan M
2012-01-01
The Gillespie Stochastic Simulation Algorithm (GSSA) and its variants are cornerstone techniques to simulate reaction kinetics in situations where the concentration of the reactant is too low to allow deterministic techniques such as differential equations. The inherent limitations of the GSSA include the time required for executing a single run and the need for multiple runs for parameter sweep exercises due to the stochastic nature of the simulation. Even very efficient variants of GSSA are prohibitively expensive to compute and perform parameter sweeps. Here we present a novel variant of the exact GSSA that is amenable to acceleration by using graphics processing units (GPUs). We parallelize the execution of a single realization across threads in a warp (fine-grained parallelism). A warp is a collection of threads that are executed synchronously on a single multi-processor. Warps executing in parallel on different multi-processors (coarse-grained parallelism) simultaneously generate multiple trajectories. Novel data-structures and algorithms reduce memory traffic, which is the bottleneck in computing the GSSA. Our benchmarks show an 8×-120× performance gain over various state-of-the-art serial algorithms when simulating different types of models.
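For reference, the serial direct-method SSA that the paper parallelizes across warp threads looks roughly like this. It is a minimal sketch: the propensity function and stoichiometry encoding are illustrative choices, not the paper's data structures.

```python
import random, math

def gillespie(x, rates, stoich, t_end, seed=0):
    """One trajectory of the direct-method SSA. `rates(x)` returns the
    propensity of each reaction in state x; `stoich[j]` is the state change
    caused by reaction j. Returns the state at time t_end."""
    rng = random.Random(seed)
    t = 0.0
    while True:
        a = rates(x)
        a0 = sum(a)
        if a0 == 0:                               # no reaction can fire
            return x
        t += -math.log(1.0 - rng.random()) / a0   # exponential waiting time
        if t > t_end:
            return x
        r, j = rng.random() * a0, 0               # choose reaction j w.p. a_j/a0
        while r > a[j]:
            r -= a[j]
            j += 1
        x = [xi + s for xi, s in zip(x, stoich[j])]
```

Every run repeats this loop with different random streams, which is why coarse-grained parallelism (many trajectories) composes naturally with the paper's fine-grained, intra-warp parallelization of a single realization.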
Traffic Simulations on Parallel Computers Using Domain Decomposition Techniques
DOT National Transportation Integrated Search
1995-01-01
Large scale simulations of Intelligent Transportation Systems (ITS) can only be achieved by using the computing resources offered by parallel computing architectures. Domain decomposition techniques are proposed which allow the performance of traffic...
Jung, Jaewoon; Mori, Takaharu; Kobayashi, Chigusa; Matsunaga, Yasuhiro; Yoda, Takao; Feig, Michael; Sugita, Yuji
2015-07-01
GENESIS (Generalized-Ensemble Simulation System) is a new software package for molecular dynamics (MD) simulations of macromolecules. It has two MD simulators, called ATDYN and SPDYN. ATDYN is parallelized based on an atomic decomposition algorithm for the simulations of all-atom force-field models as well as coarse-grained Go-like models. SPDYN is highly parallelized based on a domain decomposition scheme, allowing large-scale MD simulations on supercomputers. Hybrid schemes combining OpenMP and MPI are used in both simulators to target modern multicore computer architectures. Key advantages of GENESIS are (1) the highly parallel performance of SPDYN for very large biological systems consisting of more than one million atoms and (2) the availability of various REMD algorithms (T-REMD, REUS, multi-dimensional REMD for both all-atom and Go-like models under the NVT, NPT, NPAT, and NPγT ensembles). The former is achieved by a combination of the midpoint cell method and the efficient three-dimensional Fast Fourier Transform algorithm, where the domain decomposition space is shared in real-space and reciprocal-space calculations. Other features in SPDYN, such as avoiding concurrent memory access, reducing communication times, and usage of parallel input/output files, also contribute to the performance. We show the REMD simulation results of a mixed (POPC/DMPC) lipid bilayer as a real application using GENESIS. GENESIS is released as free software under the GPLv2 licence and can be easily modified for the development of new algorithms and molecular models. WIREs Comput Mol Sci 2015, 5:310-323. doi: 10.1002/wcms.1220.
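The midpoint cell idea behind SPDYN's decomposition can be sketched as a one-line rule: the processor owning the cell that contains the midpoint of two particles computes their interaction. This toy ignores the periodic boundaries and communication scheduling that the real implementation must handle.

```python
def midpoint_cell(ri, rj, cell_size):
    """Return the integer cell index (per dimension) of the midpoint of
    particles at positions ri and rj; that cell's owner computes the pair
    interaction, bounding how far coordinates must be imported."""
    return tuple(int(((a + b) / 2) // cell_size) for a, b in zip(ri, rj))
```

Because the midpoint lies between the two particles, each processor only needs coordinates from within half the interaction cutoff beyond its own cells, reducing communication relative to naive spatial decomposition.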
On extending parallelism to serial simulators
NASA Technical Reports Server (NTRS)
Nicol, David; Heidelberger, Philip
1994-01-01
This paper describes an approach to discrete event simulation modeling that appears to be effective for developing portable and efficient parallel execution of models of large distributed systems and communication networks. In this approach, the modeler develops submodels using an existing sequential simulation modeling tool, using the full expressive power of the tool. A set of modeling language extensions permit automatically synchronized communication between submodels; however, the automation requires that any such communication take a nonzero amount of simulation time. Within this modeling paradigm, a variety of conservative synchronization protocols can transparently support the execution of submodels on potentially different processors. A specific implementation of this approach, U.P.S. (Utilitarian Parallel Simulator), is described, along with performance results on the Intel Paragon.
Global Magnetohydrodynamic Simulation Using High Performance FORTRAN on Parallel Computers
NASA Astrophysics Data System (ADS)
Ogino, T.
High Performance Fortran (HPF) is one of the modern and common techniques for achieving high-performance parallel computation. We have translated a 3-dimensional magnetohydrodynamic (MHD) simulation code of the Earth's magnetosphere from VPP Fortran to HPF/JA on the Fujitsu VPP5000/56 vector-parallel supercomputer; the MHD code was fully vectorized and fully parallelized in VPP Fortran. The overall performance and capability of the HPF MHD code proved to be almost comparable to that of the VPP Fortran version. A 3-dimensional global MHD simulation of the Earth's magnetosphere was performed at a speed of over 400 Gflops, an efficiency of 76.5% relative to the catalog peak of the VPP5000/56, in vector and parallel computation. We conclude that fluid and MHD codes that are fully vectorized and fully parallelized in VPP Fortran can be translated with relative ease to HPF/JA, and a code in HPF/JA may be expected to perform comparably to the same code written in VPP Fortran.
The cost of conservative synchronization in parallel discrete event simulations
NASA Technical Reports Server (NTRS)
Nicol, David M.
1990-01-01
The performance of a synchronous conservative parallel discrete-event simulation protocol is analyzed. The class of simulation models considered is oriented around a physical domain and possesses a limited ability to predict future behavior. A stochastic model is used to show that as the volume of simulation activity in the model increases relative to a fixed architecture, the complexity of the average per-event overhead due to synchronization, event list manipulation, lookahead calculations, and processor idle time approaches the complexity of the average per-event overhead of a serial simulation. The method is therefore within a constant factor of optimal. The analysis demonstrates that on large problems--those for which parallel processing is ideally suited--there is often enough parallel workload so that processors are not usually idle. The viability of the method is also demonstrated empirically, showing how good performance is achieved on large problems using a thirty-two node Intel iPSC/2 distributed memory multiprocessor.
A New Parallel Boundary Condition for Turbulence Simulations in Stellarators
NASA Astrophysics Data System (ADS)
Martin, Mike F.; Landreman, Matt; Dorland, William; Xanthopoulos, Pavlos
2017-10-01
For gyrokinetic simulations of core turbulence, the ``twist-and-shift'' parallel boundary condition (Beer et al., PoP, 1995), which involves a shift in radial wavenumber proportional to the global shear and a quantization of the simulation domain's aspect ratio, is the standard choice. But as this condition was derived under the assumption of axisymmetry, ``twist-and-shift'' as it stands is formally incorrect for turbulence simulations in stellarators. Moreover, for low-shear stellarators like W7X and HSX, the use of a global shear in the traditional boundary condition places an inflexible constraint on the aspect ratio of the domain, requiring more grid points to fully resolve its extent. Here, we present a parallel boundary condition for ``stellarator-symmetric'' simulations that relies on the local shear along a field line. This boundary condition is similar to ``twist-and-shift'', but has an added flexibility in choosing the parallel length of the domain based on local shear consideration in order to optimize certain parameters such as the aspect ratio of the simulation domain.
The Distributed Diagonal Force Decomposition Method for Parallelizing Molecular Dynamics Simulations
Boršnik, Urban; Miller, Benjamin T.; Brooks, Bernard R.; Janežič, Dušanka
2011-01-01
Parallelization is an effective way to reduce the computational time needed for molecular dynamics simulations. We describe a new parallelization method, the distributed-diagonal force decomposition method, with which we extend and improve the existing force decomposition methods. Our new method requires less data communication during molecular dynamics simulations than replicated data and current force decomposition methods, increasing the parallel efficiency. It also dynamically load-balances the processors' computational load throughout the simulation. The method is readily implemented in existing molecular dynamics codes and it has been incorporated into the CHARMM program, allowing its immediate use in conjunction with the many molecular dynamics simulation techniques that are already present in the program. We also present the design of the Force Decomposition Machine, a cluster of personal computers and networks that is tailored to running molecular dynamics simulations using the distributed diagonal force decomposition method. The design is expandable and provides various degrees of fault resilience. This approach is easily adaptable to computers with Graphics Processing Units because it is independent of the processor type being used. PMID:21793007
Parallel computing method for simulating hydrological processes of large rivers under climate change
NASA Astrophysics Data System (ADS)
Wang, H.; Chen, Y.
2016-12-01
Climate change is one of the most widely recognized global environmental problems in the world. It has altered the distribution of watershed hydrological processes in time and space, especially in large rivers. Watershed hydrological process simulation based on physically based distributed hydrological models can give better results than lumped models. However, such simulation involves a large amount of calculation, especially for large rivers, and thus requires huge computing resources that may not be steadily available to researchers, or only at high expense; this has seriously restricted research and application. Current parallel methods mostly parallelize in the space and time dimensions, calculating the natural features of a distributed hydrological model in order, grid by grid (unit or sub-basin), from upstream to downstream. This article proposes a high-performance computing method for hydrological process simulation with a high speedup ratio and parallel efficiency. It combines the runoff characteristics in time and space of a distributed hydrological model with distributed data storage, an in-memory database, distributed computing, and parallel computing based on units of computing power. The method is highly adaptable and extensible: it can make full use of the available computing and storage resources even when those resources are limited, and the computing efficiency improves linearly as computing resources increase. This method can satisfy the parallel computing requirements of hydrological process simulation in small, medium, and large rivers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, William Michael; Plimpton, Steven James; Wang, Peng
2010-03-01
LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. LAMMPS has potentials for soft materials (biomolecules, polymers) and solid-state materials (metals, semiconductors) and coarse-grained or mesoscopic systems. It can be used to model atoms or, more generically, as a parallel particle simulator at the atomic, meso, or continuum scale. LAMMPS runs on single processors or in parallel using message-passing techniques and a spatial-decomposition of the simulation domain. The code is designed to be easy to modify or extend with new functionality.
Limits on the Efficiency of Event-Based Algorithms for Monte Carlo Neutron Transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romano, Paul K.; Siegel, Andrew R.
The traditional form of parallelism in Monte Carlo particle transport simulations, wherein each individual particle history is considered a unit of work, does not lend itself well to data-level parallelism. Event-based algorithms, which were originally used for simulations on vector processors, may offer a path toward better utilizing data-level parallelism in modern computer architectures. In this study, a simple model is developed for estimating the efficiency of the event-based particle transport algorithm under two sets of assumptions. Data collected from simulations of four reactor problems using OpenMC were then used in conjunction with the models to calculate the speedup due to vectorization as a function of two parameters: the size of the particle bank and the vector width. When each event type is assumed to have constant execution time, the achievable speedup is directly related to the particle bank size. We observed that the bank size generally needs to be at least 20 times greater than the vector size in order to achieve a vector efficiency greater than 90%. When the execution times for events are allowed to vary, however, the vector speedup is also limited by differences in execution time for events being carried out in a single event-iteration. For some problems, this implies that vector efficiencies over 50% may not be attainable. While there are many factors impacting the performance of an event-based algorithm that are not captured by our model, it nevertheless provides insight into factors that may be limiting in a real implementation.
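A toy Monte Carlo version of the constant-event-time regime illustrates why a large particle bank is needed: as histories terminate, the tail of the bank fills vector lanes only partially and the wasted lanes drag efficiency down. All parameters below are illustrative, not the paper's model.

```python
import random, math

def vector_efficiency(bank_size, vec_width, p_absorb=0.1, seed=0, trials=100):
    """Toy lane-utilization estimate: a bank of particle histories is
    processed one event per iteration in vector groups of vec_width; each
    particle is absorbed (terminates) with probability p_absorb per event.
    Returns the fraction of issued vector lanes doing useful work."""
    rng = random.Random(seed)
    useful = issued = 0
    for _ in range(trials):
        alive = bank_size
        while alive:
            useful += alive                                    # lanes with live particles
            issued += math.ceil(alive / vec_width) * vec_width # lanes actually issued
            alive = sum(rng.random() > p_absorb for _ in range(alive))
    return useful / issued
```

In this toy, efficiency is exactly 1 for scalar execution (vec_width = 1) and drops as the vector widens relative to the bank, qualitatively matching the observation that the bank must be many times larger than the vector width.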
Measurements of Two-Fluid Relaxation in the Madison Symmetric Torus
NASA Astrophysics Data System (ADS)
Triana, Joseph C.
Recent measurements and extended MHD simulations expose the importance of two-fluid physics in the relaxation and self-organization of the current and momentum profiles in RFP plasmas. A hallmark of relaxation is that the inductive electric field is not balanced by resistive dissipation, prompting the study of fluctuation-induced emfs in the generalized Ohm's law, E∥ - ηJ∥ = -
Two-fluid Magnetic Relaxation in the MST Reversed Field Pinch
NASA Astrophysics Data System (ADS)
Triana, Joseph; Almagri, Af; McCollam, Kj; Sarff, Js; Sovinec, Cr
2016-10-01
Recent measurements and extended MHD simulations expose the importance of two-fluid physics in the relaxation and self-organization of the current and momentum profiles in RFP plasmas. A hallmark of relaxation is that the inductive electric field is not balanced by resistive dissipation, prompting the study of fluctuation-induced emfs in the generalized Ohm's law, E - ηJ = -
Research in parallel computing
NASA Technical Reports Server (NTRS)
Ortega, James M.; Henderson, Charles
1994-01-01
This report summarizes work on parallel computations for NASA Grant NAG-1-1529 for the period 1 Jan. - 30 June 1994. Short summaries on highly parallel preconditioners, target-specific parallel reductions, and simulation of delta-cache protocols are provided.
Jali - Unstructured Mesh Infrastructure for Multi-Physics Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garimella, Rao V; Berndt, Markus; Coon, Ethan
2017-04-13
Jali is a parallel unstructured mesh infrastructure library designed for use by multi-physics simulations. It supports 2D and 3D arbitrary polyhedral meshes distributed over hundreds to thousands of nodes. Jali can read and write Exodus II meshes along with fields and sets on the mesh, and support for other formats is partially implemented or planned. Jali is built on MSTK (https://github.com/MeshToolkit/MSTK), an open source general purpose unstructured mesh infrastructure library from Los Alamos National Laboratory. While it has been made to work with other mesh frameworks such as MOAB and STKmesh in the past, support for maintaining the interface to these frameworks has been suspended for now. Jali supports distributed as well as on-node parallelism. Support of on-node parallelism is through direct use of the mesh in multi-threaded constructs or through the use of "tiles", which are submeshes or sub-partitions of a partition destined for a compute node.
Optical interconnection using polyimide waveguide for multichip module
NASA Astrophysics Data System (ADS)
Koyanagi, Mitsumasa
1996-01-01
We have developed a parallel processor system with 152 RISC processor chips specific for Monte-Carlo analysis. This system has the ring-bus architecture. The performance of several Gflops is expected in this system according to the computer simulation. However, it was revealed that the data transfer speed of the bus has to be increased more dramatically in order to further increase the performance. Then, we propose to introduce the optical interconnection into the parallel processor system to increase the data transfer speed of the buses. The double ring-bus architecture is employed in this new parallel processor system with optical interconnection. The free-space optical interconnection and the optical waveguide are used for the optical ring-bus. Thin polyimide film was used to form the optical waveguide. A relatively low propagation loss was achieved in the polyimide optical waveguide. In addition, it was confirmed that the propagation direction of signal light can be easily changed by using a micro-mirror.
The energetic ion signature of an O-type neutral line in the geomagnetic tail
NASA Technical Reports Server (NTRS)
Martin, R. F., Jr.; Johnson, D. F.; Speiser, T. W.
1991-01-01
An energetic ion signature is presented which has the potential for remote sensing of an O-type neutral line embedded in a current sheet. A source plasma with a tailward flowing Kappa distribution yields a strongly non-Kappa distribution after interacting with the neutral line: sharp jumps, or ridges, occur in the velocity space distribution function f(v⊥, v∥) associated with both increases and decreases in f. The jumps occur when orbits are reversed in the x-direction: a reversal causing initially earthward particles (low probability in the source distribution) to be observed results in a decrease in f, while a reversal causing initially tailward particles to be observed produces an increase in f. The reversals, and hence the jumps, occur at approximately constant values of perpendicular velocity in both the positive v∥ and negative v∥ half-planes. The results were obtained using single particle simulations in a fixed magnetic field model.
Rideaux, Reuben; Apthorp, Deborah; Edwards, Mark
2015-02-12
Recent findings have indicated that the capacity to consolidate multiple items into visual short-term memory in parallel varies as a function of the type of information. That is, while color can be consolidated in parallel, evidence suggests that orientation cannot. Here we investigated the capacity to consolidate multiple motion directions in parallel and reexamined this capacity using orientation. This was achieved by determining the shortest exposure duration necessary to consolidate a single item, then examining whether two items, presented simultaneously, could be consolidated in that time. The results show that parallel consolidation of direction and orientation information is possible, and that parallel consolidation of direction appears to be limited to two items. Additionally, we demonstrate the importance of adequate separation between the feature intervals used to define items when attempting to consolidate in parallel, suggesting that when multiple items are consolidated in parallel, as opposed to serially, the resolution of the representations suffers. Finally, we used facilitation of spatial attention to show that the deterioration of item resolution occurs during parallel consolidation, as opposed to storage. © 2015 ARVO.
NASA Technical Reports Server (NTRS)
Waheed, Abdul; Yan, Jerry
1998-01-01
This paper presents a model to evaluate the performance and overhead of parallelizing sequential code using compiler directives for multiprocessing on distributed shared memory (DSM) systems. With the increasing popularity of shared address space architectures, it is essential to understand their performance impact on programs that benefit from shared memory multiprocessing. We present a simple model to characterize the performance of programs that are parallelized using compiler directives for shared memory multiprocessing. We parallelized the sequential implementation of the NAS benchmarks using native Fortran77 compiler directives for an Origin2000, which is a DSM system based on a cache-coherent Non-Uniform Memory Access (ccNUMA) architecture. We report measurement-based performance of these parallelized benchmarks from four perspectives: efficacy of the parallelization process; scalability; parallelization overhead; and comparison with hand-parallelized and -optimized versions of the same benchmarks. Our results indicate that sequential programs can conveniently be parallelized for DSM systems using compiler directives, but realizing performance gains as predicted by the performance model depends primarily on minimizing architecture-specific data locality overhead.
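As a rough illustration of why data-locality overhead can erase predicted gains, consider an Amdahl-style model with a per-processor overhead term. This is a hypothetical sketch, not the model from the paper; the `overhead_per_proc` parameter is an assumption standing in for architecture-specific costs:

```python
def predicted_speedup(f_parallel, p, overhead_per_proc=0.0):
    """Amdahl-style speedup on p processors: serial fraction plus
    parallel fraction divided by p, plus an overhead that grows
    with p (e.g. remote-memory traffic on a ccNUMA system)."""
    t = (1.0 - f_parallel) + f_parallel / p + overhead_per_proc * p
    return 1.0 / t
```

With zero overhead this reduces to Amdahl's law; even a small per-processor cost caps and eventually reverses the achievable speedup as p grows.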
NASA Astrophysics Data System (ADS)
Novak, Brian; Astete, Carlos; Sabliov, Cristina; Moldovan, Dorel
2012-02-01
Poly(lactic-co-glycolic acid) (PLGA) is a biodegradable polymer. Nanoparticles of PLGA are commonly used for drug delivery applications. The interaction of the nanoparticles with the cell membrane may influence the rate of their uptake by cells. Both PLGA and cell membranes are negatively charged, so adding positively charged polymers such as trimethyl chitosan (TMC) which adheres to the PLGA particles improves their cellular uptake. The interaction of 3 nm PLGA and TMC-modified-PLGA nanoparticles with lipid bilayers composed of mixtures of phosphatidylcholine and phosphatidylserine lipids was studied using molecular dynamics simulations. The free energy profiles as function of nanoparticles position along the normal direction to the bilayers were calculated, the distribution of phosphatidylserine lipids as a function of distance of the particle from the bilayer was calculated, and the time scale for particle motion in the directions parallel to the bilayer surface was estimated.
Pure quasi-P wave equation and numerical solution in 3D TTI media
NASA Astrophysics Data System (ADS)
Zhang, Jian-Min; He, Bing-Shou; Tang, Huai-Gu
2017-03-01
Based on the pure quasi-P wave equation in transverse isotropic media with a vertical symmetry axis (VTI media), a quasi-P wave equation is obtained in transverse isotropic media with a tilted symmetry axis (TTI media). This is achieved using projection transformation, which rotates the direction vector in the coordinate system of observation toward the direction vector for the coordinate system in which the z-component is parallel to the symmetry axis of the TTI media. The equation has a simple form, is easily calculated, is not influenced by the pseudo-shear wave, and can be calculated reliably when δ is greater than ɛ. The finite difference method is used to solve the equation. In addition, a perfectly matched layer (PML) absorbing boundary condition is obtained for the equation. Theoretical analysis and numerical simulation results with forward modeling prove that the equation can accurately simulate a quasi-P wave in TTI media.
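The projection transformation described above amounts to a rotation whose third axis is the symmetry axis. A minimal sketch, assuming a tilt angle theta and azimuth phi convention (the function name and angle convention are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def rotation_to_axis(theta, phi):
    """Rotation taking the observation frame to a frame whose
    z-axis is parallel to the TTI symmetry axis with tilt theta
    and azimuth phi (axis direction: sin(theta)cos(phi),
    sin(theta)sin(phi), cos(theta))."""
    ct, st = np.cos(theta), np.sin(theta)
    cp, sp = np.cos(phi), np.sin(phi)
    return np.array([
        [ct * cp, ct * sp, -st],
        [-sp,     cp,      0.0],
        [st * cp, st * sp,  ct],
    ])
```

Applying this matrix to the symmetry-axis direction vector yields the z unit vector, which is the property the projection transformation relies on.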
NASA Technical Reports Server (NTRS)
Reinsch, K. G. (Editor); Schmidt, W. (Editor); Ecer, A. (Editor); Haeuser, Jochem (Editor); Periaux, J. (Editor)
1992-01-01
A conference was held on parallel computational fluid dynamics, and related papers were produced. Topics discussed in these papers include: parallel implicit and explicit solvers for compressible flow, parallel computational techniques for the Euler and Navier-Stokes equations, grid generation techniques for parallel computers, and aerodynamic simulation on massively parallel systems.
NASA Astrophysics Data System (ADS)
Furuichi, M.; Nishiura, D.
2015-12-01
Fully Lagrangian methods such as Smoothed Particle Hydrodynamics (SPH) and the Discrete Element Method (DEM) have been widely used to solve continuum and particle motions in computational geodynamics. These mesh-free methods are well suited to problems with complex geometries and boundaries. In addition, their Lagrangian nature allows non-diffusive advection, which is useful for tracking history-dependent properties (e.g., rheology) of the material. These potential advantages over mesh-based methods offer effective numerical applications to geophysical flows and tectonic processes, for example, tsunamis with free surfaces and floating bodies, magma intrusion with fracture of rock, and shear-zone pattern generation in granular deformation. Investigating such geodynamical problems with particle-based methods requires millions to billions of particles for a realistic simulation, so parallel computing is important for handling this huge computational cost. An efficient parallel implementation of SPH and DEM methods is, however, known to be difficult, especially for distributed-memory architectures. Lagrangian methods inherently suffer from workload imbalance when parallelized over fixed spatial domains, because particles move around and workloads change during the simulation. Dynamic load balancing is therefore a key technique for performing large-scale SPH and DEM simulations. In this work, we present a parallel implementation technique for the SPH and DEM methods utilizing dynamic load-balancing algorithms, aimed at high-resolution simulations over large domains on massively parallel supercomputer systems. Our method treats the imbalance in execution time across MPI processes as the nonlinear term of the parallel domain decomposition and minimizes it with a Newton-like iteration; to perform flexible domain decomposition in space, the slice-grid algorithm is used.
Numerical tests show that our approach is suitable for handling particles with different calculation costs (e.g., boundary particles) as well as heterogeneous computer architectures. We analyze the parallel efficiency and scalability on supercomputer systems (the K computer, Earth Simulator 3, etc.).
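The slice-grid idea with imbalance-driven boundary motion can be sketched as a simple relaxation step. This is an illustrative scheme under assumed parameters, not the authors' Newton-like formulation; the `gain` parameter is an assumption:

```python
def rebalance_step(boundaries, times, gain=0.1):
    """Move each interior slice boundary toward the slower
    neighbor so that the slower slice's subdomain shrinks;
    `times` holds the measured execution time of each slice
    from the previous simulation step."""
    new = list(boundaries)
    for i in range(1, len(boundaries) - 1):
        t_left, t_right = times[i - 1], times[i]
        span = boundaries[i + 1] - boundaries[i - 1]
        # Positive imbalance: left slice is slower, so shift the
        # boundary left, handing work to the right neighbor.
        imbalance = (t_left - t_right) / (t_left + t_right)
        new[i] = boundaries[i] - gain * imbalance * span
    return new
```

Iterating this step drives the measured per-slice times toward equality, which is the fixed point the Newton-like iteration in the abstract converges to more rapidly.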
SIERRA Low Mach Module: Fuego User Manual Version 4.46.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sierra Thermal/Fluid Team
2017-09-01
The SIERRA Low Mach Module: Fuego along with the SIERRA Participating Media Radiation Module: Syrinx, henceforth referred to as Fuego and Syrinx, respectively, are the key elements of the ASCI fire environment simulation project. The fire environment simulation project is directed at characterizing both open large-scale pool fires and building enclosure fires. Fuego represents the turbulent, buoyantly-driven incompressible flow, heat transfer, mass transfer, combustion, soot, and absorption coefficient model portion of the simulation software. Syrinx represents the participating-media thermal radiation mechanics. This project is an integral part of the SIERRA multi-mechanics software development project. Fuego depends heavily upon the core architecture developments provided by SIERRA for massively parallel computing, solution adaptivity, and mechanics coupling on unstructured grids.
SIERRA Low Mach Module: Fuego Theory Manual Version 4.44
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sierra Thermal /Fluid Team
2017-04-01
The SIERRA Low Mach Module: Fuego along with the SIERRA Participating Media Radiation Module: Syrinx, henceforth referred to as Fuego and Syrinx, respectively, are the key elements of the ASCI fire environment simulation project. The fire environment simulation project is directed at characterizing both open large-scale pool fires and building enclosure fires. Fuego represents the turbulent, buoyantly-driven incompressible flow, heat transfer, mass transfer, combustion, soot, and absorption coefficient model portion of the simulation software. Syrinx represents the participating-media thermal radiation mechanics. This project is an integral part of the SIERRA multi-mechanics software development project. Fuego depends heavily upon the core architecture developments provided by SIERRA for massively parallel computing, solution adaptivity, and mechanics coupling on unstructured grids.
SIERRA Low Mach Module: Fuego Theory Manual Version 4.46.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sierra Thermal/Fluid Team
The SIERRA Low Mach Module: Fuego along with the SIERRA Participating Media Radiation Module: Syrinx, henceforth referred to as Fuego and Syrinx, respectively, are the key elements of the ASCI fire environment simulation project. The fire environment simulation project is directed at characterizing both open large-scale pool fires and building enclosure fires. Fuego represents the turbulent, buoyantly-driven incompressible flow, heat transfer, mass transfer, combustion, soot, and absorption coefficient model portion of the simulation software. Syrinx represents the participating-media thermal radiation mechanics. This project is an integral part of the SIERRA multi-mechanics software development project. Fuego depends heavily upon the core architecture developments provided by SIERRA for massively parallel computing, solution adaptivity, and mechanics coupling on unstructured grids.
NASA Astrophysics Data System (ADS)
Fu, S.-P.; Peng, Z.; Yuan, H.; Kfoury, R.; Young, Y.-N.
2017-01-01
Lipid bilayer membranes have been extensively studied by coarse-grained molecular dynamics simulations. Numerical efficiencies have been reported in cases of aggressive coarse-graining, where several lipids are coarse-grained into a particle of size 4 ∼ 6 nm so that there is only one particle in the thickness direction. Yuan et al. proposed a pair-potential between these one-particle-thick coarse-grained lipid particles to capture the mechanical properties of a lipid bilayer membrane, such as gel-fluid-gas phase transitions of lipids, diffusion, and bending rigidity (Yuan et al., 2010). In this work we implement such an interaction potential in LAMMPS to simulate large-scale lipid systems such as a giant unilamellar vesicle (GUV) and red blood cells (RBCs). We also consider the effect of the cytoskeleton on the lipid membrane dynamics as a model for RBC dynamics, and incorporate coarse-grained water molecules to account for hydrodynamic interactions. The interaction between the coarse-grained water molecules (explicit solvent molecules) is modeled as a Lennard-Jones (L-J) potential. To demonstrate that the proposed methods do capture the observed dynamics of vesicles and RBCs, we focus on two sets of LAMMPS simulations: (1) vesicle shape transitions with enclosed volume and (2) RBC shape transitions with different enclosed volumes. Finally, utilizing the parallel computing capability in LAMMPS, we provide some timing results for parallel coarse-grained simulations to illustrate that it is possible to use LAMMPS to simulate large-scale realistic complex biological membranes for more than 1 ms.
A tool for simulating parallel branch-and-bound methods
NASA Astrophysics Data System (ADS)
Golubeva, Yana; Orlov, Yury; Posypkin, Mikhail
2016-01-01
The Branch-and-Bound method is known as one of the most powerful but very resource-consuming global optimization methods. Parallel and distributed computing can efficiently cope with this issue. The major difficulty in the parallel B&B method is the need for dynamic load redistribution. Therefore, the design and study of load balancing algorithms is a separate and very important research topic. This paper presents a tool for simulating the parallel Branch-and-Bound method. The simulator allows one to run load balancing algorithms with various numbers of processors, sizes of the search tree, and characteristics of the supercomputer's interconnect, thereby fostering deep study of load distribution strategies. The process of resolution of the optimization problem by the B&B method is replaced by a stochastic branching process. Data exchanges are modeled using the concept of logical time. The user-friendly graphical interface to the simulator provides efficient visualization and convenient performance analysis.
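Replacing the optimization run with a stochastic branching process keeps the simulator independent of any particular problem instance. A minimal sketch of the idea (the function name and parameters are hypothetical, not the simulator's actual model):

```python
import random

def tree_size(p_branch=0.45, max_depth=25, seed=1):
    """Generate the node count of a synthetic B&B search tree:
    each node above max_depth spawns two children with probability
    p_branch (subcritical, 2 * 0.45 < 1, so the tree stays finite
    with high probability even without the depth cap)."""
    rng = random.Random(seed)
    size, frontier = 1, [0]
    while frontier:
        depth = frontier.pop()
        if depth < max_depth and rng.random() < p_branch:
            size += 2
            frontier.extend([depth + 1, depth + 1])
    return size
```

Since every branching event adds exactly two nodes, the size is always odd; a scheduler model can then hand such synthetic workloads to simulated processors under a chosen load-balancing policy while logical time tracks data exchanges.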
Large-scale large eddy simulation of nuclear reactor flows: Issues and perspectives
DOE Office of Scientific and Technical Information (OSTI.GOV)
Merzari, Elia; Obabko, Aleks; Fischer, Paul
Numerical simulation has been an intrinsic part of nuclear engineering research since its inception. In recent years a transition is occurring toward predictive, first-principle-based tools such as computational fluid dynamics. Even with the advent of petascale computing, however, such tools still have significant limitations. In the present work some of these issues, and in particular the presence of massive multiscale separation, are discussed, as well as some of the research conducted to mitigate them. Petascale simulations at high fidelity (large eddy simulation/direct numerical simulation) were conducted with the massively parallel spectral element code Nek5000 on a series of representative problems. These simulations shed light on the requirements of several types of simulation: (1) axial flow around fuel rods, with particular attention to wall effects; (2) natural convection in the primary vessel; and (3) flow in a rod bundle in the presence of spacing devices. Finally, the focus of the work presented here is on the lessons learned and the requirements to perform these simulations at exascale. Additional physical insight gained from these simulations is also emphasized.
Large-scale large eddy simulation of nuclear reactor flows: Issues and perspectives
Merzari, Elia; Obabko, Aleks; Fischer, Paul; ...
2016-11-03
Numerical simulation has been an intrinsic part of nuclear engineering research since its inception. In recent years a transition is occurring toward predictive, first-principle-based tools such as computational fluid dynamics. Even with the advent of petascale computing, however, such tools still have significant limitations. In the present work some of these issues, and in particular the presence of massive multiscale separation, are discussed, as well as some of the research conducted to mitigate them. Petascale simulations at high fidelity (large eddy simulation/direct numerical simulation) were conducted with the massively parallel spectral element code Nek5000 on a series of representative problems. These simulations shed light on the requirements of several types of simulation: (1) axial flow around fuel rods, with particular attention to wall effects; (2) natural convection in the primary vessel; and (3) flow in a rod bundle in the presence of spacing devices. Finally, the focus of the work presented here is on the lessons learned and the requirements to perform these simulations at exascale. Additional physical insight gained from these simulations is also emphasized.
Parallel, Asynchronous Executive (PAX): System concepts, facilities, and architecture
NASA Technical Reports Server (NTRS)
Jones, W. H.
1983-01-01
The Parallel, Asynchronous Executive (PAX) is a software operating system simulation that allows many computers to work on a single problem at the same time. PAX is currently implemented on a UNIVAC 1100/42 computer system. Independent UNIVAC runstreams are used to simulate independent computers. Data are shared among independent UNIVAC runstreams through shared mass-storage files. PAX has achieved the following: (1) applied several computing processes simultaneously to a single, logically unified problem; (2) resolved most parallel processor conflicts by careful work assignment; (3) resolved by means of worker requests to PAX all conflicts not resolved by work assignment; (4) provided fault isolation and recovery mechanisms to meet the problems of an actual parallel, asynchronous processing machine. Additionally, one real-life problem has been constructed for the PAX environment. This is CASPER, a collection of aerodynamic and structural dynamic problem simulation routines. CASPER is not discussed in this report except to provide examples of parallel-processing techniques.
Electron and ion heating by whistler turbulence: Three-dimensional particle-in-cell simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hughes, R. Scott; Gary, S. Peter; Wang, Joseph
2014-12-17
Three-dimensional particle-in-cell simulations of decaying whistler turbulence are carried out on a collisionless, homogeneous, magnetized, electron-ion plasma model. The simulations use an initial ensemble of relatively long wavelength whistler modes with a broad range of initial propagation directions and an initial electron beta βe = 0.05. The computations follow the temporal evolution of the fluctuations as they cascade into broadband turbulent spectra at shorter wavelengths. Three simulations correspond to successively larger simulation boxes and successively longer wavelengths of the initial fluctuations. The computations confirm previous results showing electron heating is preferentially parallel to the background magnetic field B₀, and ion heating is preferentially perpendicular to B₀. The new results here are that larger simulation boxes and longer initial whistler wavelengths yield weaker overall dissipation, consistent with linear dispersion theory predictions of decreased damping; stronger ion heating, consistent with a stronger ion Landau resonance; and weaker electron heating.
An Object-Oriented Serial DSMC Simulation Package
NASA Astrophysics Data System (ADS)
Liu, Hongli; Cai, Chunpei
2011-05-01
A newly developed three-dimensional direct simulation Monte Carlo (DSMC) simulation package, named GRASP ("Generalized Rarefied gAs Simulation Package"), is reported in this paper. This package utilizes the concept of a simulation engine, many C++ features, and software design patterns. The package has an open architecture which can benefit further development and maintenance of the code. In order to reduce the engineering time for three-dimensional models, a hybrid grid scheme, combined with a flexible data structure implemented in C++, is used in this package. This scheme utilizes a local data structure based on the computational cell to achieve high performance on workstation processors. This data structure allows the DSMC algorithm to be very efficiently parallelized with domain decomposition, and it provides much flexibility in terms of grid types. This package can utilize traditional structured, unstructured, or hybrid grids within the framework of a single code to model arbitrarily complex geometries and to simulate rarefied gas flows. Benchmark test cases indicate that this package has satisfactory accuracy for complex rarefied gas flows.
A Metascalable Computing Framework for Large Spatiotemporal-Scale Atomistic Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nomura, K; Seymour, R; Wang, W
2009-02-17
A metascalable (or 'design once, scale on new architectures') parallel computing framework has been developed for large spatiotemporal-scale atomistic simulations of materials based on spatiotemporal data locality principles, which is expected to scale on emerging multipetaflops architectures. The framework consists of: (1) an embedded divide-and-conquer (EDC) algorithmic framework based on spatial locality to design linear-scaling algorithms for high complexity problems; (2) a space-time-ensemble parallel (STEP) approach based on temporal locality to predict long-time dynamics, while introducing multiple parallelization axes; and (3) a tunable hierarchical cellular decomposition (HCD) parallelization framework to map these O(N) algorithms onto a multicore cluster based on a hybrid implementation combining message passing and critical-section-free multithreading. The EDC-STEP-HCD framework exposes maximal concurrency and data locality, thereby achieving: (1) inter-node parallel efficiency well over 0.95 for 218 billion-atom molecular-dynamics and 1.68 trillion electronic-degrees-of-freedom quantum-mechanical simulations on 212,992 IBM BlueGene/L processors (superscalability); (2) high intra-node, multithreading parallel efficiency (nanoscalability); and (3) nearly perfect time/ensemble parallel efficiency (eon-scalability). The spatiotemporal scale covered by MD simulation on a sustained petaflops computer per day (i.e., petaflops · day of computing) is estimated as NT = 2.14 (e.g., N = 2.14 million atoms for T = 1 microsecond).
PRATHAM: Parallel Thermal Hydraulics Simulations using Advanced Mesoscopic Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joshi, Abhijit S; Jain, Prashant K; Mudrich, Jaime A
2012-01-01
At the Oak Ridge National Laboratory, efforts are under way to develop a 3D, parallel LBM code called PRATHAM (PaRAllel Thermal Hydraulic simulations using Advanced Mesoscopic Methods) to demonstrate the accuracy and scalability of LBM for turbulent flow simulations in nuclear applications. The code has been developed using FORTRAN-90 and parallelized using the Message Passing Interface (MPI) library. The Silo library is used to compact and write the data files, and the VisIt visualization software is used to post-process the simulation data in parallel. Both the single relaxation time (SRT) and multi relaxation time (MRT) LBM schemes have been implemented in PRATHAM. To capture turbulence without prohibitively increasing the grid resolution requirements, an LES approach [5] is adopted, allowing large-scale eddies to be numerically resolved while modeling the smaller (subgrid) eddies. In this work, a Smagorinsky model has been used, which modifies the fluid viscosity by an additional eddy viscosity depending on the magnitude of the rate-of-strain tensor. In LBM, this is achieved by locally varying the relaxation time of the fluid.
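In lattice units, the Smagorinsky closure enters the LBM through the local relaxation time. The sketch below assumes the standard BGK relation between viscosity and relaxation time; the Smagorinsky constant and filter width are illustrative defaults, not PRATHAM's actual values:

```python
def local_relaxation_time(nu_molecular, strain_rate_mag, c_s=0.17, delta=1.0):
    """BGK relaxation time with a Smagorinsky eddy viscosity,
    in lattice units: nu_t = (C_s * Delta)^2 * |S|, and
    tau = 3 * (nu_molecular + nu_t) + 1/2."""
    nu_turbulent = (c_s * delta) ** 2 * strain_rate_mag
    return 3.0 * (nu_molecular + nu_turbulent) + 0.5
```

Regions of strong strain thus relax more slowly toward equilibrium, which is how the subgrid model damps unresolved eddies without refining the grid.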
Dust Dynamics in Protoplanetary Disks: Parallel Computing with PVM
NASA Astrophysics Data System (ADS)
de La Fuente Marcos, Carlos; Barge, Pierre; de La Fuente Marcos, Raúl
2002-03-01
We describe a parallel version of our high-order-accuracy particle-mesh code for the simulation of collisionless protoplanetary disks. We use this code to carry out a massively parallel, two-dimensional, time-dependent numerical simulation, which includes dust particles, to study the potential role of large-scale gaseous vortices in protoplanetary disks. This noncollisional problem is easy to parallelize on message-passing multicomputer architectures. We performed the simulations on a cache-coherent nonuniform memory access Origin 2000 machine, using both the parallel virtual machine (PVM) and message-passing interface (MPI) message-passing libraries. Our performance analysis suggests that, for our problem, PVM is about 25% faster than MPI. Using PVM and MPI made it possible to reduce CPU time and increase code performance. This allows for simulations with a large number of particles (N ~ 10^5-10^6) in reasonable CPU times. The performances of our implementation of the parallel code on an Origin 2000 supercomputer are presented and discussed. They exhibit very good speedup behavior and low load imbalance. Our results confirm that giant gaseous vortices can play a dominant role in giant planet formation.
Starn, J. Jeffrey; Stone, Janet Radway
2005-01-01
Generic ground-water-flow simulation models show that geohydrologic factors (fracture types, fracture geometry, and surficial materials) affect the size, shape, and location of source-water areas for bedrock wells. In this study, conducted by the U.S. Geological Survey in cooperation with the Connecticut Department of Public Health, ground-water flow was simulated to bedrock wells in three settings (on hilltops and hillsides with no surficial aquifer, in a narrow valley with a surficial aquifer, and in a broad valley with a surficial aquifer) to show how different combinations of geohydrologic factors in different topographic settings affect the dimensions and locations of source-water areas in Connecticut. Three principal types of fractures are present in bedrock in Connecticut: (1) layer-parallel fractures, which developed as partings along bedding in sedimentary rock and compositional layering or foliation in metamorphic rock (dips of these fractures can be gentle or steep); (2) unroofing joints, which developed as strain-release fractures parallel to the land surface as overlying rock was removed by erosion through geologic time; and (3) cross fractures and joints, which developed as a result of tectonically generated stresses that produced typically near-vertical or steeply dipping fractures. Fracture geometry is defined primarily by the presence or absence of layering in the rock unit, and, if layered, by the angle of dip in the layering. Where layered rocks dip steeply, layer-parallel fracturing generally is dominant; unroofing joints also are typically well developed. Where layered rocks dip gently, layer-parallel fracturing also is dominant, and connections among these fractures are provided only by the cross fractures. In gently dipping rocks, unroofing joints generally do not form as a separate fracture set; instead, strain release from unroofing has occurred along gently dipping layer-parallel fractures, enhancing their aperture.
In nonlayered and variably layered rocks, layer-parallel fracturing is absent or poorly developed; fracturing is dominated by well-developed subhorizontal unroofing joints and steeply dipping, tectonically generated fractures and (or) cooling joints. Cross fractures (or cooling joints) in nonlayered and variably layered rocks have more random orientations than in layered rocks. Overall, nonlayered or variably layered rocks do not have a strongly developed fracture direction. Generic ground-water-flow simulation models showed that fracture geometry and other geohydrologic factors affect the dimensions and locations of source-water areas for bedrock wells. In general, source-water areas to wells reflect the direction of ground-water flow, which mimics the land-surface topography. Source-water areas to wells in a hilltop setting were not affected greatly by simulated fracture zones, except for an extensive vertical fracture zone. Source-water areas to wells in a hillside setting were not affected greatly by simulated fracture zones, except for the combination of a subhorizontal fracture zone and low bedrock vertical hydraulic conductivity, as might be the case where an extensive subhorizontal fracture zone is not connected or is poorly connected to the surface through vertical fractures. Source-water areas to wells in a narrow valley setting reflect complex ground-water-flow paths. The typical flow path originates in the uplands and passes through either till or bedrock into the surficial aquifer, although only a small area of the surficial aquifer actually contributes water to the well. Source-water areas in uplands can include substantial areas on both sides of a river. Source-water areas for wells in this setting are affected mainly by the rate of ground-water recharge and by the degree of anisotropy. Source-water areas to wells in a broad valley setting (bedrock with a low angle of dip) are affected greatly by fracture properties. 
The effect of a given fracture is to channel the
Yager, Richard M.; Southworth, Scott C.; Voss, Clifford I.
2008-01-01
Ground-water flow was simulated using variable-direction anisotropy in hydraulic conductivity to represent the folded, fractured sedimentary rocks that underlie the Shenandoah Valley in Virginia and West Virginia. The anisotropy is a consequence of the orientations of fractures that provide preferential flow paths through the rock, such that the direction of maximum hydraulic conductivity is oriented within bedding planes, which generally strike N30 deg E; the direction of minimum hydraulic conductivity is perpendicular to the bedding. The finite-element model SUTRA was used to specify variable directions of the hydraulic-conductivity tensor in order to represent changes in the strike and dip of the bedding throughout the valley. The folded rocks in the valley are collectively referred to as the Massanutten synclinorium, which contains about a 5-km thick section of clastic and carbonate rocks. For the model, the bedrock was divided into four units: a 300-m thick top unit with 10 equally spaced layers through which most ground water is assumed to flow, and three lower units each containing 5 layers of increasing thickness that correspond to the three major rock units in the valley: clastic, carbonate and metamorphic rocks. A separate zone in the carbonate rocks that is overlain by colluvial gravel - called the western-toe carbonate unit - was also distinguished. Hydraulic-conductivity values were estimated through model calibration for each of the four rock units, using data from 354 wells and 23 streamflow-gaging stations. Conductivity tensors for metamorphic and western-toe carbonate rocks were assumed to be isotropic, while conductivity tensors for carbonate and clastic rocks were assumed to be anisotropic. The directions of the conductivity tensor for carbonate and clastic rocks were interpolated for each mesh element from a stack of 'form surfaces' that provided a three-dimensional representation of bedrock structure. 
Model simulations were run with (1) variable strike and dip, in which conductivity tensors were aligned with the strike and dip of the bedding, and (2) uniform strike in which conductivity tensors were assumed to be horizontally isotropic with the maximum conductivity direction parallel to the N30 deg E axis of the valley and the minimum conductivity direction perpendicular to the horizontal plane. Simulated flow penetrated deeper into the aquifer system with the uniform-strike tensor than with the variable-strike-and-dip tensor. Sensitivity analyses suggest that additional information on recharge rates would increase confidence in the estimated parameter values. Two applications of the model were conducted - the first, to determine depth of recent ground-water flow by simulating the distribution of ground-water ages, showed that most shallow ground water is less than 10 years old. Ground-water age distributions computed by variable-strike-and-dip and uniform-strike models were similar, but differed beneath Massanutten Mountain in the center of the valley. The variable-strike-and-dip model simulated flow from west to east parallel to the bedding of the carbonate rocks beneath Massanutten Mountain, while the uniform-strike model, in which flow was largely controlled by topography, simulated this same area as an east-west ground-water divide. The second application, which delineated capture zones for selected well fields in the valley, showed that capture zones delineated with both models were similar in plan view, but differed in vertical extent. Capture zones simulated by the variable-strike-and-dip model extended downdip with the bedding of carbonate rock and were relatively shallow, while those simulated by the uniform-strike model extended to the bottom of the flow system, which is unrealistic. 
These results suggest that simulations of ground-water flow through folded fractured rock can be constructed using SUTRA to represent variable orientations of the hydraulic-conductivity tensor and produce a
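The variable-direction anisotropy described above amounts to rotating a principal-axis conductivity tensor so that the maximum conductivity lies along strike and the minimum is normal to bedding. A minimal sketch of that rotation follows; the conductivity values, the east/north axis convention, and the function name are illustrative assumptions, not SUTRA's internals.

```python
import numpy as np

def conductivity_tensor(k_max, k_med, k_min, strike_deg, dip_deg):
    """Rotate a principal-axis hydraulic-conductivity tensor into
    geographic coordinates (x = east, y = north, z = up) so that k_max
    lies along strike, k_med down dip within the bedding plane, and
    k_min perpendicular to bedding. Illustrative sketch only."""
    s = np.radians(strike_deg)
    d = np.radians(dip_deg)
    # Orthonormal frame: along-strike, down-dip, and bedding-normal vectors.
    u_strike = np.array([np.sin(s), np.cos(s), 0.0])
    u_dip = np.array([np.cos(s) * np.cos(d), -np.sin(s) * np.cos(d), -np.sin(d)])
    u_norm = np.cross(u_strike, u_dip)
    R = np.column_stack([u_strike, u_dip, u_norm])
    K = np.diag([k_max, k_med, k_min])
    return R @ K @ R.T

# Bedding striking N30E and dipping 20 degrees (assumed values).
K = conductivity_tensor(1e-4, 1e-5, 1e-6, strike_deg=30.0, dip_deg=20.0)
```

The rotation preserves the principal values, so the eigenvalues of the rotated tensor are still (k_min, k_med, k_max); only the directions change with strike and dip, which is what lets the model follow the form surfaces element by element.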
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bylaska, Eric J., E-mail: Eric.Bylaska@pnnl.gov; Weare, Jonathan Q., E-mail: weare@uchicago.edu; Weare, John H., E-mail: jweare@ucsd.edu
2013-08-21
Parallel in time simulation algorithms are presented and applied to conventional molecular dynamics (MD) and ab initio molecular dynamics (AIMD) models of realistic complexity. Assuming that a forward time integrator, f (e.g., the Verlet algorithm), is available to propagate the system from time t_i (trajectory positions and velocities x_i = (r_i, v_i)) to time t_{i+1} (x_{i+1}) by x_{i+1} = f_i(x_i), the dynamics problem spanning an interval from t_0…t_M can be transformed into a root finding problem, F(X) = [x_i − f(x_{i−1})]_{i=1,M} = 0, for the trajectory variables. The root finding problem is solved using a variety of root finding techniques, including quasi-Newton and preconditioned quasi-Newton schemes that are all unconditionally convergent. The algorithms are parallelized by assigning a processor to each time-step entry in the columns of F(X). The relation of this approach to other recently proposed parallel in time methods is discussed, and the effectiveness of various approaches to solving the root finding problem is tested. We demonstrate that more efficient dynamical models based on simplified interactions or coarsening time-steps provide preconditioners for the root finding problem. However, for MD and AIMD simulations, such preconditioners are not required to obtain reasonable convergence, and their cost must be considered in the performance of the algorithm. The parallel in time algorithms developed are tested by applying them to MD and AIMD simulations of size and complexity similar to those encountered in present-day applications. These include a 1000 Si atom MD simulation using Stillinger-Weber potentials, and a HCl + 4H2O AIMD simulation at the MP2 level. The maximum speedup ((serial execution time)/(parallel execution time)) obtained by parallelizing the Stillinger-Weber MD simulation was nearly 3.0. 
For the AIMD MP2 simulations, the algorithms achieved speedups of up to 14.3. The parallel in time algorithms can be implemented in a distributed computing environment using very slow transmission control protocol/Internet protocol (TCP/IP) networks. Scripts written in Python that make calls to a precompiled quantum chemistry package (NWChem) are demonstrated to provide an actual speedup of 8.2 for a 2.5 ps AIMD simulation of HCl + 4H2O at the MP2/6-31G* level. Implemented in this way, these algorithms can be used for long-time, high-level AIMD simulations at a modest cost using machines connected by very slow networks such as WiFi, or in different time zones connected by the Internet. The algorithms can also be used with programs that are already parallel. Using these algorithms, we are able to reduce the cost of a MP2/6-311++G(2d,2p) simulation that had reached its maximum possible speedup in the parallelization of the electronic structure calculation from 32 s/time step to 6.9 s/time step.
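The root-finding formulation F(X) = [x_i − f(x_{i−1})] = 0 can be sketched in a few lines. The sketch below is a toy with an assumed simple one-step integrator; the paper's quasi-Newton solvers and one-processor-per-time-step distribution are replaced by a plain fixed-point sweep, which serves only to show that the residual's entries are independent and that its root is the serial trajectory.

```python
import numpy as np

def parallel_in_time_residual(X, f, x0):
    """Residual F(X) = [x_i - f(x_{i-1})], i = 1..M, over the stacked
    trajectory unknowns X = (x_1, ..., x_M). Each entry depends only on
    the previous state in X, so in the paper every entry is evaluated by
    its own processor; here a loop stands in for that."""
    F = np.empty_like(X)
    F[0] = X[0] - f(x0)
    for i in range(1, len(X)):
        F[i] = X[i] - f(X[i - 1])
    return F

# Toy one-step integrator (semi-implicit Euler for a harmonic oscillator);
# the real setting would use Verlet with an MD force field.
dt, k = 0.01, 1.0
def f(x):
    r, v = x
    v_new = v - k * r * dt
    return np.array([r + v_new * dt, v_new])

x0 = np.array([1.0, 0.0])
M = 10
X = np.tile(x0, (M, 1))      # crude initial guess: every entry equals x0
for _ in range(M):           # plain fixed-point sweeps; exact after M sweeps
    X = X - parallel_in_time_residual(X, f, x0)
```

At the root, X reproduces exactly what M sequential applications of f would give; the interest of the method is that quasi-Newton iterations on F converge in far fewer sweeps than M when a good preconditioner (a cheaper dynamical model) is available.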
Qiu, Mingfeng; Bailey, Brian N.; Stoll, Rob
2014-01-01
The validity of the compressible Reynolds equation to predict the local pressure in a gas-lubricated, textured parallel slider bearing is investigated. The local bearing pressure is numerically simulated using the Reynolds equation and the Navier-Stokes equations for different texture geometries and operating conditions. The respective results are compared and the simplifying assumptions inherent in the application of the Reynolds equation are quantitatively evaluated. The deviation between the local bearing pressure obtained with the Reynolds equation and the Navier-Stokes equations increases with increasing texture aspect ratio, because a significant cross-film pressure gradient and a large velocity gradient in the sliding direction develop in the lubricant film. Inertia is found to be negligible throughout this study. PMID:25049440
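For intuition about what the Reynolds equation computes, the steady incompressible 1D form, d/dx(h^3 dp/dx) = 6*mu*U*dh/dx, can be solved directly for a slider geometry. This is a deliberately simplified stand-in: the paper treats the compressible, textured 2D case, and the film shape, viscosity, and sliding speed below are illustrative assumptions.

```python
import numpy as np

def reynolds_pressure(hfun, U, mu, L, n=200):
    """Solve the steady incompressible 1D Reynolds equation
    d/dx(h^3 dp/dx) = 6*mu*U*dh/dx with p = 0 at both ends, using a
    conservative finite-difference scheme with h^3 at cell midpoints.
    A minimal sketch, not the paper's compressible 2D solver."""
    x = np.linspace(0.0, L, n)
    h = hfun(x)
    dx = x[1] - x[0]
    hm = ((h[:-1] + h[1:]) / 2.0) ** 3          # h^3 at midpoints
    main = -(hm[:-1] + hm[1:])                  # interior-node diagonal
    A = (np.diag(main) + np.diag(hm[1:-1], 1) + np.diag(hm[1:-1], -1)) / dx**2
    rhs = 6.0 * mu * U * np.gradient(h, dx)[1:-1]
    p = np.zeros_like(x)
    p[1:-1] = np.linalg.solve(A, rhs)
    return x, p

# Linear converging wedge (assumed geometry): film thins from 20 to 10 um.
x, p = reynolds_pressure(lambda x: 2e-5 - 1e-5 * x, U=1.0, mu=1e-5, L=1.0)
```

A converging film with positive sliding speed produces positive pressure throughout the interior, which is the load-carrying mechanism the bearing comparison is probing.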
NASA Astrophysics Data System (ADS)
Osorio-Murillo, C. A.; Over, M. W.; Frystacky, H.; Ames, D. P.; Rubin, Y.
2013-12-01
A new software application called MAD# has been coupled with the HTCondor high-throughput computing system to aid scientists and educators with the characterization of spatial random fields and to enable understanding of the spatial distribution of parameters used in hydrogeologic and related modeling. MAD# is an open-source desktop software application used to characterize spatial random fields using direct and indirect information through a Bayesian inverse modeling technique called the Method of Anchored Distributions (MAD). MAD relates indirect information to a target spatial random field via a forward simulation model. MAD# executes the inverse process by running the forward model multiple times to transfer information from the indirect data to the target variable. MAD# offers two parallelization profiles according to the computational resources available: a single computer with multiple cores, or multiple computers with multiple cores through HTCondor. HTCondor is a system that manages a cluster of desktop computers, submitting serial or parallel jobs using scheduling policies, resource monitoring, and a job-queuing mechanism. This poster will show how MAD# reduces the execution time of the characterization of random fields using these two parallel approaches in different case studies. A test of the approach was conducted using a 1D problem with 400 cells to characterize the saturated conductivity, residual water content, and shape parameters of the Mualem-van Genuchten model in four materials via the HYDRUS model. The number of simulations evaluated in the inversion was 10 million. Using the single-computer approach (eight cores), 100,000 simulations were evaluated in 12 hours (approximately 1200 hours for the full 10 million). In the evaluation on HTCondor, 32 desktop computers (132 cores) were used, with a processing time of 60 hours, non-continuous, over five days. HTCondor thus reduced the processing time for uncertainty characterization by a factor of 20 (from 1200 hours to 60 hours).
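The reported timings are consistent with simple throughput arithmetic under an ideal-scaling assumption (the helper function below is illustrative; figures are taken from the abstract, and real HTCondor runs add queuing overhead and heterogeneous hardware, hence the gap between the ideal estimate and the observed 60 hours):

```python
def wall_clock_hours(n_sims, sims_per_core_hour, n_cores):
    """Ideal-scaling wall-clock estimate for an embarrassingly parallel
    batch of forward-model runs; assumes identical cores and no
    scheduling overhead."""
    return n_sims / (sims_per_core_hour * n_cores)

# 100,000 simulations in 12 h on 8 cores  ->  ~1042 simulations/core-hour.
rate = 100_000 / (12 * 8)
single = wall_clock_hours(10_000_000, rate, 8)    # ~1200 h on one machine
condor = wall_clock_hours(10_000_000, rate, 132)  # ~73 h ideal on 132 cores
```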
NASA Technical Reports Server (NTRS)
Bailey, D. H.; Barszcz, E.; Barton, J. T.; Carter, R. L.; Lasinski, T. A.; Browning, D. S.; Dagum, L.; Fatoohi, R. A.; Frederickson, P. O.; Schreiber, R. S.
1991-01-01
A new set of benchmarks has been developed for the performance evaluation of highly parallel supercomputers in the framework of the NASA Ames Numerical Aerodynamic Simulation (NAS) Program. These consist of five 'parallel kernel' benchmarks and three 'simulated application' benchmarks. Together they mimic the computation and data movement characteristics of large-scale computational fluid dynamics applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification: all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.
Interfacial properties in a discrete model for tumor growth
NASA Astrophysics Data System (ADS)
Moglia, Belén; Guisoni, Nara; Albano, Ezequiel V.
2013-03-01
We propose and study, by means of Monte Carlo numerical simulations, a minimal discrete model for avascular tumor growth, which can also be applied for the description of cell cultures in vitro. The interface of the tumor is self-affine and its width can be characterized by the following exponents: (i) the growth exponent β=0.32(2) that governs the early time regime, (ii) the roughness exponent α=0.49(2) related to the fluctuations in the stationary regime, and (iii) the dynamic exponent z=α/β≃1.49(2), which measures the propagation of correlations in the direction parallel to the interface, e.g., ξ ∝ t^{1/z}, where ξ is the parallel correlation length. Therefore, the interface belongs to the Kardar-Parisi-Zhang universality class, in agreement with recent experiments of cell cultures in vitro. Furthermore, density profiles of the growing cells are rationalized in terms of traveling waves that are solutions of the Fisher-Kolmogorov equation. In this way, we achieved excellent agreement between the simulation results of the discrete model and the continuous description of the growth front of the culture or tumor.
Particle-in-Cell laser-plasma simulation on Xeon Phi coprocessors
NASA Astrophysics Data System (ADS)
Surmin, I. A.; Bastrakov, S. I.; Efimenko, E. S.; Gonoskov, A. A.; Korzhimanov, A. V.; Meyerov, I. B.
2016-05-01
This paper concerns the development of a high-performance implementation of the Particle-in-Cell method for plasma simulation on Intel Xeon Phi coprocessors. We discuss the suitability of the method for Xeon Phi architecture and present our experience in the porting and optimization of the existing parallel Particle-in-Cell code PICADOR. Direct porting without code modification gives performance on Xeon Phi close to that of an 8-core CPU on a benchmark problem with 50 particles per cell. We demonstrate step-by-step optimization techniques, such as improving data locality, enhancing parallelization efficiency and vectorization leading to an overall 4.2 × speedup on CPU and 7.5 × on Xeon Phi compared to the baseline version. The optimized version achieves 16.9 ns per particle update on an Intel Xeon E5-2660 CPU and 9.3 ns per particle update on an Intel Xeon Phi 5110P. For a real problem of laser ion acceleration in targets with surface grating, where a large number of macroparticles per cell is required, the speedup of Xeon Phi compared to CPU is 1.6 ×.
PARALLEL HOP: A SCALABLE HALO FINDER FOR MASSIVE COSMOLOGICAL DATA SETS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Skory, Stephen; Turk, Matthew J.; Norman, Michael L.
2010-11-15
Modern N-body cosmological simulations contain billions (10^9) of dark matter particles. These simulations require hundreds to thousands of gigabytes of memory and employ hundreds to tens of thousands of processing cores on many compute nodes. In order to study the distribution of dark matter in a cosmological simulation, the dark matter halos must be identified using a halo finder, which establishes the halo membership of every particle in the simulation. The resources required for halo finding are similar to the requirements for the simulation itself. In particular, simulations have become too extensive for commonly employed halo finders, such that the computational requirements to identify halos must now be spread across multiple nodes and cores. Here, we present a scalable, parallel halo finding method called Parallel HOP for large-scale cosmological simulation data. Based on the halo finder HOP, it utilizes the message passing interface and domain decomposition to distribute the halo finding workload across multiple compute nodes, enabling analysis of much larger data sets than is possible with the strictly serial or previous parallel implementations of HOP. We provide a reference implementation of this method as a part of the toolkit yt, an analysis toolkit for adaptive mesh refinement data that includes complementary analysis modules. Additionally, we discuss a suite of benchmarks that demonstrate that this method scales well up to several hundred tasks and data sets in excess of 2000^3 particles. The Parallel HOP method and our implementation can be readily applied to any kind of N-body simulation data and are therefore widely applicable.
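The domain-decomposition idea behind distributing a halo finder can be sketched in one dimension: each task owns a slab of the box plus a ghost layer roughly one linking length wide, so that halos straddling a slab boundary are still seen whole by some task. This is a simplified, non-periodic sketch under assumed parameters; Parallel HOP's actual decomposition in yt is periodic and three-dimensional.

```python
import numpy as np

def slab_decompose(x, n_tasks, pad, box=1.0):
    """Split particle coordinates x in [0, box) into n_tasks slabs.
    Each task gets the indices of its owned particles plus a ghost
    layer of width `pad` on either side (boundary slabs simply have
    truncated ghost layers in this non-periodic sketch)."""
    width = box / n_tasks
    tasks = []
    for t in range(n_tasks):
        lo, hi = t * width, (t + 1) * width
        owned = (x >= lo) & (x < hi)
        ghost = (x >= lo - pad) & (x < hi + pad) & ~owned
        tasks.append((np.where(owned)[0], np.where(ghost)[0]))
    return tasks

positions = np.random.default_rng(0).random(10_000)
tasks = slab_decompose(positions, n_tasks=8, pad=0.02)
```

Each task then runs the density estimate and hop linking on its owned-plus-ghost set, and halo membership across boundaries is reconciled over MPI; the key invariant the decomposition must preserve is that every particle is owned by exactly one task.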
NASA Astrophysics Data System (ADS)
Andrade, Xavier; Strubbe, David; De Giovannini, Umberto; Larsen, Ask Hjorth; Oliveira, Micael J. T.; Alberdi-Rodriguez, Joseba; Varas, Alejandro; Theophilou, Iris; Helbig, Nicole; Verstraete, Matthieu J.; Stella, Lorenzo; Nogueira, Fernando; Aspuru-Guzik, Alán; Castro, Alberto; Marques, Miguel A. L.; Rubio, Angel
Real-space grids are a powerful alternative for the simulation of electronic systems. One of the main advantages of the approach is the flexibility and simplicity of working directly in real space, where the different fields are discretized on a grid, combined with competitive numerical performance and great potential for parallelization. These properties are a great advantage when implementing and testing new physical models. Based on our experience with the Octopus code, in this article we discuss how the real-space approach has allowed for the recent development of new ideas for the simulation of electronic systems. Among these applications are approaches to calculate response properties, modeling of photoemission, optimal control of quantum systems, simulation of plasmonic systems, and the exact solution of the Schrödinger equation for low-dimensionality systems.
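A minimal illustration of the real-space approach (not Octopus code; grid size, domain, and potential are arbitrary choices): discretize the kinetic energy with second-order finite differences on a uniform grid and diagonalize the Hamiltonian of a 1D harmonic oscillator in atomic units.

```python
import numpy as np

# Real-space grid sketch: H = -1/2 d^2/dx^2 + x^2/2 in atomic units.
# The Laplacian becomes a tridiagonal matrix; the potential is diagonal.
n, L = 400, 10.0
x = np.linspace(-L, L, n)          # grid spanning [-10, 10] a.u.
h = x[1] - x[0]
lap = (np.diag(np.ones(n - 1), -1) - 2.0 * np.eye(n)
       + np.diag(np.ones(n - 1), 1)) / h**2
H = -0.5 * lap + np.diag(0.5 * x**2)
E = np.linalg.eigvalsh(H)[:3]      # close to the exact 0.5, 1.5, 2.5
```

Everything here is a matrix operation on grid values, which is why the approach parallelizes naturally by partitioning the grid among processors.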
NASA Astrophysics Data System (ADS)
Sheykina, Nadiia; Bogatina, Nina
The following variants of root orientation relative to the static and alternating components of the magnetic field were studied. In the first variant, the static magnetic field was directed parallel to the gravitation vector and the alternating magnetic field perpendicular to the static one; roots were directed perpendicular to both field components and to the gravitation vector. In this variant, negative gravitropism was observed for cress roots. In the second variant, the static magnetic field was directed parallel to the gravitation vector and the alternating magnetic field perpendicular to the static one; roots were directed parallel to the alternating magnetic field. In the third variant, the alternating magnetic field was directed parallel to the gravitation vector and the static magnetic field perpendicular to it; roots were directed perpendicular to both field components and to the gravitation vector. In the fourth variant, the alternating magnetic field was directed parallel to the gravitation vector and the static magnetic field perpendicular to it; roots were directed parallel to the static magnetic field. In all cases studied, the frequency of the alternating magnetic field was equal to the cyclotron frequency of Ca ions. In the second, third, and fourth variants, gravitropism was positive, but the gravitropic reaction speeds differed: in the second and fourth variants the speed coincided, within error limits, with that under Earth's conditions, while in the third variant the gravitropic reaction was essentially slowed.
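For reference, the ion cyclotron frequency used to tune the alternating field follows from f_c = qB/(2πm). The abstract does not state the static field strength, so B below is an assumed geomagnetic-scale value; the other numbers are physical constants.

```python
import math

# Cyclotron frequency f_c = q*B/(2*pi*m) for the Ca(2+) ion.
q = 2 * 1.602176634e-19            # Ca2+ charge, C
m = 40.078 * 1.66053906660e-27     # Ca atomic mass, kg
B = 50e-6                          # assumed static field, T (geomagnetic scale)
f_c = q * B / (2 * math.pi * m)    # roughly 38 Hz at 50 microtesla
```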
The Fractional Step Method Applied to Simulations of Natural Convective Flows
NASA Technical Reports Server (NTRS)
Westra, Douglas G.; Heinrich, Juan C.; Saxon, Jeff (Technical Monitor)
2002-01-01
This paper describes research done to apply the Fractional Step Method to finite-element simulations of natural convective flows in pure liquids, permeable media, and in a directionally solidified metal alloy casting. The Fractional Step Method has commonly been applied to high Reynolds number flow simulations, but is less common for low Reynolds number flows, such as natural convection in liquids and in permeable media. The Fractional Step Method offers increased speed and reduced memory requirements by allowing non-coupled solution of the pressure and the velocity components. The Fractional Step Method has particular benefits for predicting flows in a directionally solidified alloy, since other methods presently employed are not very efficient. Previously, the most suitable method for predicting flows in a directionally solidified binary alloy was the penalty method, which requires direct matrix solvers because of the penalty term. The Fractional Step Method allows iterative solution of the finite element stiffness matrices, thereby allowing more efficient solution of the matrices. The Fractional Step Method also lends itself to parallel processing, since the velocity component stiffness matrices can be built and solved independently of each other. The finite-element simulations of a directionally solidified casting are used to predict macrosegregation in directionally solidified castings. In particular, the finite-element simulations predict the existence of 'channels' within the processing mushy zone and subsequently 'freckles' within the fully processed solid, which are known to result from macrosegregation, or what is often referred to as thermo-solutal convection. These freckles cause material property non-uniformities in directionally solidified castings; therefore many of these castings are scrapped. The phenomenon of natural convection in an alloy undergoing directional solidification, or thermo-solutal convection, will be explained. 
The development of the momentum and continuity equations for natural convection in a fluid, a permeable medium, and in a binary alloy undergoing directional solidification will be presented. Finally, results for natural convection in a pure liquid, natural convection in a medium with a constant permeability, and for directional solidification will be presented.
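The pressure-velocity decoupling at the heart of the fractional step method can be illustrated with a projection step: advance the momentum equation without the pressure gradient, then make the intermediate velocity divergence-free by solving a Poisson problem. Below is a minimal spectral sketch on a periodic grid; the paper itself uses finite elements and iterative solvers, so this is only the splitting idea, not its discretization.

```python
import numpy as np

def project(u, v, h):
    """Return the divergence-free part of a periodic 2D velocity field by
    solving laplacian(phi) = div(u, v) spectrally and subtracting
    grad(phi). This pressure-correction step is what lets the fractional
    step method solve velocity and pressure in a non-coupled way.
    Assumes a square grid with an even number of points."""
    n = u.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=h)
    k[n // 2] = 0.0                       # drop the unmatched Nyquist mode
    kx, ky = np.meshgrid(k, k, indexing="ij")
    uh, vh = np.fft.fft2(u), np.fft.fft2(v)
    div = 1j * kx * uh + 1j * ky * vh     # spectral divergence
    k2 = kx**2 + ky**2
    k2[k2 == 0.0] = 1.0                   # modes where div is already zero
    phih = -div / k2                      # solve the Poisson equation
    return (np.fft.ifft2(uh - 1j * kx * phih).real,
            np.fft.ifft2(vh - 1j * ky * phih).real)
```

In a full time step, `project` would be applied to the intermediate velocity produced by the momentum solve; the corrected field satisfies the continuity equation to round-off.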
Fully Parallel MHD Stability Analysis Tool
NASA Astrophysics Data System (ADS)
Svidzinski, Vladimir; Galkin, Sergei; Kim, Jin-Soo; Liu, Yueqiang
2014-10-01
Progress on full parallelization of the plasma stability code MARS will be reported. MARS calculates eigenmodes in 2D axisymmetric toroidal equilibria in MHD-kinetic plasma models. It is a powerful tool for studying MHD and MHD-kinetic instabilities and is widely used by the fusion community. The parallel version of MARS is intended for simulations on local parallel clusters. It will be an efficient tool for simulating MHD instabilities with low, intermediate, and high toroidal mode numbers within both the fluid and kinetic plasma models already implemented in MARS. Parallelization of the code includes parallelization of the construction of the matrix for the eigenvalue problem and parallelization of the inverse-iteration algorithm implemented in MARS for the solution of the formulated eigenvalue problem. Construction of the matrix is parallelized by distributing the load among processors assigned to different magnetic surfaces. Parallelization of the solution of the eigenvalue problem is achieved by repeating the steps of the present MARS algorithm using parallel libraries and procedures. Initial results of the code parallelization will be reported. Work is supported by the U.S. DOE SBIR program.
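The inverse-iteration algorithm mentioned above can be sketched compactly: repeatedly solving a shifted linear system amplifies the eigenvector whose eigenvalue lies closest to the shift. The sketch below is dense and serial, with an assumed symmetric test matrix; in MARS both the matrix construction and these solves are distributed.

```python
import numpy as np

def inverse_iteration(A, sigma, tol=1e-12, max_iter=200):
    """Inverse iteration with a fixed shift: solve (A - sigma*I) y = x,
    normalize, and repeat until the Rayleigh-quotient eigenvalue
    estimate stabilizes. Returns the eigenvalue of A nearest sigma and
    its eigenvector (A assumed symmetric in this sketch)."""
    n = A.shape[0]
    x = np.random.default_rng(0).standard_normal(n)
    shifted = A - sigma * np.eye(n)
    lam = sigma
    for _ in range(max_iter):
        y = np.linalg.solve(shifted, x)
        x = y / np.linalg.norm(y)
        lam_new = x @ A @ x              # Rayleigh quotient
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam, x

# Eigenvalue nearest the shift 4.5 of a toy diagonal matrix is 5.
lam, vec = inverse_iteration(np.diag([1.0, 2.0, 5.0]), sigma=4.5)
```

Each iteration reuses the same shifted matrix, so a parallel implementation can factor (or precondition) it once and then perform only distributed triangular solves, which is why the scheme distributes well.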
Effect of q-profile structure on intrinsic torque reversals
NASA Astrophysics Data System (ADS)
Lu, Zhixin
2014-10-01
Intrinsic toroidal rotation plays an important role in mitigating macroinstabilities and regulating turbulent transport in ITER, where neutral beams are not sufficient to provide the requisite torque. Recent experiments on C-Mod with LHCD observed rotation reversal related to a change in the q profile. In this work, we focus on understanding the physics of intrinsic rotation reversals in LHCD plasmas, using nonlinear, global gyro-kinetic simulations and analysis of mode structure and spectrum symmetry breaking. The sensitive dependence of turbulent residual stress on magnetic shear is identified and characterized. The basic residual stress is non-vanishing when the k-parallel spectrum symmetry is broken, e.g., by an E × B shear induced radial shift, non-uniformity in turbulence intensity, etc. It is found that at low magnetic shear, the poloidal harmonics can shift strongly in the radial direction, as a feature of non-local effects, due to radial propagation and amplitude variation of the mode. This new symmetry breaking mechanism leads to a change in the sign of the spectrum-averaged parallel wave vector and thus the direction of intrinsic torque. Theoretical study shows that the competition between magnetic drift and ion kinetic effects determines the non-local effects and the structure of the asymmetry. Specifically, it is found that the direction of the intrinsic torque changes from counter- to co-current in the core when magnetic shear decreases through a critical value. A critical shear ŝR = 0.2 to 0.5 for reversal of CTEM-induced intrinsic torque found by simulation is consistent with that from the LHCD C-Mod reversal experiments. In addition, simulations indicate ŝR = 1 to 2 for the reversal of ITG-induced torque, a prediction which can be tested by experiments. This work is supported by CER and CMTFO, UCSD and U.S. DOE-PPPL Contract DE-AC02-09CH11466.
NASA Astrophysics Data System (ADS)
Yang, Sheng-Chun; Lu, Zhong-Yuan; Qian, Hu-Jun; Wang, Yong-Lei; Han, Jie-Ping
2017-11-01
In this work, we upgraded the electrostatic interaction method CU-ENUF (Yang et al., 2016), which first applied CUNFFT (nonequispaced fast Fourier transform based on CUDA) to the reciprocal-space electrostatic computation, so that the electrostatic calculation is now carried out entirely on the GPU. The upgraded edition of CU-ENUF runs in a hybrid parallel mode: the computation is first parallelized across multiple computer nodes, and then further on the GPU installed in each node. With this parallel strategy, the size of the simulation system is no longer restricted by the throughput of a single CPU or GPU. The most critical technical problem is how to parallelize CUNFFT within this strategy, which is solved through careful analysis of its underlying principles together with several algorithmic techniques. Furthermore, the upgraded method can compute electrostatic interactions for both atomistic molecular dynamics (MD) and dissipative particle dynamics (DPD). Finally, benchmarks conducted for validation and performance indicate that the upgraded method not only delivers good precision with suitable parameter settings, but also provides an efficient way to compute electrostatic interactions for very large simulation systems. Program Files doi:http://dx.doi.org/10.17632/zncf24fhpv.1 Licensing provisions: GNU General Public License 3 (GPL) Programming language: C, C++, and CUDA C Supplementary material: The program is designed for efficient electrostatic-interaction computation in large-scale simulation systems and runs on computers equipped with NVIDIA GPUs. It has been tested on (a) a single computer node with an Intel(R) Core(TM) i7-3770 @ 3.40 GHz (CPU) and a GTX 980 Ti (GPU), and (b) MPI-parallel computer nodes with the same configuration.
Nature of problem: In molecular dynamics simulation, the electrostatic interaction is the most time-consuming computation because of its long-range character and slow convergence, and it typically dominates the total simulation time. Although the GPU-based parallel method CU-ENUF (Yang et al., 2016) achieved a qualitative leap over previous methods for computing electrostatic interactions, its capability is limited by the throughput of a single GPU for very large simulation systems. An effective method is therefore needed to handle the electrostatic interactions of super-scale simulation systems efficiently. Solution method: We constructed a hybrid parallel architecture in which CPUs and GPUs are combined to accelerate the electrostatic computation effectively. First, the simulation system is divided into many subtasks via a domain-decomposition method. MPI (Message Passing Interface) is then used to implement CPU-parallel computation, with each computer node handling a particular subtask, and each subtask is in turn executed efficiently in parallel on that node's GPU. In this hybrid parallel method, the most critical technical problem is how to parallelize CUNFFT (nonequispaced fast Fourier transform based on CUDA), which is solved through careful analysis of its underlying principles together with several algorithmic techniques. Restrictions: HP-ENUF is mainly oriented to super-scale system simulations, where its performance advantage is fully realized. For a small simulation system containing fewer than 10^6 particles, however, the multi-node mode has no apparent efficiency advantage over the single-node mode, or is even less efficient, owing to network latency among the computer nodes. References: (1) S.-C. Yang, H.-J. Qian, Z.-Y. Lu, Appl. Comput. Harmon. Anal.
2016, http://dx.doi.org/10.1016/j.acha.2016.04.009. (2) S.-C. Yang, Y.-L. Wang, G.-S. Jiao, H.-J. Qian, Z.-Y. Lu, J. Comput. Chem. 37 (2016) 378. (3) S.-C. Yang, Y.-L. Zhu, H.-J. Qian, Z.-Y. Lu, Appl. Chem. Res. Chin. Univ., 2017, http://dx.doi.org/10.1007/s40242-016-6354-5. (4) Y.-L. Zhu, H. Liu, Z.-W. Li, H.-J. Qian, G. Milano, Z.-Y. Lu, J. Comput. Chem. 34 (2013) 2197.
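The two-level parallel strategy described above can be sketched in miniature (illustrative names, not the HP-ENUF API): level one splits the box into slabs, one per node, and each node grids only its own particles; level two sums the per-node grids (an MPI reduction in the real code) and Fourier-transforms the total charge density — the step HP-ENUF offloads to each node's GPU via CUNFFT.

```python
import numpy as np

def deposit_charge(positions, charges, n_grid, box):
    """Nearest-grid-point charge assignment onto a 1D mesh."""
    rho = np.zeros(n_grid)
    idx = (np.asarray(positions) / box * n_grid).astype(int) % n_grid
    np.add.at(rho, idx, charges)   # unbuffered accumulation
    return rho

rng = np.random.default_rng(0)
box, n_grid, n_nodes = 10.0, 64, 4
pos = rng.uniform(0, box, 1000)
q = rng.choice([-1.0, 1.0], 1000)

# Level 1: domain decomposition -- each "node" owns one slab of particles.
slabs = [(pos >= i * box / n_nodes) & (pos < (i + 1) * box / n_nodes)
         for i in range(n_nodes)]
local_rho = [deposit_charge(pos[m], q[m], n_grid, box) for m in slabs]

# Level 2: reduce the local grids, then transform the total density
# (the FFT is the part each node would run on its GPU).
rho_total = np.sum(local_rho, axis=0)
rho_k = np.fft.rfft(rho_total)

# The decomposed deposition matches the serial result exactly.
print(np.allclose(rho_total, deposit_charge(pos, q, n_grid, box)))  # True
```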
NASA Astrophysics Data System (ADS)
Spurzem, R.; Berczik, P.; Zhong, S.; Nitadori, K.; Hamada, T.; Berentzen, I.; Veles, A.
2012-07-01
Astrophysical computer simulations of dense star clusters in galactic nuclei with supermassive black holes are presented, using new cost-efficient supercomputers in China accelerated by graphics processing units (GPUs). We use large high-accuracy direct N-body simulations with a Hermite scheme and block time steps, parallelized across a large number of nodes on the large scale and across many GPU thread processors on each node on the small scale. A sustained performance of more than 350 Tflop/s is reached in a science run using 1600 Fermi C2050 GPUs simultaneously; a detailed performance model is presented, together with studies for the largest GPU clusters in China, with up to Petaflop/s performance and 7000 Fermi GPU cards. In our case study we look at two supermassive black holes with equal and unequal masses embedded in a dense stellar cluster in a galactic nucleus. The hardening processes due to interactions between black holes and stars, the effects of rotation in the stellar system, and the relativistic forces between the black holes are simultaneously taken into account. The simulation stops at the complete relativistic merger of the black holes.
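The fourth-order Hermite scheme named above can be sketched as a predictor-corrector step that uses both the acceleration and its time derivative (the "jerk"); the toy below integrates a two-body circular orbit with a fixed step (the production codes use individual block time steps and GPU-parallel force loops, omitted here).

```python
import numpy as np

def acc_jerk(x, v, m):
    """Pairwise Newtonian acceleration and jerk (G = 1)."""
    n = len(m)
    a, j = np.zeros_like(x), np.zeros_like(x)
    for i in range(n):
        for k in range(n):
            if i == k:
                continue
            dx, dv = x[k] - x[i], v[k] - v[i]
            r2 = dx @ dx
            r3 = r2 * np.sqrt(r2)
            rv = (dx @ dv) / r2
            a[i] += m[k] * dx / r3
            j[i] += m[k] * (dv - 3.0 * rv * dx) / r3
    return a, j

def hermite_step(x, v, m, dt):
    """One fourth-order Hermite predictor-corrector step."""
    a, jk = acc_jerk(x, v, m)
    xp = x + v*dt + a*dt**2/2 + jk*dt**3/6        # predict
    vp = v + a*dt + jk*dt**2/2
    ap, jp = acc_jerk(xp, vp, m)                  # re-evaluate forces
    vc = v + (a + ap)*dt/2 + (jk - jp)*dt**2/12   # correct
    xc = x + (v + vc)*dt/2 + (a - ap)*dt**2/12
    return xc, vc

def energy(x, v, m):
    ke = 0.5 * np.sum(m * np.sum(v**2, axis=1))
    pe = -m[0] * m[1] / np.linalg.norm(x[0] - x[1])
    return ke + pe

# Two equal masses on a circular orbit (relative speed sqrt(2)).
m = np.array([1.0, 1.0])
x = np.array([[0.5, 0.0], [-0.5, 0.0]])
v = np.array([[0.0, np.sqrt(2)/2], [0.0, -np.sqrt(2)/2]])

e0 = energy(x, v, m)
for _ in range(1000):
    x, v = hermite_step(x, v, m, dt=0.01)
print(abs(energy(x, v, m) - e0))   # small energy drift over ~2 orbits
```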
Thermodynamic analysis of shark skin texture surfaces for microchannel flow
NASA Astrophysics Data System (ADS)
Yu, Hai-Yan; Zhang, Hao-Chun; Guo, Yang-Yu; Tan, He-Ping; Li, Yao; Xie, Gong-Nan
2016-09-01
Studies of shark-skin-textured surfaces for flow drag reduction provide inspiration for researchers overcoming technical challenges in actual production applications. In this paper, three infinite-parallel-plate flow models with microstructure inspired by shark skin were established, namely a blade model, a wedge model, and a smooth model, according to the cross-sectional shape of the microstructure. Simulations were carried out using FLUENT, which simplified the computation relative to direct numerical simulation. The shear-stress transport k-omega turbulence model was chosen to obtain the best performance from the simulations. Because the drag-reduction mechanism is generally discussed from a kinetic point of view, which cannot directly explain the cause of the associated losses, a drag reduction rate was defined based on the second law of thermodynamics. Considering abrasion and fabrication precision in practical applications, three abraded geometry models were constructed and tested, and the microstructure achieving the best performance, as measured by the drag reduction rate, was identified as best suited to manufacturing production. It is also believed that bionic shark-skin surfaces with mechanical abrasion may draw more attention from industrial designers and gain wide application for their drag-reducing characteristics.
IShTAR ICRF antenna field characterization in vacuum and plasma by using probe diagnostic
NASA Astrophysics Data System (ADS)
Usoltceva, Mariia; Ochoukov, Roman; D'Inca, Rodolphe; Jacquot, Jonathan; Crombé, Kristel; Kostic, Ana; Heuraux, Stéphane; Faudot, Eric; Noterdaeme, Jean-Marie
2017-10-01
RF sheath physics is one of the key topics relevant to improving ICRF heating systems, which are present on nearly all modern magnetic fusion machines. This paper introduces the development and validation of a new approach to understanding general RF sheath physics. The parallel electric field, the presumed driver of enhanced plasma-antenna interactions, is not measured directly but is instead obtained from simulations in COMSOL Multiphysics® Modeling Software. Measurements of RF magnetic field components with B-dot probes are made on the linear device IShTAR (Ion cyclotron Sheath Test ARrangement) and then compared to simulations. Good agreement between the two is proposed as the criterion for trusting the parallel electric field estimated as a component of the electromagnetic field in the model. A comparison between simulation and experiment for one magnetic field component in vacuum has demonstrated a close match. An additional complication in this ICRF antenna field characterization study is imposed by the helicon antenna used as the plasma ignition tool in the test arrangement. The plasma case, in contrast to the vacuum case, must be approached carefully, since the fields of the ICRF antenna and the helicon antenna overlap. The two fields are distinguished by analyzing the correlation between measurements with both antennas operating together and with each one separately.
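A B-dot probe returns a voltage proportional to dB/dt, so the field component is recovered by time integration; a minimal reduction sketch (illustrative values and names, not the IShTAR analysis code) with an ideal probe of effective area N·A:

```python
import numpy as np

def bdot_to_field(t, volts, NA):
    """Recover B(t) from a B-dot signal V = NA * dB/dt by cumulative
    trapezoidal integration, assuming B(t[0]) = 0."""
    dt = np.diff(t)
    increments = 0.5 * (volts[1:] + volts[:-1]) * dt
    return np.concatenate(([0.0], np.cumsum(increments))) / NA

f, NA = 1.0e6, 1.0e-4                 # 1 MHz RF, 1 cm^2 effective area
t = np.linspace(0.0, 5e-6, 5001)
B = 1e-3 * np.sin(2*np.pi*f*t)        # "true" 1 mT field
V = NA * 1e-3 * 2*np.pi*f * np.cos(2*np.pi*f*t)   # ideal probe voltage

B_rec = bdot_to_field(t, V, NA)
print(np.max(np.abs(B_rec - B)) < 1e-5)   # reconstruction within 1%
```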
3D Resistive MHD Simulations of Formation, Compression, and Acceleration of Compact Tori
NASA Astrophysics Data System (ADS)
Woodruff, Simon; Meyer, Thomas; Stuber, James; Romero-Talamas, Carlos; Brown, Michael; Kaur, Manjit; Schaffner, David
2017-10-01
We present results from extended resistive 3D MHD simulations (NIMROD) of a new formation method for toroidal plasmas, in which a reconnection region forms during a radial implosion, and of the acceleration of CTs along a drift tube, where the CTs are accelerated by a coil, allowed to go tilt-unstable, and form a helical minimum-energy state. In the new formation method, a reconnection region is generated between two magnetic compression coils that are ramped to 320 kV in 2 μs. When the compressing field is aligned anti-parallel to a pre-existing CT, a current sheet and reconnection region form, accelerating plasma radially inwards at up to 500 km/s; the flow stagnates and its directed energy is converted to thermal energy, raising temperatures to 500 eV. When the field is aligned parallel to the pre-existing CT, the configuration can be accelerated along a drift tube. For certain ratios of magnetic field to density, the CT goes tilt-unstable, forming a twisted flux rope, which can also be accelerated and stagnated against an end wall, where temperature and field increase as the plasma compresses. We compare simulation results with adiabatic scaling relations. Work supported by the ARPA-E ALPHA program and DARPA.
Jung, Jaewoon; Mori, Takaharu; Kobayashi, Chigusa; Matsunaga, Yasuhiro; Yoda, Takao; Feig, Michael; Sugita, Yuji
2015-01-01
GENESIS (Generalized-Ensemble Simulation System) is a new software package for molecular dynamics (MD) simulations of macromolecules. It has two MD simulators, called ATDYN and SPDYN. ATDYN is parallelized based on an atomic decomposition algorithm for the simulations of all-atom force-field models as well as coarse-grained Go-like models. SPDYN is highly parallelized based on a domain decomposition scheme, allowing large-scale MD simulations on supercomputers. Hybrid schemes combining OpenMP and MPI are used in both simulators to target modern multicore computer architectures. Key advantages of GENESIS are (1) the highly parallel performance of SPDYN for very large biological systems consisting of more than one million atoms and (2) the availability of various REMD algorithms (T-REMD, REUS, multi-dimensional REMD for both all-atom and Go-like models under the NVT, NPT, NPAT, and NPγT ensembles). The former is achieved by a combination of the midpoint cell method and the efficient three-dimensional Fast Fourier Transform algorithm, where the domain decomposition space is shared in real-space and reciprocal-space calculations. Other features in SPDYN, such as avoiding concurrent memory access, reducing communication times, and usage of parallel input/output files, also contribute to the performance. We show the REMD simulation results of a mixed (POPC/DMPC) lipid bilayer as a real application using GENESIS. GENESIS is released as free software under the GPLv2 licence and can be easily modified for the development of new algorithms and molecular models. WIREs Comput Mol Sci 2015, 5:310–323. doi: 10.1002/wcms.1220 PMID:26753008
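The T-REMD algorithms listed above all rest on a Metropolis exchange test between neighbouring replicas; a minimal sketch of that criterion (illustrative names, not the GENESIS API) — a swap between replicas at inverse temperatures β_i, β_j with energies E_i, E_j is accepted with probability min(1, exp[(β_i − β_j)(E_i − E_j)]):

```python
import math
import random

def attempt_exchange(beta_i, E_i, beta_j, E_j, rng=random):
    """Metropolis acceptance test for a replica-exchange swap."""
    delta = (beta_i - beta_j) * (E_i - E_j)
    return delta >= 0.0 or rng.random() < math.exp(delta)

kB = 0.0019872041                     # kcal/mol/K
beta = lambda T: 1.0 / (kB * T)

# Swapping a lower-energy configuration down to the colder replica
# (delta >= 0) is always accepted.
print(attempt_exchange(beta(300.0), -990.0, beta(310.0), -1000.0))  # True
```

The deterministic branch (`delta >= 0`) preserves detailed balance because `min(1, exp(delta))` equals 1 there; only unfavourable swaps are accepted stochastically.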
Namani, R.; Feng, Y.; Okamoto, R. J.; Jesuraj, N.; Sakiyama-Elbert, S. E.; Genin, G. M.; Bayly, P. V.
2012-01-01
The mechanical characterization of soft anisotropic materials is a fundamental challenge because of difficulties in applying mechanical loads to soft matter and the need to combine information from multiple tests. A method to characterize the linear elastic properties of transversely isotropic soft materials is proposed, based on the combination of dynamic shear testing (DST) and asymmetric indentation. The procedure was demonstrated by characterizing a nearly incompressible transversely isotropic soft material. A soft gel with controlled anisotropy was obtained by polymerizing a mixture of fibrinogen and thrombin solutions in a high field magnet (B = 11.7 T); fibrils in the resulting gel were predominantly aligned parallel to the magnetic field. Aligned fibrin gels were subject to dynamic (20–40 Hz) shear deformation in two orthogonal directions. The shear storage modulus was 1.08 ± 0.42 kPa (mean ± std. dev.) for shear in a plane parallel to the dominant fiber direction, and 0.58 ± 0.21 kPa for shear in the plane of isotropy. Gels were indented by a rectangular tip of a large aspect ratio, aligned either parallel or perpendicular to the normal to the plane of transverse isotropy. Aligned fibrin gels appeared stiffer when indented with the long axis of a rectangular tip perpendicular to the dominant fiber direction. Three-dimensional numerical simulations of asymmetric indentation were used to determine the relationship between direction-dependent differences in indentation stiffness and material parameters. This approach enables the estimation of a complete set of parameters for an incompressible, transversely isotropic, linear elastic material. PMID:22757501
The Integrated Plasma Simulator: A Flexible Python Framework for Coupled Multiphysics Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Foley, Samantha S; Elwasif, Wael R; Bernholdt, David E
2011-11-01
High-fidelity coupled multiphysics simulations are an increasingly important aspect of computational science. In many domains, however, there has been very limited experience with simulations of this sort, therefore research in coupled multiphysics often requires computational frameworks with significant flexibility to respond to the changing directions of the physics and mathematics. This paper presents the Integrated Plasma Simulator (IPS), a framework designed for loosely coupled simulations of fusion plasmas. The IPS provides users with a simple component architecture into which a wide range of existing plasma physics codes can be inserted as components. Simulations can take advantage of multiple levels of parallelism supported in the IPS, and can be controlled by a high-level ``driver'' component, or by other coordination mechanisms, such as an asynchronous event service. We describe the requirements and design of the framework, and how they were implemented in the Python language. We also illustrate the flexibility of the framework by providing examples of different types of simulations that utilize various features of the IPS.
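The component-plus-driver pattern described above can be sketched in a few lines of Python (illustrative class and state names, not the actual IPS API): each physics code is wrapped as a component with init/step/finalize methods, and a driver runs the coupled time loop, with components exchanging data through a shared plasma-state dictionary.

```python
# Sketch of a loosely coupled component framework (illustrative names).
class Component:
    def init(self, state): pass
    def step(self, state, t): pass
    def finalize(self, state): pass

class EquilibriumSolver(Component):
    def step(self, state, t):
        state["pressure"] = 1.0 + 0.1 * t          # stand-in physics

class TransportSolver(Component):
    def step(self, state, t):
        state["density"] = state.get("pressure", 1.0) * 0.5

class Driver:
    """High-level driver: runs each component in order every time step."""
    def __init__(self, components):
        self.components = components

    def run(self, t_end, dt):
        state = {}
        for c in self.components:
            c.init(state)
        t = 0.0
        while t < t_end:
            for c in self.components:
                c.step(state, t)   # data flows through the shared state
            t += dt
        for c in self.components:
            c.finalize(state)
        return state

final = Driver([EquilibriumSolver(), TransportSolver()]).run(1.0, 0.25)
print(final)
```

In the real framework the driver is itself a component, so alternative coordination schemes (such as the asynchronous event service mentioned above) can replace the simple sequential loop.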
Multi-scale simulations of space problems with iPIC3D
NASA Astrophysics Data System (ADS)
Lapenta, Giovanni; Bettarini, Lapo; Markidis, Stefano
The implicit Particle-in-Cell method for the computer simulation of space plasma, and its implementation in a three-dimensional parallel code, called iPIC3D, are presented. The implicit integration in time of the Vlasov-Maxwell system removes the numerical stability constraints and enables kinetic plasma simulations at magnetohydrodynamics scales. Simulations of magnetic reconnection in plasma are presented to show the effectiveness of the algorithm. In particular we show a number of simulations of large-scale 3D systems using the physical mass ratio for Hydrogen. Most notably, one simulation treats kinetically a box of tens of Earth radii in each direction and was conducted using about 16000 processors of the Pleiades NASA computer. The work is conducted in collaboration with the MMS-IDS theory team from the University of Colorado (M. Goldman, D. Newman and L. Andersson). Reference: Stefano Markidis, Giovanni Lapenta, Rizwan-uddin, Multi-scale simulations of plasma with iPIC3D, Mathematics and Computers in Simulation, available online 17 October 2009, http://dx.doi.org/10.1016/j.matcom.2009.08.038
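The stability claim can be illustrated with a toy amplification-factor calculation (a sketch, not the actual implicit-moment discretization): for a single oscillation mode x' = iωx, forward Euler amplifies the solution for any time step, while a centered implicit-midpoint update of the kind implicit schemes are built on has unit gain for arbitrarily large ω·dt.

```python
# Amplification factors for one undamped mode x' = i*w*x (illustrative).
def gain_explicit(wdt):
    """Forward Euler: x_{n+1} = (1 + i*w*dt) * x_n."""
    return abs(1 + 1j * wdt)

def gain_implicit(wdt):
    """Implicit midpoint: x_{n+1} = (1 + i*w*dt/2)/(1 - i*w*dt/2) * x_n."""
    return abs((1 + 0.5j * wdt) / (1 - 0.5j * wdt))

for wdt in (0.1, 1.0, 10.0):
    print(wdt, round(gain_explicit(wdt), 4), round(gain_implicit(wdt), 4))
# The explicit gain exceeds 1 for any wdt (unstable for undamped modes),
# while the implicit gain is exactly 1: the time step is no longer
# limited by the fastest kinetic frequency, as the abstract describes.
```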
NASA Technical Reports Server (NTRS)
Krosel, S. M.; Milner, E. J.
1982-01-01
The application of predictor-corrector integration algorithms developed for the digital parallel processing environment is investigated. The algorithms are implemented and evaluated through the use of a software simulator which provides an approximate representation of the parallel processing hardware. Test cases which focus on the use of the algorithms are presented, and a specific application using a linear model of a turbofan engine is considered. Results are presented showing the effects of integration step size and the number of processors on simulation accuracy. Real-time performance, interprocessor communication, and algorithm startup are also discussed.
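A predictor-corrector step of the general kind studied here can be sketched for a linear state-space model x' = Ax, the same form as the turbofan model mentioned above (this is an illustrative Adams-Bashforth-2 predictor with a trapezoidal corrector, not the paper's specific algorithm):

```python
import numpy as np

def pec_step(A, x_prev, x, dt):
    """One predict-evaluate-correct step for x' = A x:
    AB2 predictor, then a trapezoidal (AM2) corrector."""
    f_prev, f = A @ x_prev, A @ x
    x_pred = x + dt * (1.5 * f - 0.5 * f_prev)   # Adams-Bashforth-2
    x_corr = x + dt * 0.5 * (f + A @ x_pred)     # trapezoidal correct
    return x_corr

A = np.array([[-1.0, 0.0], [0.0, -2.0]])         # stable linear model
dt = 0.01
x = np.array([1.0, 1.0])
x_prev = x * np.exp(np.diag(A) * -dt)            # exact state at t = -dt

for _ in range(100):                             # integrate to t = 1
    x_prev, x = x, pec_step(A, x_prev, x, dt)

exact = np.exp(np.diag(A) * 1.0)                 # [e^-1, e^-2]
print(np.max(np.abs(x - exact)))                 # second-order accurate
```

Predictor-corrector pairs suit parallel processing because the predictor and corrector force evaluations for different state partitions can be computed concurrently, at the cost of the startup and communication issues the abstract discusses.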
Shen, Wenfeng; Wei, Daming; Xu, Weimin; Zhu, Xin; Yuan, Shizhong
2010-10-01
Biological computations like electrocardiological modelling and simulation usually require high-performance computing environments. This paper introduces an implementation of parallel computation for computer simulation of electrocardiograms (ECGs) in a personal computer environment with an Intel CPU of Core (TM) 2 Quad Q6600 and a GPU of Geforce 8800GT, with software support by OpenMP and CUDA. It was tested in three parallelization device setups: (a) a four-core CPU without a general-purpose GPU, (b) a general-purpose GPU plus 1 core of CPU, and (c) a four-core CPU plus a general-purpose GPU. To effectively take advantage of a multi-core CPU and a general-purpose GPU, an algorithm based on load-prediction dynamic scheduling was developed and applied to setting (c). In the simulation with 1600 time steps, the speedup of the parallel computation as compared to the serial computation was 3.9 in setting (a), 16.8 in setting (b), and 20.0 in setting (c). This study demonstrates that a current PC with a multi-core CPU and a general-purpose GPU provides a good environment for parallel computations in biological modelling and simulation studies. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.
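The load-prediction dynamic scheduling idea can be sketched as follows (illustrative rates and names, not the paper's exact scheme): the split of the next work chunk between CPU cores and the GPU is chosen from each device's measured throughput on the previous step, so that both devices are predicted to finish at the same time.

```python
def split_work(n_items, cpu_rate, gpu_rate):
    """Give each device a share proportional to its predicted rate."""
    gpu_share = gpu_rate / (cpu_rate + gpu_rate)
    n_gpu = round(n_items * gpu_share)
    return n_items - n_gpu, n_gpu

def predicted_step_time(n_cpu, n_gpu, cpu_rate, gpu_rate):
    """Both devices run concurrently; the slower one sets the pace."""
    return max(n_cpu / cpu_rate, n_gpu / gpu_rate)

# Measured throughputs (items/ms) from the previous time step.
cpu_rate, gpu_rate = 400.0, 1700.0
n_cpu, n_gpu = split_work(21000, cpu_rate, gpu_rate)
balanced = predicted_step_time(n_cpu, n_gpu, cpu_rate, gpu_rate)
gpu_only = 21000 / gpu_rate

print(n_cpu, n_gpu)          # rate-proportional split
print(balanced < gpu_only)   # splitting beats running on the GPU alone
```

Re-measuring the rates every step lets the split track transient load on either device, which is why setting (c) above outperforms the GPU-plus-one-core setting (b).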
NASA Astrophysics Data System (ADS)
Gupta, Sourendu; Majumdar, Pushan
2018-07-01
We present the results of an effort to accelerate a Rational Hybrid Monte Carlo (RHMC) program for lattice quantum chromodynamics (QCD) simulation for 2 flavors of staggered fermions on multiple Kepler K20X GPUs distributed on different nodes of a Cray XC30. We do not use CUDA but adopt a higher level directive based programming approach using the OpenACC platform. The lattice QCD algorithm is known to be bandwidth bound; our timing results illustrate this clearly, and we discuss how this limits the parallelization gains. We achieve more than a factor three speed-up compared to the CPU only MPI program.
Dependability analysis of parallel systems using a simulation-based approach. M.S. Thesis
NASA Technical Reports Server (NTRS)
Sawyer, Darren Charles
1994-01-01
The analysis of dependability in large, complex, parallel systems executing real applications or workloads is examined in this thesis. To effectively demonstrate the wide range of dependability problems that can be analyzed through simulation, three case studies are presented. For each case, the organization of the simulation model used is outlined, and the results from simulated fault-injection experiments are explained, showing the usefulness of this method in dependability modeling of large parallel systems. The simulation models are constructed using DEPEND and C++. Where possible, methods to increase dependability are derived from the experimental results. Another interesting facet of all three cases is the presence of some kind of workload or application executing in the simulation while faults are injected. This provides a completely new dimension to this type of study, one not possible to model accurately with analytical approaches.
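The interaction between an executing workload and injected faults can be sketched with a toy simulation (illustrative model, not DEPEND's API): tasks run in batches on simulated processors, each running task fails with a probability set by an exponential fault arrival during its execution, and failed tasks are retried, lengthening the makespan.

```python
import math
import random

def simulate(n_procs, n_tasks, task_time, fault_rate, rng):
    """Makespan of a task workload under random fault injection.
    A task hit by a fault (Poisson rate per unit time) is re-run."""
    clock, pending = 0.0, n_tasks
    p_fail = 1.0 - math.exp(-fault_rate * task_time)
    while pending > 0:
        batch = min(n_procs, pending)
        failures = sum(1 for _ in range(batch) if rng.random() < p_fail)
        clock += task_time
        pending -= batch - failures     # failed tasks stay pending
    return clock

rng = random.Random(42)
no_faults = simulate(16, 64, 1.0, 0.0, rng)
with_faults = simulate(16, 64, 1.0, 0.2, rng)

print(no_faults)                 # 4 rounds of 16 tasks each
print(with_faults >= no_faults)  # retries can only lengthen the run
```

This is the dimension the thesis highlights: the dependability metric (here, makespan inflation) depends on the workload's structure, not just on the raw fault rate.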
Griera, Albert; Steinbach, Florian; Bons, Paul D.; Jansen, Daniela; Roessiger, Jens; Lebensohn, Ricardo A.
2017-01-01
The flow of glaciers and polar ice sheets is controlled by the highly anisotropic rheology of ice crystals that have hexagonal symmetry (ice lh). To improve our knowledge of ice sheet dynamics, it is necessary to understand how dynamic recrystallization (DRX) controls ice microstructures and rheology at different boundary conditions that range from pure shear flattening at the top to simple shear near the base of the sheets. We present a series of two-dimensional numerical simulations that couple ice deformation with DRX of various intensities, paying special attention to the effect of boundary conditions. The simulations show how similar orientations of c-axis maxima with respect to the finite deformation direction develop regardless of the amount of DRX and applied boundary conditions. In pure shear this direction is parallel to the maximum compressional stress, while it rotates towards the shear direction in simple shear. This leads to strain hardening and increased activity of non-basal slip systems in pure shear and to strain softening in simple shear. Therefore, it is expected that ice is effectively weaker in the lower parts of the ice sheets than in the upper parts. Strain-rate localization occurs in all simulations, especially in simple shear cases. Recrystallization suppresses localization, which necessitates the activation of hard, non-basal slip systems. This article is part of the themed issue ‘Microdynamics of ice’. PMID:28025295
NASA Astrophysics Data System (ADS)
Sanan, P.; Tackley, P. J.; Gerya, T.; Kaus, B. J. P.; May, D.
2017-12-01
StagBL is an open-source parallel solver and discretization library for geodynamic simulation, encapsulating and optimizing operations essential to staggered-grid finite volume Stokes flow solvers. It provides a parallel staggered-grid abstraction with a high-level interface in C and Fortran. On top of this abstraction, tools are available to define boundary conditions and interact with particle systems. Tools and examples to efficiently solve Stokes systems defined on the grid are provided in small (direct solver), medium (simple preconditioners), and large (block factorization and multigrid) model regimes. By working directly with leading application codes (StagYY, I3ELVIS, and LaMEM) and providing an API and examples to integrate with others, StagBL aims to become a community tool supplying scalable, portable, reproducible performance toward novel science in regional- and planet-scale geodynamics and planetary science. By implementing kernels used by many research groups beneath a uniform abstraction layer, the library will enable optimization for modern hardware, thus reducing community barriers to large- or extreme-scale parallel simulation on modern architectures. In particular, the library will include CPU-, Manycore-, and GPU-optimized variants of matrix-free operators and multigrid components. The common layer provides a framework upon which to introduce innovative new tools. StagBL will leverage p4est to provide distributed adaptive meshes, and incorporate a multigrid convergence analysis tool. These options, in addition to a wealth of solver options provided by an interface to PETSc, will make the most modern solution techniques available from a common interface. StagBL in turn provides a PETSc interface, DMStag, to its central staggered-grid abstraction. We present public version 0.5 of StagBL, including preliminary integration with application codes and demonstrations with its own demonstration application, StagBLDemo.
Central to StagBL is the notion of an uninterrupted pipeline from toy/teaching codes to high-performance, extreme-scale solves. StagBLDemo replicates the functionality of an advanced MATLAB-style regional geodynamics code, thus providing users with a concrete procedure to exceed the performance and scalability limitations of smaller-scale tools.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, W. J.; Zheng, Yue, E-mail: zhengy35@mail.sysu.edu.cn; Wu, C. M.
Thermodynamic calculation and phase-field simulation have been conducted to investigate the misfit strain-temperature phase diagrams, dielectric properties, and domain stability of asymmetric ferroelectric capacitors (FCs), considering the effects of dissimilar screening properties and work-function steps at the two interfaces. The distinct features of asymmetric FCs relative to their symmetric counterparts are revealed and discussed. Polar states with nonzero out-of-plane polarization parallel to the built-in field are found to form preferentially in asymmetric FCs. Meanwhile, the built-in field breaks the degeneracy of states with out-of-plane polarization in opposite directions. This leads to the necessity of redefining phases according to the bistability of the out-of-plane polarization. Moreover, the phase stability as well as the dielectric behavior can be significantly controlled by the properties of the electrodes, the misfit strain, and the temperature. The phase-field simulation results also show that polydomain instability can occur in asymmetric FCs, as the equivalence of domain stability in opposite directions is destroyed.
Propulsion of helical flagella near boundaries
NASA Astrophysics Data System (ADS)
Rodenborn, Bruce; Giesbrecht, Grant; Ni, Katha; Vock, Isaac
The presence of nearby boundaries is known to have dramatic effects on the swimming behavior of microorganisms because of the no-slip condition at the boundary. Microorganisms that use a helical flagellum experience forces both along the axis of the helix and in the direction perpendicular to the axis. These low-Reynolds-number boundary effects have primarily been studied using live bacteria and numerical simulations. However, small-scale measurements give limited information about the forces and torques on the microorganisms. Furthermore, numerical studies are approximate because they have generally used Stokeslet-based simulations with image Stokeslets to represent the effects of the boundaries. Instead, we directly measure the propulsion of macroscopic helical flagella with diameter 12 mm, using a fluid with viscosity 10^5 times that of water to ensure that the Reynolds number in the experiments is much less than unity, just as for bacteria. We measure the parallel and perpendicular forces as a function of boundary distance to determine the nonzero elements of the propulsive matrix for axial rotation near a boundary. We then compare our results to the theory and simulations of Lauga et al. and to biological measurements.
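The unbounded-fluid propulsive matrix being probed can be sketched from resistive force theory (an illustrative construction with assumed drag coefficients, not the experiment's calibration): for axial motion, thrust F and torque T relate to axial speed U and rotation rate ω through a symmetric 2×2 matrix.

```python
import numpy as np

def propulsive_matrix(R, pitch, L, c_par, c_perp):
    """Axial propulsive matrix of a rigid helix from resistive force
    theory: [F, T] = M @ [U, w], with per-unit-length drag
    coefficients c_par (tangential) and c_perp (normal)."""
    theta = np.arctan2(pitch, 2 * np.pi * R)       # helix pitch angle
    s, c = np.sin(theta), np.cos(theta)
    A = L * (c_par * s**2 + c_perp * c**2)         # axial drag
    B = L * R * (c_par - c_perp) * s * c           # thrust-rotation coupling
    C = L * R**2 * (c_par * c**2 + c_perp * s**2)  # rotational drag
    return np.array([[A, B], [B, C]])

# Illustrative values: 6 mm helix radius, 18 mm pitch, 10 cm filament,
# perpendicular drag ~1.6x parallel drag (typical for slender filaments).
M = propulsive_matrix(R=0.006, pitch=0.018, L=0.1, c_par=1.0, c_perp=1.6)

omega = 10.0                       # rad/s rotation rate
U = -M[0, 1] * omega / M[0, 0]     # force-free (free-swimming) speed
print(M[0, 1] < 0, U > 0)          # coupling drives forward propulsion
```

The experiment's point is that a nearby wall modifies these matrix elements; the measured boundary-distance dependence is what gets compared against image-Stokeslet predictions.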
NASA Astrophysics Data System (ADS)
Liakos, Anastasios; Malamataris, Nikolaos
2014-11-01
The topology and evolution of flow around a surface-mounted cubical object in three-dimensional channel flow is examined for low to moderate Reynolds numbers. Direct numerical simulations were performed via a home-made parallel finite element code. The computational domain was designed according to actual laboratory experimental conditions. Analysis of the results is performed using the three-dimensional theory of separation. Our findings indicate that a tornado-like vortex by the side of the cube is present at all Reynolds numbers for which the flow was simulated. A horseshoe vortex upstream from the cube formed at a Reynolds number of approximately 1266. Pressure distributions are shown along with three-dimensional images of the tornado-like vortex and the horseshoe vortex at selected Reynolds numbers. Finally, and in accordance with previous work, our results indicate that the upper limit of the Reynolds number for which steady-state results are physically realizable is roughly 2000. Financial support of author NM from the Office of Naval Research Global (ONRG-VSP, N62909-13-1-V016) is acknowledged.
Crystallographic Lattice Boltzmann Method
Namburi, Manjusha; Krithivasan, Siddharth; Ansumali, Santosh
2016-01-01
Current approaches to Direct Numerical Simulation (DNS) are computationally quite expensive for most realistic scientific and engineering applications of Fluid Dynamics such as automobiles or atmospheric flows. The Lattice Boltzmann Method (LBM), with its simplified kinetic descriptions, has emerged as an important tool for simulating hydrodynamics. In a heterogeneous computing environment, it is often preferred due to its flexibility and better parallel scaling. However, direct simulation of realistic applications, without the use of turbulence models, remains a distant dream even with highly efficient methods such as LBM. In LBM, a fictitious lattice with suitable isotropy in the velocity space is considered to recover Navier-Stokes hydrodynamics in the macroscopic limit. The same lattice is mapped onto a Cartesian grid for spatial discretization of the kinetic equation. In this paper, we present an inverted argument of the LBM, by making spatial discretization the central theme. We argue that the optimal spatial discretization for LBM is a Body Centered Cubic (BCC) arrangement of grid points. We illustrate an order-of-magnitude gain in efficiency for LBM and thus significant progress towards the feasibility of DNS for realistic flows. PMID:27251098
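The BCC arrangement advocated above is simply two interleaved simple-cubic grids, the second offset by half the cell edge in every direction; for the same point density, nearest neighbours sit closer than on a simple-cubic grid (√3/2 of the cube edge instead of 1), which improves the isotropy of the discretization. A small construction sketch:

```python
import numpy as np

def bcc_points(n, spacing=1.0):
    """BCC grid: a simple-cubic grid plus a body-centre sub-lattice
    offset by half the cell in x, y, and z."""
    ax = np.arange(n) * spacing
    cubic = np.stack(np.meshgrid(ax, ax, ax, indexing="ij"),
                     axis=-1).reshape(-1, 3)
    offset = cubic + 0.5 * spacing        # body-centre sub-lattice
    return np.vstack([cubic, offset])

pts = bcc_points(4)
print(len(pts))                           # 2 * 4**3 = 128 points

# Nearest-neighbour distance on BCC is sqrt(3)/2 of the cube edge.
d = np.linalg.norm(pts[None, :, :] - pts[:, None, :], axis=-1)
d[d == 0] = np.inf                        # ignore self-distances
print(np.isclose(d.min(), np.sqrt(3) / 2))  # True
```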
2013-08-01
This work models dislocations in the energetic molecular crystal RDX using the Large-Scale Atomic/Molecular Massively Parallel Simulator (LAMMPS), with the SB potential for HMX/RDX (3, 9), which includes dispersion and electrostatic interactions; constants for the SB potential are given in table 1.
Parallel Implementation of the Discontinuous Galerkin Method
NASA Technical Reports Server (NTRS)
Baggag, Abdalkader; Atkins, Harold; Keyes, David
1999-01-01
This paper describes a parallel implementation of the discontinuous Galerkin method. Discontinuous Galerkin is a spatially compact method that retains its accuracy and robustness on non-smooth unstructured grids and is well suited for time-dependent simulations. Several parallelization approaches are studied and evaluated. The most natural and symmetric of the approaches has been implemented in an object-oriented code used to simulate aeroacoustic scattering. The parallel implementation is MPI-based and has been tested on various parallel platforms such as the SGI Origin, IBM SP2, and clusters of SGI and Sun workstations. The scalability results presented for the SGI Origin show slightly superlinear speedup on a fixed-size problem due to cache effects.
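The scalability metrics behind that last claim are worth making explicit (the timings below are illustrative, not the paper's data): on a fixed-size problem, speedup S(p) = T(1)/T(p) and efficiency S(p)/p can exceed p and 1.0 respectively once the per-processor working set starts fitting in cache.

```python
def speedup(t1, tp):
    """Fixed-size (strong-scaling) speedup."""
    return t1 / tp

def efficiency(t1, tp, p):
    """Parallel efficiency; > 1.0 indicates superlinear speedup."""
    return speedup(t1, tp) / p

t1 = 100.0                               # serial time, seconds
timings = {8: 12.2, 16: 6.0, 32: 2.9}    # hypothetical parallel times
for p, tp in timings.items():
    print(p, round(speedup(t1, tp), 2), round(efficiency(t1, tp, p), 3))
# At 32 processors, 100/2.9 > 32, i.e. efficiency above 1.0 -- the
# slightly superlinear regime the abstract attributes to cache effects.
```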
Providing a parallel and distributed capability for JMASS using SPEEDES
NASA Astrophysics Data System (ADS)
Valinski, Maria; Driscoll, Jonathan; McGraw, Robert M.; Meyer, Bob
2002-07-01
The Joint Modeling And Simulation System (JMASS) is a Tri-Service simulation environment that supports engineering and engagement-level simulations. As JMASS is expanded to support other Tri-Service domains, the current set of modeling services must be expanded for High Performance Computing (HPC) applications by adding support for advanced time-management algorithms, parallel and distributed topologies, and high speed communications. By providing support for these services, JMASS can better address modeling domains requiring parallel, computationally intense calculations such as clutter, vulnerability, and lethality calculations, and underwater-based scenarios. A risk reduction effort implementing some HPC services for JMASS using the SPEEDES (Synchronous Parallel Environment for Emulation and Discrete Event Simulation) Simulation Framework has recently concluded. As an artifact of the JMASS-SPEEDES integration, not only can HPC functionality be brought to the JMASS program through SPEEDES, but an additional HLA-based capability can be demonstrated that further addresses interoperability issues. The JMASS-SPEEDES integration provided a means of adding HLA capability to preexisting JMASS scenarios through an implementation of the standard JMASS port communication mechanism that allows players to communicate.
NASA Technical Reports Server (NTRS)
Ierotheou, C.; Johnson, S.; Leggett, P.; Cross, M.; Evans, E.; Jin, Hao-Qiang; Frumkin, M.; Yan, J.; Biegel, Bryan (Technical Monitor)
2001-01-01
The shared-memory programming model is a very effective way to achieve parallelism on shared memory parallel computers. Historically, the lack of a programming standard for using directives and the rather limited scalability of the resulting performance have hindered the take-up of this programming model. Significant progress has since been made in hardware and software technologies; as a result, the performance of parallel programs with compiler directives has also improved. The introduction of an industrial standard for shared-memory programming with directives, OpenMP, has also addressed the issue of portability. In this study, we have extended the computer-aided parallelization toolkit (developed at the University of Greenwich) to automatically generate OpenMP-based parallel programs with nominal user assistance. We outline the way in which loop types are categorized and how efficient OpenMP directives can be defined and placed using the in-depth interprocedural analysis that is carried out by the toolkit. We also discuss the application of the toolkit to the NAS Parallel Benchmarks and a number of real-world application codes. This work not only demonstrates the great potential of using the toolkit to quickly parallelize serial programs but also the good performance achievable on up to 300 processors for hybrid message-passing and directive-based parallelizations.
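The core decision such a toolkit makes per loop is whether any iteration depends on another; only dependence-free loops get a work-sharing directive. The toy below is a deliberately naive stand-in for the interprocedural analysis a real toolkit performs (the function name and the crude pattern test are ours, not the toolkit's):

```python
import re

def emit_openmp(loop_var, body_lines):
    """Refuse to parallelise a loop whose body carries a dependence
    between iterations, crudely detected here as any a[i-1] / a[i+1]
    style reference; otherwise return an OpenMP work-sharing directive
    to place before the loop."""
    carried = re.compile(r"\[\s*%s\s*[+-]\s*\d+\s*\]" % re.escape(loop_var))
    if any(carried.search(line) for line in body_lines):
        return None                      # loop-carried dependence: leave serial
    return "#pragma omp parallel for"
```

A real analysis must also classify reductions, privatizable temporaries, and cross-procedure effects, which is exactly where the in-depth interprocedural machinery described above earns its keep.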
CUBE: Information-optimized parallel cosmological N-body simulation code
NASA Astrophysics Data System (ADS)
Yu, Hao-Ran; Pen, Ue-Li; Wang, Xin
2018-05-01
CUBE, written in Coarray Fortran, is a particle-mesh based parallel cosmological N-body simulation code. The memory usage of CUBE can approach as low as 6 bytes per particle. Particle-pairwise (PP) forces, cosmological neutrinos, and a spherical overdensity (SO) halo finder are included.
Efficient parallel simulation of CO2 geologic sequestration in saline aquifers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Keni; Doughty, Christine; Wu, Yu-Shu
2007-01-01
An efficient parallel simulator for large-scale, long-term CO2 geologic sequestration in saline aquifers has been developed. The parallel simulator is a three-dimensional, fully implicit model that solves large, sparse linear systems arising from discretization of the partial differential equations for mass and energy balance in porous and fractured media. The simulator is based on the ECO2N module of the TOUGH2 code and inherits all the process capabilities of the single-CPU TOUGH2 code, including a comprehensive description of the thermodynamics and thermophysical properties of H2O-NaCl-CO2 mixtures, modeling single- and/or two-phase isothermal or non-isothermal flow processes, two-phase mixtures, fluid phases appearing or disappearing, as well as salt precipitation or dissolution. The new parallel simulator uses MPI for parallel implementation, the METIS software package for simulation domain partitioning, and the iterative parallel linear solver package Aztec for solving linear equations by multiple processors. In addition, the parallel simulator has been implemented with an efficient communication scheme. Test examples show that a linear or super-linear speedup can be obtained on Linux clusters as well as on supercomputers. Because of the significant improvement in both simulation time and memory requirement, the new simulator provides a powerful tool for tackling larger-scale and more complex problems than can be solved by single-CPU codes. A high-resolution simulation example is presented that models buoyant convection, induced by a small increase in brine density caused by dissolution of CO2.
Bivariate Gaussian bridges: directional factorization of diffusion in Brownian bridge models.
Kranstauber, Bart; Safi, Kamran; Bartumeus, Frederic
2014-01-01
In recent years high resolution animal tracking data has become the standard in movement ecology. The Brownian Bridge Movement Model (BBMM) is a widely adopted approach to describe animal space use from such high resolution tracks. One of the underlying assumptions of the BBMM is isotropic diffusive motion between consecutive locations, i.e. invariant with respect to the direction. Here we propose to relax this often unrealistic assumption by separating the Brownian motion variance into two directional components, one parallel and one orthogonal to the direction of the motion. Our new model, the Bivariate Gaussian bridge (BGB), tracks movement heterogeneity across time. Using the BGB and identifying directed and non-directed movement within a trajectory resulted in more accurate utilisation distributions compared to dynamic Brownian bridges, especially for trajectories with a non-isotropic diffusion, such as directed movement or Lévy like movements. We evaluated our model with simulated trajectories and observed tracks, demonstrating that the improvement of our model scales with the directional correlation of a correlated random walk. We find that many of the animal trajectories do not adhere to the assumptions of the BBMM. The proposed model improves accuracy when describing the space use both in simulated correlated random walks as well as observed animal tracks. Our novel approach is implemented and available within the "move" package for R.
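The directional factorization described above can be illustrated with a toy analogue (ours, not the "move" package implementation): project each intermediate fix's deviation from the straight line between its neighbours onto components parallel and orthogonal to the direction of motion, and average the squares separately:

```python
import math

def directional_variances(track):
    """Toy analogue of the BGB idea: split the deviation of each
    intermediate fix from the expected bridge midpoint into a component
    parallel to the direction of motion and one orthogonal to it, and
    return the two mean squared deviations."""
    par, orth = [], []
    for (x0, y0), (x1, y1), (x2, y2) in zip(track, track[1:], track[2:]):
        dx, dy = x2 - x0, y2 - y0
        norm = math.hypot(dx, dy)
        if norm == 0:
            continue
        ux, uy = dx / norm, dy / norm          # unit direction of motion
        mx, my = (x0 + x2) / 2, (y0 + y2) / 2  # expected bridge midpoint
        ex, ey = x1 - mx, y1 - my              # deviation from midpoint
        p = ex * ux + ey * uy                  # parallel component
        o = -ex * uy + ey * ux                 # orthogonal component
        par.append(p * p)
        orth.append(o * o)
    n = len(par)
    return (sum(par) / n, sum(orth) / n) if n else (0.0, 0.0)
```

For directed movement the parallel variance dominates (speed fluctuations along the heading), while the orthogonal variance stays small, which is precisely the anisotropy the isotropic BBMM cannot capture.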
Parallelization of NAS Benchmarks for Shared Memory Multiprocessors
NASA Technical Reports Server (NTRS)
Waheed, Abdul; Yan, Jerry C.; Saini, Subhash (Technical Monitor)
1998-01-01
This paper presents our experiences of parallelizing the sequential implementation of NAS benchmarks using compiler directives on SGI Origin2000 distributed shared memory (DSM) system. Porting existing applications to new high performance parallel and distributed computing platforms is a challenging task. Ideally, a user develops a sequential version of the application, leaving the task of porting to new generations of high performance computing systems to parallelization tools and compilers. Due to the simplicity of programming shared-memory multiprocessors, compiler developers have provided various facilities to allow the users to exploit parallelism. Native compilers on SGI Origin2000 support multiprocessing directives to allow users to exploit loop-level parallelism in their programs. Additionally, supporting tools can accomplish this process automatically and present the results of parallelization to the users. We experimented with these compiler directives and supporting tools by parallelizing sequential implementation of NAS benchmarks. Results reported in this paper indicate that with minimal effort, the performance gain is comparable with the hand-parallelized, carefully optimized, message-passing implementations of the same benchmarks.
Parallel computing of physical maps--a comparative study in SIMD and MIMD parallelism.
Bhandarkar, S M; Chirravuri, S; Arnold, J
1996-01-01
Ordering clones from a genomic library into physical maps of whole chromosomes presents a central computational problem in genetics. Chromosome reconstruction via clone ordering is usually isomorphic to the NP-complete Optimal Linear Arrangement problem. Parallel SIMD and MIMD algorithms for simulated annealing based on Markov chain distribution are proposed and applied to the problem of chromosome reconstruction via clone ordering. Perturbation methods and problem-specific annealing heuristics are proposed and described. The SIMD algorithms are implemented on a 2048-processor MasPar MP-2 system, which is an SIMD 2-D toroidal mesh architecture, whereas the MIMD algorithms are implemented on an 8-processor Intel iPSC/860, which is an MIMD hypercube architecture. A comparative analysis of the various SIMD and MIMD algorithms is presented, in which the convergence, speedup, and scalability characteristics of the various algorithms are analyzed and discussed. On a fine-grained, massively parallel SIMD architecture with a low synchronization overhead, such as the MasPar MP-2, a parallel simulated annealing algorithm based on multiple periodically interacting searches performs best. For a coarse-grained MIMD architecture with high synchronization overhead, such as the Intel iPSC/860, a parallel simulated annealing algorithm based on multiple independent searches yields the best results. In either case, distribution of clonal data across multiple processors is shown to exacerbate the tendency of the parallel simulated annealing algorithm to get trapped in a local optimum.
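The multiple-independent-search strategy that wins on the MIMD machine is simple to sketch: each search runs its own cooling schedule from the same start, and the best final state wins. The code below is a generic toy (not the authors' clone-ordering implementation; `energy` and `neighbour` are placeholders supplied by the caller), with the searches run sequentially where an MIMD code would run one per processor:

```python
import math
import random

def independent_annealing(energy, neighbour, start, n_searches=8,
                          steps=500, t0=1.0, alpha=0.99, seed=42):
    """Multiple-independent-search simulated annealing: keep the best
    final state over n_searches geometric cooling schedules."""
    rng = random.Random(seed)
    best_state, best_e = start, energy(start)
    for _ in range(n_searches):
        state, e, t = start, energy(start), t0
        for _ in range(steps):
            cand = neighbour(state, rng)
            ce = energy(cand)
            # Metropolis acceptance: always take improvements, sometimes uphill moves
            if ce <= e or rng.random() < math.exp((e - ce) / t):
                state, e = cand, ce
            t *= alpha
        if e < best_e:
            best_state, best_e = state, e
    return best_state, best_e
```

Because the searches never communicate, this variant needs no synchronization at all, which is why it suits a high-latency MIMD machine better than the periodically interacting variant.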
NASA Astrophysics Data System (ADS)
Wu, Chensheng; Nelson, William; Davis, Christopher C.
2014-10-01
Plenoptic functions are functions that preserve all the necessary light field information of optical events. Theoretical work has demonstrated that geometric based plenoptic functions can serve equally well in the traditional wave propagation equation known as the "scalar stochastic Helmholtz equation". However, in addressing problems of 3D turbulence simulation, the dominant methods using phase screen models have limitations both in explaining the choice of parameters (on the transverse plane) in real-world measurements, and finding proper correlations between neighboring phase screens (the Markov assumption breaks down). Though possible corrections to phase screen models are still promising, the equivalent geometric approach based on plenoptic functions begins to show some advantages. In fact, in these geometric approaches, a continuous wave problem is reduced to discrete trajectories of rays. This allows for convenience in parallel computing and guarantees conservation of energy. Besides the pairwise independence of simulated rays, the assigned refractive index grids can be directly tested by temperature measurements with tiny thermoprobes combined with other parameters such as humidity level and wind speed. Furthermore, without loss of generality one can break the causal chain in phase screen models by defining regional refractive centers to allow rays that are less affected to propagate through directly. As a result, our work shows that the 3D geometric approach serves as an efficient and accurate method in assessing relevant turbulence problems with inputs of several environmental measurements and reasonable guesses (such as Cn^2 levels). This approach will facilitate analysis and possible corrections in lateral wave propagation problems, such as image de-blurring, prediction of laser propagation over long ranges, and improvement of free space optic communication systems.
In this paper, the plenoptic function model and relevant parallel algorithm computing will be presented, and its primary results and applications are demonstrated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zuo, Wangda; McNeil, Andrew; Wetter, Michael
2013-05-23
Building designers are increasingly relying on complex fenestration systems to reduce energy consumed for lighting and HVAC in low energy buildings. Radiance, a lighting simulation program, has been used to conduct daylighting simulations for complex fenestration systems. Depending on the configurations, the simulation can take hours or even days using a personal computer. This paper describes how to accelerate the matrix multiplication portion of a Radiance three-phase daylight simulation by conducting parallel computing on heterogeneous hardware of a personal computer. The algorithm was optimized and the computational part was implemented in parallel using OpenCL. The speed of the new approach was evaluated using various daylighting simulation cases on a multicore central processing unit and a graphics processing unit. Based on the measurements and analysis of the time usage for the Radiance daylighting simulation, further speedups can be achieved by using fast I/O devices and storing the data in a binary format.
Automatic Generation of OpenMP Directives and Its Application to Computational Fluid Dynamics Codes
NASA Technical Reports Server (NTRS)
Yan, Jerry; Jin, Haoqiang; Frumkin, Michael; Yan, Jerry (Technical Monitor)
2000-01-01
The shared-memory programming model is a very effective way to achieve parallelism on shared memory parallel computers. As great progress has been made in hardware and software technologies, the performance of parallel programs with compiler directives has improved substantially. The introduction of OpenMP directives, the industrial standard for shared-memory programming, has minimized the issue of portability. In this study, we have extended CAPTools, a computer-aided parallelization toolkit, to automatically generate OpenMP-based parallel programs with nominal user assistance. We outline techniques used in the implementation of the tool and discuss the application of this tool to the NAS Parallel Benchmarks and several computational fluid dynamics codes. This work demonstrates the great potential of using the tool to quickly port parallel programs and also achieve good performance that exceeds some of the commercial tools.
NASA Astrophysics Data System (ADS)
Warrier, M.; Bhardwaj, U.; Hemani, H.; Schneider, R.; Mutzke, A.; Valsakumar, M. C.
2015-12-01
We report on Molecular Dynamics (MD) simulations carried out in fcc Cu and bcc W using the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) code to study (i) the statistical variations in the number of interstitials and vacancies produced by energetic primary knock-on atoms (PKA) (0.1-5 keV) directed in random directions and (ii) the in-cascade cluster size distributions. It is seen that around 60-80 random directions have to be explored for the average number of displaced atoms to become steady in the case of fcc Cu, whereas for bcc W around 50-60 random directions need to be explored. The number of Frenkel pairs produced in the MD simulations is compared with that from the Binary Collision Approximation Monte Carlo (BCA-MC) code SDTRIM-SP and with the results from the NRT model. It is seen that a proper choice of the damage energy, i.e. the energy required to create a stable interstitial, is essential for the BCA-MC results to match the MD results. On the computational front, it is seen that in-situ processing avoids the need to input/output (I/O) several terabytes of atomic position data when exploring a large number of random directions, with no difference in run-time because the extra run-time spent processing data is offset by the time saved in I/O.
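The "explore random directions until the average is steady" procedure above can be sketched generically. The code below is our illustration, not the authors' workflow: `sample(direction)` stands in for one full MD cascade, and the stopping rule (relative change of the running mean over a window of consecutive draws) is a hypothetical simplification of whatever criterion the study used:

```python
import math
import random

def directions_until_steady(sample, tol=0.02, window=10, max_n=1000, seed=1):
    """Draw uniform random unit directions and accumulate sample(d)
    until the running mean changes by less than tol (relatively) over
    `window` consecutive additions."""
    rng = random.Random(seed)
    total, n, stable = 0.0, 0, 0
    mean_prev = None
    while n < max_n:
        # uniform random unit vector via normalised Gaussians
        v = (rng.gauss(0, 1), rng.gauss(0, 1), rng.gauss(0, 1))
        norm = math.sqrt(sum(c * c for c in v)) or 1.0
        d = tuple(c / norm for c in v)
        total += sample(d)
        n += 1
        mean = total / n
        if mean_prev is not None and abs(mean - mean_prev) <= tol * abs(mean_prev):
            stable += 1
            if stable >= window:
                return n, mean
        else:
            stable = 0
        mean_prev = mean
    return n, total / n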
Using parallel computing for the display and simulation of the space debris environment
NASA Astrophysics Data System (ADS)
Möckel, M.; Wiedemann, C.; Flegel, S.; Gelhaus, J.; Vörsmann, P.; Klinkrad, H.; Krag, H.
2011-07-01
Parallelism is becoming the leading paradigm in today's computer architectures. In order to take full advantage of this development, new algorithms have to be specifically designed for parallel execution while many old ones have to be upgraded accordingly. One field in which parallel computing has been firmly established for many years is computer graphics. Calculating and displaying three-dimensional computer generated imagery in real time requires complex numerical operations to be performed at high speed on a large number of objects. Since most of these objects can be processed independently, parallel computing is applicable in this field. Modern graphics processing units (GPUs) have become capable of performing millions of matrix and vector operations per second on multiple objects simultaneously. As a side project, a software tool is currently being developed at the Institute of Aerospace Systems that provides an animated, three-dimensional visualization of both actual and simulated space debris objects. Due to the nature of these objects it is possible to process them individually and independently from each other. Therefore, an analytical orbit propagation algorithm has been implemented to run on a GPU. By taking advantage of all its processing power a huge performance increase, compared to its CPU-based counterpart, could be achieved. For several years efforts have been made to harness this computing power for applications other than computer graphics. Software tools for the simulation of space debris are among those that could profit from embracing parallelism. With recently emerged software development tools such as OpenCL it is possible to transfer the new algorithms used in the visualization outside the field of computer graphics and implement them, for example, into the space debris simulation environment. This way they can make use of parallel hardware such as GPUs and Multi-Core-CPUs for faster computation.
In this paper the visualization software will be introduced, including a comparison between the serial and the parallel method of orbit propagation. Ways of how to use the benefits of the latter method for space debris simulation will be discussed. An introduction to OpenCL will be given as well as an exemplary algorithm from the field of space debris simulation.
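The key property exploited above is that each debris object can be propagated independently, so one GPU thread can own one object. The sketch below is our serial stand-in for such an analytical propagator, restricted to circular orbits for brevity (the function and the circular-orbit simplification are ours, not the tool's):

```python
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2

def propagate_circular(objects, t):
    """Advance each object's orbital phase by its mean motion times t.
    Objects are (semi_major_axis_m, initial_phase_rad) pairs. Each
    object is independent of all others, which is why this loop maps
    naturally onto one GPU thread per object."""
    out = []
    for a, phase0 in objects:
        n = math.sqrt(MU_EARTH / a ** 3)          # mean motion, rad/s
        out.append((a, (phase0 + n * t) % (2 * math.pi)))
    return out
```

In OpenCL the loop body would become the kernel, with the object array living in device memory; the absence of inter-object communication is what yields the large speedups reported.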
Xyce Parallel Electronic Simulator Users' Guide Version 6.7.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keiter, Eric R.; Aadithya, Karthik Venkatraman; Mei, Ting
This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors), including support for most popular parallel and serial computers; a differential-algebraic-equation (DAE) formulation, which better isolates the device model package from solver algorithms and allows one to develop new types of analysis without requiring the implementation of analysis-specific device models; device models that are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only); and object-oriented code design and implementation using modern coding practices. Xyce is a parallel code in the most general sense of the phrase -- a message-passing parallel implementation -- which allows it to run efficiently on a wide range of computing platforms, including serial, shared-memory, and distributed-memory parallel platforms. Attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows. The information herein is subject to change without notice. Copyright © 2002-2017 Sandia Corporation. All rights reserved.
Parallelizing alternating direction implicit solver on GPUs
USDA-ARS?s Scientific Manuscript database
We present a parallel Alternating Direction Implicit (ADI) solver on GPUs. Our implementation significantly improves existing implementations in two aspects. First, we address the scalability issue of existing Parallel Cyclic Reduction (PCR) implementations by eliminating their hardware resource con...
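For reference, the per-line tridiagonal solve at the heart of each ADI sweep is the serial Thomas algorithm, which PCR replaces with O(log n) parallel reduction steps. The sketch below is the serial baseline only (our code, not the paper's GPU implementation):

```python
def thomas(a, b, c, d):
    """Serial Thomas algorithm for a tridiagonal system.
    a: sub-diagonal (a[0] unused), b: diagonal,
    c: super-diagonal (c[-1] unused), d: right-hand side.
    Returns the solution vector x with b*x = d in the tridiagonal sense."""
    n = len(b)
    cp, dp = c[:], d[:]
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

The forward/backward recurrences above are inherently sequential per line, which is exactly why GPU ADI solvers switch to cyclic-reduction variants that expose parallelism within each line.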
NASA Astrophysics Data System (ADS)
Pinto, Thiago M.; Schilstra, Maria J.; Steuber, Volker; Roque, Antonio C.
2015-12-01
Long-term plasticity at parallel fibre (PF)-Purkinje cell (PC) synapses is thought to mediate cerebellar motor learning. It is known that calcium-calmodulin dependent protein kinase II (CaMKII) is essential for plasticity in the cerebellum. Recently, Van Woerden et al. demonstrated that the β isoform of CaMKII regulates the bidirectional inversion of PF-PC plasticity. Because the cellular events that underlie these experimental findings are still poorly understood, our work aims at unravelling how β CaMKII controls the direction of plasticity at PF-PC synapses. We developed a bidirectional plasticity model that replicates the experimental observations by Van Woerden et al. Simulation results obtained from this model indicate the mechanisms that underlie the bidirectional inversion of cerebellar plasticity. As suggested by Van Woerden et al., the filamentous actin binding enables β CaMKII to regulate the bidirectional plasticity at PF-PC synapses. Our model suggests that the reversal of long-term plasticity in PCs is based on a combination of mechanisms that occur at different calcium concentrations.
Radiofrequency pulse design in parallel transmission under strict temperature constraints.
Boulant, Nicolas; Massire, Aurélien; Amadon, Alexis; Vignaud, Alexandre
2014-09-01
To gain radiofrequency (RF) pulse performance by directly addressing the temperature constraints, as opposed to the specific absorption rate (SAR) constraints, in parallel transmission at ultra-high field. The magnitude least-squares RF pulse design problem under hard SAR constraints was solved repeatedly by using the virtual observation points and an active-set algorithm. The SAR constraints were updated at each iteration based on the result of a thermal simulation. The numerical study was performed for an SAR-demanding and simplified time of flight sequence using B1 and ΔB0 maps obtained in vivo on a human brain at 7T. The proposed adjustment of the SAR constraints combined with an active-set algorithm provided higher flexibility in RF pulse design within a reasonable time. The modifications of those constraints acted directly upon the thermal response as desired. Although further confidence in the thermal models is needed, this study shows that RF pulse design under strict temperature constraints is within reach, allowing better RF pulse performance and faster acquisitions at ultra-high fields at the cost of higher sequence complexity. Copyright © 2013 Wiley Periodicals, Inc.
Developing parallel GeoFEST(P) using the PYRAMID AMR library
NASA Technical Reports Server (NTRS)
Norton, Charles D.; Lyzenga, Greg; Parker, Jay; Tisdale, Robert E.
2004-01-01
The PYRAMID parallel unstructured adaptive mesh refinement (AMR) library has been coupled with the GeoFEST geophysical finite element simulation tool to support parallel active tectonics simulations. Specifically, we have demonstrated modeling of coseismic and postseismic surface displacement due to a simulated earthquake for the Landers system of interacting faults in Southern California. The new software demonstrated a 25-times resolution improvement and a 4-times reduction in time to solution over the sequential baseline milestone case. Simulations on workstations using a few tens of thousands of stress-displacement finite elements can now be expanded to multiple millions of elements with greater than 98% scaled efficiency on various parallel platforms over many hundreds of processors. Our most recent work has demonstrated that we can dynamically adapt the computational grid as stress grows on a fault. In this paper, we will describe the major issues and challenges associated with coupling these two programs to create GeoFEST(P). Performance and visualization results will also be described.
Parallel Discrete Molecular Dynamics Simulation With Speculation and In-Order Commitment
Khan, Md. Ashfaquzzaman; Herbordt, Martin C.
2011-01-01
Discrete molecular dynamics simulation (DMD) uses simplified and discretized models enabling simulations to advance by event rather than by timestep. DMD is an instance of discrete event simulation and so is difficult to scale: even in this multi-core era, all reported DMD codes are serial. In this paper we discuss the inherent difficulties of scaling DMD and present our method of parallelizing DMD through event-based decomposition. Our method is microarchitecture inspired: speculative processing of events exposes parallelism, while in-order commitment ensures correctness. We analyze the potential of this parallelization method for shared-memory multiprocessors. Achieving scalability required extensive experimentation with scheduling and synchronization methods to mitigate serialization. The speed-up achieved for a variety of system sizes and complexities is nearly 6× on an 8-core and over 9× on a 12-core processor. We present and verify analytical models that account for the achieved performance as a function of available concurrency and architectural limitations. PMID:21822327
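The advance-by-event core that makes DMD hard to scale is an ordinary discrete-event loop over a priority queue. The minimal sketch below (ours, and serial; the paper's contribution is the speculative parallel execution with in-order commit that this loop only hints at in its commit ordering) shows events committed strictly in timestamp order:

```python
import heapq

def run_events(initial_events, handler, t_end):
    """Minimal discrete-event loop: the simulation advances from event
    to event rather than by fixed timestep. handler(time, payload)
    returns a list of (time, payload) follow-on events. The sequence
    counter breaks ties so heap ordering is deterministic; in the
    parallel design, this strict ordering is enforced by the in-order
    commit stage while events are *processed* speculatively ahead of it."""
    heap, seq = [], 0
    for t, p in initial_events:
        heapq.heappush(heap, (t, seq, p)); seq += 1
    committed = []
    while heap:
        t, _, p = heapq.heappop(heap)
        if t > t_end:
            break
        committed.append((t, p))           # commit in timestamp order
        for nt, npay in handler(t, p):
            heapq.heappush(heap, (nt, seq, npay)); seq += 1
    return committed
```

Because each committed event may invalidate speculatively processed later events, the parallel version must detect conflicts and re-execute, much like a processor squashing mis-speculated instructions.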
NASA Technical Reports Server (NTRS)
Kasahara, Hironori; Honda, Hiroki; Narita, Seinosuke
1989-01-01
Parallel processing of real-time dynamic systems simulation on a multiprocessor system named OSCAR is presented. In the simulation of dynamic systems, generally, the same calculations are repeated at every time step. However, Do-all or Do-across techniques cannot be applied to parallel processing of the simulation, since there exist data dependencies from the end of an iteration to the beginning of the next iteration, and furthermore data input and data output are required every sampling period. Therefore, parallelism inside the calculation required for a single time step, or a large basic block consisting of arithmetic assignment statements, must be used. In the proposed method, near-fine-grain tasks, each of which consists of one or more floating point operations, are generated to extract the parallelism from the calculation and assigned to processors using optimal static scheduling at compile time, in order to reduce the large run-time overhead caused by the use of near-fine-grain tasks. The practicality of the scheme is demonstrated on OSCAR (Optimally SCheduled Advanced multiprocessoR), which has been developed to exploit the advantages of static scheduling algorithms to the maximum extent.
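Compile-time static scheduling of the kind described above can be illustrated with classic list scheduling: take tasks in a topological order and place each on the processor where it can start earliest, respecting predecessor finish times. This is a generic textbook sketch, not OSCAR's actual (optimal) scheduler:

```python
def list_schedule(tasks, n_procs):
    """Greedy list scheduling of a task DAG onto n_procs processors.
    tasks: list of (name, duration, predecessors) in topological order.
    Returns {name: (processor, start_time)} and the makespan."""
    finish = {}
    proc_free = [0.0] * n_procs
    placement = {}
    for name, dur, preds in tasks:
        ready_at = max((finish[p] for p in preds), default=0.0)
        # choose the processor that lets this task start earliest
        proc = min(range(n_procs), key=lambda q: max(proc_free[q], ready_at))
        start = max(proc_free[proc], ready_at)
        proc_free[proc] = start + dur
        finish[name] = start + dur
        placement[name] = (proc, start)
    return placement, max(finish.values())
```

Because all placement decisions are made at compile time, the run-time system pays no scheduling overhead per task, which is what makes near-fine-grain tasks viable at all.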
A parallel implementation of an off-lattice individual-based model of multicellular populations
NASA Astrophysics Data System (ADS)
Harvey, Daniel G.; Fletcher, Alexander G.; Osborne, James M.; Pitt-Francis, Joe
2015-07-01
As computational models of multicellular populations include ever more detailed descriptions of biophysical and biochemical processes, the computational cost of simulating such models limits their ability to generate novel scientific hypotheses and testable predictions. While developments in microchip technology continue to increase the power of individual processors, parallel computing offers an immediate increase in available processing power. To make full use of parallel computing technology, it is necessary to develop specialised algorithms. To this end, we present a parallel algorithm for a class of off-lattice individual-based models of multicellular populations. The algorithm divides the spatial domain between computing processes and comprises communication routines that ensure the model is correctly simulated on multiple processors. The parallel algorithm is shown to accurately reproduce the results of a deterministic simulation performed using a pre-existing serial implementation. We test the scaling of computation time, memory use and load balancing as more processes are used to simulate a cell population of fixed size. We find approximate linear scaling of both speed-up and memory consumption on up to 32 processor cores. Dynamic load balancing is shown to provide speed-up for non-regular spatial distributions of cells in the case of a growing population.
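The spatial decomposition described above can be sketched in one dimension (illustrative only; the published algorithm handles 3D geometry and inter-process communication): each process owns a strip of the domain plus a "halo" of copies of neighbouring cells within the interaction radius of its strip boundaries, so local force calculations remain correct.

```python
def decompose_with_halos(cells, n_procs, xmax, radius):
    """Divide cell x-positions into equal strips, one per process, and build
    the halo each process needs: copies of cells owned by neighbours that lie
    within the interaction radius of the strip boundaries."""
    width = xmax / n_procs
    owned = [[] for _ in range(n_procs)]
    for c in cells:
        owned[min(int(c / width), n_procs - 1)].append(c)
    halos = []
    for p in range(n_procs):
        lo, hi = p * width, (p + 1) * width
        halo = [c for c in cells
                if (lo - radius <= c < lo) or (hi <= c < hi + radius)]
        halos.append(halo)
    return owned, halos
```

After each time step the halos are refreshed (the communication routines mentioned in the abstract); dynamic load balancing would instead move the strip boundaries so each process owns a similar number of cells.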
NASA Astrophysics Data System (ADS)
Ferrando, N.; Gosálvez, M. A.; Cerdá, J.; Gadea, R.; Sato, K.
2011-03-01
Presently, dynamic surface-based models are required to contain increasingly larger numbers of points and to propagate them over longer time periods. For large numbers of surface points, the octree data structure can be used as a balance between low memory occupation and relatively rapid access to the stored data. For evolution rules that depend on neighborhood states, extended simulation periods can be obtained by using simplified atomistic propagation models, such as the Cellular Automata (CA). This method, however, has an intrinsic parallel updating nature and the corresponding simulations are highly inefficient when performed on classical Central Processing Units (CPUs), which are designed for the sequential execution of tasks. In this paper, a series of guidelines is presented for the efficient adaptation of octree-based, CA simulations of complex, evolving surfaces into massively parallel computing hardware. A Graphics Processing Unit (GPU) is used as a cost-efficient example of the parallel architectures. For the actual simulations, we consider the surface propagation during anisotropic wet chemical etching of silicon as a computationally challenging process with a wide-spread use in microengineering applications. A continuous CA model that is intrinsically parallel in nature is used for the time evolution. Our study strongly indicates that parallel computations of dynamically evolving surfaces simulated using CA methods are significantly benefited by the incorporation of octrees as support data structures, substantially decreasing the overall computational time and memory usage.
TOUGH2_MP: A parallel version of TOUGH2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Keni; Wu, Yu-Shu; Ding, Chris
2003-04-09
TOUGH2_MP is a massively parallel version of TOUGH2. It was developed for running on distributed-memory parallel computers to handle large simulation problems that cannot be solved by the standard, single-CPU TOUGH2 code. The new code implements an efficient massively parallel scheme while preserving the full capability and flexibility of the original TOUGH2 code. The new software uses the METIS software package for grid partitioning and the AZTEC software package for linear-equation solving. The standard Message Passing Interface is adopted for communication among processors. Numerical performance of the current version has been tested on CRAY T3E and IBM RS/6000 SP platforms. In addition, the parallel code has been successfully applied to real field problems of multi-million-cell simulations of three-dimensional multiphase and multicomponent fluid and heat flow, as well as solute transport. In this paper, we review the development of TOUGH2_MP and discuss its basic features, modules, and applications.
Parallel VLSI architecture emulation and the organization of APSA/MPP
NASA Technical Reports Server (NTRS)
Odonnell, John T.
1987-01-01
The Applicative Programming System Architecture (APSA) combines an applicative language interpreter with a novel parallel computer architecture that is well suited for Very Large Scale Integration (VLSI) implementation. The Massively Parallel Processor (MPP) can simulate VLSI circuits by allocating one processing element in its square array to an area on a square VLSI chip. As long as there are not too many long data paths, the MPP can simulate a VLSI clock cycle very rapidly. The APSA circuit contains a binary tree with a few long paths and many short ones. A skewed H-tree layout allows every processing element to simulate a leaf cell and up to four tree nodes, with no loss in parallelism. Emulation of a key APSA algorithm on the MPP resulted in performance 16,000 times faster than a VAX. This speed will make it possible for the APSA language interpreter to run fast enough to support research in parallel list processing algorithms.
NASA Astrophysics Data System (ADS)
Lojacono, Xavier; Richard, Marie-Hélène; Ley, Jean-Luc; Testa, Etienne; Ray, Cédric; Freud, Nicolas; Létang, Jean Michel; Dauvergne, Denis; Maxim, Voichiţa; Prost, Rémy
2013-10-01
The Compton camera is a relevant imaging device for the detection of prompt photons produced by nuclear fragmentation in hadrontherapy. It may allow an improvement in detection efficiency compared to a standard gamma-camera but requires more sophisticated image reconstruction techniques. In this work, we simulate low statistics acquisitions from a point source having a broad energy spectrum compatible with hadrontherapy. We then reconstruct the image of the source with a recently developed filtered backprojection algorithm, a line-cone approach and an iterative List Mode Maximum Likelihood Expectation Maximization algorithm. Simulated data come from a Compton camera prototype designed for hadrontherapy online monitoring. Results indicate that the achievable resolution in directions parallel to the detector, that may include the beam direction, is compatible with the quality control requirements. With the prototype under study, the reconstructed image is elongated in the direction orthogonal to the detector. However this direction is of less interest in hadrontherapy where the first requirement is to determine the penetration depth of the beam in the patient. Additionally, the resolution may be recovered using a second camera.
Numerical Studies of Boundary-Layer Receptivity
NASA Technical Reports Server (NTRS)
Reed, Helen L.
1995-01-01
Direct numerical simulations (DNS) of the acoustic receptivity process on a semi-infinite flat plate with a modified-super-elliptic (MSE) leading edge are performed. The incompressible Navier-Stokes equations are solved in stream-function/vorticity form in a general curvilinear coordinate system. The steady basic-state solution is found by solving the governing equations using an alternating direction implicit (ADI) procedure which takes advantage of the parallelism present in line-splitting techniques. Time-harmonic oscillations of the farfield velocity are applied as unsteady boundary conditions to the unsteady disturbance equations. An efficient time-harmonic scheme is used to produce the disturbance solutions. Buffer-zone techniques have been applied to eliminate wave reflection from the outflow boundary. The spatial evolution of Tollmien-Schlichting (T-S) waves is analyzed and compared with experiment and theory. The effects of nose-radius, frequency, Reynolds number, angle of attack, and amplitude of the acoustic wave are investigated. This work is being performed in conjunction with the experiments at the Arizona State University Unsteady Wind Tunnel under the direction of Professor William Saric. The simulations are of the same configuration and parameters used in the wind-tunnel experiments.
Research on parallel algorithm for sequential pattern mining
NASA Astrophysics Data System (ADS)
Zhou, Lijuan; Qin, Bai; Wang, Yu; Hao, Zhongxiao
2008-03-01
Sequential pattern mining is the mining of frequent sequences related to time or other orders from a sequence database. Its initial motivation was to discover laws of customer purchasing over a time period by finding the frequent sequences. In recent years, sequential pattern mining has become an important direction of data mining, and its application field is no longer confined to business databases, extending to new data sources such as the Web and advanced scientific fields such as DNA analysis. The data of sequential pattern mining are characterized by massive data volume and distributed storage. Most existing sequential pattern mining algorithms have not considered these characteristics together. According to the traits mentioned above, and combining parallel theory, this paper puts forward a new distributed parallel algorithm, SPP (Sequential Pattern Parallel). The algorithm abides by the principle of pattern reduction and utilizes a divide-and-conquer strategy for parallelization. The first parallel task is to construct frequent item sets by applying the frequent concept and search-space partition theory, and the second is to construct frequent sequences using depth-first search at each processor. The algorithm only needs to access the database twice and does not generate candidate sequences, which reduces access time and improves mining efficiency. Using a random data generation procedure and several designed information structures, this paper simulates the SPP algorithm in a concrete parallel environment and implements the AprioriAll algorithm for comparison. The experiments demonstrate that, compared with AprioriAll, the SPP algorithm has an excellent speedup factor and efficiency.
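The two-phase, prefix-partitioned mining idea can be sketched as follows (an illustrative reconstruction, not the SPP implementation): frequent 1-items define prefix classes of the search space, the classes are dealt to processors, and each processor grows its prefixes depth-first against the database.

```python
from collections import Counter

def mine_partitioned(db, min_sup, n_procs):
    """Sketch of divide-and-conquer sequential pattern mining: partition the
    search space by frequent 1-items (one prefix class per item), deal the
    classes round-robin to processors, and grow each prefix depth-first."""
    freq = {i for i, c in Counter(i for seq in db for i in set(seq)).items()
            if c >= min_sup}
    classes = sorted(freq)
    shards = [classes[p::n_procs] for p in range(n_procs)]

    def support(pat):
        def occurs(seq):
            pos = 0
            for x in pat:
                try:
                    pos = seq.index(x, pos) + 1
                except ValueError:
                    return False
            return True
        return sum(occurs(s) for s in db)

    def dfs(prefix, out):
        out.append(tuple(prefix))
        for i in classes:
            if support(prefix + [i]) >= min_sup:
                dfs(prefix + [i], out)

    results = []
    for shard in shards:      # in the parallel version: one processor per shard
        for item in shard:
            dfs([item], results)
    return sorted(results)
```

Each shard can be mined with no coordination beyond the initial frequent-item count, which is what makes the prefix partition attractive for distributed storage.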
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vydyanathan, Naga; Krishnamoorthy, Sriram; Sabin, Gerald M.
2009-08-01
Complex parallel applications can often be modeled as directed acyclic graphs of coarse-grained application tasks with dependences. These applications exhibit both task- and data-parallelism, and combining the two (also called mixed parallelism) has been shown to be an effective model for their execution. In this paper, we present an algorithm to compute the appropriate mix of task- and data-parallelism required to minimize the parallel completion time (makespan) of these applications. In other words, our algorithm determines the set of tasks that should be run concurrently and the number of processors to be allocated to each task. The processor allocation and scheduling decisions are made in an integrated manner and are based on several factors such as the structure of the task graph, the runtime estimates and scalability characteristics of the tasks, and the inter-task data communication volumes. A locality-conscious scheduling strategy is used to improve inter-task data reuse. Evaluation through simulations and actual executions of task graphs derived from real applications, as well as synthetic graphs, shows that our algorithm consistently generates schedules with lower makespan than CPR and CPA, two previously proposed scheduling algorithms. Our algorithm also produces schedules that have lower makespan than pure task- and data-parallel schedules. For task graphs with known optimal schedules or lower bounds on the makespan, our algorithm generates schedules that are closer to the optima than other scheduling approaches.
A real-time, dual processor simulation of the rotor system research aircraft
NASA Technical Reports Server (NTRS)
Mackie, D. B.; Alderete, T. S.
1977-01-01
A real-time, man-in-the-loop simulation of the rotor system research aircraft (RSRA) was conducted. The unique feature of this simulation was that two digital computers were used in parallel to solve the equations of the RSRA mathematical model. The design, development, and implementation of the simulation are documented. Program validation is discussed, and examples of data recordings are given. This simulation provided an important research tool for the RSRA project in terms of safe and cost-effective design analysis. In addition, valuable knowledge concerning parallel processing and a powerful simulation hardware and software system was gained.
The effect of size, orientation and alloying on the deformation of AZ31 nanopillars
NASA Astrophysics Data System (ADS)
Aitken, Zachary H.; Fan, Haidong; El-Awady, Jaafar A.; Greer, Julia R.
2015-03-01
We conducted uniaxial compression of single crystalline Mg alloy, AZ31 (Al 3 wt% and Zn 1 wt%) nanopillars with diameters between 300 and 5000 nm with two distinct crystallographic orientations: (1) along the [0001] c-axis and (2) at an acute angle away from the c-axis, nominally oriented for basal slip. We observe single slip deformation for sub-micron samples nominally oriented for basal slip, with the deformation commencing via a single set of parallel shear offsets. Samples compressed along the c-axis display an increase in yield strength compared to basal samples as well as significant hardening, with the deformation being mostly homogeneous. We find that the "smaller is stronger" size effect in single crystals dominates any improvement in strength that may have arisen from solid solution strengthening. We employ 3D discrete dislocation dynamics (DDD) to simulate compression along the [0001] and [112̄2] directions to elucidate the mechanisms of slip and evolution of dislocation microstructure. These simulations show qualitatively similar stress-strain signatures to the experimentally obtained stress-strain data. Simulations of compression parallel to the [112̄2] direction reveal the activation and motion of only ⟨a⟩-type dislocations and virtually no dislocation junction formation. Computations of compression along [0001] show the activation and motion of both
Particle tracking for selected groundwater wells in the lower Yakima River Basin, Washington
Bachmann, Matthew P.
2015-10-21
Generalized groundwater-flow directions in unconsolidated basin-fill deposits were towards the Yakima River, which acts as a local sink for shallow groundwater, and roughly parallel to topographic gradients. Particles backtracked from shallower aquifer locations traveled shorter distances before reaching the water table than particles from deeper locations. Flowpaths for particles starting at wells completed in the basalt units underlying the basin-fill deposits sometimes differed from those for wells with similar lateral locations but shallower depths. In cases where backtracking particles reached geologic structures simulated as flow barriers, abrupt changes in direction in some particle pathlines suggest significant changes in simulated hydraulic gradients that may not accurately reflect actual conditions. Most groundwater wells sampled had associated zones of contribution within the Toppenish/Benton subbasin between the well and the nearest subbasin margin, but interpretation of these results for any specific well is likely to be complicated by the assumptions and simplifications inherent in the model construction process. Delineated zones of contribution for individual wells are sensitive to the depths assigned to the screened interval of the well, resulting in simulated areal extents of the zones of contribution to a discharging well that are elongated in the direction of groundwater flow.
Parallel Plate System for Collecting Data Used to Determine Viscosity
NASA Technical Reports Server (NTRS)
Ethridge, Edwin C. (Inventor); Kaukler, William (Inventor)
2013-01-01
A parallel-plate system collects data used to determine viscosity. A first plate is coupled to a translator so that the first plate can be moved along a first direction. A second plate has a pendulum device coupled thereto such that the second plate is suspended above and parallel to the first plate. The pendulum device constrains movement of the second plate to a second direction that is aligned with the first direction and is substantially parallel thereto. A force measuring device is coupled to the second plate for measuring force along the second direction caused by movement of the second plate.
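For a Newtonian fluid, data from such a system determine viscosity from the measured drag force, plate area, plate gap, and relative plate velocity. The standard parallel-plate relation (not specific to this invention) is η = (F/A)/(v/d):

```python
def viscosity(force_N, gap_m, area_m2, velocity_m_s):
    """Newtonian viscosity from parallel-plate drag data:
    shear stress tau = F/A, shear rate gamma_dot = v/d, eta = tau/gamma_dot.
    Result is in Pa*s when inputs are in SI units."""
    shear_stress = force_N / area_m2        # Pa
    shear_rate = velocity_m_s / gap_m       # 1/s
    return shear_stress / shear_rate
```

For example, a 0.1 N force on a 0.01 m² plate moving at 0.1 m/s across a 1 mm gap corresponds to a viscosity of 0.1 Pa·s.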
Capabilities of Fully Parallelized MHD Stability Code MARS
NASA Astrophysics Data System (ADS)
Svidzinski, Vladimir; Galkin, Sergei; Kim, Jin-Soo; Liu, Yueqiang
2016-10-01
Results of full parallelization of the plasma stability code MARS will be reported. MARS calculates eigenmodes in 2D axisymmetric toroidal equilibria in MHD-kinetic plasma models. Parallel version of MARS, named PMARS, has been recently developed at FAR-TECH. Parallelized MARS is an efficient tool for simulation of MHD instabilities with low, intermediate and high toroidal mode numbers within both fluid and kinetic plasma models, implemented in MARS. Parallelization of the code included parallelization of the construction of the matrix for the eigenvalue problem and parallelization of the inverse vector iterations algorithm, implemented in MARS for the solution of the formulated eigenvalue problem. Construction of the matrix is parallelized by distributing the load among processors assigned to different magnetic surfaces. Parallelization of the solution of the eigenvalue problem is made by repeating steps of the MARS algorithm using parallel libraries and procedures. Parallelized MARS is capable of calculating eigenmodes with significantly increased spatial resolution: up to 5,000 adapted radial grid points with up to 500 poloidal harmonics. Such resolution is sufficient for simulation of kink, tearing and peeling-ballooning instabilities with physically relevant parameters. Work is supported by the U.S. DOE SBIR program.
Fully Parallel MHD Stability Analysis Tool
NASA Astrophysics Data System (ADS)
Svidzinski, Vladimir; Galkin, Sergei; Kim, Jin-Soo; Liu, Yueqiang
2015-11-01
Progress on full parallelization of the plasma stability code MARS will be reported. MARS calculates eigenmodes in 2D axisymmetric toroidal equilibria in MHD-kinetic plasma models. It is a powerful tool for studying MHD and MHD-kinetic instabilities and is widely used by the fusion community. The parallel version of MARS is intended for simulations on local parallel clusters. It will be an efficient tool for simulation of MHD instabilities with low, intermediate and high toroidal mode numbers within both the fluid and kinetic plasma models already implemented in MARS. Parallelization of the code includes parallelization of the construction of the matrix for the eigenvalue problem and parallelization of the inverse iterations algorithm, implemented in MARS for the solution of the formulated eigenvalue problem. Construction of the matrix is parallelized by distributing the load among processors assigned to different magnetic surfaces. Parallelization of the solution of the eigenvalue problem is made by repeating steps of the present MARS algorithm using parallel libraries and procedures. Results of MARS parallelization and of the development of a new fixed-boundary equilibrium code adapted for MARS input will be reported. Work is supported by the U.S. DOE SBIR program.
ASC-ATDM Performance Portability Requirements for 2015-2019
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edwards, Harold C.; Trott, Christian Robert
This report outlines the research, development, and support requirements for the Advanced Simulation and Computing (ASC) Advanced Technology, Development, and Mitigation (ATDM) Performance Portability (a.k.a. Kokkos) project for 2015-2019. The research and development (R&D) goal for Kokkos (v2) has been to create and demonstrate a thread-parallel programming model and standard C++ library-based implementation that enables performance portability across diverse manycore architectures such as multicore CPU, Intel Xeon Phi, and NVIDIA Kepler GPU. This R&D goal has been achieved for algorithms that use data parallel patterns including parallel-for, parallel-reduce, and parallel-scan. Current R&D is focusing on hierarchical parallel patterns such as a directed acyclic graph (DAG) of asynchronous tasks where each task contains nested data parallel algorithms. This five-year plan includes the R&D required to fully and performance-portably exploit thread parallelism across current and anticipated next-generation platforms (NGP). The Kokkos library is being evaluated by many projects exploring algorithms and code design for NGP. Some production libraries and applications, such as Trilinos and LAMMPS, have already committed to Kokkos as their foundation for manycore parallelism and performance portability. These five-year requirements include the support required for current and anticipated ASC projects to be effective and productive in their use of Kokkos on NGP. The greatest risk to the success of Kokkos and of ASC projects relying upon Kokkos is a lack of staffing resources to support Kokkos to the degree needed by these ASC projects. This support includes up-to-date tutorials, documentation, multi-platform (hardware and software stack) testing, minor feature enhancements, thread-scalable algorithm consulting, and managing collaborative R&D.
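The three data-parallel patterns named above have simple serial reference semantics. The sketch below states those contracts in Python for clarity (Kokkos itself is a C++ library; this is only an illustration of what each pattern promises, and why it parallelizes):

```python
from functools import reduce
from itertools import accumulate
from operator import add

def parallel_for(n, body):
    """Pattern contract: each index is independent, so a runtime such as
    Kokkos may execute the body across threads in any order."""
    for i in range(n):
        body(i)

def parallel_reduce(xs, op=add, init=0):
    """Combine elements with an associative op; associativity is what lets
    a runtime split the reduction into per-thread partial results."""
    return reduce(op, xs, init)

def parallel_scan(xs, op=add, init=0):
    """Inclusive prefix scan; parallelizable in logarithmic depth via the
    classic up-sweep/down-sweep formulation."""
    return list(accumulate(xs, op, initial=init))[1:]
```

Hierarchical patterns (the DAG-of-tasks work mentioned above) nest these primitives: each task in the graph runs one of the data-parallel patterns over its own index range.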
NASA Astrophysics Data System (ADS)
Yu, Leiming; Nina-Paravecino, Fanny; Kaeli, David; Fang, Qianqian
2018-01-01
We present a highly scalable Monte Carlo (MC) three-dimensional photon transport simulation platform designed for heterogeneous computing systems. Through the development of a massively parallel MC algorithm using the Open Computing Language framework, this research extends our existing graphics processing unit (GPU)-accelerated MC technique to a highly scalable vendor-independent heterogeneous computing environment, achieving significantly improved performance and software portability. A number of parallel computing techniques are investigated to achieve portable performance over a wide range of computing hardware. Furthermore, multiple thread-level and device-level load-balancing strategies are developed to obtain efficient simulations using multiple central processing units and GPUs.
Settgast, Randolph R.; Fu, Pengcheng; Walsh, Stuart D. C.; ...
2016-09-18
This study describes a fully coupled finite element/finite volume approach for simulating field-scale hydraulically driven fractures in three dimensions, using massively parallel computing platforms. The proposed method is capable of capturing realistic representations of local heterogeneities, layering and natural fracture networks in a reservoir. A detailed description of the numerical implementation is provided, along with numerical studies comparing the model with both analytical solutions and experimental results. The results demonstrate the effectiveness of the proposed method for modeling large-scale problems involving hydraulically driven fractures in three dimensions.
Laplace-domain waveform modeling and inversion for the 3D acoustic-elastic coupled media
NASA Astrophysics Data System (ADS)
Shin, Jungkyun; Shin, Changsoo; Calandra, Henri
2016-06-01
Laplace-domain waveform inversion reconstructs long-wavelength subsurface models by using the zero-frequency component of damped seismic signals. Despite the computational advantages of Laplace-domain waveform inversion over conventional frequency-domain waveform inversion, an acoustic assumption and an iterative matrix solver have been used to invert 3D marine datasets to mitigate the intensive computing cost. In this study, we develop a Laplace-domain waveform modeling and inversion algorithm for 3D acoustic-elastic coupled media by using a parallel sparse direct solver library (MUltifrontal Massively Parallel Solver, MUMPS). We precisely simulate a real marine environment by coupling the 3D acoustic and elastic wave equations with the proper boundary condition at the fluid-solid interface. In addition, we can extract the elastic properties of the Earth below the sea bottom from the recorded acoustic pressure datasets. As a matrix solver, the parallel sparse direct solver is used to factorize the non-symmetric impedance matrix in a distributed memory architecture and rapidly solve the wave field for a number of shots by using the lower and upper matrix factors. Using both synthetic datasets and real datasets obtained by a 3D wide azimuth survey, the long-wavelength component of the P-wave and S-wave velocity models is reconstructed and the proposed modeling and inversion algorithm are verified. A cluster of 80 CPU cores is used for this study.
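The factor-once, solve-many structure that makes a direct solver attractive for multi-shot modeling can be sketched with a toy dense LU factorization (illustrative only; MUMPS performs sparse multifrontal factorization with pivoting on distributed memory):

```python
def lu_factor(A):
    """Doolittle LU factorization without pivoting: the expensive step,
    performed once per impedance matrix, as with a direct solver."""
    n = len(A)
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    U = [row[:] for row in A]
    for k in range(n):
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]
            L[i][k] = m
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    return L, U

def lu_solve(L, U, b):
    """Cheap forward/backward triangular solves, repeated once per shot
    (right-hand side) while reusing the stored factors."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):                    # forward substitution: L y = b
        y[i] = b[i] - sum(L[i][j] * y[j] for j in range(i))
    x = [0.0] * n
    for i in reversed(range(n)):          # backward substitution: U x = y
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x
```

Factoring once and reusing L and U for every shot is exactly why a direct solver can outpace an iterative solver when the number of right-hand sides is large.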
Parallel ALLSPD-3D: Speeding Up Combustor Analysis Via Parallel Processing
NASA Technical Reports Server (NTRS)
Fricker, David M.
1997-01-01
The ALLSPD-3D Computational Fluid Dynamics code for reacting flow simulation was run on a set of benchmark test cases to determine its parallel efficiency. These test cases included non-reacting and reacting flow simulations with varying numbers of processors. Also, the tests explored the effects of scaling the simulation with the number of processors in addition to distributing a constant size problem over an increasing number of processors. The test cases were run on a cluster of IBM RS/6000 Model 590 workstations with ethernet and ATM networking plus a shared memory SGI Power Challenge L workstation. The results indicate that the network capabilities significantly influence the parallel efficiency, i.e., a shared memory machine is fastest and ATM networking provides acceptable performance. The limitations of ethernet greatly hamper the rapid calculation of flows using ALLSPD-3D.
Applying Parallel Processing Techniques to Tether Dynamics Simulation
NASA Technical Reports Server (NTRS)
Wells, B. Earl
1996-01-01
The focus of this research has been to determine the effectiveness of applying parallel processing techniques to a sizable real-world problem, the simulation of the dynamics associated with a tether which connects two objects in low earth orbit, and to explore the degree to which the parallelization process can be automated through the creation of new software tools. The goal has been to utilize this specific application problem as a base to develop more generally applicable techniques.
Understanding of impurity poloidal distribution in the edge pedestal by modelling
NASA Astrophysics Data System (ADS)
Rozhansky, V.; Kaveeva, E.; Molchanov, P.; Veselova, I.; Voskoboynikov, S.; Coster, D.; Fable, E.; Puetterich, T.; Viezzer, E.; Kukushkin, A. S.; Kirk, A.; the ASDEX Upgrade Team
2015-07-01
Simulation of an H-mode ASDEX Upgrade shot with boron impurity was done with the B2SOLPS5.2 transport code. Simulation results were compared with the unique experimental data available for the chosen shot: radial density, electron and ion temperature profiles in the equatorial midplanes, the radial electric field profile, radial profiles of the parallel velocity of impurities at the low-field side (LFS) and high-field side (HFS), and radial density profiles of impurity ions at the LFS and HFS. Simulation results reproduce all available experimental data simultaneously. In particular, a strong poloidal HFS-LFS asymmetry of B5+ ions was predicted, in accordance with the experiment. The simulated HFS B5+ density inside the edge transport barrier is twice as large as that at the LFS. This is consistent with the experimental observations, where an even larger impurity density asymmetry was observed. A similar effect was predicted in the simulation done for the MAST H-mode: here the HFS density of He2+ is predicted to be 4 times larger than that at the LFS. Such a large predicted asymmetry is connected with the larger ratio of HFS to LFS magnetic fields, which is typical for spherical tokamaks. The HFS/LFS asymmetry was not measured in the experiment; however, the modelling qualitatively reproduces the observed change of sign of the He+ parallel velocity to the counter-current direction at the LFS. The understanding of the asymmetry is based on neoclassical effects in a plasma with strong gradients. It is demonstrated that simulation results obtained accounting for ionization sources, realistic geometry and turbulent transport are consistent with the simplified analytical approach. Differences from the standard neoclassical theory are emphasized.
Parallel discrete event simulation using shared memory
NASA Technical Reports Server (NTRS)
Reed, Daniel A.; Malony, Allen D.; Mccredie, Bradley D.
1988-01-01
With traditional event-list techniques, evaluating a detailed discrete-event simulation model can often require hours or even days of computation time. By eliminating the event list and maintaining only sufficient synchronization to ensure causality, parallel simulation can potentially provide speedups that are linear in the number of processors. A set of shared-memory experiments using the Chandy-Misra distributed-simulation algorithm to simulate networks of queues is presented. Parameters of the study include queueing network topology and routing probabilities, number of processors, and assignment of network nodes to processors. These experiments show that Chandy-Misra distributed simulation is a questionable alternative to sequential simulation of most queueing network models.
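The core safety rule of the Chandy-Misra algorithm can be sketched as follows (an illustrative fragment; the full algorithm also exchanges null messages carrying lookahead between logical processes to keep the horizon advancing and avoid deadlock):

```python
from collections import deque

def safe_events(in_queues, channel_clocks):
    """Chandy-Misra safety rule: a logical process may consume, in timestamp
    order, only events no later than the minimum clock over all of its input
    channels; later events must wait until a real or null message advances
    the lagging channel's clock."""
    horizon = min(channel_clocks.values())
    ready = []
    for q in in_queues.values():
        while q and q[0][0] <= horizon:   # entries: (timestamp, payload)
            ready.append(q.popleft())
    return sorted(ready), horizon
```

Causality holds because no channel can later deliver an event earlier than its clock, so everything at or below the horizon is safe to process; the price is that one slow input channel stalls the whole logical process, which is the serialization hazard the abstract's experiments probe.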
The cost of parallel consolidation into visual working memory.
Rideaux, Reuben; Edwards, Mark
2016-01-01
A growing body of evidence indicates that information can be consolidated into visual working memory in parallel. Initially, it was suggested that color information could be consolidated in parallel while orientation was strictly limited to serial consolidation (Liu & Becker, 2013). However, we recently found evidence suggesting that both orientation and motion direction items can be consolidated in parallel, with different levels of accuracy (Rideaux, Apthorp, & Edwards, 2015). Here we examine whether there is a cost associated with parallel consolidation of orientation and direction information by comparing performance, in terms of precision and guess rate, on a target recall task where items are presented either sequentially or simultaneously. The results compellingly indicate that motion direction can be consolidated in parallel, but the evidence for orientation is less conclusive. Further, we find that there is a twofold cost associated with parallel consolidation of direction: Both the probability of failing to consolidate one (or both) item/s increases and the precision at which representations are encoded is reduced. Additionally, we find evidence indicating that the increased consolidation failure may be due to interference between items presented simultaneously, and is moderated by item similarity. These findings suggest that a biased competition model may explain differences in parallel consolidation between features.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tran, Thanh Thuy; Nguyen, Phuong H., E-mail: phuong.nguyen@ibpc.fr; Derreumaux, Philippe, E-mail: philippe.derreumaux@ibpc.fr
Coarse-grained protein lattice models approximate atomistic details and keep the essential interactions. They are, therefore, suitable for capturing generic features of protein folding and amyloid formation at low computational cost. As our aim is to study the critical nucleus sizes of two experimentally well-characterized peptide fragments, Aβ{sub 16−22} and Aβ{sub 37−42}, of the full-length Aβ{sub 1−42} Alzheimer's peptide, it is important that simulations with the lattice model reproduce all-atom simulations. In this study, we present a comprehensive force field parameterization based on the OPEP (Optimized Potential for Efficient protein structure Prediction) force field for an on-lattice protein model, which incorporates explicitly the formation of hydrogen bonds and directions of side-chains. Our bottom-up approach starts with the determination of the best lattice force parameters for the Aβ{sub 16−22} dimer by fitting its equilibrium parallel and anti-parallel β-sheet populations to all-atom simulation results. Surprisingly, the calibrated force field is transferable to the trimer of Aβ{sub 16−22} and the dimer and trimer of Aβ{sub 37−42}. Encouraged by this finding, we characterized the free energy landscapes of the two decamers. The dominant structure of the Aβ{sub 16−22} decamer matches the microcrystal structure. Pushing the simulations for aggregates between 4-mer and 12-mer suggests a nucleus size for fibril formation of 10 chains. In contrast, the Aβ{sub 37−42} decamer is largely disordered, with mixed parallel and antiparallel chains, suggesting that the nucleus size is >10 peptides. Our refined force field coupled to this on-lattice model should provide useful insights into the critical nucleation number associated with neurodegenerative diseases.
User's Guide for TOUGH2-MP - A Massively Parallel Version of the TOUGH2 Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Earth Sciences Division; Zhang, Keni
TOUGH2-MP is a massively parallel (MP) version of the TOUGH2 code, designed for computationally efficient parallel simulation of isothermal and nonisothermal flows of multicomponent, multiphase fluids in one-, two-, and three-dimensional porous and fractured media. In recent years, computational requirements have become increasingly intensive in large or highly nonlinear problems for applications in areas such as radioactive waste disposal, CO2 geological sequestration, environmental assessment and remediation, reservoir engineering, and groundwater hydrology. The primary objective of developing the parallel-simulation capability is to significantly improve the computational performance of the TOUGH2 family of codes. The particular goal for the parallel simulator is to achieve orders-of-magnitude improvement in computational time for models with ever-increasing complexity. TOUGH2-MP is designed to perform parallel simulation on multi-CPU computational platforms. An earlier version of TOUGH2-MP (V1.0) was based on TOUGH2 Version 1.4 with the EOS3, EOS9, and T2R3D modules, software previously qualified for applications in the Yucca Mountain project, and was designed for execution on CRAY T3E and IBM SP supercomputers. The current version of TOUGH2-MP (V2.0) includes all fluid property modules of the standard version TOUGH2 V2.0. It provides computationally efficient capabilities using supercomputers, Linux clusters, or multi-core PCs, and also offers many user-friendly features. The parallel simulator inherits all process capabilities from V2.0 together with additional capabilities for handling fractured media from V1.4.
This report provides a quick-start guide on how to set up and run the TOUGH2-MP program for users with a basic knowledge of running the standard version of the TOUGH2 code. The report also gives a brief technical description of the code, including a discussion of the parallel methodology and code structure, as well as the mathematical and numerical methods used. To familiarize users with the parallel code, illustrative sample problems are presented.
Direct Numerical Simulation of Turbulent Flow Over Complex Bathymetry
NASA Astrophysics Data System (ADS)
Yue, L.; Hsu, T. J.
2017-12-01
Direct numerical simulation (DNS) is regarded as a powerful tool for investigating turbulent flow featuring a wide range of temporal and spatial scales. With the application of coordinate transformation in a pseudo-spectral scheme, a parallelized numerical modeling system was created aiming at simulating flow over complex bathymetry with high numerical accuracy and efficiency. The transformed governing equations were integrated in time using a third-order low-storage Runge-Kutta method. For spatial discretization, a discrete Fourier expansion was adopted in the streamwise and spanwise directions, enforcing the periodic boundary condition in both. A Chebyshev expansion on Chebyshev-Gauss-Lobatto points was used in the wall-normal direction, with no-slip conditions imposed on the top and bottom walls. The diffusion terms were discretized with a Crank-Nicolson scheme, while the advection terms, dealiased with the 2/3 rule, were discretized with an Adams-Bashforth scheme. In the prediction step, the velocity was calculated in the physical domain by solving the resulting linear equation directly. However, the extra terms introduced by the coordinate transformation impose a strict limitation on the time step, so an iteration method was applied in the pressure-correction step, solving the Helmholtz equation, to overcome this restriction. The numerical solver is written in the object-oriented C++ programming language, utilizing the Armadillo linear algebra library for matrix computation. Several benchmarking cases in laminar and turbulent flow were carried out to verify and validate the numerical model, and very good agreement is achieved. Ongoing work focuses on implementing sediment transport capability for multiple sediment classes and parameterizations for flocculation processes.
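The 2/3-rule dealiasing mentioned above is compact enough to sketch. The following one-dimensional Python/NumPy fragment (the solver itself is a 2-D Fourier/Chebyshev C++ code; the function names here are illustrative, not from the paper) zeros the upper third of the Fourier modes before forming the advection term in physical space:

```python
import numpy as np

def dealias_23(u_hat):
    """Zero out the upper third of Fourier modes (the 2/3 rule), the standard
    guard against aliasing errors when nonlinear products are formed in
    physical space in a pseudo-spectral method."""
    n = u_hat.shape[0]
    k = np.fft.fftfreq(n, d=1.0 / n)        # integer wavenumbers
    return u_hat * (np.abs(k) < n / 3.0)    # keep only |k| < N/3

def advection_term(u):
    """Pseudo-spectral evaluation of u * du/dx with 2/3-rule dealiasing,
    assuming a periodic domain of length 2*pi."""
    n = u.size
    k = np.fft.fftfreq(n, d=1.0 / n)
    u_hat = dealias_23(np.fft.fft(u))
    dudx = np.real(np.fft.ifft(1j * k * u_hat))   # spectral derivative
    u_d = np.real(np.fft.ifft(u_hat))             # dealiased field
    return u_d * dudx                             # product in physical space
```

For u = sin(x), the dealiased pseudo-spectral product recovers u du/dx = 0.5 sin(2x) to machine precision, since all retained modes lie well inside the |k| < N/3 cutoff.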
A gyrokinetic one-dimensional scrape-off layer model of an edge-localized mode heat pulse
Shi, E. L.; Hakim, A. H.; Hammett, G. W.
2015-02-03
An electrostatic gyrokinetic-based model is applied to simulate parallel plasma transport in the scrape-off layer to a divertor plate. We focus on a test problem that has been studied previously, using parameters chosen to model a heat pulse driven by an edge-localized mode in JET. Previous work has used direct particle-in-cell simulations with full dynamics, or Vlasov or fluid equations with only parallel dynamics. With the use of the gyrokinetic quasineutrality equation and logical sheath boundary conditions, spatial and temporal resolution requirements are no longer set by the electron Debye length and plasma frequency, respectively. Finally, this test problem also helps illustrate some of the physics contained in the Hamiltonian form of the gyrokinetic equations and some of the numerical challenges in developing an edge gyrokinetic code.
Incremental Parallelization of Non-Data-Parallel Programs Using the Charon Message-Passing Library
NASA Technical Reports Server (NTRS)
VanderWijngaart, Rob F.
2000-01-01
Message passing is among the most popular techniques for parallelizing scientific programs on distributed-memory architectures. The reasons for its success are wide availability (MPI), efficiency, and full tuning control provided to the programmer. A major drawback, however, is that incremental parallelization, as offered by compiler directives, is not generally possible, because all data structures have to be changed throughout the program simultaneously. Charon remedies this situation through mappings between distributed and non-distributed data. It allows breaking up the parallelization into small steps, guaranteeing correctness at every stage. Several tools are available to help convert legacy codes into high-performance message-passing programs. They usually target data-parallel applications, whose loops carrying most of the work can be distributed among all processors without much dependency analysis. Others do a full dependency analysis and then convert the code virtually automatically. Even more toolkits are available that aid construction from scratch of message passing programs. None, however, allows piecemeal translation of codes with complex data dependencies (i.e. non-data-parallel programs) into message passing codes. The Charon library (available in both C and Fortran) provides incremental parallelization capabilities by linking legacy code arrays with distributed arrays. During the conversion process, non-distributed and distributed arrays exist side by side, and simple mapping functions allow the programmer to switch between the two in any location in the program. Charon also provides wrapper functions that leave the structure of the legacy code intact, but that allow execution on truly distributed data. 
Finally, the library provides a rich set of communication functions that support virtually all patterns of remote data demands in realistic structured-grid scientific programs, including transposition, nearest-neighbor communication, pipelining, gather/scatter, and redistribution. At the end of the conversion process most intermediate Charon function calls will have been removed, the non-distributed arrays will have been deleted, and virtually the only remaining Charon function calls are the high-level, highly optimized communications. Distribution of the data is under complete control of the programmer, although a wide range of useful distributions is easily available through predefined functions. A crucial aspect of the library is that it does not allocate space for distributed arrays, but accepts programmer-specified memory. This has two major consequences. First, codes parallelized using Charon do not suffer from encapsulation; user data is always directly accessible. This provides high efficiency, and also retains the possibility of using message passing directly for highly irregular communications. Second, non-distributed arrays can be interpreted as (trivial) distributions in the Charon sense, which allows them to be mapped to truly distributed arrays, and vice versa. This is the mechanism that enables incremental parallelization. In this paper we provide a brief introduction to the library and then focus on the actual steps in the parallelization process, using some representative examples from, among others, the NAS Parallel Benchmarks. We show how a complicated two-dimensional pipeline (the prototypical non-data-parallel algorithm) can be constructed with ease. To demonstrate the flexibility of the library, we give examples of the stepwise, efficient parallel implementation of nonlocal boundary conditions common in aircraft simulations, as well as the construction of the sequence of grids required for multigrid.
Parallel-plate transmission line type of EMP simulators: Systematic review and recommendations
NASA Astrophysics Data System (ADS)
Giri, D. V.; Liu, T. K.; Tesche, F. M.; King, R. W. P.
1980-05-01
This report presents various aspects of the two-parallel-plate transmission line type of EMP simulator. Much of the work is the result of research efforts conducted during the last two decades at the Air Force Weapons Laboratory, as well as in industry and universities. The principal features of the individual simulator components are discussed. The report also emphasizes that it is imperative to integrate our understanding of the individual components so that meaningful conclusions can be drawn about simulator performance as a whole.
MMS Observations and Hybrid Simulations of Surface Ripples at a Marginally Quasi-Parallel Shock
NASA Astrophysics Data System (ADS)
Gingell, Imogen; Schwartz, Steven J.; Burgess, David; Johlander, Andreas; Russell, Christopher T.; Burch, James L.; Ergun, Robert E.; Fuselier, Stephen; Gershman, Daniel J.; Giles, Barbara L.; Goodrich, Katherine A.; Khotyaintsev, Yuri V.; Lavraud, Benoit; Lindqvist, Per-Arne; Strangeway, Robert J.; Trattner, Karlheinz; Torbert, Roy B.; Wei, Hanying; Wilder, Frederick
2017-11-01
Simulations and observations of collisionless shocks have shown that deviations of the nominal local shock normal orientation, that is, surface waves or ripples, are expected to propagate in the ramp and overshoot of quasi-perpendicular shocks. Here we identify signatures of a surface ripple propagating during a crossing of Earth's marginally quasi-parallel (θBn˜45∘) or quasi-parallel bow shock on 27 November 2015 06:01:44 UTC by the Magnetospheric Multiscale (MMS) mission and determine the ripple's properties using multispacecraft methods. Using two-dimensional hybrid simulations, we confirm that surface ripples are a feature of marginally quasi-parallel and quasi-parallel shocks under the observed solar wind conditions. In addition, since these marginally quasi-parallel and quasi-parallel shocks are expected to undergo a cyclic reformation of the shock front, we discuss the impact of multiple sources of nonstationarity on shock structure. Importantly, ripples are shown to be transient phenomena, developing faster than an ion gyroperiod and only during the period of the reformation cycle when a newly developed shock ramp is unaffected by turbulence in the foot. We conclude that the change in properties of the ripple observed by MMS is consistent with the reformation of the shock front over a time scale of an ion gyroperiod.
Fin field effect transistor directionality impacts printing of implantation shapes
NASA Astrophysics Data System (ADS)
Wang, Xiren; Granik, Yuri
2018-01-01
In modern integrated circuit (IC) fabrication processes, the photoresist receives considerable illumination energy that is reflected by underlying topography during optical lithography of implantation layers. Bottom antireflective coating (BARC) helps mitigate the reflection. Often, however, BARC is not used, because its removal is technically challenging, in addition to its relatively high economic cost. Furthermore, advanced technology nodes, such as the 14/10-nm nodes, have introduced the fin field effect transistor (FinFET), which makes reflection from nonuniform silicon substrates exceptionally complicated. Therefore, modeling reflection from topography becomes obligatory to accurately predict the printing of implantation shapes. In realistic designs, the fins are typically fixed in one direction. However, the same implantation rectangle may be oriented in either the horizontal or the vertical direction, giving two types of relation between the critical dimension (CD) and the fins: parallel and perpendicular. We examine the impact of fin directionality on CD and find that it may be considerable in some cases. We use our in-house rigorous optical topography simulator to reveal the underlying physical reasons. One of the major causes of the CD differences is that in the parallel orientation, the solid sidewalls of the fins conduct considerable light reflections, unlike in the perpendicular orientation. This finding can aid compact modeling in optical proximity correction of implantation masks.
NASA Astrophysics Data System (ADS)
Dou, Ruifeng; Phillion, A. B.
2016-08-01
Hot tearing susceptibility is commonly assessed using a pressure drop equation in the mushy zone that includes the effects of both tensile deformation perpendicular to the thermal gradient as well as shrinkage feeding. In this study, a Pore Fraction hot tearing model, recently developed by Monroe and Beckermann (JOM 66:1439-1445, 2014), is extended to additionally include the effect of strain rate parallel to the thermal gradient. The deformation and shrinkage pore fractions are obtained on the basis of the dimensionless Niyama criterion and a scaling variable method. First, the model is applied to the binary Al-Cu system under conditions of directional solidification. It is shown that for the same Niyama criterion, a decrease in the cooling rate increases both the deformation and shrinkage pore fractions because of an increase in the time spent in the brittle temperature region. Second, the model is applied to the industrial aluminum alloy AA5182 as part of a finite element simulation of the Direct Chill (DC) casting process. It is shown that an increase in the casting speed during DC casting increases the deformation and shrinkage pore fractions, causing the maximum point of pore fraction to move towards the base of the casting. These results demonstrate that including the strain rate parallel to the thermal gradient significantly improves the predictive quality of hot tearing criteria based on the pressure drop equation.
Magnetosheath Filamentary Structures Formed by Ion Acceleration at the Quasi-Parallel Bow Shock
NASA Technical Reports Server (NTRS)
Omidi, N.; Sibeck, D.; Gutynska, O.; Trattner, K. J.
2014-01-01
Results from 2.5-D electromagnetic hybrid simulations show the formation of field-aligned, filamentary plasma structures in the magnetosheath. They begin at the quasi-parallel bow shock and extend far into the magnetosheath. These structures exhibit anticorrelated, spatial oscillations in plasma density and ion temperature. Closer to the bow shock, magnetic field variations associated with density and temperature oscillations may also be present. Magnetosheath filamentary structures (MFS) form primarily in the quasi-parallel sheath; however, they may extend to the quasi-perpendicular magnetosheath. They occur over a wide range of solar wind Alfvénic Mach numbers and interplanetary magnetic field directions. At lower Mach numbers with lower levels of magnetosheath turbulence, MFS remain highly coherent over large distances. At higher Mach numbers, magnetosheath turbulence decreases the level of coherence. Magnetosheath filamentary structures result from localized ion acceleration at the quasi-parallel bow shock and the injection of energetic ions into the magnetosheath. The localized nature of ion acceleration is tied to the generation of fast magnetosonic waves at and upstream of the quasi-parallel shock. The increased pressure in flux tubes containing the shock accelerated ions results in the depletion of the thermal plasma in these flux tubes and the enhancement of density in flux tubes void of energetic ions. This results in the observed anticorrelation between ion temperature and plasma density.
Computational Challenges of 3D Radiative Transfer in Atmospheric Models
NASA Astrophysics Data System (ADS)
Jakub, Fabian; Bernhard, Mayer
2017-04-01
The computation of radiative heating and cooling rates is one of the most expensive components in today's atmospheric models. The high computational cost stems not only from the laborious integration over a wide range of the electromagnetic spectrum but also from the fact that solving the integro-differential radiative transfer equation for monochromatic light is already rather involved. This has led to numerous approximations and parameterizations that reduce the cost of the solver. One of the most prominent is the so-called independent pixel approximation (IPA), in which horizontal energy transfer is neglected entirely and radiation may only propagate in the vertical direction (1D). Recent studies indicate that the IPA introduces significant errors in high-resolution simulations and affects the evolution and development of convective systems. However, fully 3D solvers such as Monte Carlo methods are not feasible even on state-of-the-art supercomputers. The parallelization of atmospheric models is often realized by a horizontal domain decomposition, and hence horizontal transfer of energy necessitates communication. For example, a cloud illuminated at a slanted solar angle casts a long shadow that may extend across a multitude of processors; light in the solar spectral range especially may travel long distances through the atmosphere. For highly parallel simulations, it is therefore vital that 3D radiative transfer solvers put a special emphasis on parallel scalability. We present an introduction to the intricacies of computing 3D radiative heating and cooling rates, and report on the parallel performance of the TenStream solver. TenStream is a 3D radiative transfer solver that uses the PETSc framework to iteratively solve a set of partial differential equations. We investigate two matrix preconditioners: (a) geometric algebraic multigrid preconditioning (MG+GAMG) and (b) block Jacobi incomplete LU (ILU) factorization.
The TenStream solver is tested for up to 4096 cores and shows a parallel scaling efficiency of 80-90% on various supercomputers.
NASA Astrophysics Data System (ADS)
Bao, Jian; Lau, Calvin; Kuley, Animesh; Lin, Zhihong; Fulton, Daniel; Tajima, Toshiki; Tri Alpha Energy, Inc. Team
2017-10-01
Collisional and turbulent transport in a field-reversed configuration (FRC) is studied in global particle simulation using GTC (the gyrokinetic toroidal code). The global FRC geometry is incorporated in GTC through a field-aligned mesh in cylindrical coordinates, which enables global simulation coupling the core and the scrape-off layer (SOL) across the separatrix. Furthermore, fully kinetic ions are implemented in GTC to treat the magnetic null point in the FRC core. Both global simulations coupling the core and SOL regions and independent simulations of the SOL region have been carried out to study turbulence. In this work, the ``logical sheath boundary condition'' is implemented to study parallel transport in the SOL. This method relaxes the time-step and grid-spacing requirements by not resolving the electron plasma frequency and Debye length, which enables turbulent transport simulations with sheath effects. We will study collisional and turbulent SOL parallel transport with mirror geometry and the sheath boundary condition in the C-2W divertor.
Simulated parallel annealing within a neighborhood for optimization of biomechanical systems.
Higginson, J S; Neptune, R R; Anderson, F C
2005-09-01
Optimization problems for biomechanical systems have become extremely complex. Simulated annealing (SA) algorithms have performed well in a variety of test problems and biomechanical applications; however, despite advances in computer speed, convergence to optimal solutions for systems of even moderate complexity has remained prohibitive. The objective of this study was to develop a portable parallel version of a SA algorithm for solving optimization problems in biomechanics. The algorithm for simulated parallel annealing within a neighborhood (SPAN) was designed to minimize interprocessor communication time and closely retain the heuristics of the serial SA algorithm. The computational speed of the SPAN algorithm scaled linearly with the number of processors on different computer platforms for a simple quadratic test problem and for a more complex forward dynamic simulation of human pedaling.
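For readers unfamiliar with the serial heuristics that SPAN retains, a minimal simulated-annealing core looks like the following Python sketch. This is not the SPAN algorithm itself (which, as the abstract describes, evaluates candidate moves within a neighborhood in parallel across processors); the parameter names and geometric cooling schedule are illustrative assumptions:

```python
import math
import random

def simulated_annealing(f, x0, step, t0=1.0, cooling=0.95, iters=2000, seed=0):
    """Minimal serial simulated-annealing core: random candidate moves,
    Metropolis acceptance, geometric cooling.  Illustrative sketch only."""
    rng = random.Random(seed)
    x, fx, t = list(x0), f(x0), t0
    best_x, best_f = list(x), fx
    for _ in range(iters):
        # propose a uniform random move within a neighborhood of radius `step`
        cand = [xi + rng.uniform(-step, step) for xi in x]
        fc = f(cand)
        # Metropolis criterion: always accept improvements, accept uphill
        # moves with probability exp(-dF / T)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = list(x), fx
        t *= cooling  # geometric cooling schedule
    return best_x, best_f
```

SPAN's contribution is to keep exactly these heuristics while farming out the candidate evaluations (the `f(cand)` calls) within a neighborhood to multiple processors, which is why its speedup scales with processor count.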
Simulation of Hypervelocity Impact on Aluminum-Nextel-Kevlar Orbital Debris Shields
NASA Technical Reports Server (NTRS)
Fahrenthold, Eric P.
2000-01-01
An improved hybrid particle-finite element method has been developed for hypervelocity impact simulation. The method combines the general contact-impact capabilities of particle codes with the true Lagrangian kinematics of large-strain finite element formulations. Unlike some alternative schemes which couple Lagrangian finite element models with smoothed particle hydrodynamics, the present formulation makes no use of slidelines or penalty forces. The method has been implemented in a parallel, three-dimensional computer code. Simulations of three-dimensional orbital debris impact problems using this parallel hybrid particle-finite element code show good agreement with experiment and good speedup in parallel computation. The simulations included single- and multi-plate shields as well as aluminum and composite shielding materials, at an impact velocity of eleven kilometers per second.
Parallel Grand Canonical Monte Carlo (ParaGrandMC) Simulation Code
NASA Technical Reports Server (NTRS)
Yamakov, Vesselin I.
2016-01-01
This report provides an overview of the Parallel Grand Canonical Monte Carlo (ParaGrandMC) simulation code. This is a highly scalable parallel FORTRAN code for simulating the thermodynamic evolution of metal alloy systems at the atomic level, and predicting the thermodynamic state, phase diagram, chemical composition and mechanical properties. The code is designed to simulate multi-component alloy systems, predict solid-state phase transformations such as austenite-martensite transformations, precipitate formation, recrystallization, capillary effects at interfaces, surface absorption, etc., which can aid the design of novel metallic alloys. While the software is mainly tailored for modeling metal alloys, it can also be used for other types of solid-state systems, and to some degree for liquid or gaseous systems, including multiphase systems forming solid-liquid-gas interfaces.
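As a rough illustration of the grand canonical Monte Carlo move class that such a code is built around, here is a toy one-dimensional lattice-gas sketch in Python. It is emphatically not ParaGrandMC's model or interface; the Hamiltonian, parameters, and function name are invented for illustration:

```python
import math
import random

def gcmc_lattice_gas(n_sites, mu, eps, beta, steps, seed=1):
    """Toy grand canonical Monte Carlo on a periodic 1-D lattice gas.
    Particle insertion/deletion moves are accepted with the Metropolis
    criterion exp(-beta * (dE - mu * dN)), where mu is the chemical
    potential and -eps is an illustrative nearest-neighbor attraction.
    Returns the mean occupancy after `steps` attempted moves."""
    rng = random.Random(seed)
    occ = [0] * n_sites

    def local_energy(i):
        # energy of a particle at site i: -eps per occupied neighbor
        return -eps * (occ[(i - 1) % n_sites] + occ[(i + 1) % n_sites])

    for _ in range(steps):
        i = rng.randrange(n_sites)
        if occ[i]:  # attempt deletion: dN = -1, dE = -E_local
            dE, dN = -local_energy(i), -1
        else:       # attempt insertion: dN = +1, dE = +E_local
            dE, dN = local_energy(i), 1
        if rng.random() < math.exp(-beta * (dE - mu * dN)):
            occ[i] ^= 1  # flip occupancy
    return sum(occ) / n_sites
```

With eps = 0 each site decouples and the mean occupancy approaches the exact value exp(beta*mu) / (1 + exp(beta*mu)), a convenient sanity check for any grand canonical sampler.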
An Empirical Development of Parallelization Guidelines for Time-Driven Simulation
1989-12-01
...program parallelization. In this research effort a Ballistic Missile Defense (BMD) time-driven simulation program, developed by DESE Research and... continuously, or continuously with discrete changes superimposed. The distinguishing feature of these simulations is the interaction between discretely...
Xyce Parallel Electronic Simulator : reference guide, version 2.0.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoekstra, Robert John; Waters, Lon J.; Rankin, Eric Lamont
This document is a reference guide to the Xyce Parallel Electronic Simulator and is a companion to the Xyce Users' Guide. The focus of this document is to list, as exhaustively as possible, the device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users' Guide.
Predicting Protein Structure Using Parallel Genetic Algorithms.
1994-12-01
Molecular dynamics attempts to simulate the protein folding process. However, the time steps required for this simulation are on the order of one... harmonics. These two factors have limited molecular dynamics simulations to less than a few nanoseconds (10^-9 sec), even on today's fastest supercomputers...
Xyce™ Parallel Electronic Simulator Reference Guide Version 6.8
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keiter, Eric R.; Aadithya, Karthik Venkatraman; Mei, Ting
This document is a reference guide to the Xyce Parallel Electronic Simulator and is a companion to the Xyce Users' Guide. The focus of this document is to list, as exhaustively as possible, the device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users' Guide.
VizieR Online Data Catalog: Solar wind 3D magnetohydrodynamic simulation (Chhiber+, 2017)
NASA Astrophysics Data System (ADS)
Chhiber, R.; Subedi, P.; Usmanov, A. V.; Matthaeus, W. H.; Ruffolo, D.; Goldstein, M. L.; Parashar, T. N.
2017-08-01
We use a three-dimensional magnetohydrodynamic simulation of the solar wind to calculate cosmic-ray diffusion coefficients throughout the inner heliosphere (2 R⊙ to 3 au). The simulation resolves large-scale solar wind flow, which is coupled to small-scale fluctuations through a turbulence model. Simulation results specify background solar wind fields and turbulence parameters, which are used to compute diffusion coefficients and study their behavior in the inner heliosphere. The parallel mean free path (mfp) is evaluated using quasi-linear theory, while the perpendicular mfp is determined from nonlinear guiding center theory with the random ballistic interpretation. Several runs examine varying turbulent energy and different solar source dipole tilts. We find that for most of the inner heliosphere, the radial mfp is dominated by diffusion parallel to the mean magnetic field; the parallel mfp remains at least an order of magnitude larger than the perpendicular mfp, except in the heliospheric current sheet, where the perpendicular mfp may be a few times larger than the parallel mfp. In the ecliptic region, the perpendicular mfp may influence the radial mfp at heliocentric distances larger than 1.5 au; our estimates of the parallel mfp in the ecliptic region at 1 au agree well with the Palmer "consensus" range of 0.08-0.3 au. Solar activity increases perpendicular diffusion and reduces parallel diffusion. The parallel mfp mostly varies with rigidity (P) as P^0.33, and the perpendicular mfp is weakly dependent on P. The mfps are weakly influenced by the choice of long-wavelength power spectra. (2 data files).
Smoldyn on graphics processing units: massively parallel Brownian dynamics simulations.
Dematté, Lorenzo
2012-01-01
Space is a very important aspect in the simulation of biochemical systems; recently, the need for simulation algorithms able to cope with space has become more and more compelling. Complex and detailed models of biochemical systems need to deal with the movement of single molecules and particles, taking into consideration localized fluctuations, transport phenomena, and diffusion. A common drawback of spatial models lies in their complexity: models can become very large, and their simulation can be time consuming, especially if we want to capture the system's behavior in a reliable way using stochastic methods in conjunction with a high spatial resolution. To deliver on the promise made by systems biology to understand a system as a whole, we need to scale up the size of the models we are able to simulate, moving from sequential to parallel simulation algorithms. In this paper, we analyze Smoldyn, a widely used algorithm for stochastic simulation of chemical reactions with spatial resolution and single-molecule detail, and we propose an alternative, innovative implementation that exploits the parallelism of graphics processing units (GPUs). The implementation executes the most computationally demanding steps (computation of diffusion, unimolecular and bimolecular reactions, and the most common cases of molecule-surface interaction) on the GPU, computing them in parallel for each molecule of the system. The implementation offers good speed-ups and real-time, high-quality graphics output.
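The per-molecule independence that makes Smoldyn-style diffusion a natural fit for one-GPU-thread-per-molecule execution is easy to see in a sketch. Below is a vectorized NumPy version of one Brownian-dynamics step; it is a hedged illustration of the method class, not Smoldyn's actual implementation, and the names are illustrative:

```python
import numpy as np

def diffuse(positions, D, dt, rng):
    """One Brownian-dynamics step: each molecule moves by an independent
    Gaussian displacement with standard deviation sqrt(2*D*dt) per axis.
    Because every molecule is independent, the loop over molecules
    parallelizes trivially (here as a vectorized numpy operation)."""
    step = rng.normal(0.0, np.sqrt(2.0 * D * dt), size=positions.shape)
    return positions + step
```

After elapsed time t, the mean squared displacement in 3-D should approach 6*D*t, which serves as a quick correctness check for any such implementation.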
NASA Astrophysics Data System (ADS)
Zhang, Q.; Drake, J. F.; Swisdak, M.
2017-12-01
How ions and electrons are energized in magnetic reconnection outflows is an essential topic throughout the heliosphere. Here we carry out guide-field PIC Riemann simulations to explore the ion and electron energization mechanisms far downstream of the x-line. Riemann simulations, with their simple magnetic geometry, facilitate the study of the reconnection outflow far downstream of the x-line in much more detail than is possible with conventional reconnection simulations. We find that the ions are accelerated at rotational discontinuities, counter-stream, and give rise to two slow shocks. We demonstrate that the energization mechanism at the slow shocks is essentially the same as that of parallel electrostatic shocks. Also, the electron-confining electric potential at the slow shocks is driven by the counterstreaming beams, which tend to break quasi-neutrality. Based on this picture, we build a kinetic model to self-consistently predict the downstream ion and electron temperatures. Additional explorations using parallel shock simulations also imply that in a very low-beta regime (0.001-0.01 for a modest guide field), electron energization will be insignificant compared to ion energization. Our model and the parallel shock simulations might be used as simple tools to understand and estimate the energization of ions and electrons and the energy partition far downstream of the x-line.
The OpenMP Implementation of NAS Parallel Benchmarks and its Performance
NASA Technical Reports Server (NTRS)
Jin, Hao-Qiang; Frumkin, Michael; Yan, Jerry
1999-01-01
As ccNUMA architectures have become popular in recent years, parallel programming with compiler directives on these machines has evolved to accommodate new needs. In this study, we examine the effectiveness of OpenMP directives for parallelizing the NAS Parallel Benchmarks. Implementation details are discussed, and performance is compared with the MPI implementation. We have demonstrated that OpenMP can achieve very good results for parallelization on a shared-memory system, but effective use of memory and cache is very important.
Parallel Tensor Compression for Large-Scale Scientific Data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kolda, Tamara G.; Ballard, Grey; Austin, Woody Nathan
As parallel computing trends towards the exascale, scientific data produced by high-fidelity simulations are growing increasingly massive. For instance, a simulation on a three-dimensional spatial grid with 512 points per dimension that tracks 64 variables per grid point for 128 time steps yields 8 TB of data. By viewing the data as a dense five-way tensor, we can compute a Tucker decomposition to find inherent low-dimensional multilinear structure, achieving compression ratios of up to 10,000 on real-world data sets with negligible loss in accuracy. So that we can operate on such massive data, we present the first-ever distributed-memory parallel implementation of the Tucker decomposition, whose key computations correspond to parallel linear algebra operations, albeit with nonstandard data layouts. Our approach specifies a data distribution for tensors that avoids any tensor data redistribution, either locally or in parallel. We provide accompanying analysis of the computation and communication costs of the algorithms. To demonstrate the compression and accuracy of the method, we apply our approach to real-world data sets from combustion science simulations. We also provide detailed performance results, including parallel performance in both weak and strong scaling experiments.
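The 8 TB figure follows directly from the quoted dimensions, and the payoff of a 10,000x Tucker compression ratio is easy to see from the same arithmetic (variable names are ours):

```python
# Reproduce the data-size arithmetic quoted in the abstract, and the
# effect of the paper's best-case Tucker compression ratio of 10,000.
points = 512 ** 3          # 3-D grid, 512 points per dimension
variables = 64             # variables tracked per grid point
timesteps = 128
bytes_per_value = 8        # double precision

raw_bytes = points * variables * timesteps * bytes_per_value
raw_tb = raw_bytes / 1024 ** 4        # tebibytes
compressed_tb = raw_tb / 10_000      # under a 10,000x compression ratio
```

The raw tensor is 2^43 bytes, i.e. exactly 8 TiB, which shrinks to under a gigabyte at the quoted best-case ratio.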
Synchronous parallel system for emulation and discrete event simulation
NASA Technical Reports Server (NTRS)
Steinman, Jeffrey S. (Inventor)
1992-01-01
A synchronous parallel system for emulation and discrete event simulation having parallel nodes responds to received messages at each node by generating event objects having individual time stamps, stores only the changes to state variables of the simulation object attributable to the event object, and produces corresponding messages. The system refrains from transmitting the messages and changing the state variables while it determines whether the changes are superseded, and then stores the unchanged state variables in the event object for later restoral to the simulation object if called for. This determination preferably includes sensing the time stamp of each new event object and determining which new event object has the earliest time stamp as the local event horizon, determining the earliest local event horizon of the nodes as the global event horizon, and ignoring the events whose time stamps are less than the global event horizon. Host processing between the system and external terminals enables such a terminal to query, monitor, command or participate with a simulation object during the simulation process.
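The event-horizon bookkeeping described above can be sketched in a few lines. This is a simplified pure-Python illustration of the idea as we read it from the abstract (names and the commit rule are ours, not the patent's actual implementation): each node reports the earliest time stamp among its new events, the minimum over nodes is the global event horizon, and events at or before that horizon are safe to commit.

```python
def local_event_horizon(new_event_timestamps):
    """Earliest time stamp among a node's newly generated events."""
    return min(new_event_timestamps)

def global_event_horizon(per_node_timestamps):
    """Earliest local horizon across all nodes. Events at or before this
    time cannot be invalidated by any message still in flight, so their
    state changes and messages can safely be released."""
    return min(local_event_horizon(ts) for ts in per_node_timestamps)

def split_safe(events, horizon):
    """Partition (timestamp, payload) events into committable vs deferred."""
    safe = [e for e in events if e[0] <= horizon]
    deferred = [e for e in events if e[0] > horizon]
    return safe, deferred

nodes = [[5.0, 9.0], [3.0, 7.0], [4.0]]   # new-event time stamps per node
h = global_event_horizon(nodes)
```

Deferring message transmission until the horizon is known is what lets the system discard superseded state changes instead of rolling them back.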
Synchronous Parallel System for Emulation and Discrete Event Simulation
NASA Technical Reports Server (NTRS)
Steinman, Jeffrey S. (Inventor)
2001-01-01
A synchronous parallel system for emulation and discrete event simulation having parallel nodes responds to received messages at each node by generating event objects having individual time stamps, stores only the changes to the state variables of the simulation object attributable to the event object and produces corresponding messages. The system refrains from transmitting the messages and changing the state variables while it determines whether the changes are superseded, and then stores the unchanged state variables in the event object for later restoral to the simulation object if called for. This determination preferably includes sensing the time stamp of each new event object and determining which new event object has the earliest time stamp as the local event horizon, determining the earliest local event horizon of the nodes as the global event horizon, and ignoring events whose time stamps are less than the global event horizon. Host processing between the system and external terminals enables such a terminal to query, monitor, command or participate with a simulation object during the simulation process.
Parallelization of Program to Optimize Simulated Trajectories (POST3D)
NASA Technical Reports Server (NTRS)
Hammond, Dana P.; Korte, John J. (Technical Monitor)
2001-01-01
This paper describes the parallelization of the Program to Optimize Simulated Trajectories (POST3D). POST3D uses a gradient-based optimization algorithm that reaches an optimum design point by moving from one design point to the next. The gradient calculations required to complete the optimization process dominate the computational time and have been parallelized using a Single Program Multiple Data (SPMD) approach on a distributed-memory NUMA (non-uniform memory access) architecture. The Origin2000 was used for the tests presented.
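SPMD gradient parallelization is natural because each finite-difference component is an independent function evaluation. A toy stand-in (not POST3D's code; names ours): each "rank" owns a round-robin subset of the gradient components, and a driver merges the pieces; in the real code the ranks run concurrently on separate processors.

```python
def fd_gradient_partition(f, x, rank, nprocs, h=1e-6):
    """Forward-difference gradient components owned by one SPMD rank.

    Components are dealt round-robin: rank r computes dimensions
    r, r + nprocs, r + 2*nprocs, ... Each perturbed evaluation of f is
    independent, which is what makes the step parallelizable.
    """
    f0 = f(x)
    out = {}
    for i in range(rank, len(x), nprocs):
        xp = list(x)
        xp[i] += h
        out[i] = (f(xp) - f0) / h
    return out

def fd_gradient(f, x, nprocs=4, h=1e-6):
    """Merge per-rank pieces; the loop stands in for parallel execution."""
    grad = [0.0] * len(x)
    for rank in range(nprocs):
        for i, g in fd_gradient_partition(f, x, rank, nprocs, h).items():
            grad[i] = g
    return grad

g = fd_gradient(lambda v: sum(t * t for t in v), [1.0, 2.0, 3.0])
```

With design vectors much longer than the processor count, this partitioning keeps every rank busy and needs only one gather per gradient.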
Design of a real-time wind turbine simulator using a custom parallel architecture
NASA Technical Reports Server (NTRS)
Hoffman, John A.; Gluck, R.; Sridhar, S.
1995-01-01
The design of a new parallel-processing digital simulator is described. The new simulator has been developed specifically for analysis of wind energy systems in real time. The new processor has been named the Wind Energy System Time-domain simulator, version 3 (WEST-3). Like previous WEST versions, WEST-3 performs many computations in parallel. The modules in WEST-3 are pure digital processors, however. These digital processors can be programmed individually and operated in concert to achieve real-time simulation of wind turbine systems. Because of this programmability, WEST-3 is much more flexible and general than its two predecessors. The design features of WEST-3 are described to show how the system produces high-speed solutions of nonlinear time-domain equations. WEST-3 has two very fast Computational Units (CUs) that use minicomputer technology plus special architectural features that make them many times faster than a microcomputer. These CUs are needed to perform the complex computations associated with the wind turbine rotor system in real time. The parallel architecture of the CU allows several tasks to be done in each cycle, including an I/O operation and a combined multiply, add, and store. The WEST-3 simulator can be expanded at any time for additional computational power. This is possible because the CUs are interfaced to each other and to other portions of the simulator using special serial buses. These buses can be 'patched' together in essentially any configuration (in a manner very similar to the programming methods used in analog computation) to balance the input/output requirements. CUs can be added in any number to share a given computational load. This flexible bus feature is very different from that of many other parallel processors, which usually have a throughput limit because of a rigid bus architecture.
Accelerating Wright–Fisher Forward Simulations on the Graphics Processing Unit
Lawrie, David S.
2017-01-01
Forward Wright–Fisher simulations are powerful in their ability to model complex demography and selection scenarios, but suffer from slow execution on the Central Processing Unit (CPU), thus limiting their usefulness. However, the single-locus Wright–Fisher forward algorithm is exceedingly parallelizable, with many steps that are so-called “embarrassingly parallel,” consisting of a vast number of individual computations that are all independent of each other and thus capable of being performed concurrently. The rise of modern Graphics Processing Units (GPUs) and programming languages designed to leverage the inherent parallel nature of these processors have allowed researchers to dramatically speed up many programs that have such high arithmetic intensity and intrinsic concurrency. The presented GPU Optimized Wright–Fisher simulation, or “GO Fish” for short, can be used to simulate arbitrary selection and demographic scenarios while running over 250-fold faster than its serial counterpart on the CPU. Even modest GPU hardware can achieve an impressive speedup of over two orders of magnitude. With simulations so accelerated, one can not only do quick parametric bootstrapping of previously estimated parameters, but also use simulated results to calculate the likelihoods and summary statistics of demographic and selection models against real polymorphism data, all without restricting the demographic and selection scenarios that can be modeled or requiring approximations to the single-locus forward algorithm for efficiency. Further, as many of the parallel programming techniques used in this simulation can be applied to other computationally intensive algorithms important in population genetics, GO Fish serves as an exciting template for future research into accelerating computation in evolution. GO Fish is part of the Parallel PopGen Package available at: http://dl42.github.io/ParallelPopGen/. PMID:28768689
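A single generation of the single-locus model can be sketched in a few lines. This is a serial pure-Python toy, not GO Fish's CUDA code (parameter names are ours): each of the 2N binomial draws is independent, and in the GPU version such independent draws across sites and individuals are exactly the "embarrassingly parallel" work.

```python
import random

def wright_fisher_step(p, N, s=0.0, rng=random):
    """One generation of a single-locus Wright-Fisher model.

    p: current allele frequency; N: diploid population size;
    s: selection coefficient favouring the allele. Viability selection
    reweights the frequency, then the next generation is a binomial
    sample of 2N independent draws.
    """
    p_sel = p * (1.0 + s) / (1.0 + p * s)
    count = sum(1 for _ in range(2 * N) if rng.random() < p_sel)
    return count / (2.0 * N)

random.seed(1)
freq = 0.5
for _ in range(20):                      # 20 generations of drift + selection
    freq = wright_fisher_step(freq, N=500, s=0.01)
```

Running thousands of such independent loci at once is what a GPU does well; the serial loop above is only over generations, which must stay sequential.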
Lee, Anthony; Yau, Christopher; Giles, Michael B.; Doucet, Arnaud; Holmes, Christopher C.
2011-01-01
We present a case-study on the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods. Graphics cards, containing multiple Graphics Processing Units (GPUs), are self-contained parallel computational devices that can be housed in conventional desktop and laptop computers and can be thought of as prototypes of the next generation of many-core processors. For certain classes of population-based Monte Carlo algorithms they offer massively parallel simulation, with the added advantage over conventional distributed multi-core processors that they are cheap, easily accessible, easy to maintain, easy to code, dedicated local devices with low power consumption. On a canonical set of stochastic simulation examples including population-based Markov chain Monte Carlo methods and Sequential Monte Carlo methods, we find speedups from 35- to 500-fold over conventional single-threaded computer code. Our findings suggest that GPUs have the potential to facilitate the growth of statistical modelling into complex data rich domains through the availability of cheap and accessible many-core computation. We believe the speedup we observe should motivate wider use of parallelizable simulation methods and greater methodological attention to their design. PMID:22003276
Doll, J.; Dupuis, P.; Nyquist, P.
2017-02-08
Parallel tempering, or replica exchange, is a popular method for simulating complex systems. The idea is to run parallel simulations at different temperatures and, at a given swap rate, exchange configurations between the parallel simulations. From the perspective of large deviations it is optimal to let the swap rate tend to infinity, and it is possible to construct a corresponding simulation scheme, known as infinite swapping. In this paper we propose a novel use of large deviations for empirical measures for a more detailed analysis of the infinite swapping limit in the setting of continuous-time jump Markov processes. Using the large deviations rate function and associated stochastic control problems, we consider a diagnostic based on temperature assignments, which can be easily computed during a simulation. We show that the convergence of this diagnostic to its a priori known limit is a necessary condition for the convergence of infinite swapping. The rate function is also used to investigate the impact of asymmetries in the underlying potential landscape, and where in the state space poor sampling is most likely to occur.
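The swap step at the heart of parallel tempering reduces to a Metropolis acceptance test. A minimal sketch in the standard textbook form (this illustrates finite-rate swapping, not the paper's infinite-swapping limit):

```python
import math

def swap_probability(beta_i, beta_j, e_i, e_j):
    """Metropolis acceptance probability for exchanging the configurations
    of two replicas at inverse temperatures beta_i and beta_j whose
    current potential energies are e_i and e_j.

    A swap that moves the higher-energy configuration to the hotter
    replica (larger beta paired with larger energy before the swap) is
    always accepted; the reverse is accepted with probability < 1.
    """
    return min(1.0, math.exp((beta_i - beta_j) * (e_i - e_j)))
```

Letting the rate at which this test is attempted tend to infinity is what yields the infinite-swapping scheme the abstract analyzes.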
Monte Carlo modelling the dosimetric effects of electrode material on diamond detectors.
Baluti, Florentina; Deloar, Hossain M; Lansley, Stuart P; Meyer, Juergen
2015-03-01
Diamond detectors for radiation dosimetry were modelled using the EGSnrc Monte Carlo code to investigate the influence of electrode material and detector orientation on the absorbed dose. The small dimensions of the electrode/diamond/electrode detector structure required very thin voxels and the use of non-standard DOSXYZnrc Monte Carlo model parameters. Interface phenomena were investigated by simulating a 6 MV beam and detectors with different electrode materials, namely Al, Ag, Cu and Au, with thicknesses of 0.1 µm for the electrodes and 0.1 mm for the diamond, in both perpendicular and parallel detector orientations with regard to the incident beam. The smallest perturbations were observed for the parallel detector orientation and Al electrodes (Z = 13). In summary, the EGSnrc Monte Carlo code is well suited for modelling small detector geometries. The Monte Carlo model developed is a useful tool to investigate the dosimetric effects caused by different electrode materials. To minimise perturbations caused by the detector electrodes, it is recommended that the electrodes be made from a low-atomic-number material and placed parallel to the beam direction.
Proteus: a reconfigurable computational network for computer vision
NASA Astrophysics Data System (ADS)
Haralick, Robert M.; Somani, Arun K.; Wittenbrink, Craig M.; Johnson, Robert; Cooper, Kenneth; Shapiro, Linda G.; Phillips, Ihsin T.; Hwang, Jenq N.; Cheung, William; Yao, Yung H.; Chen, Chung-Ho; Yang, Larry; Daugherty, Brian; Lorbeski, Bob; Loving, Kent; Miller, Tom; Parkins, Larye; Soos, Steven L.
1992-04-01
The Proteus architecture is a highly parallel MIMD (multiple-instruction, multiple-data) machine, optimized for large-granularity tasks such as machine vision and image processing. The system can achieve 20 Gflops (80 Gflops peak). It accepts data via multiple serial links at a rate of up to 640 megabytes/second. The system employs a hierarchical reconfigurable interconnection network, with the highest level being a circuit-switched Enhanced Hypercube serial interconnection network for internal data transfers. The system is designed to use 256 to 1,024 RISC processors. The processors use one-megabyte external Read/Write Allocating Caches for reduced multiprocessor contention. The system detects, locates, and replaces faulty subsystems using redundant hardware to facilitate fault tolerance. The parallelism is directly controllable through an advanced software system for partitioning, scheduling, and development. System software includes a translator for the INSIGHT language, a parallel debugger, low- and high-level simulators, and a message-passing system for all control needs. Image processing application software includes a variety of point operators, neighborhood operators, convolution, and the mathematical morphology operations of binary and gray-scale dilation, erosion, opening, and closing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pingenot, J; Rieben, R; White, D
2004-12-06
We present a computational study of signal propagation and attenuation of a 200 MHz dipole antenna in a cave environment. The cave is modeled as a straight tunnel with lossy, randomly rough walls. To simulate a broad frequency band, the full-wave Maxwell equations are solved directly in the time domain via a high-order vector finite element discretization using the massively parallel CEM code EMSolve. The simulation is performed for a series of random meshes in order to generate statistical data for the propagation and attenuation properties of the cave environment. Results for the power spectral density and phase of the electric field vector components are presented and discussed.
NASA Astrophysics Data System (ADS)
Yasumoto, M.; Ohta, M.; Kawamura, Y.; Hatayama, A.
2014-02-01
Numerical simulations are useful for the development of RF-ICP (Radio Frequency Inductively Coupled Plasma) negative ion sources. We are developing and parallelizing a two-dimensional, three-velocity electromagnetic Particle-In-Cell code. The results show a rapid increase in the electron density during the density ramp-up phase. A radial electric field due to the space charge is produced as the electron density increases, and electron transport in the radial direction is suppressed. As a result, electrons stay for a long period in the region where the inductive electric field is strong, and this leads to efficient electron acceleration and a rapid increase in the electron density.
Data-driven train set crash dynamics simulation
NASA Astrophysics Data System (ADS)
Tang, Zhao; Zhu, Yunrui; Nie, Yinyu; Guo, Shihui; Liu, Fengjia; Chang, Jian; Zhang, Jianjun
2017-02-01
Traditional finite element (FE) methods are computationally expensive for simulating train crashes. Their high computational cost limits their direct application in investigating the dynamic behaviour of an entire train set for crashworthiness design and structural optimisation. By contrast, multi-body modelling is widely used because of its low computational cost, with a trade-off in accuracy. In this study, a data-driven train crash modelling method is proposed to improve the performance of a multi-body dynamics simulation of a train set crash without increasing the computational burden. This is achieved by the parallel random forest algorithm, a machine learning approach that extracts useful patterns from force-displacement curves and predicts a force-displacement relation for a given collision condition from a collection of offline FE simulation data on various collision conditions, namely different crash velocities in our analysis. Using the FE simulation results as a benchmark, we compared our method with traditional multi-body modelling methods; the results show that our data-driven method improves the accuracy over traditional multi-body models in train crash simulation and runs at the same level of efficiency.
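The paper's parallel random forest is too large to sketch here; a minimal stand-in surrogate that linearly interpolates offline FE force-displacement curves by crash velocity illustrates the same data-driven idea of predicting a force-displacement relation for an unseen collision condition (all names are ours, not the paper's):

```python
import bisect

def make_surrogate(curves):
    """Build a force-displacement surrogate from offline FE runs.

    curves: {crash_velocity: callable(displacement) -> force}. The
    paper trains a parallel random forest on such data; as a toy
    stand-in we linearly interpolate between the two FE-simulated
    velocities that bracket the queried one.
    """
    vels = sorted(curves)
    def force(v, d):
        if v <= vels[0]:
            return curves[vels[0]](d)
        if v >= vels[-1]:
            return curves[vels[-1]](d)
        j = bisect.bisect_left(vels, v)          # first vel >= v
        v0, v1 = vels[j - 1], vels[j]
        w = (v - v0) / (v1 - v0)
        return (1 - w) * curves[v0](d) + w * curves[v1](d)
    return force

# Two hypothetical offline FE runs at 10 m/s and 20 m/s:
f = make_surrogate({10.0: lambda d: 2 * d, 20.0: lambda d: 4 * d})
```

A learned model such as a random forest replaces the hand-written interpolation with patterns extracted from many such offline runs, at the same online cost as a multi-body lookup.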
Parallel programming with Easy Java Simulations
NASA Astrophysics Data System (ADS)
Esquembre, F.; Christian, W.; Belloni, M.
2018-01-01
Nearly all of today's processors are multicore, and ideally programming and algorithm development utilizing the entire processor should be introduced early in the computational physics curriculum. Parallel programming is often not introduced because it requires a new programming environment and uses constructs that are unfamiliar to many teachers. We describe how we decrease the barrier to parallel programming by using a Java-based programming environment to treat problems in the usual undergraduate curriculum. We use the Easy Java Simulations programming and authoring tool to create the program's graphical user interface together with objects based on those developed by Kaminsky [Building Parallel Programs (Course Technology, Boston, 2010)] to handle common parallel programming tasks. Shared-memory parallel implementations of physics problems, such as time evolution of the Schrödinger equation, are available as source code and as ready-to-run programs from the AAPT-ComPADRE digital library.
A parallel computational model for GATE simulations.
Rannou, F R; Vega-Acevedo, N; El Bitar, Z
2013-12-01
GATE/Geant4 Monte Carlo simulations are computationally demanding applications, requiring thousands of processor hours to produce realistic results. The classical strategy of distributing the simulation of individual events does not apply efficiently for Positron Emission Tomography (PET) experiments, because it requires a centralized coincidence processing and large communication overheads. We propose a parallel computational model for GATE that handles event generation and coincidence processing in a simple and efficient way by decentralizing event generation and processing but maintaining a centralized event and time coordinator. The model is implemented with the inclusion of a new set of factory classes that can run the same executable in sequential or parallel mode. A Mann-Whitney test shows that the output produced by this parallel model in terms of number of tallies is equivalent (but not equal) to its sequential counterpart. Computational performance evaluation shows that the software is scalable and well balanced. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
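The centralized coincidence step can be illustrated by merging time-ordered singles streams from decentralized workers and pairing events that fall within a coincidence window. This is a simplified sketch of that idea (names and the pairing rule are ours, not GATE's actual implementation):

```python
import heapq

def find_coincidences(streams, window):
    """Merge time-ordered single-event streams and pair up singles.

    streams: list of sorted (time, detector_id) lists, one per worker.
    The coordinator sees a single globally time-ordered stream (the
    heapq merge) and emits a coincidence whenever two consecutive
    singles from different detectors arrive within `window`.
    """
    merged = heapq.merge(*streams)
    pairs, prev = [], None
    for evt in merged:
        if prev is not None and evt[0] - prev[0] <= window and evt[1] != prev[1]:
            pairs.append((prev, evt))
            prev = None                   # both singles consumed
        else:
            prev = evt
    return pairs

s1 = [(1.0, 'A'), (5.0, 'A')]             # singles from worker 1
s2 = [(1.2, 'B'), (9.0, 'B')]             # singles from worker 2
pairs = find_coincidences([s1, s2], window=0.5)
```

Because each worker only has to deliver a time-ordered stream, event generation stays decentralized while the time coordinator resolves coincidences centrally.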
Global MHD simulation of magnetosphere using HPF
NASA Astrophysics Data System (ADS)
Ogino, T.
We have translated a 3-dimensional magnetohydrodynamic (MHD) simulation code of the Earth's magnetosphere from VPP Fortran to HPF/JA on the Fujitsu VPP5000/56 vector-parallel supercomputer; the MHD code was fully vectorized and fully parallelized in VPP Fortran. The overall performance and capability of the HPF MHD code proved almost comparable to that of the VPP Fortran version. A 3-dimensional global MHD simulation of the Earth's magnetosphere was performed at a speed of over 400 Gflops, with an efficiency of 76.5%, using 56 PEs of the Fujitsu VPP5000/56 in vector and parallel computation, which permits comparison with catalog values. We conclude that fluid and MHD codes that are fully vectorized and fully parallelized in VPP Fortran can be translated with relative ease to HPF/JA, and a code in HPF/JA may be expected to perform comparably to the same code written in VPP Fortran.
DOE Office of Scientific and Technical Information (OSTI.GOV)
G.A. Pope; K. Sephernoori; D.C. McKinney
1996-03-15
This report describes the application of distributed-memory parallel programming techniques to a compositional simulator called UTCHEM. The University of Texas Chemical Flooding reservoir simulator (UTCHEM) is a general-purpose vectorized chemical flooding simulator that models the transport of chemical species in three-dimensional, multiphase flow through permeable media. The parallel version of UTCHEM addresses solving large-scale problems by reducing the amount of time required to obtain the solution as well as providing a flexible and portable programming environment. In this work, the original parallel version of UTCHEM was modified and ported to the CRAY T3D and CRAY T3E distributed-memory multiprocessor computers using CRAY-PVM as the interprocessor communication library. Also, the data communication routines were modified such that the portability of the original code across different computer architectures was made possible.
Parallel Simulation of Unsteady Turbulent Flames
NASA Technical Reports Server (NTRS)
Menon, Suresh
1996-01-01
Time-accurate simulation of turbulent flames in high Reynolds number flows is a challenging task since both fluid dynamics and combustion must be modeled accurately. To numerically simulate this phenomenon, very large computer resources (both time and memory) are required. Although current vector supercomputers are capable of providing adequate resources for simulations of this nature, their high cost and limited availability make practical use of such machines less than satisfactory. At the same time, the explicit time integration algorithms used in unsteady flow simulations often possess a very high degree of parallelism, making them very amenable to efficient implementation on large-scale parallel computers. Under these circumstances, distributed memory parallel computers offer an excellent near-term solution for greatly increased computational speed and memory, at a cost that may render unsteady simulations of the type discussed above more feasible and affordable. This paper discusses the study of unsteady turbulent flames using a simulation algorithm that is capable of retaining high parallel efficiency on distributed memory parallel architectures. Numerical studies are carried out using large-eddy simulation (LES). In LES, the scales larger than the grid are computed using a time- and space-accurate scheme, while the unresolved small scales are modeled using eddy-viscosity-based subgrid models. This is acceptable for the moment/energy closure since the small scales primarily provide a dissipative mechanism for the energy transferred from the large scales. However, for combustion to occur, the species must first undergo mixing at the small scales and then come into molecular contact. Therefore, global models cannot be used.
Recently, a new model for turbulent combustion was developed, in which combustion is modeled within the subgrid (small scales) using a methodology that simulates the mixing, the molecular transport, and the chemical kinetics within each LES grid cell. Finite-rate kinetics can be included without any closure, and this approach actually provides a means to predict the turbulent rates and the turbulent flame speed. The subgrid combustion model requires resolution of the local time scales associated with small-scale mixing, molecular diffusion, and chemical kinetics; therefore, within each grid cell, a significant amount of computation must be carried out before the large-scale (LES-resolved) effects are incorporated. This approach is thus uniquely suited for parallel processing and has been implemented on various systems, including the Intel Paragon, IBM SP-2, Cray T3D, and SGI Power Challenge (PC), using the system-independent Message Passing Interface (MPI) library. In this paper, timing data on these machines are reported along with some characteristic results.
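The eddy-viscosity subgrid closure mentioned in this abstract is, in its classic Smagorinsky form, a one-line formula. The abstract does not name a specific model, so the sketch below uses that common default purely as an illustration:

```python
def smagorinsky_nu_t(strain_rate_mag, delta, c_s=0.17):
    """Eddy viscosity of the classic Smagorinsky subgrid model,

        nu_t = (C_s * delta)**2 * |S|,

    where delta is the filter (grid) width, |S| the resolved
    strain-rate magnitude, and C_s a model constant (0.17 is a
    common default). This is the kind of purely dissipative closure
    that suffices for the momentum/energy equations but not for
    subgrid combustion, which needs the mixing/kinetics treatment
    the abstract describes.
    """
    return (c_s * delta) ** 2 * strain_rate_mag
```

Computing one such algebraic closure per cell is cheap; the subgrid combustion model replaces it, for the species, with a per-cell simulation of mixing and finite-rate kinetics, which is the work that parallelizes so well.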
Exact simulation of polarized light reflectance by particle deposits
NASA Astrophysics Data System (ADS)
Ramezan Pour, B.; Mackowski, D. W.
2015-12-01
The use of polarimetric light reflection measurements as a means of identifying the physical and chemical characteristics of particulate materials obviously relies on an accurate model of predicting the effects of particle size, shape, concentration, and refractive index on polarized reflection. This research examines two methods for prediction of reflection from plane-parallel layers of wavelength-sized particles. The first method is based on an exact superposition solution to Maxwell's time-harmonic wave equations for a deposit of spherical particles that are exposed to a plane incident wave. We use a FORTRAN-90 implementation of this solution (the Multiple Sphere T Matrix (MSTM) code), coupled with parallel computational platforms, to directly simulate the reflection from particle layers. The second method examined is based upon the vector radiative transport equation (RTE). Mie theory is used in our RTE model to predict the extinction coefficient, albedo, and scattering phase function of the particles, and the solution of the RTE is obtained from the adding-doubling method applied to a plane-parallel configuration. Our results show that the MSTM and RTE predictions of the Mueller matrix elements converge when the particle volume fraction in the particle layer decreases below around five percent. At higher volume fractions the RTE can yield results that, depending on the particle size and refractive index, significantly depart from the exact predictions. The particle regimes which lead to dependent scattering effects, and the application of methods to correct the vector RTE for particle interaction, will be discussed.
NASA Astrophysics Data System (ADS)
Kordy, M.; Wannamaker, P.; Maris, V.; Cherkaev, E.; Hill, G.
2016-01-01
Following the creation described in Part I of a deformable edge finite-element simulator for 3-D magnetotelluric (MT) responses using direct solvers, in Part II we develop an algorithm named HexMT for 3-D regularized inversion of MT data including topography. Direct solvers parallelized on large-RAM, symmetric multiprocessor (SMP) workstations are used also for the Gauss-Newton model update. By exploiting the data-space approach, the computational cost of the model update becomes much less in both time and computer memory than the cost of the forward simulation. In order to regularize using the second norm of the gradient, we factor the matrix related to the regularization term and apply its inverse to the Jacobian, which is done using the MKL PARDISO library. For dense matrix multiplication and factorization related to the model update, we use the PLASMA library, which shows very good scalability across processor cores. A synthetic test inversion using a simple hill model shows that including topography can be important; in this case depression of the electric field by the hill can cause false conductors at depth or mask the presence of resistive structure. With a simple model of two buried bricks, a uniform spatial weighting for the norm of model smoothing recovered more accurate locations for the tomographic images compared to weightings which were a function of parameter Jacobians. We implement joint inversion for static distortion matrices tested using the Dublin secret model 2, for which we are able to reduce nRMS to ~1.1 while avoiding oscillatory convergence. Finally we test the code on field data by inverting full impedance and tipper MT responses collected around Mount St Helens in the Cascade volcanic chain. Among several prominent structures, the north-south trending, eruption-controlling shear zone is clearly imaged in the inversion.