WinHPC System | High-Performance Computing | NREL
WinHPC System: NREL's WinHPC system is a computing cluster running the Microsoft Windows operating system. It allows users to run jobs requiring a Windows environment, such as ANSYS and MATLAB.
NASA Astrophysics Data System (ADS)
Puzyrkov, Dmitry; Polyakov, Sergey; Podryga, Viktoriia; Markizov, Sergey
2018-02-01
At the present stage of computer technology development, it is possible to study the properties and processes of complex systems at molecular and even atomic levels, for example by means of molecular dynamics methods. The most interesting problems are those related to the study of complex processes under real physical conditions. Solving such problems requires the use of high performance computing systems of various types, for example GRID systems and HPC clusters. Given the time-consuming computational tasks, the need arises for software for automatic and unified monitoring of such computations. A complex computational task can be performed over different HPC systems. This requires output data synchronization between the storage chosen by a scientist and the HPC system used for computations. The design of the computational domain is also quite a problem: it requires complex software tools and algorithms for proper atomistic data generation on HPC systems. The paper describes the prototype of a cloud service intended for the design of large-volume atomistic systems for further detailed molecular dynamics calculations and for the management of these calculations, and presents the part of its concept aimed at initial data generation on the HPC systems.
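As an illustration of the initial-data-generation step mentioned above, the sketch below builds a simple FCC lattice of atom coordinates with NumPy. It is not the authors' cloud service; the lattice constant, element choice, and cell count are placeholder assumptions.

```python
# Minimal sketch of atomistic initial-data generation for an MD run.
# Not the cloud service described above; the lattice constant (copper-like)
# and cell count are placeholder assumptions chosen only for illustration.
import numpy as np

def fcc_lattice(n_cells: int, a: float = 3.615) -> np.ndarray:
    """Return Cartesian coordinates (Angstrom) of an n_cells^3 FCC lattice."""
    # Fractional positions of the 4 atoms in a conventional FCC cell.
    basis = np.array([[0.0, 0.0, 0.0],
                      [0.5, 0.5, 0.0],
                      [0.5, 0.0, 0.5],
                      [0.0, 0.5, 0.5]])
    cells = np.array([[i, j, k]
                      for i in range(n_cells)
                      for j in range(n_cells)
                      for k in range(n_cells)], dtype=float)
    # Combine every cell origin with every basis atom, then scale to Angstroms.
    coords = (cells[:, None, :] + basis[None, :, :]).reshape(-1, 3) * a
    return coords

if __name__ == "__main__":
    atoms = fcc_lattice(n_cells=10)  # 4,000 atoms
    np.savetxt("initial_positions.txt", atoms, fmt="%.6f")
    print(f"Generated {len(atoms)} atom positions")
```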
Integration of High-Performance Computing into Cloud Computing Services
NASA Astrophysics Data System (ADS)
Vouk, Mladen A.; Sills, Eric; Dreher, Patrick
High-Performance Computing (HPC) projects span a spectrum of computer hardware implementations ranging from peta-flop supercomputers, high-end tera-flop facilities running a variety of operating systems and applications, to mid-range and smaller computational clusters used for HPC application development, pilot runs and prototype staging clusters. What they all have in common is that they operate as a stand-alone system rather than a scalable and shared user re-configurable resource. The advent of cloud computing has changed the traditional HPC implementation. In this article, we will discuss a very successful production-level architecture and policy framework for supporting HPC services within a more general cloud computing infrastructure. This integrated environment, called Virtual Computing Lab (VCL), has been operating at NC State since fall 2004. Nearly 8,500,000 HPC CPU-Hrs were delivered by this environment to NC State faculty and students during 2009. In addition, we present and discuss operational data that show that integration of HPC and non-HPC (or general VCL) services in a cloud can substantially reduce the cost of delivering cloud services (down to cents per CPU hour).
High-Performance Computing Systems and Operations | Computational Science | NREL
NREL operates high-performance computing (HPC) systems dedicated to advancing energy efficiency and renewable energy technologies. NREL's HPC capabilities include the high-performance computing systems it operates...
Pinthong, Watthanai; Muangruen, Panya
2016-01-01
Development of high-throughput technologies, such as next-generation sequencing, allows thousands of experiments to be performed simultaneously while reducing resource requirements. Consequently, a massive amount of experimental data is now rapidly generated. Nevertheless, the data are not readily usable or meaningful until they are further analysed and interpreted. Due to the size of the data, a high performance computer (HPC) is required for the analysis and interpretation. However, an HPC is expensive and difficult to access. Other means were developed to allow researchers to acquire the power of HPC without needing to purchase and maintain one, such as cloud computing services and grid computing systems. In this study, we implemented grid computing in a computer training center environment using the Berkeley Open Infrastructure for Network Computing (BOINC) as a job distributor and data manager, combining all desktop computers to virtualize the HPC. Fifty desktop computers were used for setting up a grid system during the off-hours. In order to test the performance of the grid system, we adapted the Basic Local Alignment Search Tool (BLAST) to the BOINC system. Sequencing results from the Illumina platform were aligned to the human genome database by BLAST on the grid system. The result and processing time were compared to those from a single desktop computer and an HPC. The estimated durations of BLAST analysis for 4 million sequence reads on a desktop PC, the HPC, and the grid system were 568, 24, and 5 days, respectively. Thus, the grid implementation of BLAST by BOINC is an efficient alternative to the HPC for sequence alignment. The grid implementation by BOINC also helped tap unused computing resources during the off-hours and could be easily modified for other available bioinformatics software. PMID:27547555
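The reported run times (568, 24 and 5 days for a desktop PC, the HPC and the 50-desktop BOINC grid) translate directly into speedup factors; the short script below simply restates that arithmetic.

```python
# Speedups implied by the reported BLAST run times for 4 million sequence reads.
durations_days = {
    "desktop PC": 568,
    "HPC": 24,
    "BOINC grid (50 desktops)": 5,
}

baseline = durations_days["desktop PC"]
for system, days in durations_days.items():
    print(f"{system:>25}: {days:4d} days  "
          f"(speedup vs. desktop: {baseline / days:5.1f}x)")

# How much faster the volunteer grid was than the dedicated HPC run.
ratio = durations_days["HPC"] / durations_days["BOINC grid (50 desktops)"]
print(f"grid vs. HPC: {ratio:.1f}x faster")
```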
Mixing HTC and HPC Workloads with HTCondor and Slurm
NASA Astrophysics Data System (ADS)
Hollowell, C.; Barnett, J.; Caramarcu, C.; Strecker-Kellogg, W.; Wong, A.; Zaytsev, A.
2017-10-01
Traditionally, the RHIC/ATLAS Computing Facility (RACF) at Brookhaven National Laboratory (BNL) has only maintained High Throughput Computing (HTC) resources for our HEP/NP user community. We’ve been using HTCondor as our batch system for many years, as this software is particularly well suited for managing HTC processor farm resources. Recently, the RACF has also begun to design/administrate some High Performance Computing (HPC) systems for a multidisciplinary user community at BNL. In this paper, we’ll discuss our experiences using HTCondor and Slurm in an HPC context, and our facility’s attempts to allow our HTC and HPC processing farms/clusters to make opportunistic use of each other’s computing resources.
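One common pattern for this kind of opportunistic sharing is a "glidein": a Slurm batch job that starts an HTCondor startd, temporarily adding the HPC node to an HTCondor pool for the lifetime of the allocation. The sketch below illustrates the idea; the partition, walltime, and configuration path are assumptions, not the RACF's actual setup.

```python
# Sketch of an HTCondor "glidein" submitted through Slurm: the batch job
# starts a condor_master/startd that joins an existing HTCondor pool for the
# lifetime of the Slurm allocation.  Partition, walltime, and config path are
# illustrative assumptions, not the RACF's actual configuration.
import subprocess

GLIDEIN_SCRIPT = """#!/bin/bash
#SBATCH --job-name=condor-glidein
#SBATCH --partition=shared
#SBATCH --nodes=1
#SBATCH --time=04:00:00

# Local HTCondor config pointing at the HTC pool's central manager and
# limiting the slot lifetime to the Slurm walltime (assumed location).
export CONDOR_CONFIG=$HOME/glidein/condor_config
condor_master -f   # run in the foreground so Slurm tracks the job
"""

def submit_glidein() -> str:
    """Write the glidein batch script and hand it to sbatch."""
    with open("glidein.sbatch", "w") as fh:
        fh.write(GLIDEIN_SCRIPT)
    result = subprocess.run(["sbatch", "glidein.sbatch"],
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()   # e.g. "Submitted batch job 123456"

if __name__ == "__main__":
    print(submit_glidein())
```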
WinHPC System User Basics | High-Performance Computing | NREL
Guidance for starting to use this high-performance computing (HPC) system at NREL; also see WinHPC policies. Log out when you are finished: simply quitting Remote Desktop will keep your session active and using resources on the node. Log in with your NREL.gov username/password, and remember to log out when finished.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Connor, Carolyn Marie; Jacobson, Andree Lars; Bonnie, Amanda Marie
2016-11-01
Sustainable and effective computing infrastructure depends critically on the skills and expertise of domain scientists and of committed and well-trained advanced computing professionals. But, in its ongoing High Performance Computing (HPC) work, Los Alamos National Laboratory noted a persistent shortage of well-prepared applicants, particularly for entry-level cluster administration, file systems administration, and high speed networking positions. Further, based upon recruiting efforts and interactions with universities graduating students in related majors of interest (e.g., computer science (CS)), there has been a long standing skillset gap, as focused training in HPC topics is typically lacking or absent in undergraduate and even in many graduate programs. Given that the effective operation and use of HPC systems requires specialized and often advanced training, that there is a recognized HPC skillset gap, and that there is intense global competition for computing and computational science talent, there is a long-standing and critical need for innovative approaches to help bridge the gap and create a well-prepared, next generation HPC workforce. Our paper places this need in the context of the HPC work and workforce requirements at Los Alamos National Laboratory (LANL) and presents one such innovative program conceived to address the need, bridge the gap, and grow an HPC workforce pipeline at LANL. The Computer System, Cluster, and Networking Summer Institute (CSCNSI) completed its 10th year in 2016. The story of the CSCNSI and its evolution is detailed below with a description of the design of its Boot Camp, and a summary of its success and some key factors that have enabled that success.
High-Performance Computing User Facility | Computational Science | NREL
The High-Performance Computing (HPC) User Facility provides computing systems dedicated to advancing energy technologies, including the Peregrine supercomputer and the Gyrfalcon Mass Storage System. Learn more about these systems and how to access them.
System Resource Allocations | High-Performance Computing | NREL
To use NREL's high-performance computing (HPC) resources, users request allocations of compute hours on NREL HPC systems, including Peregrine and Eagle, and of storage space (in terabytes) on Peregrine, Eagle, and Gyrfalcon. Allocations are principally made in response to an annual call for allocation requests.
NASA Astrophysics Data System (ADS)
Demenev, A. G.
2018-02-01
The present work analyzes the capabilities of the high-performance computing (HPC) infrastructure at Perm State University for solving aircraft engine aeroacoustics problems. We explore the ability to develop new computational aeroacoustics methods/solvers for computer-aided engineering (CAE) systems to handle complicated industrial problems of engine noise prediction. Leading aircraft engine engineering companies, including "UEC-Aviadvigatel" JSC (our industrial partners in Perm, Russia), require such methods/solvers to optimize the geometry of an aircraft engine for fan noise reduction. We analysed Perm State University's HPC hardware resources and software services for efficient use. The results demonstrate that the Perm State University HPC infrastructure is mature enough to tackle industrial-scale problems of developing a CAE system with HPC methods and CFD solvers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Renke; Jin, Shuangshuang; Chen, Yousu
This paper presents a faster-than-real-time dynamic simulation software package that is designed for large-size power system dynamic simulation. It was developed on the GridPACK™ high-performance computing (HPC) framework. The key features of the developed software package include (1) faster-than-real-time dynamic simulation for a WECC system (17,000 buses) with different types of detailed generator, controller, and relay dynamic models, (2) a decoupled parallel dynamic simulation algorithm with optimized computation architecture to better leverage HPC resources and technologies, (3) options for HPC-based linear and iterative solvers, (4) hidden HPC details, such as data communication and distribution, to enable development centered on mathematical models and algorithms rather than on computational details for power system researchers, and (5) easy integration of new dynamic models and related algorithms into the software package.
Training | High-Performance Computing | NREL
Find training resources for using NREL's high-performance computing (HPC) systems as well as related online tutorials. Upcoming training: HPC User Workshop, June 12th. At the conference, a group meets to discuss Best Practices in HPC Training; this group developed a list of resources.
Running ANSYS Fluent on the WinHPC System | High-Performance Computing |
If you don't have one, see WinHPC system user basics. Check license use status: Start > All Programs > ... Run using Fluent Launcher: start Fluent Launcher by opening Start > All Programs > ... Available node groups can be found from HPC Job Manager: Start > All Programs > Microsoft HPC Pack.
Data Security Policy | High-Performance Computing | NREL
To use NREL's high-performance computing (HPC) systems, users must follow the Data Security Policy. NREL HPC systems are operated as research systems and may only contain data related to scientific research. These systems are categorized as low risk, and data are categorized as sensitive or non-sensitive. One example of sensitive data would be personally identifiable information (PII).
System-Level Virtualization Research at Oak Ridge National Laboratory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scott, Stephen L; Vallee, Geoffroy R; Naughton, III, Thomas J
2010-01-01
System-level virtualization, originally a technique for effectively sharing what were then considered large computing resources, subsequently faded from the spotlight as individual workstations gained in popularity with a one machine, one user approach; today it is enjoying a rebirth. One reason for this resurgence is that the simple workstation has grown in capability to rival that of anything available in the past. Thus, computing centers are again looking at the price/performance benefit of sharing that single computing box via server consolidation. However, industry is only concentrating on the benefits of using virtualization for server consolidation (enterprise computing), whereas our interest is in leveraging virtualization to advance high-performance computing (HPC). While these two interests may appear to be orthogonal, one consolidating multiple applications and users on a single machine while the other requires all the power from many machines to be dedicated solely to its purpose, we propose that virtualization does provide attractive capabilities that may be exploited to the benefit of HPC interests. This raises two fundamental questions: is the concept of virtualization (a machine sharing technology) really suitable for HPC, and if so, how does one go about leveraging these virtualization capabilities for the benefit of HPC? To address these questions, this document presents ongoing studies on the usage of system-level virtualization in an HPC context. These studies include an analysis of the benefits of system-level virtualization for HPC, a presentation of research efforts based on virtualization for system availability, and a presentation of research efforts for the management of virtual systems. The basis for this document was material presented by Stephen L. Scott at the Collaborative and Grid Computing Technologies meeting held in Cancun, Mexico on April 12-14, 2007.
Modular HPC I/O characterization with Darshan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snyder, Shane; Carns, Philip; Harms, Kevin
2016-11-13
Contemporary high-performance computing (HPC) applications encompass a broad range of distinct I/O strategies and are often executed on a number of different compute platforms in their lifetime. These large-scale HPC platforms employ increasingly complex I/O subsystems to provide a suitable level of I/O performance to applications. Tuning I/O workloads for such a system is nontrivial, and the results generally are not portable to other HPC systems. I/O profiling tools can help to address this challenge, but most existing tools only instrument specific components within the I/O subsystem, which provides a limited perspective on I/O performance. The increasing diversity of scientific applications and computing platforms calls for greater flexibility and scope in I/O characterization.
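In practice, Darshan is usually enabled by preloading its runtime library for an MPI run and then post-processing the resulting log with its parser. The sketch below shows that flow from Python; the library path, application name, and log location are assumptions that vary by site and Darshan version.

```python
# Sketch of the usual Darshan workflow: preload the runtime library for an
# MPI job, then post-process the resulting log with darshan-parser.
# The library path, log directory, and application name are assumptions
# that differ from site to site.
import os
import subprocess

env = dict(os.environ)
env["LD_PRELOAD"] = "/usr/lib64/libdarshan.so"   # assumed install location

# Run the instrumented application under MPI; Darshan writes one log per job.
subprocess.run(["mpirun", "-n", "64", "./my_io_app"], env=env, check=True)

# Convert the binary log into human-readable counters (file names, bytes
# read/written, access sizes, time spent in POSIX/MPI-IO calls, ...).
log = "/path/to/darshan-logs/user_my_io_app_id12345.darshan"  # assumed path
with open("darshan_counters.txt", "w") as out:
    subprocess.run(["darshan-parser", log], stdout=out, check=True)
```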
High-performance computing with quantum processing units
Britt, Keith A.; Oak Ridge National Lab.; Humble, Travis S.; ...
2017-03-01
The prospects of quantum computing have driven efforts to realize fully functional quantum processing units (QPUs). Recent success in developing proof-of-principle QPUs has prompted the question of how to integrate these emerging processors into modern high-performance computing (HPC) systems. We examine how QPUs can be integrated into current and future HPC system architectures by accounting for functional and physical design requirements. We identify two integration pathways that are differentiated by infrastructure constraints on the QPU and the use cases expected for the HPC system. This includes a tight integration that assumes infrastructure bottlenecks can be overcome as well as a loose integration that assumes they cannot. We find that the performance of both approaches is likely to depend on the quantum interconnect that serves to entangle multiple QPUs. As a result, we also identify several challenges in assessing QPU performance for HPC, and we consider new metrics that capture the interplay between system architecture and the quantum parallelism underlying computational performance.
One-Time Password Tokens | High-Performance Computing | NREL
For connecting to NREL's high-performance computing (HPC) systems, learn how to set up a one-time password (OTP) token for remote and privileged access and how to obtain a one-time pass code from the HPC Operations team. At the sign-in screen, enter your HPC username...
The Case for Modular Redundancy in Large-Scale High Performance Computing Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engelmann, Christian; Ong, Hong Hoe; Scott, Stephen L
2009-01-01
Recent investigations into resilience of large-scale high-performance computing (HPC) systems showed a continuous trend of decreasing reliability and availability. Newly installed systems have a lower mean-time to failure (MTTF) and a higher mean-time to recover (MTTR) than their predecessors. Modular redundancy is being used in many mission critical systems today to provide for resilience, such as for aerospace and command & control systems. The primary argument against modular redundancy for resilience in HPC has always been that the capability of an HPC system, and the respective return on investment, would be significantly reduced. We argue that modular redundancy can significantly increase compute node availability as it removes the impact of scale from single compute node MTTR. We further argue that single compute nodes can be much less reliable, and therefore less expensive, and still be highly available, if their MTTR/MTTF ratio is maintained.
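The argument rests on steady-state availability, A = MTTF / (MTTF + MTTR): with dual-modular redundancy a node pair is down only when both replicas are down, so even cheap, less reliable nodes can remain highly available at scale. The sketch below works through that arithmetic with illustrative numbers that are not taken from the paper.

```python
# Steady-state availability arithmetic behind the modular-redundancy argument.
# MTTF/MTTR values are illustrative assumptions, not numbers from the paper.

def availability(mttf_hours: float, mttr_hours: float) -> float:
    """A = MTTF / (MTTF + MTTR) for a single component."""
    return mttf_hours / (mttf_hours + mttr_hours)

# A cheap, comparatively unreliable compute node ...
a_single = availability(mttf_hours=1_000.0, mttr_hours=10.0)

# ... duplicated: the pair is unavailable only when both replicas are down
# (assuming independent failures), so the unavailability is squared.
a_dmr = 1.0 - (1.0 - a_single) ** 2

# Probability that all N nodes (or node pairs) in a job are up at once.
N = 10_000
print(f"single node availability  : {a_single:.5f}")
print(f"dual-redundant pair       : {a_dmr:.7f}")
print(f"{N} plain nodes all up     : {a_single ** N:.3e}")
print(f"{N} redundant pairs all up : {a_dmr ** N:.4f}")
```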
RELIABILITY, AVAILABILITY, AND SERVICEABILITY FOR PETASCALE HIGH-END COMPUTING AND BEYOND
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chokchai "Box" Leangsuksun
2011-05-31
Our project is a multi-institutional research effort that adopts an interplay of reliability, availability, and serviceability (RAS) aspects for solving resilience issues in high-end scientific computing in the next generation of supercomputers. Results lie in the following tracks: failure prediction in large-scale HPC; investigation of reliability issues and mitigation techniques, including in GPGPU-based HPC systems; and HPC resilience runtime and tools.
WinHPC System Software | High-Performance Computing | NREL
Learn about the software applications, tools, and toolchains available on the WinHPC system for industrial applications. Intel Compilers: development tool and toolchain suite featuring an industry...
Low latency network and distributed storage for next generation HPC systems: the ExaNeSt project
NASA Astrophysics Data System (ADS)
Ammendola, R.; Biagioni, A.; Cretaro, P.; Frezza, O.; Lo Cicero, F.; Lonardo, A.; Martinelli, M.; Paolucci, P. S.; Pastorelli, E.; Pisani, F.; Simula, F.; Vicini, P.; Navaridas, J.; Chaix, F.; Chrysos, N.; Katevenis, M.; Papaeustathiou, V.
2017-10-01
With processor architecture evolution, the HPC market has undergone a paradigm shift. The adoption of low-cost, Linux-based clusters extended the reach of HPC from its roots in modelling and simulation of complex physical systems to a broader range of industries, from biotechnology, cloud computing, computer analytics and big data challenges to manufacturing sectors. In this perspective, near-future HPC systems can be envisioned as composed of millions of low-power computing cores, densely packed (and thus requiring cooling by appropriate technology), with a tightly interconnected, low latency and high performance network and equipped with a distributed storage architecture. Each of these features (dense packing, distributed storage and high performance interconnect) represents a challenge, made all the harder by the need to solve them at the same time. These challenges lie as stumbling blocks along the road towards Exascale-class systems; the ExaNeSt project acknowledges them and tasks itself with investigating ways around them.
Comparative Implementation of High Performance Computing for Power System Dynamic Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Shuangshuang; Huang, Zhenyu; Diao, Ruisheng
Dynamic simulation for transient stability assessment is one of the most important, but intensive, computations for power system planning and operation. Present commercial software is mainly designed for sequential computation to run a single simulation, which is very time consuming with a single processor. The application of High Performance Computing (HPC) to dynamic simulations is very promising in accelerating the computing process by parallelizing its kernel algorithms while maintaining the same level of computation accuracy. This paper describes the comparative implementation of four parallel dynamic simulation schemes in two state-of-the-art HPC environments: Message Passing Interface (MPI) and Open Multi-Processing (OpenMP). These implementations serve to match the application with dedicated multi-processor computing hardware and maximize the utilization and benefits of HPC during the development process.
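The kernel typically parallelized in such schemes is the per-generator evaluation of the dynamic equations at each integration step. The toy sketch below mimics that structure with a Python process pool standing in for OpenMP threads or MPI ranks; the two-state swing-equation model and its parameters are simplifying assumptions for illustration, not the paper's implementation.

```python
# Toy illustration of the parallelization pattern in power system dynamic
# simulation: at every integration step the per-generator state derivatives
# are evaluated independently and can be spread across workers (OpenMP
# threads or MPI ranks in the paper; a multiprocessing pool here).
# The simplified swing-equation model and parameters are assumptions.
import numpy as np
from multiprocessing import Pool

N_GEN, DT, STEPS = 1000, 0.01, 100
rng = np.random.default_rng(0)
H = rng.uniform(3.0, 6.0, N_GEN)    # inertia constants (assumed range)
PM = rng.uniform(0.8, 1.0, N_GEN)   # mechanical power input
PE = rng.uniform(0.8, 1.0, N_GEN)   # electrical power output (held fixed)

def gen_derivative(args):
    """Swing-equation derivatives for one generator: d(delta)/dt, d(omega)/dt."""
    delta, omega, h, pm, pe = args
    return omega, (pm - pe) / (2.0 * h)

def step(states, pool):
    """One forward-Euler step with the per-generator work farmed out."""
    work = [(d, w, H[i], PM[i], PE[i]) for i, (d, w) in enumerate(states)]
    derivs = pool.map(gen_derivative, work)
    return [(d + DT * dd, w + DT * dw)
            for (d, w), (dd, dw) in zip(states, derivs)]

if __name__ == "__main__":
    states = [(0.0, 0.0)] * N_GEN
    with Pool() as pool:
        for _ in range(STEPS):
            states = step(states, pool)
    print("final rotor angle of generator 0: %.4f rad" % states[0][0])
```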
2012-09-30
platform (HPC) was developed, called the HPC-Acoustic Data Accelerator, or HPC-ADA for short. The HPC-ADA was designed based on fielded systems [1-4]...software (Detection cLassification for MAchine learning - High Performance Computing). The software package was designed to utilize parallel and...Sedna [7] and is designed using a parallel architecture, allowing existing algorithms to distribute to the various processing nodes with minimal changes
Kevin Regimbal oversees NREL's High Performance Computing (HPC) Systems & Operations, including engineering and operations. Kevin is interested in data center design and computing as well as data center integration and optimization. Professional experience: HPC oversight as program manager, project manager, center...
NASA Astrophysics Data System (ADS)
Filipcic, A.; Haug, S.; Hostettler, M.; Walker, R.; Weber, M.
2015-12-01
The Piz Daint Cray XC30 HPC system at CSCS, the Swiss National Supercomputing Centre, was the highest-ranked European system on the TOP500 in 2014, also featuring GPU accelerators. Event generation and detector simulation for the ATLAS experiment have been enabled for this machine. We report on the technical solutions, performance, HPC policy challenges and possible future opportunities for HEP on extreme HPC systems. In particular, a custom integration with the ATLAS job submission system has been developed via the Advanced Resource Connector (ARC) middleware. Furthermore, a partial GPU acceleration of the Geant4 detector simulations has been implemented.
On the Impact of Execution Models: A Case Study in Computational Chemistry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chavarría-Miranda, Daniel; Halappanavar, Mahantesh; Krishnamoorthy, Sriram
2015-05-25
Efficient utilization of high-performance computing (HPC) platforms is an important and complex problem. Execution models, abstract descriptions of the dynamic runtime behavior of the execution stack, have significant impact on the utilization of HPC systems. Using a computational chemistry kernel as a case study and a wide variety of execution models combined with load balancing techniques, we explore the impact of execution models on the utilization of an HPC system. We demonstrate a 50 percent improvement in performance by using work stealing relative to a more traditional static scheduling approach. We also use a novel semi-matching technique for load balancing that has comparable performance to a traditional hypergraph-based partitioning implementation, which is computationally expensive. Using this study, we found that execution model design choices and assumptions can limit critical optimizations such as global, dynamic load balancing and finding the correct balance between available work units and different system and runtime overheads. With the emergence of multi- and many-core architectures and the consequent growth in the complexity of HPC platforms, we believe that these lessons will be beneficial to researchers tuning diverse applications on modern HPC platforms, especially on emerging dynamic platforms with energy-induced performance variability.
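The benefit of dynamic balancing over static partitioning is easy to see in a simulated makespan comparison. The sketch below contrasts static block assignment with a greedy shared-queue schedule (a simplified stand-in for work stealing) on a synthetic, heavy-tailed set of task costs; it is not the paper's chemistry kernel.

```python
# Simulated makespan comparison: static block partitioning vs. a dynamic
# greedy queue (a simplified stand-in for work stealing) on imbalanced tasks.
# Task costs are synthetic; this is not the paper's chemistry kernel.
import heapq
import random

random.seed(1)
N_WORKERS = 16
# Heavy-tailed task costs mimic the load imbalance that hurts static schedules.
tasks = [random.paretovariate(1.5) for _ in range(2000)]

def static_makespan(tasks, n):
    """Each worker gets a contiguous block of tasks decided up front."""
    block = (len(tasks) + n - 1) // n
    return max(sum(tasks[i * block:(i + 1) * block]) for i in range(n))

def dynamic_makespan(tasks, n):
    """Workers grab the next task as soon as they are free (greedy list schedule)."""
    finish = [0.0] * n                 # min-heap of worker finish times
    heapq.heapify(finish)
    for cost in tasks:
        earliest = heapq.heappop(finish)
        heapq.heappush(finish, earliest + cost)
    return max(finish)

s = static_makespan(tasks, N_WORKERS)
d = dynamic_makespan(tasks, N_WORKERS)
print(f"static  makespan: {s:8.1f}")
print(f"dynamic makespan: {d:8.1f}  ({(1 - d / s):.0%} shorter)")
```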
Hukerikar, Saurabh; Teranishi, Keita; Diniz, Pedro C.; ...
2017-02-11
In the presence of accelerated fault rates, which are projected to be the norm on future exascale systems, it will become increasingly difficult for high-performance computing (HPC) applications to accomplish useful computation. Due to the fault-oblivious nature of current HPC programming paradigms and execution environments, HPC applications are insufficiently equipped to deal with errors. We believe that HPC applications should be enabled with capabilities to actively search for and correct errors in their computations. The redundant multithreading (RMT) approach offers lightweight replicated execution streams of program instructions within the context of a single application process. Furthermore, the use of complete redundancy incurs significant overhead to the application performance.
Prediction and characterization of application power use in a high-performance computing environment
Bugbee, Bruce; Phillips, Caleb; Egan, Hilary; ...
2017-02-27
Power use in data centers and high-performance computing (HPC) facilities has grown in tandem with increases in the size and number of these facilities. Substantial innovation is needed to enable meaningful reduction in energy footprints in leadership-class HPC systems. In this paper, we focus on characterizing and investigating application-level power usage. We demonstrate potential methods for predicting power usage based on a priori and in situ characteristics. Lastly, we highlight a potential use case of this method through a simulated power-aware scheduler using historical jobs from a real scientific HPC system.
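A minimal version of such a predictor fits a linear model from a priori job features to mean node power. The sketch below does this with ordinary least squares on synthetic data; the feature set and coefficients are assumptions, not NREL's actual model.

```python
# Minimal sketch of application power prediction: ordinary least squares from
# a priori job features to mean node power.  Data are synthetic; the feature
# set and coefficients are assumptions, not NREL's model.
import numpy as np

rng = np.random.default_rng(42)
n_jobs = 500

# A priori features: requested nodes, requested walltime (h), flag for GPU use.
nodes = rng.integers(1, 128, n_jobs)
walltime = rng.uniform(0.5, 48.0, n_jobs)
uses_gpu = rng.integers(0, 2, n_jobs)

# Synthetic "measured" mean power per node (watts) with noise.
power = 180 + 0.3 * nodes + 1.5 * walltime + 90 * uses_gpu \
        + rng.normal(0, 15, n_jobs)

X = np.column_stack([np.ones(n_jobs), nodes, walltime, uses_gpu])
coef, *_ = np.linalg.lstsq(X, power, rcond=None)

new_job = np.array([1.0, 64, 12.0, 1])   # intercept, nodes, hours, GPU flag
print("fitted coefficients:", np.round(coef, 2))
print(f"predicted mean node power for new job: {new_job @ coef:.0f} W")
```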
Connecting to HPC Systems | High-Performance Computing | NREL
Connect using one of the following methods, which use multi-factor authentication; first, you will need to set it up. If you just need access to a command line on an HPC system, use one of the following methods...
Chip-scale integrated optical interconnects: a key enabler for future high-performance computing
NASA Astrophysics Data System (ADS)
Haney, Michael; Nair, Rohit; Gu, Tian
2012-01-01
High Performance Computing (HPC) systems are putting ever-increasing demands on the throughput efficiency of their interconnection fabrics. In this paper, the limits of conventional metal trace-based inter-chip interconnect fabrics are examined in the context of state-of-the-art HPC systems, which currently operate near the 1 GFLOPS/W level. The analysis suggests that conventional metal trace interconnects will limit performance to approximately 6 GFLOPS/W in larger HPC systems that require many computer chips to be interconnected in parallel processing architectures. As the HPC communications bottlenecks push closer to the processing chips, integrated Optical Interconnect (OI) technology may provide the ultra-high bandwidths needed at the inter- and intra-chip levels. With inter-chip photonic link energies projected to be less than 1 pJ/bit, integrated OI is projected to enable HPC architecture scaling to the 50 GFLOPS/W level and beyond, providing a path to Peta-FLOPS-level HPC within a single rack, and potentially even Exa-FLOPS-level HPC for large systems. A new hybrid integrated chip-scale OI approach is described and evaluated. The concept integrates a high-density polymer waveguide fabric directly on top of a multiple quantum well (MQW) modulator array that is area-bonded to the Silicon computing chip. Grayscale lithography is used to fabricate 5 μm x 5 μm polymer waveguides and associated novel small-footprint total internal reflection-based vertical input/output couplers directly onto a layer containing an array of GaAs MQW devices configured to be either absorption modulators or photodetectors. An external continuous wave optical "power supply" is coupled into the waveguide links. Contrast ratios were measured using a test rider chip in place of a Silicon processing chip. The results suggest that sub-pJ/b chip-scale communication is achievable with this concept. When integrated into high-density integrated optical interconnect fabrics, it could provide a seamless interconnect fabric spanning the intra-
Using Performance Tools to Support Experiments in HPC Resilience
DOE Office of Scientific and Technical Information (OSTI.GOV)
Naughton, III, Thomas J; Boehm, Swen; Engelmann, Christian
2014-01-01
The high performance computing (HPC) community is working to address fault tolerance and resilience concerns for current and future large scale computing platforms. This is driving enhancements in the programming environments, specifically research on enhancing message passing libraries to support fault tolerant computing capabilities. The community has also recognized that tools for resilience experimentation are greatly lacking. However, we argue that there are several parallels between performance tools and resilience tools. As such, we believe the rich set of HPC performance-focused tools can be extended (repurposed) to benefit the resilience community. In this paper, we describe the initial motivation to leverage standard HPC performance analysis techniques to aid in developing diagnostic tools to assist fault tolerance experiments for HPC applications. These diagnosis procedures help to provide context for the system when the errors (failures) occurred. We describe our initial work in leveraging an MPI performance trace tool to assist in providing global context during fault injection experiments. Such tools will assist the HPC resilience community as they extend existing and new application codes to support fault tolerance.
User Account Passwords | High-Performance Computing | NREL
For NREL's high-performance computing (HPC) systems, learn about user account password requirements and how to set up, log in, and change passwords. Logging in the first time: after you request an HPC user account, you'll receive a temporary password. Set...
Shared Storage Usage Policy | High-Performance Computing | NREL
To use NREL's high-performance computing (HPC) systems, you must abide by the Shared Storage Usage Policy. NREL HPC allocations include storage space in the /projects filesystem. However, /projects is a shared resource, and project...
Toward a Proof of Concept Cloud Framework for Physics Applications on Blue Gene Supercomputers
NASA Astrophysics Data System (ADS)
Dreher, Patrick; Scullin, William; Vouk, Mladen
2015-09-01
Traditional high performance supercomputers are capable of delivering large sustained state-of-the-art computational resources to physics applications over extended periods of time using batch processing mode operating environments. However, today there is an increasing demand for more complex workflows that involve large fluctuations in the levels of HPC physics computational requirements during the simulations. Some of the workflow components may also require a richer set of operating system features and schedulers than normally found in a batch oriented HPC environment. This paper reports on progress toward a proof of concept design that implements a cloud framework onto BG/P and BG/Q platforms at the Argonne Leadership Computing Facility. The BG/P implementation utilizes the Kittyhawk utility and the BG/Q platform uses an experimental heterogeneous FusedOS operating system environment. Both platforms use the Virtual Computing Laboratory as the cloud computing system embedded within the supercomputer. This proof of concept design allows a cloud to be configured so that it can capitalize on the specialized infrastructure capabilities of a supercomputer and the flexible cloud configurations without resorting to virtualization. Initial testing of the proof of concept system is done using the lattice QCD MILC code. These types of user reconfigurable environments have the potential to deliver experimental schedulers and operating systems within a working HPC environment for physics computations that may be different from the native OS and schedulers on production HPC supercomputers.
High Productivity Computing Systems and Competitiveness Initiative
2007-07-01
planning committee for the annual, international Supercomputing Conference in 2004 and 2005. This is the leading HPC industry conference in the world. It...sector partnerships. Partnerships will form a key part of discussions at the 2nd High Performance Computing Users Conference, planned for July 13, 2005...other things an interagency roadmap for high-end computing core technologies and an accessibility improvement plan. Improving HPC Education and
What Physicists Should Know About High Performance Computing - Circa 2002
NASA Astrophysics Data System (ADS)
Frederick, Donald
2002-08-01
High Performance Computing (HPC) is a dynamic, cross-disciplinary field that traditionally has involved applied mathematicians, computer scientists, and others primarily from the various disciplines that have been major users of HPC resources - physics, chemistry, engineering, with increasing use by those in the life sciences. There is a technological dynamic that is powered by economic as well as by technical innovations and developments. This talk will discuss practical ideas to be considered when developing numerical applications for research purposes. Even with the rapid pace of development in the field, the author believes that these concepts will not become obsolete for a while, and will be of use to scientists who either are considering, or who have already started down the HPC path. These principles will be applied in particular to current parallel HPC systems, but there will also be references of value to desktop users. The talk will cover such topics as: computing hardware basics, single-cpu optimization, compilers, timing, numerical libraries, debugging and profiling tools and the emergence of Computational Grids.
WinHPC System Programming | High-Performance Computing | NREL
Learn how to build and run an MPI (Message Passing Interface) application, including where the header (mpi.h) and library (msmpi.lib) are. To build from the command line, run Start > Intel Software Development Tools > Intel C++ Compiler Professional... > C++ Build Environment for applications running...
NASA Astrophysics Data System (ADS)
Gerhardt, Lisa; Bhimji, Wahid; Canon, Shane; Fasel, Markus; Jacobsen, Doug; Mustafa, Mustafa; Porter, Jeff; Tsulaia, Vakho
2017-10-01
Bringing HEP computing to HPC can be difficult. Software stacks are often very complicated, with numerous dependencies that are difficult to get installed on an HPC system. To address this issue, NERSC has created Shifter, a framework that delivers Docker-like functionality to HPC. It works by extracting images from native formats and converting them to a common format that is optimally tuned for the HPC environment. We have used Shifter to deliver the CVMFS software stack for ALICE, ATLAS, and STAR on the supercomputers at NERSC. As well as enabling the distribution of multi-TB CVMFS stacks to HPC, this approach also offers performance advantages. Software startup times are significantly reduced, and load times scale with minimal variation to 1000s of nodes. We profile several successful examples of scientists using Shifter to make scientific analysis easily customizable and scalable. We will describe the Shifter framework and several efforts in HEP and NP to use Shifter to deliver their software on the Cori HPC system.
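In practice, using Shifter on a Cori-like system involves pulling a Docker image into Shifter's image gateway and then requesting that image in the batch job. The sketch below follows that general pattern; the image name, resource requests, and script name are placeholders, and exact flags should be checked against current site documentation.

```python
# Sketch of a typical Shifter flow on an HPC batch system: pull a Docker
# image into Shifter's image gateway, then run the containerized application
# under the batch scheduler.  Image name, node/task counts, and script name
# are placeholders; verify flags against the current site documentation.
import subprocess

IMAGE = "docker:myexperiment/analysis:latest"   # placeholder image

# 1. Convert/pull the Docker image into Shifter's native format.
subprocess.run(["shifterimg", "pull", IMAGE], check=True)

# 2. Batch job that runs inside the image.
job = f"""#!/bin/bash
#SBATCH --nodes=4
#SBATCH --time=01:00:00
#SBATCH --image={IMAGE}

srun -n 128 shifter python3 ./run_analysis.py
"""
with open("shifter_job.sbatch", "w") as fh:
    fh.write(job)
subprocess.run(["sbatch", "shifter_job.sbatch"], check=True)
```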
P43-S Computational Biology Applications Suite for High-Performance Computing (BioHPC.net)
Pillardy, J.
2007-01-01
One of the challenges of high-performance computing (HPC) is user accessibility. At the Cornell University Computational Biology Service Unit, which is also a Microsoft HPC institute, we have developed a computational biology application suite that allows researchers from biological laboratories to submit their jobs to the parallel cluster through an easy-to-use Web interface. Through this system, we are providing users with popular bioinformatics tools including BLAST, HMMER, InterproScan, and MrBayes. The system is flexible and can be easily customized to include other software. It is also scalable; the installation on our servers currently processes approximately 8500 job submissions per year, many of them requiring massively parallel computations. It also has a built-in user management system, which can limit software and/or database access to specified users. TAIR, the major database of the plant model organism Arabidopsis, and SGN, the international tomato genome database, are both using our system for storage and data analysis. The system consists of a Web server running the interface (ASP.NET C#), Microsoft SQL server (ADO.NET), compute cluster running Microsoft Windows, ftp server, and file server. Users can interact with their jobs and data via a Web browser, ftp, or e-mail. The interface is accessible at http://cbsuapps.tc.cornell.edu/.
Mining Software Usage with the Automatic Library Tracking Database (ALTD)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hadri, Bilel; Fahey, Mark R
2013-01-01
Tracking software usage is important for HPC centers, computer vendors, code developers and funding agencies to provide more efficient and targeted software support, and to forecast needs and guide HPC software effort towards the Exascale era. However, accurately tracking software usage on HPC systems has been a challenging task. In this paper, we present a tool called the Automatic Library Tracking Database (ALTD) that has been developed and put in production on several Cray systems. The ALTD infrastructure prototype automatically and transparently stores information about libraries linked into an application at compilation time and also the executables launched in a batch job. We will illustrate the usage of libraries, compilers and third party software applications on a system managed by the National Institute for Computational Sciences.
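The core idea, intercepting the link step and recording which libraries an executable was built against, can be sketched as a thin wrapper placed ahead of the real linker that logs every -l flag and archive path to a small database. This is a conceptual illustration of the approach, not ALTD's actual implementation (which, per the abstract, also records the executables launched in batch jobs).

```python
# Conceptual sketch of link-time library tracking in the spirit of ALTD:
# a thin wrapper placed ahead of the real linker in PATH records which
# libraries each executable was linked against.  Database path and linker
# location are assumptions; this is not ALTD's actual implementation.
import getpass
import sqlite3
import subprocess
import sys
import time

DB = "/tmp/linktracking.sqlite"   # assumed log location
REAL_LD = "/usr/bin/ld"           # assumed path of the real linker

def record(argv):
    """Log the output name and every library-looking argument of this link."""
    libs = [a for a in argv if a.startswith("-l") or a.endswith((".a", ".so"))]
    try:
        output = argv[argv.index("-o") + 1]
    except (ValueError, IndexError):
        output = "a.out"
    con = sqlite3.connect(DB)
    con.execute("CREATE TABLE IF NOT EXISTS links "
                "(ts REAL, user TEXT, output TEXT, libraries TEXT)")
    con.execute("INSERT INTO links VALUES (?, ?, ?, ?)",
                (time.time(), getpass.getuser(), output, " ".join(libs)))
    con.commit()
    con.close()

if __name__ == "__main__":
    args = sys.argv[1:]
    record(args)                                   # log first, then link
    sys.exit(subprocess.call([REAL_LD] + args))
```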
Integrating the Apache Big Data Stack with HPC for Big Data
NASA Astrophysics Data System (ADS)
Fox, G. C.; Qiu, J.; Jha, S.
2014-12-01
There is perhaps a broad consensus as to important issues in practical parallel computing as applied to large scale simulations; this is reflected in supercomputer architectures, algorithms, libraries, languages, compilers and best practice for application development. However, the same is not so true for data intensive computing, even though commercial clouds devote much more resources to data analytics than supercomputers devote to simulations. We look at a sample of over 50 big data applications to identify characteristics of data intensive applications and to deduce needed runtime and architectures. We suggest a big data version of the famous Berkeley dwarfs and NAS parallel benchmarks and use these to identify a few key classes of hardware/software architectures. Our analysis builds on combining HPC and ABDS, the Apache big data software stack that is widely used in modern cloud computing. Initial results on clouds and HPC systems are encouraging. We propose the development of SPIDAL (Scalable Parallel Interoperable Data Analytics Library), built on system and data abstractions suggested by the HPC-ABDS architecture. We discuss how it can be used in several application areas including Polar Science.
Algorithm for fast event parameters estimation on GEM acquired data
NASA Astrophysics Data System (ADS)
Linczuk, Paweł; Krawczyk, Rafał D.; Poźniak, Krzysztof T.; Kasprowicz, Grzegorz; Wojeński, Andrzej; Chernyshova, Maryna; Czarski, Tomasz
2016-09-01
We present a study of a software-hardware environment for developing fast computation methods with high throughput and low latency, which can be used as a back-end in High Energy Physics (HEP) and other High Performance Computing (HPC) systems that are driven by a high volume of input from an electronic sensor-based front-end. The paper discusses parallelization possibilities and presents tests on Intel HPC solutions, with consideration of applications with Gas Electron Multiplier (GEM) measurement systems.
Data Services in Support of High Performance Computing-Based Distributed Hydrologic Models
NASA Astrophysics Data System (ADS)
Tarboton, D. G.; Horsburgh, J. S.; Dash, P. K.; Gichamo, T.; Yildirim, A. A.; Jones, N.
2014-12-01
We have developed web-based data services to support the application of hydrologic models on High Performance Computing (HPC) systems. The purposes of these services are to provide hydrologic researchers, modelers, water managers, and users access to HPC resources without requiring them to become HPC experts and understand the intrinsic complexities of the data services, so as to reduce the amount of time and effort spent in finding and organizing the data required to execute hydrologic models and data preprocessing tools on HPC systems. These services address some of the data challenges faced by hydrologic models that strive to take advantage of HPC. Needed data are often not in the form needed by such models, requiring researchers to spend time and effort on data preparation and preprocessing that inhibits or limits the application of these models. Another limitation is the difficult-to-use batch job control and queuing systems of HPC systems. We have developed a REST-based gateway application programming interface (API) for authenticated access to HPC systems that abstracts away many of the details that are barriers to HPC use and enhances accessibility from desktop programming and scripting languages such as Python and R. We have used this gateway API to establish software services that support the delineation of watersheds to define a modeling domain, then extract terrain and land use information to automatically configure the inputs required for hydrologic models. These services support the Terrain Analysis Using Digital Elevation Model (TauDEM) tools for watershed delineation and generation of hydrology-based terrain information such as wetness index and stream networks. These services also support the derivation of inputs for the Utah Energy Balance snowmelt model used to address questions such as how climate, land cover and land use change may affect snowmelt inputs to runoff generation. To enhance access to the time varying climate data used to drive hydrologic models, we have developed services to downscale and re-grid nationally available climate analysis data from systems such as NLDAS and MERRA. These cases serve as examples for how this approach can be extended to other models to enhance the use of HPC for hydrologic modeling.
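To make the idea concrete, the sketch below shows what a client of such a REST-based gateway might look like: authenticate, submit a TauDEM-style watershed delineation job, and poll for completion. Every URL, endpoint, and field name here is hypothetical and invented for illustration; the actual gateway API will differ.

```python
# Hypothetical client sketch for a REST-based HPC gateway of the kind
# described above: submit a watershed-delineation job and poll its status.
# Every URL, endpoint, and field name is invented for illustration.
import time
import requests

BASE = "https://hpc-gateway.example.org/api/v1"     # hypothetical gateway
TOKEN = {"Authorization": "Bearer <access-token>"}   # obtained elsewhere

# Submit a TauDEM-style watershed delineation job for an assumed DEM/outlet.
job_req = {
    "tool": "taudem/delineate",
    "inputs": {"outlet": [-111.9, 40.7], "dem": "ned30m_utah"},
    "resources": {"nodes": 2, "walltime_minutes": 30},
}
job = requests.post(f"{BASE}/jobs", json=job_req,
                    headers=TOKEN, timeout=30).json()

# Poll until the batch system reports completion, then show result links.
while True:
    status = requests.get(f"{BASE}/jobs/{job['id']}",
                          headers=TOKEN, timeout=30).json()
    if status["state"] in ("COMPLETED", "FAILED"):
        break
    time.sleep(10)
print(status["state"], status.get("outputs", {}))
```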
Quantifying Scheduling Challenges for Exascale System Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mondragon, Oscar; Bridges, Patrick G.; Jones, Terry R
2015-01-01
The move towards high-performance computing (HPC) applications comprised of coupled codes and the need to dramatically reduce data movement is leading to a reexamination of time-sharing vs. space-sharing in HPC systems. In this paper, we discuss and begin to quantify the performance impact of a move away from strict space-sharing of nodes for HPC applications. Specifically, we examine the potential performance cost of time-sharing nodes between application components, we determine whether a simple coordinated scheduling mechanism can address these problems, and we research how suitable simple constraint-based optimization techniques are for solving scheduling challenges in this regime. Our results demonstrate that current general-purpose HPC system software scheduling and resource allocation systems are subject to significant performance deficiencies, which we quantify for six representative applications. Based on these results, we discuss areas in which additional research is needed to meet the scheduling challenges of next-generation HPC systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hussain, Hameed; Malik, Saif Ur Rehman; Hameed, Abdul
An efficient resource allocation is a fundamental requirement in high performance computing (HPC) systems. Many projects are dedicated to large-scale distributed computing systems that have designed and developed resource allocation mechanisms with a variety of architectures and services. In our study, a comprehensive survey describing resource allocation in various HPC systems is reported. The aim of the work is to aggregate under a joint framework the existing solutions for HPC to provide a thorough analysis and characterization of the resource management and allocation strategies. Resource allocation mechanisms and strategies play a vital role towards the performance improvement of all the HPC classifications. Therefore, a comprehensive discussion of widely used resource allocation strategies deployed in HPC environments is required, which is one of the motivations of this survey. Moreover, we have classified the HPC systems into three broad categories, namely: (a) cluster, (b) grid, and (c) cloud systems, and define the characteristics of each class by extracting sets of common attributes. All of the aforementioned systems are cataloged into pure software and hybrid/hardware solutions. The system classification is used to identify approaches followed by the implementation of existing resource allocation strategies that are widely presented in the literature.
A framework for graph-based synthesis, analysis, and visualization of HPC cluster job data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mayo, Jackson R.; Kegelmeyer, W. Philip, Jr.; Wong, Matthew H.
The monitoring and system analysis of high performance computing (HPC) clusters is of increasing importance to the HPC community. Analysis of HPC job data can be used to characterize system usage and diagnose and examine failure modes and their effects. This analysis is not straightforward, however, due to the complex relationships that exist between jobs. These relationships are based on a number of factors, including shared compute nodes between jobs, proximity of jobs in time, etc. Graph-based techniques represent an approach that is particularly well suited to this problem, and provide an effective technique for discovering important relationships in job queuing and execution data. The efficacy of these techniques is rooted in the use of a semantic graph as a knowledge representation tool. In a semantic graph, job data, represented in a combination of numerical and textual forms, can be flexibly processed into edges, with corresponding weights, expressing relationships between jobs, nodes, users, and other relevant entities. This graph-based representation permits formal manipulation by a number of analysis algorithms. This report presents a methodology and software implementation that leverages semantic graph-based techniques for the system-level monitoring and analysis of HPC clusters based on job queuing and execution data. Ontology development and graph synthesis is discussed with respect to the domain of HPC job data. The framework developed automates the synthesis of graphs from a database of job information. It also provides a front end, enabling visualization of the synthesized graphs. Additionally, an analysis engine is incorporated that provides performance analysis, graph-based clustering, and failure prediction capabilities for HPC systems.
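A minimal version of the graph synthesis step might look like the sketch below: jobs become vertices, and weighted edges capture shared compute nodes and temporal overlap. The job records and the weighting scheme are illustrative assumptions, not the report's ontology.

```python
# Sketch of building a job-relationship graph of the kind described above:
# jobs become vertices and weighted edges capture shared compute nodes and
# temporal proximity.  Job records and weighting are illustrative assumptions.
import networkx as nx

jobs = [  # (job_id, user, nodes used, start hour, end hour) -- synthetic data
    ("j1", "alice", {"n01", "n02"}, 0.0, 3.0),
    ("j2", "bob",   {"n02", "n03"}, 2.5, 5.0),
    ("j3", "alice", {"n07"},        2.8, 4.0),
]

G = nx.Graph()
for jid, user, nodes, start, end in jobs:
    G.add_node(jid, user=user, start=start, end=end)

for i, (a, ua, na, sa, ea) in enumerate(jobs):
    for b, ub, nb, sb, eb in jobs[i + 1:]:
        shared = len(na & nb)                          # shared compute nodes
        overlap = max(0.0, min(ea, eb) - max(sa, sb))  # hours of co-execution
        weight = shared + 0.5 * overlap                # assumed weighting
        if weight > 0:
            G.add_edge(a, b, weight=weight)

for a, b, data in G.edges(data=True):
    print(f"{a} -- {b}: weight {data['weight']:.2f}")
```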
System Resource Allocation Requests | High-Performance Computing | NREL
An HPC user account is required to utilize the online allocation request system. If you need an HPC user account, please request one online: visit User Accounts and click the green "Request Account" button; this will direct you to... Follow the online instructions provided in the DocuSign form. Write "Need HPC User Account to use...
Towards New Metrics for High-Performance Computing Resilience
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hukerikar, Saurabh; Ashraf, Rizwan A; Engelmann, Christian
Ensuring the reliability of applications is becoming an increasingly important challenge as high-performance computing (HPC) systems experience an ever-growing number of faults, errors and failures. While the HPC community has made substantial progress in developing various resilience solutions, it continues to rely on platform-based metrics to quantify application resiliency improvements. The resilience of an HPC application is concerned with the reliability of the application outcome as well as the fault handling efficiency. To understand the scope of impact, effective coverage and performance efficiency of existing and emerging resilience solutions, there is a need for new metrics. In this paper, we develop new ways to quantify resilience that consider both the reliability and the performance characteristics of the solutions from the perspective of HPC applications. As HPC systems continue to evolve in terms of scale and complexity, it is expected that applications will experience various types of faults, errors and failures, which will require applications to apply multiple resilience solutions across the system stack. The proposed metrics are intended to be useful for understanding the combined impact of these solutions on an application's ability to produce correct results and to evaluate their overall impact on an application's performance in the presence of various modes of faults.
Trends in data locality abstractions for HPC systems
Unat, Didem; Dubey, Anshu; Hoefler, Torsten; ...
2017-05-10
The cost of data movement has always been an important concern in high performance computing (HPC) systems. It has now become the dominant factor in terms of both energy consumption and performance. Support for expression of data locality has been explored in the past, but those efforts have had only modest success in being adopted in HPC applications for various reasons. However, with the increasing complexity of the memory hierarchy and higher parallelism in emerging HPC systems, locality management has acquired a new urgency. Developers can no longer limit themselves to low-level solutions and ignore the potential for productivity and performance portability obtained by using locality abstractions. Fortunately, the trend emerging in recent literature on the topic alleviates many of the concerns that got in the way of their adoption by application developers. Data locality abstractions are available in the forms of libraries, data structures, languages and runtime systems; a common theme is increasing productivity without sacrificing performance. Furthermore, this paper examines these trends and identifies commonalities that can combine various locality concepts to develop a comprehensive approach to expressing and managing data locality on future large-scale high-performance computing systems.
Cockrell, Robert Chase; Christley, Scott; Chang, Eugene; An, Gary
2015-01-01
Perhaps the greatest challenge currently facing the biomedical research community is the ability to integrate highly detailed cellular and molecular mechanisms to represent clinical disease states as a pathway to engineer effective therapeutics. This is particularly evident in the representation of organ-level pathophysiology in terms of abnormal tissue structure, which, through histology, remains a mainstay in disease diagnosis and staging. As such, being able to generate anatomic scale simulations is a highly desirable goal. While computational limitations have previously constrained the size and scope of multi-scale computational models, advances in the capacity and availability of high-performance computing (HPC) resources have greatly expanded the ability of computational models of biological systems to achieve anatomic, clinically relevant scale. Diseases of the intestinal tract are prime examples of pathophysiological processes that manifest at multiple scales of spatial resolution, with structural abnormalities present at the microscopic, macroscopic and organ levels. In this paper, we describe a novel, massively parallel computational model of the gut, the Spatially Explicit General-purpose Model of Enteric Tissue_HPC (SEGMEnT_HPC), which extends an existing model of the gut epithelium, SEGMEnT, in order to create cell-for-cell anatomic scale simulations. We present an example implementation of SEGMEnT_HPC that simulates the pathogenesis of ileal pouchitis, an important clinical entity that affects patients following remedial surgery for ulcerative colitis.
On-demand provisioning of HEP compute resources on cloud sites and shared HPC centers
NASA Astrophysics Data System (ADS)
Erli, G.; Fischer, F.; Fleig, G.; Giffels, M.; Hauth, T.; Quast, G.; Schnepf, M.; Heese, J.; Leppert, K.; Arnaez de Pedro, J.; Sträter, R.
2017-10-01
This contribution reports on solutions, experiences and recent developments with the dynamic, on-demand provisioning of remote computing resources for analysis and simulation workflows. Local resources of a physics institute are extended by private and commercial cloud sites, ranging from the inclusion of desktop clusters over institute clusters to HPC centers. Rather than relying on dedicated HEP computing centers, it is nowadays more reasonable and flexible to utilize remote computing capacity via virtualization techniques or container concepts. We report on recent experience from incorporating a remote HPC center (NEMO Cluster, Freiburg University) and resources dynamically requested from the commercial provider 1&1 Internet SE into our institute's computing infrastructure. The Freiburg HPC resources are requested via the standard batch system, allowing HPC and HEP applications to be executed simultaneously, such that regular batch jobs run side by side with virtual machines managed via OpenStack [1]. For the inclusion of the 1&1 commercial resources, a Python API and SDK as well as the possibility to upload images were available. Large scale tests prove the capability to serve the scientific use case in the European 1&1 datacenters. The described environment at the Institute of Experimental Nuclear Physics (IEKP) at KIT serves the needs of researchers participating in the CMS and Belle II experiments. In total, resources exceeding half a million CPU hours have been provided by remote sites.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yousu; Etingov, Pavel V.; Ren, Huiying
This paper describes a probabilistic look-ahead contingency analysis application that incorporates smart sampling and high-performance computing (HPC) techniques. Smart sampling techniques are implemented to effectively represent the structure and statistical characteristics of uncertainty introduced by different sources in the power system. They can significantly reduce the data set size required for multiple look-ahead contingency analyses, and therefore reduce the time required to compute them. HPC techniques are used to further reduce computational time. These two techniques enable a predictive capability that forecasts the impact of various uncertainties on potential transmission limit violations. The developed package has been tested with real-world data from the Bonneville Power Administration. Case study results are presented to demonstrate the performance of the applications developed.
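As a rough illustration of the smart-sampling idea (not the application's actual algorithm), the sketch below draws Latin hypercube samples of hypothetical wind and load forecast errors; each sample would then drive one look-ahead contingency run. All names and numbers are illustrative assumptions.

```python
# Minimal sketch of "smart sampling" for look-ahead contingency analysis.
# Latin hypercube sampling spreads a small number of scenarios evenly over the
# uncertainty space (e.g., wind and load forecast errors), so far fewer samples
# are needed than with plain Monte Carlo. All names here are illustrative.
import numpy as np
from scipy.stats import norm  # inverse CDF for Gaussian marginals

def latin_hypercube(n_samples: int, n_dims: int, rng=None) -> np.ndarray:
    """Return n_samples points in [0, 1)^n_dims, one per stratum along each axis."""
    rng = np.random.default_rng(rng)
    cut = np.linspace(0.0, 1.0, n_samples + 1)
    u = rng.uniform(size=(n_samples, n_dims))
    points = cut[:-1, None] + u / n_samples      # jitter within each stratum
    for d in range(n_dims):                      # decorrelate the columns
        points[:, d] = rng.permutation(points[:, d])
    return points

def forecast_scenarios(mean, std, n_samples=50, rng=0):
    """Map unit-cube samples to Gaussian forecast-error scenarios."""
    mean, std = np.asarray(mean), np.asarray(std)
    unit = latin_hypercube(n_samples, mean.size, rng)
    return mean + std * norm.ppf(unit)

if __name__ == "__main__":
    # Hypothetical wind (MW) and load (MW) forecasts with their uncertainties.
    scenarios = forecast_scenarios(mean=[120.0, 950.0], std=[15.0, 30.0])
    for s in scenarios[:3]:
        print(f"wind={s[0]:7.1f} MW  load={s[1]:7.1f} MW")
        # each scenario would be fed to a contingency analysis run here
```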
2007-06-08
Lightwave VDE /200 KVM-over-Fiber (Keyboard, Video and Mouse) devices installed throughout the TARDEC campus. Implementation of this system required...development effort through the pursuit of an Army-funded Phase-II Small Business Innovative Research (SBIR) effort with IP Video Systems (formerly known as...visualization capabilities of a DoD High-Performance Computing facility, many advanced features are necessary. TARDEC-HPC's SBIR with IP Video Systems
NASA Astrophysics Data System (ADS)
Huang, Qian
2014-09-01
Scientific computing often requires the availability of a massive number of computers for performing large-scale simulations, and computing in mineral physics is no exception. In order to investigate physical properties of minerals at extreme conditions in computational mineral physics, parallel computing technology is used to speed up the performance by utilizing multiple computer resources to process a computational task simultaneously, thereby greatly reducing computation time. Traditionally, parallel computing has been addressed by using High Performance Computing (HPC) solutions and installed facilities such as clusters and supercomputers. Today, there is tremendous growth in cloud computing. Infrastructure as a Service (IaaS), the on-demand and pay-as-you-go model, creates a flexible and cost-effective means to access computing resources. In this paper, a feasibility report of HPC on a cloud infrastructure is presented. It is found that current cloud services in the IaaS layer still need to improve performance to be useful to research projects. On the other hand, Software as a Service (SaaS), another type of cloud computing, is introduced into an HPC system for computing in mineral physics, and an application of it is developed. In this paper, an overall description of this SaaS application is presented. This contribution can promote cloud application development in computational mineral physics, and cross-disciplinary studies.
Active Flash: Out-of-core Data Analytics on Flash Storage
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boboila, Simona; Kim, Youngjae; Vazhkudai, Sudharshan S
2012-01-01
Next generation science will increasingly come to rely on the ability to perform efficient, on-the-fly analytics of data generated by high-performance computing (HPC) simulations, modeling complex physical phenomena. Scientific computing workflows are stymied by the traditional chaining of simulation and data analysis, creating multiple rounds of redundant reads and writes to the storage system, which grows in cost with the ever-increasing gap between compute and storage speeds in HPC clusters. Recent HPC acquisitions have introduced compute node-local flash storage as a means to alleviate this I/O bottleneck. We propose a novel approach, Active Flash, to expedite data analysis pipelines by migrating to the location of the data, the flash device itself. We argue that Active Flash has the potential to enable true out-of-core data analytics by freeing up both the compute core and the associated main memory. By performing analysis locally, dependence on limited bandwidth to a central storage system is reduced, while allowing this analysis to proceed in parallel with the main application. In addition, offloading work from the host to the more power-efficient controller reduces peak system power usage, which is already in the megawatt range and poses a major barrier to HPC system scalability. We propose an architecture for Active Flash, explore energy and performance trade-offs in moving computation from host to storage, demonstrate the ability of appropriate embedded controllers to perform data analysis and reduction tasks at speeds sufficient for this application, and present a simulation study of Active Flash scheduling policies. These results show the viability of the Active Flash model, and its capability to potentially have a transformative impact on scientific data analysis.
Fingerprinting Communication and Computation on HPC Machines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peisert, Sean
2010-06-02
How do we identify what is actually running on high-performance computing systems? Names of binaries, dynamic libraries loaded, or other elements in a submission to a batch queue can give clues, but binary names can be changed, and libraries provide limited insight and resolution on the code being run. In this paper, we present a method for "fingerprinting" code running on HPC machines using elements of communication and computation. We then discuss how that fingerprint can be used to determine if the code is consistent with certain other types of codes, what a user usually runs, or what the user requested an allocation to do. In some cases, our techniques enable us to fingerprint HPC codes using runtime MPI data with a high degree of accuracy.
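The following sketch illustrates the general flavor of such fingerprinting, not the paper's method: it reduces hypothetical MPI call counts (which would come from a profiling layer) to a normalized feature vector and compares runs by cosine similarity.

```python
# Illustrative sketch (not the paper's implementation): fingerprint a run by
# the relative frequency of its MPI calls and compare fingerprints with cosine
# similarity. The call-count dictionaries below are hypothetical.
from math import sqrt

MPI_CALLS = ["MPI_Send", "MPI_Recv", "MPI_Isend", "MPI_Irecv",
             "MPI_Allreduce", "MPI_Bcast", "MPI_Wait", "MPI_Barrier"]

def fingerprint(call_counts: dict) -> list:
    """Normalize raw MPI call counts into a unit-length feature vector."""
    v = [float(call_counts.get(name, 0)) for name in MPI_CALLS]
    norm = sqrt(sum(x * x for x in v)) or 1.0
    return [x / norm for x in v]

def similarity(fp_a: list, fp_b: list) -> float:
    """Cosine similarity between two fingerprints (1.0 = identical mix of calls)."""
    return sum(a * b for a, b in zip(fp_a, fp_b))

if __name__ == "__main__":
    stencil_run = {"MPI_Isend": 9600, "MPI_Irecv": 9600, "MPI_Wait": 19200,
                   "MPI_Allreduce": 100}
    solver_run = {"MPI_Allreduce": 5400, "MPI_Bcast": 300, "MPI_Barrier": 50}
    unknown_run = {"MPI_Isend": 4100, "MPI_Irecv": 4100, "MPI_Wait": 8150,
                   "MPI_Allreduce": 40}
    for label, ref in (("stencil", stencil_run), ("solver", solver_run)):
        print(label, round(similarity(fingerprint(unknown_run), fingerprint(ref)), 3))
```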
Improving User Notification on Frequently Changing HPC Environments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fuson, Christopher B; Renaud, William A
2016-01-01
Today's HPC centers' user environments can be very complex. Centers often contain multiple large, complicated computational systems, each with its own user environment. Changes to a system's environment can be very impactful; however, a center's user environment is, in one way or another, frequently changing. Because of this, it is vital for centers to notify users of change. For users, untracked changes can be costly, resulting in unnecessary debug time as well as wasting valuable compute allocations and research time. Communicating frequent change to diverse user communities is a common and ongoing task for HPC centers. This paper will cover the OLCF's current processes and methods used to communicate change to users of the center's large Cray systems and supporting resources. The paper will share lessons learned and goals as well as practices, tools, and methods used to continually improve and reach members of the OLCF user community.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dennig, Yasmin
Sandia National Laboratories has a long history of significant contributions to the high-performance computing community and industry. Our innovative computer architectures allowed the United States to become the first to break the teraFLOP barrier, propelling us to the international spotlight. Our advanced simulation and modeling capabilities have been integral in high-consequence US operations such as Operation Burnt Frost. Strong partnerships with industry leaders, such as Cray, Inc. and Goodyear, have enabled them to leverage our high performance computing (HPC) capabilities to gain a tremendous competitive edge in the marketplace. As part of our continuing commitment to providing modern computing infrastructure and systems in support of Sandia missions, we made a major investment in expanding Building 725 to serve as the new home of HPC systems at Sandia. Work is expected to be completed in 2018 and will result in a modern facility of approximately 15,000 square feet of computer center space. The facility will be ready to house the newest National Nuclear Security Administration/Advanced Simulation and Computing (NNSA/ASC) Prototype platform being acquired by Sandia, with delivery in late 2019 or early 2020. This new system will enable continuing advances by Sandia science and engineering staff in the areas of operating system R&D, operation cost effectiveness (power and innovative cooling technologies), user environment and application code performance.
HPC in a HEP lab: lessons learned from setting up cost-effective HPC clusters
NASA Astrophysics Data System (ADS)
Husejko, Michal; Agtzidis, Ioannis; Baehler, Pierre; Dul, Tadeusz; Evans, John; Himyr, Nils; Meinhard, Helge
2015-12-01
In this paper we present our findings gathered during the evaluation and testing of Windows Server High-Performance Computing (Windows HPC) in view of potentially using it as a production HPC system for engineering applications. The Windows HPC package, an extension of Microsoft's Windows Server product, provides all essential interfaces, utilities and management functionality for creating, operating and monitoring a Windows-based HPC cluster infrastructure. The evaluation and test phase was focused on verifying the functionalities of Windows HPC, its performance, support of commercial tools and the integration with the users' work environment. We describe constraints imposed by the way the CERN Data Centre is operated, licensing for engineering tools, and scalability and behaviour of the HPC engineering applications used at CERN. We will present an initial set of requirements, which were created based on the above constraints and requests from the CERN engineering user community. We will explain how we have configured Windows HPC clusters to provide job scheduling functionalities required to support the CERN engineering user community, quality of service, user- and project-based priorities, and fair access to limited resources. Finally, we will present several performance tests we carried out to verify Windows HPC performance and scalability.
High-Performance Computing and Visualization | Energy Systems Integration
Facility | NREL. High-Performance Computing and Visualization: high-performance computing (HPC) and visualization at NREL propel technology innovation. Capabilities: High-Performance Computing. NREL is home to Peregrine, the largest high-performance computing system
NASA Astrophysics Data System (ADS)
Xue, Bo; Mao, Bingjing; Chen, Xiaomei; Ni, Guoqiang
2010-11-01
This paper presents a configurable distributed high-performance computing (HPC) framework for TDI-CCD imaging simulation. It uses the strategy pattern to adapt multiple algorithms, helping to decrease simulation time at low expense. Imaging simulation for a TDI-CCD mounted on a satellite contains four processes: 1) degradation due to the atmosphere, 2) degradation due to the optical system, 3) degradation due to the TDI-CCD electronics and the re-sampling process, and 4) data integration. Processes 1) to 3) use diverse data-intensive algorithms such as FFT, convolution and Lagrange interpolation, which require powerful CPUs. Even using an Intel Xeon X5550 processor, a conventional serial method takes more than 30 hours for a simulation whose result image size is 1500 x 1462. A literature study found no mature distributed HPC framework in this field. Here we developed a distributed computing framework for TDI-CCD imaging simulation, which is based on WCF [1], uses a client/server (C/S) architecture and harnesses free CPU resources on the LAN. The server pushes the tasks of processes 1) to 3) to this free computing capacity, ultimately delivering HPC at low cost. In a computing experiment with 4 symmetric nodes and 1 server, this framework reduced simulation time by about 74%; adding more asymmetric nodes to the computing network decreased the time further. In conclusion, this framework can provide essentially unlimited computation capacity provided that the network and the task management server are adequate, and it offers a new HPC solution for TDI-CCD imaging simulation and similar applications.
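A minimal Python sketch of the strategy-pattern idea follows; the framework itself is WCF-based, and the class names and the toy FFT-based atmospheric blur below are purely illustrative assumptions.

```python
# Hedged sketch of the strategy pattern described above: each degradation
# stage is a swappable strategy, so the pipeline can be configured with
# different algorithms without changing the pipeline code. All class and
# function names are illustrative, not those of the actual framework.
from abc import ABC, abstractmethod
import numpy as np

class DegradationStrategy(ABC):
    @abstractmethod
    def apply(self, image: np.ndarray) -> np.ndarray: ...

class GaussianAtmosphere(DegradationStrategy):
    """Very rough stand-in for atmospheric blur via FFT-based filtering."""
    def __init__(self, sigma: float = 2.0):
        self.sigma = sigma
    def apply(self, image: np.ndarray) -> np.ndarray:
        ny, nx = image.shape
        fy = np.fft.fftfreq(ny)[:, None]
        fx = np.fft.fftfreq(nx)[None, :]
        mtf = np.exp(-2.0 * (np.pi * self.sigma) ** 2 * (fx ** 2 + fy ** 2))
        return np.real(np.fft.ifft2(np.fft.fft2(image) * mtf))

class IdentityOptics(DegradationStrategy):
    """Placeholder for the optical-system degradation stage."""
    def apply(self, image: np.ndarray) -> np.ndarray:
        return image

class ImagingPipeline:
    def __init__(self, stages: list):
        self.stages = stages          # ordered degradation strategies
    def run(self, image: np.ndarray) -> np.ndarray:
        for stage in self.stages:
            image = stage.apply(image)
        return image

if __name__ == "__main__":
    scene = np.random.default_rng(0).random((256, 256))
    pipeline = ImagingPipeline([GaussianAtmosphere(sigma=1.5), IdentityOptics()])
    degraded = pipeline.run(scene)
    print(degraded.shape, float(degraded.mean()))
```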
Cockrell, Robert Chase; Christley, Scott; Chang, Eugene; An, Gary
2015-01-01
Perhaps the greatest challenge currently facing the biomedical research community is the ability to integrate highly detailed cellular and molecular mechanisms to represent clinical disease states as a pathway to engineer effective therapeutics. This is particularly evident in the representation of organ-level pathophysiology in terms of abnormal tissue structure, which, through histology, remains a mainstay in disease diagnosis and staging. As such, being able to generate anatomic scale simulations is a highly desirable goal. While computational limitations have previously constrained the size and scope of multi-scale computational models, advances in the capacity and availability of high-performance computing (HPC) resources have greatly expanded the ability of computational models of biological systems to achieve anatomic, clinically relevant scale. Diseases of the intestinal tract are exemplary examples of pathophysiological processes that manifest at multiple scales of spatial resolution, with structural abnormalities present at the microscopic, macroscopic and organ levels. In this paper, we describe a novel, massively parallel computational model of the gut, the Spatially Explicit General-purpose Model of Enteric Tissue_HPC (SEGMEnT_HPC), which extends an existing model of the gut epithelium, SEGMEnT, in order to create cell-for-cell anatomic scale simulations. We present an example implementation of SEGMEnT_HPC that simulates the pathogenesis of ileal pouchitis, an important clinical entity that affects patients following remedial surgery for ulcerative colitis. PMID:25806784
birgHPC: creating instant computing clusters for bioinformatics and molecular dynamics.
Chew, Teong Han; Joyce-Tan, Kwee Hong; Akma, Farizuwana; Shamsir, Mohd Shahir
2011-05-01
birgHPC, a bootable Linux Live CD, has been developed to create high-performance clusters for bioinformatics and molecular dynamics studies using any Local Area Network (LAN)-networked computers. birgHPC features automated hardware and slots detection and provides a simple job submission interface. The latest versions of GROMACS, NAMD, mpiBLAST and ClustalW-MPI can be run in parallel by simply booting the birgHPC CD or flash drive from the head node, which immediately positions the rest of the PCs on the network as computing nodes. Thus, a temporary, affordable, scalable and high-performance computing environment can be built by non-computing-based researchers using low-cost commodity hardware. The birgHPC Live CD and relevant user guide are available for free at http://birg1.fbb.utm.my/birghpc.
NASA Astrophysics Data System (ADS)
Valasek, Lukas; Glasa, Jan
2017-12-01
Current fire simulation systems are capable of exploiting the advantages of available high-performance computing (HPC) platforms to model fires efficiently in parallel. In this paper, the efficiency of a corridor fire simulation on an HPC cluster is discussed. The parallel MPI version of the Fire Dynamics Simulator is used to test the efficiency of selected strategies for allocating the cluster's computational resources when a larger number of computational cores is used. Simulation results indicate that if the number of cores used is not a multiple of the number of cores per cluster node, there are allocation strategies which provide more efficient calculations.
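A small illustration of the allocation point, with hypothetical core counts: when the requested core count is not a multiple of the per-node core count, the last node is only partially filled, and how ranks are placed then matters.

```python
# Small illustration of the allocation observation above: when the requested
# core count is not a multiple of the cores per node, the last node is only
# partially filled. Core counts here are hypothetical.
def allocation(requested_cores: int, cores_per_node: int):
    full_nodes, remainder = divmod(requested_cores, cores_per_node)
    nodes = full_nodes + (1 if remainder else 0)
    return {"nodes": nodes,
            "cores_on_last_node": remainder or cores_per_node,
            "idle_cores": nodes * cores_per_node - requested_cores}

if __name__ == "__main__":
    for req in (40, 50, 60):                 # cluster nodes with 20 cores each
        print(req, allocation(req, cores_per_node=20))
    # 40 and 60 fill whole nodes; 50 leaves 10 cores idle on the third node.
```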
Performance Analysis Tool for HPC and Big Data Applications on Scientific Clusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoo, Wucherl; Koo, Michelle; Cao, Yu
Big data is prevalent in HPC. Many HPC projects rely on complex workflows to analyze terabytes or petabytes of data. These workflows often require running over thousands of CPU cores and performing simultaneous data accesses, data movements, and computation. It is challenging to analyze the performance involving terabytes or petabytes of workflow data or measurement data of the executions, from complex workflows over a large number of nodes and multiple parallel task executions. To help identify performance bottlenecks or debug the performance issues in large-scale scientific applications and scientific clusters, we have developed a performance analysis framework, using state-of-the-art open-source big data processing tools. Our tool can ingest system logs and application performance measurements to extract key performance features, and apply the most sophisticated statistical tools and data mining methods on the performance data. It utilizes an efficient data processing engine to allow users to interactively analyze a large amount of different types of logs and measurements. To illustrate the functionality of the big data analysis framework, we conduct case studies on the workflows from an astronomy project known as the Palomar Transient Factory (PTF) and the job logs from the genome analysis scientific cluster. Our study processed many terabytes of system logs and application performance measurements collected on the HPC systems at NERSC. The implementation of our tool is generic enough to be used for analyzing the performance of other HPC systems and Big Data workflows.
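A hedged sketch of the kind of log ingestion and feature extraction such a framework performs, using pandas on an invented miniature log; the real tool operates on terabytes of logs and measurements, and the column names below are assumptions.

```python
# Hedged sketch: ingest job-log records, derive per-job features (duration,
# I/O volume), and flag statistical outliers. The log format and column names
# are invented for illustration only.
import io
import pandas as pd

RAW_LOG = io.StringIO("""job_id,start,end,read_bytes,write_bytes,nodes
101,2016-03-01 10:00:00,2016-03-01 10:42:00,8.1e11,2.4e11,32
102,2016-03-01 10:05:00,2016-03-01 13:55:00,9.0e12,3.1e12,128
103,2016-03-01 11:00:00,2016-03-01 11:09:00,4.0e10,1.0e10,8
""")

logs = pd.read_csv(RAW_LOG, parse_dates=["start", "end"])
logs["duration_s"] = (logs["end"] - logs["start"]).dt.total_seconds()
logs["io_gib"] = (logs["read_bytes"] + logs["write_bytes"]) / 2**30
logs["io_gib_per_node_hour"] = logs["io_gib"] / (logs["nodes"] * logs["duration_s"] / 3600)

# Flag jobs whose I/O intensity is far from the median (simple robust check).
median = logs["io_gib_per_node_hour"].median()
mad = (logs["io_gib_per_node_hour"] - median).abs().median() or 1.0
logs["io_outlier"] = (logs["io_gib_per_node_hour"] - median).abs() > 3 * mad

print(logs[["job_id", "duration_s", "io_gib", "io_gib_per_node_hour", "io_outlier"]])
```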
2013 R&D 100 Award: "Miniapps" Bolster High Performance Computing
Belak, Jim; Richards, David
2018-06-12
Two Livermore computer scientists served on a Sandia National Laboratories-led team that developed Mantevo Suite 1.0, the first integrated suite of small software programs, also called "miniapps," to be made available to the high performance computing (HPC) community. These miniapps facilitate the development of new HPC systems and the applications that run on them. Miniapps (miniature applications) serve as stripped down surrogates for complex, full-scale applications that can require a great deal of time and effort to port to a new HPC system because they often consist of hundreds of thousands of lines of code. The miniapps are a prototype that contains some or all of the essentials of the real application but with many fewer lines of code, making the miniapp more versatile for experimentation. This allows researchers to more rapidly explore options and optimize system design, greatly improving the chances the full-scale application will perform successfully. These miniapps have become essential tools for exploring complex design spaces because they can reliably predict the performance of full applications.
An Application-Based Performance Evaluation of NASA's Nebula Cloud Computing Platform
NASA Technical Reports Server (NTRS)
Saini, Subhash; Heistand, Steve; Jin, Haoqiang; Chang, Johnny; Hood, Robert T.; Mehrotra, Piyush; Biswas, Rupak
2012-01-01
The high performance computing (HPC) community has shown tremendous interest in exploring cloud computing as it promises great potential. In this paper, we examine the feasibility, performance, and scalability of production-quality scientific and engineering applications of interest to NASA on NASA's cloud computing platform, called Nebula, hosted at Ames Research Center. This work represents a comprehensive evaluation of Nebula using NUTTCP, HPCC, NPB, I/O, and MPI function benchmarks as well as four applications representative of the NASA HPC workload. Specifically, we compare Nebula performance on some of these benchmarks and applications to that of NASA's Pleiades supercomputer, a traditional HPC system. We also investigate the impact of virtIO and jumbo frames on interconnect performance. Overall results indicate that on Nebula (i) virtIO and jumbo frames improve network bandwidth by a factor of 5x, (ii) there is a significant virtualization layer overhead of about 10% to 25%, (iii) write performance is lower by a factor of 25x, (iv) latency for short MPI messages is very high, and (v) overall performance is 15% to 48% lower than that on Pleiades for NASA HPC applications. We also comment on the usability of the cloud platform.
NASA Astrophysics Data System (ADS)
Myre, Joseph M.
Heterogeneous computing systems have recently come to the forefront of the High-Performance Computing (HPC) community's interest. HPC computer systems that incorporate special purpose accelerators, such as Graphics Processing Units (GPUs), are said to be heterogeneous. Large scale heterogeneous computing systems have consistently ranked highly on the Top500 list since the beginning of the heterogeneous computing trend. By using heterogeneous computing systems that consist of both general purpose processors and special-purpose accelerators, the speed and problem size of many simulations could be dramatically increased. Ultimately this results in enhanced simulation capabilities that allows, in some cases for the first time, the execution of parameter space and uncertainty analyses, model optimizations, and other inverse modeling techniques that are critical for scientific discovery and engineering analysis. However, simplifying the usage and optimization of codes for heterogeneous computing systems remains a challenge. This is particularly true for scientists and engineers for whom understanding HPC architectures and undertaking performance analysis may not be primary research objectives. To enable scientists and engineers to remain focused on their primary research objectives, a modular environment for geophysical inversion and run-time autotuning on heterogeneous computing systems is presented. This environment is composed of three major components: 1) CUSH---a framework for reducing the complexity of programming heterogeneous computer systems, 2) geophysical inversion routines which can be used to characterize physical systems, and 3) run-time autotuning routines designed to determine configurations of heterogeneous computing systems in an attempt to maximize the performance of scientific and engineering codes. Using three case studies, a lattice-Boltzmann method, a non-negative least squares inversion, and a finite-difference fluid flow method, it is shown that this environment provides scientists and engineers with means to reduce the programmatic complexity of their applications, to perform geophysical inversions for characterizing physical systems, and to determine high-performing run-time configurations of heterogeneous computing systems using a run-time autotuner.
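As a toy illustration of run-time autotuning (not CUSH itself), the sketch below times a blocked matrix multiply for several candidate block sizes and keeps the fastest; a real autotuner for heterogeneous systems would also sweep device choices and kernel launch parameters.

```python
# Minimal sketch of the run-time autotuning idea: time a kernel under several
# candidate configurations and keep the fastest. The "kernel" here is a NumPy
# blocked matrix multiply and the tunable parameter is a block size.
import time
import numpy as np

def blocked_matmul(a: np.ndarray, b: np.ndarray, block: int) -> np.ndarray:
    n = a.shape[0]
    c = np.zeros_like(a)
    for i in range(0, n, block):
        for j in range(0, n, block):
            for k in range(0, n, block):
                c[i:i+block, j:j+block] += a[i:i+block, k:k+block] @ b[k:k+block, j:j+block]
    return c

def autotune(candidates, n=512, repeats=3):
    rng = np.random.default_rng(0)
    a, b = rng.random((n, n)), rng.random((n, n))
    best = None
    for block in candidates:
        t0 = time.perf_counter()
        for _ in range(repeats):
            blocked_matmul(a, b, block)
        elapsed = (time.perf_counter() - t0) / repeats
        print(f"block={block:4d}  {elapsed*1e3:8.1f} ms")
        if best is None or elapsed < best[1]:
            best = (block, elapsed)
    return best

if __name__ == "__main__":
    block, secs = autotune([32, 64, 128, 256])
    print(f"selected block size: {block} ({secs*1e3:.1f} ms)")
```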
High-performance computing — an overview
NASA Astrophysics Data System (ADS)
Marksteiner, Peter
1996-08-01
An overview of high-performance computing (HPC) is given. Different types of computer architectures used in HPC are discussed: vector supercomputers, high-performance RISC processors, various parallel computers like symmetric multiprocessors, workstation clusters, massively parallel processors. Software tools and programming techniques used in HPC are reviewed: vectorizing compilers, optimization and vector tuning, optimization for RISC processors; parallel programming techniques like shared-memory parallelism, message passing and data parallelism; and numerical libraries.
Running Jobs on the Peregrine System | High-Performance Computing | NREL
Information on running jobs on the Peregrine high-performance computing (HPC) system: running different types of jobs, batch job scheduling policies (queue names, limits, etc.), requesting different node types, and sample batch scripts.
High Performance Proactive Digital Forensics
NASA Astrophysics Data System (ADS)
Alharbi, Soltan; Moa, Belaid; Weber-Jahnke, Jens; Traore, Issa
2012-10-01
With the increase in the number of digital crimes and in their sophistication, High Performance Computing (HPC) is becoming a must in Digital Forensics (DF). According to the FBI annual report, the size of data processed during the 2010 fiscal year reached 3,086 TB (compared to 2,334 TB in 2009), and the number of agencies that requested Regional Computer Forensics Laboratory assistance increased from 689 in 2009 to 722 in 2010. Since most investigation tools are both I/O and CPU bound, next-generation DF tools are required to be distributed and to offer HPC capabilities. The need for HPC is even more evident in investigating crimes on clouds or when proactive DF analysis and on-site investigation, requiring semi-real-time processing, are performed. Although overcoming the performance challenge is a major goal in DF, as far as we know, there is almost no research on HPC-DF except for a few papers. As such, in this work, we extend our work on the need for a proactive system and present a high-performance automated proactive digital forensic system. The most expensive phase of the system, namely proactive analysis and detection, uses a parallel extension of the iterative z algorithm. It also implements new parallel information-based outlier detection algorithms to proactively and forensically handle suspicious activities. To analyse a large number of targets and events and continuously do so (to capture the dynamics of the system), we rely on a multi-resolution approach to explore the digital forensic space. A data set from the Honeynet Forensic Challenge in 2001 is used to evaluate the system from DF and HPC perspectives.
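A generic sketch of iterative z-score outlier detection follows; it is not the paper's "iterative z algorithm" or its parallel extension, and the event counts are hypothetical.

```python
# Generic sketch of iterative z-score outlier detection: repeatedly flag
# values whose z-score exceeds a threshold, remove them, and recompute the
# statistics until no new outliers appear.
import statistics

def iterative_z_outliers(values, threshold=3.0):
    flagged = set()
    while True:
        kept = [v for i, v in enumerate(values) if i not in flagged]
        if len(kept) < 2:
            break
        mean = statistics.fmean(kept)
        stdev = statistics.pstdev(kept) or 1.0
        new = {i for i, v in enumerate(values)
               if i not in flagged and abs(v - mean) / stdev > threshold}
        if not new:
            break
        flagged |= new
    return sorted(flagged)

if __name__ == "__main__":
    # Hypothetical per-host event counts from monitoring data; the spikes
    # stand in for suspicious activity worth forensic follow-up.
    counts = [12, 15, 11, 14, 13, 16, 210, 12, 15, 540, 13]
    # A slightly lower threshold is used here because the sample is tiny.
    print("outlier indices:", iterative_z_outliers(counts, threshold=2.5))
```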
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerber, Richard; Allcock, William; Beggio, Chris
2014-10-17
U.S. Department of Energy (DOE) High Performance Computing (HPC) facilities are on the verge of a paradigm shift in the way they deliver systems and services to science and engineering teams. Research projects are producing a wide variety of data at unprecedented scale and level of complexity, with community-specific services that are part of the data collection and analysis workflow. On June 18-19, 2014 representatives from six DOE HPC centers met in Oakland, CA at the DOE High Performance Operational Review (HPCOR) to discuss how they can best provide facilities and services to enable large-scale data-driven scientific discovery at the DOE national laboratories. The report contains findings from that review.
Enabling Diverse Software Stacks on Supercomputers using High Performance Virtual Clusters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Younge, Andrew J.; Pedretti, Kevin; Grant, Ryan
While large-scale simulations have been the hallmark of the High Performance Computing (HPC) community for decades, Large Scale Data Analytics (LSDA) workloads are gaining attention within the scientific community not only as a processing component to large HPC simulations, but also as standalone scientific tools for knowledge discovery. With the path towards Exascale, new HPC runtime systems are also emerging in a way that differs from classical distributed computing models. However, system software for such capabilities on the latest extreme-scale DOE supercomputers needs to be enhanced to more appropriately support these types of emerging software ecosystems. In this paper, we propose the use of Virtual Clusters on advanced supercomputing resources to enable systems to support not only HPC workloads, but also emerging big data stacks. Specifically, we have deployed the KVM hypervisor within Cray's Compute Node Linux on a XC-series supercomputer testbed. We also use libvirt and QEMU to manage and provision VMs directly on compute nodes, leveraging Ethernet-over-Aries network emulation. To our knowledge, this is the first known use of KVM on a true MPP supercomputer. We investigate the overhead of our solution using HPC benchmarks, both evaluating single-node performance as well as weak scaling of a 32-node virtual cluster. Overall, we find single-node performance of our solution using KVM on a Cray is very efficient, with near-native performance. However, overhead increases by up to 20% as virtual cluster size increases, due to limitations of the Ethernet-over-Aries bridged network. Furthermore, we deploy Apache Spark with large data analysis workloads in a Virtual Cluster, effectively demonstrating how diverse software ecosystems can be supported by High Performance Virtual Clusters.
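For illustration only, a minimal libvirt/Python sketch of starting a transient KVM guest on a compute node; the image path and VM sizing are hypothetical, and the Ethernet-over-Aries network emulation described in the paper is omitted. This is not the paper's provisioning code.

```python
# Minimal libvirt sketch of provisioning a KVM guest on a compute node, in the
# spirit of the virtual-cluster approach above. The image path and sizing are
# placeholders; real deployments also configure the cluster-specific network.
import libvirt

DOMAIN_XML = """
<domain type='kvm'>
  <name>vcluster-node0</name>
  <memory unit='GiB'>8</memory>
  <vcpu>8</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/scratch/images/vcluster.qcow2'/>  <!-- hypothetical image -->
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='default'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
"""

def launch_guest():
    conn = libvirt.open("qemu:///system")      # connect to the node-local hypervisor
    try:
        dom = conn.createXML(DOMAIN_XML, 0)    # define and start a transient guest
        print("started guest:", dom.name())
        for d in conn.listAllDomains():
            print("running:", d.name())
    finally:
        conn.close()

if __name__ == "__main__":
    launch_guest()
```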
The Convergence of High Performance Computing and Large Scale Data Analytics
NASA Astrophysics Data System (ADS)
Duffy, D.; Bowen, M. K.; Thompson, J. H.; Yang, C. P.; Hu, F.; Wills, B.
2015-12-01
As the combinations of remote sensing observations and model outputs have grown, scientists are increasingly burdened with both the necessity and complexity of large-scale data analysis. Scientists are increasingly applying traditional high performance computing (HPC) solutions to solve their "Big Data" problems. While this approach has the benefit of limiting data movement, the HPC system is not optimized to run analytics, which can create problems that permeate throughout the HPC environment. To solve these issues and to alleviate some of the strain on the HPC environment, the NASA Center for Climate Simulation (NCCS) has created the Advanced Data Analytics Platform (ADAPT), which combines both HPC and cloud technologies to create an agile system designed for analytics. Large, commonly used data sets are stored in this system in a write once/read many file system, such as Landsat, MODIS, MERRA, and NGA. High performance virtual machines are deployed and scaled according to the individual scientist's requirements specifically for data analysis. On the software side, the NCCS and GMU are working with emerging commercial technologies and applying them to structured, binary scientific data in order to expose the data in new ways. Native NetCDF data is being stored within a Hadoop Distributed File System (HDFS) enabling storage-proximal processing through MapReduce while continuing to provide accessibility of the data to traditional applications. Once the data is stored within HDFS, an additional indexing scheme is built on top of the data and placed into a relational database. This spatiotemporal index enables extremely fast mappings of queries to data locations to dramatically speed up analytics. These are some of the first steps toward a single unified platform that optimizes for both HPC and large-scale data analysis, and this presentation will elucidate the resulting and necessary exascale architectures required for future systems.
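A hedged sqlite sketch of the spatiotemporal-index idea described above: a relational table maps each stored granule (e.g., a NetCDF block in HDFS) to its time range and bounding box, so a query can be resolved to data locations before any heavy processing starts. Paths and the schema are invented for illustration.

```python
# Toy spatiotemporal index: map granules to time ranges and bounding boxes,
# then resolve a query to the matching storage paths.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE granule_index (
    hdfs_path TEXT, t_start TEXT, t_end TEXT,
    lat_min REAL, lat_max REAL, lon_min REAL, lon_max REAL)""")
conn.executemany("INSERT INTO granule_index VALUES (?,?,?,?,?,?,?)", [
    ("/data/merra/1980/01.nc", "1980-01-01", "1980-01-31", -90, 90, -180, 180),
    ("/data/modis/h09v05.hdf", "2015-06-01", "2015-06-08", 30, 40, -120, -110),
    ("/data/landsat/p042r034.tif", "2015-06-03", "2015-06-03", 34, 36, -119, -117),
])

# Which granules intersect a query box over southern California in June 2015?
rows = conn.execute("""
    SELECT hdfs_path FROM granule_index
    WHERE t_end >= '2015-06-01' AND t_start <= '2015-06-30'
      AND lat_max >= 33 AND lat_min <= 36
      AND lon_max >= -120 AND lon_min <= -116
""").fetchall()
print([r[0] for r in rows])
```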
Research | Computational Science | NREL
Research: NREL's computational science experts use advanced high-performance computing (HPC) technologies, thereby accelerating the transformation of our nation's energy system. Enabling High-Impact Research: NREL's computational science capabilities enable high-impact research. Some recent examples
ERIC Educational Resources Information Center
Fredette, Michelle
2012-01-01
"Rent or buy?" is a question people ask about everything from housing to textbooks. It is also a question universities must consider when it comes to high-performance computing (HPC). With the advent of Amazon's Elastic Compute Cloud (EC2), Microsoft Windows HPC Server, Rackspace's OpenStack, and other cloud-based services, researchers now have…
First experience with particle-in-cell plasma physics code on ARM-based HPC systems
NASA Astrophysics Data System (ADS)
Sáez, Xavier; Soba, Alejandro; Sánchez, Edilberto; Mantsinen, Mervi; Mateo, Sergi; Cela, José M.; Castejón, Francisco
2015-09-01
In this work, we will explore the feasibility of porting a Particle-in-cell code (EUTERPE) to an ARM multi-core platform from the Mont-Blanc project. The prototype used is based on a Samsung Exynos 5 system-on-chip with an integrated GPU. It is the first prototype that could be used for High-Performance Computing (HPC), since it supports double precision and parallel programming languages.
HPC enabled real-time remote processing of laparoscopic surgery
NASA Astrophysics Data System (ADS)
Ronaghi, Zahra; Sapra, Karan; Izard, Ryan; Duffy, Edward; Smith, Melissa C.; Wang, Kuang-Ching; Kwartowitz, David M.
2016-03-01
Laparoscopic surgery is a minimally invasive surgical technique. The benefit of small incisions has a disadvantage of limited visualization of subsurface tissues. Image-guided surgery (IGS) uses pre-operative and intra-operative images to map subsurface structures. One particular laparoscopic system is the daVinci-si robotic surgical system. The video streams generate approximately 360 megabytes of data per second. Processing this large stream of data in real time on a bedside PC, in a single- or dual-node setup, has become challenging, and a high-performance computing (HPC) environment may not always be available at the point of care. To process this data on remote HPC clusters at the typical 30 frames per second rate, it is required that each 11.9 MB video frame be processed by a server and returned within 1/30th of a second. We have implemented and compared the performance of compression, segmentation and registration algorithms on Clemson's Palmetto supercomputer using dual NVIDIA K40 GPUs per node. Our computing framework will also enable reliability using replication of computation. We will securely transfer the files to remote HPC clusters utilizing an OpenFlow-based network service, Steroid OpenFlow Service (SOS), which can increase the performance of large data transfers over long-distance and high-bandwidth networks. As a result, utilizing a high-speed OpenFlow-based network to access computing clusters with GPUs will improve surgical procedures by providing real-time medical image processing and laparoscopic data.
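A quick back-of-the-envelope check of the quoted numbers; the 10 Gb/s link is an assumed figure for illustration, not from the paper.

```python
# Back-of-the-envelope check: at roughly 360 MB/s of video and 30 frames per
# second, each frame is about 12 MB and must complete its round trip
# (transfer + processing) within ~33 ms.
stream_mb_per_s = 360.0
fps = 30
frame_mb = stream_mb_per_s / fps              # ~12 MB per frame (11.9 MB quoted)
budget_ms = 1000.0 / fps                      # ~33.3 ms per frame
link_gbps = 10.0                              # hypothetical network bandwidth
transfer_ms = frame_mb * 8 / (link_gbps * 1000) * 1000
print(f"frame ~ {frame_mb:.1f} MB, budget ~ {budget_ms:.1f} ms, "
      f"transfer on a {link_gbps:.0f} Gb/s link ~ {transfer_ms:.1f} ms")
```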
Multi-Core Processor Memory Contention Benchmark Analysis Case Study
NASA Technical Reports Server (NTRS)
Simon, Tyler; McGalliard, James
2009-01-01
Multi-core processors dominate current mainframe, server, and high performance computing (HPC) systems. This paper provides synthetic kernel and natural benchmark results from an HPC system at the NASA Goddard Space Flight Center that illustrate the performance impacts of multi-core (dual- and quad-core) vs. single core processor systems. Analysis of processor design, application source code, and synthetic and natural test results all indicate that multi-core processors can suffer from significant memory subsystem contention compared to similar single-core processors.
NASA Technical Reports Server (NTRS)
Chaudhary, Aashish; Votava, Petr; Nemani, Ramakrishna R.; Michaelis, Andrew; Kotfila, Chris
2016-01-01
We are developing capabilities for an integrated petabyte-scale Earth science collaborative analysis and visualization environment. The ultimate goal is to deploy this environment within the NASA Earth Exchange (NEX) and OpenNEX in order to enhance existing science data production pipelines in both high-performance computing (HPC) and cloud environments. Bridging of HPC and cloud is a fairly new concept under active research, and this system significantly enhances the ability of the scientific community to accelerate analysis and visualization of Earth science data from NASA missions, model outputs and other sources. We have developed a web-based system that seamlessly interfaces with both high-performance computing (HPC) and cloud environments, providing tools that enable science teams to develop and deploy large-scale analysis, visualization and QA pipelines of both the production process and the data products, and enable sharing results with the community. Our project is developed in several stages, each addressing a separate challenge: workflow integration, parallel execution in either cloud or HPC environments, and big-data analytics or visualization. This work benefits a number of existing and upcoming projects supported by NEX, such as the Web Enabled Landsat Data (WELD), where we are developing a new QA pipeline for the 25PB system.
Analytics and Visualization Pipelines for Big Data on the NASA Earth Exchange (NEX) and OpenNEX
NASA Astrophysics Data System (ADS)
Chaudhary, A.; Votava, P.; Nemani, R. R.; Michaelis, A.; Kotfila, C.
2016-12-01
We are developing capabilities for an integrated petabyte-scale Earth science collaborative analysis and visualization environment. The ultimate goal is to deploy this environment within the NASA Earth Exchange (NEX) and OpenNEX in order to enhance existing science data production pipelines in both high-performance computing (HPC) and cloud environments. Bridging of HPC and cloud is a fairly new concept under active research, and this system significantly enhances the ability of the scientific community to accelerate analysis and visualization of Earth science data from NASA missions, model outputs and other sources. We have developed a web-based system that seamlessly interfaces with both high-performance computing (HPC) and cloud environments, providing tools that enable science teams to develop and deploy large-scale analysis, visualization and QA pipelines of both the production process and the data products, and enable sharing results with the community. Our project is developed in several stages, each addressing a separate challenge: workflow integration, parallel execution in either cloud or HPC environments, and big-data analytics or visualization. This work benefits a number of existing and upcoming projects supported by NEX, such as the Web Enabled Landsat Data (WELD), where we are developing a new QA pipeline for the 25PB system.
GraphMeta: Managing HPC Rich Metadata in Graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dai, Dong; Chen, Yong; Carns, Philip
High-performance computing (HPC) systems face increasingly critical metadata management challenges, especially in the approaching exascale era. These challenges arise not only from exploding metadata volumes, but also from increasingly diverse metadata, which contains data provenance and arbitrary user-defined attributes in addition to traditional POSIX metadata. This ‘rich’ metadata is becoming critical to supporting advanced data management functionality such as data auditing and validation. In our prior work, we identified a graph-based model as a promising solution to uniformly manage HPC rich metadata due to its flexibility and generality. However, at the same time, graph-based HPC rich metadata management also introduces significant challenges to the underlying infrastructure. In this study, we first identify the challenges on the underlying infrastructure to support scalable, high-performance rich metadata management. Based on that, we introduce GraphMeta, a graph-based engine designed for this use case. It achieves performance scalability by introducing a new graph partitioning algorithm and a write-optimal storage engine. We evaluate GraphMeta under both synthetic and real HPC metadata workloads, compare it with other approaches, and demonstrate its advantages in terms of efficiency and usability for rich metadata management in HPC systems.
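An illustrative property-graph sketch (using networkx, not GraphMeta) of how rich metadata such as provenance can be modeled and queried; all entity names are hypothetical.

```python
# Illustrative sketch of modeling rich HPC metadata as a property graph:
# jobs, users, and files are vertices; edges carry provenance relations such
# as "submitted", "read", and "wrote".
import networkx as nx

g = nx.MultiDiGraph()
g.add_node("user:alice", kind="user")
g.add_node("job:98321", kind="job", nodes=512, app="climate_model")
g.add_node("file:/proj/run42/input.nc", kind="file", size=2**34)
g.add_node("file:/proj/run42/output.nc", kind="file", size=2**36)

g.add_edge("user:alice", "job:98321", relation="submitted")
g.add_edge("job:98321", "file:/proj/run42/input.nc", relation="read")
g.add_edge("job:98321", "file:/proj/run42/output.nc", relation="wrote")

# Provenance query: which job and user produced this output file?
target = "file:/proj/run42/output.nc"
for job, _, data in g.in_edges(target, data=True):
    if data["relation"] == "wrote":
        submitters = [u for u, _, d in g.in_edges(job, data=True)
                      if d["relation"] == "submitted"]
        print(f"{target} written by {job}, submitted by {submitters}")
```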
Linux VPN Set Up | High-Performance Computing | NREL
There are two methods to connect to NREL's HPC systems via the HPC VPN: one using a simple command line, and a second using NetworkManager. In the example settings, use your own UserID in place of the one shown; the connection name is hpcvpn and the gateway is hpcvpn.nrel.gov.
WinHPC System Policies | High-Performance Computing | NREL
Jobs requiring high CPU utilization or large amounts of memory should be run on the worker nodes. WinHPC02 is not ... associated data are removed when NREL worker status is discontinued. Users should make arrangements to save ... other users. Licenses are returned to the license pool when other users close the application or after
Clearing your Desk! Software and Data Services for Collaborative Web Based GIS Analysis
NASA Astrophysics Data System (ADS)
Tarboton, D. G.; Idaszak, R.; Horsburgh, J. S.; Ames, D. P.; Goodall, J. L.; Band, L. E.; Merwade, V.; Couch, A.; Hooper, R. P.; Maidment, D. R.; Dash, P. K.; Stealey, M.; Yi, H.; Gan, T.; Gichamo, T.; Yildirim, A. A.; Liu, Y.
2015-12-01
Can your desktop computer crunch the large GIS datasets that are becoming increasingly common across the geosciences? Do you have access to or the know-how to take advantage of advanced high performance computing (HPC) capability? Web based cyberinfrastructure takes work off your desk or laptop computer and onto infrastructure or "cloud" based data and processing servers. This talk will describe the HydroShare collaborative environment and web based services being developed to support the sharing and processing of hydrologic data and models. HydroShare supports the upload, storage, and sharing of a broad class of hydrologic data including time series, geographic features and raster datasets, multidimensional space-time data, and other structured collections of data. Web service tools and a Python client library provide researchers with access to HPC resources without requiring them to become HPC experts. This reduces the time and effort spent in finding and organizing the data required to prepare the inputs for hydrologic models and facilitates the management of online data and execution of models on HPC systems. This presentation will illustrate the use of web based data and computation services from both the browser and desktop client software. These web-based services implement the Terrain Analysis Using Digital Elevation Model (TauDEM) tools for watershed delineation, generation of hydrology-based terrain information, and preparation of hydrologic model inputs. They allow users to develop scripts on their desktop computer that call analytical functions that are executed completely in the cloud, on HPC resources using input datasets stored in the cloud, without installing specialized software, learning how to use HPC, or transferring large datasets back to the user's desktop. These cases serve as examples for how this approach can be extended to other models to enhance the use of web and data services in the geosciences.
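A hedged sketch of the submit-and-poll pattern such web services enable from a desktop Python script; the endpoint URL, parameter names, and response fields below are placeholders, not the actual HydroShare/TauDEM service API.

```python
# Sketch of calling a remote watershed-delineation service from a desktop
# script: submit the job, poll for completion, and retrieve result links.
# All endpoints, parameters, and fields are illustrative placeholders.
import time
import requests

SERVICE = "https://example.org/taudem-api"      # placeholder endpoint

def delineate_watershed(dem_resource_id: str, outlet_lat: float, outlet_lon: float):
    # Submit the job; the DEM already lives in cloud storage, so only IDs move.
    r = requests.post(f"{SERVICE}/delineate",
                      json={"dem": dem_resource_id,
                            "outlet": [outlet_lat, outlet_lon]},
                      timeout=30)
    r.raise_for_status()
    job_id = r.json()["job_id"]

    # Poll until the HPC-side computation finishes, then return result links.
    while True:
        status = requests.get(f"{SERVICE}/jobs/{job_id}", timeout=30).json()
        if status["state"] in ("finished", "failed"):
            return status
        time.sleep(10)

if __name__ == "__main__":
    result = delineate_watershed("logan_river_dem", 41.74, -111.78)
    print(result["state"], result.get("outputs"))
```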
PREFACE: High Performance Computing Symposium 2011
NASA Astrophysics Data System (ADS)
Talon, Suzanne; Mousseau, Normand; Peslherbe, Gilles; Bertrand, François; Gauthier, Pierre; Kadem, Lyes; Moitessier, Nicolas; Rouleau, Guy; Wittig, Rod
2012-02-01
HPCS (High Performance Computing Symposium) is a multidisciplinary conference that focuses on research involving High Performance Computing and its application. Attended by Canadian and international experts and renowned researchers in the sciences, all areas of engineering, the applied sciences, medicine and life sciences, mathematics, the humanities and social sciences, it is Canada's pre-eminent forum for HPC. The 25th edition was held in Montréal, at the Université du Québec à Montréal, from 15-17 June and focused on HPC in Medical Science. The conference was preceded by tutorials held at Concordia University, where 56 participants learned about HPC best practices, GPU computing, parallel computing, debugging and a number of high-level languages. 274 participants from six countries attended the main conference, which involved 11 invited and 37 contributed oral presentations, 33 posters, and an exhibit hall with 16 booths from our sponsors. The work that follows is a collection of papers presented at the conference covering HPC topics ranging from computer science to bioinformatics. They are divided here into four sections: HPC in Engineering, Physics and Materials Science, HPC in Medical Science, HPC Enabling to Explore our World and New Algorithms for HPC. We would once more like to thank the participants and invited speakers, the members of the Scientific Committee, the referees who spent time reviewing the papers and our invaluable sponsors. To hear the invited talks and learn about 25 years of HPC development in Canada visit the Symposium website: http://2011.hpcs.ca/lang/en/conference/keynote-speakers/ Enjoy the excellent papers that follow, and we look forward to seeing you in Vancouver for HPCS 2012! Gilles Peslherbe Chair of the Scientific Committee Normand Mousseau Co-Chair of HPCS 2011 Suzanne Talon Chair of the Organizing Committee UQAM Sponsors The PDF also contains photographs from the conference banquet.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Unat, Didem; Dubey, Anshu; Hoefler, Torsten
The cost of data movement has always been an important concern in high performance computing (HPC) systems. It has now become the dominant factor in terms of both energy consumption and performance. Support for expression of data locality has been explored in the past, but those efforts have had only modest success in being adopted in HPC applications for various reasons. However, with the increasing complexity of the memory hierarchy and higher parallelism in emerging HPC systems, locality management has acquired a new urgency. Developers can no longer limit themselves to low-level solutions and ignore the potential for productivity and performance portability obtained by using locality abstractions. Fortunately, the trend emerging in recent literature on the topic alleviates many of the concerns that got in the way of their adoption by application developers. Data locality abstractions are available in the forms of libraries, data structures, languages and runtime systems; a common theme is increasing productivity without sacrificing performance. Furthermore, this paper examines these trends and identifies commonalities that can combine various locality concepts to develop a comprehensive approach to expressing and managing data locality on future large-scale high-performance computing systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moreland, Kenneth; Sewell, Christopher; Usher, William
Here, one of the most critical challenges for high-performance computing (HPC) scientific visualization is execution on massively threaded processors. Of the many fundamental changes we are seeing in HPC systems, one of the most profound is a reliance on new processor types optimized for execution bandwidth over latency hiding. Our current production scientific visualization software is not designed for these new types of architectures. To address this issue, the VTK-m framework serves as a container for algorithms, provides flexible data representation, and simplifies the design of visualization algorithms on new and future computer architecture.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moreland, Kenneth; Sewell, Christopher; Usher, William
Execution on massively threaded processors is one of the most critical challenges for high-performance computing (HPC) scientific visualization. Of the many fundamental changes we are seeing in HPC systems, one of the most profound is a reliance on new processor types optimized for execution bandwidth over latency hiding. Moreover, our current production scientific visualization software is not designed for these new types of architectures. In order to address this issue, the VTK-m framework serves as a container for algorithms, provides flexible data representation, and simplifies the design of visualization algorithms on new and future computer architecture.
Computational Science News | Computational Science | NREL
-Cooled High-Performance Computing Technology at the ESIF (February 28, 2018). NREL Launches New Website for High-Performance Computing System Users: The National Renewable Energy Laboratory (NREL) Computational Science Center has launched a revamped website for users of the lab's high-performance computing (HPC) systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Venkata, Manjunath Gorentla; Aderholdt, William F
The pre-exascale systems are expected to have a significant amount of hierarchical and heterogeneous on-node memory, and this trend of system architecture in extreme-scale systems is expected to continue into the exascale era. Along with hierarchical-heterogeneous memory, the system typically has a high-performing network and a compute accelerator. This system architecture is not only effective for running traditional High Performance Computing (HPC) applications (Big-Compute), but also for running data-intensive HPC applications and Big-Data applications. As a consequence, there is a growing desire to have a single system serve the needs of both Big-Compute and Big-Data applications. Though the system architecture supports the convergence of the Big-Compute and Big-Data, the programming models and software layer have yet to evolve to support either hierarchical-heterogeneous memory systems or the convergence. This work develops a programming abstraction to address this problem. The programming abstraction is implemented as a software library and runs on pre-exascale and exascale systems supporting current and emerging system architectures. Using distributed data structures as a central concept, it provides (1) a simple, usable, and portable abstraction for hierarchical-heterogeneous memory and (2) a unified programming abstraction for Big-Compute and Big-Data applications.
Implementation of Parallel Dynamic Simulation on Shared-Memory vs. Distributed-Memory Environments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Shuangshuang; Chen, Yousu; Wu, Di
2015-12-09
Power system dynamic simulation computes the system response to a sequence of large disturbances, such as sudden changes in generation or load, or a network short circuit followed by protective branch switching operation. It consists of a large set of differential and algebraic equations, which is computationally intensive and challenging to solve using a single-processor-based dynamic simulation solution. High-performance computing (HPC) based parallel computing is a very promising technology to speed up the computation and facilitate the simulation process. This paper presents two different parallel implementations of power grid dynamic simulation using Open Multi-processing (OpenMP) on a shared-memory platform, and Message Passing Interface (MPI) on distributed-memory clusters, respectively. The differences between the parallel simulation algorithms and architectures of the two HPC technologies are illustrated, and their performances for running parallel dynamic simulation are compared and demonstrated.
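A hedged mpi4py sketch of the distributed-memory (MPI) style of decomposition follows, with a toy model in place of the power-system equations; it is not the paper's implementation.

```python
# Hedged mpi4py sketch of a distributed-memory decomposition: each rank
# integrates its share of the generators for one time step, then exchanges a
# global coupling quantity. The "dynamics" here are a toy placeholder.
# Run with, e.g.:  mpirun -np 4 python dynsim_sketch.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

N_GEN, DT, STEPS = 1000, 0.01, 100
local = np.array_split(np.arange(N_GEN), size)[rank]    # this rank's generators
state = np.ones(local.size)                             # toy per-generator state

for _ in range(STEPS):
    # Local integration step (placeholder dynamics).
    state += DT * (-0.1 * state)
    # Network coupling needs a global quantity; Allreduce plays that role here.
    total = comm.allreduce(state.sum(), op=MPI.SUM)
    state += DT * 1e-4 * (total / N_GEN)

all_states = comm.gather(state, root=0)
if rank == 0:
    print("simulated", N_GEN, "generators on", size, "ranks;",
          "mean state:", float(np.concatenate(all_states).mean()))
```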
Resilience Design Patterns - A Structured Approach to Resilience at Extreme Scale (version 1.0)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hukerikar, Saurabh; Engelmann, Christian
Reliability is a serious concern for future extreme-scale high-performance computing (HPC) systems. Projections based on the current generation of HPC systems and technology roadmaps suggest very high fault rates in future systems. The errors resulting from these faults will propagate and generate various kinds of failures, which may result in outcomes ranging from result corruptions to catastrophic application crashes. Practical limits on power consumption in HPC systems will require future systems to embrace innovative architectures, increasing the levels of hardware and software complexities. The resilience challenge for extreme-scale HPC systems requires management of various hardware and software technologies that are capable of handling a broad set of fault models at accelerated fault rates. These techniques must seek to improve resilience at reasonable overheads to power consumption and performance. While the HPC community has developed various solutions, application-level as well as system-based solutions, the solution space of HPC resilience techniques remains fragmented. There are no formal methods and metrics to investigate and evaluate resilience holistically in HPC systems that consider impact scope, handling coverage, and performance and power efficiency across the system stack. Additionally, few of the current approaches are portable to newer architectures and software ecosystems, which are expected to be deployed on future systems. In this document, we develop a structured approach to the management of HPC resilience based on the concept of resilience-based design patterns. A design pattern is a general repeatable solution to a commonly occurring problem. We identify the commonly occurring problems and solutions used to deal with faults, errors and failures in HPC systems. The catalog of resilience design patterns provides designers with reusable design elements. We define a design framework that enhances our understanding of the important constraints and opportunities for solutions deployed at various layers of the system stack. The framework may be used to establish mechanisms and interfaces to coordinate flexible fault management across hardware and software components. The framework also enables optimization of the cost-benefit trade-offs among performance, resilience, and power consumption. The overall goal of this work is to enable a systematic methodology for the design and evaluation of resilience technologies in extreme-scale HPC systems that keep scientific applications running to a correct solution in a timely and cost-efficient manner in spite of frequent faults, errors, and failures of various types.
Divide and Conquer (DC) BLAST: fast and easy BLAST execution within HPC environments
Yim, Won Cheol; Cushman, John C.
2017-07-22
Bioinformatics is currently faced with very large-scale data sets that lead to computational jobs, especially sequence similarity searches, that can take absurdly long times to run. For example, the National Center for Biotechnology Information (NCBI) Basic Local Alignment Search Tool (BLAST and BLAST+) suite, which is by far the most widely used tool for rapid similarity searching among nucleic acid or amino acid sequences, is highly central processing unit (CPU) intensive. While the BLAST suite of programs perform searches very rapidly, they have the potential to be accelerated. In recent years, distributed computing environments have become more widely accessible and used due to the increasing availability of high-performance computing (HPC) systems. Therefore, simple solutions for data parallelization are needed to expedite BLAST and other sequence analysis tools. However, existing software for parallel sequence similarity searches often requires extensive computational experience and skill on the part of the user. In order to accelerate BLAST and other sequence analysis tools, Divide and Conquer BLAST (DCBLAST) was developed to perform NCBI BLAST searches within a cluster, grid, or HPC environment by using a query sequence distribution approach. Scaling from one (1) to 256 CPU cores resulted in significant improvements in processing speed. Thus, DCBLAST dramatically accelerates the execution of BLAST searches using a simple, accessible, robust, and parallel approach. DCBLAST works across multiple nodes automatically and it overcomes the speed limitation of single-node BLAST programs. DCBLAST can be used on any HPC system, can take advantage of hundreds of nodes, and has no output limitations. Thus, this freely available tool simplifies distributed computation pipelines to facilitate the rapid discovery of sequence similarities between very large data sets.
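A short Python sketch of the query-distribution idea (not DCBLAST's own code): split the query FASTA into chunks and emit one BLAST+ command per chunk for a scheduler to distribute. Paths and the chunk count are illustrative assumptions.

```python
# Sketch of query distribution: split a FASTA query file into roughly equal
# chunks and emit one blastn command per chunk, which a scheduler can run on
# separate nodes. Outputs are concatenated afterwards.
from pathlib import Path

def split_fasta(path: str, n_chunks: int, outdir: str = "chunks"):
    records, current = [], []
    for line in Path(path).read_text().splitlines():
        if line.startswith(">"):
            if current:
                records.append("\n".join(current))
            current = [line]
        elif line.strip():
            current.append(line)
    if current:
        records.append("\n".join(current))

    Path(outdir).mkdir(exist_ok=True)
    chunk_files = []
    for i in range(n_chunks):
        part = records[i::n_chunks]                 # round-robin assignment
        if not part:
            continue
        chunk = Path(outdir) / f"query_{i:03d}.fasta"
        chunk.write_text("\n".join(part) + "\n")
        chunk_files.append(chunk)
    return chunk_files

if __name__ == "__main__":
    # Tiny demo query file so the sketch runs end to end.
    Path("queries.fasta").write_text(
        ">seq1\nACGTACGTACGT\n>seq2\nTTGACCTTGACC\n>seq3\nGGGCCCAAATTT\n>seq4\nATATATCGCGCG\n")
    for chunk in split_fasta("queries.fasta", n_chunks=2):
        # One independent BLAST+ search per chunk (standard blastn options).
        print(f"blastn -query {chunk} -db nt -outfmt 6 -out {chunk.with_suffix('.tsv')}")
```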
Towards Portable Large-Scale Image Processing with High-Performance Computing.
Huo, Yuankai; Blaber, Justin; Damon, Stephen M; Boyd, Brian D; Bao, Shunxing; Parvathaneni, Prasanna; Noguera, Camilo Bermudez; Chaganti, Shikha; Nath, Vishwesh; Greer, Jasmine M; Lyu, Ilwoo; French, William R; Newton, Allen T; Rogers, Baxter P; Landman, Bennett A
2018-05-03
High-throughput, large-scale medical image computing demands tight integration of high-performance computing (HPC) infrastructure for data storage, job distribution, and image processing. The Vanderbilt University Institute for Imaging Science (VUIIS) Center for Computational Imaging (CCI) has constructed a large-scale image storage and processing infrastructure that is composed of (1) a large-scale image database using the eXtensible Neuroimaging Archive Toolkit (XNAT), (2) a content-aware job scheduling platform using the Distributed Automation for XNAT pipeline automation tool (DAX), and (3) a wide variety of encapsulated image processing pipelines called "spiders." The VUIIS CCI medical image data storage and processing infrastructure has housed and processed nearly half a million medical image volumes with the Vanderbilt Advanced Computing Center for Research and Education (ACCRE), the HPC facility at Vanderbilt University. The initial deployment was natively installed (i.e., direct installations on a bare-metal server) within the ACCRE hardware and software environments, which led to issues of portability and sustainability. First, it could be laborious to deploy the entire VUIIS CCI medical image data storage and processing infrastructure to another HPC center with varying hardware infrastructure, library availability, and software permission policies. Second, the spiders were not developed in an isolated manner, which has led to software dependency issues during system upgrades or remote software installation. To address such issues, herein, we describe recent innovations using containerization techniques with XNAT/DAX which are used to isolate the VUIIS CCI medical image data storage and processing infrastructure from the underlying hardware and software environments. The newly presented XNAT/DAX solution has the following new features: (1) multi-level portability from system level to the application level, (2) flexible and dynamic software development and expansion, and (3) scalable spider deployment compatible with HPC clusters and local workstations.
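The containerization idea can be sketched generically. The fragment below is not the DAX or XNAT interface; it simply shows, under assumed image and path names, how a processing "spider" might be isolated from the host by running it inside a container via the standard docker CLI.

```python
import subprocess

def run_spider_in_container(image, input_dir, output_dir, command):
    """Run one pipeline step inside a container so its software stack
    is independent of the host node (illustrative sketch only)."""
    cmd = [
        "docker", "run", "--rm",
        "-v", f"{input_dir}:/input:ro",     # mount input data read-only
        "-v", f"{output_dir}:/output",      # mount a writable output area
        image,
    ] + command
    subprocess.run(cmd, check=True)

# Hypothetical example: the image name and script path are placeholders.
run_spider_in_container(
    image="vuiis/example-spider:latest",
    input_dir="/data/session01",
    output_dir="/data/session01/derived",
    command=["python", "/opt/spider/process.py", "/input", "/output"],
)
```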
OCCAM: a flexible, multi-purpose and extendable HPC cluster
NASA Astrophysics Data System (ADS)
Aldinucci, M.; Bagnasco, S.; Lusso, S.; Pasteris, P.; Rabellino, S.; Vallero, S.
2017-10-01
The Open Computing Cluster for Advanced data Manipulation (OCCAM) is a multipurpose flexible HPC cluster designed and operated by a collaboration between the University of Torino and the Sezione di Torino of the Istituto Nazionale di Fisica Nucleare. It is aimed at providing a flexible, reconfigurable and extendable infrastructure to cater to a wide range of different scientific computing use cases, including ones from solid-state chemistry, high-energy physics, computer science, big data analytics, computational biology, genomics and many others. Furthermore, it will serve as a platform for R&D activities on computational technologies themselves, with topics ranging from GPU acceleration to Cloud Computing technologies. A heterogeneous and reconfigurable system like this poses a number of challenges related to the frequency at which heterogeneous hardware resources might change their availability and shareability status, which in turn affect methods and means to allocate, manage, optimize, bill, monitor VMs, containers, virtual farms, jobs, interactive bare-metal sessions, etc. This work describes some of the use cases that prompted the design and construction of the HPC cluster, its architecture and resource provisioning model, along with a first characterization of its performance by some synthetic benchmark tools and a few realistic use-case tests.
Brown, David K.; Penkler, David L.; Musyoka, Thommas M.; Bishop, Özlem Tastan
2015-01-01
Complex computational pipelines are becoming a staple of modern scientific research. Often these pipelines are resource intensive and require days of computing time. In such cases, it makes sense to run them over high performance computing (HPC) clusters where they can take advantage of the aggregated resources of many powerful computers. In addition to this, researchers often want to integrate their workflows into their own web servers. In these cases, software is needed to manage the submission of jobs from the web interface to the cluster and then return the results once the job has finished executing. We have developed the Job Management System (JMS), a workflow management system and web interface for high performance computing (HPC). JMS provides users with a user-friendly web interface for creating complex workflows with multiple stages. It integrates this workflow functionality with the resource manager, a tool that is used to control and manage batch jobs on HPC clusters. As such, JMS combines workflow management functionality with cluster administration functionality. In addition, JMS provides developer tools including a code editor and the ability to version tools and scripts. JMS can be used by researchers from any field to build and run complex computational pipelines and provides functionality to include these pipelines in external interfaces. JMS is currently being used to house a number of bioinformatics pipelines at the Research Unit in Bioinformatics (RUBi) at Rhodes University. JMS is an open-source project and is freely available at https://github.com/RUBi-ZA/JMS. PMID:26280450
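To make the web-to-cluster hand-off concrete, here is a minimal sketch (not JMS's actual code) of how a web backend might submit a generated job script to a Torque/PBS-style resource manager and record the returned job identifier; the script path and queue name are assumptions.

```python
import subprocess

def submit_job(script_path, queue="batch"):
    """Submit a batch script with qsub and return the job ID printed by the scheduler."""
    result = subprocess.run(
        ["qsub", "-q", queue, script_path],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()   # e.g. "12345.headnode"

def job_state(job_id):
    """Poll the scheduler for the job's status line (empty once the job has left the queue)."""
    result = subprocess.run(["qstat", job_id], capture_output=True, text=True)
    return result.stdout

if __name__ == "__main__":
    job = submit_job("workflow_stage1.sh")
    print("submitted:", job)
    print(job_state(job))
```

A workflow manager builds on exactly this kind of loop: generate a script per stage, submit it, poll until completion, then hand the results to the next stage or back to the web interface.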
NREL's Building-Integrated Supercomputer Provides Heating and Efficient Computing (Fact Sheet)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
2014-09-01
NREL's Energy Systems Integration Facility (ESIF) is meant to investigate new ways to integrate energy sources so they work together efficiently, and one of the key tools to that investigation, a new supercomputer, is itself a prime example of energy systems integration. NREL teamed with Hewlett-Packard (HP) and Intel to develop the innovative warm-water, liquid-cooled Peregrine supercomputer, which not only operates efficiently but also serves as the primary source of building heat for ESIF offices and laboratories. This innovative high-performance computer (HPC) can perform more than a quadrillion calculations per second as part of the world's most energy-efficient HPC data center.
ERIC Educational Resources Information Center
Abuzaghleh, Omar; Goldschmidt, Kathleen; Elleithy, Yasser; Lee, Jeongkyu
2013-01-01
With the advances in computing power, high-performance computing (HPC) platforms have had an impact on not only scientific research in advanced organizations but also computer science curriculum in the educational community. For example, multicore programming and parallel systems are highly desired courses in the computer science major. However,…
RAPPORT: running scientific high-performance computing applications on the cloud.
Cohen, Jeremy; Filippis, Ioannis; Woodbridge, Mark; Bauer, Daniela; Hong, Neil Chue; Jackson, Mike; Butcher, Sarah; Colling, David; Darlington, John; Fuchs, Brian; Harvey, Matt
2013-01-28
Cloud computing infrastructure is now widely used in many domains, but one area where there has been more limited adoption is research computing, in particular for running scientific high-performance computing (HPC) software. The Robust Application Porting for HPC in the Cloud (RAPPORT) project took advantage of existing links between computing researchers and application scientists in the fields of bioinformatics, high-energy physics (HEP) and digital humanities, to investigate running a set of scientific HPC applications from these domains on cloud infrastructure. In this paper, we focus on the bioinformatics and HEP domains, describing the applications and target cloud platforms. We conclude that, while there are many factors that need consideration, there is no fundamental impediment to the use of cloud infrastructure for running many types of HPC applications and, in some cases, there is potential for researchers to benefit significantly from the flexibility offered by cloud platforms.
HPC Software Stack Testing Framework
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garvey, Cormac
The HPC Software stack testing framework (hpcswtest) is used in the INL Scientific Computing Department to test the basic sanity and integrity of the HPC Software stack (Compilers, MPI, Numerical libraries and Applications) and to quickly discover hard failures, and as a by-product it will indirectly check the HPC infrastructure (network, PBS and licensing servers).
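A sanity check of this kind can be illustrated with a short sketch (this is not the hpcswtest code itself): compile a trivial MPI program and run it on two ranks, treating any non-zero exit status as a hard failure. The wrapper and launcher names (mpicc, mpirun) are common defaults and may differ per site.

```python
import os
import subprocess
import tempfile
import textwrap

MPI_HELLO = textwrap.dedent("""\
    #include <mpi.h>
    #include <stdio.h>
    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank; MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        printf("hello from rank %d\\n", rank);
        MPI_Finalize();
        return 0;
    }
""")

def mpi_smoke_test():
    """Return True if the MPI stack can compile and run a two-rank hello world."""
    with tempfile.TemporaryDirectory() as tmp:
        src = os.path.join(tmp, "hello.c")
        exe = os.path.join(tmp, "hello")
        with open(src, "w") as f:
            f.write(MPI_HELLO)
        if subprocess.run(["mpicc", src, "-o", exe]).returncode != 0:
            return False
        return subprocess.run(["mpirun", "-np", "2", exe]).returncode == 0

if __name__ == "__main__":
    print("MPI stack OK" if mpi_smoke_test() else "MPI stack FAILED")
```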
Roy Fraley Professional II-Engineer Roy.Fraley@nrel.gov | 303-384-6468 Roy Fraley is the high-performance computing (HPC) data center engineer with the Computational Science Center's HPC
Performance Analysis, Modeling and Scaling of HPC Applications and Tools
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhatele, Abhinav
2016-01-13
Efficient use of supercomputers at DOE centers is vital for maximizing system throughput, minimizing energy costs and enabling science breakthroughs faster. This requires complementary efforts along several directions to optimize the performance of scientific simulation codes and the underlying runtimes and software stacks. This in turn requires providing scalable performance analysis tools and modeling techniques that can provide feedback to physicists and computer scientists developing the simulation codes and runtimes respectively. The PAMS project is using time allocations on supercomputers at ALCF, NERSC and OLCF to further the goals described above by performing research along the following fronts: 1. Scaling Study of HPC applications; 2. Evaluation of Programming Models; 3. Hardening of Performance Tools; 4. Performance Modeling of Irregular Codes; and 5. Statistical Analysis of Historical Performance Data. We are a team of computer and computational scientists funded by both DOE/NNSA and DOE/ASCR programs such as ECRP, XStack (Traleika Glacier, PIPER), ExaOSR (ARGO), SDMAV II (MONA) and PSAAP II (XPACC). This allocation will enable us to study big data issues when analyzing performance on leadership computing class systems and to assist the HPC community in making the most effective use of these resources.
VTK-m: Accelerating the Visualization Toolkit for Massively Threaded Architectures
Moreland, Kenneth; Sewell, Christopher; Usher, William; ...
2016-05-09
Here, one of the most critical challenges for high-performance computing (HPC) scientific visualization is execution on massively threaded processors. Of the many fundamental changes we are seeing in HPC systems, one of the most profound is a reliance on new processor types optimized for execution bandwidth over latency hiding. Our current production scientific visualization software is not designed for these new types of architectures. To address this issue, the VTK-m framework serves as a container for algorithms, provides flexible data representation, and simplifies the design of visualization algorithms on new and future computer architecture.
Data Storage and Transfer | High-Performance Computing | NREL
High-Performance Computing (HPC) systems. WinSCP for Windows File Transfers Use to transfer files from a local computer to a remote computer. Robinhood for File Management Use this tool to manage your data files on Peregrine. Best
Resilience Design Patterns - A Structured Approach to Resilience at Extreme Scale (version 1.1)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hukerikar, Saurabh; Engelmann, Christian
Reliability is a serious concern for future extreme-scale high-performance computing (HPC) systems. Projections based on the current generation of HPC systems and technology roadmaps suggest the prevalence of very high fault rates in future systems. The errors resulting from these faults will propagate and generate various kinds of failures, which may result in outcomes ranging from result corruptions to catastrophic application crashes. Therefore the resilience challenge for extreme-scale HPC systems requires management of various hardware and software technologies that are capable of handling a broad set of fault models at accelerated fault rates. Also, due to practical limits on power consumption in HPC systems, future systems are likely to embrace innovative architectures, increasing the levels of hardware and software complexities. As a result the techniques that seek to improve resilience must navigate the complex trade-off space between resilience and the overheads to power consumption and performance. While the HPC community has developed various resilience solutions, application-level techniques as well as system-based solutions, the solution space of HPC resilience techniques remains fragmented. There are no formal methods and metrics to investigate and evaluate resilience holistically in HPC systems that consider impact scope, handling coverage, and performance & power efficiency across the system stack. Additionally, few of the current approaches are portable to newer architectures and software environments that will be deployed on future systems. In this document, we develop a structured approach to the management of HPC resilience using the concept of resilience-based design patterns. A design pattern is a general repeatable solution to a commonly occurring problem. We identify the commonly occurring problems and solutions used to deal with faults, errors and failures in HPC systems. Each established solution is described in the form of a pattern that addresses concrete problems in the design of resilient systems. The complete catalog of resilience design patterns provides designers with reusable design elements. We also define a framework that enhances a designer's understanding of the important constraints and opportunities for the design patterns to be implemented and deployed at various layers of the system stack. This design framework may be used to establish mechanisms and interfaces to coordinate flexible fault management across hardware and software components. The framework also supports optimization of the cost-benefit trade-offs among performance, resilience, and power consumption. The overall goal of this work is to enable a systematic methodology for the design and evaluation of resilience technologies in extreme-scale HPC systems that keep scientific applications running to a correct solution in a timely and cost-efficient manner in spite of frequent faults, errors, and failures of various types.
System-Level Virtualization for High Performance Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vallee, Geoffroy R; Naughton, III, Thomas J; Engelmann, Christian
2008-01-01
System-level virtualization has been a research topic since the 1970s but regained popularity during the past few years because of the availability of efficient solutions such as Xen and the implementation of hardware support in commodity processors (e.g. Intel-VT, AMD-V). However, a majority of system-level virtualization projects are guided by the server consolidation market. As a result, current virtualization solutions appear to not be suitable for high performance computing (HPC), which is typically based on large-scale systems. On the other hand, there is significant interest in exploiting virtual machines (VMs) within HPC for a number of other reasons. By virtualizing the machine, one is able to run a variety of operating systems and environments as needed by the applications. Virtualization allows users to isolate workloads, improving security and reliability. It is also possible to support non-native environments and/or legacy operating environments through virtualization. In addition, it is possible to balance workloads, use migration techniques to relocate applications from failing machines, and isolate fault systems for repair. This document presents the challenges for the implementation of a system-level virtualization solution for HPC. It also presents a brief survey of the different approaches and techniques to address these challenges.
Continuous Security and Configuration Monitoring of HPC Clusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garcia-Lomeli, H. D.; Bertsch, A. D.; Fox, D. M.
Continuous security and configuration monitoring of information systems has been a time-consuming and laborious task for system administrators at the High Performance Computing (HPC) center. Prior to this project, system administrators had to manually check the settings of thousands of nodes, which required a significant number of hours rendering the old process ineffective and inefficient. This paper explains the application of Splunk Enterprise, a software agent, and a reporting tool in the development of a user application interface to track and report on critical system updates and security compliance status of HPC clusters. In conjunction with other configuration management systems, the reporting tool is intended to provide continuous situational awareness to system administrators of the compliance state of information systems. Our approach consisted of the development, testing, and deployment of an agent to collect any arbitrary information across a massively distributed computing center, and organize that information into a human-readable format. Using Splunk Enterprise, this raw data was then gathered into a central repository and indexed for search, analysis, and correlation. Following acquisition and accumulation, the reporting tool generated and presented actionable information by filtering the data according to command line parameters passed at run time. Preliminary data showed results for over six thousand nodes. Further research and expansion of this tool could lead to the development of a series of agents to gather and report critical system parameters. However, in order to make use of the flexibility and resourcefulness of the reporting tool the agent must conform to specifications set forth in this paper. This project has simplified the way system administrators gather, analyze, and report on the configuration and security state of HPC clusters, maintaining ongoing situational awareness. Rather than querying each cluster independently, compliance checking can be managed from one central location.
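The agent's role can be pictured with a few lines of Python (an illustration under assumed checks, not the tool described in the paper): gather a handful of node facts and emit them as timestamped key=value pairs, a line format that log indexers such as Splunk can ingest directly.

```python
import platform
import subprocess
import time

def collect_node_facts():
    """Gather a few illustrative configuration facts from the local node."""
    facts = {
        "hostname": platform.node(),
        "kernel": platform.release(),
        "os": platform.platform(),
    }
    try:
        # Example compliance item (assumed check): is the firewall service active?
        state = subprocess.run(["systemctl", "is-active", "firewalld"],
                               capture_output=True, text=True)
        facts["firewalld"] = state.stdout.strip() or "unknown"
    except FileNotFoundError:
        facts["firewalld"] = "unavailable"
    return facts

def emit(facts):
    """Print one timestamped key=value line per fact for a log indexer to pick up."""
    ts = time.strftime("%Y-%m-%dT%H:%M:%S")
    for key, value in facts.items():
        print(f'{ts} {key}="{value}"')

if __name__ == "__main__":
    emit(collect_node_facts())
```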
Cockrell, Chase; An, Gary
2017-10-07
Sepsis affects nearly 1 million people in the United States per year, has a mortality rate of 28-50% and requires more than $20 billion a year in hospital costs. Over a quarter century of research has not yielded a single reliable diagnostic test or a directed therapeutic agent for sepsis. Central to this insufficiency is the fact that sepsis remains a clinical/physiological diagnosis representing a multitude of molecularly heterogeneous pathological trajectories. Advances in computational capabilities offered by High Performance Computing (HPC) platforms call for an evolution in the investigation of sepsis to attempt to define the boundaries of traditional research (bench, clinical and computational) through the use of computational proxy models. We present a novel investigatory and analytical approach, derived from how HPC resources and simulation are used in the physical sciences, to identify the epistemic boundary conditions of the study of clinical sepsis via the use of a proxy agent-based model of systemic inflammation. Current predictive models for sepsis use correlative methods that are limited by patient heterogeneity and data sparseness. We address this issue by using an HPC version of a system-level validated agent-based model of sepsis, the Innate Immune Response Agent-Based Model (IIRABM), as a proxy system in order to identify boundary conditions for the possible behavioral space for sepsis. We then apply advanced analysis derived from the study of Random Dynamical Systems (RDS) to identify novel means for characterizing system behavior and providing insight into the tractability of traditional investigatory methods. The behavior space of the IIRABM was examined by simulating over 70 million sepsis patients for up to 90 days in a sweep across the following parameters: cardio-respiratory-metabolic resilience; microbial invasiveness; microbial toxigenesis; and degree of nosocomial exposure. In addition to using established methods for describing parameter space, we developed two novel methods for characterizing the behavior of an RDS: Probabilistic Basins of Attraction (PBoA) and Stochastic Trajectory Analysis (STA). Computationally generated behavioral landscapes demonstrated attractor structures around stochastic regions of behavior that could be described in a complementary fashion through use of PBoA and STA. The stochasticity of the boundaries of the attractors highlights the challenge for correlative attempts to characterize and classify clinical sepsis. HPC simulations of models like the IIRABM can be used to generate approximations of the behavior space of sepsis to both establish "boundaries of futility" with respect to existing investigatory approaches and apply system engineering principles to investigate the general dynamic properties of sepsis to provide a pathway for developing control strategies. The issues that bedevil the study and treatment of sepsis, namely clinical data sparseness and inadequate experimental sampling of system behavior space, are fundamental to nearly all biomedical research, manifesting in the "Crisis of Reproducibility" at all levels.
HPC-augmented simulation-based research offers an investigatory strategy more consistent with that seen in the physical sciences (which combine experiment, theory and simulation), and an opportunity to utilize the leading advances in HPC, namely deep machine learning and evolutionary computing, to form the basis of an iterative scientific process to meet the full promise of Precision Medicine (right drug, right patient, right time). Copyright © 2017. Published by Elsevier Ltd.
High Performance Computing Innovation Service Portal Study (HPC-ISP)
2009-04-01
threatened by global competition. It is essential that these suppliers remain competitive and maintain their technological advantage. In this increasingly...place themselves, as well as customers who rely on them, in competitive jeopardy. Despite the potential competitive advantage associated with adopting...computing users into the HPC fold and to enable more entry-level users to exploit HPC more fully for competitive advantage. About half of the surveyed
SCEAPI: A unified Restful Web API for High-Performance Computing
NASA Astrophysics Data System (ADS)
Rongqiang, Cao; Haili, Xiao; Shasha, Lu; Yining, Zhao; Xiaoning, Wang; Xuebin, Chi
2017-10-01
The development of scientific computing is increasingly moving to collaborative web and mobile applications. All these applications need high-quality programming interface for accessing heterogeneous computing resources consisting of clusters, grid computing or cloud computing. In this paper, we introduce our high-performance computing environment that integrates computing resources from 16 HPC centers across China. Then we present a bundle of web services called SCEAPI and describe how it can be used to access HPC resources with HTTP or HTTPs protocols. We discuss SCEAPI from several aspects including architecture, implementation and security, and address specific challenges in designing compatible interfaces and protecting sensitive data. We describe the functions of SCEAPI including authentication, file transfer and job management for creating, submitting and monitoring, and how to use SCEAPI in an easy-to-use way. Finally, we discuss how to exploit more HPC resources quickly for the ATLAS experiment by implementing the custom ARC compute element based on SCEAPI, and our work shows that SCEAPI is an easy-to-use and effective solution to extend opportunistic HPC resources.
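The style of interaction the paper describes can be illustrated with a short client sketch. The endpoint paths, token header, and JSON fields below are assumptions for illustration and are not the actual SCEAPI definitions.

```python
import requests

BASE = "https://api.example.org/v1"       # placeholder host, not the real SCEAPI endpoint
TOKEN = "user-auth-token"                 # obtained from a prior authentication call

def submit_job(app, args, nodes):
    """POST a job description and return the server-assigned job ID (assumed schema)."""
    resp = requests.post(
        f"{BASE}/jobs",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"application": app, "arguments": args, "nodes": nodes},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["job_id"]

def job_status(job_id):
    """GET the current state of a previously submitted job (assumed schema)."""
    resp = requests.get(f"{BASE}/jobs/{job_id}",
                        headers={"Authorization": f"Bearer {TOKEN}"}, timeout=30)
    resp.raise_for_status()
    return resp.json()["state"]

if __name__ == "__main__":
    jid = submit_job("atlas-sim", ["--events", "1000"], nodes=4)
    print(jid, job_status(jid))
```

The value of a unified REST layer is exactly this: the same few HTTP calls work whether the job lands on a cluster, a grid site, or a cloud resource behind the API.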
Opportunities for nonvolatile memory systems in extreme-scale high-performance computing
Vetter, Jeffrey S.; Mittal, Sparsh
2015-01-12
For extreme-scale high-performance computing systems, system-wide power consumption has been identified as one of the key constraints moving forward, where DRAM main memory systems account for about 30 to 50 percent of a node's overall power consumption. As the benefits of device scaling for DRAM memory slow, it will become increasingly difficult to keep memory capacities balanced with increasing computational rates offered by next-generation processors. However, several emerging memory technologies related to nonvolatile memory (NVM) devices are being investigated as an alternative for DRAM. Moving forward, NVM devices could offer solutions for HPC architectures. Researchers are investigating how to integrate these emerging technologies into future extreme-scale HPC systems and how to expose these capabilities in the software stack and applications. In addition, current results show several of these strategies could offer high-bandwidth I/O, larger main memory capacities, persistent data structures, and new approaches for application resilience and output postprocessing, such as transaction-based incremental checkpointing and in situ visualization, respectively.
System Connection via SSH Gateway | High-Performance Computing | NREL
@peregrine.hpc.nrel.gov First time logging in? If this is the first time you've logged in with your new account, you will password. You will be prompted to enter it a second time, then you will be logged off. Just reconnect with your HPC password at any time, you can simply use the passwd command. Remote Users If you're connecting
Cognitive Model Exploration and Optimization: A New Challenge for Computational Science
2010-03-01
the generation and analysis of computational cognitive models to explain various aspects of cognition. Typically the behavior of these models...computational scale of a workstation, so we have turned to high performance computing (HPC) clusters and volunteer computing for large-scale...computational resources. The majority of applications on the Department of Defense HPC clusters focus on solving partial differential equations (Post
A Long History of Supercomputing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grider, Gary
As part of its national security science mission, Los Alamos National Laboratory and HPC have a long, entwined history dating back to the earliest days of computing. From bringing the first problem to the nation’s first computer to building the first machine to break the petaflop barrier, Los Alamos holds many “firsts” in HPC breakthroughs. Today, supercomputers are integral to stockpile stewardship and the Laboratory continues to work with vendors in developing the future of HPC.
Advanced Biomedical Computing Center (ABCC) | DSITP
The Advanced Biomedical Computing Center (ABCC), located in Frederick Maryland (MD), provides HPC resources for both NIH/NCI intramural scientists and the extramural biomedical research community. Its mission is to provide HPC support, to provide collaborative research, and to conduct in-house research in various areas of computational biology and biomedical research.
Resilience Design Patterns: A Structured Approach to Resilience at Extreme Scale
Engelmann, Christian; Hukerikar, Saurabh
2017-09-01
Reliability is a serious concern for future extreme-scale high-performance computing (HPC) systems. Projections based on the current generation of HPC systems and technology roadmaps suggest the prevalence of very high fault rates in future systems. While the HPC community has developed various resilience solutions, application-level techniques as well as system-based solutions, the solution space remains fragmented. There are no formal methods and metrics to integrate the various HPC resilience techniques into composite solutions, nor are there methods to holistically evaluate the adequacy and efficacy of such solutions in terms of their protection coverage, and their performance & power efficiency characteristics. Additionally, few of the current approaches are portable to newer architectures and software environments that will be deployed on future systems. In this paper, we develop a structured approach to the design, evaluation and optimization of HPC resilience using the concept of design patterns. A design pattern is a general repeatable solution to a commonly occurring problem. We identify the problems caused by various types of faults, errors and failures in HPC systems and the techniques used to deal with these events. Each well-known solution that addresses a specific HPC resilience challenge is described in the form of a pattern. We develop a complete catalog of such resilience design patterns, which may be used by system architects, system software and tools developers, application programmers, as well as users and operators as essential building blocks when designing and deploying resilience solutions. We also develop a design framework that enhances a designer's understanding of the opportunities for integrating multiple patterns across layers of the system stack and the important constraints during implementation of the individual patterns. It is also useful for defining mechanisms and interfaces to coordinate flexible fault management across hardware and software components. The resilience patterns and the design framework also enable exploration and evaluation of design alternatives and support optimization of the cost-benefit trade-offs among performance, protection coverage, and power consumption of resilience solutions. Here, the overall goal of this work is to establish a systematic methodology for the design and evaluation of resilience technologies in extreme-scale HPC systems that keep scientific applications running to a correct solution in a timely and cost-efficient manner despite frequent faults, errors, and failures of various types.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klitsner, Tom
The recent Executive Order creating the National Strategic Computing Initiative (NSCI) recognizes the value of high performance computing for economic competitiveness and scientific discovery and commits to accelerate delivery of exascale computing. The HPC programs at Sandia (the NNSA ASC program and Sandia's Institutional HPC Program) are focused on ensuring that Sandia has the resources necessary to deliver computation in the national interest.
Analysis of Application Power and Schedule Composition in a High Performance Computing Environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elmore, Ryan; Gruchalla, Kenny; Phillips, Caleb
As the capacity of high performance computing (HPC) systems continues to grow, small changes in energy management have the potential to produce significant energy savings. In this paper, we employ an extensive informatics system for aggregating and analyzing real-time performance and power use data to evaluate energy footprints of jobs running in an HPC data center. We look at the effects of algorithmic choices for a given job on the resulting energy footprints, analyze application-specific power consumption, and summarize average power use in the aggregate. All of these views reveal meaningful power variance between classes of applications as well as chosen methods for a given job. Using these data, we discuss energy-aware cost-saving strategies based on reordering the HPC job schedule. Using historical job and power data, we present a hypothetical job schedule reordering that: (1) reduces the facility's peak power draw and (2) manages power in conjunction with a large-scale photovoltaic array. Lastly, we leverage this data to understand the practical limits on predicting key power use metrics at the time of submission.
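One simple way to picture schedule reordering for peak-power reduction (a toy illustration, not the paper's method) is a greedy pass that delays a job whenever starting it immediately would push the estimated facility draw over a cap.

```python
def greedy_power_schedule(jobs, power_cap):
    """jobs: list of (name, duration_hours, avg_power_kw).
    Returns (start_time, name) pairs; a job is delayed until starting it
    keeps the summed draw of concurrent jobs at or under power_cap."""
    running = []          # (end_time, power) of jobs already placed
    schedule = []
    t = 0.0
    for name, duration, power in sorted(jobs, key=lambda j: -j[2]):
        while True:
            running = [(end, p) for end, p in running if end > t]
            if sum(p for _, p in running) + power <= power_cap:
                break
            t = min(end for end, _ in running)     # wait for the next running job to finish
        running.append((t + duration, power))
        schedule.append((t, name))
    return schedule

jobs = [("cfd", 4, 300), ("md", 2, 250), ("viz", 1, 120), ("ml", 3, 200)]
for start, name in greedy_power_schedule(jobs, power_cap=500):
    print(f"{name} starts at t={start:.1f} h")
```

The same loop could be driven by a forecast of photovoltaic output instead of a fixed cap, which is the coupling the abstract alludes to.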
Workload Characterization of a Leadership Class Storage Cluster
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Youngjae; Gunasekaran, Raghul; Shipman, Galen M
2010-01-01
Understanding workload characteristics is critical for optimizing and improving the performance of current systems and software, and architecting new storage systems based on observed workload patterns. In this paper, we characterize the scientific workloads of the world's fastest HPC (High Performance Computing) storage cluster, Spider, at the Oak Ridge Leadership Computing Facility (OLCF). Spider provides an aggregate bandwidth of over 240 GB/s with over 10 petabytes of RAID 6 formatted capacity. OLCF's flagship petascale simulation platform, Jaguar, and other large HPC clusters, in total over 250 thousand compute cores, depend on Spider for their I/O needs. We characterize the system utilization, the demands of reads and writes, idle time, and the distribution of read requests to write requests for the storage system observed over a period of 6 months. From this study we develop synthesized workloads and we show that the read and write I/O bandwidth usage as well as the inter-arrival time of requests can be modeled as a Pareto distribution.
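For reference, the heavy-tailed Pareto model mentioned above has the cumulative distribution

\[ F(x) = 1 - \left(\frac{x_m}{x}\right)^{\alpha}, \qquad x \ge x_m, \; \alpha > 0, \]

where the threshold \(x_m\) and the shape \(\alpha\) are the fitted parameters; a small \(\alpha\) corresponds to a heavy tail, i.e. occasional very long inter-arrival gaps or very large bandwidth bursts dominating the workload.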
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aderholdt, Ferrol; Caldwell, Blake A.; Hicks, Susan Elaine
High performance computing environments are often used for a wide variety of workloads ranging from simulation, data transformation and analysis, and complex workflows to name just a few. These systems may process data at various security levels but in so doing are often enclaved at the highest security posture. This approach places significant restrictions on the users of the system even when processing data at a lower security level and exposes data at higher levels of confidentiality to a much broader population than otherwise necessary. The traditional approach of isolation, while effective in establishing security enclaves, poses significant challenges for the use of shared infrastructure in HPC environments. This report details the current state of the art in reconfigurable network enclaving through Software Defined Networking (SDN) and Network Function Virtualization (NFV) and their applicability to secure enclaves in HPC environments. SDN and NFV methods are based on a solid foundation of system-wide virtualization. Their purpose is straightforward: the system administrator can deploy networks that are more amenable to customer needs, and at the same time achieve increased scalability, making it easier to increase overall capacity as needed without negatively affecting functionality. The network administration of both the server system and the virtual sub-systems is simplified, allowing control of the infrastructure through well-defined APIs (Application Programming Interfaces). While SDN and NFV technologies offer significant promise in meeting these goals, they also provide the ability to address a significant component of the multi-tenant challenge in HPC environments, namely resource isolation. Traditional HPC systems are built upon scalable high-performance networking technologies designed to meet specific application requirements. Dynamic isolation of resources within these environments has remained difficult to achieve. SDN and NFV methodology provide us with relevant concepts and available open-standards-based APIs that isolate compute and storage resources within an otherwise common networking infrastructure. Additionally, the integration of the networking APIs within larger system frameworks such as OpenStack provides the tools necessary to establish isolated enclaves dynamically, allowing the benefits of HPC while providing a controlled security structure surrounding these systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tan, Li; Chen, Zizhong; Song, Shuaiwen
2016-01-18
Energy efficiency and resilience are two crucial challenges for HPC systems to reach exascale. While energy efficiency and resilience issues have been extensively studied individually, little has been done to understand the interplay between energy efficiency and resilience for HPC systems. Decreasing the supply voltage associated with a given operating frequency for processors and other CMOS-based components can significantly reduce power consumption. However, this often raises system failure rates and consequently increases application execution time. In this work, we present an energy saving undervolting approach that leverages the mainstream resilience techniques to tolerate the increased failures caused by undervolting.
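The voltage-power relationship behind undervolting follows from the standard CMOS dynamic-power model (a textbook relation, not a result from this report):

\[ P_{\text{dyn}} \approx \alpha \, C \, V^{2} f, \]

where \(\alpha\) is the activity factor, \(C\) the switched capacitance, \(V\) the supply voltage, and \(f\) the operating frequency. Lowering \(V\) at a fixed \(f\) therefore cuts dynamic power roughly quadratically, but it also shrinks timing margins and raises the failure rate, which is the extra load the resilience techniques described above must absorb.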
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lingerfelt, Eric J; Endeve, Eirik; Hui, Yawei
Improvements in scientific instrumentation allow imaging at mesoscopic to atomic length scales, many spectroscopic modes, and now--with the rise of multimodal acquisition systems and the associated processing capability--the era of multidimensional, informationally dense data sets has arrived. Technical issues in these combinatorial scientific fields are exacerbated by computational challenges best summarized as a necessity for drastic improvement in the capability to transfer, store, and analyze large volumes of data. The Bellerophon Environment for Analysis of Materials (BEAM) platform provides material scientists the capability to directly leverage the integrated computational and analytical power of High Performance Computing (HPC) to perform scalable data analysis and simulation and manage uploaded data files via an intuitive, cross-platform client user interface. This framework delivers authenticated, "push-button" execution of complex user workflows that deploy data analysis algorithms and computational simulations utilizing compute-and-data cloud infrastructures and HPC environments like Titan at the Oak Ridge Leadership Computing Facility (OLCF).
Goel, Deepa; Babu, Sasidhara; Prayaga, Aruna K; Sundaram, Challa
2008-01-01
Meningeal hemangiopericytoma (HPC) is a rare neoplasm. It is closely related to hemangiopericytomas in systemic tissues, with a tendency to recur and metastasize outside the CNS. Only a few case reports describe the cytomorphologic appearance of these metastasizing lesions, most having primary tumor in deep soft tissues. We report a case of recurrent meningeal HPC metastasizing to lungs. A 48-year-old woman presented with a history of headache. She underwent primary surgery 10 years previously for left parietal tumor. Histopathologic diagnosis was HPC. Radiotherapy was given postoperatively. Brain magnetic resonance imaging (MRI) at admission suggested local recurrence. She also complained of dry cough and shortness of breath. On evaluation, computed tomography (CT) scan lung showed multiple, bilateral, small nodules. Fine needle aspiration cytology (FNAC) of a larger nodule revealed spindle-shaped cells arranged around blood vessels. Immunohistochemistry with CD34 on cell block confirmed metastatic HPC. FNAC is an easy, accurate, relatively noninvasive procedure for diagnosing metastases, especially in patients with a history of recurrent intracranial HPC. Immunohistochemistry on cell block material collected at the time of FNAC may aid in distinguishing HPC from other tumors that are close mimics cytologically.
Data Transfer Study HPSS Archiving
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wynne, James; Parete-Koon, Suzanne T; Mitchell, Quinn
2015-01-01
The movement of the large amounts of data produced by codes run in a High Performance Computing (HPC) environment can be a bottleneck for project workflows. To balance filesystem capacity and performance requirements, HPC centers enforce data management policies to purge old files to make room for new computation and analysis results. Users at Oak Ridge Leadership Computing Facility (OLCF) and many other HPC user facilities must archive data to avoid data loss during purges, therefore the time associated with data movement for archiving is something that all users must consider. This study observed the difference in transfer speed from the originating location on the Lustre filesystem to the more permanent High Performance Storage System (HPSS). The tests were done with a number of different transfer methods for files that spanned a variety of sizes and compositions that reflect OLCF user data. This data will be used to help users of Titan and other Cray supercomputers plan their workflow and data transfers so that they are most efficient for their project. We will also discuss best practice for maintaining data at shared user facilities.
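A widely recommended practice when archiving to tape-backed stores such as HPSS is to bundle many small files into a single archive before transfer, since per-file overhead dominates small-file transfers. The snippet below is a generic illustration using Python's standard tarfile module, not a command or result taken from the study; the size threshold is an arbitrary assumption.

```python
import tarfile
from pathlib import Path

def bundle_small_files(src_dir, bundle_path, small_limit=64 * 1024 * 1024):
    """Pack files smaller than small_limit bytes into one tarball for archiving;
    larger files are left alone and returned so they can be transferred directly."""
    large_files = []
    with tarfile.open(bundle_path, "w") as tar:
        for path in Path(src_dir).rglob("*"):
            if not path.is_file():
                continue
            if path.stat().st_size < small_limit:
                tar.add(path, arcname=str(path.relative_to(src_dir)))
            else:
                large_files.append(path)
    return large_files

if __name__ == "__main__":
    big = bundle_small_files("run_output", "run_output_small_files.tar")
    print("transfer the tarball plus", len(big), "large files individually")
```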
NASA Astrophysics Data System (ADS)
Fasel, Markus
2016-10-01
High-Performance Computing Systems are powerful tools tailored to support large-scale applications that rely on low-latency inter-process communications to run efficiently. By design, these systems often impose constraints on application workflows, such as limited external network connectivity and whole node scheduling, that make more general-purpose computing tasks, such as those commonly found in high-energy nuclear physics applications, more difficult to carry out. In this work, we present a tool designed to simplify access to such complicated environments by handling the common tasks of job submission, software management, and local data management, in a framework that is easily adaptable to the specific requirements of various computing systems. The tool, initially constructed to process stand-alone ALICE simulations for detector and software development, was successfully deployed on the NERSC computing systems, Carver, Hopper and Edison, and is being configured to provide access to the next generation NERSC system, Cori. In this report, we describe the tool and discuss our experience running ALICE applications on NERSC HPC systems. The discussion will include our initial benchmarks of Cori compared to other systems and our attempts to leverage the new capabilities offered with Cori to support data-intensive applications, with a future goal of full integration of such systems into ALICE grid operations.
An Integrated Software Package to Enable Predictive Simulation Capabilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yousu; Fitzhenry, Erin B.; Jin, Shuangshuang
The power grid is increasing in complexity due to the deployment of smart grid technologies. Such technologies vastly increase the size and complexity of power grid systems for simulation and modeling. This increasing complexity necessitates not only the use of high-performance-computing (HPC) techniques, but a smooth, well-integrated interplay between HPC applications. This paper presents a new integrated software package that integrates HPC applications and a web-based visualization tool based on a middleware framework. This framework can support the data communication between different applications. Case studies with a large power system demonstrate the predictive capability brought by the integrated software package, as well as the better situational awareness provided by the web-based visualization tool in a live mode. Test results validate the effectiveness and usability of the integrated software package.
Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy
NASA Astrophysics Data System (ADS)
Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli
2014-03-01
One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time, which is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as the multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share the common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available.
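The scalability analysis cited above rests on Amdahl's law: for a parallel fraction \(p\) of the work run on \(n\) cores, the ideal speedup is

\[ S(n) = \frac{1}{(1 - p) + p/n}, \]

so the 12-fold improvement reported here corresponds to a very small serial fraction \(1 - p\), and the asymptotic limit \(1/(1 - p)\) bounds what adding further cores can achieve.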
Auto-Versioning Systems Image Manager
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pezzaglia, Larry
2013-08-01
The av_sys_image_mgr utility provides an interface for the creation, manipulation, and analysis of system boot images for computer systems. It is primarily intended to provide a convenient method for managing the introduction of changes to boot images for long-lived production HPC systems.
ERIC Educational Resources Information Center
Amenyo, John-Thones
2012-01-01
Carefully engineered playable games can serve as vehicles for students and practitioners to learn and explore the programming of advanced computer architectures to execute applications, such as high performance computing (HPC) and complex, inter-networked, distributed systems. The article presents families of playable games that are grounded in…
Enabling parallel simulation of large-scale HPC network systems
Mubarak, Misbah; Carothers, Christopher D.; Ross, Robert B.; ...
2016-04-07
Here, with the increasing complexity of today’s high-performance computing (HPC) architectures, simulation has become an indispensable tool for exploring the design space of HPC systems—in particular, networks. In order to make effective design decisions, simulations of these systems must possess the following properties: (1) have high accuracy and fidelity, (2) produce results in a timely manner, and (3) be able to analyze a broad range of network workloads. Most state-of-the-art HPC network simulation frameworks, however, are constrained in one or more of these areas. In this work, we present a simulation framework for modeling two important classes of networks used in today’s IBM and Cray supercomputers: torus and dragonfly networks. We use the Co-Design of Multi-layer Exascale Storage Architecture (CODES) simulation framework to simulate these network topologies at a flit-level detail using the Rensselaer Optimistic Simulation System (ROSS) for parallel discrete-event simulation. Our simulation framework meets all the requirements of a practical network simulation and can assist network designers in design space exploration. First, it uses validated and detailed flit-level network models to provide an accurate and high-fidelity network simulation. Second, instead of relying on serial time-stepped or traditional conservative discrete-event simulations that limit simulation scalability and efficiency, we use the optimistic event-scheduling capability of ROSS to achieve efficient and scalable HPC network simulations on today’s high-performance cluster systems. Third, our models give network designers a choice in simulating a broad range of network workloads, including HPC application workloads using detailed network traces, an ability that is rarely offered in parallel with high-fidelity network simulations.
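The discrete-event core that such network simulators build on can be sketched in a few lines (a minimal sequential event loop for illustration only; ROSS's optimistic parallel scheduling, rollbacks, and flit-level network models go far beyond this sketch).

```python
import heapq

class EventLoop:
    """Minimal sequential discrete-event engine: events are (time, seq, handler, args)."""
    def __init__(self):
        self.queue = []
        self.now = 0.0
        self._seq = 0          # tie-breaker so equal-time events stay ordered

    def schedule(self, delay, handler, *args):
        heapq.heappush(self.queue, (self.now + delay, self._seq, handler, args))
        self._seq += 1

    def run(self, until):
        while self.queue and self.queue[0][0] <= until:
            self.now, _, handler, args = heapq.heappop(self.queue)
            handler(*args)

# Toy model: a packet hops around a small ring of routers with a fixed link latency.
sim = EventLoop()
LINK_LATENCY = 0.5

def arrive(router, hops_left):
    print(f"t={sim.now:.1f}: packet at router {router}, {hops_left} hops left")
    if hops_left > 0:
        sim.schedule(LINK_LATENCY, arrive, (router + 1) % 4, hops_left - 1)

sim.schedule(0.0, arrive, 0, 3)
sim.run(until=10.0)
```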
A Long History of Supercomputing
Grider, Gary
2018-06-13
As part of its national security science mission, Los Alamos National Laboratory has a long history entwined with HPC, dating back to the earliest days of computing. From bringing the first problem to the nation's first computer to building the first machine to break the petaflop barrier, Los Alamos holds many "firsts" in HPC breakthroughs. Today, supercomputers are integral to stockpile stewardship, and the Laboratory continues to work with vendors in developing the future of HPC.
Understanding I/O workload characteristics of a Peta-scale storage system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Youngjae; Gunasekaran, Raghul
2015-01-01
Understanding workload characteristics is critical for optimizing and improving the performance of current systems and software, and for architecting new storage systems based on observed workload patterns. In this paper, we characterize the I/O workloads of scientific applications on one of the world's fastest high-performance computing (HPC) storage clusters, Spider, at the Oak Ridge Leadership Computing Facility (OLCF). OLCF's flagship petascale simulation platform, Titan, and other large HPC clusters, totalling over 250,000 compute cores, depend on Spider for their I/O needs. We characterize the system utilization, the demands of reads and writes, idle time, storage space utilization, and the distribution of read requests to write requests for this petascale storage system. From this study, we develop synthesized workloads, and we show that the read and write I/O bandwidth usage as well as the inter-arrival time of requests can be modeled as a Pareto distribution. We also study I/O load imbalance problems using I/O performance data collected from the Spider storage system.
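The abstract's claim that bandwidth usage and request inter-arrival times follow a Pareto distribution can be reproduced in miniature with standard statistical tooling. The sketch below uses synthetic placeholder values rather than Spider trace data: it fits a Pareto model to inter-arrival times and then draws a synthesized workload from the fitted parameters.

```python
# Minimal sketch: fitting a Pareto distribution to I/O request inter-arrival
# times, as the Spider study models them. The values below are synthetic
# placeholders, not data from the paper.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical inter-arrival times (seconds) between observed I/O requests.
inter_arrivals = rng.pareto(a=1.8, size=10_000) + 1.0

# Fit a Pareto distribution, fixing the location at 0 so the scale parameter
# acts as the minimum inter-arrival time.
shape, loc, scale = stats.pareto.fit(inter_arrivals, floc=0)
print(f"fitted shape b={shape:.2f}, scale xm={scale:.2f}")

# Draw a synthesized workload of the same statistical character.
synthetic = stats.pareto.rvs(shape, loc=loc, scale=scale, size=1_000,
                             random_state=rng)
print(f"synthetic mean inter-arrival: {synthetic.mean():.3f} s")
```

The same recipe would apply to the read/write bandwidth-usage series described in the abstract.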
CyberShake: Running Seismic Hazard Workflows on Distributed HPC Resources
NASA Astrophysics Data System (ADS)
Callaghan, S.; Maechling, P. J.; Graves, R. W.; Gill, D.; Olsen, K. B.; Milner, K. R.; Yu, J.; Jordan, T. H.
2013-12-01
As part of its program of earthquake system science research, the Southern California Earthquake Center (SCEC) has developed a simulation platform, CyberShake, to perform physics-based probabilistic seismic hazard analysis (PSHA) using 3D deterministic wave propagation simulations. CyberShake performs PSHA by simulating a tensor-valued wavefield of Strain Green Tensors, and then using seismic reciprocity to calculate synthetic seismograms for about 415,000 events per site of interest. These seismograms are processed to compute ground motion intensity measures, which are then combined with probabilities from an earthquake rupture forecast to produce a site-specific hazard curve. Seismic hazard curves for hundreds of sites in a region can be used to calculate a seismic hazard map, representing the seismic hazard for a region. We present a recently completed PSHA study in which we calculated four CyberShake seismic hazard maps for the Southern California area to compare how CyberShake hazard results are affected by different SGT computational codes (AWP-ODC and AWP-RWG) and different community velocity models (Community Velocity Model - SCEC (CVM-S4) v11.11 and Community Velocity Model - Harvard (CVM-H) v11.9). We present our approach to running workflow applications on distributed HPC resources, including systems without support for remote job submission. We show how our approach extends the benefits of scientific workflows, such as job and data management, to large-scale applications on Track 1 and Leadership class open-science HPC resources. We used our distributed workflow approach to perform CyberShake Study 13.4 on two new NSF open-science HPC computing resources, Blue Waters and Stampede, executing over 470 million tasks to calculate physics-based hazard curves for 286 locations in the Southern California region. For each location, we calculated seismic hazard curves with two different community velocity models and two different SGT codes, resulting in over 1100 hazard curves. We will report on the performance of this CyberShake study, four times larger than previous studies. Additionally, we will examine the challenges we face applying these workflow techniques to additional open-science HPC systems and discuss whether our workflow solutions continue to provide value to our large-scale PSHA calculations.
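For readers unfamiliar with how per-rupture probabilities and simulated intensity measures combine into a hazard curve, the schematic sketch below applies the standard PSHA total-probability sum. The rupture rates, intensity levels, and lognormal exceedance model are illustrative assumptions, not CyberShake's actual implementation or data.

```python
# Schematic PSHA combination: for each intensity level x, the annual exceedance
# rate is the sum over ruptures of (annual rupture rate) * P(IM > x | rupture).
# All inputs are invented placeholders, not CyberShake data.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n_ruptures = 500
annual_rates = rng.uniform(1e-5, 1e-3, n_ruptures)               # from a rupture forecast
median_im = rng.lognormal(mean=-2.0, sigma=0.8, size=n_ruptures)  # median intensity (g)
sigma_ln = 0.6                                                    # ground-motion variability

im_levels = np.logspace(-2, 0, 50)                                # hazard-curve x axis (g)
hazard = np.zeros_like(im_levels)
for rate, med in zip(annual_rates, median_im):
    z = (np.log(im_levels) - np.log(med)) / sigma_ln              # lognormal exceedance
    hazard += rate * norm.sf(z)

print(hazard[:5])  # annual exceedance rates at the lowest intensity levels
```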
The Design of a High Performance Earth Imagery and Raster Data Management and Processing Platform
NASA Astrophysics Data System (ADS)
Xie, Qingyun
2016-06-01
This paper summarizes the general requirements and specific characteristics of both geospatial raster database management system and raster data processing platform from a domain-specific perspective as well as from a computing point of view. It also discusses the need of tight integration between the database system and the processing system. These requirements resulted in Oracle Spatial GeoRaster, a global scale and high performance earth imagery and raster data management and processing platform. The rationale, design, implementation, and benefits of Oracle Spatial GeoRaster are described. Basically, as a database management system, GeoRaster defines an integrated raster data model, supports image compression, data manipulation, general and spatial indices, content and context based queries and updates, versioning, concurrency, security, replication, standby, backup and recovery, multitenancy, and ETL. It provides high scalability using computer and storage clustering. As a raster data processing platform, GeoRaster provides basic operations, image processing, raster analytics, and data distribution featuring high performance computing (HPC). Specifically, HPC features include locality computing, concurrent processing, parallel processing, and in-memory computing. In addition, the APIs and the plug-in architecture are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fadika, Zacharia; Dede, Elif; Govindaraju, Madhusudhan
MapReduce is increasingly becoming a popular framework, and a potent programming model. The most popular open source implementation of MapReduce, Hadoop, is based on the Hadoop Distributed File System (HDFS). However, as HDFS is not POSIX compliant, it cannot be fully leveraged by applications running on a majority of existing HPC environments such as Teragrid and NERSC. These HPC environments typically support globally shared file systems such as NFS and GPFS. On such resourceful HPC infrastructures, the use of Hadoop not only creates compatibility issues, but also affects overall performance due to the added overhead of the HDFS. This paper not only presents a MapReduce implementation directly suitable for HPC environments, but also exposes the design choices for better performance gains in those settings. By leveraging inherent distributed file systems' functions, and abstracting them away from its MapReduce framework, MARIANE (MApReduce Implementation Adapted for HPC Environments) not only allows for the use of the model in an expanding number of HPC environments, but also allows for better performance in such settings. This paper shows the applicability and high performance of the MapReduce paradigm through MARIANE, an implementation designed for clustered and shared-disk file systems and as such not dedicated to a specific MapReduce solution. The paper identifies the components and trade-offs necessary for this model, and quantifies the performance gains exhibited by our approach in distributed environments over Apache Hadoop in a data intensive setting, on the Magellan testbed at the National Energy Research Scientific Computing Center (NERSC).
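The core idea, map tasks reading their splits straight from a shared POSIX file system rather than from HDFS, can be illustrated with a minimal word-count sketch. The directory path and the use of Python's multiprocessing pool are assumptions for illustration only and are not MARIANE's actual API.

```python
# Minimal MapReduce-over-shared-filesystem sketch in the spirit of MARIANE:
# map tasks read input splits directly from a shared directory (e.g., NFS/GPFS)
# instead of from HDFS. Paths and the word-count job are illustrative only.
import glob
import os
from collections import Counter
from multiprocessing import Pool


def map_task(path):
    """Map: count words in one input split read from the shared file system."""
    counts = Counter()
    with open(path, "r", errors="ignore") as f:
        for line in f:
            counts.update(line.split())
    return counts


def reduce_task(partials):
    """Reduce: merge the partial counts produced by all map tasks."""
    total = Counter()
    for partial in partials:
        total.update(partial)
    return total


if __name__ == "__main__":
    splits = glob.glob(os.path.join("/shared/input", "*.txt"))  # hypothetical path
    with Pool() as pool:
        partial_counts = pool.map(map_task, splits)
    print(reduce_task(partial_counts).most_common(10))
```

Because the splits already live on a globally visible file system, no data staging into a dedicated distributed file system is required, which is the performance argument the paper makes.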
NASA Astrophysics Data System (ADS)
Zhu, Dehua; Echendu, Shirley; Xuan, Yunqing; Webster, Mike; Cluckie, Ian
2016-11-01
Impact-focused studies of extreme weather require coupling accurate simulations of weather and climate systems with impact-measuring hydrological models, which together demand large computing resources. In this paper, we present a preliminary analysis of a high-performance computing (HPC)-based hydrological modelling approach, aimed at utilizing and maximizing HPC resources, to support studies of extreme weather impacts due to climate change. Four case studies are presented through implementation on the HPC Wales platform of the UK mesoscale meteorological Unified Model (UM) with the high-resolution simulation suite UKV, alongside a Linux-based hydrological model, Hydrological Predictions for the Environment (HYPE). The results of this study suggest that the coupled hydro-meteorological model was still able to capture the major flood peaks, compared with conventional gauge- or radar-driven forecasts, but with the added value of a much extended forecast lead time. The high-resolution rainfall estimation produced by the UKV performs similarly to radar rainfall products in the first 2-3 days of the tested flood events, but uncertainty increases markedly as the forecast horizon goes beyond 3 days. This study takes a step toward identifying how an online-mode approach can be used, in which both the numerical weather prediction and the hydrological model are executed simultaneously or on the same hardware infrastructure, so that more effective interaction and communication can be achieved and maintained between the models. The concluding observation, however, is that running the entire system on a reasonably powerful HPC platform does not yet allow for real-time simulations, even without the most complex and demanding data simulation part.
Investigating the Interplay between Energy Efficiency and Resilience in High Performance Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tan, Li; Song, Shuaiwen; Wu, Panruo
2015-05-29
Energy efficiency and resilience are two crucial challenges for HPC systems to reach exascale. While energy efficiency and resilience issues have been extensively studied individually, little has been done to understand the interplay between energy efficiency and resilience for HPC systems. Decreasing the supply voltage associated with a given operating frequency for processors and other CMOS-based components can significantly reduce power consumption. However, this often raises system failure rates and consequently increases application execution time. In this work, we present an energy saving undervolting approach that leverages the mainstream resilience techniques to tolerate the increased failures caused by undervolting.
Cognitive Model Exploration and Optimization: A New Challenge for Computational Science
2010-01-01
Research in cognitive science often involves the generation and analysis of computational cognitive models to explain various...HPC) clusters and volunteer computing for large-scale computational resources. The majority of applications on the Department of Defense HPC... clusters focus on solving partial differential equations (Post, 2009). These tend to be lean, fast models with little noise. While we lack specific
2011-08-01
Figure 4: Architectural diagram of running Blender on Amazon EC2 through Nimbis. Additional figures show classification of streaming data: example input images (top left) and all digit prototypes (cluster centers) found, with size proportional to frequency (top...).
Parallel computing in genomic research: advances and applications
Ocaña, Kary; de Oliveira, Daniel
2015-01-01
Today’s genomic experiments have to process the so-called “biological big data” that is now reaching terabyte and petabyte scale. To process this huge amount of data, scientists may require weeks or months if they use their own workstations. Parallelism techniques and high-performance computing (HPC) environments can be applied to reduce the total processing time and to ease the management, treatment, and analysis of these data. However, running bioinformatics experiments in HPC environments such as clouds, grids, clusters, and graphics processing units requires expertise from scientists to integrate computational, biological, and mathematical techniques and technologies. Several solutions have already been proposed to allow scientists to process their genomic experiments using HPC capabilities and parallelism techniques. This article presents a systematic review of the literature surveying the most recently published research involving genomics and parallel computing. Our objective is to gather the main characteristics, benefits, and challenges that can be considered by scientists when running their genomic experiments to benefit from parallelism techniques and HPC capabilities. PMID:26604801
The Use of High Performance Computing (HPC) to Strengthen the Development of Army Systems
2011-11-01
accurately predicting the supersonic Magnus effect about spinning cones, ogive-cylinders, and boat-tailed afterbodies. This work led to the successful...successful computer model of the proposed product or system, one can then build prototypes on the computer and study the effects on the performance of...needed. The NRC report discusses the requirements for effective use of such computing power. One needs “models, algorithms, software, hardware
Innovative HPC architectures for the study of planetary plasma environments
NASA Astrophysics Data System (ADS)
Amaya, Jorge; Wolf, Anna; Lembège, Bertrand; Zitz, Anke; Alvarez, Damian; Lapenta, Giovanni
2016-04-01
DEEP-ER is a European Commission-funded project that develops a new type of high-performance computer architecture. The revolutionary system is currently used by KU Leuven to study the effects of the solar wind on the global environments of the Earth and Mercury. The new architecture combines the versatility of Intel Xeon computing nodes with the power of the upcoming Intel Xeon Phi accelerators. Contrary to classical heterogeneous HPC architectures, where it is customary to find CPUs and accelerators in the same computing nodes, in the DEEP-ER system CPU nodes are grouped together (the Cluster) independently from the accelerator nodes (the Booster). The system is equipped with a state-of-the-art interconnection network, a highly scalable and fast I/O and a fail-recovery resiliency system. The final objective of the project is to introduce a scalable system that can be used to create the next generation of exascale supercomputers. The code iPic3D from KU Leuven is being adapted to this new architecture. This particle-in-cell code can now perform the computation of the electromagnetic fields in the Cluster while the particles are moved in the Booster side. Using fast and scalable Xeon Phi accelerators in the Booster we can introduce many more particles per cell in the simulation than is possible in the current generation of HPC systems, allowing fully kinetic plasmas to be calculated with very low interpolation noise. The system will be used to perform fully kinetic, low-noise, 3D simulations of the interaction of the solar wind with the magnetospheres of the Earth and Mercury. Preliminary simulations have been performed in other HPC centers in order to compare the results in different systems. In this presentation we show the complexity of the plasma flow around the planets, including the development of hydrodynamic instabilities at the flanks, the presence of the collisionless shock, the magnetosheath, the magnetopause, reconnection zones, the formation of the plasma sheet and the magnetotail, and the variation of ion/electron plasma flows when crossing these frontiers. The simulations also give access to detailed information about the particle dynamics and their velocity distribution at locations that can be used for comparison with satellite data.
2011 Computation Directorate Annual Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crawford, D L
2012-04-11
From its founding in 1952 until today, Lawrence Livermore National Laboratory (LLNL) has made significant strategic investments to develop high performance computing (HPC) and its application to national security and basic science. Now, 60 years later, the Computation Directorate and its myriad resources and capabilities have become a key enabler for LLNL programs and an integral part of the effort to support our nation's nuclear deterrent and, more broadly, national security. In addition, the technological innovation HPC makes possible is seen as vital to the nation's economic vitality. LLNL, along with other national laboratories, is working to make supercomputing capabilities and expertise available to industry to boost the nation's global competitiveness. LLNL is on the brink of an exciting milestone with the 2012 deployment of Sequoia, the National Nuclear Security Administration's (NNSA's) 20-petaFLOP/s resource that will apply uncertainty quantification to weapons science. Sequoia will bring LLNL's total computing power to more than 23 petaFLOP/s-all brought to bear on basic science and national security needs. The computing systems at LLNL provide game-changing capabilities. Sequoia and other next-generation platforms will enable predictive simulation in the coming decade and leverage industry trends, such as massively parallel and multicore processors, to run petascale applications. Efficient petascale computing necessitates refining accuracy in materials property data, improving models for known physical processes, identifying and then modeling for missing physics, quantifying uncertainty, and enhancing the performance of complex models and algorithms in macroscale simulation codes. Nearly 15 years ago, NNSA's Accelerated Strategic Computing Initiative (ASCI), now called the Advanced Simulation and Computing (ASC) Program, was the critical element needed to shift from test-based confidence to science-based confidence. Specifically, ASCI/ASC accelerated the development of simulation capabilities necessary to ensure confidence in the nuclear stockpile-far exceeding what might have been achieved in the absence of a focused initiative. While stockpile stewardship research pushed LLNL scientists to develop new computer codes, better simulation methods, and improved visualization technologies, this work also stimulated the exploration of HPC applications beyond the standard sponsor base. As LLNL advances to a petascale platform and pursues exascale computing (1,000 times faster than Sequoia), ASC will be paramount to achieving predictive simulation and uncertainty quantification. Predictive simulation and quantifying the uncertainty of numerical predictions where little-to-no data exists demands exascale computing and represents an expanding area of scientific research important not only to nuclear weapons, but to nuclear attribution, nuclear reactor design, and understanding global climate issues, among other fields. Aside from these lofty goals and challenges, computing at LLNL is anything but 'business as usual.' International competition in supercomputing is nothing new, but the HPC community is now operating in an expanded, more aggressive climate of global competitiveness. More countries understand how science and technology research and development are inextricably linked to economic prosperity, and they are aggressively pursuing ways to integrate HPC technologies into their native industrial and consumer products.
In the interest of the nation's economic security and the science and technology that underpins it, LLNL is expanding its portfolio and forging new collaborations. We must ensure that HPC remains an asymmetric engine of innovation for the Laboratory and for the U.S. and, in doing so, protect our research and development dynamism and the prosperity it makes possible. One untapped area of opportunity LLNL is pursuing is to help U.S. industry understand how supercomputing can benefit their business. Industrial investment in HPC applications has historically been limited by the prohibitive cost of entry, the inaccessibility of software to run the powerful systems, and the years it takes to grow the expertise to develop codes and run them in an optimal way. LLNL is helping industry better compete in the global market place by providing access to some of the world's most powerful computing systems, the tools to run them, and the experts who are adept at using them. Our scientists are collaborating side by side with industrial partners to develop solutions to some of industry's toughest problems. The goal of the Livermore Valley Open Campus High Performance Computing Innovation Center is to allow American industry the opportunity to harness the power of supercomputing by leveraging the scientific and computational expertise at LLNL in order to gain a competitive advantage in the global economy.
Final Report for File System Support for Burst Buffers on HPC Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, W.; Mohror, K.
Distributed burst buffers are a promising storage architecture for handling I/O workloads for exascale computing. As they are being deployed on more supercomputers, a file system that efficiently manages these burst buffers for fast I/O operations carries great consequence. Over the past year, the FSU team has undertaken several efforts to design, prototype and evaluate distributed file systems for burst buffers on HPC systems. These include MetaKV: a Key-Value Store for Metadata Management of Distributed Burst Buffers, a user-level file system with multiple backends, and a specialized file system for large datasets of deep neural networks. Our progress on these respective efforts is elaborated further in this report.
Notredame, Cedric
2018-05-02
Cedric Notredame from the Centre for Genomic Regulation gives a presentation on New Challenges of the Computation of Multiple Sequence Alignments in the High-Throughput Era at the JGI/Argonne HPC Workshop on January 26, 2010.
KITTEN Lightweight Kernel 0.1 Beta
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pedretti, Kevin; Levenhagen, Michael; Kelly, Suzanne
2007-12-12
The Kitten Lightweight Kernel is a simplified OS (operating system) kernel that is intended to manage a compute node's hardware resources. It provides a set of mechanisms to user-level applications for utilizing hardware resources (e.g., allocating memory, creating processes, accessing the network). Kitten is much simpler than general-purpose OS kernels, such as Linux or Windows, but includes all of the essential functionality needed to support HPC (high-performance computing) MPI, PGAS and OpenMP applications. Kitten provides unique capabilities such as physically contiguous application memory, transparent large page support, and noise-free tick-less operation, which enable HPC applications to obtain greater efficiency and scalability than with general purpose OS kernels.
A Heterogeneous High-Performance System for Computational and Computer Science
2016-11-15
...team of research faculty from the departments of computer science and natural science at Bowie State University. The supercomputer is not only to...accelerated HPC systems. The supercomputer is also ideal for the research conducted in the Department of Natural Science, as research faculty work on
The OptIPuter microscopy demonstrator: enabling science through a transatlantic lightpath
Ellisman, M.; Hutton, T.; Kirkland, A.; Lin, A.; Lin, C.; Molina, T.; Peltier, S.; Singh, R.; Tang, K.; Trefethen, A.E.; Wallom, D.C.H.; Xiong, X.
2009-01-01
The OptIPuter microscopy demonstrator project has been designed to enable concurrent and remote usage of world-class electron microscopes located in Oxford and San Diego. The project has constructed a network consisting of microscopes and computational and data resources that are all connected by a dedicated network infrastructure using the UK Lightpath and US Starlight systems. Key science drivers include examples from both materials and biological science. The resulting system is now a permanent link between the Oxford and San Diego microscopy centres. This will form the basis of further projects between the sites and expansion of the types of systems that can be remotely controlled, including optical, as well as electron, microscopy. Other improvements will include the updating of the Microsoft cluster software to the high performance computing (HPC) server 2008, which includes the HPC basic profile implementation that will enable the development of interoperable clients. PMID:19487201
Toward Transparent Data Management in Multi-layer Storage Hierarchy for HPC Systems
Wadhwa, Bharti; Byna, Suren; Butt, Ali R.
2018-04-17
Upcoming exascale high performance computing (HPC) systems are expected to comprise multi-tier storage hierarchies, and thus will necessitate innovative storage and I/O mechanisms. Traditional disk and block-based interfaces and file systems face severe challenges in utilizing capabilities of storage hierarchies due to the lack of hierarchy support and semantic interfaces. Object-based and semantically-rich data abstractions for scientific data management on large scale systems offer a sustainable solution to these challenges. Such data abstractions can also simplify users' involvement in data movement. Here, we take the first steps of realizing such an object abstraction and explore storage mechanisms for these objects to enhance I/O performance, especially for scientific applications. We explore how an object-based interface can facilitate next generation scalable computing systems by presenting the mapping of data I/O from two real world HPC scientific use cases: a plasma physics simulation code (VPIC) and a cosmology simulation code (HACC). Our storage model stores data objects in different physical organizations to support data movement across layers of the memory/storage hierarchy. Our implementation scales well to 16K parallel processes, and compared to the state of the art, such as MPI-IO and HDF5, our object-based data abstractions and data placement strategy in a multi-level storage hierarchy achieve up to 7X I/O performance improvement for scientific data.
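As a rough illustration of the object abstraction the authors describe, the sketch below models named data objects whose placement moves across a memory/storage hierarchy on access. The tier names, promotion policy, and API are invented for illustration and are not the paper's actual interface.

```python
# Illustrative sketch (not the paper's API): an object store that places data
# objects on tiers of a storage hierarchy and promotes them on access.
from dataclasses import dataclass, field

TIER_ORDER = ["dram", "burst_buffer", "parallel_fs"]  # fastest to slowest


@dataclass
class DataObject:
    name: str
    payload: bytes
    tier: str = "parallel_fs"


@dataclass
class ObjectStore:
    objects: dict = field(default_factory=dict)

    def put(self, name, payload, tier="burst_buffer"):
        """Create an object on the requested tier (e.g., a burst buffer)."""
        self.objects[name] = DataObject(name, payload, tier)

    def get(self, name):
        """Read an object, promoting it one tier toward faster memory."""
        obj = self.objects[name]
        idx = TIER_ORDER.index(obj.tier)
        if idx > 0:
            obj.tier = TIER_ORDER[idx - 1]
        return obj.payload


store = ObjectStore()
store.put("timestep_0042", b"\x00" * 1024)   # e.g., one simulation output slice
_ = store.get("timestep_0042")               # promoted toward DRAM on reuse
print(store.objects["timestep_0042"].tier)   # -> "dram"
```

The point of the abstraction is that the application names objects and the store decides physical placement, which is what lets it move data across tiers without changes to application I/O code.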
Connecting Performance Analysis and Visualization to Advance Extreme Scale Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bremer, Peer-Timo; Mohr, Bernd; Schulz, Martin
2015-07-29
The characterization, modeling, analysis, and tuning of software performance has been a central topic in High Performance Computing (HPC) since its early beginnings. The overall goal is to make HPC software run faster on particular hardware, either through better scheduling, on-node resource utilization, or more efficient distributed communication.
An efficient framework for Java data processing systems in HPC environments
NASA Astrophysics Data System (ADS)
Fries, Aidan; Castañeda, Javier; Isasi, Yago; Taboada, Guillermo L.; Portell de Mora, Jordi; Sirvent, Raül
2011-11-01
Java is a commonly used programming language, although its use in High Performance Computing (HPC) remains relatively low. One of the reasons is a lack of libraries offering specific HPC functions to Java applications. In this paper we present a Java-based framework, called DpcbTools, designed to provide a set of functions that fill this gap. It includes a set of efficient data communication functions based on message-passing, thus providing, when a low latency network such as Myrinet is available, higher throughputs and lower latencies than standard solutions used by Java. DpcbTools also includes routines for the launching, monitoring and management of Java applications on several computing nodes by making use of JMX to communicate with remote Java VMs. The Gaia Data Processing and Analysis Consortium (DPAC) is a real case where scientific data from the ESA Gaia astrometric satellite will be entirely processed using Java. In this paper we describe the main elements of DPAC and its usage of the DpcbTools framework. We also assess the usefulness and performance of DpcbTools through its performance evaluation and the analysis of its impact on some DPAC systems deployed in the MareNostrum supercomputer (Barcelona Supercomputing Center).
2016-11-01
High Performance Computing (HPC)-Enabled Computational Study on the Feasibility of Using Shape Memory Alloys for Gas Turbine Blade Actuation, by Kathryn Esham, Luis Bravo, Anindya Ghoshal, Muthuvel Murugan, and Michael...
Exploiting opportunistic resources for ATLAS with ARC CE and the Event Service
NASA Astrophysics Data System (ADS)
Cameron, D.; Filipčič, A.; Guan, W.; Tsulaia, V.; Walker, R.; Wenaus, T.;
2017-10-01
With ever-greater computing needs and fixed budgets, big scientific experiments are turning to opportunistic resources as a means to add much-needed extra computing power. These resources can be very different in design from those that comprise the Grid computing of most experiments, therefore exploiting them requires a change in strategy for the experiment. They may be highly restrictive in what can be run or in connections to the outside world, or tolerate opportunistic usage only on condition that tasks may be terminated without warning. The Advanced Resource Connector Computing Element (ARC CE) with its nonintrusive architecture is designed to integrate resources such as High Performance Computing (HPC) systems into a computing Grid. The ATLAS experiment developed the ATLAS Event Service (AES) primarily to address the issue of jobs that can be terminated at any point when opportunistic computing capacity is needed by someone else. This paper describes the integration of these two systems in order to exploit opportunistic resources for ATLAS in a restrictive environment. In addition to the technical details, results from deployment of this solution in the SuperMUC HPC centre in Munich are shown.
Arc4nix: A cross-platform geospatial analytical library for cluster and cloud computing
NASA Astrophysics Data System (ADS)
Tang, Jingyin; Matyas, Corene J.
2018-02-01
Big Data in geospatial technology is a grand challenge for processing capacity. The ability to use a GIS for geospatial analysis on Cloud Computing and High Performance Computing (HPC) clusters has emerged as a new approach to provide feasible solutions. However, users lack the ability to migrate existing research tools to a Cloud Computing or HPC-based environment because of the incompatibility of the market-dominating ArcGIS software stack and the Linux operating system. This manuscript details a cross-platform geospatial library "arc4nix" to bridge this gap. Arc4nix provides an application programming interface compatible with ArcGIS and its Python library "arcpy". Arc4nix uses a decoupled client-server architecture that permits geospatial analytical functions to run on a remote server and other functions to run in the native Python environment. It uses functional programming and metaprogramming to dynamically construct Python code containing the actual geospatial calculations, send it to a server, and retrieve the results. Arc4nix allows users to employ their arcpy-based scripts in a Cloud Computing and HPC environment with minimal or no modification. It also supports parallelizing tasks using multiple CPU cores and nodes for large-scale analyses. A case study of geospatial processing of a numerical weather model's output shows that arcpy scales linearly in a distributed environment. Arc4nix is open-source software.
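A plausible reading of arc4nix's decoupled design is that the client builds an arcpy script as text and hands it to a remote process that has ArcGIS installed, keeping the cluster-side Python environment free of the ArcGIS stack. The sketch below illustrates that pattern; the ssh transport, server name, geoprocessing call, and result handling are assumptions for illustration, not arc4nix's real interface.

```python
# Rough sketch of the decoupled client-server pattern described for arc4nix:
# the client constructs an arcpy-based script as text and ships it to a remote
# Python process that has ArcGIS available. The ssh transport, server name,
# and Slope example are illustrative assumptions, not arc4nix's API.
import json
import subprocess


def run_remote_slope(in_raster, out_raster, server_cmd=("ssh", "gis-server", "python")):
    # Metaprogramming step: generate the geoprocessing code as a string.
    script = "\n".join([
        "import arcpy",
        'arcpy.CheckOutExtension("Spatial")',
        f"result = arcpy.sa.Slope({json.dumps(in_raster)})",
        f"result.save({json.dumps(out_raster)})",
        f"print({json.dumps(out_raster)})",
    ])
    # The client never imports arcpy; only the server needs the ArcGIS stack.
    proc = subprocess.run(list(server_cmd), input=script, text=True,
                          capture_output=True, check=True)
    return proc.stdout.strip().splitlines()[-1]   # path of the produced raster


# Hypothetical usage from a cluster login node:
# slope_path = run_remote_slope("C:/data/dem.tif", "C:/data/slope.tif")
```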
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duro, Francisco Rodrigo; Blas, Javier Garcia; Isaila, Florin
The increasing volume of scientific data and the limited scalability and performance of storage systems are currently presenting a significant limitation for the productivity of the scientific workflows running on both high-performance computing (HPC) and cloud platforms. Clearly needed is better integration of storage systems and workflow engines to address this problem. This paper presents and evaluates a novel solution that leverages codesign principles for integrating Hercules—an in-memory data store—with a workflow management system. We consider four main aspects: workflow representation, task scheduling, task placement, and task termination. As a result, the experimental evaluation on both cloud and HPC systems demonstrates significant performance and scalability improvements over existing state-of-the-art approaches.
76 FR 64330 - Advanced Scientific Computing Advisory Committee
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-18
... talks on HPC Reliability, Diffusion on Complex Networks, and Reversible Software Execution Systems Report from Applied Math Workshop on Mathematics for the Analysis, Simulation, and Optimization of Complex Systems Report from ASCR-BES Workshop on Data Challenges from Next Generation Facilities Public...
Using Rollback Avoidance to Mitigate Failures in Next-Generation Extreme-Scale Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Levy, Scott N.
2016-05-01
High-performance computing (HPC) systems enable scientists to numerically model complex phenomena in many important physical systems. The next major milestone in the development of HPC systems is the construction of the first supercomputer capable of executing more than an exaflop (10^18 floating-point operations per second). On systems of this scale, failures will occur much more frequently than on current systems. As a result, resilience is a key obstacle to building next-generation extreme-scale systems. Coordinated checkpointing is currently the most widely-used mechanism for handling failures on HPC systems. Although coordinated checkpointing remains effective on current systems, increasing the scale of today's systems to build next-generation systems will increase the cost of fault tolerance as more and more time is taken away from the application to protect against or recover from failure. Rollback avoidance techniques seek to mitigate the cost of checkpoint/restart by allowing an application to continue its execution rather than rolling back to an earlier checkpoint when failures occur. These techniques include failure prediction and preventive migration, replicated computation, fault-tolerant algorithms, and software-based memory fault correction. In this thesis, we examine how rollback avoidance techniques can be used to address failures on extreme-scale systems. Using a combination of analytic modeling and simulation, we evaluate the potential impact of rollback avoidance on these systems. We then present a novel rollback avoidance technique that exploits similarities in application memory. Finally, we examine the feasibility of using this technique to protect against memory faults in kernel memory.
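For context on the checkpoint/restart baseline that rollback avoidance tries to improve on, the sketch below evaluates the widely used Young first-order model for the optimal checkpoint interval, sqrt(2*C*M) for checkpoint cost C and system MTBF M. The formula is a standard result from the checkpointing literature, and the MTBF and cost numbers are illustrative rather than figures from the thesis.

```python
# The standard first-order (Young-style) checkpoint/restart cost model that
# rollback avoidance aims to improve on. Numbers are illustrative only.
import math


def optimal_checkpoint_interval(checkpoint_cost_s, mtbf_s):
    """First-order approximation of the optimal checkpoint interval."""
    return math.sqrt(2.0 * checkpoint_cost_s * mtbf_s)


def wasted_fraction(interval_s, checkpoint_cost_s, mtbf_s):
    """Rough fraction of time lost to checkpoints plus expected rework."""
    checkpoint_overhead = checkpoint_cost_s / interval_s
    expected_rework = (interval_s / 2.0 + checkpoint_cost_s) / mtbf_s
    return checkpoint_overhead + expected_rework


mtbf = 4 * 3600.0            # assumed 4-hour system MTBF (shrinks as systems grow)
cost = 300.0                 # assumed 5-minute checkpoint
tau = optimal_checkpoint_interval(cost, mtbf)
print(f"optimal interval: {tau/60:.1f} min, "
      f"overhead: {wasted_fraction(tau, cost, mtbf):.1%}")
```

Plugging in a shorter MTBF, as expected at exascale, drives the wasted fraction up sharply, which is the motivation for the rollback avoidance techniques the thesis studies.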
NASA Astrophysics Data System (ADS)
Evans, B. J. K.; Pugh, T.; Wyborn, L. A.; Porter, D.; Allen, C.; Smillie, J.; Antony, J.; Trenham, C.; Evans, B. J.; Beckett, D.; Erwin, T.; King, E.; Hodge, J.; Woodcock, R.; Fraser, R.; Lescinsky, D. T.
2014-12-01
The National Computational Infrastructure (NCI) has co-located a priority set of national data assets within a HPC research platform. This powerful in-situ computational platform has been created to help serve and analyse the massive amounts of data across the spectrum of environmental collections - in particular the climate, observational data and geoscientific domains. This paper examines the infrastructure, innovation and opportunity for this significant research platform. NCI currently manages nationally significant data collections (10+ PB) categorised as 1) earth system sciences, climate and weather model data assets and products, 2) earth and marine observations and products, 3) geosciences, 4) terrestrial ecosystem, 5) water management and hydrology, and 6) astronomy, social science and biosciences. The data is largely sourced from the NCI partners (who include the custodians of many of the national scientific records), major research communities, and collaborating overseas organisations. By co-locating these large valuable data assets, new opportunities have arisen by harmonising the data collections, making a powerful transdisciplinary research platform. The data is accessible within an integrated HPC-HPD environment - a 1.2 PFlop supercomputer (Raijin), a HPC class 3000 core OpenStack cloud system and several highly connected large scale and high-bandwidth Lustre filesystems. New scientific software, cloud-scale techniques, server-side visualisation and data services have been harnessed and integrated into the platform, so that analysis is performed seamlessly across the traditional boundaries of the underlying data domains. Characterisation of the techniques along with performance profiling ensures scalability of each software component, all of which can either be enhanced or replaced through future improvements. A Development-to-Operations (DevOps) framework has also been implemented to manage the scale of the software complexity alone. This ensures that software is both upgradable and maintainable, and can be readily reused with complexly integrated systems and become part of the growing global trusted community tools for cross-disciplinary research.
NASA Astrophysics Data System (ADS)
Evans, B. J. K.; Foster, C.; Minchin, S. A.; Pugh, T.; Lewis, A.; Wyborn, L. A.; Evans, B. J.; Uhlherr, A.
2014-12-01
The National Computational Infrastructure (NCI) has established a powerful in-situ computational environment to enable both high performance computing and data-intensive science across a wide spectrum of national environmental data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that supports the modelling and data processing pipelines, 2) the analysis environments and methods to support data analysis, and 3) the progress in addressing harmonisation of the underlying data collections for future transdisciplinary research that enable accurate climate projections. NCI makes available 10+ PB major data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the national scientific records), major research communities, and collaborating overseas organisations. The data is accessible within an integrated HPC-HPD environment - a 1.2 PFlop supercomputer (Raijin), a HPC class 3000 core OpenStack cloud system and several highly connected large scale and high-bandwidth Lustre filesystems. This computational environment supports a catalogue of integrated reusable software and workflows from earth system and ecosystem modelling, weather research, satellite and other observed data processing and analysis. To enable transdisciplinary research on this scale, data needs to be harmonised so that researchers can readily apply techniques and software across the corpus of data available and not be constrained to work within artificial disciplinary boundaries. Future challenges will involve the further integration and analysis of this data across the social sciences to facilitate the impacts across the societal domain, including timely analysis to more accurately predict and forecast future climate and environmental state.
Expanding HPC and Research Computing--The Sustainable Way
ERIC Educational Resources Information Center
Grush, Mary
2009-01-01
Increased demands for research and high-performance computing (HPC)--along with growing expectations for cost and environmental savings--are putting new strains on the campus data center. More and more, CIOs like the University of Notre Dame's (Indiana) Gordon Wishon are seeking creative ways to build more sustainable models for data center and…
Business Models of High Performance Computing Centres in Higher Education in Europe
ERIC Educational Resources Information Center
Eurich, Markus; Calleja, Paul; Boutellier, Roman
2013-01-01
High performance computing (HPC) service centres are a vital part of the academic infrastructure of higher education organisations. However, despite their importance for research and the necessary high capital expenditures, business research on HPC service centres is mostly missing. From a business perspective, it is important to find an answer to…
Computational Aspects of Data Assimilation and the ESMF
NASA Technical Reports Server (NTRS)
daSilva, A.
2003-01-01
Developing advanced data assimilation applications is a daunting scientific challenge. Independently developed components may have incompatible interfaces or may be written in different computer languages. The high-performance computer (HPC) platforms required by numerically intensive Earth system applications are complex, varied, rapidly evolving and multi-part systems themselves. Since the market for high-end platforms is relatively small, there is little robust middleware available to buffer the modeler from the difficulties of HPC programming. To complicate matters further, the collaborations required to develop large Earth system applications often span initiatives, institutions and agencies, involve geoscience, software engineering, and computer science communities, and cross national borders. The Earth System Modeling Framework (ESMF) project is a concerted response to these challenges. Its goal is to increase software reuse, interoperability, ease of use and performance in Earth system models through the use of a common software framework, developed in an open manner by leaders in the modeling community. The ESMF addresses the technical, and to some extent the cultural, aspects of Earth system modeling, laying the groundwork for addressing the more difficult scientific aspects, such as the physical compatibility of components, in the future. In this talk we will discuss the general philosophy and architecture of the ESMF, focussing on those capabilities useful for developing advanced data assimilation applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joseph, Earl C.; Conway, Steve; Dekate, Chirag
This study investigated how high-performance computing (HPC) investments can improve economic success and increase scientific innovation. The research focused on the common good and provided results for use by DOE, other government agencies, industry, and academia. The study created two unique economic models and an innovation index: (1) a macroeconomic model that depicts the way HPC investments result in economic advancements in the form of ROI in revenue (GDP), profits (and cost savings), and jobs; (2) a macroeconomic model that depicts the way HPC investments result in basic and applied innovations, looking at variations by sector, industry, country, and organization size; and a new innovation index that provides a means of measuring and comparing innovation levels. Key findings of the pilot study include: IDC collected the required data across a broad set of organizations, with enough detail to create these models and the innovation index; the research also developed an expansive list of HPC success stories.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gentile, Ann C.; Brandt, James M.; Tucker, Thomas
2011-09-01
This report documents the completion of the Sandia Level II milestone 'Develop feedback system for intelligent dynamic resource allocation to improve application performance'. This milestone demonstrates the use of a scalable data collection, analysis, and feedback system that enables lightweight insight into how an application is utilizing the hardware resources of a high performance computing (HPC) platform. Further, we demonstrate use of the same mechanisms used for transporting data for remote analysis and visualization to provide low-latency run-time feedback to applications. The ultimate goal of this body of work is performance optimization in the face of the ever increasing size and complexity of HPC systems.
Data Provenance Hybridization Supporting Extreme-Scale Scientific WorkflowApplications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elsethagen, Todd O.; Stephan, Eric G.; Raju, Bibi
As high performance computing (HPC) infrastructures continue to grow in capability and complexity, so do the applications that they serve. HPC and distributed-area computing (DAC) (e.g. grid and cloud) users are looking increasingly toward workflow solutions to orchestrate their complex application coupling and pre- and post-processing needs. To gain insight and a more quantitative understanding of a workflow’s performance, our method includes not only the capture of traditional provenance information, but also the capture and integration of system environment metrics, helping to give context and explanation for a workflow’s execution. In this paper, we describe IPPD’s provenance management solution (ProvEn) and its hybrid data store combining both of these data provenance perspectives.
Large Scale GW Calculations on the Cori System
NASA Astrophysics Data System (ADS)
Deslippe, Jack; Del Ben, Mauro; da Jornada, Felipe; Canning, Andrew; Louie, Steven
The NERSC Cori system, powered by 9000+ Intel Xeon-Phi processors, represents one of the largest HPC systems for open-science in the United States and the world. We discuss the optimization of the GW methodology for this system, including both node level and system-scale optimizations. We highlight multiple large scale (thousands of atoms) case studies and discuss both absolute application performance and comparison to calculations on more traditional HPC architectures. We find that the GW method is particularly well suited for many-core architectures due to the ability to exploit a large amount of parallelism across many layers of the system. This work was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division, as part of the Computational Materials Sciences Program.
The HARNESS Workbench: Unified and Adaptive Access to Diverse HPC Platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sunderam, Vaidy S.
2012-03-20
The primary goal of the Harness WorkBench (HWB) project is to investigate innovative software environments that will help enhance the overall productivity of applications science on diverse HPC platforms. Two complementary frameworks were designed: one, a virtualized command toolkit for application building, deployment, and execution, that provides a common view across diverse HPC systems, in particular the DOE leadership computing platforms (Cray, IBM, SGI, and clusters); and two, a unified runtime environment that consolidates access to runtime services via an adaptive framework for execution-time and post-processing activities. A prototype of the first was developed based on the concept of a 'system-call virtual machine' (SCVM), to enhance portability of the HPC application deployment process across heterogeneous high-end machines. The SCVM approach to portable builds is based on the insertion of toolkit-interpretable directives into original application build scripts. Modifications resulting from these directives preserve the semantics of the original build instruction flow. The execution of the build script is controlled by our toolkit that intercepts build script commands in a manner transparent to the end-user. We have applied this approach to a scientific production code (Gamess-US) on the Cray-XT5 machine. The second facet, termed Unibus, aims to facilitate provisioning and aggregation of multifaceted resources from resource providers' and end-users' perspectives. To achieve that, Unibus proposes a Capability Model and mediators (resource drivers) to virtualize access to diverse resources, and soft and successive conditioning to enable automatic and user-transparent resource provisioning. A proof of concept implementation has demonstrated the viability of this approach on high end machines, grid systems and computing clouds.
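A toy version of the SCVM idea, a wrapper that intercepts build-script commands and substitutes platform-appropriate equivalents while preserving the build's semantics, might look like the sketch below. The rewrite table and platform names are made-up examples rather than the HARNESS toolkit's actual directive set.

```python
# Toy illustration of the 'system-call virtual machine' idea: intercept build
# commands and rewrite them for the target platform before executing them.
# The command table is an invented example, not the HARNESS toolkit's.
import shlex
import subprocess

# Hypothetical per-platform rewrites (e.g., Cray compiler wrappers on an XT5).
REWRITES = {
    "cray-xt5": {"mpicc": "cc", "mpif90": "ftn"},
    "generic-cluster": {},
}


def run_build_command(command, platform="cray-xt5"):
    argv = shlex.split(command)
    table = REWRITES.get(platform, {})
    argv[0] = table.get(argv[0], argv[0])   # transparent substitution
    print("executing:", " ".join(argv))
    return subprocess.run(argv, check=False).returncode


# A build-script line like "mpicc -O2 -c gamess.c" becomes "cc -O2 -c gamess.c"
# on the Cray target, while remaining unchanged on a generic cluster:
# run_build_command("mpicc -O2 -c gamess.c", platform="cray-xt5")
```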
Running R Statistical Computing Environment Software on the Peregrine
R is a collaborative project with a large user base, widely used for the development of new statistical methodologies. The CRAN task view for High Performance Computing covers packages and programming paradigms to better leverage modern HPC systems.
User Accounts | High-Performance Computing | NREL
Learn how to request an NREL HPC user account. To request an HPC account, please complete our request form; the form is provided using DocuSign. Also see information on user account policies and account passwords, including logging in for the first time and what to do if you forget your password.
NASA Technical Reports Server (NTRS)
Son, Chang H.
2012-01-01
The Human Powered Centrifuge (HPC) is a facility that is planned to be installed on board the International Space Station (ISS) to enable crew exercise under artificial gravity conditions. The HPC equipment includes a "bicycle" for long-term crew exercise that provides power for rotation of the HPC at a speed of 30 rpm. A crewmember exercising vigorously on the centrifuge generates about two times more carbon dioxide than a crewmember under ordinary conditions. The goal of the study is to analyze the airflow and carbon dioxide distribution within the Pressurized Multipurpose Module (PMM) cabin when the HPC is operating. A fully unsteady formulation is used for CFD-based modeling of airflow and CO2 transport with the so-called sliding mesh concept, in which the HPC equipment and the adjacent Bay 4 cabin volume are considered in a rotating reference frame while the rest of the cabin volume is considered in a stationary reference frame. The rotating part of the computational domain also includes a human body model. Localized effects of carbon dioxide dispersion are examined, and the strong influence of the rotating HPC equipment on the detected CO2 distribution is discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mayo, Jackson R.; Chen, Frank Xiaoxiao; Pebay, Philippe Pierre
2010-06-01
Effective failure prediction and mitigation strategies in high-performance computing systems could provide huge gains in resilience of tightly coupled large-scale scientific codes. These gains would come from prediction-directed process migration and resource servicing, intelligent resource allocation, and checkpointing driven by failure predictors rather than at regular intervals based on nominal mean time to failure. Given probabilistic associations of outlier behavior in hardware-related metrics with eventual failure in hardware, system software, and/or applications, this paper explores approaches for quantifying the effects of prediction and mitigation strategies and demonstrates these using actual production system data. We describe context-relevant methodologies for determining the accuracy and cost-benefit of predictors. While many research studies have quantified the expected impact of growing system size, and the associated shortened mean time to failure (MTTF), on application performance in large-scale high-performance computing (HPC) platforms, there has been little if any work to quantify the possible gains from predicting system resource failures with significant but imperfect accuracy. This possibly stems from HPC system complexity and the fact that, to date, no one has established any good predictors of failure in these systems. Our work in the OVIS project aims to discover these predictors via a variety of data collection techniques and statistical analysis methods that yield probabilistic predictions. The question then is, 'How good or useful are these predictions?' We investigate methods for answering this question in a general setting, and illustrate them using a specific failure predictor discovered on a production system at Sandia.
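One concrete way to ask "how good or useful are these predictions?" is to score a predictor's precision and recall against observed failures and then weigh correct predictions against false alarms. The sketch below does that with invented events and cost figures; it is an illustration of the general idea, not the OVIS project's methodology verbatim.

```python
# Sketch of scoring a failure predictor and its cost-benefit: precision/recall
# from predicted vs. observed failures, plus expected compute-hours saved by
# acting on predictions. Event lists and cost figures are invented.
def score_predictor(predicted, observed, lead_time_value_h=2.0,
                    false_alarm_cost_h=0.5):
    predicted, observed = set(predicted), set(observed)
    tp = len(predicted & observed)
    fp = len(predicted - observed)
    fn = len(observed - predicted)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    # Benefit: hours of lost work avoided per correctly predicted failure,
    # minus the cost of unnecessary migrations or service actions.
    net_benefit_h = tp * lead_time_value_h - fp * false_alarm_cost_h
    return precision, recall, net_benefit_h


# Hypothetical node-failure events (node IDs) over some observation window.
predicted_failures = {"n012", "n077", "n101", "n205"}
observed_failures = {"n012", "n101", "n330"}
print(score_predictor(predicted_failures, observed_failures))
```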
Providing a parallel and distributed capability for JMASS using SPEEDES
NASA Astrophysics Data System (ADS)
Valinski, Maria; Driscoll, Jonathan; McGraw, Robert M.; Meyer, Bob
2002-07-01
The Joint Modeling And Simulation System (JMASS) is a Tri-Service simulation environment that supports engineering and engagement-level simulations. As JMASS is expanded to support other Tri-Service domains, the current set of modeling services must be expanded for High Performance Computing (HPC) applications by adding support for advanced time-management algorithms, parallel and distributed topologies, and high speed communications. By providing support for these services, JMASS can better address modeling domains requiring parallel, computationally intense calculations, such as clutter, vulnerability and lethality calculations, and underwater-based scenarios. A risk reduction effort implementing some HPC services for JMASS using the SPEEDES (Synchronous Parallel Environment for Emulation and Discrete Event Simulation) Simulation Framework has recently concluded. As an artifact of the JMASS-SPEEDES integration, not only can HPC functionality be brought to the JMASS program through SPEEDES, but an additional HLA-based capability can be demonstrated that further addresses interoperability issues. The JMASS-SPEEDES integration provided a means of adding HLA capability to preexisting JMASS scenarios through an implementation of the standard JMASS port communication mechanism that allows players to communicate.
Pope, Bernard J; Fitch, Blake G; Pitman, Michael C; Rice, John J; Reumann, Matthias
2011-10-01
Future multiscale and multiphysics models that support research into human disease, translational medical science, and treatment can utilize the power of high-performance computing (HPC) systems. We anticipate that computationally efficient multiscale models will require the use of sophisticated hybrid programming models, mixing distributed message-passing processes [e.g., the message-passing interface (MPI)] with multithreading (e.g., OpenMP, Pthreads). The objective of this study is to compare the performance of such hybrid programming models when applied to the simulation of a realistic physiological multiscale model of the heart. Our results show that the hybrid models perform favorably when compared to an implementation using only the MPI and, furthermore, that OpenMP in combination with the MPI provides a satisfactory compromise between performance and code complexity. Having the ability to use threads within MPI processes enables the sophisticated use of all processor cores for both computation and communication phases. Considering that HPC systems in 2012 will have two orders of magnitude more cores than what was used in this study, we believe that faster than real-time multiscale cardiac simulations can be achieved on these systems.
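The hybrid pattern the authors describe (message passing between processes, threads within each process) can be sketched in Python with mpi4py standing in for raw MPI and a thread pool standing in for OpenMP worksharing; the kernel and the per-rank thread count are assumptions for illustration, not the cardiac model itself.

```python
# Minimal sketch of the hybrid distributed + threaded pattern described above,
# using mpi4py in place of raw MPI and a thread pool in place of OpenMP.
# The "kernel" is a stand-in for one subdomain update of a multiscale model.
from concurrent.futures import ThreadPoolExecutor
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, nranks = comm.Get_rank(), comm.Get_size()

def kernel(chunk):
    # NumPy releases the GIL inside the ufunc loop, so threads genuinely
    # overlap, loosely mirroring OpenMP worksharing inside an MPI process.
    return np.tanh(chunk).sum()

# Each rank owns a slab of the global state and splits it across local threads.
local_state = np.random.default_rng(rank).random(1_000_000)
chunks = np.array_split(local_state, 4)            # 4 threads per rank (assumed)
with ThreadPoolExecutor(max_workers=4) as pool:
    local_sum = sum(pool.map(kernel, chunks))

# Communication phase: combine per-rank results, as MPI_Allreduce would.
global_sum = comm.allreduce(local_sum, op=MPI.SUM)
if rank == 0:
    print(f"{nranks} ranks x 4 threads -> global sum {global_sum:.3f}")
```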
The impact of the U.S. supercomputing initiative will be global
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crawford, Dona
2016-01-15
Last July, President Obama issued an executive order that created a coordinated federal strategy for HPC research, development, and deployment called the U.S. National Strategic Computing Initiative (NSCI). This bold, necessary step toward building the next generation of supercomputers has inaugurated a new era for U.S. high performance computing (HPC).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aderholdt, Ferrol; Caldwell, Blake A.; Hicks, Susan Elaine
High performance computing environments are often used for a wide variety of workloads, ranging from simulation, data transformation and analysis, and complex workflows, to name just a few. These systems may process data at various security levels but in so doing are often enclaved at the highest security posture. This approach places significant restrictions on the users of the system, even when processing data at a lower security level, and exposes data at higher levels of confidentiality to a much broader population than otherwise necessary. The traditional approach of isolation, while effective in establishing security enclaves, poses significant challenges for the use of shared infrastructure in HPC environments. This report details the current state of the art in virtualization, reconfigurable network enclaving via Software Defined Networking (SDN), and storage architectures and bridging techniques for creating secure enclaves in HPC environments.
NASA Astrophysics Data System (ADS)
Evans, Ben; Allen, Chris; Antony, Joseph; Bastrakova, Irina; Gohar, Kashif; Porter, David; Pugh, Tim; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley
2015-04-01
The National Computational Infrastructure (NCI) has established a powerful and flexible in-situ petascale computational environment to enable both high performance computing and Data-intensive Science across a wide spectrum of national environmental and earth science data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that supports the modelling and data processing pipelines, 2) the analysis environments and methods to support data analysis, and 3) the progress so far to harmonise the underlying data collections for future interdisciplinary research across these large volume data collections. NCI has established 10+ PBytes of major national and international data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the major Australian national-scale scientific collections), leading research communities, and collaborating overseas organisations. New infrastructures created at NCI mean the data collections are now accessible within an integrated High Performance Computing and Data (HPC-HPD) environment - a 1.2 PFlop supercomputer (Raijin), a HPC class 3000 core OpenStack cloud system and several highly connected large-scale high-bandwidth Lustre filesystems. The hardware was designed at inception to ensure that it would allow the layered software environment to flexibly accommodate the advancement of future data science. New approaches to software technology and data models have also had to be developed to enable access to these large and exponentially increasing data volumes at NCI. Traditional HPC and data environments are still made available in a way that flexibly provides the tools, services and supporting software systems on these new petascale infrastructures. But to enable the research to take place at this scale, the data, metadata and software now need to evolve together - creating a new integrated high performance infrastructure. The new infrastructure at NCI currently supports a catalogue of integrated, reusable software and workflows from earth system and ecosystem modelling, weather research, satellite and other observed data processing and analysis. One of the challenges for NCI has been to support existing techniques and methods, while carefully preparing the underlying infrastructure for the transition needed for the next class of Data-intensive Science. In doing so, a flexible range of techniques and software can be made available for application across the corpus of data collections available, and to provide a new infrastructure for future interdisciplinary research.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Willis, D. K.
2016-12-01
High performance computing (HPC) has been a defining strength of Lawrence Livermore National Laboratory (LLNL) since its founding. Livermore scientists have designed and used some of the world’s most powerful computers to drive breakthroughs in nearly every mission area. Today, the Laboratory is recognized as a world leader in the application of HPC to complex science, technology, and engineering challenges. Most importantly, HPC has been integral to the National Nuclear Security Administration’s (NNSA’s) Stockpile Stewardship Program—designed to ensure the safety, security, and reliability of our nuclear deterrent without nuclear testing. A critical factor behind Lawrence Livermore’s preeminence in HPC is the ongoing investments made by the Laboratory Directed Research and Development (LDRD) Program in cutting-edge concepts to enable efficient utilization of these powerful machines. Congress established the LDRD Program in 1991 to maintain the technical vitality of the Department of Energy (DOE) national laboratories. Since then, LDRD has been, and continues to be, an essential tool for exploring anticipated needs that lie beyond the planning horizon of our programs and for attracting the next generation of talented visionaries. Through LDRD, Livermore researchers can examine future challenges, propose and explore innovative solutions, and deliver creative approaches to support our missions. The present scientific and technical strengths of the Laboratory are, in large part, a product of past LDRD investments in HPC. Here, we provide seven examples of LDRD projects from the past decade that have played a critical role in building LLNL’s HPC, computer science, mathematics, and data science research capabilities, and describe how they have impacted LLNL’s mission.
ERIC Educational Resources Information Center
O'Hanlon, Charlene
2007-01-01
Traditionally, the high-performance computing (HPC) systems used to conduct research at universities have amounted to silos of technology scattered across the campus and falling under the purview of the researchers themselves. This article reports that a growing number of universities are now taking over the management of those systems and…
Levy, Scott; Ferreira, Kurt B.; Bridges, Patrick G.; ...
2014-12-09
Building the next generation of extreme-scale distributed systems will require overcoming several challenges related to system resilience. As the number of processors in these systems grows, the failure rate increases proportionally. One of the most common sources of failure in large-scale systems is memory. In this paper, we propose a novel runtime for transparently exploiting memory content similarity to improve system resilience by reducing the rate at which memory errors lead to node failure. We evaluate the viability of this approach by examining memory snapshots collected from eight high-performance computing (HPC) applications and two important HPC operating systems. Based on the characteristics of the similarity uncovered, we conclude that our proposed approach shows promise for addressing system resilience in large-scale systems.
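The core measurement behind this idea, how much of a memory snapshot consists of pages whose content also occurs elsewhere, can be approximated with a simple hash-based sketch; the page size and the synthetic snapshot below are assumptions, and this is not the authors' runtime.

```python
# Illustrative sketch (not the authors' runtime): estimate memory content
# similarity by hashing fixed-size "pages" of a snapshot and counting
# duplicates. Duplicate pages are candidates for replication-based repair.
import hashlib
from collections import Counter

PAGE = 4096  # bytes, a typical page size

def page_hashes(snapshot: bytes):
    return [hashlib.sha1(snapshot[i:i + PAGE]).hexdigest()
            for i in range(0, len(snapshot), PAGE)]

def similarity(snapshot: bytes) -> float:
    """Fraction of pages whose content also appears in at least one other page."""
    counts = Counter(page_hashes(snapshot))
    total = sum(counts.values())
    duplicated = sum(c for c in counts.values() if c > 1)
    return duplicated / total if total else 0.0

if __name__ == "__main__":
    # Synthetic snapshot: 64 zero pages plus 10 pages of a repeating pattern.
    fake = b"\x00" * (64 * PAGE) + bytes(range(256)) * 16 * 10
    print(f"similar pages: {similarity(fake):.1%}")
```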
BOWS (bioinformatics open web services) to centralize bioinformatics tools in web services.
Velloso, Henrique; Vialle, Ricardo A; Ortega, J Miguel
2015-06-02
Bioinformaticians face a range of difficulties getting locally-installed tools running and producing results; they would greatly benefit from a system that could centralize most of the tools, using an easy interface for input and output. Web services, due to their universal nature and widely known interface, constitute a very good option to achieve this goal. Bioinformatics open web services (BOWS) is a system based on generic web services produced to allow programmatic access to applications running on high-performance computing (HPC) clusters. BOWS intermediates the access to registered tools by providing front-end and back-end web services. Programmers can install applications in HPC clusters in any programming language and use the back-end service to check for new jobs and their parameters, and then to send the results to BOWS. Programs running on simple computers consume the BOWS front-end service to submit new processes and read results. BOWS compiles Java clients, which encapsulate the front-end web service requests, and automatically creates a web page that exposes the registered applications and clients. Registered applications can be accessed from virtually any programming language through web services, or using the standard Java clients. The back-end can run in HPC clusters, allowing bioinformaticians to remotely run high-processing-demand applications directly from their machines.
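The interaction pattern described (a front-end service for clients, a back-end service polled by the HPC side) might look roughly like the following sketch; the endpoint paths, field names, and URLs are invented placeholders and are not the actual BOWS API.

```python
# Hypothetical sketch of the BOWS-style interaction described above; the
# endpoint paths and JSON fields are invented for illustration and are NOT
# the real BOWS API. Requires the 'requests' package.
import time
import requests

FRONT = "https://bows.example.org/front"   # placeholder URLs
BACK = "https://bows.example.org/back"

def submit_job(tool: str, params: dict) -> str:
    """Client side: ask the front-end service to queue a job for a registered tool."""
    r = requests.post(f"{FRONT}/jobs", json={"tool": tool, "params": params})
    r.raise_for_status()
    return r.json()["job_id"]

def worker_loop():
    """HPC side: poll the back-end service for new jobs and post results back."""
    while True:
        job = requests.get(f"{BACK}/jobs/next").json()
        if job:
            result = f"ran {job['tool']} with {job['params']}"   # run the real tool here
            requests.post(f"{BACK}/jobs/{job['id']}/result", json={"output": result})
        time.sleep(10)
```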
A Framework for Debugging Geoscience Projects in a High Performance Computing Environment
NASA Astrophysics Data System (ADS)
Baxter, C.; Matott, L.
2012-12-01
High performance computing (HPC) infrastructure has become ubiquitous in today's world with the emergence of commercial cloud computing and academic supercomputing centers. Teams of geoscientists, hydrologists and engineers can take advantage of this infrastructure to undertake large research projects - for example, linking one or more site-specific environmental models with soft computing algorithms, such as heuristic global search procedures, to perform parameter estimation and predictive uncertainty analysis, and/or design least-cost remediation systems. However, the size, complexity and distributed nature of these projects can make identifying failures in the associated numerical experiments using conventional ad-hoc approaches both time-consuming and ineffective. To address these problems a multi-tiered debugging framework has been developed. The framework allows for quickly isolating and remedying a number of potential experimental failures, including: failures in the HPC scheduler; bugs in the soft computing code; bugs in the modeling code; and permissions and access control errors. The utility of the framework is demonstrated via application to a series of over 200,000 numerical experiments involving a suite of 5 heuristic global search algorithms and 15 mathematical test functions serving as cheap analogues for the simulation-based optimization of pump-and-treat subsurface remediation systems.
2012-01-01
Background Bioinformatics services have been traditionally provided in the form of a web-server that is hosted at institutional infrastructure and serves multiple users. This model, however, is not flexible enough to cope with the increasing number of users, increasing data size, and new requirements in terms of speed and availability of service. The advent of cloud computing suggests a new service model that provides an efficient solution to these problems, based on the concepts of "resources-on-demand" and "pay-as-you-go". However, cloud computing has not yet been introduced within bioinformatics servers due to the lack of usage scenarios and software layers that address the requirements of the bioinformatics domain. Results In this paper, we provide different use case scenarios for providing cloud computing based services, considering both the technical and financial aspects of the cloud computing service model. These scenarios are for individual users seeking computational power as well as bioinformatics service providers aiming at provision of personalized bioinformatics services to their users. We also present elasticHPC, a software package and a library that facilitates the use of high performance cloud computing resources in general and the implementation of the suggested bioinformatics scenarios in particular. Concrete examples that demonstrate the suggested use case scenarios with whole bioinformatics servers and major sequence analysis tools like BLAST are presented. Experimental results with large datasets are also included to show the advantages of the cloud model. Conclusions Our use case scenarios and the elasticHPC package are steps towards the provision of cloud based bioinformatics services, which would help in overcoming the data challenge of recent biological research. All resources related to elasticHPC and its web-interface are available at http://www.elasticHPC.org. PMID:23281941
El-Kalioby, Mohamed; Abouelhoda, Mohamed; Krüger, Jan; Giegerich, Robert; Sczyrba, Alexander; Wall, Dennis P; Tonellato, Peter
2012-01-01
Bioinformatics services have been traditionally provided in the form of a web-server that is hosted at institutional infrastructure and serves multiple users. This model, however, is not flexible enough to cope with the increasing number of users, increasing data size, and new requirements in terms of speed and availability of service. The advent of cloud computing suggests a new service model that provides an efficient solution to these problems, based on the concepts of "resources-on-demand" and "pay-as-you-go". However, cloud computing has not yet been introduced within bioinformatics servers due to the lack of usage scenarios and software layers that address the requirements of the bioinformatics domain. In this paper, we provide different use case scenarios for providing cloud computing based services, considering both the technical and financial aspects of the cloud computing service model. These scenarios are for individual users seeking computational power as well as bioinformatics service providers aiming at provision of personalized bioinformatics services to their users. We also present elasticHPC, a software package and a library that facilitates the use of high performance cloud computing resources in general and the implementation of the suggested bioinformatics scenarios in particular. Concrete examples that demonstrate the suggested use case scenarios with whole bioinformatics servers and major sequence analysis tools like BLAST are presented. Experimental results with large datasets are also included to show the advantages of the cloud model. Our use case scenarios and the elasticHPC package are steps towards the provision of cloud based bioinformatics services, which would help in overcoming the data challenge of recent biological research. All resources related to elasticHPC and its web-interface are available at http://www.elasticHPC.org.
Climate Science Performance, Data and Productivity on Titan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mayer, Benjamin W; Worley, Patrick H; Gaddis, Abigail L
2015-01-01
Climate Science models are flagship codes for the largest of high performance computing (HPC) resources, both in visibility, with the newly launched Department of Energy (DOE) Accelerated Climate Model for Energy (ACME) effort, and in terms of significant fractions of system usage. The performance of the DOE ACME model is captured with application-level timers and examined through a sizeable run archive. Performance and variability of compute, queue time and ancillary services are examined. As Climate Science advances in the use of HPC resources, there has been an increase in the human and data systems required to achieve program goals. A description of current workflow processes (hardware, software, human) and planned automation of the workflow, along with historical and projected data-in-motion and data-at-rest usage, are detailed. The combination of these two topics motivates a description of future systems requirements for DOE Climate Modeling efforts, focusing on the growth of data storage and the network and disk bandwidth required to handle data at an acceptable rate.
PuTTY | High-Performance Computing | NREL
PuTTY: Learn how to use PuTTY to connect to NREL's high-performance computing (HPC) systems. Connecting: when you start the PuTTY app, the program will display PuTTY's Configuration menu.
Exploring the capabilities of support vector machines in detecting silent data corruptions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Subasi, Omer; Di, Sheng; Bautista-Gomez, Leonardo
As the exascale era approaches, the increasing capacity of high-performance computing (HPC) systems with targeted power and energy budget goals introduces significant challenges in reliability. Silent data corruptions (SDCs), or silent errors, are one of the major sources that corrupt the execution results of HPC applications without being detected. In this paper, we explore a set of novel SDC detectors – by leveraging epsilon-insensitive support vector machine regression – to detect SDCs that occur in HPC applications. The key contributions are threefold. (1) Our exploration takes temporal, spatial, and spatiotemporal features into account and analyzes different detectors based on different features. (2) We provide an in-depth study on the detection ability and performance with different parameters, and we optimize the detection range carefully. (3) Experiments with eight real-world HPC applications show that support-vector-machine-based detectors can achieve detection sensitivity (i.e., recall) up to 99% yet suffer a less than 1% false positive rate for most cases. Our detectors incur low performance overhead, 5% on average, for all benchmarks studied in this work.
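A minimal illustration of the detection idea, using scikit-learn's epsilon-insensitive SVR on a sliding temporal window and flagging points that deviate from the prediction by more than a tolerance, is sketched below; the window size, kernel, and threshold are assumptions rather than the paper's tuned settings.

```python
# Illustrative sketch of a temporal-feature SDC detector in the spirit of the
# abstract above: an epsilon-insensitive SVR is trained on recent history and a
# data point is flagged when it deviates from the prediction by more than an
# absolute tolerance. Parameters are assumptions, not the paper's settings.
# Requires scikit-learn and NumPy.
import numpy as np
from sklearn.svm import SVR

WINDOW, TOL = 8, 0.2   # temporal features per prediction; deviation tolerance

def detect_sdc(series):
    flagged = []
    for t in range(4 * WINDOW, len(series)):
        # Train on the history so far: predict x[i] from the WINDOW values before it.
        X = np.array([series[i - WINDOW:i] for i in range(WINDOW, t)])
        y = np.array(series[WINDOW:t])
        model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X, y)
        pred = model.predict(np.array(series[t - WINDOW:t]).reshape(1, -1))[0]
        if abs(series[t] - pred) > TOL:
            flagged.append(t)
    return flagged

if __name__ == "__main__":
    data = list(np.sin(np.linspace(0, 6, 200)))   # smooth "application" output
    data[120] += 0.8                              # inject a silent corruption
    print("flagged time steps:", detect_sdc(data))
```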
NASA Astrophysics Data System (ADS)
Alameda, J. C.
2011-12-01
Development and optimization of computational science models, particularly on high performance computers, and with the advent of ubiquitous multicore processor systems, practically on every system, has been accomplished with basic software tools, typically, command-line based compilers, debuggers, performance tools that have not changed substantially from the days of serial and early vector computers. However, model complexity, including the complexity added by modern message passing libraries such as MPI, and the need for hybrid code models (such as openMP and MPI) to be able to take full advantage of high performance computers with an increasing core count per shared memory node, has made development and optimization of such codes an increasingly arduous task. Additional architectural developments, such as many-core processors, only complicate the situation further. In this paper, we describe how our NSF-funded project, "SI2-SSI: A Productive and Accessible Development Workbench for HPC Applications Using the Eclipse Parallel Tools Platform" (WHPC) seeks to improve the Eclipse Parallel Tools Platform, an environment designed to support scientific code development targeted at a diverse set of high performance computing systems. Our WHPC project to improve Eclipse PTP takes an application-centric view to improve PTP. We are using a set of scientific applications, each with a variety of challenges, and using PTP to drive further improvements to both the scientific application, as well as to understand shortcomings in Eclipse PTP from an application developer perspective, to drive our list of improvements we seek to make. We are also partnering with performance tool providers, to drive higher quality performance tool integration. We have partnered with the Cactus group at Louisiana State University to improve Eclipse's ability to work with computational frameworks and extremely complex build systems, as well as to develop educational materials to incorporate into computational science and engineering codes. Finally, we are partnering with the lead PTP developers at IBM, to ensure we are as effective as possible within the Eclipse community development. We are also conducting training and outreach to our user community, including conference BOF sessions, monthly user calls, and an annual user meeting, so that we can best inform the improvements we make to Eclipse PTP. With these activities we endeavor to encourage use of modern software engineering practices, as enabled through the Eclipse IDE, with computational science and engineering applications. These practices include proper use of source code repositories, tracking and rectifying issues, measuring and monitoring code performance changes against both optimizations as well as ever-changing software stacks and configurations on HPC systems, as well as ultimately encouraging development and maintenance of testing suites -- things that have become commonplace in many software endeavors, but have lagged in the development of science applications. We view that the challenge with the increased complexity of both HPC systems and science applications demands the use of better software engineering methods, preferably enabled by modern tools such as Eclipse PTP, to help the computational science community thrive as we evolve the HPC landscape.
Austin, from 2001 to 2007. There he was principal in HPC applications and user support, as well as in research and development in large-scale scientific applications and different HPC systems and technologies. Interests: HPC applications performance and optimizations; HPC systems and accelerator technologies; Scientific
Distributed Accounting on the Grid
NASA Technical Reports Server (NTRS)
Thigpen, William; Hacker, Thomas J.; McGinnis, Laura F.; Athey, Brian D.
2001-01-01
By the late 1990s, the Internet was adequately equipped to move vast amounts of data between HPC (High Performance Computing) systems, and efforts were initiated to link together the national infrastructure of high performance computational and data storage resources into a general computational utility 'grid', analogous to the national electrical power grid infrastructure. The purpose of the Computational grid is to provide dependable, consistent, pervasive, and inexpensive access to computational resources for the computing community in the form of a computing utility. This paper presents a fully distributed view of Grid usage accounting and a methodology for allocating Grid computational resources for use on a Grid computing system.
Fine grained event processing on HPCs with the ATLAS Yoda system
NASA Astrophysics Data System (ADS)
Calafiura, Paolo; De, Kaushik; Guan, Wen; Maeno, Tadashi; Nilsson, Paul; Oleynik, Danila; Panitkin, Sergey; Tsulaia, Vakhtang; Van Gemmeren, Peter; Wenaus, Torre
2015-12-01
High performance computing facilities present unique challenges and opportunities for HEP event processing. The massive scale of many HPC systems means that fractionally small utilization can yield large returns in processing throughput. Parallel applications which can dynamically and efficiently fill any scheduling opportunities the resource presents benefit both the facility (maximal utilization) and the (compute-limited) science. The ATLAS Yoda system provides this capability to HEP-like event processing applications by implementing event-level processing in an MPI-based master-client model that integrates seamlessly with the more broadly scoped ATLAS Event Service. Fine grained, event level work assignments are intelligently dispatched to parallel workers to sustain full utilization on all cores, with outputs streamed off to destination object stores in near real time with similarly fine granularity, such that processing can proceed until termination with full utilization. The system offers the efficiency and scheduling flexibility of preemption without requiring the application actually support or employ check-pointing. We will present the new Yoda system, its motivations, architecture, implementation, and applications in ATLAS data processing at several US HPC centers.
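The master-client dispatch pattern described above can be sketched schematically with mpi4py: rank 0 hands out small event ranges on demand so that workers stay saturated until the event list is exhausted. This is an illustration of the pattern only, not the Yoda implementation, and the workload and chunk sizes are assumed.

```python
# Schematic master-client event dispatch in the spirit of the system described
# above (not the actual Yoda code). Run with e.g. `mpirun -n 4 python sketch.py`.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
N_EVENTS, CHUNK = 1000, 25          # assumed workload and grain size

if rank == 0:                        # master: dispatch fine-grained work items
    next_event, workers = 0, comm.Get_size() - 1
    while workers > 0:
        src = comm.recv(source=MPI.ANY_SOURCE)          # a worker asks for work
        if next_event < N_EVENTS:
            comm.send((next_event, min(next_event + CHUNK, N_EVENTS)), dest=src)
            next_event += CHUNK
        else:
            comm.send(None, dest=src)                    # no work left: release worker
            workers -= 1
else:                                # worker: request, process, stream out, repeat
    while True:
        comm.send(rank, dest=0)
        work = comm.recv(source=0)
        if work is None:
            break
        first, last = work
        # process events [first, last) and stream outputs to the object store here
    print(f"rank {rank} done")
```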
Computational Science and Innovation
NASA Astrophysics Data System (ADS)
Dean, D. J.
2011-09-01
Simulations - utilizing computers to solve complicated science and engineering problems - are a key ingredient of modern science. The U.S. Department of Energy (DOE) is a world leader in the development of high-performance computing (HPC), the development of applied math and algorithms that utilize the full potential of HPC platforms, and the application of computing to science and engineering problems. An interesting general question is whether the DOE can strategically utilize its capability in simulations to advance innovation more broadly. In this article, I will argue that this is certainly possible.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCaskey, Alexander J.
There is a lack of state-of-the-art HPC simulation tools for simulating general quantum computing. Furthermore, there are no real software tools that integrate current quantum computers into existing classical HPC workflows. This product, the Quantum Virtual Machine (QVM), solves this problem by providing an extensible framework for pluggable virtual, or physical, quantum processing units (QPUs). It enables the execution of low level quantum assembly codes and returns the results of such executions.
Automating NEURON Simulation Deployment in Cloud Resources.
Stockton, David B; Santamaria, Fidel
2017-01-01
Simulations in neuroscience are performed on local servers or High Performance Computing (HPC) facilities. Recently, cloud computing has emerged as a potential computational platform for neuroscience simulation. In this paper we compare and contrast HPC and cloud resources for scientific computation, then report how we deployed NEURON, a widely used simulator of neuronal activity, in three clouds: Chameleon Cloud, a hybrid private academic cloud for cloud technology research based on the OpenStack software; Rackspace, a public commercial cloud, also based on OpenStack; and Amazon Elastic Cloud Computing, based on Amazon's proprietary software. We describe the manual procedures and how to automate cloud operations. We describe extending our simulation automation software called NeuroManager (Stockton and Santamaria, Frontiers in Neuroinformatics, 2015), so that the user is capable of recruiting private cloud, public cloud, HPC, and local servers simultaneously with a simple common interface. We conclude by performing several studies in which we examine speedup, efficiency, total session time, and cost for sets of simulations of a published NEURON model.
Automating NEURON Simulation Deployment in Cloud Resources
Santamaria, Fidel
2016-01-01
Simulations in neuroscience are performed on local servers or High Performance Computing (HPC) facilities. Recently, cloud computing has emerged as a potential computational platform for neuroscience simulation. In this paper we compare and contrast HPC and cloud resources for scientific computation, then report how we deployed NEURON, a widely used simulator of neuronal activity, in three clouds: Chameleon Cloud, a hybrid private academic cloud for cloud technology research based on the Open-Stack software; Rackspace, a public commercial cloud, also based on OpenStack; and Amazon Elastic Cloud Computing, based on Amazon’s proprietary software. We describe the manual procedures and how to automate cloud operations. We describe extending our simulation automation software called NeuroManager (Stockton and Santamaria, Frontiers in Neuroinformatics, 2015), so that the user is capable of recruiting private cloud, public cloud, HPC, and local servers simultaneously with a simple common interface. We conclude by performing several studies in which we examine speedup, efficiency, total session time, and cost for sets of simulations of a published NEURON model. PMID:27655341
WMT: The CSDMS Web Modeling Tool
NASA Astrophysics Data System (ADS)
Piper, M.; Hutton, E. W. H.; Overeem, I.; Syvitski, J. P.
2015-12-01
The Community Surface Dynamics Modeling System (CSDMS) has a mission to enable model use and development for research in earth surface processes. CSDMS strives to expand the use of quantitative modeling techniques, promotes best practices in coding, and advocates for the use of open-source software. To streamline and standardize access to models, CSDMS has developed the Web Modeling Tool (WMT), a RESTful web application with a client-side graphical interface and a server-side database and API that allows users to build coupled surface dynamics models in a web browser on a personal computer or a mobile device, and run them in a high-performance computing (HPC) environment. With WMT, users can design a model from a set of components, edit component parameters, save models to a web-accessible server, share saved models with the community, submit runs to an HPC system, and download simulation results. The WMT client is an Ajax application written in Java with GWT, which allows developers to employ object-oriented design principles and development tools such as Ant, Eclipse and JUnit. For deployment on the web, the GWT compiler translates Java code to optimized and obfuscated JavaScript. The WMT client is supported on Firefox, Chrome, Safari, and Internet Explorer. The WMT server, written in Python and SQLite, is a layered system, with each layer exposing a web service API: wmt-db, a database of component, model, and simulation metadata and output; wmt-api, which configures and connects components; and wmt-exe, which launches simulations on remote execution servers. The database server provides, as JSON-encoded messages, the metadata for users to couple model components, including descriptions of component exchange items, uses and provides ports, and input parameters. Execution servers are network-accessible computational resources, ranging from HPC systems to desktop computers, containing the CSDMS software stack for running a simulation. Once a simulation completes, its output, in NetCDF, is packaged and uploaded to a data server where it is stored and from which a user can download it as a single compressed archive file.
NASA Astrophysics Data System (ADS)
Rohde, Mitchell M.; Crawford, Justin; Toschlog, Matthew; Iagnemma, Karl D.; Kewlani, Guarav; Cummins, Christopher L.; Jones, Randolph A.; Horner, David A.
2009-05-01
It is widely recognized that simulation is pivotal to vehicle development, whether manned or unmanned. There are few dedicated choices, however, for those wishing to perform realistic, end-to-end simulations of unmanned ground vehicles (UGVs). The Virtual Autonomous Navigation Environment (VANE), under development by US Army Engineer Research and Development Center (ERDC), provides such capabilities but utilizes a High Performance Computing (HPC) Computational Testbed (CTB) and is not intended for on-line, real-time performance. A product of the VANE HPC research is a real-time desktop simulation application under development by the authors that provides a portal into the HPC environment as well as interaction with wider-scope semi-automated force simulations (e.g. OneSAF). This VANE desktop application, dubbed the Autonomous Navigation Virtual Environment Laboratory (ANVEL), enables analysis and testing of autonomous vehicle dynamics and terrain/obstacle interaction in real-time with the capability to interact within the HPC constructive geo-environmental CTB for high fidelity sensor evaluations. ANVEL leverages rigorous physics-based vehicle and vehicle-terrain interaction models in conjunction with high-quality, multimedia visualization techniques to form an intuitive, accurate engineering tool. The system provides an adaptable and customizable simulation platform that allows developers a controlled, repeatable testbed for advanced simulations. ANVEL leverages several key technologies not common to traditional engineering simulators, including techniques from the commercial video-game industry. These enable ANVEL to run on inexpensive commercial, off-the-shelf (COTS) hardware. In this paper, the authors describe key aspects of ANVEL and its development, as well as several initial applications of the system.
High Performance Computing Operations Review Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cupps, Kimberly C.
2013-12-19
The High Performance Computing Operations Review (HPCOR) meeting—requested by the ASC and ASCR program headquarters at DOE—was held November 5 and 6, 2013, at the Marriott Hotel in San Francisco, CA. The purpose of the review was to discuss the processes and practices for HPC integration and its related software and facilities. Experiences and lessons learned from the most recent systems deployed were covered in order to benefit the deployment of new systems.
Who watches the watchers?: preventing fault in a fault tolerance library
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stanavige, C. D.
The Scalable Checkpoint/Restart library (SCR) was developed and is used by researchers at Lawrence Livermore National Laboratory to provide a fast and efficient method of saving and recovering large applications during runtime on high-performance computing (HPC) systems. Though SCR protects other programs, up until June 2017, nothing was actively protecting SCR. The goal of this project was to automate the building and testing of this library on the varying HPC architectures on which it is used. Our methods centered around the use of a continuous integration tool called Bamboo that allowed for automation agents to be installed on the HPC systems themselves. These agents provided a way for us to establish a new and unique way to automate and customize the allocation of resources and running of tests with CMake’s unit testing framework, CTest, as well as integration testing scripts through an HPC package manager called Spack. These methods provided a parallel environment in which to test the more complex features of SCR. As a result, SCR is now automatically built and tested on several HPC architectures any time changes are made by developers to the library’s source code. The results of these tests are then communicated back to the developers for immediate feedback, allowing them to fix functionality of SCR that may have broken. Hours of developers’ time are now being saved from the tedious process of manually testing and debugging, which saves money and allows the SCR project team to focus their efforts towards development. Thus, HPC system users can use SCR in conjunction with their own applications to efficiently and effectively checkpoint and restart as needed with the assurance that SCR itself is functioning properly.
Yokohama, Noriya
2013-07-01
This report aimed at structuring the design of architectures and studying the performance of a parallel computing environment for Monte Carlo simulation of particle therapy, using a high performance computing (HPC) instance within a public cloud-computing infrastructure. Performance measurements showed an approximately 28 times faster speed than seen with a single-thread architecture, combined with improved stability. A study of methods of optimizing the system operations also indicated lower cost.
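The kind of many-worker speedup measurement reported here can be illustrated generically with Python's multiprocessing module; the toy Monte Carlo estimate of pi below is a stand-in, not the report's particle-therapy dose engine, and the sample and worker counts are arbitrary.

```python
# Generic illustration of a parallel Monte Carlo speedup measurement (not the
# report's dose engine): estimate pi with 1 worker and with N workers and
# compare wall-clock times.
import random
import time
from multiprocessing import Pool

def hits(n: int) -> int:
    rng = random.Random()
    return sum(1 for _ in range(n) if rng.random() ** 2 + rng.random() ** 2 <= 1.0)

def run(samples: int, workers: int) -> float:
    t0 = time.perf_counter()
    with Pool(workers) as pool:
        total = sum(pool.map(hits, [samples // workers] * workers))
    elapsed = time.perf_counter() - t0
    print(f"{workers:2d} workers: pi ~ {4 * total / samples:.4f} in {elapsed:.2f} s")
    return elapsed

if __name__ == "__main__":
    n = 4_000_000
    serial, parallel = run(n, 1), run(n, 8)
    print(f"speedup: {serial / parallel:.1f}x")
```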
Duro, Francisco Rodrigo; Blas, Javier Garcia; Isaila, Florin; ...
2016-10-06
The increasing volume of scientific data and the limited scalability and performance of storage systems are currently presenting a significant limitation for the productivity of the scientific workflows running on both high-performance computing (HPC) and cloud platforms. Clearly needed is better integration of storage systems and workflow engines to address this problem. This paper presents and evaluates a novel solution that leverages codesign principles for integrating Hercules—an in-memory data store—with a workflow management system. We consider four main aspects: workflow representation, task scheduling, task placement, and task termination. As a result, the experimental evaluation on both cloud and HPC systems demonstrates significant performance and scalability improvements over existing state-of-the-art approaches.
Allocation Usage Tracking and Management | High-Performance Computing |
On NREL's high-performance computing (HPC) systems, learn how to track and manage your allocations. The alloc_tracker script (/usr/local/bin/alloc_tracker) may be used to see what allocations you have access to, how much of each allocation has been used, how much remains, and how many node hours will be forfeited at the
Mishima, Hiroyuki; Lidral, Andrew C; Ni, Jun
2008-05-28
Genetic association studies have been used to map disease-causing genes. A newly introduced statistical method, called exhaustive haplotype association study, analyzes genetic information consisting of different numbers and combinations of DNA sequence variations along a chromosome. Such studies involve a large number of statistical calculations and subsequently high computing power. It is possible to develop parallel algorithms and codes to perform the calculations on a high performance computing (HPC) system. However, most existing commonly-used statistic packages for genetic studies are non-parallel versions. Alternatively, one may use the cutting-edge technology of grid computing and its packages to conduct non-parallel genetic statistical packages on a centralized HPC system or distributed computing systems. In this paper, we report the utilization of a queuing scheduler built on the Grid Engine and run on a Rocks Linux cluster for our genetic statistical studies. Analysis of both consecutive and combinational window haplotypes was conducted by the FBAT (Laird et al., 2000) and Unphased (Dudbridge, 2003) programs. The dataset consisted of 26 loci from 277 extended families (1484 persons). Using the Rocks Linux cluster with 22 compute-nodes, FBAT jobs performed about 14.4-15.9 times faster, while Unphased jobs performed 1.1-18.6 times faster compared to the accumulated computation duration. Execution of exhaustive haplotype analysis using non-parallel software packages on a Linux-based system is an effective and efficient approach in terms of cost and performance.
Mishima, Hiroyuki; Lidral, Andrew C; Ni, Jun
2008-01-01
Background Genetic association studies have been used to map disease-causing genes. A newly introduced statistical method, called exhaustive haplotype association study, analyzes genetic information consisting of different numbers and combinations of DNA sequence variations along a chromosome. Such studies involve a large number of statistical calculations and subsequently high computing power. It is possible to develop parallel algorithms and codes to perform the calculations on a high performance computing (HPC) system. However, most existing commonly-used statistic packages for genetic studies are non-parallel versions. Alternatively, one may use the cutting-edge technology of grid computing and its packages to conduct non-parallel genetic statistical packages on a centralized HPC system or distributed computing systems. In this paper, we report the utilization of a queuing scheduler built on the Grid Engine and run on a Rocks Linux cluster for our genetic statistical studies. Results Analysis of both consecutive and combinational window haplotypes was conducted by the FBAT (Laird et al., 2000) and Unphased (Dudbridge, 2003) programs. The dataset consisted of 26 loci from 277 extended families (1484 persons). Using the Rocks Linux cluster with 22 compute-nodes, FBAT jobs performed about 14.4–15.9 times faster, while Unphased jobs performed 1.1–18.6 times faster compared to the accumulated computation duration. Conclusion Execution of exhaustive haplotype analysis using non-parallel software packages on a Linux-based system is an effective and efficient approach in terms of cost and performance. PMID:18541045
Direct SSH Gateway Access to Peregrine | High Performance Computing |
To access peregrine-ssh.nrel.gov, you must have: an active NREL HPC user account (see User Accounts) and an OTP token (see One Time Password Tokens). Logging into peregrine-ssh.nrel.gov: with your HPC account
Pope, Bernard J; Fitch, Blake G; Pitman, Michael C; Rice, John J; Reumann, Matthias
2011-01-01
Future multiscale and multiphysics models must use the power of high performance computing (HPC) systems to enable research into human disease, translational medical science, and treatment. Previously we showed that computationally efficient multiscale models will require the use of sophisticated hybrid programming models, mixing distributed message passing processes (e.g. the message passing interface (MPI)) with multithreading (e.g. OpenMP, POSIX pthreads). The objective of this work is to compare the performance of such hybrid programming models when applied to the simulation of a lightweight multiscale cardiac model. Our results show that the hybrid models do not perform favourably when compared to an implementation using only MPI which is in contrast to our results using complex physiological models. Thus, with regards to lightweight multiscale cardiac models, the user may not need to increase programming complexity by using a hybrid programming approach. However, considering that model complexity will increase as well as the HPC system size in both node count and number of cores per node, it is still foreseeable that we will achieve faster than real time multiscale cardiac simulations on these systems using hybrid programming models.
Hybrid data storage system in an HPC exascale environment
Bent, John M.; Faibish, Sorin; Gupta, Uday K.; Tzelnic, Percy; Ting, Dennis P. J.
2015-08-18
A computer-executable method, system, and computer program product for managing I/O requests from a compute node in communication with a data storage system, including a first burst buffer node and a second burst buffer node, the computer-executable method, system, and computer program product comprising striping data on the first burst buffer node and the second burst buffer node, wherein a first portion of the data is communicated to the first burst buffer node and a second portion of the data is communicated to the second burst buffer node, processing the first portion of the data at the first burst buffer node, and processing the second portion of the data at the second burst buffer node.
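The claimed striping behaviour can be pictured with a small sketch in which fixed-size stripes of an I/O request alternate between two burst buffer objects; the BurstBuffer class and stripe size are stand-ins for illustration, not the patented implementation.

```python
# Simplified sketch of the striping idea in the abstract above: an I/O request
# from a compute node is split into two portions, each handled by a different
# burst buffer, before being drained to the backing store. Illustrative only.
class BurstBuffer:
    def __init__(self, name):
        self.name, self.staged = name, []

    def absorb(self, offset, chunk):
        # In a real system this would land in NVRAM and later flush to the PFS.
        self.staged.append((offset, chunk))
        return len(chunk)

def striped_write(data: bytes, bb_a: BurstBuffer, bb_b: BurstBuffer, stripe=1 << 20):
    """Alternate fixed-size stripes between the two burst buffer nodes."""
    written = 0
    for i, offset in enumerate(range(0, len(data), stripe)):
        target = bb_a if i % 2 == 0 else bb_b
        written += target.absorb(offset, data[offset:offset + stripe])
    return written

if __name__ == "__main__":
    a, b = BurstBuffer("bb0"), BurstBuffer("bb1")
    n = striped_write(b"x" * (5 * (1 << 20)), a, b)
    print(n, "bytes staged:", len(a.staged), "stripes on bb0,", len(b.staged), "on bb1")
```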
Towards Cloud-based Asynchronous Elasticity for Iterative HPC Applications
NASA Astrophysics Data System (ADS)
da Rosa Righi, Rodrigo; Facco Rodrigues, Vinicius; André da Costa, Cristiano; Kreutz, Diego; Heiss, Hans-Ulrich
2015-10-01
Elasticity is one of the key features of cloud computing. It allows applications to dynamically scale computing and storage resources, avoiding over- and under-provisioning. In high performance computing (HPC), elasticity initiatives are normally modeled to handle bag-of-tasks or key-value applications through a load balancer and a loosely-coupled set of virtual machine (VM) instances. In the joint field of Message Passing Interface (MPI) and tightly-coupled HPC applications, we observe the need for rewriting source codes, prior knowledge of the application and/or stop-reconfigure-and-go approaches to address cloud elasticity. Besides, there are problems related to how to profit from this new feature in the HPC scope, since in MPI 2.0 applications the programmers need to handle communicators by themselves, and a sudden consolidation of a VM, together with a process, can compromise the entire execution. To address these issues, we propose a PaaS-based elasticity model, named AutoElastic. It acts as a middleware that allows iterative HPC applications to take advantage of dynamic resource provisioning of cloud infrastructures without any major modification. AutoElastic provides a new concept denoted here as asynchronous elasticity, i.e., it provides a framework to allow applications to either increase or decrease their computing resources without blocking the current execution. The feasibility of AutoElastic is demonstrated through a prototype that runs a CPU-bound numerical integration application on top of the OpenNebula middleware. The results showed a saving of about 3 min at each scaling-out operation, emphasizing the contribution of the new concept in contexts where seconds are precious.
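The idea of asynchronous elasticity, resources being added or removed in the background while the iterative computation never blocks, can be pictured with a small threading sketch; the load metric, thresholds, and provision_vm placeholder are invented for illustration and do not reflect the AutoElastic middleware or the OpenNebula API.

```python
# Conceptual sketch of asynchronous elasticity (not the AutoElastic middleware):
# a monitor thread watches a load metric and requests extra workers without
# ever blocking the iterative computation. provision_vm is a placeholder.
import random
import threading
import time

UPPER, LOWER = 0.8, 0.3          # assumed scaling thresholds
workers = ["vm-0", "vm-1"]
lock = threading.Lock()

def provision_vm(name):          # placeholder: a cloud API call would go here
    time.sleep(2)                # VM boot happens in the background
    with lock:
        workers.append(name)

def monitor(stop):
    while not stop.is_set():
        load = random.uniform(0.0, 1.0)          # stand-in for measured CPU load
        if load > UPPER:
            threading.Thread(target=provision_vm, args=(f"vm-{len(workers)}",)).start()
        elif load < LOWER and len(workers) > 2:
            with lock:
                workers.pop()                    # consolidate an idle VM
        stop.wait(1.0)

if __name__ == "__main__":
    stop = threading.Event()
    threading.Thread(target=monitor, args=(stop,), daemon=True).start()
    for step in range(10):                       # the iterative HPC loop never blocks
        time.sleep(0.5)                          # one numerical iteration
        print(f"iteration {step:2d} with {len(workers)} workers")
    stop.set()
```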
NASA Astrophysics Data System (ADS)
van Hemert, Jano; Vilotte, Jean-Pierre
2010-05-01
Research in earthquake and seismology addresses fundamental problems in understanding Earth's internal wave sources and structures, and augment applications to societal concerns about natural hazards, energy resources and environmental change. This community is central to the European Plate Observing System (EPOS)—the ESFRI initiative in solid Earth Sciences. Global and regional seismology monitoring systems are continuously operated and are transmitting a growing wealth of data from Europe and from around the world. These tremendous volumes of seismograms, i.e., records of ground motions as a function of time, have a definite multi-use attribute, which puts a great premium on open-access data infrastructures that are integrated globally. In Europe, the earthquake and seismology community is part of the European Integrated Data Archives (EIDA) infrastructure and is structured as "horizontal" data services. On top of this distributed data archive system, the community has developed recently within the EC project NERIES advanced SOA-based web services and a unified portal system. Enabling advanced analysis of these data by utilising a data-aware distributed computing environment is instrumental to fully exploit the cornucopia of data and to guarantee optimal operation of the high-cost monitoring facilities. The strategy of VERCE is driven by the needs of data-intensive applications in data mining and modelling and will be illustrated through a set of applications. It aims to provide a comprehensive architecture and framework adapted to the scale and the diversity of these applications, and to integrate the community data infrastructure with Grid and HPC infrastructures. A first novel aspect is a service-oriented architecture that provides well-equipped integrated workbenches, with an efficient communication layer between data and Grid infrastructures, augmented with bridges to the HPC facilities. A second novel aspect is the coupling between Grid data analysis and HPC data modelling applications through workflow and data sharing mechanisms. VERCE will develop important interactions with the European infrastructure initiatives in Grid and HPC computing. The VERCE team: CNRS-France (IPG Paris, LGIT Grenoble), UEDIN (UK), KNMI-ORFEUS (Holland), EMSC, INGV (Italy), LMU (Germany), ULIV (UK), BADW-LRZ (Germany), SCAI (Germany), CINECA (Italy)
Hierarchical parallelisation of functional renormalisation group calculations - hp-fRG
NASA Astrophysics Data System (ADS)
Rohe, Daniel
2016-10-01
The functional renormalisation group (fRG) has evolved into a versatile tool in condensed matter theory for studying important aspects of correlated electron systems. Practical applications of the method often involve a high numerical effort, motivating the question of to what extent High Performance Computing (HPC) can leverage the approach. In this work we report on a multi-level parallelisation of the underlying computational machinery and show that this can speed up the code by several orders of magnitude. This in turn can extend the applicability of the method to otherwise inaccessible cases. We exploit three levels of parallelisation: distributed computing by means of Message Passing (MPI), shared-memory computing using OpenMP, and vectorisation by means of SIMD units (single-instruction-multiple-data). Results are provided for two distinct High Performance Computing (HPC) platforms, namely the IBM-based BlueGene/Q system JUQUEEN and an Intel Sandy-Bridge-based development cluster. We discuss how certain issues and obstacles were overcome in the course of adapting the code. Most importantly, we conclude that this vast improvement can actually be accomplished by introducing only moderate changes to the code, such that this strategy may serve as a guideline for other researchers looking to likewise improve the efficiency of their codes.
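Of the three levels mentioned, the distributed and shared-memory levels resemble the hybrid sketch given earlier in this collection; the innermost SIMD level can be pictured with a rough Python analogue in which the same reduction is written as a scalar loop and as a vectorised NumPy expression. The arrays are synthetic, not an fRG flow.

```python
# Rough Python analogue of the innermost (SIMD) parallelisation level: the same
# propagator-like reduction written as an explicit scalar loop and as a
# vectorised NumPy expression, which maps onto SIMD units via the underlying
# compiled kernels. The data here are synthetic.
import time
import numpy as np

n = 2_000_000
g1 = np.random.default_rng(0).random(n)
g2 = np.random.default_rng(1).random(n)

t0 = time.perf_counter()
loop_sum = 0.0
for i in range(n):                    # scalar loop: one element at a time
    loop_sum += g1[i] * g2[i]
t_loop = time.perf_counter() - t0

t0 = time.perf_counter()
vec_sum = float(np.dot(g1, g2))       # vectorised: whole arrays at once
t_vec = time.perf_counter() - t0

print(f"loop {t_loop:.2f}s, vectorised {t_vec:.4f}s, "
      f"relative difference {abs(loop_sum - vec_sum) / vec_sum:.1e}")
```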
Exploring similarities among many species distributions
Simmerman, Scott; Wang, Jingyuan; Osborne, James; Shook, Kimberly; Huang, Jian; Godsoe, William; Simons, Theodore R.
2012-01-01
Collecting species presence data and then building models to predict species distribution has been long practiced in the field of ecology for the purpose of improving our understanding of species relationships with each other and with the environment. Due to limitations of computing power as well as limited means of using modeling software on HPC facilities, past species distribution studies have been unable to fully explore diverse data sets. We build a system that can, for the first time to our knowledge, leverage HPC to support effective exploration of species similarities in distribution as well as their dependencies on common environmental conditions. Our system can also compute and reveal uncertainties in the modeling results enabling domain experts to make informed judgments about the data. Our work was motivated by and centered around data collection efforts within the Great Smoky Mountains National Park that date back to the 1940s. Our findings present new research opportunities in ecology and produce actionable field-work items for biodiversity management personnel to include in their planning of daily management activities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tock, Yoav; Mandler, Benjamin; Moreira, Jose
2013-01-01
As HPC systems and applications get bigger and more complex, we are approaching an era in which resiliency and run-time elasticity concerns become paramount. We offer a building block for an alternative resiliency approach in which computations will be able to make progress while components fail, in addition to enabling a dynamic set of nodes throughout a computation lifetime. The core of our solution is a hierarchical scalable membership service providing eventual consistency semantics. An attribute replication service is used for hierarchy organization, and is exposed to external applications. Our solution is based on P2P technologies and provides resiliency and elastic runtime support at ultra large scales. The resulting middleware is general purpose while exploiting HPC platform unique features and architecture. We have implemented and tested this system on BlueGene/P with Linux, and using worst-case analysis, evaluated the service scalability as effective for up to 1M nodes.
Scalability improvements to NRLMOL for DFT calculations of large molecules
NASA Astrophysics Data System (ADS)
Diaz, Carlos Manuel
Advances in high performance computing (HPC) have provided a way to treat large, computationally demanding tasks using thousands of processors. With the development of more powerful HPC architectures, the need to create efficient and scalable code has grown more important. Electronic structure calculations are valuable in understanding experimental observations and are routinely used for new materials predictions. For the electronic structure calculations, the memory and computation time are proportional to the number of atoms. Memory requirements for these calculations scale as N^2, where N is the number of atoms. While the recent advances in HPC offer platforms with large numbers of cores, the limited amount of memory available on a given node and poor scalability of the electronic structure code hinder their efficient usage of these platforms. This thesis will present some developments to overcome these bottlenecks in order to study large systems. These developments, which are implemented in the NRLMOL electronic structure code, involve the use of sparse matrix storage formats and the use of linear algebra with sparse and distributed matrices. These developments, along with other related work, now allow ground state density functional calculations using up to 25,000 basis functions and excited state calculations using up to 17,000 basis functions while utilizing all cores on a node. An example on a light-harvesting triad molecule is described. Finally, future plans to further improve the scalability will be presented.
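The storage saving from a sparse format can be illustrated with a small SciPy sketch; the matrix below is synthetic (roughly 1% nonzero) and the code is illustrative only, not NRLMOL.

```python
# Small sketch of the sparse-storage idea mentioned above (illustrative only,
# not NRLMOL code): store a mostly-zero matrix in CSR format and compare its
# memory footprint with the dense equivalent. Requires NumPy and SciPy.
import numpy as np
from scipy import sparse

n = 5000
rng = np.random.default_rng(42)
dense = np.zeros((n, n))
rows = rng.integers(0, n, size=50 * n)         # roughly 1% of entries are nonzero
cols = rng.integers(0, n, size=50 * n)
dense[rows, cols] = rng.random(50 * n)

csr = sparse.csr_matrix(dense)
dense_mb = dense.nbytes / 1e6
csr_mb = (csr.data.nbytes + csr.indices.nbytes + csr.indptr.nbytes) / 1e6
print(f"dense: {dense_mb:.1f} MB, CSR: {csr_mb:.1f} MB "
      f"({csr.nnz} nonzeros, {100 * csr.nnz / n**2:.2f}% fill)")

# Distributed linear algebra would further split these arrays across MPI ranks;
# this sketch only shows the per-node storage saving.
```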
2013-01-01
A Parallel Neuromorphic Text Recognition System and Its... (post-print journal article; dates covered Sep 2011 – Sep 2013): ...research in computational intelligence has entered a new era. In this paper, we present an HPC-based context-aware intelligent text recognition...
Development of a HIPAA-compliant environment for translational research data and analytics.
Bradford, Wayne; Hurdle, John F; LaSalle, Bernie; Facelli, Julio C
2014-01-01
High-performance computing (HPC) centers traditionally have far less restrictive privacy management policies than those encountered in healthcare. We show how an HPC center can be re-engineered to accommodate clinical data while retaining its utility in computationally intensive tasks such as data mining, machine learning, and statistics. We also discuss deploying protected virtual machines. A critical planning step was to engage the university's information security operations and the information security and privacy office. Access to the environment requires a double authentication mechanism: the first level requires access to the university's virtual private network, and the second requires that users be listed in the HPC network information service directory. The physical hardware resides in a data center with controlled room access. All employees and users of the HPC center take the university's local Health Insurance Portability and Accountability Act training series. In the first 3 years, the researcher count increased from 6 to 58.
A Framework for Adaptable Operating and Runtime Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sterling, Thomas
The emergence of new classes of HPC systems where performance improvement is enabled by Moore's Law for technology is manifest through multi-core-based architectures including specialized GPU structures. Operating systems were originally designed for control of uniprocessor systems. By the 1980s, multiprogramming, virtual memory, and network interconnection were integral services incorporated as part of most modern computers. HPC operating systems were primarily derivatives of the Unix model, with Linux dominating the Top-500 list. The use of Linux for commodity clusters was first pioneered by the NASA Beowulf Project. However, the rapid increase in the number of cores to achieve performance gain through technology advances has exposed the limitations of POSIX general-purpose operating systems in scaling and efficiency. This project was undertaken under the leadership of Sandia National Laboratories, in partnership with the University of New Mexico, to investigate the alternative of composable lightweight kernels on scalable HPC architectures to achieve superior performance for a wide range of applications. The use of composable operating systems is intended to provide a minimalist set of services specifically required by a given application to preclude overheads and operational uncertainties (“OS noise”) that have been demonstrated to degrade efficiency and operational consistency. This project was undertaken as an exploration of possible strategies and methods for composable lightweight kernel operating systems towards support for extreme scale systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Armstrong, Robert C.; Ray, Jaideep; Malony, A.
2003-11-01
We present a case study of performance measurement and modeling of a CCA (Common Component Architecture) component-based application in a high performance computing environment. We explore issues peculiar to component-based HPC applications and propose a performance measurement infrastructure for HPC based loosely on recent work done for Grid environments. A prototypical implementation of the infrastructure is used to collect data for three components in a scientific application and to construct performance models for two of them. Both computational and message-passing performance are addressed.
Peregrine System User Basics | High-Performance Computing | NREL
Users log in to peregrine.hpc.nrel.gov or to one of the login nodes. The page gives example commands for accessing Peregrine from a Linux or Mac OS X system (e.g., $ ssh -Y ...), a code example that creates a simple hello.F90 Fortran program, and a note that user-specific information is indicated by enclosing it in brackets < >.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Habib, Salman; Roser, Robert; Gerber, Richard
The U.S. Department of Energy (DOE) Office of Science (SC) Offices of High Energy Physics (HEP) and Advanced Scientific Computing Research (ASCR) convened a programmatic Exascale Requirements Review on June 10–12, 2015, in Bethesda, Maryland. This report summarizes the findings, results, and recommendations derived from that meeting. The high-level findings and observations are as follows. Larger, more capable computing and data facilities are needed to support HEP science goals in all three frontiers: Energy, Intensity, and Cosmic. The expected scale of the demand at the 2025 timescale is at least two orders of magnitude — and in some cases greater — than that available currently. The growth rate of data produced by simulations is overwhelming the current ability of both facilities and researchers to store and analyze it. Additional resources and new techniques for data analysis are urgently needed. Data rates and volumes from experimental facilities are also straining the current HEP infrastructure in its ability to store and analyze large and complex data volumes. Appropriately configured leadership-class facilities can play a transformational role in enabling scientific discovery from these datasets. A close integration of high-performance computing (HPC) simulation and data analysis will greatly aid in interpreting the results of HEP experiments. Such an integration will minimize data movement and facilitate interdependent workflows. Long-range planning between HEP and ASCR will be required to meet HEP's research needs. To best use ASCR HPC resources, the experimental HEP program needs (1) an established, long-term plan for access to ASCR computational and data resources, (2) the ability to map workflows to HPC resources, (3) the ability for ASCR facilities to accommodate workflows run by collaborations potentially comprising thousands of individual members, (4) to transition codes to the next-generation HPC platforms that will be available at ASCR facilities, and (5) to build up and train a workforce capable of developing and using simulations and analysis to support HEP scientific research on next-generation systems.
Computational challenges in atomic, molecular and optical physics.
Taylor, Kenneth T
2002-06-15
Six challenges are discussed. These are the laser-driven helium atom; the laser-driven hydrogen molecule and hydrogen molecular ion; electron scattering (with ionization) from one-electron atoms; the vibrational and rotational structure of molecules such as H₃⁺ and water at their dissociation limits; laser-heated clusters; and quantum degeneracy and Bose-Einstein condensation. The first four concern fundamental few-body systems where use of high-performance computing (HPC) is currently making possible accurate modelling from first principles. This leads to reliable predictions and support for laboratory experiment as well as true understanding of the dynamics. Important aspects of these challenges addressable only via a terascale facility are set out. Such a facility makes the last two challenges in the above list meaningfully accessible for the first time, and the scientific interest together with the prospective role for HPC in these is emphasized.
Contact Us | High-Performance Computing | NREL
The contact form asks users to select a system (Peregrine, Merlin, or WinHPC), provide an allocation project handle (if requesting an HPC account), and give a description of the request. If you click "SEND REQUEST" and nothing happens, it most likely means you forgot to provide information in a required field; you may need to scroll up to see what required information is missing.
National Energy Research Scientific Computing Center
Website navigation for NERSC: overview (mission, contact, staff org chart, history, stakeholders, usage and users), HPC requirements reviews, NERSC HPC achievement awards, user-submitted research citations, the NERSC user data archive, a NERSC resources table, and user pages (live status, announcements, My NERSC, getting started).
...the one illustrated here, the outer membrane protein OprF of Pseudomonas aeruginosa... In the 1990s, NWChem was designed to run on networked processors, as in an HPC system, using one-sided communication, says Jeff Hammond of Intel Corp.'s Parallel Computing Laboratory. In one-sided communication, a...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peles, Slaven
2016-11-06
GridKit is a software development kit for interfacing power systems and power grid application software with high performance computing (HPC) libraries developed at national laboratories and in academia. It is also intended as an interoperability layer between different numerical libraries. GridKit is not a standalone application, but comes with a suite of test examples illustrating possible usage.
AHPCRC (Army High Performance Computing Research Center) Bulletin. Volume 3, Issue 1
2011-01-01
Topics in this issue include multiscale modeling of materials; the rotating reflector antenna associated with airport traffic control systems, which is giving way in some applications to a newer technology called the phased-array antenna system (sometimes called a beamformer); batteries and phased-array antennas; and power and efficiency studies that evaluate on-board HPC systems and advanced image processing applications.
A streamlined build system foundation for developing HPC software
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, Chris; Harrison, Cyrus; Hornung, Richard
2017-02-09
BLT bundles custom CMake macros, unit testing frameworks for C++ and Fortran, and a set of smoke tests for common HPC dependencies. The combination of these three provides a foundation for quickly bootstrapping a CMake-based system for developing HPC software.
INFN-Pisa scientific computation environment (GRID, HPC and Interactive Analysis)
NASA Astrophysics Data System (ADS)
Arezzini, S.; Carboni, A.; Caruso, G.; Ciampa, A.; Coscetti, S.; Mazzoni, E.; Piras, S.
2014-06-01
The INFN-Pisa Tier2 infrastructure is described, optimized not only for GRID CPU and storage access, but also for a more interactive use of the resources in order to provide good solutions for the final data analysis step. The data center, equipped with about 6700 production cores, permits the use of modern analysis techniques realized via advanced statistical tools (like RooFit and RooStat) implemented in multicore systems. In particular, a POSIX file storage access integrated with standard SRM access is provided. The unified storage infrastructure, based on GPFS and Xrootd and used both for the SRM data repository and for interactive POSIX access, is therefore described. Such a common infrastructure allows users transparent access to the Tier2 data for their interactive analysis. The organization of a specialized many-core CPU facility devoted to interactive analysis is also described, along with the login mechanism integrated with the INFN-AAI (National INFN Infrastructure) to extend site access and use to a geographically distributed community. This infrastructure is also used for a national computing facility serving the INFN theoretical community, enabling a synergic use of computing and storage resources. Our center, initially developed for the HEP community, is now growing and also includes fully integrated HPC resources. In recent years a cluster facility (1000 cores, parallel use via InfiniBand connection) has been installed and managed, and we are now updating this facility, which will provide resources for all the intermediate-level HPC computing needs of the INFN theoretical national community.
Wonczak, Stephan; Thiele, Holger; Nieroda, Lech; Jabbari, Kamel; Borowski, Stefan; Sinha, Vishal; Gunia, Wilfried; Lang, Ulrich; Achter, Viktor; Nürnberg, Peter
2015-01-01
Next generation sequencing (NGS) has been a great success and is now a standard method of research in the life sciences. With this technology, dozens of whole genomes or hundreds of exomes can be sequenced in rather short time, producing huge amounts of data. Complex bioinformatics analyses are required to turn these data into scientific findings. In order to run these analyses fast, automated workflows implemented on high performance computers are state of the art. While providing sufficient compute power and storage to meet the NGS data challenge, high performance computing (HPC) systems require special care when utilized for high throughput processing. This is especially true if the HPC system is shared by different users. Here, stability, robustness and maintainability are as important for automated workflows as speed and throughput. To achieve all of these aims, dedicated solutions have to be developed. In this paper, we present the tricks and twists that we utilized in the implementation of our exome data processing workflow. It may serve as a guideline for other high throughput data analysis projects using a similar infrastructure. The code implementing our solutions is provided in the supporting information files. PMID:25942438
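The concrete tricks and twists are in the paper's supporting files; as a generic illustration of the kind of robustness measure such a workflow needs, the sketch below (with an invented command and retry limits, not taken from the paper) retries a transiently failing pipeline step with a backoff before giving up:

    # Generic robustness sketch for an automated pipeline step (not taken from the
    # paper): retry a transiently failing command a few times before giving up.
    import subprocess
    import time

    def run_step(cmd, retries=3, backoff_s=60):
        """Run one workflow step, retrying on a non-zero exit status."""
        for attempt in range(1, retries + 1):
            result = subprocess.run(cmd, capture_output=True, text=True)
            if result.returncode == 0:
                return result.stdout
            print(f"attempt {attempt} failed (rc={result.returncode}); retrying")
            time.sleep(backoff_s * attempt)
        raise RuntimeError(f"step failed after {retries} attempts: {cmd}")

    # Hypothetical aligner invocation (file names are placeholders):
    # run_step(["bwa", "mem", "ref.fa", "sample_R1.fastq.gz", "sample_R2.fastq.gz"])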
Getting ready for petaflop capacities and beyond: a utility perspective
NASA Astrophysics Data System (ADS)
Hamelin, J. F.; Berthou, J. Y.
2008-07-01
Why should EDF, the leading producer and marketer of electricity in Europe, start adding teraflops to its terawatt-hours and become involved in high-performance computing (HPC)? In this paper we answer this question through examples of major opportunities that HPC brings to our business today and, we hope, well into the future of petaflop and exaflop computing. Five cases are presented, dealing with nondestructive testing, nuclear fuel management, mechanical behavior of nuclear fuel assemblies, water management, and energy management. For each case we show the benefits brought by HPC, describe the current level of numerical simulation performance, and discuss the perspectives for future steps. We also present the general background that explains why EDF is moving to this technology and briefly comment on the development of user-oriented simulation platforms.
WEI, GUANGQUAN; KANG, XIAOWEI; LIU, XIANPING; TANG, XING; LI, QINLONG; HAN, JUNTAO; YIN, HONG
2015-01-01
Regardless of the controversial pathogenesis, intracranial meningeal hemangiopericytoma (M-HPC) is a rare, highly cellular and vascularized mesenchymal tumor that is characterized by a high tendency for recurrence and extraneural metastasis, despite radical excision and postoperative radiotherapy. M-HPC shares similar clinical manifestations and radiological findings with meningioma, which causes difficulty in differentiation of this entity from those prognostically favorable mimics prior to surgery. Treatment of M-HPC, particularly in metastatic settings, remains a challenge. A case is described of primary M-HPC with recurrence at the initial and distant intracranial sites and extraneural multiple-organ metastases in a 36-year-old female. The metastasis of M-HPC was extremely extensive, and to the best of our knowledge this is the first case of M-HPC with delayed metastasis to the bilateral kidneys. The data suggests that preoperative computed tomography and magnetic resonance imaging could provide certain diagnostic clues and useful information for more optimal treatment planning. The results may imply that novel drugs, such as temozolomide and bevacizumab, as a component of multimodality therapy of M-HPC may deserve further investigation. PMID:26171177
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, Justin; Karra, Satish; Nakshatrala, Kalyana B.
2016-07-26
It is well-known that the standard Galerkin formulation, which is often the formulation of choice under the finite element method for solving self-adjoint diffusion equations, does not meet maximum principles and the non-negative constraint for anisotropic diffusion equations. Recently, optimization-based methodologies that satisfy maximum principles and the non-negative constraint for steady-state and transient diffusion-type equations have been proposed. To date, these methodologies have been tested only on small-scale academic problems. The purpose of this paper is to systematically study the performance of the non-negative methodology in the context of high performance computing (HPC). PETSc and TAO libraries are, respectively, used for the parallel environment and optimization solvers. For large-scale problems, it is important for computational scientists to understand the computational performance of current algorithms available in these scientific libraries. The numerical experiments are conducted on the state-of-the-art HPC systems, and a single-core performance model is used to better characterize the efficiency of the solvers. Furthermore, our studies indicate that the proposed non-negative computational framework for diffusion-type equations exhibits excellent strong scaling for real-world large-scale problems.
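The paper above enforces the constraint through PETSc/TAO optimization solvers; as a stand-in illustration of the underlying idea, the small SciPy example below (not the authors' code, with invented problem sizes) shows how adding a non-negativity bound to a least-squares solve removes the negative entries an unconstrained solve can produce:

    # Stand-in illustration (not the PETSc/TAO implementation from the paper):
    # enforcing a non-negative constraint via bounded optimization.
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 10))
    x_true = np.clip(rng.standard_normal(10), 0.0, None)  # non-negative "truth"
    b = A @ x_true + 0.1 * rng.standard_normal(40)         # noisy right-hand side

    x_unconstrained, *_ = np.linalg.lstsq(A, b, rcond=None)
    x_nonnegative, _ = nnls(A, b)

    print("unconstrained minimum entry:", x_unconstrained.min())  # may be < 0
    print("constrained   minimum entry:", x_nonnegative.min())    # >= 0 by construction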
Using ANSYS Fluent on the Peregrine System | High-Performance Computing |
The page describes two ways to run ANSYS CFD interactively on NREL HPC systems, notes that remote graphics performance can be quite low (e.g., windows take a long time to come up), and suggests for small tasks enabling SSH compression (go to Category/Connection/SSH and check the "enable compression" box).
Lightweight Provenance Service for High-Performance Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dai, Dong; Chen, Yong; Carns, Philip
Provenance describes detailed information about the history of a piece of data, containing the relationships among elements such as users, processes, jobs, and workflows that contribute to the existence of data. Provenance is key to supporting many data management functionalities that are increasingly important in operations such as identifying data sources, parameters, or assumptions behind a given result; auditing data usage; or understanding details about how inputs are transformed into outputs. Despite its importance, however, provenance support is largely underdeveloped in highly parallel architectures and systems. One major challenge is the demanding requirements of providing provenance service in situ. The need to remain lightweight and to be always on often conflicts with the need to be transparent and offer an accurate catalog of details regarding the applications and systems. To tackle this challenge, we introduce a lightweight provenance service, called LPS, for high-performance computing (HPC) systems. LPS leverages a kernel instrument mechanism to achieve transparency and introduces representative execution and flexible granularity to capture comprehensive provenance with controllable overhead. Extensive evaluations and use cases have confirmed its efficiency and usability. We believe that LPS can be integrated into current and future HPC systems to support a variety of data management needs.
PerSEUS: Ultra-Low-Power High Performance Computing for Plasma Simulations
NASA Astrophysics Data System (ADS)
Doxas, I.; Andreou, A.; Lyon, J.; Angelopoulos, V.; Lu, S.; Pritchett, P. L.
2017-12-01
The Peta-op SupErcomputing Unconventional System (PerSEUS) project aims to explore the use of ultra-low-power mixed-signal unconventional computational elements developed by Johns Hopkins University (JHU) for high-performance scientific computing (HPC), and to demonstrate that capability on both fluid and particle plasma codes. We will describe the JHU Mixed-signal Unconventional Supercomputing Elements (MUSE), and report initial results for the Lyon-Fedder-Mobarry (LFM) global magnetospheric MHD code and a UCLA general-purpose relativistic Particle-In-Cell (PIC) code.
DOE Office of Scientific and Technical Information (OSTI.GOV)
KURTZER, GREGORY; MURIKI, KRISHNA
Singularity is a container solution designed to facilitate mobility of compute across systems and HPC infrastructures. It does this by creating minimal containers that are defined by a specfile; files from the host system are used to build the container. The resulting container can then be launched by any Linux computer with Singularity installed, regardless of whether the programs inside the container are present on the target system, are a different version, or are even incompatible versions. Singularity achieves extreme portability without sacrificing usability, thus solving the need for mobility of compute. Singularity containers can be executed within a normal/standard command line process flow.
High throughput computing: a solution for scientific analysis
O'Donnell, M.
2011-01-01
handle job failures due to hardware, software, or network interruptions (obviating the need to manually resubmit the job after each stoppage); be affordable; and most importantly, allow us to complete very large, complex analyses that otherwise would not even be possible. In short, we envisioned a job-management system that would take advantage of unused FORT CPUs within a local area network (LAN) to effectively distribute and run highly complex analytical processes. What we found was a solution that uses High Throughput Computing (HTC) and High Performance Computing (HPC) systems to do exactly that (Figure 1).
Towards real-time remote processing of laparoscopic video
NASA Astrophysics Data System (ADS)
Ronaghi, Zahra; Duffy, Edward B.; Kwartowitz, David M.
2015-03-01
Laparoscopic surgery is a minimally invasive surgical technique where surgeons insert a small video camera into the patient's body to visualize internal organs and small tools to perform surgical procedures. However, the benefit of small incisions has a drawback of limited visualization of subsurface tissues, which can lead to navigational challenges in the delivery of therapy. Image-guided surgery (IGS) uses images to map subsurface structures and can reduce the limitations of laparoscopic surgery. One particular laparoscopic camera system of interest is the vision system of the daVinci-Si robotic surgical system (Intuitive Surgical, Sunnyvale, CA, USA). The video streams generate approximately 360 megabytes of data per second, demonstrating a trend towards increased data sizes in medicine, primarily due to higher-resolution video cameras and imaging equipment. Processing these data on a bedside PC has become challenging, and a high-performance computing (HPC) environment may not always be available at the point of care. To process these data on remote HPC clusters at the typical 30 frames per second (fps) rate, each 11.9 MB video frame must be processed by a server and returned within 1/30th of a second. The ability to acquire, process and visualize data in real time is essential for performance of complex tasks as well as minimizing risk to the patient. As a result, utilizing high-speed networks to access computing clusters will lead to real-time medical image processing and improve surgical experiences by providing real-time augmented laparoscopic data. We aim to develop a medical video processing system using an OpenFlow software-defined network that is capable of connecting to multiple remote medical facilities and HPC servers.
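A quick back-of-the-envelope check (plain arithmetic, not project code) reproduces the data rates quoted above:

    # Back-of-the-envelope check of the figures quoted in the abstract.
    frame_mb = 11.9                    # MB per video frame
    fps = 30                           # frames per second

    throughput_mb_s = frame_mb * fps   # ~357 MB/s, i.e. roughly 360 MB/s
    deadline_ms = 1000.0 / fps         # ~33.3 ms round-trip budget per frame

    print(f"required throughput: {throughput_mb_s:.0f} MB/s")
    print(f"per-frame deadline : {deadline_ms:.1f} ms")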
Facilitating Co-Design for Extreme-Scale Systems Through Lightweight Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engelmann, Christian; Lauer, Frank
This work focuses on tools for investigating algorithm performance at extreme scale with millions of concurrent threads and for evaluating the impact of future architecture choices to facilitate the co-design of high-performance computing (HPC) architectures and applications. The approach focuses on lightweight simulation of extreme-scale HPC systems with the needed amount of accuracy. The prototype presented in this paper is able to provide this capability using a parallel discrete event simulation (PDES), such that a Message Passing Interface (MPI) application can be executed at extreme scale, and its performance properties can be evaluated. The results of an initial prototype are encouraging as a simple 'hello world' MPI program could be scaled up to 1,048,576 virtual MPI processes on a four-node cluster, and the performance properties of two MPI programs could be evaluated at up to 16,384 virtual MPI processes on the same system.
An Optimizing Compiler for Petascale I/O on Leadership-Class Architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kandemir, Mahmut Taylan; Choudary, Alok; Thakur, Rajeev
In high-performance computing (HPC), parallel I/O architectures usually have very complex hierarchies with multiple layers that collectively constitute an I/O stack, including high-level I/O libraries such as PnetCDF and HDF5, I/O middleware such as MPI-IO, and parallel file systems such as PVFS and Lustre. Our DOE project explored automated instrumentation and compiler support for I/O intensive applications. The project made significant progress towards understanding the complex I/O hierarchies of high-performance storage systems (including storage caches, HDDs, and SSDs), and designing and implementing state-of-the-art compiler/runtime system technology that targets I/O intensive HPC applications on leadership-class machines. This final report summarizes the major achievements of the project and also points out promising future directions. Two new sections in this report compared to the previous report are IOGenie and SSD/NVM-specific optimizations.
NREL Evaluates Aquarius Liquid-Cooled High-Performance Computing Technology
The evaluation aims to influence the modern data center designer towards adoption of liquid cooling. Aquila and Sandia chose NREL's HPC Data Center for the initial installation and evaluation because the data center is configured for liquid cooling, along with the required instrumentation to...
High performance computing (HPC) requirements for the new generation variable grid resolution (VGR) global climate models differ from that of traditional global models. A VGR global model with 15 km grids over the CONUS stretching to 60 km grids elsewhere will have about ~2.5 tim...
NASA Astrophysics Data System (ADS)
Callaghan, S.; Maechling, P. J.; Juve, G.; Vahi, K.; Deelman, E.; Jordan, T. H.
2015-12-01
The CyberShake computational platform, developed by the Southern California Earthquake Center (SCEC), is an integrated collection of scientific software and middleware that performs 3D physics-based probabilistic seismic hazard analysis (PSHA) for Southern California. CyberShake integrates large-scale and high-throughput research codes to produce probabilistic seismic hazard curves for individual locations of interest and hazard maps for an entire region. A recent CyberShake calculation produced about 500,000 two-component seismograms for each of 336 locations, resulting in over 300 million synthetic seismograms in a Los Angeles-area probabilistic seismic hazard model. CyberShake calculations require a series of scientific software programs. Early computational stages produce data used as inputs by later stages, so we describe CyberShake calculations using a workflow definition language. Scientific workflow tools automate and manage the input and output data and enable remote job execution on large-scale HPC systems. To satisfy the requests of broad impact users of CyberShake data, such as seismologists, utility companies, and building code engineers, we successfully completed CyberShake Study 15.4 in April and May 2015, calculating a 1 Hz urban seismic hazard map for Los Angeles. We distributed the calculation between the NSF Track 1 system NCSA Blue Waters, the DOE Leadership-class system OLCF Titan, and USC's Center for High Performance Computing. This study ran for over 5 weeks, burning about 1.1 million node-hours and producing over half a petabyte of data. The CyberShake Study 15.4 results doubled the maximum simulated seismic frequency from 0.5 Hz to 1.0 Hz as compared to previous studies, representing a factor of 16 increase in computational complexity. We will describe how our workflow tools supported splitting the calculation across multiple systems. We will explain how we modified CyberShake software components, including GPU implementations and migrating from file-based communication to MPI messaging, to greatly reduce the I/O demands and node-hour requirements of CyberShake. We will also present performance metrics from CyberShake Study 15.4, and discuss challenges that producers of Big Data on open-science HPC resources face moving forward.
Mini-Ckpts: Surviving OS Failures in Persistent Memory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fiala, David; Mueller, Frank; Ferreira, Kurt Brian
Concern is growing in the high-performance computing (HPC) community on the reliability of future extreme-scale systems. Current efforts have focused on application fault-tolerance rather than the operating system (OS), despite the fact that recent studies have suggested that failures in OS memory are more likely. The OS is critical to a system's correct and efficient operation of the node and processes it governs -- and in HPC also for any other nodes a parallelized application runs on and communicates with: Any single node failure generally forces all processes of this application to terminate due to tight communication in HPC. Therefore, the OS itself must be capable of tolerating failures. In this work, we introduce mini-ckpts, a framework which enables application survival despite the occurrence of a fatal OS failure or crash. Mini-ckpts achieves this tolerance by ensuring that the critical data describing a process is preserved in persistent memory prior to the failure. Following the failure, the OS is rejuvenated via a warm reboot and the application continues execution effectively making the failure and restart transparent. The mini-ckpts rejuvenation and recovery process is measured to take between three to six seconds and has a failure-free overhead of between 3-5% for a number of key HPC workloads. In contrast to current fault-tolerance methods, this work ensures that the operating and runtime system can continue in the presence of faults. This is a much finer-grained and dynamic method of fault-tolerance than the current, coarse-grained, application-centric methods. Handling faults at this level has the potential to greatly reduce overheads and enables mitigation of additional fault scenarios.
I/O load balancing for big data HPC applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paul, Arnab K.; Goyal, Arpit; Wang, Feiyi
High Performance Computing (HPC) big data problems require efficient distributed storage systems. However, at scale, such storage systems often experience load imbalance and resource contention due to two factors: the bursty nature of scientific application I/O; and the complex I/O path that is without centralized arbitration and control. For example, the extant Lustre parallel file system, which supports many HPC centers, comprises numerous components connected via custom network topologies, and serves varying demands of a large number of users and applications. Consequently, some storage servers can be more loaded than others, which creates bottlenecks and reduces overall application I/O performance. Existing solutions typically focus on per-application load balancing, and thus are not as effective given their lack of a global view of the system. In this paper, we propose a data-driven approach to load balance the I/O servers at scale, targeted at Lustre deployments. To this end, we design a global mapper on the Lustre Metadata Server, which gathers runtime statistics from key storage components on the I/O path, and applies Markov chain modeling and a minimum-cost maximum-flow algorithm to decide where data should be placed. Evaluation using a realistic system simulator and a real setup shows that our approach yields better load balancing, which in turn can improve end-to-end performance.
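The balancer described above runs inside Lustre with live runtime statistics; purely as a toy stand-in for the minimum-cost maximum-flow step (hypothetical shard names, server names, and load numbers), the sketch below uses networkx to place data shards preferentially on lightly loaded servers:

    # Toy illustration (not the paper's Lustre implementation): place data shards
    # on storage servers with a minimum-cost maximum-flow, where edge costs encode
    # the servers' current load so lightly loaded servers are preferred.
    import networkx as nx

    shards = [f"shard{i}" for i in range(6)]
    server_load = {"ost0": 8, "ost1": 2, "ost2": 5}   # invented load metric

    G = nx.DiGraph()
    for s in shards:
        G.add_edge("src", s, capacity=1, weight=0)
        for srv, load in server_load.items():
            G.add_edge(s, srv, capacity=1, weight=load)   # cost ~ current load
    for srv in server_load:
        G.add_edge(srv, "sink", capacity=3, weight=0)     # per-server placement limit

    flow = nx.max_flow_min_cost(G, "src", "sink")
    placement = {s: srv for s in shards
                 for srv, f in flow[s].items() if f == 1}
    print(placement)   # shards end up skewed toward the lightly loaded servers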
Primary osseous hemangiopericytoma in the thoracic spine.
Ren, Ke; Zhou, Xing; Wu, SuJia; Sun, Xiaoliang
2014-01-01
Hemangiopericytoma (HPC) is a rare tumor of the central nervous system, most commonly found in the cranial cavity. HPCs in the spine are rare, and very few of them are primary osseous HPCs. The aims of this study were to describe a rare case of primary osseous HPC in the thoracic spine and review the literature. A 54-year-old man presented with a 3-month history of back pain. A neurological examination revealed no motor or sensory deficits. Magnetic resonance imaging (MRI) and computed tomography (CT) scans showed a tumor originating from the bone structure of the T10 vertebra with paravertebral extension, and chest CT revealed pulmonary metastases. A laminectomy, facetectomy, and subtotal resection of the tumor were performed with posterior pedicle screw system fixation followed by radiotherapy. The post-operative course was uneventful. His back pain resolved completely after surgery. The patient survived with tumor during the 18-month follow-up period. Histopathology and immunohistologic findings were consistent with HPC. On immunohistochemistry, the tumor was positive for vimentin and CD34, partially positive for S-100, but negative for EMA, desmin, CD117, and CD1a. A literature review identified eight such cases reported between 1942 and 2013. In conclusion, the clinical manifestations of primary osseous spinal HPCs are different from those of intraspinal meningeal HPCs. Although showing certain variability, histopathology and immunohistochemical examinations are essential to establish the diagnosis. Surgical resection and radiotherapy are the treatments of choice. *These authors contributed equally to this work.
NASA Astrophysics Data System (ADS)
Guan, Zhen; Pekurovsky, Dmitry; Luce, Jason; Thornton, Katsuyo; Lowengrub, John
The structural phase field crystal (XPFC) model can be used to model grain growth in polycrystalline materials at diffusive time scales while maintaining atomic-scale resolution. However, the governing equation of the XPFC model is an integro-partial differential equation (IPDE), which poses challenges in implementation on high performance computing (HPC) platforms. In collaboration with the XSEDE Extended Collaborative Support Service, we developed a distributed-memory HPC solver for the XPFC model, which combines parallel multigrid and P3DFFT. Performance benchmarking on the Stampede supercomputer indicates near-linear strong and weak scaling for both the multigrid solve and the transfer time between the multigrid and FFT modules up to 1024 cores. Scalability of the FFT module begins to decline at 128 cores, but it is sufficient for the type of problem we will be examining. We have demonstrated simulations using 1024 cores, and we expect to achieve 4096 cores and beyond. Ongoing work involves optimization of MPI/OpenMP-based codes for the Intel KNL many-core architecture. This optimizes the code for coming pre-exascale systems, in particular many-core systems such as Stampede 2.0 and Cori 2 at NERSC, without sacrificing efficiency on other general HPC systems.
Additive Manufacturing and High-Performance Computing: a Disruptive Latent Technology
NASA Astrophysics Data System (ADS)
Goodwin, Bruce
2015-03-01
This presentation will discuss the relationship between recent advances in Additive Manufacturing (AM) technology, High-Performance Computing (HPC) simulation and design capabilities, and related advances in Uncertainty Quantification (UQ), and then examine their impacts upon national and international security. The presentation surveys how AM accelerates the fabrication process, while HPC combined with UQ provides a fast track for the engineering design cycle. The combination of AM and HPC/UQ almost eliminates the iterative engineering design and prototype cycle, thereby dramatically reducing cost of production and time-to-market. These methods thereby present significant benefits for US national interests, both civilian and military, in an age of austerity. Finally, considering cyber security issues and the advent of the "cloud," these disruptive, currently latent technologies may well enable proliferation and so challenge both nuclear and non-nuclear aspects of international security.
2009 fault tolerance for extreme-scale computing workshop, Albuquerque, NM - March 19-20, 2009.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Katz, D. S.; Daly, J.; DeBardeleben, N.
2009-02-01
This is a report on the third in a series of petascale workshops co-sponsored by Blue Waters and TeraGrid to address challenges and opportunities for making effective use of emerging extreme-scale computing. This workshop was held to discuss fault tolerance on large systems for running large, possibly long-running applications. The main point of the workshop was to have systems people, middleware people (including fault-tolerance experts), and applications people talk about the issues and figure out what needs to be done, mostly at the middleware and application levels, to run such applications on the emerging petascale systems, without having faults cause large numbers of application failures. The workshop found that there is considerable interest in fault tolerance, resilience, and reliability of high-performance computing (HPC) systems in general, at all levels of HPC. The only way to recover from faults is through the use of some redundancy, either in space or in time. Redundancy in time, in the form of writing checkpoints to disk and restarting at the most recent checkpoint after a fault that causes an application to crash/halt, is the most common tool used in applications today, but there are questions about how long this can continue to be a good solution as systems and memories grow faster than I/O bandwidth to disk. There is interest in both modifications to this, such as checkpoints to memory, partial checkpoints, and message logging, and alternative ideas, such as in-memory recovery using residues. We believe that systematic exploration of these ideas holds the most promise for the scientific applications community. Fault tolerance has been an issue of discussion in the HPC community for at least the past 10 years; but much like other issues, the community has managed to put off addressing it during this period. There is a growing recognition that as systems continue to grow to petascale and beyond, the field is approaching the point where we don't have any choice but to address this through R&D efforts.
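For readers unfamiliar with the checkpoint/restart pattern the workshop report refers to, the following minimal sketch (illustrative file name and workload, not tied to any particular HPC code) shows the redundancy-in-time idea of periodically saving state and resuming from the most recent checkpoint:

    # Minimal sketch of checkpoint/restart ("redundancy in time"): periodically
    # write state to disk so a crashed run can resume from the latest checkpoint
    # instead of starting over. File name and work loop are illustrative only.
    import os
    import pickle

    CKPT = "state.ckpt"

    def load_checkpoint():
        if os.path.exists(CKPT):
            with open(CKPT, "rb") as f:
                return pickle.load(f)
        return {"step": 0, "value": 0.0}           # fresh start

    def save_checkpoint(state):
        tmp = CKPT + ".tmp"
        with open(tmp, "wb") as f:
            pickle.dump(state, f)
        os.replace(tmp, CKPT)                       # atomic rename avoids torn files

    state = load_checkpoint()
    for step in range(state["step"], 1000):
        state["value"] += step * 1e-3               # stand-in for real computation
        state["step"] = step + 1
        if step % 100 == 0:
            save_checkpoint(state)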
Canon, Shane
2018-01-24
DOE JGI's Zhong Wang, chair of the High-performance Computing session, gives a brief introduction before Berkeley Lab's Shane Canon talks about "Exploiting HPC Platforms for Metagenomics: Challenges and Opportunities" at the Metagenomics Informatics Challenges Workshop held at the DOE JGI on October 12-13, 2011.
ASCR/HEP Exascale Requirements Review Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Habib, Salman; Roser, Robert; Gerber, Richard
2016-03-30
This draft report summarizes and details the findings, results, and recommendations derived from the ASCR/HEP Exascale Requirements Review meeting held in June, 2015. The main conclusions are as follows. 1) Larger, more capable computing and data facilities are needed to support HEP science goals in all three frontiers: Energy, Intensity, and Cosmic. The expected scale of the demand at the 2025 timescale is at least two orders of magnitude -- and in some cases greater -- than that available currently. 2) The growth rate of data produced by simulations is overwhelming the current ability, of both facilities and researchers, to store and analyze it. Additional resources and new techniques for data analysis are urgently needed. 3) Data rates and volumes from HEP experimental facilities are also straining the ability to store and analyze large and complex data volumes. Appropriately configured leadership-class facilities can play a transformational role in enabling scientific discovery from these datasets. 4) A close integration of HPC simulation and data analysis will aid greatly in interpreting results from HEP experiments. Such an integration will minimize data movement and facilitate interdependent workflows. 5) Long-range planning between HEP and ASCR will be required to meet HEP's research needs. To best use ASCR HPC resources the experimental HEP program needs a) an established long-term plan for access to ASCR computational and data resources, b) an ability to map workflows onto HPC resources, c) the ability for ASCR facilities to accommodate workflows run by collaborations that can have thousands of individual members, d) to transition codes to the next-generation HPC platforms that will be available at ASCR facilities, e) to build up and train a workforce capable of developing and using simulations and analysis to support HEP scientific research on next-generation systems.
Python and HPC for High Energy Physics Data Analyses
Sehrish, S.; Kowalkowski, J.; Paterno, M.; ...
2017-01-01
High-level abstractions in Python that can utilize computing hardware well seem to be an attractive option for writing data reduction and analysis tasks. In this paper, we explore the features available in Python which are useful and efficient for end user analysis in High Energy Physics (HEP). A typical vertical slice of an HEP data analysis is somewhat fragmented: the state of the reduction/analysis process must be saved at certain stages to allow for selective reprocessing of only parts of a generally time-consuming workflow. Also, algorithms tend to be modular because of the heterogeneous nature of most detectors and the need to analyze different parts of the detector separately before combining the information. This fragmentation causes difficulties for interactive data analysis, and as data sets increase in size and complexity (O(10 TiB) for a “small” neutrino experiment to the O(10 PiB) currently held by the CMS experiment at the LHC), data analysis methods traditional to the field must evolve to make optimum use of emerging HPC technologies and platforms. Mainstream big data tools, while suggesting a direction in terms of what can be done if an entire data set can be available across a system and analysed with high-level programming abstractions, are not designed with either scientific computing generally, or modern HPC platform features in particular, such as data caching levels, in mind. Our example HPC use case is a search for a new elementary particle which might explain the phenomenon known as “Dark Matter”. Here, using data from the CMS detector, we will use HDF5 as our input data format, and MPI with Python to implement our use case.
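The paper's analysis code is not reproduced here; the sketch below (with hypothetical file and dataset names) only illustrates the basic HDF5-plus-MPI-with-Python pattern it describes, where each rank reads its own slice of a dataset with h5py and the partial results are combined with mpi4py:

    # Sketch of the HDF5 + MPI-with-Python pattern described above (hypothetical
    # file and dataset names): each rank reads a contiguous slice of the events
    # and the partial counts are combined with an allreduce.
    import h5py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # Independent read-only access per rank; parallel HDF5 is not required here.
    with h5py.File("events.h5", "r") as f:          # hypothetical input file
        dset = f["events/missing_et"]               # hypothetical dataset
        n = dset.shape[0]
        lo = rank * n // size
        hi = (rank + 1) * n // size
        local = dset[lo:hi]                         # each rank reads its own slice

    local_selected = int((local > 200.0).sum())     # toy selection cut
    total_selected = comm.allreduce(local_selected, op=MPI.SUM)

    if rank == 0:
        print(f"selected events: {total_selected}")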
HPC AND GRID COMPUTING FOR INTEGRATIVE BIOMEDICAL RESEARCH
Kurc, Tahsin; Hastings, Shannon; Kumar, Vijay; Langella, Stephen; Sharma, Ashish; Pan, Tony; Oster, Scott; Ervin, David; Permar, Justin; Narayanan, Sivaramakrishnan; Gil, Yolanda; Deelman, Ewa; Hall, Mary; Saltz, Joel
2010-01-01
Integrative biomedical research projects query, analyze, and integrate many different data types and make use of datasets obtained from measurements or simulations of structure and function at multiple biological scales. With the increasing availability of high-throughput and high-resolution instruments, the integrative biomedical research imposes many challenging requirements on software middleware systems. In this paper, we look at some of these requirements using example research pattern templates. We then discuss how middleware systems, which incorporate Grid and high-performance computing, could be employed to address the requirements. PMID:20107625
Rolex: Resilience-oriented language extensions for extreme-scale systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lucas, Robert F.; Hukerikar, Saurabh
2016-05-26
Future exascale high-performance computing (HPC) systems will be constructed from VLSI devices that will be less reliable than those used today, and faults will become the norm, not the exception. This will pose significant problems for system designers and programmers, who for half-a-century have enjoyed an execution model that assumed correct behavior by the underlying computing system. The mean time to failure (MTTF) of the system scales inversely to the number of components in the system and therefore faults and resultant system level failures will increase, as systems scale in terms of the number of processor cores and memory modules used. However every error detected need not cause catastrophic failure. Many HPC applications are inherently fault resilient. Yet it is the application programmers who have this knowledge but lack mechanisms to convey it to the system. In this paper, we present new Resilience Oriented Language Extensions (Rolex) which facilitate the incorporation of fault resilience as an intrinsic property of the application code. We describe the syntax and semantics of the language extensions as well as the implementation of the supporting compiler infrastructure and runtime system. Furthermore, our experiments show that an approach that leverages the programmer's insight to reason about the context and significance of faults to the application outcome significantly improves the probability that an application runs to a successful conclusion.
Khoo, James B.; Sittampalam, Kesavan; Chee, Soo K.
2008-01-01
We report an extremely rare case of malignant hemangiopericytoma (HPC) of the parotid gland and its metastatic spread to lung, liver, and skeletal muscle. Computed tomography (CT) imaging, histopathological and immunohistochemical methods were employed to study the features of malignant HPC and its metastases. CT imaging was helpful to determine the exact location, involvement of adjacent structures and vascularity, as well as evaluating pulmonary, hepatic, peritoneal, and muscular metastases. Immunohistochemical and histopathological features of the primary tumor as well as the metastases were consistent with the diagnosis of malignant HPC. PMID:18940737
Final Report on the Proposal to Provide Asian Science and Technology Information
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kahaner, David K.
2003-07-23
The Asian Technology Information Program (ATIP) conducted a seven-month Asian science and technology information program for the Office of Energy Research (ER), U.S. Department of Energy (DOE). The seven-month program consisted of 1) monitoring, analyzing, and disseminating science and technology trends and developments associated with Asian high performance computing and communications (HPC), networking, and associated topics, 2) access to ATIP's annual series of Asian S&T reports for ER and HPC-related personnel, and 3) supporting DOE and ER designated visits to Asia to study and assess Asian HPC.
Unified Performance and Power Modeling of Scientific Workloads
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, Shuaiwen; Barker, Kevin J.; Kerbyson, Darren J.
2013-11-17
It is expected that scientific applications executing on future large-scale HPC must be optimized not only in terms of performance, but also in terms of power consumption. As power and energy become increasingly constrained resources, researchers and developers must have access to tools that will allow for accurate prediction of both performance and power consumption. Reasoning about performance and power consumption in concert will be critical for achieving maximum utilization of limited resources on future HPC systems. To this end, we present a unified performance and power model for the Nek-Bone mini-application developed as part of the DOE's CESAR Exascale Co-Design Center. Our models consider the impact of computation, point-to-point communication, and collective communication.
Harvey Mudd 2014-2015 Computer Science Conduit Clinic Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aspesi, G; Bai, J; Deese, R
2015-05-12
Conduit, a new open-source library developed at Lawrence Livermore National Laboratory, provides a C++ application programming interface (API) to describe and access scientific data. Conduit's primary use is for in-memory data exchange in high performance computing (HPC) applications. Our team tested and improved Conduit to make it more appealing to potential adopters in the HPC community. We extended Conduit's capabilities by prototyping four libraries: one for parallel communication using MPI, one for I/O functionality, one for aggregating performance data, and one for data visualization.
Collective input/output under memory constraints
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Yin; Chen, Yong; Zhuang, Yu
2014-12-18
Compared with current high-performance computing (HPC) systems, exascale systems are expected to have much less memory per node, which can significantly reduce necessary collective input/output (I/O) performance. In this study, we introduce a memory-conscious collective I/O strategy that takes into account memory capacity and bandwidth constraints. The new strategy restricts aggregation data traffic within disjointed subgroups, coordinates I/O accesses in intranode and internode layers, and determines I/O aggregators at run time considering memory consumption among processes. We have prototyped the design and evaluated it with commonly used benchmarks to verify its potential. The evaluation results demonstrate that this strategy holds promise in mitigating the memory pressure, alleviating the contention for memory bandwidth, and improving the I/O performance for projected extreme-scale systems. Given the importance of supporting increasingly data-intensive workloads and projected memory constraints on increasingly larger scale HPC systems, this new memory-conscious collective I/O can have a significant positive impact on scientific discovery productivity.
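As a toy illustration of the run-time aggregator choice described above (not the paper's implementation, and with invented memory figures), one could rank the processes in each node's subgroup by free memory and pick the top few as I/O aggregators:

    # Toy sketch of run-time I/O aggregator selection under memory constraints
    # (not the paper's implementation): within each node's process subgroup,
    # choose the processes with the most free memory as aggregators.
    def pick_aggregators(free_mem_by_rank, aggregators_per_node=1):
        """free_mem_by_rank: {node: {rank: free_gib}} -> {node: [aggregator ranks]}."""
        chosen = {}
        for node, ranks in free_mem_by_rank.items():
            ordered = sorted(ranks, key=ranks.get, reverse=True)
            chosen[node] = ordered[:aggregators_per_node]
        return chosen

    # Invented example: two nodes, four ranks each, free memory in GiB.
    stats = {
        "node0": {0: 3.1, 1: 0.8, 2: 2.4, 3: 1.0},
        "node1": {4: 1.2, 5: 2.9, 6: 0.5, 7: 2.7},
    }
    print(pick_aggregators(stats, aggregators_per_node=2))
    # {'node0': [0, 2], 'node1': [5, 7]}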
Research of aerohydrodynamic and aeroelastic processes on PNRPU HPC system
NASA Astrophysics Data System (ADS)
Modorskii, V. Ya.; Shevelev, N. A.
2016-10-01
Research on aerohydrodynamic and aeroelastic processes using the PNRPU high-performance computing system is actively conducted within the university's priority development area "Aviation engine and gas turbine technology". Work is carried out in two directions: development and use of domestic software, and use of well-known foreign licensed applied software packages. In addition, a third direction, associated with the verification of computational experiments through physical modeling with unique proprietary experimental installations, is being developed.
Evaluating Application Resilience with XRay
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Sui; Bronevetsky, Greg; Li, Bin
2015-05-07
The rising count and shrinking feature size of transistors within modern computers is making them increasingly vulnerable to various types of soft faults. This problem is especially acute in high-performance computing (HPC) systems used for scientific computing, because these systems include many thousands of compute cores and nodes, all of which may be utilized in a single large-scale run. The increasing vulnerability of HPC applications to errors induced by soft faults is motivating extensive work on techniques to make these applications more resilient to such faults, ranging from generic techniques such as replication or checkpoint/restart to algorithm-specific error detection and tolerance techniques. Effective use of such techniques requires a detailed understanding of how a given application is affected by soft faults to ensure that (i) efforts to improve application resilience are spent in the code regions most vulnerable to faults and (ii) the appropriate resilience technique is applied to each code region. This paper presents XRay, a tool to view the application vulnerability to soft errors, and illustrates how XRay can be used in the context of a representative application. In addition to providing actionable insights into application behavior, XRay automatically selects the number of fault injection experiments required to provide an informative view of application behavior, ensuring that the information is statistically well-grounded without performing unnecessary experiments.
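The abstract does not spell out how XRay sizes its fault-injection campaign; one standard way to reason about such a choice (offered here only as an illustration, not as XRay's actual method) is the normal-approximation sample size for estimating a failure probability to a target margin of error:

    # One standard way to size a fault-injection campaign (not necessarily XRay's
    # method): the normal-approximation sample size for estimating a failure
    # probability p to within a margin of error e at a given confidence level.
    import math

    def injections_needed(p_guess=0.5, margin=0.02, z=1.96):
        """Worst case is p_guess=0.5; z=1.96 corresponds to 95% confidence."""
        return math.ceil(z**2 * p_guess * (1.0 - p_guess) / margin**2)

    print(injections_needed())                           # roughly 2.4k runs for +/-2% at 95%
    print(injections_needed(p_guess=0.1, margin=0.02))   # fewer when failures are rare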
NASA Center for Climate Simulation (NCCS) Advanced Technology AT5 Virtualized Infiniband Report
NASA Technical Reports Server (NTRS)
Thompson, John H.; Bledsoe, Benjamin C.; Wagner, Mark; Shakshober, John; Fromkin, Russ
2013-01-01
The NCCS is part of the Computational and Information Sciences and Technology Office (CISTO) of Goddard Space Flight Center's (GSFC) Sciences and Exploration Directorate. The NCCS's mission is to enable scientists to increase their understanding of the Earth, the solar system, and the universe by supplying state-of-the-art high performance computing (HPC) solutions. To accomplish this mission, the NCCS (https://www.nccs.nasa.gov) provides high performance compute engines, mass storage, and network solutions to meet the specialized needs of the Earth and space science user communities.
Toward performance portability of the Albany finite element analysis code using the Kokkos library
DOE Office of Scientific and Technical Information (OSTI.GOV)
Demeshko, Irina; Watkins, Jerry; Tezaur, Irina K.
Performance portability on heterogeneous high-performance computing (HPC) systems is a major challenge faced today by code developers: parallel code needs to be executed correctly as well as with high performance on machines with different architectures, operating systems, and software libraries. The finite element method (FEM) is a popular and flexible method for discretizing partial differential equations arising in a wide variety of scientific, engineering, and industrial applications that require HPC. This paper presents some preliminary results pertaining to our development of a performance portable implementation of the FEM-based Albany code. Performance portability is achieved using the Kokkos library. We present performance results for the Aeras global atmosphere dynamical core module in Albany. Finally, numerical experiments show that our single code implementation gives reasonable performance across three multicore/many-core architectures: NVIDIA Graphics Processing Units (GPUs), Intel Xeon Phis, and multicore CPUs.
Webinar: Delivering Transformational HPC Solutions to Industry
Streitz, Frederick
2018-01-16
Dr. Frederick Streitz, director of the High Performance Computing Innovation Center, discusses Lawrence Livermore National Laboratory's computational capabilities and the expertise available to industry in this webinar.
OpenTopography: Addressing Big Data Challenges Using Cloud Computing, HPC, and Data Analytics
NASA Astrophysics Data System (ADS)
Crosby, C. J.; Nandigam, V.; Phan, M.; Youn, C.; Baru, C.; Arrowsmith, R.
2014-12-01
OpenTopography (OT) is a geoinformatics-based data facility initiated in 2009 for democratizing access to high-resolution topographic data, derived products, and tools. Hosted at the San Diego Supercomputer Center (SDSC), OT utilizes cyberinfrastructure, including large-scale data management, high-performance computing, and service-oriented architectures to provide efficient Web-based access to large, high-resolution topographic datasets. OT collocates data with processing tools to enable users to quickly access custom data and derived products for their application. OT's ongoing R&D efforts aim to solve emerging technical challenges associated with exponential growth in data, higher order data products, as well as user base. Optimization of data management strategies can be informed by a comprehensive set of OT user access metrics that allows us to better understand usage patterns with respect to the data. By analyzing the spatiotemporal access patterns within the datasets, we can map areas of the data archive that are highly active (hot) versus the ones that are rarely accessed (cold). This enables us to architect a tiered storage environment consisting of high performance disk storage (SSD) for the hot areas and less expensive slower disk for the cold ones, thereby optimizing price-to-performance. From a compute perspective, OT is looking at cloud based solutions such as the Microsoft Azure platform to handle sudden increases in load. An OT virtual machine image in Microsoft's VM Depot can be invoked and deployed quickly in response to increased system demand. OT has also integrated SDSC HPC systems like the Gordon supercomputer into our infrastructure tier to enable compute intensive workloads like parallel computation of hydrologic routing on high resolution topography. This capability also allows OT to scale to HPC resources during high loads to meet user demand and provide more efficient processing. With a growing user base and maturing scientific user community comes new requests for algorithms and processing capabilities. To address this demand, OT is developing an extensible service based architecture for integrating community-developed software. This "pluggable" approach to Web service deployment will enable new processing and analysis tools to run collocated with OT hosted data.
Could Blobs Fuel Storage-Based Convergence between HPC and Big Data?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matri, Pierre; Alforov, Yevhen; Brandon, Alvaro
The increasingly growing data sets processed on HPC platforms raise major challenges for the underlying storage layer. A promising alternative to POSIX-IO-compliant file systems are simpler blobs (binary large objects), or object storage systems. Such systems offer lower overhead and better performance at the cost of largely unused features such as file hierarchies or permissions. Similarly, blobs are increasingly considered for replacing distributed file systems for big data analytics or as a base for storage abstractions such as key-value stores or time-series databases. This growing interest in such object storage on HPC and big data platforms raises the question: Are blobs the right level of abstraction to enable storage-based convergence between HPC and Big Data? In this paper we study the impact of blob-based storage for real-world applications on HPC and cloud environments. The results show that blob-based storage convergence is possible, leading to a significant performance improvement on both platforms.
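For readers unfamiliar with the blob abstraction, the toy sketch below shows the flat put/get interface being contrasted with hierarchical POSIX file systems; the class and method names are illustrative and do not correspond to any particular object store's API.

```python
# A minimal, in-memory illustration of a flat blob store: no directory
# hierarchy, no permissions, just opaque keys mapped to opaque bytes.
class BlobStore:
    def __init__(self):
        self._blobs = {}                 # key -> bytes

    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = bytes(data)

    def get(self, key: str) -> bytes:
        return self._blobs[key]

    def delete(self, key: str) -> None:
        self._blobs.pop(key, None)

store = BlobStore()
store.put("sim/step0042/temperature", b"\x00" * 1024)   # key is just a string
print(len(store.get("sim/step0042/temperature")))       # 1024
```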
DOE Office of Scientific and Technical Information (OSTI.GOV)
Di, Sheng; Berrocal, Eduardo; Cappello, Franck
The silent data corruption (SDC) problem is attracting increasing attention because it is expected to have a great impact on exascale HPC applications. SDC faults are hazardous in that they pass unnoticed by hardware and can lead to wrong computation results. In this work, we formulate SDC detection as a runtime one-step-ahead prediction method, leveraging multiple linear prediction methods in order to improve the detection results. The contributions are twofold: (1) we propose an error feedback control model that can reduce the prediction errors for different linear prediction methods, and (2) we propose a spatial-data-based even-sampling method to minimize the detection overheads (including memory and computation cost). We implement our algorithms in the fault tolerance interface, a fault tolerance library with multiple checkpoint levels, such that users can conveniently protect their HPC applications against both SDC errors and fail-stop errors. We evaluate our approach by using large-scale traces from well-known, large-scale HPC applications, as well as by running those HPC applications on a real cluster environment. Experiments show that our error feedback control model can improve detection sensitivity by 34-189% for bit-flip memory errors injected with bit positions in the range [20,30], without any degradation in detection accuracy. Furthermore, memory size can be reduced by 33% with our spatial-data even-sampling method, with only a slight and graceful degradation in detection sensitivity.
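A minimal sketch of the underlying idea, assuming a simple linear extrapolation as the predictor and an exponentially weighted error term as the feedback (this is an illustration, not the paper's algorithm or the fault tolerance library's API):

```python
# One-step-ahead SDC detection sketch: predict each new value from the two
# previous ones, keep a running (feedback) estimate of the normal prediction
# error, and flag values whose error is far above that estimate.
import numpy as np

def detect_sdc(series, alpha=0.2, k=5.0, warmup=10):
    """Return indices whose value deviates strongly from a linear prediction."""
    flags, err_ema = [], None
    for t in range(2, len(series)):
        predicted = 2.0 * series[t - 1] - series[t - 2]       # linear extrapolation
        error = abs(series[t] - predicted)
        if err_ema is None:
            err_ema = error                                    # initialise feedback term
        elif t > warmup and error > k * err_ema:
            flags.append(t)                                    # suspected silent corruption
        else:
            err_ema = (1 - alpha) * err_ema + alpha * error    # error feedback update
    return flags

data = np.sin(np.linspace(0, 10, 200))
data[120] += 0.5                                               # injected memory error
print(detect_sdc(data))                                        # [120, 121, 122]
```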
Spatial Support Vector Regression to Detect Silent Errors in the Exascale Era
DOE Office of Scientific and Technical Information (OSTI.GOV)
Subasi, Omer; Di, Sheng; Bautista-Gomez, Leonardo
As the exascale era approaches, the increasing capacity of high-performance computing (HPC) systems with targeted power and energy budget goals introduces significant challenges in reliability. Silent data corruptions (SDCs), or silent errors, are one of the major sources that corrupt the execution results of HPC applications without being detected. In this work, we explore a low-memory-overhead SDC detector, by leveraging epsilon-insensitive support vector machine regression, to detect SDCs that occur in HPC applications that can be characterized by an impact error bound. The key contributions are threefold. (1) Our design takes spatial features (i.e., neighbouring data values for each data point in a snapshot) into the training data, such that little memory overhead (less than 1%) is introduced. (2) We provide an in-depth study of the detection ability and performance with different parameters, and we optimize the detection range carefully. (3) Experiments with eight real-world HPC applications show that our detector can achieve detection sensitivity (i.e., recall) of up to 99% while suffering a false positive rate of less than 1% for most cases. Our detector incurs low performance overhead, 5% on average, for all benchmarks studied in the paper. Compared with other state-of-the-art techniques, our detector exhibits the best tradeoff considering detection ability and overheads.
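The spatial idea can be illustrated with a small, hedged sketch: train an epsilon-insensitive SVR to predict each point of a 1-D snapshot from its two neighbours and flag points whose deviation exceeds a chosen bound. The threshold, parameters, and data below are arbitrary; this is not the paper's implementation.

```python
# Spatial-feature SDC detection sketch: predict each point from its neighbours
# with epsilon-insensitive support vector regression, then flag large deviations.
import numpy as np
from sklearn.svm import SVR

def neighbour_features(snapshot):
    # Each sample's features are its left and right neighbours (wrap at edges).
    return np.column_stack([np.roll(snapshot, 1), np.roll(snapshot, -1)])

clean = np.sin(np.linspace(0, 20, 400))
model = SVR(kernel="rbf", C=10.0, epsilon=0.001).fit(neighbour_features(clean), clean)

corrupted = clean.copy()
corrupted[250] += 0.5                            # injected silent error
pred = model.predict(neighbour_features(corrupted))
suspects = np.where(np.abs(corrupted - pred) > 0.1)[0]
print(suspects)                                  # 250 should be among the flagged points
```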
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crussell, Jonathan; Boote, Jeffrey W.; Fritz, David Jakob
Networked Information Technology systems play a key role supporting critical government, military, and private computer installations. Many of today's critical infrastructure systems have strong dependencies on secure information exchange among geographically dispersed facilities. As operations become increasingly dependent on this information exchange, they also become targets for exploitation. The need to protect data and defend these systems from external attack has become increasingly vital, while the threats have become sophisticated and pervasive, making the challenges daunting. Enter Emulytics.
FLAME: A platform for high performance computing of complex systems, applied for three case studies
Kiran, Mariam; Bicak, Mesude; Maleki-Dizaji, Saeedeh; ...
2011-01-01
FLAME allows complex models to be automatically parallelised on High Performance Computing (HPC) grids, enabling large numbers of agents to be simulated over short periods of time. Modellers are typically hindered by the complexity of porting models to parallel platforms and by the time taken to run large simulations on a single machine, both of which FLAME overcomes. Three case studies from different disciplines were modelled using FLAME and are presented along with their performance results on a grid.
76 FR 45786 - Advanced Scientific Computing Advisory Committee; Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-01
... updates. EU Data Initiative. HPC & EERE Wind Program. Early Career Research on Energy Efficient Interconnect for Exascale Computing. Separating Algorithm and Implementation. Update on ASCR exascale planning...
An Overview of the Computational Physics and Methods Group at Los Alamos National Laboratory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, Randal Scott
CCS Division was formed to strengthen the visibility and impact of computer science and computational physics research on strategic directions for the Laboratory. Both computer science and computational science are now central to scientific discovery and innovation. They have become indispensable tools for all other scientific missions at the Laboratory. CCS Division forms a bridge between external partners and Laboratory programs, bringing new ideas and technologies to bear on today's important problems and attracting high-quality technical staff members to the Laboratory. The Computational Physics and Methods Group CCS-2 conducts methods research and develops scientific software aimed at the latest and emerging HPC systems.
Dynamic provisioning of a HEP computing infrastructure on a shared hybrid HPC system
NASA Astrophysics Data System (ADS)
Meier, Konrad; Fleig, Georg; Hauth, Thomas; Janczyk, Michael; Quast, Günter; von Suchodoletz, Dirk; Wiebelt, Bernd
2016-10-01
Experiments in high-energy physics (HEP) rely on elaborate hardware, software and computing systems to sustain the high data rates necessary to study rare physics processes. The Institut für Experimentelle Kernphysik (EKP) at KIT is a member of the CMS and Belle II experiments, located at the LHC and the SuperKEKB accelerators, respectively. These experiments share the requirement that enormous amounts of measurement data must be processed and analyzed, and that a comparable number of simulated events is required to compare experimental results with theoretical predictions. Classical HEP computing centers are dedicated sites which support multiple experiments and have the required software pre-installed. Nowadays, funding agencies encourage research groups to participate in shared HPC cluster models, where scientists from different domains use the same hardware to increase synergies. This shared usage proves to be challenging for HEP groups, due to their specialized software setup, which includes a custom OS (often Scientific Linux), libraries and applications. To overcome this hurdle, the EKP and the data center team of the University of Freiburg have developed a system to enable the HEP use case on a shared HPC cluster. To achieve this, an OpenStack-based virtualization layer is installed on top of a bare-metal cluster. While other user groups can run their batch jobs via the Moab workload manager directly on bare metal, HEP users can request virtual machines with a specialized machine image which contains a dedicated operating system and software stack. In contrast to similar installations, in this hybrid setup no static partitioning of the cluster into a physical and a virtualized segment is required. As a unique feature, the placement of the virtual machines on the cluster nodes is scheduled by Moab, and the job lifetime is coupled to the lifetime of the virtual machine. This allows for a seamless integration with the jobs sent by other user groups and honors the fairshare policies of the cluster. The developed thin integration layer between OpenStack and Moab can be adapted to other batch servers and virtualization systems, making the concept applicable to other cluster operators as well. This contribution reports on the concept and implementation of an OpenStack-virtualized cluster used for HEP workflows. While the full cluster will be installed in spring 2016, a test-bed setup with 800 cores has been used to study the overall system performance, and dedicated HEP jobs were run in a virtualized environment over many weeks. Furthermore, the dynamic integration of the virtualized worker nodes, depending on the workload at the institute's computing system, is described.
1001 Ways to run AutoDock Vina for virtual screening
NASA Astrophysics Data System (ADS)
Jaghoori, Mohammad Mahdi; Bleijlevens, Boris; Olabarriaga, Silvia D.
2016-03-01
Large-scale computing technologies have enabled high-throughput virtual screening involving thousands to millions of drug candidates. It is not trivial, however, for biochemical scientists to evaluate the technical alternatives and their implications for running such large experiments. Besides experience with the molecular docking tool itself, the scientist needs to learn how to run it on high-performance computing (HPC) infrastructures, and understand the impact of the choices made. Here, we review such considerations for a specific tool, AutoDock Vina, and use experimental data to illustrate the following points: (1) an additional level of parallelization increases virtual screening throughput on a multi-core machine; (2) capturing of the random seed is not enough (though necessary) for reproducibility on heterogeneous distributed computing systems; (3) the overall time spent on the screening of a ligand library can be improved by analysis of factors affecting execution time per ligand, including number of active torsions, heavy atoms and exhaustiveness. We also illustrate differences among four common HPC infrastructures: grid, Hadoop, small cluster and multi-core (virtual machine on the cloud). Our analysis shows that these platforms are suitable for screening experiments of different sizes. These considerations can guide scientists when choosing the best computing platform and set-up for their future large virtual screening experiments.
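Point (1), adding a process-level layer of parallelism, might look like the following sketch, which runs one single-threaded Vina instance per ligand and records each seed for reproducibility. The receptor and ligand file names are assumptions, and the command-line options shown are the commonly documented Vina flags; check them against your installed version.

```python
# Process-level parallel virtual screening sketch: one single-threaded Vina
# run per ligand, seeds captured so individual dockings can be reproduced.
import subprocess
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

def dock(ligand: Path, seed: int):
    out = ligand.with_suffix(".docked.pdbqt")
    subprocess.run(
        ["vina", "--receptor", "receptor.pdbqt", "--ligand", str(ligand),
         "--out", str(out), "--cpu", "1", "--seed", str(seed),
         "--exhaustiveness", "8"],
        check=True, capture_output=True)
    return ligand.name, seed                    # keep the seed for reproducibility

if __name__ == "__main__":
    ligands = sorted(Path("ligands").glob("*.pdbqt"))   # hypothetical library folder
    with ProcessPoolExecutor(max_workers=8) as pool:
        for name, seed in pool.map(dock, ligands, range(len(ligands))):
            print(f"{name}\tseed={seed}")
```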
Lee, Justin Q; Sutherland, Robert J; McDonald, Robert J
2017-09-01
There is a substantial body of evidence that the hippocampus (HPC) plays an essential role in context discrimination in rodents. Studies reporting anterograde amnesia (AA) used repeated, alternating, distributed conditioning and extinction sessions to measure context fear discrimination. In addition, there is uncertainty about the extent of damage to the HPC. Here, we induced conditioned fear prior to discrimination tests, and rats sustained extensive, quantified pre- or post-training HPC damage. Unlike previous work, we found that extensive HPC damage spares context discrimination; we observed no AA. There must be a non-HPC system that can acquire long-term memories that support context fear discrimination. Post-training HPC damage caused retrograde amnesia (RA) for context discrimination, even when rats were fear conditioned for multiple sessions. We discuss the implications of these findings for understanding the role of the HPC in long-term memory. © 2017 Wiley Periodicals, Inc.
FOSS GIS on the GFZ HPC cluster: Towards a service-oriented Scientific Geocomputation Environment
NASA Astrophysics Data System (ADS)
Loewe, P.; Klump, J.; Thaler, J.
2012-12-01
High performance compute clusters can be used as geocomputation workbenches. Their wealth of resources enables us to take on geocomputation tasks which exceed the limitations of smaller systems. These general capabilities can be harnessed via tools such as a Geographic Information System (GIS), provided they are able to utilize the available cluster configuration/architecture and provide a sufficient degree of user friendliness to allow for wide application. While server-level computing is clearly not sufficient for the growing number of data- or computation-intensive tasks undertaken, these tasks do not come close to meeting the requirements for access to "top shelf" national cluster facilities. Until recently, such geocomputation research was therefore effectively barred by a lack of access to adequate resources. In this paper we report on the experience gained by providing GRASS GIS as a software service on an HPC compute cluster at the German Research Centre for Geosciences using Platform Computing's Load Sharing Facility (LSF). GRASS GIS is the oldest and largest Free and Open Source (FOSS) GIS project. During ramp-up in 2011, multiple versions of GRASS GIS (v 6.4.2, 6.5 and 7.0) were installed on the HPC compute cluster, which currently consists of 234 nodes with 480 CPUs providing 3084 cores. Nineteen different processing queues with varying hardware capabilities and priorities are provided, allowing for fine-grained scheduling and load balancing. After successful initial testing, mechanisms were developed to deploy scripted geocomputation tasks onto dedicated processing queues. The mechanisms are based on earlier work by NETELER et al. (2008) and allow all 3084 cores to be used for GRASS-based geocomputation work. In practice, however, applications are limited to the resources assigned to their respective queue. Applications of the new GIS functionality so far comprise hydrological analysis, remote sensing, and the generation of maps of simulated tsunamis in the Mediterranean Sea for the Tsunami Atlas of the FP-7 TRIDEC Project (www.tridec-online.eu). This included the processing of complex problems requiring significant amounts of processing time, up to a full 20 CPU days. This GRASS GIS-based service is provided as a research utility in the sense of "Software as a Service" (SaaS) and is a first step towards a GFZ corporate cloud service.
Desktop supercomputer: what can it do?
NASA Astrophysics Data System (ADS)
Bogdanov, A.; Degtyarev, A.; Korkhov, V.
2017-12-01
The paper addresses the issues of solving complex problems that require the use of supercomputers or multiprocessor clusters, which are available to most researchers nowadays. Efficient distribution of high performance computing resources according to actual application needs has been a major research topic since high-performance computing (HPC) technologies became widely available. At the same time, comfortable and transparent access to these resources was a key user requirement. In this paper we discuss approaches to build a virtual private supercomputer available at the user's desktop: a virtual computing environment tailored specifically for a target user with a particular target application. We describe and evaluate possibilities to create the virtual supercomputer based on light-weight virtualization technologies, and analyze the efficiency of our approach compared to traditional methods of HPC resource management.
Large-scale parallel genome assembler over cloud computing environment.
Das, Arghya Kusum; Koppa, Praveen Kumar; Goswami, Sayan; Platania, Richard; Park, Seung-Jong
2017-06-01
The size of high throughput DNA sequencing data has already reached the terabyte scale. To manage this huge volume of data, many downstream sequencing applications started using locality-based computing over different cloud infrastructures to take advantage of elastic (pay as you go) resources at a lower cost. However, the locality-based programming model (e.g. MapReduce) is relatively new. Consequently, developing scalable data-intensive bioinformatics applications using this model and understanding the hardware environment that these applications require for good performance, both require further research. In this paper, we present a de Bruijn graph oriented Parallel Giraph-based Genome Assembler (GiGA), as well as the hardware platform required for its optimal performance. GiGA uses the power of Hadoop (MapReduce) and Giraph (large-scale graph analysis) to achieve high scalability over hundreds of compute nodes by collocating the computation and data. GiGA achieves significantly higher scalability with competitive assembly quality compared to contemporary parallel assemblers (e.g. ABySS and Contrail) over traditional HPC cluster. Moreover, we show that the performance of GiGA is significantly improved by using an SSD-based private cloud infrastructure over traditional HPC cluster. We observe that the performance of GiGA on 256 cores of this SSD-based cloud infrastructure closely matches that of 512 cores of traditional HPC cluster.
NASA Astrophysics Data System (ADS)
Ryu, Hoon; Jeong, Yosang; Kang, Ji-Hoon; Cho, Kyu Nam
2016-12-01
Modelling of multi-million-atom semiconductor structures is important as it not only predicts the properties of physically realizable novel materials, but can also accelerate advanced device designs. This work elaborates a new Technology Computer-Aided Design (TCAD) tool for nanoelectronics modelling, which uses a sp3d5s* tight-binding approach to describe multi-million-atom structures and simulate their electronic structures with high performance computing (HPC), including atomic effects such as alloy and dopant disorders. Named the Quantum simulation tool for Advanced Nanoscale Devices (Q-AND), the tool shows good scalability on traditional multi-core HPC clusters, implying a strong capability for large-scale electronic structure simulations, with particularly remarkable performance enhancement on recent clusters of Intel Xeon Phi coprocessors. A review of the recent modelling study conducted to understand an experimental work on highly phosphorus-doped silicon nanowires is presented to demonstrate the utility of Q-AND. Having been developed via an Intel Parallel Computing Center project, Q-AND will be opened to the public to establish a sound framework for nanoelectronics modelling with advanced many-core HPC clusters. With details of the development methodology and an exemplary study of dopant electronics, this work presents a practical guideline for TCAD development to researchers in the field of computational nanoelectronics.
Scheduling Operations for Massive Heterogeneous Clusters
NASA Technical Reports Server (NTRS)
Humphrey, John; Spagnoli, Kyle
2013-01-01
High-performance computing (HPC) programming has become increasingly difficult with the advent of hybrid supercomputers consisting of multicore CPUs and accelerator boards such as the GPU. Manual tuning of software to achieve high performance on this type of machine has been performed by programmers. This is needlessly difficult and prone to being invalidated by new hardware, new software, or changes in the underlying code. A system was developed for task-based representation of programs, which when coupled with a scheduler and runtime system, allows for many benefits, including higher performance and utilization of computational resources, easier programming and porting, and adaptations of code during runtime. The system consists of a method of representing computer algorithms as a series of data-dependent tasks. The series forms a graph, which can be scheduled for execution on many nodes of a supercomputer efficiently by a computer algorithm. The schedule is executed by a dispatch component, which is tailored to understand all of the hardware types that may be available within the system. The scheduler is informed by a cluster mapping tool, which generates a topology of available resources and their strengths and communication costs. Software is decoupled from its hardware, which aids in porting to future architectures. A computer algorithm schedules all operations, which for systems of high complexity (i.e., most NASA codes), cannot be performed optimally by a human. The system aids in reducing repetitive code, such as communication code, and aids in the reduction of redundant code across projects. It adds new features to code automatically, such as recovering from a lost node or the ability to modify the code while running. In this project, the innovators at the time of this reporting intend to develop two distinct technologies that build upon each other and both of which serve as building blocks for more efficient HPC usage. First is the scheduling and dynamic execution framework, and the second is scalable linear algebra libraries that are built directly on the former.
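The core idea of representing a program as data-dependent tasks can be illustrated with a toy dependency-resolution sketch (not the system described above); a real scheduler would additionally weigh hardware topology and communication costs when dispatching each released task.

```python
# Toy task-graph scheduler: tasks declare their prerequisites, and a task is
# released for execution as soon as all of its inputs have completed.
from collections import deque

def schedule(tasks):
    """tasks: {name: set(of prerequisite names)}. Returns an executable order."""
    remaining = {name: set(deps) for name, deps in tasks.items()}
    ready = deque(n for n, deps in remaining.items() if not deps)
    order = []
    while ready:
        task = ready.popleft()
        order.append(task)                      # a dispatcher would run it here
        for name, deps in remaining.items():
            if task in deps:
                deps.remove(task)
                if not deps:
                    ready.append(name)          # all inputs ready: release task
    return order

graph = {"load": set(), "fft": {"load"}, "filter": {"load"},
         "combine": {"fft", "filter"}, "write": {"combine"}}
print(schedule(graph))   # ['load', 'fft', 'filter', 'combine', 'write']
```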
Performance Analysis of Ivshmem for High-Performance Computing in Virtual Machines
NASA Astrophysics Data System (ADS)
Ivanovic, Pavle; Richter, Harald
2018-01-01
High-performance computing (HPC) is rarely accomplished via virtual machines (VMs). In this paper, we present a remake of ivshmem which can change this. Ivshmem was a shared memory (SHM) between virtual machines on the same server, with SHM-access synchronization included, until about 5 years ago, when newer versions of Linux and its virtualization library libvirt evolved. We restored that SHM-access synchronization feature because it is indispensable for HPC, and made ivshmem runnable with contemporary versions of Linux, libvirt, KVM, QEMU and especially MPICH, which is an implementation of MPI, the standard HPC communication library. Additionally, MPICH was transparently modified by us to include ivshmem, resulting in a three to ten times performance improvement compared to TCP/IP. Furthermore, we have transparently replaced MPI_PUT, a one-sided MPICH communication mechanism, by our own MPI_PUT wrapper. As a result, our ivshmem even surpasses non-virtualized SHM data transfers for block lengths greater than 512 KBytes, showing the benefits of virtualization. All improvements were possible without using SR-IOV.
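For reference, a one-sided MPI_Put exchange, the operation that the described MPI_PUT wrapper accelerates, looks roughly like this in plain mpi4py (ordinary MPI code, not the modified MPICH or the ivshmem path):

```python
# Minimal one-sided communication sketch: rank 0 puts data directly into
# rank 1's memory window. Run with e.g.: mpirun -n 2 python put_demo.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

n = 4
win = MPI.Win.Allocate(n * 8, disp_unit=8, comm=comm)   # 4 doubles per rank
local = np.frombuffer(win.tomemory(), dtype="d")        # view of this rank's window
local[:] = -1.0

src = np.full(n, float(rank), dtype="d")
win.Fence()                                             # open the access epoch
if rank == 0:
    win.Put([src, MPI.DOUBLE], 1)                       # write into rank 1's window
win.Fence()                                             # close the epoch

if rank == 1:
    print("rank 1 window after put:", local)            # [0. 0. 0. 0.]
win.Free()
```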
Agelastos, Anthony; Allan, Benjamin; Brandt, Jim; ...
2016-05-18
A detailed understanding of HPC applications' resource needs and of their complex interactions with each other and with HPC platform resources is critical to achieving scalability and performance. Such understanding has been difficult to achieve because typical application profiling tools do not capture the behaviors of codes under the potentially wide spectrum of actual production conditions, and because typical monitoring tools do not capture system resource usage information with high enough fidelity to gain sufficient insight into application performance and demands. In this paper we present both system and application profiling results based on data obtained through synchronized system-wide monitoring on a production HPC cluster at Sandia National Laboratories (SNL). We demonstrate analytic and visualization techniques that we are using to characterize application and system resource usage under production conditions for a better understanding of application resource needs. Furthermore, our goals are to improve application performance (through understanding application-to-resource mapping and system throughput) and to ensure that future system capabilities match their intended workloads.
Mantle Convection on Modern Supercomputers
NASA Astrophysics Data System (ADS)
Weismüller, J.; Gmeiner, B.; Huber, M.; John, L.; Mohr, M.; Rüde, U.; Wohlmuth, B.; Bunge, H. P.
2015-12-01
Mantle convection is the cause for plate tectonics, the formation of mountains and oceans, and the main driving mechanism behind earthquakes. The convection process is modeled by a system of partial differential equations describing the conservation of mass, momentum and energy. Characteristic to mantle flow is the vast disparity of length scales from global to microscopic, turning mantle convection simulations into a challenging application for high-performance computing. As system size and technical complexity of the simulations continue to increase, design and implementation of simulation models for next generation large-scale architectures is handled successfully only in an interdisciplinary context. A new priority program - named SPPEXA - by the German Research Foundation (DFG) addresses this issue, and brings together computer scientists, mathematicians and application scientists around grand challenges in HPC. Here we report from the TERRA-NEO project, which is part of the high visibility SPPEXA program, and a joint effort of four research groups. TERRA-NEO develops algorithms for future HPC infrastructures, focusing on high computational efficiency and resilience in next generation mantle convection models. We present software that can resolve the Earth's mantle with up to 10^12 grid points and scales efficiently to massively parallel hardware with more than 50,000 processors. We use our simulations to explore the dynamic regime of mantle convection and assess the impact of small scale processes on global mantle flow.
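For orientation, the conserved quantities mentioned above lead, in a common non-dimensional Boussinesq form with constant viscosity, to the following system; the TERRA-NEO model itself may use a more general variable-viscosity formulation.

```latex
% Non-dimensional Boussinesq equations for isoviscous mantle convection
% (a common textbook form, shown only as orientation):
\begin{align}
  \nabla \cdot \mathbf{u} &= 0
    && \text{conservation of mass}\\
  -\nabla p + \nabla^{2}\mathbf{u} + \mathrm{Ra}\,T\,\hat{\mathbf{e}}_{r} &= \mathbf{0}
    && \text{conservation of momentum (Stokes flow)}\\
  \frac{\partial T}{\partial t} + \mathbf{u}\cdot\nabla T &= \nabla^{2}T
    && \text{conservation of energy}
\end{align}
```

Here u is velocity, p pressure, T temperature, Ra the Rayleigh number, and ê_r the radial unit vector; the absence of a time derivative in the momentum equation reflects the creeping (Stokes) flow regime of the mantle.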
LVFS: A Big Data File Storage Bridge for the HPC Community
NASA Astrophysics Data System (ADS)
Golpayegani, N.; Halem, M.; Mauoka, E.; Fonseca, L. F.
2015-12-01
Merging Big Data capabilities into High Performance Computing architecture starts at the file storage level. Heterogeneous storage systems are emerging which offer enhanced features for dealing with Big Data, such as the IBM GPFS storage system's integration into Hadoop Map-Reduce. Taking advantage of these capabilities requires file storage systems to be adaptive and to accommodate these new storage technologies. We present the extension of the Lightweight Virtual File System (LVFS), currently running as the production system for the MODIS Level 1 and Atmosphere Archive and Distribution System (LAADS), to incorporate a flexible plugin architecture which allows easy integration of new HPC hardware and/or software storage technologies without disrupting workflows or system architectures, and with only minimal impact on existing tools. We consider two essential aspects provided by the LVFS plugin architecture needed for the future HPC community. First, it allows for the seamless integration of new and emerging hardware technologies which are significantly different from existing technologies, such as Seagate's Kinetic disks and Intel's 3D XPoint non-volatile storage. Second is the transparent and instantaneous conversion between new software technologies and various file formats. With most current storage systems, a switch in file format would require costly reprocessing and a near doubling of storage requirements. We will install LVFS on UMBC's IBM iDataPlex cluster with a heterogeneous storage architecture utilizing local, remote, and Seagate Kinetic storage as a case study. LVFS merges different kinds of storage architectures to show users a uniform layout and, therefore, prevents any disruption in workflows, architecture design, or tool usage. We will show how LVFS converts HDF data, produced by applying machine learning algorithms to XCO2 Level 2 data from the OCO-2 satellite to derive CO2 surface fluxes, into GeoTIFF for visualization.
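A storage-backend plugin interface of the kind described could look roughly like the sketch below; the class and method names are hypothetical and are not LVFS's actual API.

```python
# Hypothetical plugin interface: each backend (local disk, remote store,
# Kinetic drive, ...) implements the same read/write contract, and a virtual
# file system layer routes requests to the right backend behind one namespace.
from abc import ABC, abstractmethod

class StoragePlugin(ABC):
    @abstractmethod
    def read(self, path: str, offset: int, size: int) -> bytes: ...

    @abstractmethod
    def write(self, path: str, offset: int, data: bytes) -> int: ...

class LocalDiskPlugin(StoragePlugin):
    def read(self, path, offset, size):
        with open(path, "rb") as f:
            f.seek(offset)
            return f.read(size)

    def write(self, path, offset, data):
        with open(path, "r+b") as f:
            f.seek(offset)
            return f.write(data)
```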
NASA Astrophysics Data System (ADS)
Elantkowska, Magdalena; Ruczkowski, Jarosław; Sikorski, Andrzej; Dembczyński, Jerzy
2017-11-01
A parametric analysis of the hyperfine structure (hfs) for the even-parity configurations of atomic terbium (Tb I) is presented in this work. We introduce the complete set of 4f^N core states in our high-performance computing (HPC) calculations. For the calculation of the huge hyperfine structure matrix, requiring approximately 5000 hours when run on a single CPU, we propose methods utilizing a personal computer cluster or, alternatively, a cluster of Microsoft Azure virtual machines (VMs). These methods give a factor-of-12 performance boost, enabling the calculations to complete in an acceptable time.
Accelerating Science with the NERSC Burst Buffer Early User Program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhimji, Wahid; Bard, Debbie; Romanus, Melissa
NVRAM-based Burst Buffers are an important part of the emerging HPC storage landscape. The National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory recently installed one of the first Burst Buffer systems as part of its new Cori supercomputer, collaborating with Cray on the development of the DataWarp software. NERSC has a diverse user base comprised of over 6500 users in 700 different projects spanning a wide variety of scientific computing applications. The use cases of the Burst Buffer at NERSC are therefore also considerable and diverse. We describe here performance measurements and lessons learned from the Burst Buffer Early User Program at NERSC, which selected a number of research projects to gain early access to the Burst Buffer and exercise its capability to enable new scientific advancements. To the best of our knowledge, this is the first time a Burst Buffer has been stressed at scale by diverse, real user workloads, and therefore these lessons will be of considerable benefit to shaping the developing use of Burst Buffers at HPC centers.
High-Performance Computing Data Center Warm-Water Liquid Cooling | Computational Science | NREL
NREL's High-Performance Computing Data Center (HPC Data Center) is liquid cooled. Warm-water liquid cooling technologies offer a more energy-efficient solution.
2013-12-01
The authors present a Computing on Dissemination with predictable contacts (pCoD) algorithm, since it is impossible to reserve task execution time in advance...
Directional hippocampal-prefrontal interactions during working memory.
Liu, Tiaotiao; Bai, Wenwen; Xia, Mi; Tian, Xin
2018-02-15
Working memory refers to a system that is essential for performing complex cognitive tasks such as reasoning, comprehension and learning. Evidence shows that the hippocampus (HPC) and the prefrontal cortex (PFC) play important roles in working memory. The HPC-PFC interaction via theta-band oscillatory synchronization is critical for the successful execution of working memory. However, whether one brain region is leading or lagging relative to the other is still unclear. Therefore, in the present study, we simultaneously recorded local field potentials (LFPs) from rat ventral hippocampus (vHPC) and medial prefrontal cortex (mPFC) while the rats performed a Y-maze working memory task. We then applied an instantaneous-amplitude cross-correlation method to calculate the time lag between mPFC and vHPC and to explore the functional dynamics of the HPC-PFC interaction. Our results showed that a strong lead from vHPC to mPFC preceded an animal's correct choice during the working memory task. These findings suggest that the vHPC-leading interaction contributes to the successful execution of working memory. Copyright © 2017. Published by Elsevier B.V.
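The analysis idea, cross-correlating instantaneous amplitudes and reading the lead/lag from the peak, can be sketched on synthetic signals as follows (illustrative only; not the study's data, filtering, or code):

```python
# Amplitude cross-correlation sketch: take instantaneous amplitudes via the
# Hilbert transform, cross-correlate them, and read the time lag at the peak.
import numpy as np
from scipy.signal import hilbert

fs = 1000.0                                                  # assumed sampling rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)
envelope = 1.0 + 0.5 * np.sin(2 * np.pi * 1.5 * t)           # slow amplitude modulation
vhpc = envelope * np.sin(2 * np.pi * 8 * t)                  # theta-band "vHPC" signal
mpfc = np.roll(envelope, 25) * np.sin(2 * np.pi * 8 * t)     # "mPFC" trails by 25 ms

amp_v = np.abs(hilbert(vhpc))                                # instantaneous amplitudes
amp_m = np.abs(hilbert(mpfc))
xcorr = np.correlate(amp_v - amp_v.mean(), amp_m - amp_m.mean(), mode="full")
lags = np.arange(-len(t) + 1, len(t))
lag_ms = lags[np.argmax(xcorr)] / fs * 1000
print(lag_ms)   # about -25: with this convention the vHPC envelope leads by ~25 ms
```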
NREL, Sandia, and Johnson Controls See Significant Water Savings for HPC Cooling | Energy Systems Integration Facility | NREL
NREL partnered with Sandia National Laboratories and Johnson Controls to install HPC cooling technology that saves about one million gallons of water a year.
Hitier, Martin; Sato, Go; Zhang, Yan-Feng; Besnard, Stephane; Smith, Paul F
2018-06-11
Several studies have demonstrated that electrical activation of the peripheral vestibular system can evoke field potential, multi-unit neuronal activity and acetylcholine release in the hippocampus (HPC). However, no study to date has employed the immediate early gene protein, c-Fos, to investigate the distribution of activation of cells in the HPC following electrical stimulation of the vestibular system. We found that vestibular stimulation increased the number of animals expressing c-Fos in the dorsal HPC compared to sham control rats (P ≤ 0.02), but not in the ventral HPC. c-Fos was also expressed in an increased number of animals in the dorsal dentate gyrus (DG) compared to sham control rats (P ≤ 0.0001), and to a lesser extent in the ventral DG (P ≤ 0.006). The results of this study show that activation of the vestibular system results in a differential increase in the expression of c-Fos across different regions of the HPC. Copyright © 2018 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Brockmann, J. M.; Schuh, W.-D.
2011-07-01
The estimation of the global Earth gravity field, parametrized as a finite spherical harmonic series, is computationally demanding. The computational effort depends on the one hand on the maximal resolution of the spherical harmonic expansion (i.e. the number of parameters to be estimated) and on the other hand on the number of observations (which run to several millions, e.g. for observations from the GOCE satellite mission). To circumvent these restrictions, massively parallel software based on high-performance computing (HPC) libraries such as ScaLAPACK, PBLAS and BLACS was designed in the context of GOCE HPF WP6000 and the GOCO consortium. A prerequisite for the use of these libraries is that all matrices are block-cyclically distributed on a processor grid comprising a large number of (distributed-memory) computers. Using this set of standard HPC libraries has the benefit that, once the matrices are distributed across the computer cluster, a huge set of efficient and highly scalable linear algebra operations can be used.
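For context, the estimation problem is a weighted least-squares adjustment whose normal equations are dense: for a maximum spherical harmonic degree L the number of unknowns grows roughly as (L+1)^2, and the generic form (details of the actual GOCE processing differ) is:

```latex
% Generic weighted least-squares estimate of the spherical harmonic
% coefficients x from observations l, design matrix A and weight matrix P:
\begin{equation}
  \hat{\mathbf{x}}
    = \left(\mathbf{A}^{\mathsf{T}} \mathbf{P}\,\mathbf{A}\right)^{-1}
      \mathbf{A}^{\mathsf{T}} \mathbf{P}\,\mathbf{l}
\end{equation}
```

The dense normal-equation matrix A^T P A, with dimension equal to the number of coefficients, is the kind of matrix that would be distributed block-cyclically across the processor grid and factored with ScaLAPACK-style routines.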
ATLAS and LHC computing on CRAY
NASA Astrophysics Data System (ADS)
Sciacca, F. G.; Haug, S.; ATLAS Collaboration
2017-10-01
Access to and exploitation of large-scale computing resources, such as those offered by general-purpose HPC centres, is an important measure for ATLAS and the other Large Hadron Collider experiments in order to meet the challenge posed by the full exploitation of future data within the constraints of flat budgets. We report on the effort of moving the Swiss WLCG T2 computing, serving ATLAS, CMS and LHCb, from a dedicated cluster to the large Cray systems at the Swiss National Supercomputing Centre CSCS. These systems not only offer very efficient hardware, cooling and highly competent operators, but also have large backfill potential due to their size and multidisciplinary usage, and potential gains due to economies of scale. Technical solutions, performance, expected return and future plans are discussed.
Bethel, EW; Bauer, A; Abbasi, H; ...
2016-06-10
The considerable interest in the high performance computing (HPC) community in analyzing and visualizing data without first writing it to disk, i.e., in situ processing, is due to several factors. First is an I/O cost savings, where data is analyzed/visualized while being generated, without first being stored on a filesystem. Second is the potential for increased accuracy, where fine temporal sampling of transient analysis might expose complex behavior missed by coarse temporal sampling. Third is the ability to use all available resources, CPUs and accelerators, in the computation of analysis products. This STAR paper brings together researchers, developers and practitioners using in situ methods in extreme-scale HPC with the goal of presenting existing methods, infrastructures, and a range of computational science and engineering applications using in situ analysis and visualization.
Template Interfaces for Agile Parallel Data-Intensive Science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramakrishnan, Lavanya; Gunter, Daniel; Pastorello, Gilerto Z.
Tigres provides a programming library to compose and execute large-scale data-intensive scientific workflows from desktops to supercomputers. DOE User Facilities and large science collaborations are increasingly generating data sets large enough that it is no longer practical to download them to a desktop to operate on them. They are instead stored at centralized compute and storage resources such as high performance computing (HPC) centers. Analysis of this data requires the ability to run on these facilities, but with current technologies, scaling an analysis to an HPC center and to a large data set is difficult even for experts. Tigres is addressing the challenge of enabling collaborative analysis of DOE science data through a new concept of reusable "templates" that enable scientists to easily compose, run and manage collaborative computational tasks. These templates define common computation patterns used in analyzing a data set.
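The template idea can be illustrated with a toy map-then-reduce pattern; the names below are purely illustrative and are not the Tigres API.

```python
# Reusable "template" sketch: a map-then-reduce pattern that hides how the
# per-item tasks are launched, so the same analysis runs on a laptop or, with
# a different executor, on an HPC allocation.
from concurrent.futures import ProcessPoolExecutor
from functools import reduce

def parallel_template(map_fn, reduce_fn, inputs, workers=4):
    with ProcessPoolExecutor(max_workers=workers) as pool:
        mapped = list(pool.map(map_fn, inputs))     # independent per-item tasks
    return reduce(reduce_fn, mapped)                # combine the partial results

def count_lines(path):                              # example per-file analysis task
    with open(path) as f:
        return sum(1 for _ in f)

if __name__ == "__main__":
    files = ["run01.csv", "run02.csv", "run03.csv"]  # hypothetical data files
    total = parallel_template(count_lines, lambda a, b: a + b, files)
    print("total records:", total)
```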
NASA Astrophysics Data System (ADS)
Bielski, Conrad; Lemoine, Guido; Syryczynski, Jacek
2009-09-01
High Performance Computing (HPC) hardware solutions such as grid computing and General Processing on a Graphics Processing Unit (GPGPU) are now accessible to users with general computing needs. Grid computing infrastructures in the form of computing clusters or blades are becoming common place and GPGPU solutions that leverage the processing power of the video card are quickly being integrated into personal workstations. Our interest in these HPC technologies stems from the need to produce near real-time maps from a combination of pre- and post-event satellite imagery in support of post-disaster management. Faster processing provides a twofold gain in this situation: 1. critical information can be provided faster and 2. more elaborate automated processing can be performed prior to providing the critical information. In our particular case, we test the use of the PANTEX index which is based on analysis of image textural measures extracted using anisotropic, rotation-invariant GLCM statistics. The use of this index, applied in a moving window, has been shown to successfully identify built-up areas in remotely sensed imagery. Built-up index image masks are important input to the structuring of damage assessment interpretation because they help optimise the workload. The performance of computing the PANTEX workflow is compared on two different HPC hardware architectures: (1) a blade server with 4 blades, each having dual quad-core CPUs and (2) a CUDA enabled GPU workstation. The reference platform is a dual CPU-quad core workstation and the PANTEX workflow total computing time is measured. Furthermore, as part of a qualitative evaluation, the differences in setting up and configuring various hardware solutions and the related software coding effort is presented.
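The texture measure at the heart of such an index can be sketched as a grey-level co-occurrence matrix (GLCM) contrast for a single pixel offset; the real PANTEX index combines several offsets and directions and takes an anisotropic minimum, so the function below is only a simplified illustration.

```python
# Simplified GLCM contrast for one horizontal pixel pair, applied per window;
# built-up areas (heterogeneous texture) yield much higher contrast than
# smooth, homogeneous areas.
import numpy as np

def glcm_contrast(window, levels=8):
    """GLCM contrast of a 2-D window for the horizontal (0, 1) pixel pair."""
    lo, hi = float(window.min()), float(window.max())
    if hi == lo:
        return 0.0                                   # flat window: no texture
    q = np.floor((window - lo) / (hi - lo) * (levels - 1)).astype(int)
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)   # co-occurrence counts
    glcm /= glcm.sum()
    i, j = np.indices(glcm.shape)
    return float(np.sum(glcm * (i - j) ** 2))

rng = np.random.default_rng(1)
smooth = np.tile(np.linspace(0, 1, 32), (32, 1))     # homogeneous area: low contrast
rough = rng.random((32, 32))                         # heterogeneous area: high contrast
print(glcm_contrast(smooth), glcm_contrast(rough))
```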
Climate simulations and services on HPC, Cloud and Grid infrastructures
NASA Astrophysics Data System (ADS)
Cofino, Antonio S.; Blanco, Carlos; Minondo Tshuma, Antonio
2017-04-01
Cloud, Grid and High Performance Computing have changed the accessibility and availability of computing resources for Earth Science research communities, especially for the climate community. These paradigms are modifying the way climate applications are being executed. By using these technologies the number, variety and complexity of experiments and resources are increasing substantially. But, although computational capacity is increasing, the traditional applications and tools used by the community are not good enough to manage this large volume and variety of experiments and computing resources. In this contribution, we evaluate the challenges of running climate simulations and services on Grid, Cloud and HPC infrastructures and how to tackle them. The Grid and Cloud infrastructures provided by EGI's VOs (esr, earth.vo.ibergrid and fedcloud.egi.eu) will be evaluated, as well as HPC resources from the PRACE infrastructure and institutional clusters. To solve those challenges, solutions using the DRM4G framework will be shown. DRM4G provides a good framework to manage a large volume and variety of computing resources for climate experiments. This work has been supported by the Spanish National R&D Plan under projects WRF4G (CGL2011-28864), INSIGNIA (CGL2016-79210-R) and MULTI-SDM (CGL2015-66583-R); the IS-ENES2 project from the 7FP of the European Commission (grant agreement no. 312979); the European Regional Development Fund (ERDF); and the Programa de Personal Investigador en Formación Predoctoral from Universidad de Cantabria and the Government of Cantabria.
An Approach to Integrate a Space-Time GIS Data Model with High Performance Computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Dali; Zhao, Ziliang; Shaw, Shih-Lung
2011-01-01
In this paper, we describe an approach to integrating a Space-Time GIS data model on a high performance computing platform. The Space-Time GIS data model was developed in a desktop computing environment. We use the Space-Time GIS data model to generate a GIS module, which organizes a series of remote sensing data. We are in the process of porting the GIS module into an HPC environment, in which the GIS modules handle large datasets directly via a parallel file system. Although this is an ongoing project, the authors hope this effort can inspire further discussions on the integration of GIS on high performance computing platforms.
Process for Managing and Customizing HPC Operating Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, David ML
2014-04-02
A process for maintaining a custom HPC operating system has been developed at the Environmental Molecular Sciences Laboratory (EMSL) over the past ten years. The process is generic and flexible enough to manage continuous change and keep systems updated, while managing communication through well-defined pieces of software.
Singularity: Scientific containers for mobility of compute.
Kurtzer, Gregory M; Sochat, Vanessa; Bauer, Michael W
2017-01-01
Here we present Singularity, software developed to bring containers and reproducibility to scientific computing. Using Singularity containers, developers can work in reproducible environments of their choosing and design, and these complete environments can easily be copied and executed on other platforms. Singularity is an open source initiative that harnesses the expertise of system and software engineers and researchers alike, and integrates seamlessly into common workflows for both of these groups. As its primary use case, Singularity brings mobility of computing to both users and HPC centers, providing a secure means to capture and distribute software and compute environments. This ability to create and deploy reproducible environments across these centers, a previously unmet need, makes Singularity a game changing development for computational science.
Self-service for software development projects and HPC activities
NASA Astrophysics Data System (ADS)
Husejko, M.; Høimyr, N.; Gonzalez, A.; Koloventzos, G.; Asbury, D.; Trzcinska, A.; Agtzidis, I.; Botrel, G.; Otto, J.
2014-05-01
This contribution describes how CERN has implemented several essential tools for agile software development processes, ranging from version control (Git) to issue tracking (Jira) and documentation (wikis). Running such services in a large organisation like CERN requires many administrative actions by both users and service providers, such as creating software projects, managing access rights, users and groups, and performing tool-specific customisation. Dealing with these requests manually would be a time-consuming task. Another area of our CERN computing services that has required dedicated manual support has been clusters for specific user communities with special needs. Our aim is to move all our services to a layered approach, with the server infrastructure running on the internal cloud computing infrastructure at CERN. This contribution illustrates how we plan to optimise the management of our services by means of an end-user-facing platform acting as a portal into all the related services for software projects, inspired by popular portals for open-source development such as SourceForge, GitHub and others. Furthermore, the contribution discusses recent activities with tests and evaluations of High Performance Computing (HPC) applications on different hardware and software stacks, and plans to offer a dynamically scalable HPC service at CERN, based on affordable hardware.
Unmet needs for analyzing biological big data: A survey of 704 NSF principal investigators.
Barone, Lindsay; Williams, Jason; Micklos, David
2017-10-01
In a 2016 survey of 704 National Science Foundation (NSF) Biological Sciences Directorate principal investigators (BIO PIs), nearly 90% indicated they are currently or will soon be analyzing large data sets. BIO PIs considered a range of computational needs important to their work, including high performance computing (HPC), bioinformatics support, multistep workflows, updated analysis software, and the ability to store, share, and publish data. Previous studies in the United States and Canada emphasized infrastructure needs. However, BIO PIs said the most pressing unmet needs are training in data integration, data management, and scaling analyses for HPC, acknowledging that data science skills will be required to build a deeper understanding of life. This portends a growing data knowledge gap in biology and challenges institutions and funding agencies to redouble their support for computational training in biology.
A web portal for hydrodynamical, cosmological simulations
NASA Astrophysics Data System (ADS)
Ragagnin, A.; Dolag, K.; Biffi, V.; Cadolle Bel, M.; Hammer, N. J.; Krukau, A.; Petkova, M.; Steinborn, D.
2017-07-01
This article describes a data centre hosting a web portal for accessing and sharing the output of large, cosmological, hydro-dynamical simulations with a broad scientific community. It also allows users to receive related scientific data products by directly processing the raw simulation data on a remote computing cluster. The data centre has a multi-layer structure: a web portal, a job control layer, a computing cluster and an HPC storage system. The outer layer enables users to choose an object from the simulations. Objects can be selected by visually inspecting 2D maps of the simulation data, by performing highly compounded and elaborate queries, or graphically by plotting arbitrary combinations of properties. The user can then run analysis tools on a chosen object; these services operate directly on the raw simulation data. The job control layer is responsible for handling and performing the analysis jobs, which are executed on a computing cluster. The innermost layer is formed by an HPC storage system which hosts the large, raw simulation data. The following services are available to users: (I) CLUSTERINSPECT visualizes properties of member galaxies of a selected galaxy cluster; (II) SIMCUT returns the raw data of a sub-volume around a selected object from a simulation, containing all the original, hydro-dynamical quantities; (III) SMAC creates idealized 2D maps of various physical quantities and observables of a selected object; (IV) PHOX generates virtual X-ray observations with specifications of various current and upcoming instruments.
Role of adult neurogenesis in hippocampal-cortical memory consolidation
2014-01-01
Acquired memory is initially dependent on the hippocampus (HPC) for permanent memory formation. This hippocampal dependency of memory recall progressively decays with time, a process that is associated with a gradual increase in dependency upon cortical structures. This process is commonly referred to as systems consolidation theory. In this paper, we first review how memory transitions from hippocampal dependence to cortical dependence, with an emphasis on the interactions that occur between the HPC and cortex during systems consolidation. We also review the mechanisms underlying the gradual decay of HPC dependency during systems consolidation from the perspective of memory erasures by adult hippocampal neurogenesis. Finally, we discuss the relationship between systems consolidation and memory precision. PMID:24552281
Roter, Debra L; Erby, Lori H; Adams, Ann; Buckingham, Christopher D; Vail, Laura; Realpe, Alba; Larson, Susan; Hall, Judith A
2014-09-01
To disentangle the effects of physician gender and patient-centered communication style on patients' oral engagement in depression care. Physician gender, physician race and communication style (high patient-centered (HPC) and low patient-centered (LPC)) were manipulated and presented as videotaped actors within a computer simulated medical visit to assess effects on analogue patient (AP) verbal responsiveness and care ratings. 307 APs (56% female; 70% African American) were randomly assigned to conditions and instructed to verbally respond to depression-related questions and indicate willingness to continue care. Disclosures were coded using the Roter Interaction Analysis System (RIAS). Both male and female APs talked more overall and conveyed more psychosocial and emotional talk to HPC gender-discordant doctors (all p<.05). APs were more willing to continue treatment with gender-discordant HPC physicians (p<.05). No effects were evident in the LPC condition. Findings highlight a role for physician gender when considering active patient engagement in patient-centered depression care. This pattern suggests that there may be largely under-appreciated and consequential effects associated with patient expectations in regard to physician gender, and that these differ by patient gender. High patient-centeredness increases active patient engagement in depression care, especially in gender-discordant dyads. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Atlas : A library for numerical weather prediction and climate modelling
NASA Astrophysics Data System (ADS)
Deconinck, Willem; Bauer, Peter; Diamantakis, Michail; Hamrud, Mats; Kühnlein, Christian; Maciel, Pedro; Mengaldo, Gianmarco; Quintino, Tiago; Raoult, Baudouin; Smolarkiewicz, Piotr K.; Wedi, Nils P.
2017-11-01
The algorithms underlying numerical weather prediction (NWP) and climate models that have been developed in the past few decades face an increasing challenge caused by the paradigm shift imposed by hardware vendors towards more energy-efficient devices. In order to provide a sustainable path to exascale High Performance Computing (HPC), applications become increasingly restricted by energy consumption. As a result, the emerging diverse and complex hardware solutions have a large impact on the programming models traditionally used in NWP software, triggering a rethink of design choices for future massively parallel software frameworks. In this paper, we present Atlas, a new software library that is currently being developed at the European Centre for Medium-Range Weather Forecasts (ECMWF), with the aim of handling data structures required for NWP applications in a flexible and massively parallel way. Atlas provides a versatile framework for the future development of efficient NWP and climate applications on emerging HPC architectures. The applications range from full Earth system models, to specific tools required for post-processing weather forecast products. The Atlas library thus constitutes a step towards affordable exascale high-performance simulations by providing the necessary abstractions that facilitate the application in heterogeneous HPC environments by promoting the co-design of NWP algorithms with the underlying hardware.
Characterizing output bottlenecks in a supercomputer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Bing; Chase, Jeffrey; Dillow, David A
2012-01-01
Supercomputer I/O loads are often dominated by writes. HPC (High Performance Computing) file systems are designed to absorb these bursty outputs at high bandwidth through massive parallelism. However, the delivered write bandwidth often falls well below the peak. This paper characterizes the data absorption behavior of a center-wide shared Lustre parallel file system on the Jaguar supercomputer. We use a statistical methodology to address the challenges of accurately measuring a shared machine under production load and to obtain the distribution of bandwidth across samples of compute nodes, storage targets, and time intervals. We observe and quantify limitations from competing traffic, contention on storage servers and I/O routers, concurrency limitations in the client compute node operating systems, and the impact of variance (stragglers) on coupled output such as striping. We then examine the implications of our results for application performance and the design of I/O middleware systems on shared supercomputers.
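To make the statistical summary concrete, the short sketch below computes the kind of per-sample bandwidth distribution the paper describes; the sample times and transfer size are invented for illustration, not measurements from Jaguar.

```python
# Sketch: summarizing the distribution of delivered write bandwidth across
# repeated samples, in the spirit of the statistical methodology above.
# The elapsed times and the 600 MiB burst size are illustrative assumptions.
import numpy as np

transfer_bytes = 600 * 2**20                      # bytes written per sampled burst (assumed)
elapsed_s = np.array([0.42, 0.51, 0.38, 0.95, 0.47, 1.80, 0.44, 0.50])  # per-sample wall time

bw_mibs = transfer_bytes / elapsed_s / 2**20      # delivered bandwidth per sample, MiB/s

print("median   %.0f MiB/s" % np.median(bw_mibs))
print("p10/p90  %.0f / %.0f MiB/s" % (np.percentile(bw_mibs, 10), np.percentile(bw_mibs, 90)))
# Stragglers (slow samples) drag down coupled output such as striped writes,
# which complete only when the slowest stripe finishes:
print("coupled (slowest-stripe) bandwidth %.0f MiB/s" % bw_mibs.min())
```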
Li, Xinwei; Li, Qiongling; Wang, Xuetong; Li, Deyu; Li, Shuyu
2018-01-01
The hippocampus plays an important role in memory function relying on information interaction between distributed brain areas. The hippocampus can be divided into the anterior and posterior sections with different structure and function along its long axis. The aim of this study is to investigate the effects of normal aging on the structural covariance of the anterior hippocampus (aHPC) and the posterior hippocampus (pHPC). In this study, 240 healthy subjects aged 18-89 years were selected and subdivided into young (18-23 years), middle-aged (30-58 years), and older (61-89 years) groups. The aHPC and pHPC were divided based on the location of the uncal apex in the MNI space. Then, the structural covariance networks were constructed by examining their covariance in gray matter volumes with other brain regions. Finally, the influence of age on the structural covariance of these hippocampal sections was explored. We found that the aHPC and pHPC had different structural covariance patterns, but both of them were associated with the medial temporal lobe and insula. Moreover, both increased and decreased covariances were found with the aHPC but only increased covariance was found with the pHPC with age (p < 0.05, family-wise error corrected). These decreased connections occurred within the default mode network, while the increased connectivity mainly occurred in other memory systems that differ from the hippocampus. This study reveals different age-related influence on the structural networks of the aHPC and pHPC, providing an essential insight into the mechanisms of the hippocampus in normal aging.
NASA Astrophysics Data System (ADS)
Harris, C. K.; Overeem, I.; Hutton, E.; Moriarty, J.; Wiberg, P.
2016-12-01
Numerical models are increasingly used for both research and applied sciences, and it is important that we train students to run models and analyze model data. This is especially true within oceanographic sciences, many of which use hydrodynamic models to address oceanographic transport problems. These models, however, often require a fair amount of training and computer skills before a student can run the models and analyze the large data sets produced by the models. One example is the Regional Ocean Modeling System (ROMS), an open source, three-dimensional primitive equation hydrodynamic ocean model that uses a structured curvilinear horizontal grid. It currently has thousands of users worldwide, and the full model includes modules for sediment transport and biogeochemistry, and several options for turbulence closures and numerical schemes. Implementing ROMS can be challenging to students, however, in part because the code was designed to provide flexibility for the choice of model parameterizations and processes, and to run on a variety of High Performance Computing (HPC) platforms. To provide a more accessible tool for classroom use, we have modified an existing idealized ROMS implementation to be run on an HPC system via the Web Modeling Toolkit (WMT), and developed a series of lesson plans that explore sediment transport within the idealized model domain. This has addressed our goal to provide a relatively easy introduction to the numerical modeling process that can be used within upper level undergraduate and graduate classes to explore sediment transport on continental shelves. The model implementation includes wave forcing, along-shelf currents, a riverine source, and suspended sediment transport. The model calculates suspended transport and deposition of sediment delivered to the continental shelf by a riverine flood. Lesson plans lead the students through running the model on a remote HPC system and modifying the standard model. The lesson plans also include instruction for visualizing the model output within Matlab and Panoply. The lesson plans have been used in graduate and undergraduate classrooms, as well as in clinics aimed at educators. Feedback from these exercises has been used to improve the lesson plans and model implementation.
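For readers who prefer Python to Matlab or Panoply, a minimal sketch of inspecting such model output is shown below; the file and variable names assume common ROMS NetCDF conventions and may differ from the lesson-plan implementation.

```python
# Sketch: loading and plotting a field from ROMS NetCDF history output in Python,
# as an alternative to the Matlab/Panoply steps in the lesson plans. The file
# name and variable names ("ocean_his.nc", "sand_01", "lon_rho", "lat_rho")
# are assumptions based on common ROMS conventions.
import netCDF4
import matplotlib.pyplot as plt

nc = netCDF4.Dataset("ocean_his.nc")
sed = nc.variables["sand_01"][-1, 0, :, :]   # last time step, bottom s-level
lon = nc.variables["lon_rho"][:]
lat = nc.variables["lat_rho"][:]

plt.pcolormesh(lon, lat, sed, shading="auto")
plt.colorbar(label="suspended sediment (kg/m3)")
plt.title("Near-bed suspended sediment, final time step")
plt.savefig("sediment_map.png", dpi=150)
```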
Dynamic Extension of a Virtualized Cluster by using Cloud Resources
NASA Astrophysics Data System (ADS)
Oberst, Oliver; Hauth, Thomas; Kernert, David; Riedel, Stephan; Quast, Günter
2012-12-01
The specific requirements concerning the software environment within the HEP community constrain the choice of resource providers for the outsourcing of computing infrastructure. The use of virtualization in HPC clusters and in the context of cloud resources is therefore a subject of recent developments in scientific computing. The dynamic virtualization of worker nodes in common batch systems provided by ViBatch serves each user with a dynamically virtualized subset of worker nodes on a local cluster. Now it can be transparently extended by the use of common open source cloud interfaces like OpenNebula or Eucalyptus, launching a subset of the virtual worker nodes within the cloud. This paper demonstrates how a dynamically virtualized computing cluster is combined with cloud resources by attaching remotely started virtual worker nodes to the local batch system.
CFD Ventilation Study for the Human Powered Centrifuge at the International Space Station
NASA Technical Reports Server (NTRS)
Son, Chang H.
2011-01-01
The Human Powered Centrifuge (HPC) is a hyper-gravity facility that will be installed on board the International Space Station (ISS) to enable crew exercises under artificial gravity conditions. The HPC equipment includes a bicycle for long-term exercises of a crewmember that provides power for rotation of the HPC at a speed of 30 rpm. A crewmember exercising vigorously on the centrifuge generates several times more carbon dioxide than a crewmember under ordinary conditions. The goal of the study is to analyze the airflow and carbon dioxide distribution within the Pressurized Multipurpose Module (PMM) cabin. The 3D computational model included the PMM cabin. The full unsteady formulation was used for airflow and CO2 transport modeling with the so-called sliding mesh concept: the rotating HPC equipment is considered in a rotating reference frame while the rest of the cabin volume is considered in a stationary reference frame. The localized effects of carbon dioxide dispersion are examined. Strong influence of the rotating HPC equipment on the CO2 distribution is detected and discussed.
User-level framework for performance monitoring of HPC applications
NASA Astrophysics Data System (ADS)
Hristova, R.; Goranov, G.
2013-10-01
HP-SEE is an infrastructure that links the existing HPC facilities in South East Europe into a common infrastructure. Analysis of the performance monitoring of High-Performance Computing (HPC) applications in the infrastructure can be useful for the end user as a diagnostic of the overall performance of their applications. The existing monitoring tools for HP-SEE provide the end user with only aggregated information for all applications. Usually, the user does not have permission to select only the information relevant to them and their applications. In this article we present a framework for performance monitoring of HPC applications in the HP-SEE infrastructure. The framework provides standardized performance metrics, which every user can use in order to monitor their applications. Furthermore, a programming interface has been developed as part of the framework. The interface allows the user to publish metrics data from their applications and to read and analyze the gathered information. Publishing and reading through the framework is possible only with a grid certificate valid for the infrastructure; therefore, users are authorized to access only the data for their own applications.
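The article above does not reproduce the programming interface itself; the sketch below is a hypothetical illustration of what a certificate-authenticated publish/read client could look like. The endpoint, payload fields, and metric names are assumptions, not the HP-SEE framework's actual API.

```python
# Hypothetical sketch of the kind of publish/read interface described above.
# This is NOT the actual HP-SEE framework API; the endpoint, payload fields,
# and the use of an X.509 grid certificate for authentication are assumptions.
import requests

PORTAL = "https://monitoring.example.org/api"        # placeholder endpoint
CERT = ("/path/usercert.pem", "/path/userkey.pem")   # grid certificate + key

def publish_metric(app_id, name, value, timestamp):
    """Publish one metric sample for one of the user's applications."""
    payload = {"application": app_id, "metric": name,
               "value": value, "timestamp": timestamp}
    requests.post(f"{PORTAL}/metrics", json=payload, cert=CERT).raise_for_status()

def read_metrics(app_id, name):
    """Read back the gathered samples; the server authorizes only the owner."""
    r = requests.get(f"{PORTAL}/metrics",
                     params={"application": app_id, "metric": name}, cert=CERT)
    r.raise_for_status()
    return r.json()
```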
Exploiting Parallel R in the Cloud with SPRINT
Piotrowski, M.; McGilvary, G.A.; Sloan, T. M.; Mewissen, M.; Lloyd, A.D.; Forster, T.; Mitchell, L.; Ghazal, P.; Hill, J.
2012-01-01
Background: Advances in DNA Microarray devices and next-generation massively parallel DNA sequencing platforms have led to an exponential growth in data availability but the arising opportunities require adequate computing resources. High Performance Computing (HPC) in the Cloud offers an affordable way of meeting this need. Objectives: Bioconductor, a popular tool for high-throughput genomic data analysis, is distributed as add-on modules for the R statistical programming language but R has no native capabilities for exploiting multi-processor architectures. SPRINT is an R package that enables easy access to HPC for genomics researchers. This paper investigates: setting up and running SPRINT-enabled genomic analyses on Amazon’s Elastic Compute Cloud (EC2), the advantages of submitting applications to EC2 from different parts of the world and, if resource underutilization can improve application performance. Methods: The SPRINT parallel implementations of correlation, permutation testing, partitioning around medoids and the multi-purpose papply have been benchmarked on data sets of various size on Amazon EC2. Jobs have been submitted from both the UK and Thailand to investigate monetary differences. Results: It is possible to obtain good, scalable performance but the level of improvement is dependent upon the nature of the algorithm. Resource underutilization can further improve the time to result. End-user’s location impacts on costs due to factors such as local taxation. Conclusions: Although not designed to satisfy HPC requirements, Amazon EC2 and cloud computing in general provides an interesting alternative and provides new possibilities for smaller organisations with limited funds. PMID:23223611
Optimisation of multiplet identifier processing on a PLAYSTATION® 3
NASA Astrophysics Data System (ADS)
Hattori, Masami; Mizuno, Takashi
2010-02-01
To enable high-performance computing (HPC) for applications with large datasets using a Sony® PLAYSTATION® 3 (PS3™) video game console, we configured a hybrid system consisting of a Windows® PC and a PS3™. To validate this system, we implemented the real-time multiplet identifier (RTMI) application, which identifies multiplets of microearthquakes in terms of the similarity of their waveforms. The cross-correlation computation, which is a core algorithm of the RTMI application, was optimised for the PS3™ platform, while the rest of the computation, including data input and output, remained on the PC. With this configuration, the core part of the algorithm ran 69 times faster than the original program, accelerating total computation speed more than five times. As a result, the system processed up to 2100 total microseismic events, whereas the original implementation had a limit of 400 events. These results indicate that this system enables high-performance computing for large datasets using the PS3™, as long as data transfer time is negligible compared with computation time.
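The cross-correlation kernel at the heart of multiplet identification can be illustrated independently of the PS3™ port; the sketch below computes a normalized waveform similarity in NumPy on synthetic traces (the data and the implied similarity threshold are illustrative only, not the RTMI implementation).

```python
# Sketch: normalized cross-correlation between two microseismic waveforms, the
# core similarity measure in multiplet identification. Synthetic traces stand
# in for real event records.
import numpy as np

def max_normalized_xcorr(a, b):
    """Peak of the normalized cross-correlation over all lags (range [-1, 1])."""
    a = (a - a.mean()) / (a.std() * len(a))
    b = (b - b.mean()) / b.std()
    return np.max(np.correlate(a, b, mode="full"))

rng = np.random.default_rng(0)
template = rng.standard_normal(1000)
shifted = np.roll(template, 37) + 0.2 * rng.standard_normal(1000)   # similar event
unrelated = rng.standard_normal(1000)

print(max_normalized_xcorr(template, shifted))     # close to 1 -> same multiplet
print(max_normalized_xcorr(template, unrelated))   # near 0 -> different source
```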
Use of Continuous Integration Tools for Application Performance Monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vergara Larrea, Veronica G; Joubert, Wayne; Fuson, Christopher B
High performance computing systems are becoming increasingly complex, both in node architecture and in the multiple layers of software stack required to compile and run applications. As a consequence, the likelihood is increasing for application performance regressions to occur as a result of routine upgrades of system software components which interact in complex ways. The purpose of this study is to evaluate the effectiveness of continuous integration tools for application performance monitoring on HPC systems. In addition, this paper also describes a prototype system for application performance monitoring based on Jenkins, a Java-based continuous integration tool. The monitoring system described leverages several features in Jenkins to track application performance results over time. Preliminary results and lessons learned from monitoring applications on Cray systems at the Oak Ridge Leadership Computing Facility are presented.
Ye, Ran; Harte, Federico
2014-03-01
The effect of high pressure homogenization on the improvement of the stability of hydroxypropyl cellulose (HPC) and micellar casein mixtures was investigated. HPC with two molecular weights (80 and 1150 kDa) and micellar casein were mixed in water to a concentration leading to phase separation (0.45% w/v HPC and 3% w/v casein) and immediately subjected to high pressure homogenization ranging from 0 to 300 MPa, in 100 MPa increments. The various dispersions were evaluated for stability, particle size, turbidity, protein content, and viscosity over a period of two weeks, and by Scanning Transmission Electron Microscopy (STEM) at the end of the storage period. The stability of casein-HPC complexes was enhanced with increasing homogenization pressure, especially for the complex containing high molecular weight HPC. The apparent particle size of the complexes was reduced from ~200 nm to ~130 nm when using 300 MPa, corresponding to the sharp decrease of absorbance when compared to the non-homogenized controls. High pressure homogenization reduced the viscosity of HPC-casein complexes regardless of the molecular weight of HPC, and STEM images revealed aggregates consistent with nano-scale protein-polysaccharide interactions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
East, D. R.; Sexton, J.
This was a collaborative effort between Lawrence Livermore National Security, LLC as manager and operator of Lawrence Livermore National Laboratory (LLNL) and IBM TJ Watson Research Center to research, assess feasibility and develop an implementation plan for a High Performance Computing Innovation Center (HPCIC) in the Livermore Valley Open Campus (LVOC). The ultimate goal of this work was to help advance the State of California and U.S. commercial competitiveness in the arena of High Performance Computing (HPC) by accelerating the adoption of computational science solutions, consistent with recent DOE strategy directives. The desired result of this CRADA was a well-researched, carefully analyzed market evaluation that would identify those firms in core sectors of the US economy seeking to adopt or expand their use of HPC to become more competitive globally, and to define how those firms could be helped by the HPCIC with IBM as an integral partner.
Evaluation of FPGA to PC feedback loop
NASA Astrophysics Data System (ADS)
Linczuk, Pawel; Zabolotny, Wojciech M.; Wojenski, Andrzej; Krawczyk, Rafal D.; Pozniak, Krzysztof T.; Chernyshova, Maryna; Czarski, Tomasz; Gaska, Michal; Kasprowicz, Grzegorz; Kowalska-Strzeciwilk, Ewa; Malinowski, Karol
2017-08-01
The paper presents an evaluation study of the performance of the data transmission subsystem which can be used in High Energy Physics (HEP) and other High-Performance Computing (HPC) systems. The test environment consisted of a Xilinx Artix-7 FPGA and a server-grade PC connected via the PCIe 4xGen2 bus. The DMA engine was based on the Xilinx DMA for PCI Express Subsystem, controlled by a modified Xilinx XDMA kernel driver. The research is focused on the influence of the system configuration on achievable throughput and latency of data transfer.
Challenges & Roadmap for Beyond CMOS Computing Simulation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodrigues, Arun F.; Frank, Michael P.
Simulating HPC systems is a difficult task and the emergence of “Beyond CMOS” architectures and execution models will increase that difficulty. This document presents a “tutorial” on some of the simulation challenges faced by conventional and non-conventional architectures (Section 1) and goals and requirements for simulating Beyond CMOS systems (Section 2). These provide background for proposed short- and long-term roadmaps for simulation efforts at Sandia (Sections 3 and 4). Additionally, a brief explanation of a proof-of-concept integration of a Beyond CMOS architectural simulator is presented (Section 2.3).
Decaf: Decoupled Dataflows for In Situ High-Performance Workflows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dreher, M.; Peterka, T.
Decaf is a dataflow system for the parallel communication of coupled tasks in an HPC workflow. The dataflow can perform arbitrary data transformations ranging from simply forwarding data to complex data redistribution. Decaf does this by allowing the user to allocate resources and execute custom code in the dataflow. All communication through the dataflow is efficient parallel message passing over MPI. The runtime for calling tasks is entirely message-driven; Decaf executes a task when all messages for the task have been received. Such a message-driven runtime allows cyclic task dependencies in the workflow graph, for example, to enact computational steering based on the result of downstream tasks. Decaf includes a simple Python API for describing the workflow graph. This allows Decaf to stand alone as a complete workflow system, but Decaf can also be used as the dataflow layer by one or more other workflow systems to form a heterogeneous task-based computing environment. In one experiment, we couple a molecular dynamics code with a visualization tool using the FlowVR and Damaris workflow systems and Decaf for the dataflow. In another experiment, we test the coupling of a cosmology code with Voronoi tessellation and density estimation codes using MPI for the simulation, the DIY programming model for the two analysis codes, and Decaf for the dataflow. Such workflows consisting of heterogeneous software infrastructures exist because components are developed separately with different programming models and runtimes, and this is the first time that such heterogeneous coupling of diverse components was demonstrated in situ on HPC systems.
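Decaf's actual Python workflow description is not reproduced here; as a generic illustration of the idea (tasks plus a resource-carrying dataflow link between them), one could express a coupled simulation/analysis graph as plain Python data, as in the sketch below. The field names are invented for illustration and are not Decaf's API.

```python
# Generic sketch of describing a two-task coupled workflow as a Python graph,
# in the spirit of Decaf's Python-described workflow graphs. This is NOT the
# actual Decaf API; node/edge fields are invented for illustration only.
workflow = {
    "nodes": {
        "simulation": {"executable": "./md_sim",  "procs": 512},
        "analysis":   {"executable": "./voronoi", "procs": 64},
    },
    "edges": [
        # dataflow link: its own MPI processes may transform/redistribute data
        {"from": "simulation", "to": "analysis", "procs": 16,
         "transform": "redistribute_particles"},
    ],
}

total_procs = sum(n["procs"] for n in workflow["nodes"].values()) \
            + sum(e["procs"] for e in workflow["edges"])
print(f"MPI ranks to allocate: {total_procs}")   # 512 + 64 + 16 = 592
```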
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sehrish, S.; Kowalkowski, J.; Paterno, M.
High level abstractions in Python that can utilize computing hardware well seem to be an attractive option for writing data reduction and analysis tasks. In this paper, we explore the features available in Python which are useful and efficient for end user analysis in High Energy Physics (HEP). A typical vertical slice of an HEP data analysis is somewhat fragmented: the state of the reduction/analysis process must be saved at certain stages to allow for selective reprocessing of only parts of a generally time-consuming workflow. Also, algorithms tend to be modular because of the heterogeneous nature of most detectors and the need to analyze different parts of the detector separately before combining the information. This fragmentation causes difficulties for interactive data analysis, and as data sets increase in size and complexity (O(10 TiB) for a “small” neutrino experiment to the O(10 PiB) currently held by the CMS experiment at the LHC), data analysis methods traditional to the field must evolve to make optimum use of emerging HPC technologies and platforms. Mainstream big data tools, while suggesting a direction in terms of what can be done if an entire data set can be available across a system and analysed with high-level programming abstractions, are not designed with either scientific computing generally, or modern HPC platform features in particular, such as data caching levels, in mind. Our example HPC use case is a search for a new elementary particle which might explain the phenomenon known as “Dark Matter”. Here, using data from the CMS detector, we will use HDF5 as our input data format, and MPI with Python to implement our use case.
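As a minimal sketch of the HDF5-plus-MPI approach described above, the example below reads slices of an event dataset in parallel with h5py and mpi4py and reduces a simple selection count; the file layout, dataset name, and cut value are assumptions for illustration, not the actual CMS analysis, and it requires h5py built with MPI support.

```python
# Sketch: parallel read of an HDF5 event dataset with MPI from Python.
# Dataset name ("events/missing_et") and the 200.0 cut are assumptions.
from mpi4py import MPI
import h5py
import numpy as np

comm = MPI.COMM_WORLD
with h5py.File("cms_events.h5", "r", driver="mpio", comm=comm) as f:
    dset = f["events/missing_et"]
    # split the event range evenly across ranks
    n = dset.shape[0]
    lo = comm.rank * n // comm.size
    hi = (comm.rank + 1) * n // comm.size
    local = dset[lo:hi]

# each rank counts events passing a (hypothetical) missing-energy cut ...
local_pass = np.count_nonzero(local > 200.0)
# ... and the counts are reduced to rank 0
total_pass = comm.reduce(local_pass, op=MPI.SUM, root=0)
if comm.rank == 0:
    print(f"events passing cut: {total_pass} of {n}")
```

On an HPC platform such a script would typically be launched with the site's MPI launcher, e.g. something like `mpirun -n 64 python analyze.py`.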
WinHPC System Configuration | High-Performance Computing | NREL
CPUs with 48GB of memory. Node 04 has dual Intel Xeon E5530 CPUs with 24GB of memory. Nodes 05-20 have dual AMD Opteron 2374 HE CPUs with 16GB of memory. Nodes 21-30 have been decommissioned. Nodes 31-35 have dual Intel Xeon X5675 CPUs with 48GB of memory. Nodes 36-37 have dual Intel Xeon E5-2680 CPUs with
Middleware for big data processing: test results
NASA Astrophysics Data System (ADS)
Gankevich, I.; Gaiduchok, V.; Korkhov, V.; Degtyarev, A.; Bogdanov, A.
2017-12-01
Dealing with large volumes of data is resource-consuming work which is more and more often delegated not only to a single computer but also to a whole distributed computing system at once. As the number of computers in a distributed system increases, the amount of effort put into effective management of the system grows. When the system reaches some critical size, much effort should be put into improving its fault tolerance. It is difficult to estimate when some particular distributed system needs such facilities for a given workload, so instead they should be implemented in a middleware which works efficiently with a distributed system of any size. It is also difficult to estimate whether a volume of data is large or not, so the middleware should also work with data of any volume. In other words, the purpose of the middleware is to provide facilities that adapt a distributed computing system to a given workload. In this paper we introduce such a middleware appliance. Tests show that this middleware is well-suited for typical HPC and big data workloads and its performance is comparable with well-known alternatives.
Using Formal Grammars to Predict I/O Behaviors in HPC: The Omnisc'IO Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dorier, Matthieu; Ibrahim, Shadi; Antoniu, Gabriel
2016-08-01
The increasing gap between the computation performance of post-petascale machines and the performance of their I/O subsystem has motivated many I/O optimizations including prefetching, caching, and scheduling. In order to further improve these techniques, modeling and predicting spatial and temporal I/O patterns of HPC applications as they run has become crucial. In this paper we present Omnisc'IO, an approach that builds a grammar-based model of the I/O behavior of HPC applications and uses it to predict when future I/O operations will occur, and where and how much data will be accessed. To infer grammars, Omnisc'IO is based on StarSequitur, a novel algorithm extending Nevill-Manning's Sequitur algorithm. Omnisc'IO is transparently integrated into the POSIX and MPI I/O stacks and does not require any modification in applications or higher-level I/O libraries. It works without any prior knowledge of the application and converges to accurate predictions of any N future I/O operations within a couple of iterations. Its implementation is efficient in both computation time and memory footprint.
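Omnisc'IO's grammar model does not fit in a few lines, but the underlying idea (learn the repeating structure of the intercepted I/O stream, then predict the next operation) can be illustrated with a much simpler first-order model; the sketch below is that simplification, not StarSequitur, and the operation trace is synthetic.

```python
# Simplified sketch of grammar-based I/O prediction: learn the repeating
# structure of an application's I/O operations and predict the next one.
# A first-order (bigram) model stands in for Omnisc'IO's Sequitur-based grammar.
from collections import Counter, defaultdict

followers = defaultdict(Counter)

def observe(prev_op, op):
    """Record that `op` followed `prev_op` in the intercepted I/O stream."""
    followers[prev_op][op] += 1

def predict(prev_op):
    """Predict the most likely next operation (None if never seen)."""
    nxt = followers[prev_op]
    return nxt.most_common(1)[0][0] if nxt else None

# a periodic checkpointing pattern, as an HPC application might produce
trace = ["open", "write_header", "write_block", "write_block", "write_block", "close"] * 10
for prev, op in zip(trace, trace[1:]):
    observe(prev, op)

print(predict("open"))          # -> write_header
print(predict("write_block"))   # -> write_block (repeats more often than close)
```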
Tigres Workflow Library: Supporting Scientific Pipelines on HPC Systems
Hendrix, Valerie; Fox, James; Ghoshal, Devarshi; ...
2016-07-21
The growth in scientific data volumes has resulted in the need for new tools that enable users to operate on and analyze data on large-scale resources. In the last decade, a number of scientific workflow tools have emerged. These tools often target distributed environments, and often need expert help to compose and execute the workflows. Data-intensive workflows are often ad hoc; they involve an iterative development process that includes users composing and testing their workflows on desktops, and scaling up to larger systems. In this paper, we present the design and implementation of Tigres, a workflow library that supports the iterative workflow development cycle of data-intensive workflows. Tigres provides an application programming interface to a set of programming templates (i.e., sequence, parallel, split, merge) that can be used to compose and execute computational and data pipelines. We discuss the results of our evaluation of scientific and synthetic workflows showing Tigres performs with minimal template overheads (mean of 13 seconds over all experiments). We also discuss various factors (e.g., I/O performance, execution mechanisms) that affect the performance of scientific workflows on HPC systems.
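Tigres' own API is not reproduced here; the sketch below only illustrates, in plain Python, what sequence- and parallel-style templates composing ordinary functions into a pipeline look like conceptually (it is not the Tigres library).

```python
# Generic sketch of the "sequence" and "parallel" template idea described above,
# written with plain Python and multiprocessing. This is NOT the Tigres API.
from multiprocessing import Pool

def sequence(tasks, value):
    """Run tasks one after another, feeding each output to the next."""
    for task in tasks:
        value = task(value)
    return value

def parallel(task, inputs, workers=4):
    """Run the same task over many inputs concurrently (a 'split' stage)."""
    with Pool(workers) as pool:
        return pool.map(task, inputs)

# toy pipeline: stage in data, process chunks in parallel, then merge
def stage(paths):    return [p + ".local" for p in paths]
def process(path):   return len(path)            # stand-in for real analysis
def merge(results):  return sum(results)

if __name__ == "__main__":
    staged = sequence([stage], [f"chunk{i}.dat" for i in range(8)])
    print(merge(parallel(process, staged)))
```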
A high performance scientific cloud computing environment for materials simulations
NASA Astrophysics Data System (ADS)
Jorissen, K.; Vila, F. D.; Rehr, J. J.
2012-09-01
We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offers functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including tools for execution and monitoring performance, as well as efficient I/O utilities that enable seamless connections to and from the cloud. Our SCC platform is optimized for the Amazon Elastic Compute Cloud (EC2). We present benchmarks for prototypical scientific applications and demonstrate performance comparable to local compute clusters. To facilitate code execution and provide user-friendly access, we have also integrated cloud computing capability in a JAVA-based GUI. Our SCC platform may be an alternative to traditional HPC resources for materials science or quantum chemistry applications.
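The SCC toolset itself is not shown in the abstract; as a hedged illustration of the kind of automatic cluster creation it describes, the sketch below launches a group of EC2 instances with boto3 (the AMI ID, instance type, node count, and key name are placeholders, and this is not the authors' toolset).

```python
# Sketch: programmatically launching a small virtual cluster on EC2, in the
# spirit of the automatic cluster creation described above. Placeholders only.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder: a scientific VM image
    InstanceType="c5.xlarge",
    MinCount=4, MaxCount=4,            # four worker nodes
    KeyName="my-keypair",
)
ids = [i["InstanceId"] for i in resp["Instances"]]

# wait until the nodes are running before configuring MPI across them
ec2.get_waiter("instance_running").wait(InstanceIds=ids)
print("cluster nodes:", ids)
```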
Pasupuleti, Latha V; Cook, Kristin M; Sifri, Ziad C; Alzate, Walter D; Livingston, David H; Mohr, Alicia M
2014-04-01
Severe injury results in increased mobilization of hematopoietic progenitor cells (HPC) from the bone marrow (BM) to sites of injury, which may contribute to persistent BM dysfunction after trauma. Norepinephrine is a known inducer of HPC mobilization, and nonselective β-blockade with propranolol has been shown to decrease mobilization after trauma and hemorrhagic shock (HS). This study will determine the role of selective β-adrenergic receptor blockade in HPC mobilization in a combined model of lung contusion (LC) and HS. Male Sprague-Dawley rats were subjected to LC, followed by 45 minutes of HS. Animals were then randomized to receive atenolol (LCHS + β1B), butoxamine (LCHS + β2B), or SR59230A (LCHS + β3B) immediately after resuscitation and daily for 6 days. Control groups were composed of naive animals. BM cellularity, %HPCs in peripheral blood, and plasma granulocyte-colony stimulating factor levels were assessed at 3 hours and 7 days. Systemic plasma-mediated effects were evaluated in vitro by assessment of BM HPC growth. Injured lung tissue was graded histologically by a blinded reader. The use of β2B or β3B following LCHS restored BM cellularity and significantly decreased HPC mobilization. In contrast, β1B had no effect on HPC mobilization. Only β3B significantly reduced plasma G-CSF levels. When evaluating the plasma systemic effects, both β2B and β3B significantly improved BM HPC growth as compared with LCHS alone. The use of β2 and β3 blockade did not affect lung injury scores. Both β2 and β3 blockade can prevent excess HPC mobilization and BM dysfunction when given after trauma and HS, and the effects seem to be mediated systemically, without adverse effects on subsequent healing. Only treatment with β3 blockade reduced plasma G-CSF levels, suggesting different mechanisms for adrenergic-induced G-CSF release and mobilization of HPCs. This study adds to the evidence that therapeutic strategies that reduce the exaggerated sympathetic stimulation after severe injury are beneficial and reduce BM dysfunction.
Preconditioning in neuroprotection: From hypoxia to ischemia
Li, Sijie; Hafeez, Adam; Noorulla, Fatima; Geng, Xiaokun; Shao, Guo; Ren, Changhong; Lu, Guowei; Zhao, Heng; Ding, Yuchuan; Ji, Xunming
2017-01-01
Sublethal hypoxic or ischemic events can improve the tolerance of tissues, organs, and even organisms from subsequent lethal injury caused by hypoxia or ischemia. This phenomenon has been termed hypoxic or ischemic preconditioning (HPC or IPC) and is well established in the heart and the brain. This review aims to discuss HPC and IPC with respect to their historical development and advancements in our understanding of the neurochemical basis for their neuroprotective role. Through decades of collaborative research and studies of HPC and IPC in other organ systems, our understanding of HPC and IPC-induced neuroprotection has expanded to include: early- (phosphorylation targets, transporter regulation, interfering RNA) and late- (regulation of genes like EPO, VEGF, and iNOS) phase changes, regulators of programmed cell death, members of metabolic pathways, receptor modulators, and many other novel targets. The rapid acceleration in our understanding of HPC and IPC will help facilitate transition into the clinical setting. PMID:28110083
STAR Data Reconstruction at NERSC/Cori, an adaptable Docker container approach for HPC
NASA Astrophysics Data System (ADS)
Mustafa, Mustafa; Balewski, Jan; Lauret, Jérôme; Porter, Jefferson; Canon, Shane; Gerhardt, Lisa; Hajdu, Levente; Lukascsyk, Mark
2017-10-01
As HPC facilities grow their resources, adaptation of classic HEP/NP workflows becomes a need. Linux containers may well offer a way to lower the bar to exploiting such resources and, at the same time, help collaborations reach the vast elastic resources of such facilities and address their massive current and future data processing challenges. In this proceeding, we showcase the STAR data reconstruction workflow at the Cori HPC system at NERSC. STAR software is packaged in a Docker image and runs at Cori in Shifter containers. We highlight two of the typical end-to-end optimization challenges for such pipelines: 1) the data transfer rate, carried over ESnet after optimizing endpoints, and 2) scalable deployment of the conditions database in an HPC environment. Our tests demonstrate equally efficient data processing workflows on Cori/HPC, comparable to standard Linux clusters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Madduri, Kamesh; Im, Eun-Jin; Ibrahim, Khaled Z.
The next decade of high-performance computing (HPC) systems will see a rapid evolution and divergence of multi- and manycore architectures as power and cooling constraints limit increases in microprocessor clock speeds. Understanding efficient optimization methodologies on diverse multicore designs in the context of demanding numerical methods is one of the greatest challenges faced today by the HPC community. In this paper, we examine the efficient multicore optimization of GTC, a petascale gyrokinetic toroidal fusion code for studying plasma microturbulence in tokamak devices. For GTC’s key computational components (charge deposition and particle push), we explore efficient parallelization strategies across a broad range of emerging multicore designs, including the recently-released Intel Nehalem-EX, the AMD Opteron Istanbul, and the highly multithreaded Sun UltraSparc T2+. We also present the first study on tuning gyrokinetic particle-in-cell (PIC) algorithms for graphics processors, using the NVIDIA C2050 (Fermi). Our work discusses several novel optimization approaches for gyrokinetic PIC, including mixed-precision computation, particle binning and decomposition strategies, grid replication, SIMDized atomic floating-point operations, and effective GPU texture memory utilization. Overall, we achieve significant performance improvements of 1.3–4.7× on these complex PIC kernels, despite the inherent challenges of data dependency and locality. Finally, our work also points to several architectural and programming features that could significantly enhance PIC performance and productivity on next-generation architectures.
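The data-dependency problem that motivates the particle binning, grid replication, and atomic-update strategies above is easiest to see in the charge-deposition scatter itself; the sketch below is a serial 1D cloud-in-cell deposition in NumPy with toy parameters, not the GTC kernel.

```python
# Sketch: 1D cloud-in-cell charge deposition, the scatter operation whose
# data hazards (many particles updating shared grid cells) drive the binning,
# replication, and atomic-add strategies discussed above. Toy parameters.
import numpy as np

ncell, npart, charge = 64, 100_000, 1.0
rng = np.random.default_rng(1)
x = rng.uniform(0.0, ncell, npart)        # particle positions in cell units

left = np.floor(x).astype(int)            # index of the cell to the particle's left
frac = x - left                           # fractional distance into that cell

rho = np.zeros(ncell + 1)
# np.add.at performs the scatter-add safely even when many particles share a
# cell (the serial analogue of the SIMDized atomic floating-point adds above)
np.add.at(rho, left,     charge * (1.0 - frac))
np.add.at(rho, left + 1, charge * frac)
rho[0] += rho[-1]                         # fold the periodic ghost cell back
rho = rho[:ncell]

print(rho.sum(), "total deposited charge; expected", npart * charge)
```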
Computing Platforms for Big Biological Data Analytics: Perspectives and Challenges.
Yin, Zekun; Lan, Haidong; Tan, Guangming; Lu, Mian; Vasilakos, Athanasios V; Liu, Weiguo
2017-01-01
The last decade has witnessed an explosion in the amount of available biological sequence data, due to the rapid progress of high-throughput sequencing projects. However, the amount of biological data is becoming so great that traditional data analysis platforms and methods can no longer meet the need to rapidly perform data analysis tasks in life sciences. As a result, both biologists and computer scientists are facing the challenge of gaining a profound insight into the deepest biological functions from big biological data. This in turn requires massive computational resources. Therefore, high performance computing (HPC) platforms are highly needed, as well as efficient and scalable algorithms that can take advantage of these platforms. In this paper, we survey the state-of-the-art HPC platforms for big biological data analytics. We first list the characteristics of big biological data and popular computing platforms. Then we provide a taxonomy of different biological data analysis applications and a survey of the way they have been mapped onto various computing platforms. After that, we present a case study to compare the efficiency of different computing platforms for handling the classical biological sequence alignment problem. Finally, we discuss the open issues in big biological data analytics.
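For the sequence-alignment case study mentioned above, the computational kernel is classic dynamic programming; the sketch below computes a minimal Smith-Waterman local-alignment score with illustrative scoring parameters (real aligners add traceback, affine gaps, and heuristics to reach the necessary scale).

```python
# Sketch: a minimal Smith-Waterman local alignment score, the kernel behind the
# sequence-alignment case study above. Scoring values (match=2, mismatch=-1,
# gap=-2) are illustrative choices, not those of any particular tool.
import numpy as np

def sw_score(a, b, match=2, mismatch=-1, gap=-2):
    H = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diag = H[i - 1, j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            # local alignment: scores never drop below zero
            H[i, j] = max(0, diag, H[i - 1, j] + gap, H[i, j - 1] + gap)
    return H.max()

print(sw_score("GATTACA", "GCATGCU"))
```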
Dust in the wind: challenges for urban aerodynamics
NASA Astrophysics Data System (ADS)
Boris, Jay P.
2007-04-01
The fluid dynamics of airflow through a city controls the transport and dispersion of airborne contaminants. This is urban aerodynamics, not meteorology. The average flow, large-scale fluctuations and turbulence are closely coupled to the building geometry. Buildings create large "rooster-tail" wakes; there are systematic fountain flows up the backs of tall buildings; and dust in the wind can move perpendicular to or even against the locally prevailing wind. Requirements for better prediction accuracy demand time-dependent, three-dimensional CFD computations that include solar heating and buoyancy, complete landscape and building geometry specification including foliage, and realistic wind fluctuations. This fundamental prediction capability is necessary to assess urban visibility and line-of-sight sensor performance in street canyons and rugged terrain. Computing urban aerodynamics accurately is clearly a time-dependent High Performance Computing (HPC) problem. In an emergency, on the other hand, prediction technology to assess crisis information, sensor performance, and obscured line-of-sight propagation in the face of industrial spills, transportation accidents, or terrorist attacks has very tight time requirements that suggest simple approximations which tend to produce inaccurate results. In the past we have had to choose one or the other: a fast, inaccurate model or a slow, accurate model. Using new fluid-dynamic principles, an urban-oriented emergency assessment system called CT-Analyst® was invented that solves this dilemma. It produces HPC-quality results for airborne contaminant scenarios nearly instantly and has unique new capabilities suited to sensor optimization. This presentation treats the design and use of CT-Analyst and discusses the developments needed for widespread use with advanced sensor and communication systems.
A parallel calibration utility for WRF-Hydro on high performance computers
NASA Astrophysics Data System (ADS)
Wang, J.; Wang, C.; Kotamarthi, V. R.
2017-12-01
A successful modeling of complex hydrological processes comprises establishing an integrated hydrological model which simulates the hydrological processes in each water regime, calibrates and validates the model performance based on observation data, and estimates the uncertainties from different sources, especially those associated with parameters. Such a model system requires large computing resources and often has to be run on High Performance Computing (HPC) systems. The recently developed WRF-Hydro modeling system provides a significant advancement in the capability to simulate regional water cycles more completely. The WRF-Hydro model has a large range of parameters, such as those in the input table files (GENPARM.TBL, SOILPARM.TBL and CHANPARM.TBL) and several distributed scaling factors such as OVROUGHRTFAC. These parameters affect the behavior and outputs of the model and thus may need to be calibrated against observations in order to obtain a good modeling performance. A parameter calibration tool specifically for automated calibration and uncertainty estimation of the WRF-Hydro model can provide significant convenience for the modeling community. In this study, we developed a customized tool based on the parallel version of the model-independent parameter estimation and uncertainty analysis tool PEST, enabling it to run on HPC systems with the PBS and SLURM workload managers and job schedulers. We also developed a series of PEST input file templates specifically designed for WRF-Hydro model calibration and uncertainty analysis. Here we present a flood case study from April 2013 over the Midwest. The sensitivity and uncertainties are analyzed using the customized PEST tool we developed.
Platform for Automated Real-Time High Performance Analytics on Medical Image Data.
Allen, William J; Gabr, Refaat E; Tefera, Getaneh B; Pednekar, Amol S; Vaughn, Matthew W; Narayana, Ponnada A
2018-03-01
Biomedical data are quickly growing in volume and in variety, providing clinicians an opportunity for better clinical decision support. Here, we demonstrate a robust platform that uses software automation and high performance computing (HPC) resources to achieve real-time analytics of clinical data, specifically magnetic resonance imaging (MRI) data. We used the Agave application programming interface to facilitate communication, data transfer, and job control between an MRI scanner and an off-site HPC resource. In this use case, Agave executed the graphical pipeline tool GRAphical Pipeline Environment (GRAPE) to perform automated, real-time, quantitative analysis of MRI scans. Same-session image processing will open the door for adaptive scanning and real-time quality control, potentially accelerating the discovery of pathologies and minimizing patient callbacks. We envision this platform can be adapted to other medical instruments, HPC resources, and analytics tools.
Create full-scale predictive economic models on ROI and innovation with performance computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joseph, Earl C.; Conway, Steve
The U.S. Department of Energy (DOE), the world's largest buyer and user of supercomputers, awarded IDC Research, Inc. a grant to create two macroeconomic models capable of quantifying, respectively, financial and non-financial (innovation) returns on investments in HPC resources. Following a 2013 pilot study in which we created the models and tested them on about 200 real-world HPC cases, DOE authorized us to conduct a full-out, three-year grant study to collect and measure many more examples, a process that would also subject the methodology to further testing and validation. A secondary, "stretch" goal of the full-out study was to advance the methodology from association toward (but not all the way to) causation, by eliminating the effects of some of the other factors that might be contributing, along with HPC investments, to the returns produced in the investigated projects.
Appropriate Use Policy | High-Performance Computing | NREL
users of the National Renewable Energy Laboratory (NREL) High Performance Computing (HPC) resources government agency, National Laboratory, University, or private entity, the intellectual property terms (if issued a multifactor token which may be a physical token or a virtual token used with one-time password
Bioinformatics and Astrophysics Cluster (BinAc)
NASA Astrophysics Data System (ADS)
Krüger, Jens; Lutz, Volker; Bartusch, Felix; Dilling, Werner; Gorska, Anna; Schäfer, Christoph; Walter, Thomas
2017-09-01
BinAC provides central high performance computing capacities for bioinformaticians and astrophysicists from the state of Baden-Württemberg. The bwForCluster BinAC is part of the implementation concept for scientific computing for the universities in Baden-Württemberg. Community specific support is offered through the bwHPC-C5 project.
The Practical Obstacles of Data Transfer: Why researchers still love scp
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nam, Hai Ah; Hill, Jason J; Parete-Koon, Suzanne T
The importance of computing facilities is heralded every six months with the announcement of the new Top500 list, showcasing the world's fastest supercomputers. Unfortunately, with great computing capability does not come great long-term data storage capacity, which often means users must move their data to their local site archive, to remote sites where they may be doing future computation or analysis, or back to their home institution, else face the dreaded data purge that most HPC centers employ to keep utilization of large parallel filesystems low to manage performance and capacity. At HPC centers, data transfer is crucial to the scientific workflow and will increase in importance as computing systems grow in size. The Energy Sciences Network (ESnet) recently launched its fifth generation network, a 100 Gbps high-performance, unclassified national network connecting more than 40 DOE research sites to support scientific research and collaboration. Despite the tenfold increase in bandwidth to DOE research sites amenable to multiple data transfer streams and high throughput, in practice, researchers often under-utilize the network and resort to painfully slow single-stream transfer methods such as scp to avoid the complexity of using multiple-stream tools such as GridFTP and bbcp, and contend with frustration from the lack of consistency of available tools between sites. In this study we survey and assess the data transfer methods provided at several DOE-supported computing facilities, including both leadership-computing facilities, connected through ESnet. We present observed transfer rates, suggested optimizations, and discuss the obstacles the tools must overcome to receive wide-spread adoption over scp.
Towards Test Driven Development for Computational Science with pFUnit
NASA Technical Reports Server (NTRS)
Rilee, Michael L.; Clune, Thomas L.
2014-01-01
Developers working in Computational Science & Engineering (CSE)/High Performance Computing (HPC) must contend with constant change due to advances in computing technology and science. Test Driven Development (TDD) is a methodology that mitigates software development risks due to change at the cost of adding comprehensive and continuous testing to the development process. Testing frameworks tailored for CSE/HPC, like pFUnit, can lower the barriers to such testing, yet CSE software faces unique constraints foreign to the broader software engineering community. Effective testing of numerical software requires a comprehensive suite of oracles, i.e., use cases with known answers, as well as robust estimates for the unavoidable numerical errors associated with implementation with finite-precision arithmetic. At first glance these concerns often seem exceedingly challenging or even insurmountable for real-world scientific applications. However, we argue that this common perception is incorrect and driven by (1) a conflation between model validation and software verification and (2) the general tendency in the scientific community to develop relatively coarse-grained, large procedures that compound numerous algorithmic steps. We believe TDD can be applied routinely to numerical software if developers pursue fine-grained implementations that permit testing, neatly side-stepping concerns about needing nontrivial oracles as well as the accumulation of errors. We present an example of a successful, complex legacy CSE/HPC code whose development process shares some aspects with TDD, which we contrast with current and potential capabilities. A mix of our proposed methodology and framework support should enable everyday use of TDD by CSE-expert developers.
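The fine-grained testing style argued for above can be shown without Fortran or pFUnit; the sketch below expresses the same idea in Python with pytest-style tests of a single small numerical step, using an exact oracle and a convergence-rate check with explicit tolerances (the routine and tolerances are illustrative, not taken from the paper).

```python
# Sketch: fine-grained numerical unit tests, shown in Python/pytest rather than
# Fortran/pFUnit. Each small algorithmic step gets its own oracle and tolerance.
import numpy as np

def trapezoid(f, a, b, n):
    """One small, testable step: composite trapezoidal rule on [a, b]."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    return (b - a) / n * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

def test_trapezoid_linear_is_exact():
    # a linear integrand is an exact oracle for the trapezoidal rule
    np.testing.assert_allclose(trapezoid(lambda x: 3 * x + 1, 0.0, 2.0, 4), 8.0, rtol=1e-14)

def test_trapezoid_converges_quadratically():
    # the error should shrink about 4x when the step is halved (second-order accuracy)
    exact = 1.0 / 3.0
    e1 = abs(trapezoid(lambda x: x**2, 0.0, 1.0, 10) - exact)
    e2 = abs(trapezoid(lambda x: x**2, 0.0, 1.0, 20) - exact)
    assert 3.5 < e1 / e2 < 4.5
```

Run with `pytest`; each test isolates one algorithmic property, which is what keeps the oracles trivial.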
Bokshan, Steven L; Doyle, Majella; Becker, Nils; Nalbantoglu, Ilke; Chapman, William C
2012-11-01
In 2002, the World Health Organization reclassified the soft tissue tumors known as hemangiopericytoma (HPC) as a variant of solitary fibrous tumor (SFT). As this classification system is still debated and has not been universally applied, the following account will provide an updated review of our understanding of those tumors still classified as HPC in the literature with special emphasis on hepatic HPC/SFT. HPC is a soft tissue neoplasm of mesenchymal origin first described by Stout and Murray in 1942. HPC constitutes 1 % of all vascular neoplasms and has been thought to coexist with trauma, prolonged steroid use, and hypertension. Although its presentation may be variable, intrahepatic HPC often presents with the patient's increasing awareness of a painless mass. Marked hypoglycemia may also accompany the neoplasm. Recent evidence suggests that uncontrolled growth may result from a loss of imprinting with overproduction of IGF-II in addition to alternative promoter usage. Diagnostic modalities including imaging, biopsy, and biochemical assays may be used to detect the presence of HPC. As most lesions are benign and slow growing, the prognosis is relatively favorable with 10-year survival between 54 and 70 %. Current mainstays of treatment include hepatic resection when possible especially with the use of adjuvant radiotherapy. Chemotherapeutic approaches have been poorly studied and are generally reserved for inoperable cases. Antiangiogenic compounds such as temozolomide and bevacizumab provide an exciting avenue of treatment. Finally, a case study will be reviewed highlighting diagnosis, treatment, and spectrum nature of hepatic HPC.
Mayado, A; Teodosio, C; Dasilva-Freire, N; Jara-Acevedo, M; Garcia-Montero, A C; Álvarez-Twose, I; Sánchez-Muñoz, L; Matito, A; Caldas, C; Muñoz-González, J I; Henriques, A; Sánchez-Gallego, J I; Escribano, L; Orfao, A
2018-01-13
Recent studies show that most systemic mastocytosis (SM) patients, including indolent SM (ISM) with (ISMs+) and without skin lesions (ISMs-), carry the KIT D816V mutation in PB leukocytes. We investigated the potential association between the degree of involvement of BM hematopoiesis by the KIT D816V mutation and the distribution of different maturation-associated compartments of bone marrow (BM) and peripheral blood (PB) CD34+ hematopoietic precursors (HPC) in ISM and identified the specific PB cell compartments that carry this mutation. The distribution of different maturation-associated subsets of BM and PB CD34+ HPC from 64 newly diagnosed (KIT-mutated) ISM patients and 14 healthy controls was analyzed by flow cytometry. In 18 patients, distinct FACS-purified PB cell compartments were also investigated for the KIT mutation. ISM patients showed higher percentages of both BM and PB MC-committed CD34+ HPC vs controls, particularly among ISM cases with MC-restricted KIT mutation (ISM-MC); this was associated with progressive blockade of maturation of CD34+ HPC to the neutrophil lineage from ISM-MC to multilineage KIT-mutated cases (ISM-ML). Regarding the frequency of KIT-mutated cases and cell populations in PB, variable patterns were observed, the percentage of KIT-mutated PB CD34+ HPC, eosinophils, neutrophils, monocytes and T cells increasing from ISM-MC cases without (ISMs-) and with (ISMs+) skin lesions to ISM-ML patients. The presence of the KIT D816V mutation in PB of ISM patients is associated with (early) involvement of circulating CD34+ HPC and multiple myeloid cell subpopulations, KIT-mutated PB CD34+ HPC potentially contributing to early dissemination of the disease. © 2018 EAACI and John Wiley and Sons A/S. Published by John Wiley and Sons Ltd.
DCDM1: Lessons Learned from the World's Most Energy Efficient Data Center
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sickinger, David E; Van Geet, Otto D; Carter, Thomas
This presentation discusses the holistic approach to design the world's most energy-efficient data center, which is located at the U.S. Department of Energy National Renewable Energy Laboratory (NREL). This high-performance computing (HPC) data center has achieved a trailing twelve-month average power usage effectiveness (PUE) of 1.04 and features a chiller-less design, component-level warm-water liquid cooling, and waste heat capture and reuse. We provide details of the demonstrated PUE and energy reuse effectiveness (ERE) and lessons learned during four years of production operation. Recent efforts to dramatically reduce the water footprint will also be discussed. Johnson Controls partnered with NREL and Sandia National Laboratories to deploy a thermosyphon cooler (TSC) as a test bed at NREL's HPC data center that resulted in a 50% reduction in water usage during the first year of operation. The Thermosyphon Cooler Hybrid System (TCHS) integrates the control of a dry heat rejection device with an open cooling tower.
ECCD-induced tearing mode stabilization in coupled IPS/NIMROD/GENRAY HPC simulations
NASA Astrophysics Data System (ADS)
Jenkins, Thomas; Kruger, S. E.; Held, E. D.; Harvey, R. W.; Elwasif, W. R.
2012-03-01
We summarize ongoing developments toward an integrated, predictive model for determining optimal ECCD-based NTM stabilization strategies in ITER. We demonstrate the capability of the SWIM Project's Integrated Plasma Simulator (IPS) framework to choreograph multiple executions of, and data exchanges between, physics codes modeling various spatiotemporal scales of this coupled RF/MHD problem on several thousand HPC processors. As NIMROD evolves fluid equations to model bulk plasma behavior, self-consistent propagation/deposition of RF power in the ensuing plasma profiles is calculated by GENRAY. Data from both codes is then processed by computational geometry packages to construct the RF-induced quasilinear diffusion tensor; moments of this tensor (entering as additional terms in NIMROD's fluid equations due to the disparity in RF/MHD spatiotemporal scales) influence the dynamics of current, momentum, and energy evolution as well as the MHD closures. Initial results are shown to correctly capture the physics of magnetic island stabilization; we also discuss the development of a numerical plasma control system for active feedback stabilization of tearing modes.
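As a cartoon of the feedback-stabilization idea mentioned at the end (a toy scalar model invented for illustration, not the actual NIMROD/GENRAY physics), one can evolve an "island width" under a growth term opposed by controller-driven ECCD:

# Toy island-width feedback sketch (illustrative only; not the IPS/NIMROD/GENRAY model).
# dw/dt = a - b * j_eccd, with a hypothetical proportional controller j_eccd = gain * w.

def eccd_drive(w, gain=10.0):
    # more current drive for wider islands (invented control law)
    return gain * w

def step_island(w, dt, a=0.2, b=1.0):
    dwdt = a - b * eccd_drive(w)     # growth term opposed by ECCD stabilization
    return max(w + dt * dwdt, 0.0)

w, dt = 0.05, 0.05
for _ in range(200):
    w = step_island(w, dt)
print(f"island width settles near a/(b*gain) = {w:.3f}")   # ~0.02, i.e. suppressed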
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carter, Thomas; Liu, Zan; Sickinger, David
The Thermosyphon Cooler Hybrid System (TCHS) integrates the control of a dry heat rejection device, the thermosyphon cooler (TSC), with an open cooling tower. A combination of equipment and controls, this new heat rejection system embraces the 'smart use of water,' using evaporative cooling when it is most advantageous and then saving water and modulating toward increased dry sensible cooling as system operations and ambient weather conditions permit. Innovative fan control strategies ensure the most economical balance between water savings and parasitic fan energy. The unique low-pressure-drop design of the TSC allows water to be cooled directly by the TSC evaporator without risk of bursting tubes in subfreezing ambient conditions. Johnson Controls partnered with the National Renewable Energy Laboratory (NREL) and Sandia National Laboratories to deploy the TSC as a test bed at NREL's high-performance computing (HPC) data center in the first half of 2016. Located in NREL's Energy Systems Integration Facility (ESIF), this HPC data center has achieved an annualized average power usage effectiveness rating of 1.06 or better since 2012. Warm-water liquid cooling is used to capture heat generated by computer systems direct to water; that waste heat is either reused as the primary heat source in the ESIF building or rejected using evaporative cooling. This data center is the single largest source of water and power demand on the NREL campus, using about 7,600 m3 (2.0 million gal) of water during the past year with an hourly average IT load of nearly 1 MW (3.4 million Btu/h) -- so dramatically reducing water use while continuing efficient data center operations is of significant interest. Because Sandia's climate is similar to NREL's, this new heat rejection system being deployed at NREL has gained interest at Sandia. Sandia's data centers utilize an hourly average of 8.5 MW (29 million Btu/h) and are also one of the largest consumers of water on Sandia's site. In addition to describing the installation of the TSC and its integration into the ESIF, this paper focuses on the full heat rejection system simulation program used for hourly analysis of the energy and water consumption of the complete system under varying operating scenarios. A follow-up paper will detail the test results. The evaluation of the TSC's performance at NREL will also determine a path forward at Sandia for possible deployment in a large-scale system not only for data center use but also possibly site wide.
Dynamic updating of hippocampal object representations reflects new conceptual knowledge
Mack, Michael L.; Love, Bradley C.; Preston, Alison R.
2016-01-01
Concepts organize the relationship among individual stimuli or events by highlighting shared features. Often, new goals require updating conceptual knowledge to reflect relationships based on different goal-relevant features. Here, our aim is to determine how hippocampal (HPC) object representations are organized and updated to reflect changing conceptual knowledge. Participants learned two classification tasks in which successful learning required attention to different stimulus features, thus providing a means to index how representations of individual stimuli are reorganized according to changing task goals. We used a computational learning model to capture how people attended to goal-relevant features and organized object representations based on those features during learning. Using representational similarity analyses of functional magnetic resonance imaging data, we demonstrate that neural representations in left anterior HPC correspond with model predictions of concept organization. Moreover, we show that during early learning, when concept updating is most consequential, HPC is functionally coupled with prefrontal regions. Based on these findings, we propose that when task goals change, object representations in HPC can be organized in new ways, resulting in updated concepts that highlight the features most critical to the new goal. PMID:27803320
Large Scale Computing and Storage Requirements for High Energy Physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerber, Richard A.; Wasserman, Harvey
2010-11-24
The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users' needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years. The report includes a section that describes efforts already underway or planned at NERSC that address requirements collected at the workshop. NERSC has many initiatives in progress that address key workshop findings and are aligned with NERSC's strategic plans.
2015-09-01
Fragment of a VoIP call-analysis report covering voice packet flow (SIP, Session Description Protocol, and RTP), voice data analysis, call analysis, and call metrics. The call analysis processing is designed for a general VoIP system architecture based on Session Initiation Protocol (SIP) for negotiating call sessions; Skinny Client Control Protocol is employed for network communication between the phone and the local CallManager (e.g., for each dialed digit).
DOE Office of Scientific and Technical Information (OSTI.GOV)
2013-04-04
Spindle is software infrastructure that solves file system scalability problems associated with starting dynamically linked applications in HPC environments. When an HPC application starts up thousands of processes at once, and those processes simultaneously access a shared file system to look for shared libraries, it can cause significant performance problems for both the application and other users. Spindle scalably coordinates the distribution of shared libraries to an application to avoid hammering the shared file system.
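The general pattern Spindle addresses, where one process reads a library from the parallel file system and distributes the copy to the rest over the interconnect, can be sketched with mpi4py as follows. This is only an illustration of the idea, not Spindle's actual mechanism, and the library path is hypothetical.

# Sketch: rank 0 reads a shared library once and broadcasts the bytes, so the
# remaining N-1 ranks never touch the shared file system for it.
# Run with e.g.: mpirun -n 64 python bcast_lib.py
from mpi4py import MPI
import os, tempfile

comm = MPI.COMM_WORLD
lib_path = "/shared/fs/libexample.so"   # hypothetical library on the shared file system

if comm.Get_rank() == 0:
    with open(lib_path, "rb") as f:
        payload = f.read()
else:
    payload = None

payload = comm.bcast(payload, root=0)    # one reader, many receivers

# Each rank stores the library on node-local storage and would load it from there
# (e.g. via ctypes.CDLL) instead of from the shared file system.
local_dir = tempfile.mkdtemp(prefix="libcache_")
local_copy = os.path.join(local_dir, os.path.basename(lib_path))
with open(local_copy, "wb") as f:
    f.write(payload)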
ERIC Educational Resources Information Center
Congress of the U.S., Washington, DC. Senate Committee on Commerce, Science, and Transportation.
This committee report is intended to accompany S. 1067, a bill designed to provide for a coordinated federal research program in high-performance computing (HPC). The primary objective of the legislation is given as the acceleration of research, development, and application of the most advanced computing technology in research, education, and…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michael Pernice
2010-09-01
INL has agreed to provide participants in the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program with access to its high performance computing (HPC) resources under sponsorship of the Enabling Computational Technologies (ECT) program element. This report documents the process used to select applications and the software stack in place at INL.
Extreme I/O on HPC for HEP using the Burst Buffer at NERSC
NASA Astrophysics Data System (ADS)
Bhimji, Wahid; Bard, Debbie; Burleigh, Kaylan; Daley, Chris; Farrell, Steve; Fasel, Markus; Friesen, Brian; Gerhardt, Lisa; Liu, Jialin; Nugent, Peter; Paul, Dave; Porter, Jeff; Tsulaia, Vakho
2017-10-01
In recent years there has been increasing use of HPC facilities for HEP experiments. This has initially focussed on less I/O intensive workloads such as generator-level or detector simulation. We now demonstrate the efficient running of I/O-heavy analysis workloads on HPC facilities at NERSC, for the ATLAS and ALICE LHC collaborations as well as astronomical image analysis for DESI and BOSS. To do this we exploit a new 900 TB NVRAM-based storage system recently installed at NERSC, termed a Burst Buffer. This is a novel approach to HPC storage that builds on-demand filesystems on all-SSD hardware that is placed on the high-speed network of the new Cori supercomputer. We describe the hardware and software involved in this system, and give an overview of its capabilities, before focusing in detail on how the ATLAS, ALICE and astronomical workflows were adapted to work on this system. We describe these modifications and the resulting performance results, including comparisons to other filesystems. We demonstrate that we can meet the challenging I/O requirements of HEP experiments and scale to many thousands of cores accessing a single shared storage system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shao, Yan-Lin, E-mail: yanlin.shao@dnvgl.com; Faltinsen, Odd M.
2014-10-01
We propose a new efficient and accurate numerical method based on harmonic polynomials to solve boundary value problems governed by the 3D Laplace equation. The computational domain is discretized by overlapping cells. Within each cell, the velocity potential is represented by the linear superposition of a complete set of harmonic polynomials, which are the elementary solutions of the Laplace equation. By its definition, the method is named the Harmonic Polynomial Cell (HPC) method. The characteristics of the accuracy and efficiency of the HPC method are demonstrated by studying analytical cases. Comparisons will be made with some other existing boundary element based methods, e.g. the Quadratic Boundary Element Method (QBEM), the Fast Multipole Accelerated QBEM (FMA-QBEM), and a fourth order Finite Difference Method (FDM). To demonstrate the applications of the method, it is applied to some studies relevant for marine hydrodynamics. Sloshing in 3D rectangular tanks, a fully-nonlinear numerical wave tank, fully-nonlinear wave focusing on a semi-circular shoal, and the nonlinear wave diffraction of a bottom-mounted cylinder in regular waves are studied. The comparisons with the experimental results and other numerical results are all in satisfactory agreement, indicating that the present HPC method is a promising method for solving potential-flow problems. The underlying procedure of the HPC method could also be useful in fields other than marine hydrodynamics that involve solving the Laplace equation.
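To give a feel for the building block the method rests on, the minimal 2D sketch below (our illustration, not the authors' 3D overlapping-cell implementation) represents the potential in a small cell as a combination of harmonic polynomials, here the real and imaginary parts of (x+iy)^n, each of which satisfies the Laplace equation; it fits the coefficients to nodal values and evaluates the result at the cell center.

import numpy as np

# Minimal 2D illustration (not the paper's 3D overlapping-cell scheme).
def harmonic_basis_2d(x, y, order=4):
    """Real/imaginary parts of (x+iy)^n, n=0..order; each term satisfies Laplace's equation."""
    z = x + 1j * y
    cols = [np.ones_like(x)]
    for n in range(1, order + 1):
        zn = z ** n
        cols += [zn.real, zn.imag]
    return np.column_stack(cols)

# "Cell" nodes on a small ring around the point where we want the potential.
theta = np.linspace(0.0, 2.0 * np.pi, 9, endpoint=False)
xn, yn = 0.1 * np.cos(theta), 0.1 * np.sin(theta)

# Take nodal values from a known harmonic function, phi = exp(x) * cos(y).
phi_nodes = np.exp(xn) * np.cos(yn)

A = harmonic_basis_2d(xn, yn)
coeffs, *_ = np.linalg.lstsq(A, phi_nodes, rcond=None)

# Reconstruct the potential at the cell center and compare with the exact value.
phi_center = harmonic_basis_2d(np.array([0.0]), np.array([0.0])) @ coeffs
print(phi_center[0], np.exp(0.0) * np.cos(0.0))   # both close to 1.0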
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malony, Allen D.; Wolf, Felix G.
2014-01-31
The growing number of cores provided by today’s high-end computing systems presents substantial challenges to application developers in their pursuit of parallel efficiency. To find the most effective optimization strategy, application developers need insight into the runtime behavior of their code. The University of Oregon (UO) and the Juelich Supercomputing Centre of Forschungszentrum Juelich (FZJ) develop the performance analysis tools TAU and Scalasca, respectively, which allow high-performance computing (HPC) users to collect and analyze relevant performance data – even at very large scales. TAU and Scalasca are considered among the most advanced parallel performance systems available, and are used extensively across HPC centers in the U.S., Germany, and around the world. The TAU and Scalasca groups share a heritage of parallel performance tool research and partnership throughout the past fifteen years. Indeed, the close interactions of the two groups resulted in a cross-fertilization of tool ideas and technologies that pushed TAU and Scalasca to what they are today. It also produced two performance systems with an increasing degree of functional overlap. While each tool has its specific analysis focus, the tools were implementing measurement infrastructures that were substantially similar. Because each tool provides complementary performance analysis, sharing of measurement results is valuable to provide the user with more facets to understand performance behavior. However, each measurement system was producing performance data in different formats, requiring data interoperability tools to be created. A common measurement and instrumentation system was needed to more closely integrate TAU and Scalasca and to avoid the duplication of development and maintenance effort. The PRIMA (Performance Refactoring of Instrumentation, Measurement, and Analysis) project was proposed over three years ago as a joint international effort between UO and FZJ to accomplish these objectives: (1) refactor TAU and Scalasca performance system components for core code sharing and (2) integrate TAU and Scalasca functionality through data interfaces, formats, and utilities. As presented in this report, the project has completed these goals. In addition to shared technical advances, the groups have worked to engage with users through application performance engineering and tools training. In this regard, the project benefits from the close interactions the teams have with national laboratories in the United States and Germany. We have also sought to enhance our interactions through joint tutorials and outreach. UO has become a member of the Virtual Institute of High-Productivity Supercomputing (VI-HPS) established by the Helmholtz Association of German Research Centres as a center of excellence, focusing on HPC tools for diagnosing programming errors and optimizing performance. UO and FZJ have conducted several VI-HPS training activities together within the past three years.
Barrington, James H.; Chrismas, Bryna C. R.; Gibson, Oliver R.; Tuttle, James; Pegrum, J.; Govilkar, S.; Kabir, Chindu; Giannakakis, N.; Rayan, F.; Okasheh, Z.; Sanaullah, A.; Ng Man Sun, S; Pearce, Oliver; Taylor, Lee
2017-01-01
Ischemic preconditioning (IPC) is a valid technique which elicits reductions in femoral blood flow occlusion mediated reperfusion stress (oxidative stress, Hsp gene transcripts) within the systemic blood circulation and/or skeletal muscle. It is unknown whether systemic hypoxia, evoked by hypoxic preconditioning (HPC), has efficacy in priming the heat shock protein (Hsp) system, thus reducing reperfusion stress following blood flow occlusion, in the same manner as IPC. The comparison between IPC and HPC is relevant as a preconditioning strategy prior to orthopedic surgery. In an independent group design, 18 healthy men were exposed to 40 min of (1) passive whole-body HPC (FiO2 = 0.143; no ischemia. n = 6), (2) IPC (FiO2 = 0.209; four bouts of 5 min ischemia and 5 min reperfusion. n = 6), or (3) rest (FiO2 = 0.209; no ischemia. n = 6). The interventions were administered 1 h prior to 30 min of tourniquet-derived femoral blood flow occlusion and were followed by 2 h subsequent reperfusion. Systemic blood samples were taken pre- and post-intervention. Systemic blood and gastrocnemius skeletal muscle samples were obtained pre-, 15 min post- (15PoT) and 120 min (120PoT) post-tourniquet deflation. To determine the cellular stress response, gastrocnemius and leukocyte Hsp72 mRNA and Hsp32 mRNA gene transcripts were determined by RT-qPCR. The plasma oxidative stress response (protein carbonyl, reduced glutathione/oxidized glutathione ratio) was measured utilizing commercially available kits. In comparison to control, at 15PoT a significant difference in gastrocnemius Hsp72 mRNA was seen in HPC (−1.93-fold; p = 0.007) and IPC (−1.97-fold; p = 0.006). No significant differences were observed in gastrocnemius Hsp32 and Hsp72 mRNA, leukocyte Hsp72 and Hsp32 mRNA, or oxidative stress markers (p > 0.05) between HPC and IPC. HPC provided near identical amelioration of blood flow occlusion mediated gastrocnemius stress response (Hsp72 mRNA), compared to an established IPC protocol. This was seen independent of changes in systemic oxidative stress, which likely explains the absence of change in Hsp32 mRNA transcripts within leukocytes and the gastrocnemius. Both the established IPC and novel HPC interventions facilitate a priming of the skeletal muscle, but not leukocyte, Hsp system prior to femoral blood flow occlusion. This response demonstrates a localized, tissue-specific adaptation which may ameliorate reperfusion stress. PMID:28824456
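For readers unfamiliar with how RT-qPCR fold changes such as those quoted above are usually derived, the widely used 2^-ΔΔCt calculation is sketched below with invented Ct values; this is a generic illustration of the method, not the authors' actual analysis pipeline.

# Generic 2^-ddCt relative-expression sketch (invented Ct values, illustration only).
def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    ddct = (ct_target_treated - ct_ref_treated) - (ct_target_control - ct_ref_control)
    ratio = 2.0 ** (-ddct)
    # Report down-regulation as a negative fold change, matching the abstract's convention.
    return ratio if ratio >= 1.0 else -1.0 / ratio

print(fold_change(ct_target_treated=26.0, ct_ref_treated=20.0,
                  ct_target_control=25.0, ct_ref_control=20.0))   # -2.0, i.e. a 2-fold decrease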
Design Aspects of the Rayleigh Convection Code
NASA Astrophysics Data System (ADS)
Featherstone, N. A.
2017-12-01
Understanding the long-term generation of planetary or stellar magnetic field requires complementary knowledge of the large-scale fluid dynamics pervading large fractions of the object's interior. Such large-scale motions are sensitive to the system's geometry which, in planets and stars, is spherical to a good approximation. As a result, computational models designed to study such systems often solve the MHD equations in spherical geometry, frequently employing a spectral approach involving spherical harmonics. We present computational and user-interface design aspects of one such modeling tool, the Rayleigh convection code, which is suitable for deployment on desktop and petascale HPC architectures alike. In this poster, we will present an overview of this code's parallel design and its built-in diagnostics-output package. Rayleigh has been developed with NSF support through the Computational Infrastructure for Geodynamics and is expected to be released as open-source software in winter 2017/2018.
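As background on the spectral representation mentioned above, the short sketch below evaluates a truncated spherical-harmonic expansion on a latitude-longitude grid with SciPy; it illustrates the kind of (l, m) field representation a spectral convection code carries and is not taken from Rayleigh itself.

import numpy as np
from scipy.special import sph_harm

# Illustration only (not Rayleigh's internal transform): evaluate a truncated
# expansion f = sum_{l,m} c_{lm} Y_l^m on a grid, with toy random coefficients.
lmax = 4
rng = np.random.default_rng(0)
coeffs = {(l, m): rng.normal() for l in range(lmax + 1) for m in range(-l, l + 1)}

colat = np.linspace(0.0, np.pi, 46)          # colatitude
lon = np.linspace(0.0, 2.0 * np.pi, 91)      # longitude
LON, COLAT = np.meshgrid(lon, colat)

f = np.zeros_like(LON, dtype=complex)
for (l, m), c in coeffs.items():
    # SciPy convention: sph_harm(m, l, azimuthal angle, polar angle)
    f += c * sph_harm(m, l, LON, COLAT)

print(f.shape, float(np.abs(f).max()))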
Using HPC within an operational forecasting configuration
NASA Astrophysics Data System (ADS)
Jagers, H. R. A.; Genseberger, M.; van den Broek, M. A. F. H.
2012-04-01
Various natural disasters are caused by high-intensity events: for example, extreme rainfall can in a short time cause major damage in river catchments, and storms can cause havoc in coastal areas. To assist emergency response teams in operational decisions, it is important to have reliable information and predictions as soon as possible. This starts before the event by providing early warnings about imminent risks and estimated probabilities of possible scenarios. In the context of various applications worldwide, Deltares has developed an open and highly configurable forecasting and early warning system: Delft-FEWS. Finding the right balance between simulation time (and hence prediction lead time) and simulation accuracy and detail is challenging. Model resolution may be crucial to capture certain critical physical processes. Uncertainty in forcing conditions may require running large ensembles of models; data assimilation techniques may require additional ensembles and repeated simulations. The computational demand is steadily increasing and data streams become bigger. Using HPC resources is a logical step; in different settings Delft-FEWS has been configured to take advantage of the distributed computational resources available to improve and accelerate the forecasting process (e.g. Montanari et al., 2006). We will illustrate the system by means of a couple of practical applications, including the real-time dynamic forecasting of wind-driven waves, flow of water, and wave overtopping at dikes of Lake IJssel and neighboring lakes in the center of The Netherlands. Montanari et al., 2006. Development of an ensemble flood forecasting system for the Po river basin, First MAP D-PHASE Scientific Meeting, 6-8 November 2006, Vienna, Austria.
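As a minimal illustration of the ensemble idea mentioned above (many perturbed runs executed in parallel and summarized into an uncertainty band), the toy sketch below uses a random-walk "water level" as a stand-in for a real forecast model; it is not part of Delft-FEWS.

import numpy as np
from concurrent.futures import ProcessPoolExecutor

def toy_forecast(seed, n_hours=48):
    # Stand-in for one ensemble member of a hydrologic/hydrodynamic run:
    # a random-walk water level driven by a perturbed forcing (toy model).
    rng = np.random.default_rng(seed)
    forcing = 0.02 + 0.01 * rng.standard_normal(n_hours)
    return np.cumsum(forcing)

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        members = list(pool.map(toy_forecast, range(50)))   # 50-member ensemble
    ens = np.vstack(members)
    p10, p50, p90 = np.percentile(ens, [10, 50, 90], axis=0)
    print("48 h water-level median and 10-90% band:",
          round(p50[-1], 3), round(p10[-1], 3), round(p90[-1], 3))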
PATHA: Performance Analysis Tool for HPC Applications
Yoo, Wucherl; Koo, Michelle; Cao, Yi; ...
2016-02-18
Large science projects rely on complex workflows to analyze terabytes or petabytes of data. These jobs are often running over thousands of CPU cores and simultaneously performing data accesses, data movements, and computation. It is difficult to identify bottlenecks or to debug the performance issues in these large workflows. In order to address these challenges, we have developed the Performance Analysis Tool for HPC Applications (PATHA) using state-of-the-art open-source big data processing tools. Our framework can ingest system logs to extract key performance measures, and apply the most sophisticated statistical tools and data mining methods on the performance data. Furthermore, it utilizes an efficient data processing engine to allow users to interactively analyze a large amount of different types of logs and measurements. To illustrate the functionality of PATHA, we conduct a case study on the workflows from an astronomy project known as the Palomar Transient Factory (PTF). This study processed 1.6 TB of system logs collected on the NERSC supercomputer Edison. Using PATHA, we were able to identify performance bottlenecks, which reside in three tasks of the PTF workflow with the dependency on the density of celestial objects.
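The kind of analysis described, extracting task timings from logs and ranking stages to find bottlenecks, can be illustrated on synthetic records with a small pandas sketch; PATHA itself works on real NERSC logs with big-data engines, so the column names and records below are invented for illustration.

import io
import pandas as pd

# Synthetic "system log" with task start/end timestamps, stand-ins for the kind
# of records a workflow such as PTF would emit (invented data).
log = io.StringIO("""task,host,start,end
source_extract,node01,2015-06-01 00:00:00,2015-06-01 00:12:30
image_subtract,node01,2015-06-01 00:12:30,2015-06-01 01:05:00
source_extract,node02,2015-06-01 00:00:00,2015-06-01 00:11:40
image_subtract,node02,2015-06-01 00:11:40,2015-06-01 01:20:10
candidate_scan,node01,2015-06-01 01:05:00,2015-06-01 01:09:00
""")

df = pd.read_csv(log, parse_dates=["start", "end"])
df["duration_s"] = (df["end"] - df["start"]).dt.total_seconds()

# Rank task types by total and median time to spot bottleneck stages.
summary = df.groupby("task")["duration_s"].agg(["count", "median", "sum"])
print(summary.sort_values("sum", ascending=False))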
Parallel Application Performance on Two Generations of Intel Xeon HPC Platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, Christopher H.; Long, Hai; Sides, Scott
2015-10-15
Two next-generation node configurations hosting the Haswell microarchitecture were tested with a suite of microbenchmarks and application examples, and compared with a current Ivy Bridge production node on NREL's Peregrine high-performance computing cluster. A primary conclusion from this study is that the additional cores are of little value to individual task performance--limitations to application parallelism, or resource contention among concurrently running but independent tasks, limit effective utilization of these added cores. Hyperthreading generally impacts throughput negatively, but can improve performance in the absence of detailed attention to runtime workflow configuration. The observations offer some guidance to procurement of future HPC systems at NREL. First, raw core count must be balanced with available resources, particularly memory bandwidth. Balance-of-system will determine value more than processor capability alone. Second, hyperthreading continues to be largely irrelevant to the workloads that are commonly seen, and were tested here, at NREL. Finally, perhaps the most impactful enhancement to productivity might occur through enabling multiple concurrent jobs per node. Given the right type and size of workload, more may be achieved by doing many slow things at once, than fast things in order.
Hot Chips and Hot Interconnects for High End Computing Systems
NASA Technical Reports Server (NTRS)
Saini, Subhash
2005-01-01
I will discuss several processors: 1. The Cray proprietary processor used in the Cray X1; 2. The IBM Power 3 and Power 4 used in IBM SP 3 and SP 4 systems; 3. The Intel Itanium and Xeon, used in the SGI Altix systems and clusters, respectively; 4. The IBM System-on-a-Chip used in the IBM BlueGene/L; 5. The HP Alpha EV68 processor used in the DOE ASCI Q cluster; 6. The SPARC64 V processor, which is used in the Fujitsu PRIMEPOWER HPC2500; 7. An NEC proprietary processor, which is used in the NEC SX-6/7; 8. The Power 4+ processor, which is used in the Hitachi SR11000; 9. The NEC proprietary processor used in the Earth Simulator. The IBM POWER5 and Red Storm Computing Systems will also be discussed. The architectures of these processors will first be presented, followed by interconnection networks and a description of high-end computer systems based on these processors and networks. The performance of various hardware/programming model combinations will then be compared, based on the latest NAS Parallel Benchmark results (MPI, OpenMP/HPF, and hybrid MPI + OpenMP). The tutorial will conclude with a discussion of general trends in the field of high performance computing (quantum computing, DNA computing, cellular engineering, and neural networks).
Savary, Caroline; Rousselet, Marie-Christine; Michalak, Sophie; Fournier, Henri-Dominique; Taris, Michaël; Loussouarn, Delphine; Rousseau, Audrey
2016-08-01
The 2007 World Health Organization (WHO) classification of tumors of the central nervous system distinguishes meningeal hemangiopericytomas (HPC) from solitary fibrous tumors (SFT). In the WHO classification of tumors of soft tissue and bone, those neoplasms are no longer separate entities since the discovery in 2013 of a common oncogenic event, i.e. the NAB2-STAT6 gene fusion. A shared histoprognostic grading system, called the "Marseille grading system", was recently proposed, based on hypercellularity, mitotic count and necrosis. We evaluated the immunophenotype and histoprognosis in a retrospective cohort of intracranial HPC and SFT. Fifteen initial tumors and 2 recurrences were evaluated by immunohistochemistry for STAT6, CD34, EMA, progesterone receptors and Ki67. The prognostic value of the WHO and the Marseille grading systems was tested on 12 patients with clinical follow-up. Initial tumors were 11 HPC and 4 SFT. STAT6 and CD34 were expressed in 16/17 tumors, EMA and progesterone receptors in 2 and 5 cases, respectively. The Ki67 labelling index was 6.25% in HPC and 3% in SFT. Half of the tumors recurred between 2 years and 9 years after initial diagnosis (mean time 5 years). No statistical difference in the risk of recurrence was associated with either grade (WHO or Marseille) in this small cohort. The diagnosis of HPC and SFT is facilitated by the almost constant immuno-expression of STAT6, and this justifies their common classification. The high rate of recurrence implies a very long-term follow-up because the current grading systems do not accurately predict the individual risk. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
Gyrokinetic particle-in-cell optimization on emerging multi- and manycore platforms
Madduri, Kamesh; Im, Eun-Jin; Ibrahim, Khaled Z.; ...
2011-03-02
The next decade of high-performance computing (HPC) systems will see a rapid evolution and divergence of multi- and manycore architectures as power and cooling constraints limit increases in microprocessor clock speeds. Understanding efficient optimization methodologies on diverse multicore designs in the context of demanding numerical methods is one of the greatest challenges faced today by the HPC community. In this paper, we examine the efficient multicore optimization of GTC, a petascale gyrokinetic toroidal fusion code for studying plasma microturbulence in tokamak devices. For GTC’s key computational components (charge deposition and particle push), we explore efficient parallelization strategies across a broad range of emerging multicore designs, including the recently-released Intel Nehalem-EX, the AMD Opteron Istanbul, and the highly multithreaded Sun UltraSparc T2+. We also present the first study on tuning gyrokinetic particle-in-cell (PIC) algorithms for graphics processors, using the NVIDIA C2050 (Fermi). Our work discusses several novel optimization approaches for gyrokinetic PIC, including mixed-precision computation, particle binning and decomposition strategies, grid replication, SIMDized atomic floating-point operations, and effective GPU texture memory utilization. Overall, we achieve significant performance improvements of 1.3–4.7× on these complex PIC kernels, despite the inherent challenges of data dependency and locality. Finally, our work also points to several architectural and programming features that could significantly enhance PIC performance and productivity on next-generation architectures.
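To make the charge-deposition and binning ideas concrete, here is a stripped-down 1D cloud-in-cell sketch in NumPy (our illustration, not GTC's implementation): particles are sorted by cell for locality, and np.add.at performs the scatter-add that atomic floating-point updates provide on real hardware.

import numpy as np

# Stripped-down 1D cloud-in-cell charge deposition (illustration, not GTC).
rng = np.random.default_rng(1)
n_grid, n_part = 64, 200_000
x = rng.random(n_part) * n_grid            # particle positions in grid units
w = np.ones(n_part)                        # particle weights (charge)

# Particle binning: sort by cell index to improve locality of the deposition.
cell = x.astype(np.int64)
order = np.argsort(cell, kind="stable")
x, w, cell = x[order], w[order], cell[order]

# Linear (cloud-in-cell) weighting onto the two nearest grid points.
frac = x - cell
rho = np.zeros(n_grid + 1)
np.add.at(rho, cell,     w * (1.0 - frac))   # scatter-add, analogous to atomic adds
np.add.at(rho, cell + 1, w * frac)

# Wrap the periodic ghost point back onto the grid.
rho[0] += rho[-1]
rho = rho[:-1]
print(rho.sum(), n_part)                     # total deposited charge equals particle count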
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Zhenyu Henry; Tate, Zeb; Abhyankar, Shrirang
The power grid has been evolving over the last 120 years, but it is seeing more changes in this decade and the next than it has seen over the past century. In particular, the widespread deployment of intermittent renewable generation, smart loads and devices, hierarchical and distributed control technologies, phasor measurement units, energy storage, and widespread usage of electric vehicles will require fundamental changes in methods and tools for the operation and planning of the power grid. The resulting new dynamic and stochastic behaviors will demand the inclusion of more complexity in modeling the power grid. Solving such complex models in the traditional computing environment will be a major challenge. Along with the increasing complexity of power system models, the increasing complexity of smart grid data further adds to the prevailing challenges. In this environment, the number of smart sensors and meters in the power grid increases by multiple orders of magnitude, as do the volume and speed of the data. The information infrastructure will need to change drastically to support the exchange of enormous amounts of data, as smart grid applications will need the capability to collect, assimilate, analyze and process the data to meet real-time grid functions. High performance computing (HPC) holds the promise to enhance these functions, but it is a great resource that has not been fully explored and adopted for the power grid domain.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agelastos, Anthony; Allan, Benjamin; Brandt, Jim
A detailed understanding of HPC applications’ resource needs and their complex interactions with each other and with HPC platform resources is critical to achieving scalability and performance. Such understanding has been difficult to achieve because typical application profiling tools do not capture the behaviors of codes under the potentially wide spectrum of actual production conditions and because typical monitoring tools do not capture system resource usage information with high enough fidelity to gain sufficient insight into application performance and demands. In this paper we present both system and application profiling results based on data obtained through synchronized system-wide monitoring on a production HPC cluster at Sandia National Laboratories (SNL). We demonstrate analytic and visualization techniques that we are using to characterize application and system resource usage under production conditions for better understanding of application resource needs. Furthermore, our goals are to improve application performance (through understanding application-to-resource mapping and system throughput) and to ensure that future system capabilities match their intended workloads.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamlet, Jason R.; Keliiaa, Curtis M.
This report assesses current public domain cyber security practices with respect to cyber indications and warnings. It describes cybersecurity industry and government activities, including cybersecurity tools, methods, practices, and international and government-wide initiatives known to be impacting current practice. Of particular note are the U.S. Government's Trusted Internet Connection (TIC) and 'Einstein' programs, which are serving to consolidate the Government's internet access points and to provide some capability to monitor and mitigate cyber attacks. Next, this report catalogs activities undertaken by various industry and government entities. In addition, it assesses the benchmarks of HPC capability and other HPC attributes that may lend themselves to assist in the solution of this problem. This report draws few conclusions, as it is intended to assess current practice in preparation for future work; however, no explicit references to HPC usage for the purpose of analyzing cyber infrastructure in near-real-time were found in the current practice. This report and a related report, SAND2010-4766 National Cyber Defense High Performance Computing and Analysis: Concepts, Planning and Roadmap, are intended to provoke discussion throughout a broad audience about developing a cohesive HPC centric solution to wide-area cybersecurity problems.
Evolution of Embedded Processing for Wide Area Surveillance
2014-01-01
Report fragment (subject terms: embedded processing; high-performance computing; general-purpose graphical processing units (GPGPUs)). The recoverable text concerns intelligence, surveillance, and reconnaissance (ISR) mission capabilities: "The capabilities these advancements are achieving include the ability to provide persistent all... fighters to support and positively affect their mission. Significant improvements in high-performance computing (HPC) technology make it possible to..."
Amin, Muhammad; Abbas, Nazia Shahana; Hussain, Muhammad Ajaz; Sher, Muhammad; Edgar, Kevin J
2018-07-01
The present study reveals the syntheses of hydroxypropylcellulose (HPC)- and hydroxyethylcellulose (HEC)-based macromolecular prodrugs (MPDs) of ciprofloxacin (CIP) using a homogeneous reaction methodology. The covalently loaded drug content (DC) of each prodrug was quantified using UV-Vis spectrophotometry to determine the degree of substitution (DS). HPC-ciprofloxacin (HPC-CIP) conjugates showed a DS of CIP in the range 0.87-1.15, whereas HEC-ciprofloxacin (HEC-CIP) conjugates showed a DS range of 0.51-0.75. Transmission electron microscopy revealed that HPC-CIP conjugate 2 and HEC-CIP conjugate 6 self-assembled into nanoparticles of 150-300 and 180-250 nm, respectively. Size exclusion chromatography revealed HPC-CIP conjugate 2 and HEC-CIP conjugate 6 to be monodisperse systems. In vitro drug release studies indicated 15 and 43% CIP release from HPC-CIP conjugate 2 after 6 h in simulated gastric and simulated intestinal fluids (SGF and SIF), respectively. HEC-CIP conjugate 6 showed 16% and 46% release after 6 h in SGF and SIF, respectively. HPC-CIP conjugate 2 and HEC-CIP conjugate 6 exhibited half-lives of 10.87 and 11.71 h, respectively, with area under the curve values of 164 and 175 h·μg·mL-1, respectively, indicating enhanced bioavailability and improved pharmacokinetic profiles in an animal model. Antibacterial activities equal to those of unmodified CIP confirmed their competitive efficacies. Cytotoxicity studies supported their non-toxic nature and biocompatibility. Copyright © 2018 Elsevier B.V. All rights reserved.
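For orientation, the half-life and area-under-the-curve figures quoted above are typically obtained from plasma concentration-time data by non-compartmental analysis; the sketch below shows the standard trapezoidal AUC and terminal-slope half-life calculation on invented data and is not the authors' analysis.

import numpy as np

# Invented plasma concentration-time profile (time in h, concentration in ug/mL).
t = np.array([0.5, 1, 2, 4, 6, 8, 12, 24], dtype=float)
c = np.array([3.2, 4.1, 3.6, 2.8, 2.1, 1.6, 0.9, 0.3])

# AUC by the linear trapezoidal rule (h*ug/mL).
auc = np.sum(np.diff(t) * (c[1:] + c[:-1]) / 2.0)

# Terminal half-life from the log-linear slope of the last few points.
slope, _ = np.polyfit(t[-4:], np.log(c[-4:]), 1)
half_life = np.log(2.0) / -slope
print(round(auc, 1), "h*ug/mL;", round(half_life, 2), "h")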
NASA Astrophysics Data System (ADS)
Valisetty, R.; Rajendran, A.; Agarwal, G.; Dongare, A.; Ianni, J.; Namburu, R.
2018-07-01
The Hugoniot elastic limit (HEL, or the shock precursor) decay phenomenon was investigated under a uniaxial strain condition, in a plate-on-plate impact configuration, using large-scale molecular dynamics (MD) high performance computing (HPC) simulations on a multi-billion-atom, 5000 Å thick nanocrystalline aluminum (nc-Al) system with an average grain size of 1000 Å and at five impact velocities ranging from 0.7 to 1.5 km s-1. The averaged stress and strain distributions were obtained in the shock fronts’ travel direction using a material-conserving atom slicing method. The loading paths, in terms of the Rayleigh lines experienced by the atom system in the evolving shock fronts, exhibited a strong dependency on the shock stress levels. This dependency decreased as the impact velocity increased from 0.7 to 1.5 km s-1. By combining the HELs from the MD results with plate impact experimental data, the precursor decay for the nc-Al was predicted over the nano-to-macro-scale thickness range. The evolving shock fronts were characterized in terms of parameters such as the shock front thickness, shock rise time and strain rate. The MD results were further analyzed using a crystal analysis algorithm and a twin dislocation identification method to obtain the densities of the atomistic defects evolving behind the shock fronts. High-fidelity large-scale HPC simulation results showed that certain dislocation partials strongly influenced the elastic–plastic transition response across the HELs. The twinning dislocations increased by more than a factor of 10 during the transition and remained constant under further shock compression.
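For context, the quantities referred to above are usually related through the standard one-dimensional shock relations; the textbook forms below are quoted for orientation and are not taken from the paper.

\sigma_x = \rho_0 \, U_s \, u_p \qquad \text{(Rayleigh line / momentum jump condition relating longitudinal stress, initial density, shock speed, and particle velocity)}

\sigma_{\mathrm{HEL}} = \frac{1-\nu}{1-2\nu}\, Y \qquad \text{(elastic precursor amplitude for a material with yield strength } Y \text{ and Poisson's ratio } \nu \text{ under uniaxial strain)}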
Schweizer, Leonille; Koelsche, Christian; Sahm, Felix; Piro, Rosario M; Capper, David; Reuss, David E; Pusch, Stefan; Habel, Antje; Meyer, Jochen; Göck, Tanja; Jones, David T W; Mawrin, Christian; Schittenhelm, Jens; Becker, Albert; Heim, Stephanie; Simon, Matthias; Herold-Mende, Christel; Mechtersheimer, Gunhild; Paulus, Werner; König, Rainer; Wiestler, Otmar D; Pfister, Stefan M; von Deimling, Andreas
2013-05-01
Non-central nervous system hemangiopericytoma (HPC) and solitary fibrous tumor (SFT) are considered by pathologists as two variants of a single tumor entity now subsumed under the entity SFT. The recent detection of frequent NAB2-STAT6 fusions in both HPC and SFT provided additional support for this view. On the other hand, current neuropathological practice still distinguishes between HPC and SFT. The present study set out to identify genes involved in the formation of meningeal HPC. We performed exome sequencing and detected the NAB2-STAT6 fusion in DNA of 8/10 meningeal HPC, thereby providing evidence of a close relationship of these tumors with peripheral SFT. Due to the considerable effort required for exome sequencing, we sought to explore surrogate markers for the NAB2-STAT6 fusion protein. We adopted the Duolink proximity ligation assay and demonstrated the presence of NAB2-STAT6 fusion protein in 17/17 HPC and its absence in 15/15 meningiomas. More practically, the presence of the NAB2-STAT6 fusion protein resulted in a strong nuclear signal in STAT6 immunohistochemistry. The nuclear reallocation of STAT6 was detected in 35/37 meningeal HPC and 25/25 meningeal SFT but not in 87 meningiomas representing the most important differential diagnosis. Tissues not harboring the NAB2-STAT6 fusion protein presented with nuclear expression of NAB2 and cytoplasmic expression of STAT6 proteins. In conclusion, we provide strong evidence that meningeal HPC and SFT constitute variants of a single entity which is defined by the NAB2-STAT6 fusion. In addition, we demonstrate that this fusion can be rapidly detected by STAT6 immunohistochemistry, which shows a consistent nuclear reallocation. This immunohistochemical assay may prove valuable for the differentiation of HPC and SFT from other mesenchymal neoplasms.
Role of HPC in Advancing Computational Aeroelasticity
NASA Technical Reports Server (NTRS)
Guruswamy, Guru P.
2004-01-01
On behalf of the High Performance Computing and Modernization Program (HPCMP) and the NASA Advanced Supercomputing Division (NAS), a study is conducted to assess the role of supercomputers in computational aeroelasticity of aerospace vehicles. The study is mostly based on the responses to a web-based questionnaire that was designed to capture the nuances of high performance computational aeroelasticity, particularly on parallel computers. A procedure is presented to assign a fidelity-complexity index to each application. Case studies based on major applications using HPCMP resources are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karthik, Rajasekar
2014-01-01
In this paper, an architecture for building a Scalable And Mobile Environment For High-Performance Computing with spatial capabilities, called SAME4HPC, is described using cutting-edge technologies and standards such as Node.js, HTML5, ECMAScript 6, and PostgreSQL 9.4. Mobile devices are increasingly becoming powerful enough to run high-performance apps. At the same time, there exist a significant number of low-end and older devices that rely heavily on the server or the cloud infrastructure to do the heavy lifting. Our architecture aims to support both of these types of devices to provide high performance and a rich user experience. A cloud infrastructure consisting of OpenStack with Ubuntu, GeoServer, and high-performance JavaScript frameworks are some of the key open-source and industry-standard practices that have been adopted in this architecture.
Experience Paper: Software Engineering and Community Codes Track in ATPESC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dubey, Anshu; Riley, Katherine M.
The Argonne Training Program in Extreme Scale Computing (ATPESC) was started by Argonne National Laboratory with the objective of expanding the ranks of better prepared users of high performance computing (HPC) machines. One of the unique aspects of the program was the inclusion of a software engineering and community codes track. The inclusion was motivated by the observation that projects with a good scientific and software process were better able to meet their scientific goals. In this paper we present our experience of running the software track from the beginning of the program until now. We discuss the motivations, the reception, and the evolution of the track over the years. We welcome discussion and input from the community to enhance the track in ATPESC, and also to facilitate the inclusion of similar tracks in other HPC-oriented training programs.
NASA Astrophysics Data System (ADS)
Ogden, F. L.; Lai, W.; Douglas, C. C.; Miller, S. N.; Zhang, Y.
2012-12-01
The CI-WATER project is a cooperative effort between the Utah and Wyoming EPSCoR jurisdictions, and is funded through a cooperative agreement with the U.S. National Science Foundation EPSCoR. The CI-WATER project is acquiring hardware and developing software cyberinfrastructure (CI) to enhance accessibility of High Performance Computing for water resources modeling in the Western U.S. One of the components of the project is development of a large-scale, high-resolution, physically-based, data-driven, integrated computational water resources model, which we call the CI-WATER HPC model. The objective of this model development is to enable evaluation of integrated system behavior to guide and support water system planning and management by individual users, cities, or states. The model is first being tested in the Green River basin of Wyoming, which is the largest tributary to the Colorado River. The model will ultimately be applied to simulate the entire Upper Colorado River basin for hydrological studies, watershed management, economic analysis, as well as evaluation of potential changes in environmental policy and law, population, land use, and climate. In addition to hydrologically important processes simulated in many hydrological models, the CI-WATER HPC model will emphasize anthropogenic influences such as land use change, water resources infrastructure, irrigation practices, trans-basin diversions, and urban/suburban development. The model operates on an unstructured mesh, employing an adaptive mesh with grid sizes as small as 10 m where needed, particularly in high-elevation snowmelt regions. Data for the model are derived from remote sensing sources, atmospheric models and geophysical techniques. Monte-Carlo techniques and ensemble Kalman filtering methodologies are employed for data assimilation. The model includes application programming interface (API) standards to allow easy substitution of alternative process-level simulation routines, and to provide post-processing, visualization, and communication of massive amounts of output. The open-source CI-WATER model represents a significant advance in water resources modeling, and will be useful to water managers, planners, resource economists, and the hydrologic research community in general.
Seismic waveform modeling over cloud
NASA Astrophysics Data System (ADS)
Luo, Cong; Friederich, Wolfgang
2016-04-01
With fast-growing computational technologies, numerical simulation of seismic wave propagation has achieved huge successes. Obtaining synthetic waveforms through numerical simulation receives an increasing amount of attention from seismologists. However, computational seismology is a data-intensive research field, and the numerical packages usually come with a steep learning curve. Users are expected to master a considerable amount of computer knowledge and data-processing skills. Training users to use the numerical packages and to correctly access and utilize the computational resources is a troublesome task. In addition, access to HPC is a common difficulty for many users. To solve these problems, a cloud-based solution dedicated to shallow seismic waveform modeling has been developed with state-of-the-art web technologies. It is a web platform integrating both software and hardware in a multilayer architecture: a well-designed SQL database serves as the data layer, while HPC and a dedicated pipeline for it form the business layer. Through this platform, users no longer need to compile and manipulate various packages on a local machine within a local network to perform a simulation. By providing users professional access to the computational code through its interfaces and delivering our computational resources to users over the cloud, users can customize simulations at an expert level and submit and run jobs through the platform.
NASA's Climate in a Box: Desktop Supercomputing for Open Scientific Model Development
NASA Astrophysics Data System (ADS)
Wojcik, G. S.; Seablom, M. S.; Lee, T. J.; McConaughy, G. R.; Syed, R.; Oloso, A.; Kemp, E. M.; Greenseid, J.; Smith, R.
2009-12-01
NASA's High Performance Computing Portfolio in cooperation with its Modeling, Analysis, and Prediction program intends to make its climate and earth science models more accessible to a larger community. A key goal of this effort is to open the model development and validation process to the scientific community at large such that a natural selection process is enabled and results in a more efficient scientific process. One obstacle to others using NASA models is the complexity of the models and the difficulty in learning how to use them. This situation applies not only to scientists who regularly use these models but also to non-typical users who may want to use the models, such as scientists from different domains, policy makers, and teachers. Another obstacle to the use of these models is that access to high performance computing (HPC) accounts, on which the models are run, can be restrictive, with long wait times in job queues and delays caused by an arduous process of obtaining an account, especially for foreign nationals. This project explores the utility of using desktop supercomputers in providing a complete ready-to-use toolkit of climate research products to investigators and on-demand access to an HPC system. One objective of this work is to pre-package NASA and NOAA models so that new users will not have to spend significant time porting the models. In addition, the prepackaged toolkit will include tools, such as workflow, visualization, social networking web sites, and analysis tools, to assist users in running the models and analyzing the data. The system architecture to be developed will allow for automatic code updates for each user and an effective means with which to deal with data that are generated. We plan to investigate several desktop systems, but our work to date has focused on a Cray CX1. Currently, we are investigating the potential capabilities of several non-traditional development environments. While most NASA and NOAA models are designed for Linux operating systems (OS), the arrival of the WindowsHPC 2008 OS provides the opportunity to evaluate the use of a new platform on which to develop and port climate and earth science models. In particular, we are evaluating Microsoft's Visual Studio Integrated Development Environment to determine its appropriateness for the climate modeling community. In the initial phases of this project, we have ported GEOS-5, WRF, GISS ModelE, and GFS to Linux on a CX1 and are in the process of porting WRF and ModelE to WindowsHPC 2008. Initial tests on the CX1 Linux OS indicate favorable comparisons in terms of performance and consistency of scientific results when compared with experiments executed on NASA high end systems. As in the past, NASA's large clusters will continue to be an important part of our objectives. We envision a seamless environment in which an investigator performs model development and testing on a desktop system and can seamlessly transfer execution to supercomputer clusters for production.
Final Report Extreme Computing and U.S. Competitiveness DOE Award. DE-FG02-11ER26087/DE-SC0008764
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mustain, Christopher J.
The Council has acted on each of the grant deliverables during the funding period. The deliverables are: (1) convening the Council’s High Performance Computing Advisory Committee (HPCAC) on a bi-annual basis; (2) broadening public awareness of high performance computing (HPC) and exascale developments; (3) assessing the industrial applications of extreme computing; and (4) establishing a policy and business case for an exascale economy.
LAPACKrc: Fast linear algebra kernels/solvers for FPGA accelerators
NASA Astrophysics Data System (ADS)
Gonzalez, Juan; Núñez, Rafael C.
2009-07-01
We present LAPACKrc, a family of FPGA-based linear algebra solvers able to achieve more than 100x speedup per commodity processor on certain problems. LAPACKrc subsumes some of the LAPACK and ScaLAPACK functionalities, and it also incorporates sparse direct and iterative matrix solvers. Current LAPACKrc prototypes demonstrate 40x-150x speedups compared with top-of-the-line hardware/software systems. A technology roadmap is in place to validate the current performance of LAPACKrc in HPC applications, and to increase the computational throughput by factors of hundreds within the next few years.
On the energy footprint of I/O management in Exascale HPC systems
Dorier, Matthieu; Yildiz, Orcun; Ibrahim, Shadi; ...
2016-03-21
The advent of unprecedentedly scalable yet energy-hungry Exascale supercomputers poses a major challenge in sustaining a high performance-per-watt ratio. With I/O management acquiring a crucial role in supporting scientific simulations, various I/O management approaches have been proposed to achieve high performance and scalability. However, the details of how these approaches affect energy consumption have not been studied yet. Therefore, this paper aims to explore how much energy a supercomputer consumes while running scientific simulations when adopting various I/O management approaches. In particular, we closely examine three radically different I/O schemes: time partitioning, dedicated cores, and dedicated nodes. To accomplish this, we implement the three approaches within the Damaris I/O middleware and perform extensive experiments with one of the target HPC applications of the Blue Waters sustained-petaflop supercomputer project: the CM1 atmospheric model. Our experimental results obtained on the French Grid'5000 platform highlight the differences among these three approaches and illustrate in which way various configurations of the application and of the system can impact performance and energy consumption. Moreover, we propose and validate a mathematical model that estimates the energy consumption of an HPC simulation under different I/O approaches. This proposed model gives hints to pre-select the most energy-efficient I/O approach for a particular simulation on a particular HPC system and therefore provides a step towards energy-efficient HPC simulations in Exascale systems. To the best of our knowledge, our work provides the first in-depth look into the energy-performance tradeoffs of I/O management approaches.
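As a minimal sketch of the kind of estimate such a model enables, the following assumes a simple additive power decomposition per node; this is an illustrative assumption, not the validated model from the paper, and all numbers are hypothetical.

```cpp
// Minimal sketch of an additive energy model for a simulation with I/O phases.
// Illustrative assumption, not the paper's validated model:
// E = N_nodes * (P_compute * T_compute + P_io * T_io + P_idle * T_idle)
#include <cstdio>

struct IoScheme {
    const char* name;
    double t_compute_s;  // time spent computing per node
    double t_io_s;       // time spent in I/O per node
    double t_idle_s;     // time spent waiting (e.g. at I/O barriers)
};

double energy_joules(const IoScheme& s, int nodes,
                     double p_compute_w, double p_io_w, double p_idle_w) {
    return nodes * (p_compute_w * s.t_compute_s +
                    p_io_w      * s.t_io_s +
                    p_idle_w    * s.t_idle_s);
}

int main() {
    // Hypothetical numbers for three schemes on 256 nodes.
    const IoScheme schemes[] = {
        {"time partitioning", 3600.0, 400.0, 200.0},
        {"dedicated cores",   3800.0, 250.0,  50.0},
        {"dedicated nodes",   3600.0, 300.0,  20.0},
    };
    for (const auto& s : schemes)
        std::printf("%-18s %.1f MJ\n", s.name,
                    energy_joules(s, 256, 300.0, 150.0, 80.0) / 1e6);
}
```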
Establishment of HPC(R2A) for regrowth control in non-chlorinated distribution systems.
Uhl, Wolfgang; Schaule, Gabriela
2004-05-01
Drinking water distributed without disinfection and without regrowth problems for many years may show bacterial regrowth when the residence time and/or temperature in the distribution system increases or when substrate and/or bacterial concentration in the treated water increases. An example of a regrowth event in a major German city is discussed. Regrowth of HPC bacteria occurred unexpectedly at the end of a very hot summer. No pathogenic or potentially pathogenic bacteria were identified. Increased residence times in the distribution system and temperatures up to 25 degrees C were identified as most probable causes and the regrowth event was successfully overcome by changing flow regimes and decreasing residence times. Standard plate counts of HPC bacteria using the spread plate technique on nutrient rich agar according to German Drinking Water Regulations (GDWR) had proven to be a very good indicator of hygienically safe drinking water and to demonstrate the effectiveness of water treatment. However, the method proved insensitive for early regrowth detection. Regrowth experiments in the lab and sampling of the distribution system during two summers showed that spread plate counts on nutrient-poor R2A agar after 7-day incubation yielded 100 to 200 times higher counts. Counts on R2A after 3-day incubation were three times less than after 7 days. As the precision of plate count methods is very poor for counts less than 10 cfu/plate, a method yielding higher counts is better suited to detect upcoming regrowth than a method yielding low counts. It is shown that for the identification of regrowth events HPC(R2A) gives a further margin of about 2 weeks for reaction before HPC(GDWR). Copyright 2003 Elsevier B.V.
GPU Implementation of High Rayleigh Number Three-Dimensional Mantle Convection
NASA Astrophysics Data System (ADS)
Sanchez, D. A.; Yuen, D. A.; Wright, G. B.; Barnett, G. A.
2010-12-01
Although we have entered the age of petascale computing, many factors are still prohibiting high-performance computing (HPC) from infiltrating all suitable scientific disciplines. For this reason and others, application of GPUs to HPC is gaining traction in the scientific world. With its low price point, high performance potential, and competitive scalability, the GPU has been an option well worth considering for the last few years. Moreover, with the advent of NVIDIA's Fermi architecture, which brings ECC memory, better double-precision performance, and more RAM to the GPU, there is a strong message of corporate support for GPUs in HPC. However, many doubts linger concerning the practicality of using GPUs for scientific computing. In particular, GPUs have a reputation for being difficult to program and suitable for only a small subset of problems. Although inroads have been made in addressing these concerns, for many scientists the GPU still has hurdles to clear before becoming an acceptable choice. We explore the applicability of GPUs to geophysics by implementing a three-dimensional, second-order finite-difference model of Rayleigh-Benard thermal convection on an NVIDIA GPU using C for CUDA. Our code reaches sufficient resolution, on the order of 500x500x250 evenly-spaced finite-difference gridpoints, on a single GPU. We make extensive use of highly optimized CUBLAS routines, allowing us to achieve performance on the order of 0.1 µs per gridpoint per timestep at this resolution. This performance has allowed us to study high Rayleigh number simulations, on the order of 2x10^7, on a single GPU.
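A quick back-of-envelope check of the quoted figure, assuming the stated grid size and per-gridpoint cost:

```cpp
// Back-of-envelope check of the quoted figure: ~0.1 us per gridpoint per
// timestep on a 500 x 500 x 250 grid implies a few seconds per timestep.
#include <cstdio>

int main() {
    const double nx = 500, ny = 500, nz = 250;
    const double cost_us = 0.1;                       // per gridpoint per timestep
    const double gridpoints = nx * ny * nz;           // 62.5 million
    const double seconds_per_step = gridpoints * cost_us * 1e-6;
    std::printf("%.1f million gridpoints -> ~%.2f s per timestep\n",
                gridpoints / 1e6, seconds_per_step);  // ~6.25 s per timestep
}
```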
Decentralized Grid Scheduling with Evolutionary Fuzzy Systems
NASA Astrophysics Data System (ADS)
Fölling, Alexander; Grimme, Christian; Lepping, Joachim; Papaspyrou, Alexander
In this paper, we address the problem of finding workload exchange policies for decentralized Computational Grids using an Evolutionary Fuzzy System. To this end, we establish a non-invasive collaboration model on the Grid layer which requires minimal information about the participating High Performance and High Throughput Computing (HPC/HTC) centers and which leaves the local resource managers completely untouched. In this environment of fully autonomous sites, independent users are assumed to submit their jobs to the Grid middleware layer of their local site, which in turn decides on the delegation and execution either on the local system or on remote sites in a situation-dependent, adaptive way. We find for different scenarios that the exchange policies show good performance characteristics not only with respect to traditional metrics such as average weighted response time and utilization, but also in terms of robustness and stability in changing environments.
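A heavily simplified sketch of a fuzzy exchange rule of this kind is given below; the membership functions and rules are hand-written illustrations, not the evolutionary-optimized rule base the paper derives.

```cpp
// Highly simplified sketch of a fuzzy exchange policy (hand-written rules,
// not the paper's evolved rule base): decide whether to delegate a job based
// on fuzzy "load" descriptions of the local and a remote site.
#include <algorithm>
#include <cstdio>

// Simple piecewise-linear membership functions on load in [0, 1].
double low(double x)  { return std::clamp(1.0 - 2.0 * x, 0.0, 1.0); }
double high(double x) { return std::clamp(2.0 * x - 1.0, 0.0, 1.0); }

// Rule 1: IF local is high AND remote is low  THEN delegate.
// Rule 2: IF local is low  OR  remote is high THEN keep locally.
bool delegate(double local_load, double remote_load) {
    double r_delegate = std::min(high(local_load), low(remote_load));
    double r_keep     = std::max(low(local_load), high(remote_load));
    return r_delegate > r_keep;
}

int main() {
    std::printf("local=0.9 remote=0.2 -> %s\n", delegate(0.9, 0.2) ? "delegate" : "keep");
    std::printf("local=0.3 remote=0.8 -> %s\n", delegate(0.3, 0.8) ? "delegate" : "keep");
}
```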
HPC USER WORKSHOP - JUNE 12TH | High-Performance Computing | NREL
Topics included the upgrade to CentOS 7, changes to modules management, and Singularity and containers on Peregrine, with the remaining two hours dedicated to demos and one-on-one interaction as needed.
Hierarchically porous carbon/polyaniline hybrid for use in supercapacitors.
Joo, Min Jae; Yun, Young Soo; Jin, Hyoung-Joon
2014-12-01
A hierarchically porous carbon (HPC)/polyaniline (PANI) hybrid electrode was prepared by the polymerization of PANI on the surface of the HPC via rapid-mixing polymerization. The surface morphologies and chemical composition of the HPC/PANI hybrid electrode were characterized using transmission electron microscopy and X-ray photoelectron spectroscopy (XPS), respectively. The surface morphologies and XPS results for the HPC, PANI and HPC/PANI hybrids indicate that PANI is coated on the surface of HPC in the HPC/PANI hybrids, which show two different nitrogen groups, a benzenoid amine (-NH-) peak and a positively charged nitrogen (N+) peak. The electrochemical performances of the HPC/PANI hybrids were analyzed by performing cyclic voltammetry and galvanostatic charge-discharge tests. The HPC/PANI hybrids showed a better specific capacitance (222 F/g) than HPC (111 F/g) because of the effect of pseudocapacitive behavior. In addition, good cycle stabilities were maintained over 1000 cycles.
On Parallelizing Single Dynamic Simulation Using HPC Techniques and APIs of Commercial Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Diao, Ruisheng; Jin, Shuangshuang; Howell, Frederic
Time-domain simulations are heavily used in today’s planning and operation practices to assess power system transient stability and post-transient voltage/frequency profiles following severe contingencies to comply with industry standards. Because of the increased modeling complexity, it is several times slower than real time for state-of-the-art commercial packages to complete a dynamic simulation for a large-scale model. With the growing stochastic behavior introduced by emerging technologies, the power industry has seen a growing need for performing security assessment in real time. This paper presents a parallel implementation framework to speed up a single dynamic simulation by leveraging the existing stability model library in commercial tools through their application programming interfaces (APIs). Several high performance computing (HPC) techniques are explored, such as parallelizing the calculation of generator current injection, identifying fast linear solvers for the network solution, and parallelizing data outputs when interacting with APIs in the commercial package TSAT. The proposed method has been tested on a WECC planning base case with detailed synchronous generator models and exhibits outstanding scalable performance with sufficient accuracy.
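As an illustration of why the generator current-injection step parallelizes well, the sketch below uses a plain OpenMP loop over a simplified classical-model generator set; it does not reflect TSAT's API or data structures.

```cpp
// Illustrative sketch (not TSAT's API): the generator current-injection loop
// is independent per generator, so it parallelizes naturally with OpenMP.
#include <complex>
#include <vector>
#include <cstdio>

struct Generator {
    std::complex<double> internal_emf;   // internal voltage behind reactance
    std::complex<double> admittance;     // 1 / (jX'd), simplified classical model
    int bus;                             // terminal bus index
};

int main() {
    std::vector<Generator> gens(10000, {{1.05, 0.1}, {0.0, -10.0}, 0});
    std::vector<std::complex<double>> bus_voltage(10000, {1.0, 0.0});
    std::vector<std::complex<double>> injection(gens.size());

    // Each generator's current injection depends only on its own state and
    // its terminal bus voltage, so the loop carries no dependencies.
    #pragma omp parallel for
    for (long i = 0; i < static_cast<long>(gens.size()); ++i) {
        const Generator& g = gens[i];
        injection[i] = g.admittance * (g.internal_emf - bus_voltage[g.bus]);
    }
    std::printf("I[0] = %.3f + j%.3f\n", injection[0].real(), injection[0].imag());
}
```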
ECCD-induced tearing mode stabilization in coupled IPS/NIMROD/GENRAY HPC simulations
NASA Astrophysics Data System (ADS)
Jenkins, Thomas; Kruger, S. E.; Held, E. D.; Harvey, R. W.; Elwasif, W. R.; Schnack, D. D.; SWIM Project Team
2011-10-01
We present developments toward an integrated, predictive model for determining optimal ECCD-based NTM stabilization strategies in ITER. We demonstrate the capability of the SWIM Project's Integrated Plasma Simulator (IPS) framework to choreograph multiple executions of, and data exchanges between, physics codes modeling various spatiotemporal scales of this coupled RF/MHD problem on several thousand HPC processors. As NIMROD evolves fluid equations to model bulk plasma behavior, self-consistent propagation/deposition of RF power in the ensuing plasma profiles is calculated by GENRAY. A third code (QLCALC) then interfaces with computational geometry packages to construct the RF-induced quasilinear diffusion tensor from NIMROD/GENRAY data, and the moments of this tensor (entering as additional terms in NIMROD's fluid equations due to the disparity in RF/MHD spatiotemporal scales) influence the dynamics of current, momentum, and energy evolution. Initial results are shown to correctly capture the physics of magnetic island stabilization [Jenkins et al., PoP 17, 012502 (2010)]; we also discuss the development of a numerical plasma control system for active feedback stabilization of tearing modes. Funded by USDoE SciDAC.
Production experience with the ATLAS Event Service
NASA Astrophysics Data System (ADS)
Benjamin, D.; Calafiura, P.; Childers, T.; De, K.; Guan, W.; Maeno, T.; Nilsson, P.; Tsulaia, V.; Van Gemmeren, P.; Wenaus, T.; ATLAS Collaboration
2017-10-01
The ATLAS Event Service (AES) has been designed and implemented for efficient running of ATLAS production workflows on a variety of computing platforms, ranging from conventional Grid sites to opportunistic, often short-lived resources, such as spot market commercial clouds, supercomputers and volunteer computing. The Event Service architecture allows real time delivery of fine grained workloads to running payload applications which process dispatched events or event ranges and immediately stream the outputs to highly scalable Object Stores. Thanks to its agile and flexible architecture the AES is currently being used by grid sites for assigning low priority workloads to otherwise idle computing resources; similarly harvesting HPC resources in an efficient back-fill mode; and massively scaling out to the 50-100k concurrent core level on the Amazon spot market to efficiently utilize those transient resources for peak production needs. Platform ports in development include ATLAS@Home (BOINC) and the Google Compute Engine, and a growing number of HPC platforms. After briefly reviewing the concept and the architecture of the Event Service, we will report the status and experience gained in AES commissioning and production operations on supercomputers, and our plans for extending ES application beyond Geant4 simulation to other workflows, such as reconstruction and data analysis.
NASA Astrophysics Data System (ADS)
Bocharov, A. N.; Bityurin, V. A.; Golovin, N. N.; Evstigneev, N. M.; Petrovskiy, V. P.; Ryabkov, O. I.; Teplyakov, I. O.; Shustov, A. A.; Solomonov, Yu S.; Fortov, V. E.
2016-11-01
In this paper, an approach to solving conjugate heat- and mass-transfer problems is considered for application to hypersonic vehicle surfaces of arbitrary shape. The approach under development should satisfy the following demands. (i) The surface of the body of interest may have an arbitrary geometrical shape. (ii) The shape of the body can change during the calculation. (iii) The flight characteristics may vary in a wide range, specifically flight altitude, free-stream Mach number, angle of attack, etc. (iv) The approach should be realized using high-performance-computing (HPC) technologies. The approach is based on the coupled solution of the 3D unsteady hypersonic flow equations and the 3D unsteady heat conductance problem for the thick wall. An iterative process is applied to account for ablation of the wall material and, consequently, mass injection from the surface and changes in the surface shape. During the iterations, unstructured computational grids both in the flow region and within the wall interior are adapted to the current geometry and flow conditions. The flow computations are done on an HPC platform and are the most time-consuming part of the whole problem, while the heat conductance problem can be solved on many kinds of computers.
Recurrent extradural hemangiopericytoma of thoracic spine: a case report.
Jayashankar, Erukkambattu; Prabhala, Shailaja; Raju, Subodh; Tanikella, Ramamurti
2014-01-01
Hemangiopericytoma (HPC) is a rare tumor that arises from pericapillary cells or pericytes of Zimmerman. In the central nervous system, it accounts for less than 1% of tumors, and spinal involvement is very rare. Meningeal hemangiopericytomas show morphological similarities with meningiomas particularly with angiomatous meningioma, where one needs to take the help of immunohistochemistry (IHC) to delineate HPC from meningioma. Here, we report a case of recurrent extradural HPC in a 16 year-old girl, who 5 years back had a pathological diagnosis of angiomatous meningioma, for D5-D6 lesion. On evaluation, magnetic resonance imaging (MRI) showed a large extradural tumor with a significant cord compression involving D5-D6 body, pedicle and ribs. Excision of the lesion and spinal stabilization was performed. The histopathological examination and immunohistochemistry performed on tumor sections revealed features favoring HPC. To conclude, detailed IHC is helpful in avoiding misdiagnosis and in further management of the patient.
NASA Astrophysics Data System (ADS)
Peter, Daniel; Videau, Brice; Pouget, Kevin; Komatitsch, Dimitri
2015-04-01
Improving the resolution of tomographic images is crucial to answer important questions on the nature of Earth's subsurface structure and internal processes. Seismic tomography is the most prominent approach, where seismic signals from ground-motion records are used to infer physical properties of internal structures such as compressional- and shear-wave speeds, anisotropy and attenuation. Recent advances in regional- and global-scale seismic inversions move towards full-waveform inversions, which require accurate simulations of seismic wave propagation in complex 3D media, providing access to the full 3D seismic wavefields. However, these numerical simulations are computationally very expensive and need high-performance computing (HPC) facilities for further improving the current state of knowledge. During recent years, many-core architectures such as graphics processing units (GPUs) have been added to available large HPC systems. Such GPU-accelerated computing together with advances in multi-core central processing units (CPUs) can greatly accelerate scientific applications. There are mainly two possible choices of language support for GPU cards: the CUDA programming environment and the OpenCL language standard. CUDA software development targets NVIDIA graphics cards while OpenCL was adopted mainly by AMD graphics cards. In order to employ such hardware accelerators for seismic wave propagation simulations, we incorporated the code generation tool BOAST into the existing spectral-element code package SPECFEM3D_GLOBE. This allows us to use meta-programming of computational kernels and generate optimized source code for both CUDA and OpenCL languages, running simulations on either CUDA or OpenCL hardware accelerators. We show here applications of forward and adjoint seismic wave propagation on CUDA/OpenCL GPUs, validating results and comparing performances for different simulations and hardware usages.
Institute for Sustained Performance, Energy, and Resilience (SuPER)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jagode, Heike; Bosilca, George; Danalis, Anthony
The University of Tennessee (UTK) and University of Texas at El Paso (UTEP) partnership supported the three main thrusts of the SUPER project---performance, energy, and resilience. The UTK-UTEP effort thus helped advance the main goal of SUPER, which was to ensure that DOE's computational scientists can successfully exploit the emerging generation of high performance computing (HPC) systems. This goal is being met by providing application scientists with strategies and tools to productively maximize performance, conserve energy, and attain resilience. The primary vehicle through which UTK provided performance measurement support to SUPER and the larger HPC community is the Performance Application Programming Interface (PAPI). PAPI is an ongoing project that provides a consistent interface and methodology for collecting hardware performance information from various hardware and software components, including most major CPUs, GPUs and accelerators, interconnects, I/O systems, and power interfaces, as well as virtual cloud environments. The PAPI software is widely used for performance modeling of scientific and engineering applications---for example, the HOMME (High Order Methods Modeling Environment) climate code, and the GAMESS and NWChem computational chemistry codes---on DOE supercomputers. PAPI is widely deployed as middleware for use by higher-level profiling, tracing, and sampling tools (e.g., CrayPat, HPCToolkit, Scalasca, Score-P, TAU, Vampir, PerfExpert), making it the de facto standard for hardware counter analysis. PAPI has established itself as fundamental software infrastructure in every application domain (spanning academia, government, and industry), where improving performance can be mission critical. Ultimately, as more application scientists migrate their applications to HPC platforms, they will benefit from the extended capabilities this grant brought to PAPI to analyze and optimize performance in these environments, whether they use PAPI directly, or via third-party performance tools. Capabilities added to PAPI through this grant include support for new architectures such as the latest GPU and Xeon Phi accelerators, and advanced power measurement and management features. Another important topic for the UTK team was providing support for a rich ecosystem of different fault management strategies in the context of parallel computing. Our long-term efforts have been oriented toward proposing flexible strategies and providing building blocks that application developers can use to build the most efficient fault management technique for their application. These efforts span the entire software spectrum, from theoretical models of existing strategies to easily assess their performance, to algorithmic modifications that take advantage of specific mathematical properties for data redundancy, to extensions of widely used programming paradigms that empower application developers to deal with all types of faults. We have also continued our tight collaborations with users to help them adopt these technologies and to ensure their applications always deliver meaningful scientific data. Large supercomputer systems are becoming more and more power and energy constrained, and future systems and applications running on them will need to be optimized to run under power caps and/or minimize energy consumption. The UTEP team contributed to the SUPER energy thrust by developing power modeling methodologies and investigating power management strategies.
Scalability modeling results showed that some applications can scale better with respect to an increasing power budget than with respect to only the number of processors. Power management, in particular shifting power to processors on the critical path of an application execution, can reduce perturbation due to system noise and other sources of runtime variability, which are growing problems on large-scale power-constrained computer systems.
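A minimal sketch of PAPI's low-level C API is shown below for orientation; error handling is abbreviated, and the PAPI documentation remains the authoritative reference for its usage.

```cpp
// Minimal sketch of PAPI's low-level C API: count total cycles and
// instructions around a loop. Error handling is abbreviated.
#include <cstdio>
#include <papi.h>

int main() {
    long long values[2];
    int event_set = PAPI_NULL;
    volatile double x = 0.0;

    if (PAPI_library_init(PAPI_VER_CURRENT) != PAPI_VER_CURRENT) return 1;
    PAPI_create_eventset(&event_set);
    PAPI_add_event(event_set, PAPI_TOT_CYC);  // total cycles
    PAPI_add_event(event_set, PAPI_TOT_INS);  // total instructions

    PAPI_start(event_set);
    for (int i = 0; i < 1000000; ++i) x += i * 0.5;   // code under measurement
    PAPI_stop(event_set, values);

    std::printf("cycles: %lld  instructions: %lld  (x=%f)\n",
                values[0], values[1], x);
    return 0;
}
```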
Schumacher, Anett; Vlassov, Ekaterina; Ito, Rutsuko
2016-04-01
The resolution of an approach-avoidance conflict induced by ambivalent information involves the appraisal of the incentive value of the outcomes and associated stimuli to orchestrate an appropriate behavioral response. Much research has been directed at delineating the neural circuitry underlying approach motivation and avoidance motivation separately. Very little research, however, has examined the neural substrates engaged at the point of decision making when opposing incentive motivations are experienced simultaneously. We hereby examine the role of the dorsal and ventral hippocampus (HPC) in a novel approach-avoidance decision making paradigm, revisiting a once popular theory of HPC function, which posited the HPC to be the driving force of a behavioral inhibition system that is activated in situations of imminent threat. Rats received pre-training excitotoxic lesions of the dorsal or ventral HPC, and were trained to associate different non-spatial cues with appetitive, aversive and neutral outcomes in three separate arms of the radial maze. On the final day of testing, a state of approach-avoidance conflict was induced by simultaneously presenting two cues of opposite valences, and comparing the time the rats spent interacting with the superimposed 'conflict' cue, and the neutral cue. The ventral HPC-lesioned group showed significant preference for the conflict cue over the neutral cue, compared to the dorsal HPC-lesioned, and control groups. Thus, we provide evidence that the ventral, but not dorsal HPC, is a crucial component of the neural circuitry concerned with exerting inhibitory control over approach tendencies under circumstances in which motivational conflict is experienced. © 2015 Wiley Periodicals, Inc.
Grids, Clouds, and Virtualization
NASA Astrophysics Data System (ADS)
Cafaro, Massimo; Aloisio, Giovanni
This chapter introduces and puts in context Grids, Clouds, and Virtualization. Grids promised to deliver computing power on demand. However, despite a decade of active research, no viable commercial grid computing provider has emerged. On the other hand, it is widely believed - especially in the Business World - that HPC will eventually become a commodity. Just as some commercial consumers of electricity have mission requirements that necessitate they generate their own power, some consumers of computational resources will continue to need to provision their own supercomputers. Clouds are a recent business-oriented development with the potential to render this eventually as rare as organizations that generate their own electricity today, even among institutions who currently consider themselves the unassailable elite of the HPC business. Finally, Virtualization is one of the key technologies enabling many different Clouds. We begin with a brief history in order to put them in context, and recall the basic principles and concepts underlying and clearly differentiating them. A thorough overview and survey of existing technologies provides the basis to delve into details as the reader progresses through the book.
Connecting to HPC VPN | High-Performance Computing | NREL
and password will match your NREL network account login/password. From OS X or Linux, open a terminal. Open a Remote Desktop connection using server name WINHPC02 (this is the login node).
Diós, Péter; Pernecker, Tivadar; Nagy, Sándor; Pál, Szilárd; Dévay, Attila
2014-01-01
The object of the present study is to evaluate the effect of the application of low-substituted hydroxypropyl cellulose (L-HPC) 11 and B1 as excipients promoting floating in gastroretentive tablets. Directly compressed tablets were formed based on an experimental design. A face-centred central composite design was applied with two factors and 3 levels, where the amounts of sodium alginate (X1) and L-HPC (X2) were the numerical factors. The applied types of L-HPCs and their 1:1 mixture were included as a categorical factor (X3). Studied parameters were floating lag time, floating time, floating force, swelling behaviour of tablets and dissolution of paracetamol, which was used as a model active substance. Due to their physical character, the L-HPCs had different water uptake and flowability. Lower flowability and lower water uptake were observed after 60 min for L-HPC 11 compared to L-HPC B1. Shorter floating times were detected for L-HPC 11 and the L-HPC mixture with 0.5% content of sodium alginate, whereas alginate was the only significant factor. Evaluating the results of the drug release and swelling studies on floating tablets revealed a correlation, which can help in understanding the mechanism of action of L-HPCs in the development of gastroretentive dosage forms. PMID:26702261
NASA Technical Reports Server (NTRS)
Warner, James E.; Zubair, Mohammad; Ranjan, Desh
2017-01-01
This work investigates novel approaches to probabilistic damage diagnosis that utilize surrogate modeling and high performance computing (HPC) to achieve substantial computational speedup. Motivated by Digital Twin, a structural health management (SHM) paradigm that integrates vehicle-specific characteristics with continual in-situ damage diagnosis and prognosis, the methods studied herein yield near real-time damage assessments that could enable monitoring of a vehicle's health while it is operating (i.e. online SHM). High-fidelity modeling and uncertainty quantification (UQ), both critical to Digital Twin, are incorporated using finite element method simulations and Bayesian inference, respectively. The crux of the proposed Bayesian diagnosis methods, however, is the reformulation of the numerical sampling algorithms (e.g. Markov chain Monte Carlo) used to generate the resulting probabilistic damage estimates. To this end, three distinct methods are demonstrated for rapid sampling that utilize surrogate modeling and exploit various degrees of parallelism for leveraging HPC. The accuracy and computational efficiency of the methods are compared on the problem of strain-based crack identification in thin plates. While each approach has inherent problem-specific strengths and weaknesses, all approaches are shown to provide accurate probabilistic damage diagnoses and several orders of magnitude computational speedup relative to a baseline Bayesian diagnosis implementation.
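For orientation, the sketch below shows a bare-bones random-walk Metropolis sampler in which a cheap polynomial stands in for the finite element forward model; it is illustrative only and does not reproduce the paper's three parallel sampling strategies.

```cpp
// Minimal random-walk Metropolis sketch for a 1D damage parameter (e.g. a
// crack length), with a cheap polynomial "surrogate" standing in for the
// finite element forward model. Purely illustrative.
#include <cmath>
#include <random>
#include <vector>
#include <cstdio>

double surrogate_strain(double crack_len) {          // hypothetical surrogate
    return 1.0e-3 * (1.0 + 0.8 * crack_len + 0.1 * crack_len * crack_len);
}

double log_likelihood(double crack_len, double measured, double sigma) {
    const double r = measured - surrogate_strain(crack_len);
    return -0.5 * r * r / (sigma * sigma);
}

int main() {
    std::mt19937 rng(0);
    std::normal_distribution<double> step(0.0, 0.1);
    std::uniform_real_distribution<double> unif(0.0, 1.0);

    const double measured = surrogate_strain(2.0) + 1.0e-5;  // synthetic data
    const double sigma = 5.0e-5;

    double x = 0.5, lp = log_likelihood(x, measured, sigma);
    std::vector<double> chain;
    for (int i = 0; i < 50000; ++i) {
        const double xp = x + step(rng);
        const double lpp = (xp > 0.0) ? log_likelihood(xp, measured, sigma) : -1e300;
        if (std::log(unif(rng)) < lpp - lp) { x = xp; lp = lpp; }   // accept/reject
        chain.push_back(x);
    }
    double mean = 0.0;
    for (size_t i = chain.size() / 2; i < chain.size(); ++i) mean += chain[i];
    mean /= (chain.size() / 2);                      // discard first half as burn-in
    std::printf("posterior mean crack length ~ %.3f\n", mean);
}
```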
PHoToNs–A parallel heterogeneous and threads oriented code for cosmological N-body simulation
NASA Astrophysics Data System (ADS)
Wang, Qiao; Cao, Zong-Yan; Gao, Liang; Chi, Xue-Bin; Meng, Chen; Wang, Jie; Wang, Long
2018-06-01
We introduce a new code for cosmological simulations, PHoToNs, which incorporates features for performing massive cosmological simulations on heterogeneous high performance computer (HPC) systems and threads-oriented programming. PHoToNs adopts a hybrid scheme to compute gravitational force, with the conventional Particle-Mesh (PM) algorithm to compute the long-range force, the Tree algorithm to compute the short-range force and the direct summation Particle-Particle (PP) algorithm to compute gravity from very close particles. A self-similar space-filling Peano-Hilbert curve is used to decompose the computing domain. Threads programming is advantageously used to more flexibly manage the domain communication, PM calculation and synchronization, as well as Dual Tree Traversal on the CPU+MIC platform. PHoToNs scales well and the efficiency of the PP kernel achieves 68.6% of peak performance on MIC and 74.4% on CPU platforms. We also test the accuracy of the code against the widely used Gadget-2 code and find excellent agreement.
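The direct-summation PP part of such a hybrid scheme can be sketched as a plain O(N²) pairwise loop with Plummer softening; the snippet below is illustrative and ignores the PM/Tree components and the parallelization described in the paper.

```cpp
// Sketch of a direct-summation particle-particle (PP) kernel for very close
// pairs: O(N^2) pairwise gravity with Plummer softening. Illustrative only.
#include <cmath>
#include <vector>
#include <cstdio>

struct Particle { double x, y, z, mass, ax = 0, ay = 0, az = 0; };

void pp_gravity(std::vector<Particle>& p, double G, double eps) {
    const double eps2 = eps * eps;
    for (size_t i = 0; i < p.size(); ++i) {
        for (size_t j = 0; j < p.size(); ++j) {
            if (i == j) continue;
            const double dx = p[j].x - p[i].x;
            const double dy = p[j].y - p[i].y;
            const double dz = p[j].z - p[i].z;
            const double r2 = dx * dx + dy * dy + dz * dz + eps2;  // softened
            const double inv_r3 = 1.0 / (r2 * std::sqrt(r2));
            p[i].ax += G * p[j].mass * dx * inv_r3;
            p[i].ay += G * p[j].mass * dy * inv_r3;
            p[i].az += G * p[j].mass * dz * inv_r3;
        }
    }
}

int main() {
    std::vector<Particle> p = {{0, 0, 0, 1.0}, {1, 0, 0, 1.0}, {0, 1, 0, 2.0}};
    pp_gravity(p, 1.0, 0.01);
    std::printf("a[0] = (%.3f, %.3f, %.3f)\n", p[0].ax, p[0].ay, p[0].az);
}
```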
When to Renew Software Licences at HPC Centres? A Mathematical Analysis
NASA Astrophysics Data System (ADS)
Baolai, Ge; MacIsaac, Allan B.
2010-11-01
In this paper we study a common problem faced by many high performance computing (HPC) centres: when and how to renew commercial software licences. Software vendors often sell perpetual licences along with forward update and support contracts at an additional, annual cost. Every year or so, software support personnel and the budget units of HPC centres are required to decide whether or not to renew such support, and usually such decisions are made intuitively. The total cost of a continuing support contract can, however, be considerable. One might therefore want a rational answer to the question of whether the option for a renewal should be exercised, and when. In an attempt to study this problem within a market framework, we present the mathematical problem derived for the day-to-day operation of a hypothetical HPC centre that charges for the use of software packages. In the mathematical model, we assume that the uncertainty comes from the demand, i.e. the number of users using the packages, as well as from the price. Further, we assume that the availability of up-to-date software versions may also affect the demand. We develop a renewal strategy that aims to maximize the expected profit from the use of the software under consideration. The derived problem involves a decision tree, which constitutes a numerical procedure that can be processed in parallel.
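A toy expected-value version of the renewal decision, reduced to a single one-year choice over discrete demand scenarios (all numbers hypothetical, and far simpler than the paper's stochastic decision-tree model), is sketched below.

```cpp
// Toy expected-value version of the renewal decision: a single one-year
// choice over discrete demand scenarios. All numbers are hypothetical.
#include <cstdio>

struct Scenario { double probability; double users; };

int main() {
    const Scenario demand[] = {{0.3, 40.0}, {0.5, 80.0}, {0.2, 150.0}};
    const double fee_per_user = 120.0;   // what the centre charges per user-year
    const double renewal_cost = 9000.0;  // annual update/support contract
    const double demand_drop  = 0.6;     // fraction of users retained if the
                                         // software version is allowed to age

    double ev_renew = -renewal_cost, ev_skip = 0.0;
    for (const auto& s : demand) {
        ev_renew += s.probability * s.users * fee_per_user;
        ev_skip  += s.probability * s.users * demand_drop * fee_per_user;
    }
    std::printf("expected profit: renew %.0f, skip %.0f -> %s\n",
                ev_renew, ev_skip, ev_renew > ev_skip ? "renew" : "skip");
}
```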
AGIS: Integration of new technologies used in ATLAS Distributed Computing
NASA Astrophysics Data System (ADS)
Anisenkov, Alexey; Di Girolamo, Alessandro; Alandes Pradillo, Maria
2017-10-01
The variety of the ATLAS Distributed Computing infrastructure requires a central information system to define the topology of computing resources and to store different parameters and configuration data which are needed by various ATLAS software components. The ATLAS Grid Information System (AGIS) is the system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and services. Being an intermediate middleware system between clients and external information sources (like central BDII, GOCDB, MyOSG), AGIS defines the relations between experiment-specific used resources and physical distributed computing capabilities. Being in production during LHC Run 1, AGIS became the central information system for Distributed Computing in ATLAS and it is continuously evolving to fulfil new user requests, enable enhanced operations and follow the extension of the ATLAS Computing model. The ATLAS Computing model and the data structures used by Distributed Computing applications and services are continuously evolving and tend to fit newer requirements from the ADC community. In this note, we describe the evolution and the recent developments of AGIS functionalities, related to the integration of new technologies that have recently become widely used in ATLAS Computing, like the flexible utilization of opportunistic Cloud and HPC resources, the integration of ObjectStore services for the Distributed Data Management (Rucio) and ATLAS workload management (PanDA) systems, and the unified storage protocols declaration required for PanDA Pilot site movers, among others. The improvements of the information model and general updates are also shown; in particular we explain how other collaborations outside ATLAS could benefit from the system as a computing resources information catalogue. AGIS is evolving towards a common information system, not coupled to a specific experiment.
Optimizing CyberShake Seismic Hazard Workflows for Large HPC Resources
NASA Astrophysics Data System (ADS)
Callaghan, S.; Maechling, P. J.; Juve, G.; Vahi, K.; Deelman, E.; Jordan, T. H.
2014-12-01
The CyberShake computational platform is a well-integrated collection of scientific software and middleware that calculates 3D simulation-based probabilistic seismic hazard curves and hazard maps for the Los Angeles region. Currently each CyberShake model comprises about 235 million synthetic seismograms from about 415,000 rupture variations computed at 286 sites. CyberShake integrates large-scale parallel and high-throughput serial seismological research codes into a processing framework in which early stages produce files used as inputs by later stages. Scientific workflow tools are used to manage the jobs, data, and metadata. The Southern California Earthquake Center (SCEC) developed the CyberShake platform using USC High Performance Computing and Communications systems and open-science NSF resources. CyberShake calculations were migrated to the NSF Track 1 system NCSA Blue Waters when it became operational in 2013, via an interdisciplinary team approach including domain scientists, computer scientists, and middleware developers. Due to the excellent performance of Blue Waters and CyberShake software optimizations, we reduced the makespan (a measure of wallclock time-to-solution) of a CyberShake study from 1467 to 342 hours. We will describe the technical enhancements behind this improvement, including judicious introduction of new GPU software, improved scientific software components, increased workflow-based automation, and Blue Waters-specific workflow optimizations. Our CyberShake performance improvements highlight the benefits of scientific workflow tools. The CyberShake workflow software stack includes the Pegasus Workflow Management System (Pegasus-WMS, which includes Condor DAGMan), HTCondor, and Globus GRAM, with Pegasus-mpi-cluster managing the high-throughput tasks on the HPC resources. The workflow tools handle data management, automatically transferring about 13 TB back to SCEC storage. We will present performance metrics from the most recent CyberShake study, executed on Blue Waters. We will compare the performance of CPU and GPU versions of our large-scale parallel wave propagation code, AWP-ODC-SGT. Finally, we will discuss how these enhancements have enabled SCEC to move forward with plans to increase the CyberShake simulation frequency to 1.0 Hz.
Macchi, Elena; Zema, Lucia; Pandey, Preetanshu; Gazzaniga, Andrea; Felton, Linda A
2016-03-01
In a previous study, hydroxypropyl cellulose (HPC)-based capsular shells prepared by injection molding and intended for pulsatile release were successfully coated with 10 mg/cm² Eudragit® L film. The suitability of HPC capsules for the development of a colon delivery platform based on a time dependent approach was demonstrated. In the present work, data logging devices (PyroButton®) were used to monitor the microenvironmental conditions, i.e. temperature (T) and relative humidity (RH), during coating processes performed under different spray rates (1.2, 2.5 and 5.5 g/min). As HPC-based capsules present special features, a preliminary study was conducted on commercially available gelatin capsules for comparison purposes. By means of PyroButton data-loggers it was possible to acquire information about the impact of the effective T and RH conditions experienced by HPC substrates during the process on the technological properties and release performance of the coated systems. The use of increasing spray rates seemed to promote a tendency of the HPC shells to slightly swell at the beginning of the spraying process; moreover, capsules coated under spray rates of 1.2 and 2.5 g/min showed the desired release performance, i.e. ability to withstand the acidic media followed by the pulsatile release expected for uncoated capsules. Preliminary stability studies seemed to show that coating conditions might also influence the release performance of the system upon storage. Copyright © 2015 Elsevier B.V. All rights reserved.
DOVIS: an implementation for high-throughput virtual screening using AutoDock.
Zhang, Shuxing; Kumar, Kamal; Jiang, Xiaohui; Wallqvist, Anders; Reifman, Jaques
2008-02-27
Molecular-docking-based virtual screening is an important tool in drug discovery that is used to significantly reduce the number of possible chemical compounds to be investigated. In addition to the selection of a sound docking strategy with appropriate scoring functions, another technical challenge is to in silico screen millions of compounds in a reasonable time. To meet this challenge, it is necessary to use high performance computing (HPC) platforms and techniques. However, the development of an integrated HPC system that makes efficient use of its elements is not trivial. We have developed an application termed DOVIS that uses AutoDock (version 3) as the docking engine and runs in parallel on a Linux cluster. DOVIS can efficiently dock large numbers (millions) of small molecules (ligands) to a receptor, screening 500 to 1,000 compounds per processor per day. Furthermore, in DOVIS, the docking session is fully integrated and automated in that the inputs are specified via a graphical user interface, the calculations are fully integrated with a Linux cluster queuing system for parallel processing, and the results can be visualized and queried. DOVIS removes most of the complexities and organizational problems associated with large-scale high-throughput virtual screening, and provides a convenient and efficient solution for AutoDock users to use this software in a Linux cluster platform.
A Numerical Study of Scalable Cardiac Electro-Mechanical Solvers on HPC Architectures
Colli Franzone, Piero; Pavarino, Luca F.; Scacchi, Simone
2018-01-01
We introduce and study some scalable domain decomposition preconditioners for cardiac electro-mechanical 3D simulations on parallel HPC (High Performance Computing) architectures. The electro-mechanical model of the cardiac tissue is composed of four coupled sub-models: (1) the static finite elasticity equations for the transversely isotropic deformation of the cardiac tissue; (2) the active tension model describing the dynamics of the intracellular calcium, cross-bridge binding and myofilament tension; (3) the anisotropic Bidomain model describing the evolution of the intra- and extra-cellular potentials in the deforming cardiac tissue; and (4) the ionic membrane model describing the dynamics of ionic currents, gating variables, ionic concentrations and stretch-activated channels. This strongly coupled electro-mechanical model is discretized in time with a splitting semi-implicit technique and in space with isoparametric finite elements. The resulting scalable parallel solver is based on Multilevel Additive Schwarz preconditioners for the solution of the Bidomain system and on BDDC preconditioned Newton-Krylov solvers for the non-linear finite elasticity system. The results of several 3D parallel simulations show the scalability of both linear and non-linear solvers and their application to the study of both physiological excitation-contraction cardiac dynamics and re-entrant waves in the presence of different mechano-electrical feedbacks. PMID:29674971
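For reference, a schematic parabolic-elliptic form of the Bidomain sub-model (item 3) is written out below; the notation is assumed here and may differ from the paper's formulation.

```latex
% Schematic parabolic-elliptic form of the Bidomain sub-model (notation
% assumed, not taken from the paper): v is the transmembrane potential,
% u_e the extracellular potential, w the ionic gating/concentration variables.
\[
\begin{aligned}
\chi \, C_m \,\frac{\partial v}{\partial t}
  - \nabla \cdot \bigl(D_i \nabla (v + u_e)\bigr) + \chi\, I_{\mathrm{ion}}(v, w) &= I_{\mathrm{app}}, \\
-\nabla \cdot \bigl((D_i + D_e)\,\nabla u_e + D_i \nabla v\bigr) &= 0, \\
\frac{\partial w}{\partial t} - R(v, w) &= 0.
\end{aligned}
\]
```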
Spark and HPC for High Energy Physics Data Analyses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sehrish, Saba; Kowalkowski, Jim; Paterno, Marc
A full High Energy Physics (HEP) data analysis is divided into multiple data reduction phases. Processing within these phases is extremely time consuming, therefore intermediate results are stored in files held in mass storage systems and referenced as part of large datasets. This processing model limits what can be done with interactive data analytics. Growth in the size and complexity of experimental datasets, along with emerging big data tools, is beginning to cause changes to the traditional ways of doing data analyses. Use of big data tools for HEP analysis looks promising, mainly because extremely large HEP datasets can be represented and held in memory across a system, and accessed interactively by encoding an analysis using high-level programming abstractions. The mainstream tools, however, are not designed for scientific computing or for exploiting the available HPC platform features. We use an example from the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) in Geneva, Switzerland. The LHC is the highest energy particle collider in the world. Our use case focuses on searching for new types of elementary particles explaining Dark Matter in the universe. We use HDF5 as our input data format, and Spark to implement the use case. We show the benefits and limitations of using Spark with HDF5 on Edison at NERSC.
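A minimal sketch of the HDF5 side of such a workflow, reading a 1D double dataset with the HDF5 C API, is shown below; the file and dataset names are hypothetical placeholders and the Spark side is not shown.

```cpp
// Minimal sketch of reading a 1D double dataset with the HDF5 C API.
// File and dataset names are hypothetical placeholders.
#include <vector>
#include <cstdio>
#include <hdf5.h>

int main() {
    hid_t file = H5Fopen("events.h5", H5F_ACC_RDONLY, H5P_DEFAULT);
    if (file < 0) { std::fprintf(stderr, "cannot open file\n"); return 1; }

    hid_t dset  = H5Dopen2(file, "/events/missing_et", H5P_DEFAULT);
    hid_t space = H5Dget_space(dset);
    hssize_t n  = H5Sget_simple_extent_npoints(space);

    std::vector<double> met(static_cast<size_t>(n));
    H5Dread(dset, H5T_NATIVE_DOUBLE, H5S_ALL, H5S_ALL, H5P_DEFAULT, met.data());

    double sum = 0.0;
    for (double v : met) sum += v;     // a trivial "analysis" stand-in
    std::printf("%lld events, mean missing ET = %f\n",
                static_cast<long long>(n), n ? sum / n : 0.0);

    H5Sclose(space); H5Dclose(dset); H5Fclose(file);
    return 0;
}
```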
NASA Astrophysics Data System (ADS)
Vilotte, J.-P.; Atkinson, M.; Michelini, A.; Igel, H.; van Eck, T.
2012-04-01
Increasingly dense seismic and geodetic networks are continuously transmitting a growing wealth of data from around the world. The multi-use of these data led the seismological community to pioneer globally distributed open-access data infrastructures, standard services and formats, e.g., the Federation of Digital Seismic Networks (FDSN) and the European Integrated Data Archives (EIDA). Our ability to acquire observational data outpaces our ability to manage, analyze and model them. Research in seismology is today facing a fundamental paradigm shift. Enabling advanced data-intensive analysis and modeling applications challenges conventional storage, computation and communication models and requires a new holistic approach. It is instrumental to exploit the cornucopia of data, and to guarantee optimal operation and design of the high-cost monitoring facilities. The strategy of VERCE is driven by the needs of the seismological data-intensive applications in data analysis and modeling. It aims to provide a comprehensive architecture and framework adapted to the scale and the diversity of those applications, and to integrate the data infrastructures with Grid, Cloud and HPC infrastructures. It will allow prototyping solutions for new use cases as they emerge within the European Plate Observatory Systems (EPOS), the ESFRI initiative of the solid Earth community. Computational seismology, and information management, is increasingly revolving around massive amounts of data that stem from: (1) the flood of data from the observational systems; (2) the flood of data from large-scale simulations and inversions; (3) the ability to economically store petabytes of data online; (4) the evolving Internet and Data-aware computing capabilities. As data-intensive applications are rapidly increasing in scale and complexity, they require additional service-oriented architectures offering a virtualization-based flexibility for complex and re-usable workflows. Scientific information management poses computer science challenges: acquisition, organization, query and visualization tasks scale almost linearly with the data volumes. The commonly used FTP-GREP metaphor allows gigabyte-sized datasets to be scanned today, but it will not work for scanning terabyte-sized continuous waveform datasets. New data analysis and modeling methods, exploiting the signal coherence within dense network arrays, are nonlinear. Pair-algorithms on N points scale as N². Waveform inversion and stochastic simulations raise computing and data handling challenges. These applications are unfeasible for tera-scale datasets without new parallel algorithms that use near-linear processing, storage and bandwidth, and that can exploit new computing paradigms enabled by the intersection of several technologies (HPC, parallel scalable database crawler, data-aware HPC). These issues will be discussed based on a number of core pilot data-intensive applications and use cases retained in VERCE. These core applications are related to: (1) data processing and data analysis methods based on correlation techniques; (2) CPU-intensive applications such as large-scale simulation of synthetic waveforms in complex earth systems, and full waveform inversion and tomography. We shall analyze their workflow and data flow, and their requirements for a new service-oriented architecture and a data-aware platform with services and tools.
Finally, we will outline the importance of a new collaborative environment between seismology and computer science, together with the need for the emergence and the recognition of 'research technologists' mastering the evolving data-aware technologies and the data-intensive research goals in seismology.
An asynchronous traversal engine for graph-based rich metadata management
Dai, Dong; Carns, Philip; Ross, Robert B.; ...
2016-06-23
Rich metadata in high-performance computing (HPC) systems contains extended information about users, jobs, data files, and their relationships. Property graphs are a promising data model to represent heterogeneous rich metadata flexibly. Specifically, a property graph can use vertices to represent different entities and edges to record the relationships between vertices with unique annotations. The high-volume HPC use case, with millions of entities and relationships, naturally requires an out-of-core distributed property graph database, which must support live updates (to ingest production information in real time), low-latency point queries (for frequent metadata operations such as permission checking), and large-scale traversals (for provenance data mining). Among these needs, large-scale property graph traversals are particularly challenging for distributed graph storage systems. Most existing graph systems implement a "level synchronous" breadth-first search algorithm that relies on global synchronization in each traversal step. This performs well in many problem domains; but a rich metadata management system is characterized by imbalanced graphs, long traversal lengths, and concurrent workloads, each of which has the potential to introduce or exacerbate stragglers (i.e., abnormally slow steps or servers in a graph traversal) that lead to low overall throughput for synchronous traversal algorithms. Previous research indicated that the straggler problem can be mitigated by using asynchronous traversal algorithms, and many graph-processing frameworks have successfully demonstrated this approach. Such systems require the graph to be loaded into a separate batch-processing framework instead of being iteratively accessed, however. In this work, we investigate a general asynchronous graph traversal engine that can operate atop a rich metadata graph in its native format. We outline a traversal-aware query language and key optimizations (traversal-affiliate caching and execution merging) necessary for efficient performance. We further explore the effect of different graph partitioning strategies on the traversal performance for both synchronous and asynchronous traversal engines. Our experiments show that the asynchronous graph traversal engine is more efficient than its synchronous counterpart in the case of HPC rich metadata processing, where more servers are involved and larger traversals are needed. Furthermore, the asynchronous traversal engine is more adaptive to different graph partitioning strategies.
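For contrast, the level-synchronous BFS pattern that the abstract argues against can be sketched in a few lines; in a distributed setting the per-level barrier marked below is where stragglers stall the traversal.

```cpp
// Minimal single-process sketch of the "level synchronous" BFS pattern: the
// whole frontier is expanded before the next level starts. In a distributed
// setting this per-level barrier is exactly where stragglers hurt.
#include <vector>
#include <cstdio>

int main() {
    // Tiny adjacency list standing in for a property graph's topology.
    std::vector<std::vector<int>> adj = {
        {1, 2}, {0, 3}, {0, 3}, {1, 2, 4}, {3}
    };
    std::vector<int> level(adj.size(), -1);
    std::vector<int> frontier = {0};
    level[0] = 0;

    for (int depth = 1; !frontier.empty(); ++depth) {
        std::vector<int> next;                   // built from the whole frontier
        for (int u : frontier)
            for (int v : adj[u])
                if (level[v] == -1) { level[v] = depth; next.push_back(v); }
        frontier.swap(next);                     // implicit per-level barrier
    }
    for (size_t v = 0; v < level.size(); ++v)
        std::printf("vertex %zu at level %d\n", v, level[v]);
}
```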
Advances in Parallel Computing and Databases for Digital Pathology in Cancer Research
2016-11-13
these technologies and how we have used them in the past. We are interested in learning more about the needs of clinical pathologists as we continue to...such as image processing and correlation. Further, High Performance Computing (HPC) paradigms such as the Message Passing Interface (MPI) have been...Defense for Research and Engineering. such as pMatlab [4], or bcMPI [5] can significantly reduce the need for deep knowledge of parallel computing. In
SoAx: A generic C++ Structure of Arrays for handling particles in HPC codes
NASA Astrophysics Data System (ADS)
Homann, Holger; Laenen, Francois
2018-03-01
The numerical study of physical problems often requires integrating the dynamics of a large number of particles evolving according to a given set of equations. Particles are characterized by the information they carry, such as an identity, a position and other properties. There are, generally speaking, two different possibilities for handling particles in high performance computing (HPC) codes. The concept of an Array of Structures (AoS) is in the spirit of the object-oriented programming (OOP) paradigm in that the particle information is implemented as a structure. Here, an object (realization of the structure) represents one particle and a set of many particles is stored in an array. In contrast, using the concept of a Structure of Arrays (SoA), a single structure holds several arrays, each representing one property (such as the identity) of the whole set of particles. The AoS approach is often implemented in HPC codes due to its handiness and flexibility. For a class of problems, however, it is known that the performance of SoA is much better than that of AoS. We confirm this observation for our particle problem. Using a benchmark we show that on modern Intel Xeon processors the SoA implementation is typically several times faster than the AoS one. On Intel's MIC co-processors the performance gap even attains a factor of ten. The same is true for GPU computing, using both computational and multi-purpose GPUs. Combining performance and handiness, we present the library SoAx, which has optimal performance (on CPUs, MICs, and GPUs) while providing the same handiness as AoS. For this, SoAx uses modern C++ design techniques such as template meta-programming that allow code to be generated automatically for user-defined heterogeneous data structures.
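The layout difference at the heart of this comparison can be sketched as follows; this is a minimal illustration of AoS versus SoA, not SoAx's actual template machinery.

```cpp
// Side-by-side sketch of the two layouts discussed above. The SoA form keeps
// each property contiguous in memory, which is what enables vectorization on
// CPUs, MICs and GPUs; SoAx hides this layout behind an AoS-like interface.
#include <vector>
#include <cstddef>

// Array of Structures: one object per particle.
struct ParticleAoS { long id; double x, y, z; };
using AoS = std::vector<ParticleAoS>;

// Structure of Arrays: one contiguous array per property.
struct SoA {
    std::vector<long>   id;
    std::vector<double> x, y, z;
    void resize(std::size_t n) { id.resize(n); x.resize(n); y.resize(n); z.resize(n); }
};

void drift_aos(AoS& p, double dt, double vx) {
    for (auto& q : p) q.x += vx * dt;        // strided access: id, y, z are
}                                            // dragged through the cache too

void drift_soa(SoA& p, double dt, double vx) {
    for (double& xi : p.x) xi += vx * dt;    // unit-stride, easily vectorized
}

int main() {
    AoS a(1000000, {0, 0.0, 0.0, 0.0});
    SoA s; s.resize(1000000);
    drift_aos(a, 0.1, 1.0);
    drift_soa(s, 0.1, 1.0);
}
```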
3D streamers simulation in a pin to plane configuration using massively parallel computing
NASA Astrophysics Data System (ADS)
Plewa, J.-M.; Eichwald, O.; Ducasse, O.; Dessante, P.; Jacobs, C.; Renon, N.; Yousfi, M.
2018-03-01
This paper concerns the 3D simulation of corona discharge using high performance computing (HPC) managed with the message passing interface (MPI) library. In the field of finite volume methods applied on non-adaptive mesh grids and in the case of a specific 3D dynamic benchmark test devoted to streamer studies, the great efficiency of the iterative R&B SOR and BiCGSTAB methods versus the direct MUMPS method was clearly demonstrated in solving the Poisson equation using HPC resources. The optimization of the parallelization and the resulting scalability was undertaken as a function of the HPC architecture for a number of mesh cells ranging from 8 to 512 million and a number of cores ranging from 20 to 1600. The R&B SOR method remains at least about four times faster than the BiCGSTAB method and requires significantly less memory for all tested situations. The R&B SOR method was then implemented in a 3D MPI parallelized code that solves the classical first order model of an atmospheric pressure corona discharge in air. The 3D code capabilities were tested by following the development of one, two and four coplanar streamers generated by initial plasma spots for 6 ns. The preliminary results obtained allowed us to follow in detail the formation of the tree structure of a corona discharge and the effects of the mutual interactions between the streamers in terms of streamer velocity, trajectory and diameter. The computing time for 64 million mesh cells distributed over 1000 cores using the MPI procedures is about 30 min ns⁻¹, regardless of the number of streamers.
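A serial 2D sketch of the red-black SOR iteration for a Poisson problem is given below for orientation; the paper's solver is 3D and MPI-parallel, but the red/black colouring shown here is what makes the update order independent and hence easy to parallelize.

```cpp
// Serial 2D sketch of red-black SOR for a Poisson equation on a unit square
// with zero Dirichlet boundaries. Illustrative only; the paper's solver is
// 3D and distributed with MPI.
#include <algorithm>
#include <cmath>
#include <vector>
#include <cstdio>

int main() {
    const int n = 130;                 // grid including boundary
    const double h = 1.0 / (n - 1), omega = 1.8, tol = 1e-8;
    std::vector<double> u(n * n, 0.0), f(n * n, 1.0);   // rhs = 1, u = 0 on boundary
    auto idx = [n](int i, int j) { return i * n + j; };

    for (int sweep = 0; sweep < 10000; ++sweep) {
        double max_change = 0.0;
        for (int colour = 0; colour < 2; ++colour) {     // red (i+j even), then black
            for (int i = 1; i < n - 1; ++i)
                for (int j = 1; j < n - 1; ++j) {
                    if ((i + j) % 2 != colour) continue;
                    const double gs = 0.25 * (u[idx(i - 1, j)] + u[idx(i + 1, j)] +
                                              u[idx(i, j - 1)] + u[idx(i, j + 1)] -
                                              h * h * f[idx(i, j)]);
                    const double change = omega * (gs - u[idx(i, j)]);
                    u[idx(i, j)] += change;
                    max_change = std::max(max_change, std::fabs(change));
                }
        }
        if (max_change < tol) { std::printf("converged after %d sweeps\n", sweep + 1); break; }
    }
    std::printf("u at centre = %f\n", u[idx(n / 2, n / 2)]);
}
```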
Ruan, Ming; Young, Calvin K.; McNaughton, Neil
2017-01-01
Hippocampal (HPC) theta oscillations have long been linked to various functions of the brain. Many cortical and subcortical areas that also exhibit theta oscillations have been linked to functional circuits with the hippocampus on the basis of coupled activities at theta frequencies. We examine, in freely moving rats, the characteristics of diencephalic theta local field potentials (LFPs) recorded in the supramammillary/mammillary (SuM/MM) areas that are bi-directionally connected to the HPC through the septal complex. Using partial directed coherence (PDC), we find support for previous suggestions that SuM modulates HPC theta at higher frequencies. We find weak separation of SuM and MM by dominant theta frequency recorded locally. Contrary to oscillatory cell activities under anesthesia where SuM is insensitive, but MM is sensitive to medial septal (MS) inactivation, theta LFPs persisted and became indistinguishable after MS-inactivation. However, MS-inactivation attenuated SuM/MM theta power, while increasing the frequency of SuM/MM theta. MS-inactivation also reduced root mean squared power in both HPC and SuM/MM equally, but reduced theta power differentially in the time domain. We provide converging evidence that SuM is preferentially involved in coding HPC theta at higher frequencies, and that the MS-HPC circuit normally imposes a frequency-limiting modulation over the SuM/MM area as suggested by cell-based recordings in anesthetized animals. In addition, we provide evidence that the postulated SuM-MS-HPC-MM circuit is under complex bi-directional control, rather than SuM and MM having roles as unidirectional relays in the network. PMID:28955209
Power-Time Curve Comparison between Weightlifting Derivatives
Suchomel, Timothy J.; Sole, Christopher J.
2017-01-01
This study examined the power production differences between weightlifting derivatives through a comparison of power-time (P-t) curves. Thirteen resistance-trained males performed hang power clean (HPC), jump shrug (JS), and hang high pull (HHP) repetitions at relative loads of 30%, 45%, 65%, and 80% of their one repetition maximum (1RM) HPC. Relative peak power (PPRel), work (WRel), and P-t curves were compared. The JS produced greater PPRel than the HPC (p < 0.001, d = 2.53) and the HHP (p < 0.001, d = 2.14). In addition, the HHP PPRel was statistically greater than the HPC (p = 0.008, d = 0.80). Similarly, the JS produced greater WRel compared to the HPC (p < 0.001, d = 1.89) and HHP (p < 0.001, d = 1.42). Furthermore, HHP WRel was statistically greater than the HPC (p = 0.003, d = 0.73). The P-t profiles of each exercise were similar during the first 80-85% of the movement; however, during the final 15-20% of the movement the P-t profile of the JS was found to be greater than the HPC and HHP. The JS produced greater PPRel and WRel compared to the HPC and HHP with large effect size differences. The HHP produced greater PPRel and WRel than the HPC with moderate effect size differences. The JS and HHP produced markedly different P-t profiles in the final 15-20% of the movement compared to the HPC. Thus, these exercises may be superior methods of training to enhance PPRel. The greatest differences in PPRel between the JS and HHP and the HPC occurred at lighter loads, suggesting that loads of 30-45% 1RM HPC may provide the best training stimulus when using the JS and HHP. In contrast, loads ranging 65-80% 1RM HPC may provide an optimal stimulus for power production during the HPC. Key points The JS and HHP exercises produced greater relative peak power and relative work compared to the HPC. Although the power-time curves were similar during the first 80-85% of the movement, the JS and HHP possessed unique power-time characteristics during the final 15-20% of the movement compared to the HPC. The JS and HHP may be effectively implemented to train peak power characteristics, especially using loads ranging from 30-45% of an individual’s 1RM HPC. The HPC may be best implemented using loads ranging from 65-80% of an individual’s 1RM HPC. PMID:28912659
Discrete Particle Model for Porous Media Flow using OpenFOAM at Intel Xeon Phi Coprocessors
NASA Astrophysics Data System (ADS)
Shang, Zhi; Nandakumar, Krishnaswamy; Liu, Honggao; Tyagi, Mayank; Lupo, James A.; Thompson, Karten
2015-11-01
The discrete particle model (DPM) in OpenFOAM was used to study turbulent solid-particle suspension flows through the porous media of a natural dual-permeability rock. The 2D and 3D pore geometries of the porous media were generated by sphere packing with a radius ratio of 3. The porosity is about 38%, the same as the natural dual-permeability rock. In the 2D case the mesh reaches 5 million cells with 1 million solid particles, and in the 3D case the mesh exceeds 10 million cells with 5 million solid particles. The solid particle diameters follow a Gaussian distribution from 20 μm to 180 μm with a mean of 100 μm. Through the numerical simulations, not only was HPC performance studied using Intel Xeon Phi coprocessors, but the flow behavior of large-scale solid suspension flows in porous media was also studied. The authors would like to thank the support of IPCC@LSU-Intel Parallel Computing Center (LSU # Y1SY1-1) and the HPC resources at Louisiana State University (http://www.hpc.lsu.edu).
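As an aside, a particle-size distribution like the one described (Gaussian with a 100 μm mean, restricted to 20-180 μm) can be generated with a few lines of C++; the standard deviation below is an assumption, since the abstract does not report it:

```cpp
#include <cstddef>
#include <random>
#include <vector>

// Draw n particle diameters (in micrometres) from a normal distribution with
// the stated mean, rejecting samples outside the stated 20-180 um range.
std::vector<double> sample_diameters(std::size_t n, double mean_um = 100.0,
                                     double sigma_um = 25.0,   // assumed value
                                     double lo = 20.0, double hi = 180.0) {
    std::mt19937 gen(42);
    std::normal_distribution<double> dist(mean_um, sigma_um);
    std::vector<double> d;
    d.reserve(n);
    while (d.size() < n) {
        double v = dist(gen);
        if (v >= lo && v <= hi) d.push_back(v);   // rejection enforces truncation
    }
    return d;
}
```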
McrEngine: A Scalable Checkpointing System Using Data-Aware Aggregation and Compression
Islam, Tanzima Zerin; Mohror, Kathryn; Bagchi, Saurabh; ...
2013-01-01
High performance computing (HPC) systems use checkpoint-restart to tolerate failures. Typically, applications store their states in checkpoints on a parallel file system (PFS). As applications scale up, checkpoint-restart incurs high overheads due to contention for PFS resources. The high overheads force large-scale applications to reduce checkpoint frequency, which means more compute time is lost in the event of failure. We alleviate this problem through a scalable checkpoint-restart system, mcrEngine. McrEngine aggregates checkpoints from multiple application processes with knowledge of the data semantics available through widely-used I/O libraries, e.g., HDF5 and netCDF, and compresses them. Our novel scheme improves compressibility of checkpoints by up to 115% over simple concatenation and compression. Our evaluation with large-scale application checkpoints shows that mcrEngine reduces checkpointing overhead by up to 87% and restart overhead by up to 62% over a baseline with no aggregation or compression.
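The core idea of data-aware aggregation can be illustrated with a short, purely conceptual C++ sketch: buffers that hold the same named variable are merged across process checkpoints so that similar data sits together before a general-purpose compressor is applied. This illustrates the general idea only; it is not mcrEngine's actual implementation and it omits the HDF5/netCDF handling entirely.

```cpp
#include <cstdint>
#include <map>
#include <string>
#include <vector>

// One named variable extracted from a single process's checkpoint.
struct Variable {
    std::string          name;   // semantic label taken from the I/O library metadata
    std::vector<uint8_t> bytes;  // raw contents of that variable
};

// A process checkpoint is just the list of variables it contains.
using ProcessCheckpoint = std::vector<Variable>;

// Merge like-named variables across all process checkpoints. Grouping similar
// data together is what improves compressibility relative to naive concatenation.
std::map<std::string, std::vector<uint8_t>>
aggregate_by_variable(const std::vector<ProcessCheckpoint>& checkpoints) {
    std::map<std::string, std::vector<uint8_t>> merged;
    for (const auto& ckpt : checkpoints)
        for (const auto& var : ckpt)
            merged[var.name].insert(merged[var.name].end(),
                                    var.bytes.begin(), var.bytes.end());
    // Each merged buffer would then be compressed (e.g., with gzip) before being
    // written to the parallel file system.
    return merged;
}
```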
Riaz, Sadia; Schumacher, Anett; Sivagurunathan, Seyon; Van Der Meer, Matthijs; Ito, Rutsuko
2017-07-01
The hippocampus (HPC) has been widely implicated in the contextual control of appetitive and aversive conditioning. However, whole hippocampal lesions do not invariably impair all forms of contextual processing, as in the case of complex biconditional context discrimination, leading to contention over the exact nature of the contribution of the HPC in contextual processing. Moreover, the increasingly well-established functional dissociation between the dorsal (dHPC) and ventral (vHPC) subregions of the HPC has been largely overlooked in the existing literature on hippocampal-based contextual memory processing in appetitively motivated tasks. Thus, the present study sought to investigate the individual roles of the dHPC and the vHPC in contextual biconditional discrimination (CBD) performance and memory retrieval. To this end, we examined the effects of transient post-acquisition pharmacological inactivation (using a combination of GABA A and GABA B receptor agonists muscimol and baclofen) of functionally distinct subregions of the HPC (CA1/CA3 subfields of the dHPC and vHPC) on CBD memory retrieval. Additional behavioral assays including novelty preference, light-dark box and locomotor activity test were also performed to confirm that the respective sites of inactivation were functionally silent. We observed robust deficits in CBD performance and memory retrieval following inactivation of the vHPC, but not the dHPC. Our data provides novel insight into the differential roles of the ventral and dorsal HPC in reward contextual processing, under conditions in which the context is defined by proximal cues. © 2017 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Kadow, C.; Illing, S.; Kunst, O.; Cubasch, U.
2014-12-01
The project 'Integrated Data and Evaluation System for Decadal Scale Prediction' (INTEGRATION), part of the German decadal prediction project MiKlip, develops a central evaluation system. The fully operational hybrid system features HPC shell access and a user-friendly web interface. It employs one common system with a variety of verification tools and validation data from different projects inside and outside of MiKlip. The evaluation system is located at the German Climate Computing Centre (DKRZ) and has direct access to the bulk of its ESGF node, including millions of climate model data sets, e.g. from CMIP5 and CORDEX. The database is organized by the international CMOR standard using the meta information of the self-describing model, reanalysis and observational data sets. Apache Solr is used for indexing the different data projects into one common search environment. This metadata system, with its advanced but easy-to-use search tool, helps users, developers and their tools retrieve the required information. A generic application programming interface (API) allows scientific developers to connect their analysis tools with the evaluation system independently of the programming language used. Users of the evaluation techniques benefit from the common interface of the evaluation system without any need to understand the different scripting languages. Facilitating the provision and use of tools and climate data automatically increases the number of scientists working with the data sets and identifying discrepancies. Additionally, the history and configuration sub-system stores every analysis performed with the evaluation system in a MySQL database. Configurations and results of the tools can be shared among scientists via the shell or the web system. Therefore, plugged-in tools automatically gain transparency and reproducibility. Furthermore, when configurations match while starting an evaluation tool, the system suggests using results already produced by other users, saving CPU time, I/O and disk space. This study presents the different techniques and advantages of such a hybrid evaluation system making use of Big Data HPC in climate science. website: www-miklip.dkrz.de visitor-login: guest password: miklip
NASA Astrophysics Data System (ADS)
Kadow, Christopher; Illing, Sebastian; Kunst, Oliver; Ulbrich, Uwe; Cubasch, Ulrich
2015-04-01
The project 'Integrated Data and Evaluation System for Decadal Scale Prediction' (INTEGRATION), part of the German decadal prediction project MiKlip, develops a central evaluation system. The fully operational hybrid system features HPC shell access and a user-friendly web interface. It employs one common system with a variety of verification tools and validation data from different projects inside and outside of MiKlip. The evaluation system is located at the German Climate Computing Centre (DKRZ) and has direct access to the bulk of its ESGF node, including millions of climate model data sets, e.g. from CMIP5 and CORDEX. The database is organized by the international CMOR standard using the meta information of the self-describing model, reanalysis and observational data sets. Apache Solr is used for indexing the different data projects into one common search environment. This metadata system, with its advanced but easy-to-use search tool, helps users, developers and their tools retrieve the required information. A generic application programming interface (API) allows scientific developers to connect their analysis tools with the evaluation system independently of the programming language used. Users of the evaluation techniques benefit from the common interface of the evaluation system without any need to understand the different scripting languages. Facilitating the provision and use of tools and climate data automatically increases the number of scientists working with the data sets and identifying discrepancies. Additionally, the history and configuration sub-system stores every analysis performed with the evaluation system in a MySQL database. Configurations and results of the tools can be shared among scientists via the shell or the web system. Therefore, plugged-in tools automatically gain transparency and reproducibility. Furthermore, when configurations match while starting an evaluation tool, the system suggests using results already produced by other users, saving CPU time, I/O and disk space. This study presents the different techniques and advantages of such a hybrid evaluation system making use of Big Data HPC in climate science. website: www-miklip.dkrz.de visitor-login: click on "Guest"
Modeling Cardiac Electrophysiology at the Organ Level in the Peta FLOPS Computing Age
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitchell, Lawrence; Bishop, Martin; Hoetzl, Elena
2010-09-30
Despite a steep increase in available compute power, in-silico experimentation with highly detailed models of the heart remains challenging due to the high computational cost involved. It is hoped that next-generation high performance computing (HPC) resources will lead to significant reductions in execution times and enable a new class of in-silico applications. However, performance gains with these new platforms can only be achieved by engaging a much larger number of compute cores, necessitating strongly scalable numerical techniques. So far, strong scalability has been demonstrated only for a moderate number of cores, orders of magnitude below the range required to achieve the desired performance boost. In this study, the strong scalability of currently used techniques to solve the bidomain equations is investigated. Benchmark results suggest that scalability is limited to 512-4096 cores within the range of relevant problem sizes, even when systems are carefully load-balanced and advanced IO strategies are employed.
High-Resiliency and Auto-Scaling of Large-Scale Cloud Computing for OCO-2 L2 Full Physics Processing
NASA Astrophysics Data System (ADS)
Hua, H.; Manipon, G.; Starch, M.; Dang, L. B.; Southam, P.; Wilson, B. D.; Avis, C.; Chang, A.; Cheng, C.; Smyth, M.; McDuffie, J. L.; Ramirez, P.
2015-12-01
Next generation science data systems are needed to address the incoming flood of data from new missions such as SWOT and NISAR, where data volumes and data throughput rates are orders of magnitude larger than in present-day missions. Additionally, traditional means of procuring hardware on-premise are already limited due to facilities capacity constraints for these new missions. Existing missions, such as OCO-2, may also require high turn-around time for processing different science scenarios where on-premise and even traditional HPC computing environments may not meet the high processing needs. We present our experiences deploying a hybrid-cloud computing science data system (HySDS) for the OCO-2 Science Computing Facility to support large-scale processing of their Level-2 full physics data products. We will explore optimization approaches to getting the best performance out of hybrid-cloud computing as well as common issues that will arise when dealing with large-scale computing. Novel approaches were utilized to do processing on Amazon's spot market, which can potentially offer ~10X cost savings but with an unpredictable computing environment based on market forces. We will present how we enabled high-tolerance computing in order to achieve large-scale computing as well as operational cost savings.
Integration of the Chinese HPC Grid in ATLAS Distributed Computing
NASA Astrophysics Data System (ADS)
Filipčič, A.;
2017-10-01
Fifteen Chinese High-Performance Computing sites, many of them on the TOP500 list of the most powerful supercomputers, are integrated into a common infrastructure providing coherent user access through a RESTful interface called SCEAPI. These resources have been integrated into the ATLAS Grid production system using a bridge between ATLAS and SCEAPI which translates the authorization and job submission protocols between the two environments. The ARC Computing Element (ARC-CE) forms the bridge, using an extended batch system interface to allow job submission to SCEAPI. The ARC-CE was set up at the Institute for High Energy Physics, Beijing, in order to be as close as possible to the SCEAPI front-end interface at the Computing Network Information Center, also in Beijing. This paper describes the technical details of the integration between ARC-CE and SCEAPI and presents results so far with two supercomputer centers, Tianhe-1A and ERA. These two centers have been the pilots for ATLAS Monte Carlo Simulation in SCEAPI and have been providing CPU power since fall 2015.
BrainFrame: a node-level heterogeneous accelerator platform for neuron simulations
NASA Astrophysics Data System (ADS)
Smaragdos, Georgios; Chatzikonstantis, Georgios; Kukreja, Rahul; Sidiropoulos, Harry; Rodopoulos, Dimitrios; Sourdis, Ioannis; Al-Ars, Zaid; Kachris, Christoforos; Soudris, Dimitrios; De Zeeuw, Chris I.; Strydis, Christos
2017-12-01
Objective. The advent of high-performance computing (HPC) in recent years has led to its increasing use in brain studies through computational models. The scale and complexity of such models are constantly increasing, leading to challenging computational requirements. Even though modern HPC platforms can often deal with such challenges, the vast diversity of the modeling field does not permit for a homogeneous acceleration platform to effectively address the complete array of modeling requirements. Approach. In this paper we propose and build BrainFrame, a heterogeneous acceleration platform that incorporates three distinct acceleration technologies, an Intel Xeon-Phi CPU, a NVidia GP-GPU and a Maxeler Dataflow Engine. The PyNN software framework is also integrated into the platform. As a challenging proof of concept, we analyze the performance of BrainFrame on different experiment instances of a state-of-the-art neuron model, representing the inferior-olivary nucleus using a biophysically-meaningful, extended Hodgkin-Huxley representation. The model instances take into account not only the neuronal-network dimensions but also different network-connectivity densities, which can drastically affect the workload’s performance characteristics. Main results. The combined use of different HPC technologies demonstrates that BrainFrame is better able to cope with the modeling diversity encountered in realistic experiments while at the same time running on significantly lower energy budgets. Our performance analysis clearly shows that the model directly affects performance and all three technologies are required to cope with all the model use cases. Significance. The BrainFrame framework is designed to transparently configure and select the appropriate back-end accelerator technology for use per simulation run. The PyNN integration provides a familiar bridge to the vast number of models already available. Additionally, it gives a clear roadmap for extending the platform support beyond the proof of concept, with improved usability and directly useful features to the computational-neuroscience community, paving the way for wider adoption.
BrainFrame: a node-level heterogeneous accelerator platform for neuron simulations.
Smaragdos, Georgios; Chatzikonstantis, Georgios; Kukreja, Rahul; Sidiropoulos, Harry; Rodopoulos, Dimitrios; Sourdis, Ioannis; Al-Ars, Zaid; Kachris, Christoforos; Soudris, Dimitrios; De Zeeuw, Chris I; Strydis, Christos
2017-12-01
The advent of high-performance computing (HPC) in recent years has led to its increasing use in brain studies through computational models. The scale and complexity of such models are constantly increasing, leading to challenging computational requirements. Even though modern HPC platforms can often deal with such challenges, the vast diversity of the modeling field does not permit for a homogeneous acceleration platform to effectively address the complete array of modeling requirements. In this paper we propose and build BrainFrame, a heterogeneous acceleration platform that incorporates three distinct acceleration technologies, an Intel Xeon-Phi CPU, a NVidia GP-GPU and a Maxeler Dataflow Engine. The PyNN software framework is also integrated into the platform. As a challenging proof of concept, we analyze the performance of BrainFrame on different experiment instances of a state-of-the-art neuron model, representing the inferior-olivary nucleus using a biophysically-meaningful, extended Hodgkin-Huxley representation. The model instances take into account not only the neuronal-network dimensions but also different network-connectivity densities, which can drastically affect the workload's performance characteristics. The combined use of different HPC technologies demonstrates that BrainFrame is better able to cope with the modeling diversity encountered in realistic experiments while at the same time running on significantly lower energy budgets. Our performance analysis clearly shows that the model directly affects performance and all three technologies are required to cope with all the model use cases. The BrainFrame framework is designed to transparently configure and select the appropriate back-end accelerator technology for use per simulation run. The PyNN integration provides a familiar bridge to the vast number of models already available. Additionally, it gives a clear roadmap for extending the platform support beyond the proof of concept, with improved usability and directly useful features to the computational-neuroscience community, paving the way for wider adoption.
Power monitoring and control for large scale projects: SKA, a case study
NASA Astrophysics Data System (ADS)
Barbosa, Domingos; Barraca, João. Paulo; Maia, Dalmiro; Carvalho, Bruno; Vieira, Jorge; Swart, Paul; Le Roux, Gerhard; Natarajan, Swaminathan; van Ardenne, Arnold; Seca, Luis
2016-07-01
Large sensor-based science infrastructures for radio astronomy like the SKA will be among the most intensive data-driven projects in the world, facing very demanding computation, storage and management requirements and, above all, power demands. The geographically wide distribution of the SKA and its associated processing requirements, in the form of tailored High Performance Computing (HPC) facilities, require a greener approach towards the Information and Communications Technologies (ICT) adopted for the data processing in order to comply with potentially strict power budgets. Reducing electricity costs, improving system power monitoring, and managing the generation and use of electricity at the system level are paramount to avoiding future inefficiencies and higher costs and to enabling fulfillment of the Key Science Cases. Here we outline major characteristics and innovative approaches to address power efficiency and long-term power sustainability for radio astronomy projects, focusing on Green ICT for science and Smart power monitoring and control.
Jflow: a workflow management system for web applications.
Mariette, Jérôme; Escudié, Frédéric; Bardou, Philippe; Nabihoudine, Ibouniyamine; Noirot, Céline; Trotard, Marie-Stéphane; Gaspin, Christine; Klopp, Christophe
2016-02-01
Biologists produce large data sets and need rich yet simple web portals in which they can upload and analyze their files. Providing such tools requires masking the complexity introduced by the underlying High Performance Computing (HPC) environment. The connection between the interface and the computing infrastructure is usually specific to each portal. With Jflow, we introduce a Workflow Management System (WMS) composed of jQuery plug-ins, which can easily be embedded in any web application, and a Python library providing all the features required to set up, run and monitor workflows. Jflow is available under the GNU General Public License (GPL) at http://bioinfo.genotoul.fr/jflow. The package comes with full documentation, a quick-start guide and a running test portal. Jerome.Mariette@toulouse.inra.fr. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Hemangiopericytoma in the central nervous system. A study of eight cases.
Mekni, A; Kourda, J; Chelly, I; Ferchichi, L; Bellil, K; Hammouda, K B; Kchir, N; Zitouna, M; Khaldi, M; Haouet, S
2008-02-01
Most hemangiopericytomas (HPC) are located in the musculoskeletal system and the skin, while the location in the central nervous system (CNS) is rare. The latter represents 2 to 4% in large series of meningeal tumors, thus accounting for less than 1% of all CNS tumors. In the central nervous system, tumors with a hemangiopericytomatous histopathological pattern can be either hemangiopericytomas or solitary fibrous tumors. CNS-HPCs have a relentless tendency for local recurrence and metastases outside the CNS. Metastasis can also appear many years after adequate treatment of the primary tumor. We present a pathological study of eight patients with CNS-HPC and compare our results with corresponding published data. The CNS-HPC group consisted of three males and five females with a mean age of 36.75 years. The tumors were supratentorial in four cases, infratentorial in two cases, tentorial in one case and located in the spinal cord in the last one. Histologically, CNS-HPCs were similar to their soft tissue counterparts. One case demonstrated increased cellularity, marked nuclear hyperchromasia and marked cellular pleomorphism with infiltration of the cerebellum. All patients underwent surgery with gross-total resection in all cases. No patients received postoperative radiation therapy. Only four patients recurred locally, after six, seven and eight months, and five years. Our study presents the pathological features of CNS-HPC as a distinct entity from both meningioma and solitary fibrous tumors. A comparative review of the literature with our results is discussed.
Li, You Quan; Chua, Eu Tiong; Chua, Kevin L M; Chua, Melvin L K
2018-02-01
Intracranial hemangiopericytomas (HPC) are chemotherapy- and radiotherapy (RT)-resistant. Here, we report on a novel stereotactic radiosurgery (SRS) technique, "Cor Occidere" (Latin), as a potential strategy for overcoming the radioresistance of HPC. A 36-year-old female presented to our clinic for consideration of a third course of RT for her recurrent cavernous sinus HPC, following previous cranial RT 13 and 5 years prior and a failed 9-month trial of bevacizumab/temozolomide. The tumor-adjacent brain stem and carotid artery risked substantial damage given the cumulative RT doses to these organs. We therefore designed an SRS plan targeting only the tumor core with a single fraction of 16 Gy. Despite underdosing the tumor margin, we achieved stable disease over 25 months, in contrast to her responses to systemic therapies. Achieving tumor control despite a suboptimal treatment that utilized high-dose ablation of the tumor core suggests novel biological mechanisms to overcome the radioresistance of HPC.
Equivalent cardioprotection induced by ischemic and hypoxic preconditioning.
Xiang, Xujin; Lin, Haixia; Liu, Jin; Duan, Zeyan
2013-04-01
We aimed to compare cardioprotection induced by various hypoxic preconditioning (HPC) and ischemic preconditioning (IPC) protocols. Isolated rat hearts were randomly divided into 7 groups (n = 7 per group) and received 3 or 5 cycles of 3-minute ischemia or hypoxia followed by 3-minute reperfusion (IPC33 or HPC33 or IPC53 or HPC53 group), 3 cycles of 5-minute ischemia or hypoxia followed by 5-minute reperfusion (IPC35 group or HPC35 group), or 30-minute perfusion (ischemic/reperfusion group), respectively. Then all the hearts were subjected to 50-minute ischemia and 120-minute reperfusion. Cardiac function, infarct size, and coronary flow rate (CFR) were evaluated. Recovery of cardiac function and CFR in IPC35, HPC35, and HPC53 groups was significantly improved as compared with I/R group (p < 0.01). There were no significant differences in cardiac function parameters between IPC35 and HPC35 groups. Consistently, infarct size was significantly reduced in IPC35, HPC35, and HPC53 groups compared with ischemic/reperfusion group. Multiple-cycle short duration HPC exerted cardioprotection, which was as powerful as that of IPC. Georg Thieme Verlag KG Stuttgart · New York.
A population-based analysis of Head and Neck hemangiopericytoma.
Shaigany, Kevin; Fang, Christina H; Patel, Tapan D; Park, Richard Chan; Baredes, Soly; Eloy, Jean Anderson
2016-03-01
Hemangiopericytomas (HPC) are tumors that arise from pericytes. Hemangiopericytomas of the head and neck are rare and occur both extracranially and intracranially. This study analyzes the demographic, clinicopathologic, treatment modalities, and survival characteristics of extracranial head and neck hemangiopericytomas (HN-HPC) and compares them to HPCs at other body sites (Other-HPC). The Surveillance, Epidemiology, and End Results (SEER) database (1973-2012) was queried for HN-HPC (121 cases) and Other-HPC (510 cases). Data were analyzed comparatively with respect to various demographic and clinicopathologic factors. Disease-specific survival (DSS) was analyzed using the Kaplan-Meier model. There was no significant difference in age at time of diagnosis between HN-HPC and Other-HPC. Head and neck HPC was most commonly located in the connective and soft tissue (18.4%), followed by the nasal cavity and paranasal sinuses (8.5%). Head and neck HPCs were smaller than Other-HPC (P < 0.0001) and more likely to be a lower histologic grade (P < 0.0097). The primary treatment modality for HN-HPC was surgery alone, used in 55.8% of cases. The 5-, 10-, and 20-year DSS for HN-HPC were 84.0%, 79.4%, and 69.4%, respectively. Higher histologic grade and the presence of distant metastases were poor prognostic factors for HN-HPC. Head and neck HPCs are rare tumors. This study represents the largest series of HN-HPCs to date. Surgery alone is the primary treatment modality for HN-HPC, with a favorable prognosis. Adjuvant radiotherapy does not appear to confer a survival benefit for any body site. 4. Laryngoscope, 126:643-650, 2016. © 2015 The American Laryngological, Rhinological and Otological Society, Inc.
Molgenis-impute: imputation pipeline in a box.
Kanterakis, Alexandros; Deelen, Patrick; van Dijk, Freerk; Byelas, Heorhiy; Dijkstra, Martijn; Swertz, Morris A
2015-08-19
Genotype imputation is an important procedure in current genomic analysis such as genome-wide association studies, meta-analyses and fine mapping. Although high quality tools are available that perform the steps of this process, considerable effort and expertise is required to set up and run a best practice imputation pipeline, particularly for larger genotype datasets, where imputation has to scale out in parallel on computer clusters. Here we present MOLGENIS-impute, an 'imputation in a box' solution that seamlessly and transparently automates the set up and running of all the steps of the imputation process. These steps include genome build liftover (liftovering), genotype phasing with SHAPEIT2, quality control, sample and chromosomal chunking/merging, and imputation with IMPUTE2. MOLGENIS-impute builds on MOLGENIS-compute, a simple pipeline management platform for submission and monitoring of bioinformatics tasks in High Performance Computing (HPC) environments like local/cloud servers, clusters and grids. All the required tools, data and scripts are downloaded and installed in a single step. Researchers with diverse backgrounds and expertise have tested MOLGENIS-impute on different locations and imputed over 30,000 samples so far using the 1,000 Genomes Project and new Genome of the Netherlands data as the imputation reference. The tests have been performed on PBS/SGE clusters, cloud VMs and in a grid HPC environment. MOLGENIS-impute gives priority to the ease of setting up, configuring and running an imputation. It has minimal dependencies and wraps the pipeline in a simple command line interface, without sacrificing flexibility to adapt or limiting the options of underlying imputation tools. It does not require knowledge of a workflow system or programming, and is targeted at researchers who just want to apply best practices in imputation via simple commands. It is built on the MOLGENIS compute workflow framework to enable customization with additional computational steps or it can be included in other bioinformatics pipelines. It is available as open source from: https://github.com/molgenis/molgenis-imputation.
Implementation of the NAS Parallel Benchmarks in Java
NASA Technical Reports Server (NTRS)
Frumkin, Michael A.; Schultz, Matthew; Jin, Haoqiang; Yan, Jerry; Biegel, Bryan (Technical Monitor)
2002-01-01
Several features make Java an attractive choice for High Performance Computing (HPC). In order to gauge the applicability of Java to Computational Fluid Dynamics (CFD), we have implemented the NAS (NASA Advanced Supercomputing) Parallel Benchmarks in Java. The performance and scalability of the benchmarks point out the areas where improvement in Java compiler technology and in Java thread implementation would position Java closer to Fortran in the competition for CFD applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palmintier, Bryan; Hale, Elaine; Hodge, Bri-Mathias
2016-08-11
This paper discusses the development of, approaches for, experiences with, and some results from a large-scale, high-performance-computer-based (HPC-based) co-simulation of electric power transmission and distribution systems using the Integrated Grid Modeling System (IGMS). IGMS was developed at the National Renewable Energy Laboratory (NREL) as a novel Independent System Operator (ISO)-to-appliance scale electric power system modeling platform that combines off-the-shelf tools to simultaneously model 100s to 1000s of distribution systems in co-simulation with detailed ISO markets, transmission power flows, and AGC-level reserve deployment. Lessons learned from the co-simulation architecture development are shared, along with a case study that explores the reactive power impacts of PV inverter voltage support on the bulk power system.
Cloud CPFP: a shotgun proteomics data analysis pipeline using cloud and high performance computing.
Trudgian, David C; Mirzaei, Hamid
2012-12-07
We have extended the functionality of the Central Proteomics Facilities Pipeline (CPFP) to allow use of remote cloud and high performance computing (HPC) resources for shotgun proteomics data processing. CPFP has been modified to include modular local and remote scheduling for data processing jobs. The pipeline can now be run on a single PC or server, a local cluster, a remote HPC cluster, and/or the Amazon Web Services (AWS) cloud. We provide public images that allow easy deployment of CPFP in its entirety in the AWS cloud. This significantly reduces the effort necessary to use the software, and allows proteomics laboratories to pay for compute time ad hoc, rather than obtaining and maintaining expensive local server clusters. Alternatively the Amazon cloud can be used to increase the throughput of a local installation of CPFP as necessary. We demonstrate that cloud CPFP allows users to process data at higher speed than local installations but with similar cost and lower staff requirements. In addition to the computational improvements, the web interface to CPFP is simplified, and other functionalities are enhanced. The software is under active development at two leading institutions and continues to be released under an open-source license at http://cpfp.sourceforge.net.
Scott, Daniel B; Van Dyke, Michele I; Anderson, William B; Huck, Peter M
2015-12-01
The potential for regrowth of nitrifying microorganisms was monitored in 2 full-scale chloraminated drinking water distribution systems in Ontario, Canada, over a 9-month period. Quantitative PCR was used to measure amoA genes from ammonia-oxidizing bacteria (AOB) and ammonia-oxidizing archaea (AOA), and these values were compared with water quality parameters that can influence nitrifier survival and growth, including total chlorine, ammonia, temperature, pH, and organic carbon. Although there were no severe nitrification episodes, AOB and AOA were frequently detected at low concentrations in samples collected from both distribution systems. A culture-based presence-absence test confirmed the presence of viable nitrifiers. AOB were usually present in similar or greater numbers than AOA in both systems. As well, AOB showed higher regrowth potential compared with AOA in both systems. Statistically significant correlations were measured between several water quality parameters of relevance to nitrification. Total chlorine was negatively correlated with both nitrifiers and heterotrophic plate count (HPC) bacteria, and ammonia levels were positively correlated with nitrifiers. Of particular importance was the strong correlation between HPC and AOB, which reinforced the usefulness of HPC as an operational parameter to measure general microbiological conditions in distribution systems.
GSDC: A Unique Data Center in Korea for HEP research
NASA Astrophysics Data System (ADS)
Ahn, Sang-Un
2017-04-01
The Global Science experimental Data hub Center (GSDC) at the Korea Institute of Science and Technology Information (KISTI) is a unique data center in South Korea established to promote fundamental research fields by supporting them with expertise in Information and Communication Technology (ICT) and infrastructure for High Performance Computing (HPC), High Throughput Computing (HTC) and networking. GSDC has supported various research fields in South Korea dealing with large-scale data, e.g. the RENO experiment for neutrino research, the LIGO experiment for gravitational wave detection, a genome sequencing project for biomedical research, and HEP experiments such as CDF at FNAL, Belle at KEK, and STAR at BNL. In particular, GSDC has run a Tier-1 center for the ALICE experiment at the LHC at CERN since 2013. In this talk, we present an overview of the computing infrastructure that GSDC runs for these research fields and discuss the data center infrastructure management system deployed at GSDC.
The clinical phenotype of hereditary versus sporadic prostate cancer: HPC definition revisited.
Cremers, Ruben G; Aben, Katja K; van Oort, Inge M; Sedelaar, J P Michiel; Vasen, Hans F; Vermeulen, Sita H; Kiemeney, Lambertus A
2016-07-01
The definition of hereditary prostate cancer (HPC) is based on family history and age at onset. Intuitively, HPC is a serious subtype of prostate cancer but there are only limited data on the clinical phenotype of HPC. Here, we aimed to compare the prognosis of HPC to the sporadic form of prostate cancer (SPC). HPC patients were identified through a national registry of HPC families in the Netherlands, selecting patients diagnosed from the year 2000 onward (n = 324). SPC patients were identified from the Netherlands Cancer Registry (NCR) between 2003 and 2006 for a population-based study into the genetic susceptibility of PC (n = 1,664). Detailed clinical data were collected by NCR-registrars, using a standardized registration form. Follow-up extended up to the end of 2013. Differences between the groups were evaluated by cross-tabulations and tested for statistical significance while accounting for familial dependency of observations by GEE. Differences in progression-free and overall survival were evaluated using χ² testing with GEE in a proportional-hazards model. HPC patients were on average 3 years younger at diagnosis, had lower PSA values, lower Gleason scores, and more often locally confined disease. Of the HPC patients, 35% had high-risk disease (NICE-criteria) versus 51% of the SPC patients. HPC patients were less often treated with active surveillance. Kaplan-Meier 5-year progression-free survival after radical prostatectomy was comparable for HPC (78%) and SPC (74%; P = 0.30). The 5-year overall survival was 85% (95%CI 81-89%) for HPC versus 80% (95%CI 78-82%) for SPC (P = 0.03). HPC has a favorable clinical phenotype but patients more often underwent radical treatment. The major limitation of HPC is the absence of a genetics-based definition of HPC, which may lead to over-diagnosis of PC in men with a family history of prostate cancer. The HPC definition should, therefore, be re-evaluated, aiming at a reduction of over-diagnosis and overtreatment among men with multiple relatives diagnosed with PC. Prostate 76:897-904, 2016. © 2016 The Authors. The Prostate published by Wiley Periodicals, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gallarno, George; Rogers, James H; Maxwell, Don E
The high computational capability of graphics processing units (GPUs) is enabling and driving the scientific discovery process at large scale. The world's second fastest supercomputer for open science, Titan, has more than 18,000 GPUs that computational scientists use to perform scientific simulations and data analysis. Understanding of GPU reliability characteristics, however, is still in its nascent stage since GPUs have only recently been deployed at large scale. This paper presents a detailed study of GPU errors and their impact on system operations and applications, describing experiences with the 18,688 GPUs on the Titan supercomputer as well as lessons learned in the process of efficient operation of GPUs at scale. These experiences are helpful to HPC sites which already have large-scale GPU clusters or plan to deploy GPUs in the future.
A Look at the Impact of High-End Computing Technologies on NASA Missions
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Dunbar, Jill; Hardman, John; Bailey, F. Ron; Wheeler, Lorien; Rogers, Stuart
2012-01-01
From its bold start nearly 30 years ago and continuing today, the NASA Advanced Supercomputing (NAS) facility at Ames Research Center has enabled remarkable breakthroughs in the space agency's science and engineering missions. Throughout this time, NAS experts have influenced the state-of-the-art in high-performance computing (HPC) and related technologies such as scientific visualization, system benchmarking, batch scheduling, and grid environments. We highlight the pioneering achievements and innovations originating from and made possible by NAS resources and know-how, from early supercomputing environment design and software development, to long-term simulation and analyses critical to design safe Space Shuttle operations and associated spinoff technologies, to the highly successful Kepler Mission's discovery of new planets now capturing the world's imagination.
Reliable High Performance Peta- and Exa-Scale Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bronevetsky, G
2012-04-02
As supercomputers become larger and more powerful, they are growing increasingly complex. This is reflected both in the exponentially increasing numbers of components in HPC systems (LLNL is currently installing the 1.6 million core Sequoia system) as well as the wide variety of software and hardware components that a typical system includes. At this scale it becomes infeasible to make each component sufficiently reliable to prevent regular faults somewhere in the system or to account for all possible cross-component interactions. The resulting faults and instability cause HPC applications to crash, perform sub-optimally or even produce erroneous results. As supercomputers continue to approach Exascale performance and full system reliability becomes prohibitively expensive, we will require novel techniques to bridge the gap between the lower reliability provided by hardware systems and users' unchanging need for consistent performance and reliable results. Previous research on HPC system reliability has developed various techniques for tolerating and detecting various types of faults. However, these techniques have seen very limited real applicability because of our poor understanding of how real systems are affected by complex faults such as soft fault-induced bit flips or performance degradations. Prior work on such techniques has had very limited practical utility because it has generally focused on analyzing the behavior of entire software/hardware systems both during normal operation and in the face of faults. Because such behaviors are extremely complex, such studies have only produced coarse behavioral models of limited sets of software/hardware system stacks. Since this provides little insight into the many different system stacks and applications used in practice, this work has had little real-world impact. My project addresses this problem by developing a modular methodology to analyze the behavior of applications and systems during both normal and faulty operation. By synthesizing models of individual components into a whole-system behavior models my work is making it possible to automatically understand the behavior of arbitrary real-world systems to enable them to tolerate a wide range of system faults. My project is following a multi-pronged research strategy. Section II discusses my work on modeling the behavior of existing applications and systems. Section II.A discusses resilience in the face of soft faults and Section II.B looks at techniques to tolerate performance faults. Finally Section III presents an alternative approach that studies how a system should be designed from the ground up to make resilience natural and easy.
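As a concrete (and deliberately simple) example of the kind of fault-detection building block such research deals with, a data buffer can be guarded by a checksum so that a soft-error bit flip occurring between two checks is detectable. This generic C++ sketch only illustrates the idea; it is not the modeling methodology described in the project summary above.

```cpp
#include <cstdint>
#include <vector>

// Fletcher-style checksum over 32-bit words.
uint64_t fletcher64(const std::vector<uint32_t>& data) {
    uint64_t a = 0, b = 0;
    for (uint32_t w : data) {
        a = (a + w) % 0xFFFFFFFFull;
        b = (b + a) % 0xFFFFFFFFull;
    }
    return (b << 32) | a;
}

// Recompute and compare: a mismatch indicates the buffer was silently corrupted
// (e.g., by a bit flip) since the reference checksum was taken.
bool verify_buffer(const std::vector<uint32_t>& data, uint64_t reference) {
    return fletcher64(data) == reference;
}
```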
CDAC Student Report: Summary of LLNL Internship
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herriman, Jane E.
Multiple objectives motivated me to apply for an internship at LLNL: I wanted to experience the work environment at a national lab, to learn about research and job opportunities at LLNL in particular, and to gain greater experience with code development, particularly within the realm of high performance computing (HPC). This summer I was selected to participate in LLNL's Computational Chemistry and Material Science Summer Institute (CCMS). CCMS is a 10 week program hosted by the Quantum Simulations group leader, Dr. Eric Schwegler. CCMS connects graduate students to mentors at LLNL involved in similar research and provides weekly seminars on a broad array of topics from within chemistry and materials science. Dr. Xavier Andrade and Dr. Erik Draeger served as my co-mentors over the summer, and Dr. Andrade continues to mentor me now that CCMS has concluded. Dr. Andrade is a member of the Quantum Simulations group within the Physical and Life Sciences at LLNL, and Dr. Draeger leads the HPC group within the Center for Applied Scientific Computing (CASC). The two have worked together to develop Qb@ll, an open-source first principles molecular dynamics code that was the platform for my summer research project.
Spectral-element Seismic Wave Propagation on CUDA/OpenCL Hardware Accelerators
NASA Astrophysics Data System (ADS)
Peter, D. B.; Videau, B.; Pouget, K.; Komatitsch, D.
2015-12-01
Seismic wave propagation codes are essential tools to investigate a variety of wave phenomena in the Earth. Furthermore, they can now be used for seismic full-waveform inversions in regional- and global-scale adjoint tomography. Although these seismic wave propagation solvers are crucial ingredients to improve the resolution of tomographic images to answer important questions about the nature of Earth's internal processes and subsurface structure, their practical application is often limited due to high computational costs. They thus need high-performance computing (HPC) facilities to improve the current state of knowledge. At present, numerous large HPC systems embed many-core architectures such as graphics processing units (GPUs) to enhance numerical performance. Such hardware accelerators can be programmed using either the CUDA programming environment or the OpenCL language standard. CUDA software development targets NVIDIA graphics cards while OpenCL was adopted by additional hardware accelerators, such as AMD graphics cards, ARM-based processors and Intel Xeon Phi coprocessors. For seismic wave propagation simulations using the open-source spectral-element code package SPECFEM3D_GLOBE, we incorporated an automatic source-to-source code generation tool (BOAST) which allows us to use meta-programming of all computational kernels for forward and adjoint runs. Using our BOAST kernels, we generate optimized source code for both CUDA and OpenCL languages within the source code package. Thus, seismic wave simulations are now able to fully utilize CUDA and OpenCL hardware accelerators. We show benchmarks of forward seismic wave propagation simulations using SPECFEM3D_GLOBE on CUDA/OpenCL GPUs, validating results and comparing performances for different simulations and hardware usages.
Circadian time-place (or time-route) learning in rats with hippocampal lesions.
Cole, Emily; Mistlberger, Ralph E; Merza, Devon; Trigiani, Lianne J; Madularu, Dan; Simundic, Amanda; Mumby, Dave G
2016-12-01
Circadian time-place learning (TPL) is the ability to remember both the place and biological time of day that a significant event occurred (e.g., food availability). This ability requires that a circadian clock provide phase information (a time tag) to cognitive systems involved in linking representations of an event with spatial reference memory. To date, it is unclear which neuronal substrates are critical in this process, but one candidate structure is the hippocampus (HPC). The HPC is essential for normal performance on tasks that require allocentric spatial memory and exhibits circadian rhythms of gene expression that are sensitive to meal timing. Using a novel TPL training procedure and enriched, multidimensional environment, we trained rats to locate a food reward that varied between two locations relative to time of day. After rats acquired the task, they received either HPC or SHAM lesions and were re-tested. Rats with HPC lesions were initially impaired on the task relative to SHAM rats, but re-attained high scores with continued testing. Probe tests revealed that the rats were not using an alternation strategy or relying on light-dark transitions to locate the food reward. We hypothesize that transient disruption and recovery reflect a switch from HPC-dependent allocentric navigation (learning places) to dorsal striatum-dependent egocentric spatial navigation (learning routes to a location). Whatever the navigation strategy, these results demonstrate that the HPC is not required for rats to find food in different locations using circadian phase as a discriminative cue. Copyright © 2016 Elsevier Inc. All rights reserved.
Context memory formation requires activity-dependent protein degradation in the hippocampus.
Cullen, Patrick K; Ferrara, Nicole C; Pullins, Shane E; Helmstetter, Fred J
2017-11-01
Numerous studies have indicated that the consolidation of contextual fear memories supported by an aversive outcome like footshock requires de novo protein synthesis as well as protein degradation mediated by the ubiquitin-proteasome system (UPS). Context memory formed in the absence of an aversive stimulus by simple exposure to a novel environment requires de novo protein synthesis in both the dorsal (dHPC) and ventral (vHPC) hippocampus. However, the role of UPS-mediated protein degradation in the consolidation of context memory in the absence of a strong aversive stimulus has not been investigated. In the present study, we used the context preexposure facilitation effect (CPFE) procedure, which allows for the dissociation of context learning from context-shock learning, to investigate the role of activity-dependent protein degradation in the dHPC and vHPC during the formation of a context memory. We report that blocking protein degradation with the proteasome inhibitor clasto-lactacystin β-lactone (βLac) or blocking protein synthesis with anisomycin (ANI) immediately after context preexposure significantly impaired context memory formation. Additionally, we examined 20S proteasome activity at different time points following context exposure and saw that the activity of proteasomes in the dHPC increases immediately after stimulus exposure while the vHPC exhibits a biphasic pattern of proteolytic activity. Taken together, these data suggest that the requirement of increased proteolysis during memory consolidation is not driven by processes triggered by the strong aversive outcome (i.e., shock) normally used to support fear conditioning. © 2017 Cullen et al.; Published by Cold Spring Harbor Laboratory Press.
Frequency-specific hippocampal-prefrontal interactions during associative learning
Brincat, Scott L.; Miller, Earl K.
2015-01-01
Much of our knowledge of the world depends on learning associations (e.g., face-name), for which the hippocampus (HPC) and prefrontal cortex (PFC) are critical. HPC-PFC interactions have rarely been studied in monkeys, whose cognitive/mnemonic abilities are akin to humans. Here, we show functional differences and frequency-specific interactions between HPC and PFC of monkeys learning object-pair associations, an animal model of human explicit memory. PFC spiking activity reflected learning in parallel with behavioral performance, while HPC neurons reflected feedback about whether trial-and-error guesses were correct or incorrect. Theta-band HPC-PFC synchrony was stronger after errors, was driven primarily by PFC to HPC directional influences, and decreased with learning. In contrast, alpha/beta-band synchrony was stronger after correct trials, was driven more by HPC, and increased with learning. Rapid object associative learning may occur in PFC, while HPC may guide neocortical plasticity by signaling success or failure via oscillatory synchrony in different frequency bands. PMID:25706471
Costa - Introduction to 2015 Annual Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Costa, James E.
In parallel with Sandia National Laboratories having two major locations (NM and CA), along with a number of smaller facilities across the nation, so too is the distribution of scientific, engineering and computing resources. As a part of Sandia’s Institutional Computing Program, CA site-based Sandia computer scientists and engineers have been providing mission and research staff with local CA resident expertise on computing options while also focusing on two growing high performance computing research problems. The first is how to increase system resilience to failure, as machines grow larger, more complex and heterogeneous. The second is how to ensure that computer hardware and configurations are optimized for specialized data analytical mission needs within the overall Sandia computing environment, including the HPC subenvironment. All of these activities support the larger Sandia effort in accelerating development and integration of high performance computing into national security missions. Sandia continues to both promote national R&D objectives, including the recent Presidential Executive Order establishing the National Strategic Computing Initiative and work to ensure that the full range of computing services and capabilities are available for all mission responsibilities, from national security to energy to homeland defense.
Status report of the end-to-end ASKAP software system: towards early science operations
NASA Astrophysics Data System (ADS)
Guzman, Juan Carlos; Chapman, Jessica; Marquarding, Malte; Whiting, Matthew
2016-08-01
The Australian SKA Pathfinder (ASKAP) is a novel centimetre radio synthesis telescope currently in the commissioning phase and located in the midwest region of Western Australia. It comprises 36 × 12 m diameter reflector antennas, each equipped with state-of-the-art, award-winning Phased Array Feed (PAF) technology. The PAFs provide a wide, 30 square degree field-of-view by forming up to 36 separate dual-polarisation beams at once. This results in a high data rate: 70 TB of correlated visibilities in an 8-hour observation, requiring custom-written, high-performance software running in dedicated High Performance Computing (HPC) facilities. The first six antennas equipped with first-generation PAF technology (Mark I), named the Boolardy Engineering Test Array (BETA), have been in use since 2014 as a platform to test PAF calibration and imaging techniques, and along the way it has been producing some great science results. Commissioning of ASKAP Array Release 1, that is, the first six antennas with second-generation PAFs (Mark II), is currently under way. An integral part of the instrument is the Central Processor platform hosted at the Pawsey Supercomputing Centre in Perth, which executes custom-written software pipelines, designed specifically to meet the ASKAP imaging requirements of wide field of view and high dynamic range. There are three key hardware components of the Central Processor: the ingest nodes (a 16-node cluster), the fast temporary storage (1 PB Lustre file system) and the processing supercomputer (200 TFlop system). This High-Performance Computing (HPC) platform is managed and supported by the Pawsey support team. Due to the limited amount of data generated by BETA and the first ASKAP Array Release, the Central Processor platform has been running in a more "traditional" or user-interactive mode. But this is about to change: integration and verification of the online ingest pipeline starts in early 2016, which is required to support the full 300 MHz bandwidth for Array Release 1, followed by the deployment of the real-time data processing components. In addition to the Central Processor, the first production release of the CSIRO ASKAP Science Data Archive (CASDA) has also been deployed in one of the Pawsey Supercomputing Centre facilities and is integrated into the end-to-end ASKAP data flow system. This paper describes the current status of the "end-to-end" data flow software system from preparing observations to data acquisition, processing and archiving; and the challenges of integrating an HPC facility as a key part of the instrument. It also shares some lessons learned since the start of integration activities and the challenges ahead in preparation for the start of the Early Science program.
1995-01-01
… possible to determine communication points. For this version, a C program spawning POSIX threads and using semaphores to synchronize would have to … performance such as the time required for network communication and synchronization, as well as issues of asynchrony and memory hierarchy. For example … enhances reusability. Process (or task) parallel computations can also be succinctly expressed with a small set of process creation and synchronization …
Implementation of NAS Parallel Benchmarks in Java
NASA Technical Reports Server (NTRS)
Frumkin, Michael; Schultz, Matthew; Jin, Hao-Qiang; Yan, Jerry
2000-01-01
A number of features make Java an attractive but debatable choice for High Performance Computing (HPC). In order to gauge the applicability of Java to Computational Fluid Dynamics (CFD), we have implemented the NAS Parallel Benchmarks in Java. The performance and scalability of the benchmarks point out the areas where improvements in Java compiler technology and in Java thread implementation would move Java closer to Fortran in the competition for CFD applications.
Contamination Control of Freeze Shoe Coring System for Collection of Aquifer Sands
NASA Astrophysics Data System (ADS)
Homola, K.; van Geen, A.; Spivack, A. J.; Grzybowski, B.; Schlottenmier, D.
2017-12-01
We have developed and tested an original device, the freeze-shoe coring system, designed to recover undisturbed samples of water contained in sand-dominated aquifers. Aquifer sands are notoriously difficult to collect together with porewater from coincident depths, as high hydraulic permeability leads to water drainage and mixing during retrieval. Two existing corer designs were reconfigured to incorporate the freeze-shoe system: a Hydraulic Piston Corer (HPC) and a Rotary Corer (RC). Once deployed, liquid CO2 contained in an interior tank is channeled to coils at the core head, where it changes phase, rapidly cooling the deepest portion of the core. The resulting frozen core material impedes water loss during recovery. We conducted contamination tests to examine the integrity of cores retrieved during a March 2017 yard test deployment. Perfluorocarbon tracer (PFC) was added to the drill fluid, and recovered cores were subsampled to capture the distribution of PFC throughout the core length and interior. Samples were collected from two HPC cores and one RC core and analyzed for PFC concentrations. The lowest porewater contamination, around 0.01% invasive fluid, occurs in the center of both HPC cores. The greatest contamination (up to 10%) occurs at the disturbed edges where core material contacts drill fluid. There was lower contamination in the core interior than at the top, bottom, and edges, as well as significantly lower contamination in HPC cores than in those recovered with the RC. These results confirm that the freeze-shoe system, proposed for field test deployments in West Bengal, India, can successfully collect intact porewater and sediment material with minimal, if any, contamination from drill fluid.
Bu, Xiangning; Zhang, Nan; Yang, Xuan; Liu, Yanyan; Du, Jianli; Liang, Jing; Xu, Qunyuan; Li, Junfa
2011-04-01
Hypoxic preconditioning (HPC) initiates intracellular signaling pathways to provide protection against subsequent cerebral ischemic injuries, and its mechanism may provide molecular targets for therapy in stroke. Building on our previous study of conventional protein kinase C βII (cPKCβII) activation in HPC, this study determined the role of cPKCβII in HPC-induced neuroprotection and identified its interacting proteins. The autohypoxia-induced HPC and middle cerebral artery occlusion (MCAO)-induced cerebral ischemia mouse models were prepared as reported. We found that HPC reduced 6 h MCAO-induced neurological deficits, infarct volume, edema ratio and cell apoptosis in the peri-infarct region (penumbra), but the cPKCβII inhibitors Go6983 and LY333531 blocked HPC-induced neuroprotection. Proteomic analysis revealed that the expression of four proteins in the cytosol and eight proteins in the particulate fraction changed significantly among 49 identified cPKCβII-interacting proteins in the cortex of HPC mice. In addition, HPC could inhibit the decrease of phosphorylated collapsin response mediator protein-2 (CRMP-2) levels and the increase of CRMP-2 breakdown product. TAT-CRMP-2 peptide, which prevents the cleavage of endogenous CRMP-2, could inhibit CRMP-2 dephosphorylation and proteolysis as well as the infarct volume of 6 h MCAO mice. This study is the first to report multiple cPKCβII-interacting proteins in the HPC mouse brain and the role of cPKCβII-CRMP-2 in HPC-induced neuroprotection against early stages of ischemic injury in mice. © 2011 The Authors. Journal of Neurochemistry © 2011 International Society for Neurochemistry.
NASA Astrophysics Data System (ADS)
Christou, Michalis; Christoudias, Theodoros; Morillo, Julián; Alvarez, Damian; Merx, Hendrik
2016-09-01
We examine an alternative approach to heterogeneous cluster-computing in the many-core era for Earth system models, using the European Centre for Medium-Range Weather Forecasts Hamburg (ECHAM)/Modular Earth Submodel System (MESSy) Atmospheric Chemistry (EMAC) model as a pilot application on the Dynamical Exascale Entry Platform (DEEP). A set of autonomous, interconnected coprocessors, called the Booster, complements a conventional HPC cluster and increases its computing performance, offering extra flexibility to expose multiple levels of parallelism and achieve better scalability. The EMAC atmospheric chemistry code (Module Efficiently Calculating the Chemistry of the Atmosphere (MECCA)) was taskified with an offload mechanism implemented using OmpSs directives. The model was ported to the MareNostrum 3 supercomputer to allow testing with Intel Xeon Phi accelerators on a production-size machine. The changes proposed in this paper are expected to contribute to the eventual adoption of the Cluster-Booster division and Many Integrated Core (MIC) accelerated architectures in presently available implementations of Earth system models, towards exploiting the potential of a fully Exascale-capable platform.
Verification and Validation of COAMPS: Results from a Fully-Coupled Air/Sea/Wave Modeling System
NASA Astrophysics Data System (ADS)
Smith, T.; Allard, R. A.; Campbell, T. J.; Chu, Y. P.; Dykes, J.; Zamudio, L.; Chen, S.; Gabersek, S.
2016-02-01
The Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS) is a state-of-the-art, fully-coupled air/sea/wave modeling system that is currently being validated for operational transition to both the Naval Oceanographic Office (NAVO) and the Fleet Numerical Meteorology and Oceanography Center (FNMOC). COAMPS is run at the Department of Defense Supercomputing Resource Center (DSRC) operated by the DoD High Performance Computing Modernization Program (HPCMP). A total of four models, including the Naval Coastal Ocean Model (NCOM), Simulating Waves Nearshore (SWAN), WaveWatch III, and the COAMPS atmospheric model, are coupled through the Earth System Modeling Framework (ESMF). Results from regions of naval operational interest, including the Western Atlantic (U.S. East Coast), RIMPAC (Hawaii), and DYNAMO (Indian Ocean), will show the advantages of utilizing a coupled modeling system versus an uncoupled or stand-alone model. Statistical analyses, which include model/observation comparisons, will be presented in the form of operationally approved scorecards for both the atmospheric and oceanic output. Also, computational logistics involving the HPC resources for the COAMPS simulations will be shown.
Measurements of file transfer rates over dedicated long-haul connections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S; Settlemyer, Bradley W; Imam, Neena
2016-01-01
Wide-area file transfers are an integral part of several High-Performance Computing (HPC) scenarios. Dedicated network connections with high capacity, low loss rates and low competing traffic are increasingly being provisioned over current HPC infrastructures to support such transfers. To gain insight into these file transfers, we collected transfer rate measurements for Lustre and xfs file systems between dedicated multi-core servers over emulated 10 Gbps connections with round trip times (RTT) in the 0-366 ms range. Memory transfer throughput over these connections is measured using iperf, and file IO throughput on the host systems is measured using xddprof. We consider two file system configurations: Lustre over an IB network, and xfs over SSD connected to the PCI bus. Files are transferred using xdd across these connections and the transfer rates are measured, which indicates the need to jointly optimize the connection and host file IO parameters to achieve peak transfer rates. In particular, these measurements indicate that (i) the peak file transfer rate is lower than the peak connection and host IO throughput, in some cases reaching only 50% of peak or less, (ii) xdd request sizes that achieve peak throughput for host file IO do not necessarily lead to peak file transfer rates, and (iii) parallelism in host IO and TCP transport does not always improve the file transfer rates.
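The coupling between link capacity and round-trip time that drives these results can be made concrete with the bandwidth-delay product, the amount of data that must be in flight (and hence buffered) to keep a connection full. The minimal sketch below uses the 10 Gbps capacity quoted above; apart from the 0 and 366 ms endpoints, the RTT values are illustrative assumptions.

    # Bandwidth-delay product (BDP) for the emulated connections described above.
    # The BDP is the minimum amount of unacknowledged data that must be in flight
    # to keep the link full, and hence a lower bound on the TCP buffer size.
    link_gbps = 10.0                                  # emulated connection capacity
    for rtt_ms in (0.4, 11.6, 45.6, 183.0, 366.0):    # illustrative RTTs within the stated 0-366 ms range
        bdp_bytes = link_gbps * 1e9 / 8 * (rtt_ms / 1e3)
        print(f"rtt = {rtt_ms:6.1f} ms -> BDP = {bdp_bytes / 1e6:8.1f} MB")

At the longest emulated RTT the in-flight data requirement approaches half a gigabyte per stream, which is one illustration of why connection and host IO parameters have to be tuned jointly.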
NASA Astrophysics Data System (ADS)
Kjærgaard, Thomas; Baudin, Pablo; Bykov, Dmytro; Eriksen, Janus Juul; Ettenhuber, Patrick; Kristensen, Kasper; Larkin, Jeff; Liakh, Dmitry; Pawłowski, Filip; Vose, Aaron; Wang, Yang Min; Jørgensen, Poul
2017-03-01
We present a scalable cross-platform hybrid MPI/OpenMP/OpenACC implementation of the Divide-Expand-Consolidate (DEC) formalism with portable performance on heterogeneous HPC architectures. The Divide-Expand-Consolidate formalism is designed to reduce the steep computational scaling of conventional many-body methods employed in electronic structure theory to linear scaling, while providing a simple mechanism for controlling the error introduced by this approximation. Our massively parallel implementation of this general scheme has three levels of parallelism, being a hybrid of the loosely coupled task-based parallelization approach and the conventional MPI +X programming model, where X is either OpenMP or OpenACC. We demonstrate strong and weak scalability of this implementation on heterogeneous HPC systems, namely on the GPU-based Cray XK7 Titan supercomputer at the Oak Ridge National Laboratory. Using the "resolution of the identity second-order Møller-Plesset perturbation theory" (RI-MP2) as the physical model for simulating correlated electron motion, the linear-scaling DEC implementation is applied to 1-aza-adamantane-trione (AAT) supramolecular wires containing up to 40 monomers (2440 atoms, 6800 correlated electrons, 24 440 basis functions and 91 280 auxiliary functions). This represents the largest molecular system treated at the MP2 level of theory, demonstrating an efficient removal of the scaling wall pertinent to conventional quantum many-body methods.
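The outer, loosely coupled level of such an MPI+X scheme can be illustrated with a minimal master-worker task distribution in Python using mpi4py. This is a generic sketch of the pattern, not the DEC implementation; the fragment list and "energy" values are placeholders, and at least two MPI ranks are assumed.

    # Minimal master-worker task distribution, illustrating the outer, loosely coupled
    # MPI level of an MPI+X scheme; in a real code the per-task work would be threaded
    # (OpenMP) or offloaded (OpenACC/GPU). Requires mpi4py and >= 2 ranks.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    TASK, STOP = 1, 2                        # message tags

    if rank == 0:                            # master: hand out fragment indices on demand
        tasks = list(range(100))             # e.g. one task per orbital fragment (placeholder)
        results, active = [], size - 1
        while active:
            status = MPI.Status()
            data = comm.recv(source=MPI.ANY_SOURCE, tag=MPI.ANY_TAG, status=status)
            if data is not None:
                results.append(data)
            if tasks:
                comm.send(tasks.pop(), dest=status.Get_source(), tag=TASK)
            else:
                comm.send(None, dest=status.Get_source(), tag=STOP)
                active -= 1
        print("collected", len(results), "fragment results")
    else:                                    # workers: request work until told to stop
        comm.send(None, dest=0)              # initial "ready" message
        while True:
            status = MPI.Status()
            task = comm.recv(source=0, tag=MPI.ANY_TAG, status=status)
            if status.Get_tag() == STOP:
                break
            comm.send((task, task * 0.1), dest=0)   # placeholder "fragment energy"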
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kang, Shujiang; Kline, Keith L; Nair, S. Surendran
A global energy crop productivity model that provides geospatially explicit quantitative details on biomass potential and factors affecting sustainability would be useful, but does not exist now. This study describes a modeling platform capable of meeting many challenges associated with global-scale agro-ecosystem modeling. We designed an analytical framework for bioenergy crops consisting of six major components: (i) standardized natural resources datasets, (ii) global field-trial data and crop management practices, (iii) simulation units and management scenarios, (iv) model calibration and validation, (v) high-performance computing (HPC) simulation, and (vi) simulation output processing and analysis. The HPC-Environmental Policy Integrated Climate (HPC-EPIC) model simulated a perennial bioenergy crop, switchgrass (Panicum virgatum L.), estimating feedstock production potentials and effects across the globe. This modeling platform can assess soil C sequestration, net greenhouse gas (GHG) emissions, nonpoint source pollution (e.g., nutrient and pesticide loss), and energy exchange with the atmosphere. It can be expanded to include additional bioenergy crops (e.g., miscanthus, energy cane, and agave) and food crops under different management scenarios. The platform and switchgrass field-trial dataset are available to support global analysis of biomass feedstock production potential and corresponding metrics of sustainability.
Kinslow, Connor J; Rajpara, Raj S; Wu, Cheng-Chia; Bruce, Samuel S; Canoll, Peter D; Wang, Shih-Hsiu; Sonabend, Adam M; Sheth, Sameer A; McKhann, Guy M; Sisti, Michael B; Bruce, Jeffrey N; Wang, Tony J C
2017-06-01
Meningeal hemangiopericytoma (m-HPC) is a rare tumor of the central nervous system (CNS), which is distinguished clinically from meningioma by its tendency to recur and metastasize. The histological classification and grading scheme for m-HPC is still evolving and few studies have identified tumor features that are associated with metastasis. All patients at our institution with m-HPC were assessed for patient, tumor, and treatment characteristics associated with survival, recurrence, and metastasis. New findings were validated using the SEER database. Twenty-seven patients were identified in our institutional records with m-HPC with a median follow-up time of 85 months. Invasiveness was the strongest predictor of decreased overall survival (OS) and decreased metastasis-free survival (MFS) (p = 0.004 and 0.001). On subgroup analysis, bone invasion trended towards decreased OS (p = 0.056). Bone invasion and soft tissue invasion were significantly associated with decreased MFS (p = 0.001 and 0.012). An additional 315 patients with m-HPC were identified in the SEER database that had information on tumor invasion and 263 with information on distant metastasis. Invasion was significantly associated with decreased survival (HR = 5.769, p = 0.007) and metastasis (OR 134, p = 0.000) in the SEER data. In this study, the authors identified a previously unreported tumor characteristic, invasiveness, as the strongest factor associated with decreased survival and metastasis. The association of invasion with decreased survival and metastasis was confirmed in a separate, larger, publicly available database. Invasion may be a useful parameter in the histological grading and clinical management of hemangiopericytoma of the CNS.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-27
... removal from service of the fully silver plated nuts attaching the HPC stage 3 to 8 drum to the HPC stage 9 to 12 drum, removal of silver residue ...
On the Large-Scaling Issues of Cloud-based Applications for Earth Science Data
NASA Astrophysics Data System (ADS)
Hua, H.
2016-12-01
Next generation science data systems are needed to address the incoming flood of data from new missions such as NASA's SWOT and NISAR, whose SAR data volumes and data throughput rates are orders of magnitude larger than those of present-day missions. Existing missions, such as OCO-2, may also require fast turn-around times for processing different science scenarios, where on-premise and even traditional HPC computing environments may not meet the processing needs. Additionally, traditional means of procuring hardware on-premise are already limited due to facilities capacity constraints for these new missions. Experience has shown that embracing efficient cloud computing approaches for large-scale science data systems requires more than just moving existing code to cloud environments. At large cloud scales, we need to deal with scaling and cost issues. We present our experiences deploying multiple instances of our hybrid-cloud computing science data system (HySDS) to support large-scale processing of Earth Science data products. We will explore optimization approaches to getting the best performance out of hybrid-cloud computing, as well as common issues that arise when dealing with large-scale computing. Novel approaches were utilized to do processing on Amazon's spot market, which can potentially offer 75%-90% cost savings but with an unpredictable computing environment driven by market forces.
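The quoted 75%-90% savings can be illustrated with a simple cost comparison. All prices and the interruption overhead below are hypothetical placeholders, not actual Amazon rates, and a real spot-market strategy also needs checkpointing to tolerate reclaimed instances.

    # Illustrative cost comparison between on-demand and spot pricing for a large
    # reprocessing campaign. All prices and the interruption overhead are hypothetical.
    node_hours = 50_000                  # total compute needed for the campaign
    on_demand_price = 1.00               # $/node-hour (placeholder)
    spot_price = 0.15                    # $/node-hour (placeholder, market dependent)
    interruption_overhead = 0.10         # 10% extra node-hours lost to reclamation/restarts

    on_demand_cost = node_hours * on_demand_price
    spot_cost = node_hours * (1 + interruption_overhead) * spot_price
    savings = 1 - spot_cost / on_demand_cost
    print(f"on-demand: ${on_demand_cost:,.0f}  spot: ${spot_cost:,.0f}  savings: {savings:.0%}")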
NASA Astrophysics Data System (ADS)
Farias, Barbara; Hsiao, Lilian; Khan, Saad
Oil-in-water emulsions with polymers are widely used in personal care products. Since the accumulation of traditional surfactants on the skin can promote irritation, an alternative is the use of hydrogenated phosphatidylcholine (HPC), a phospholipid that can form a lamellar structure similar to the skin barrier. This research aims to investigate the effect of composition on the rheological and tribological characteristics of complex systems containing HPC. For tribology experiments we used soft model contacts made of polydimethylsiloxane (PDMS), while for bulk rheology we used dynamic and steady shear experiments. We examine how the addition of polymer, HPC and oil affects friction coefficients, lubrication regimes, viscoelasticity, yield stress, and gel formation. The bulk rheology shows that the studied systems are shear thinning and have gel-like behavior. The effect of each component was investigated by going from simple to more complex systems. The Stribeck curves obtained are related to the bulk rheology results to obtain physical insight into these complex systems. The results suggest that the polymer and phospholipids adsorb onto the PDMS surface, reducing the friction coefficient at lower entrainment speeds.
Self-desiccation mechanism of high-performance concrete.
Yang, Quan-Bing; Zhang, Shu-Qing
2004-12-01
Investigations of the effects of W/C ratio and silica fume on the autogenous shrinkage and internal relative humidity of high-performance concrete (HPC), together with analysis of the self-desiccation mechanisms of HPC, showed that as the W/C ratio is reduced the autogenous shrinkage of HPC increases and its internal relative humidity decreases, and that both effects are amplified by the addition of silica fume. Theoretical analyses indicated that the reduction of RH in HPC was not due to a shortage of water, but to the fact that the evaporable water in HPC cannot evaporate freely. The reduction of internal relative humidity, the so-called self-desiccation of HPC, was chiefly caused by the increase in the mole concentration of soluble ions in HPC and by the reduction of pore size, i.e., the increase in the fraction of micro-pore water in the total evaporable water (T(r)/T(te) ratio).
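The pore-size and ion-concentration arguments above can be made quantitative with two standard relations, quoted here as background rather than as equations from the paper: the Kelvin equation for the relative humidity in equilibrium with a meniscus of radius r, and Raoult's law for the lowering caused by dissolved ions,

    \ln\!\left(\frac{RH}{100\%}\right) \;=\; -\,\frac{2\,\gamma\,V_m\,\cos\theta}{r\,R\,T}
    \qquad\text{and}\qquad
    \frac{RH}{100\%} \;\approx\; x_w ,

where \gamma is the surface tension of the pore solution, V_m the molar volume of water, \theta the contact angle, R the gas constant, T the temperature, and x_w the mole fraction of water in the pore solution. Both a smaller meniscus radius r and a lower x_w (i.e., a higher soluble-ion concentration) depress the internal RH, consistent with the mechanism described above.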
Scalable and Power Efficient Data Analytics for Hybrid Exascale Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choudhary, Alok; Samatova, Nagiza; Wu, Kesheng
This project developed a generic and optimized set of core data analytics functions. These functions organically consolidate a broad constellation of high performance analytical pipelines. As the architectures of emerging HPC systems become inherently heterogeneous, there is a need to design algorithms for data analysis kernels accelerated on hybrid multi-node, multi-core HPC architectures composed of a mix of CPUs, GPUs, and SSDs. Furthermore, the power-aware trend drives advances in our performance-energy tradeoff analysis framework, which enables our data analysis kernel algorithms and software to be parameterized so that users can choose the right power-performance optimizations.
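One common way to score such a power-performance tradeoff is by energy-delay product; the sketch below ranks candidate kernel configurations that way. The configurations, runtimes and power draws are hypothetical placeholders, and this is a generic illustration of the metric, not the project's framework.

    # Choosing a kernel configuration by energy-delay product (EDP), a common
    # power-performance tradeoff metric. All numbers below are hypothetical.
    configs = {
        "cpu_only":      {"time_s": 120.0, "power_w": 300.0},
        "cpu_plus_gpu":  {"time_s":  40.0, "power_w": 650.0},
        "gpu_low_clock": {"time_s":  55.0, "power_w": 450.0},
    }

    def edp(cfg):
        energy_j = cfg["time_s"] * cfg["power_w"]   # energy = average power x time
        return energy_j * cfg["time_s"]             # EDP = energy x delay

    best = min(configs, key=lambda name: edp(configs[name]))
    for name, cfg in configs.items():
        print(f"{name:14s} energy = {cfg['time_s'] * cfg['power_w'] / 1e3:6.1f} kJ  EDP = {edp(cfg):,.0f}")
    print("lowest energy-delay product:", best)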
Exposure to high ambient temperatures alters embryology in rabbits
NASA Astrophysics Data System (ADS)
García, M. L.; Argente, M. J.
2017-09-01
High ambient temperatures are a determining factor in the deterioration of embryo quality and survival in mammals. The aim of this study was to evaluate the effect of heat stress on embryo development, embryonic size and size of the embryonic coats in rabbits. A total of 310 embryos from 33 females in thermal comfort zone and 264 embryos of 28 females in heat stress conditions were used in the experiment. The traits studied were ovulation rate, percentage of total embryos, percentage of normal embryos, embryo area, zona pellucida thickness and mucin coat thickness. Traits were measured at 24 and 48 h post-coitum (hpc); mucin coat thickness was only measured at 48 hpc. The embryos were classified as zygotes or two-cell embryos at 24 hpc, and 16-cells or early morulae at 48 hpc. The ovulation rate was one oocyte lower in heat stress conditions than in thermal comfort. Percentage of normal embryos was lower in heat stress conditions at 24 hpc (17.2%) and 48 hpc (13.2%). No differences in percentage of zygotes or two-cell embryos were found at 24 hpc. The embryo development and area was affected by heat stress at 48 hpc (10% higher percentage of 16-cells and 883 μm2 smaller, respectively). Zona pellucida was thicker under thermal stress at 24 hpc (1.2 μm) and 48 hpc (1.5 μm). No differences in mucin coat thickness were found. In conclusion, heat stress appears to alter embryology in rabbits.
An Optimizing Compiler for Petascale I/O on Leadership Class Architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choudhary, Alok; Kandemir, Mahmut
In high-performance computing systems, parallel I/O architectures usually have very complex hierarchies with multiple layers that collectively constitute an I/O stack, including high-level I/O libraries such as PnetCDF and HDF5, I/O middleware such as MPI-IO, and parallel file systems such as PVFS and Lustre. Our project explored automated instrumentation and compiler support for I/O-intensive applications. The project made significant progress towards understanding the complex I/O hierarchies of high-performance storage systems (including storage caches, HDDs, and SSDs), and towards designing and implementing state-of-the-art compiler/runtime technology for I/O-intensive HPC applications running on leadership-class machines. This final report summarizes the major achievements of the project and points out promising future directions.
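At the top of the I/O stack described here sits the high-level library layer, which is all an application typically touches directly; the sketch below shows that layer through HDF5 via h5py. This is a serial illustration only: parallel HDF5 would additionally require an MPI-enabled build and the mpio file driver, and the file and dataset names are hypothetical.

    # Minimal HDF5 write/read through the high-level library layer of the I/O stack.
    # In an HPC setting the same calls would sit on top of MPI-IO and a parallel file
    # system such as Lustre or PVFS; this serial sketch only illustrates the API level.
    import numpy as np
    import h5py

    field = np.random.rand(128, 128)                 # stand-in for a simulation field

    with h5py.File("checkpoint.h5", "w") as f:       # hypothetical file name
        dset = f.create_dataset("pressure", data=field, chunks=(32, 32),
                                compression="gzip")  # chunking/compression are library-level choices
        dset.attrs["timestep"] = 42

    with h5py.File("checkpoint.h5", "r") as f:
        restored = f["pressure"][...]
        assert restored.shape == (128, 128)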
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aguilo Valentin, Miguel Alejandro; Trujillo, Susie
During calendar year 2017, Sandia National Laboratories (SNL) made strides towards developing an open, portable design platform rich in high-performance computing (HPC) enabled modeling, analysis and synthesis tools. The main focus was to lay the foundations of the core interfaces that will enable plug-and-play insertion of synthesis optimization technologies in the areas of modeling, analysis and synthesis.
Data Retention Policy | High-Performance Computing | NREL
HPC Data Retention Policy. The user-centric retention period for file storage areas on Peregrine and Gyrfalcon is 3 months after the last project ends; during this retention period, the user may log in to reclaim storage. Special arrangements can be made for permanent storage, if needed.
Peregrine Software Toolchains | High-Performance Computing | NREL
An open-source toolchain is available as the alternative against which many technical applications are natively developed and tested. The Portland Group (PGI) C/C++ and Fortran compilers are partially supported rather than fully supported, but are available to the HPC community; the PGI Accelerator compilers include NVIDIA GPU support.
Flesher, M. Melissa; Butt, Allen E.; Kinney-Hurd, Brandee L.
2011-01-01
Pavlovian trace conditioning critically depends on the medial prefrontal cortex (mPFC) and hippocampus (HPC), whereas delay conditioning does not depend on these brain structures. Given that the cholinergic basal forebrain system modulates activity in both the mPFC and HPC, it was reasoned that the level of acetylcholine (ACh) release in these regions would show distinct profiles during testing in trace and delay conditioning paradigms. To test this assumption, microdialysis probes were implanted unilaterally into the mPFC and HPC of rats that were pre-trained in appetitive trace and delay conditioning paradigms using different conditional stimuli in the two tasks. On the day of microdialysis testing, dialysate samples were collected during a quiet baseline interval before trials were initiated, and again during performance in separate blocks of trace and delay conditioning trials in each animal. ACh levels were quantified using high performance liquid chromatography and electrochemical detection techniques. Consistent with our hypothesis, results showed that ACh release in the mPFC was greater during trace conditioning than during delay conditioning. The level of ACh released during trace conditioning in the HPC was also greater than the levels observed during delay conditioning. While ACh efflux in both the mPFC and HPC selectively increased during trace conditioning, ACh levels in the mPFC during trace conditioning testing showed the greatest increases observed. These results demonstrate a dissociation in cholinergic activation of the mPFC and HPC during performance in trace but not delay appetitive conditioning, where this cholinergic activity may contribute to attentional mechanisms, adaptive response timing, or memory consolidation necessary for successful trace conditioning. PMID:21514394
NASA Astrophysics Data System (ADS)
Gutzwiller, David; Gontier, Mathieu; Demeulenaere, Alain
2014-11-01
Multi-block structured solvers hold many advantages over their unstructured counterparts, such as a smaller memory footprint and efficient serial performance. Historically, multi-block structured solvers have not been easily adapted for use in a High Performance Computing (HPC) environment, and the recent trend towards hybrid GPU/CPU architectures has further complicated the situation. This paper will elaborate on developments and innovations applied to the NUMECA FINE/Turbo solver that have allowed near-linear scalability with real-world problems on over 250 hybrid GPU/CPU cluster nodes. Discussion will focus on the implementation of virtual partitioning and load balancing algorithms using a novel meta-block concept. This implementation is transparent to the user, allowing all pre- and post-processing steps to be performed using a simple, unpartitioned grid topology. Additional discussion will elaborate on developments that have improved parallel performance, including fully parallel I/O with the ADIOS API and the GPU porting of the computationally heavy CPUBooster convergence acceleration module. Head of HPC and Release Management, Numeca International.
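The virtual partitioning and load-balancing idea can be illustrated with a simple greedy assignment of blocks to ranks by cell count. This is a generic sketch of the concept, not NUMECA's meta-block algorithm, and the block sizes are made up for illustration.

    # Greedy load balancing of structured blocks across ranks by cell count.
    # In the paper's scheme, oversized blocks would first be split into virtual
    # sub-blocks ("meta-blocks"); here the block sizes are simply illustrative.
    import heapq

    block_cells = [4_000_000, 2_500_000, 1_200_000, 900_000, 900_000, 600_000, 300_000]
    n_ranks = 4

    # Min-heap of (assigned cells, rank id); always give the next-largest block
    # to the currently least-loaded rank.
    heap = [(0, r) for r in range(n_ranks)]
    heapq.heapify(heap)
    assignment = {r: [] for r in range(n_ranks)}
    for cells in sorted(block_cells, reverse=True):
        load, rank = heapq.heappop(heap)
        assignment[rank].append(cells)
        heapq.heappush(heap, (load + cells, rank))

    for rank, blocks in assignment.items():
        print(f"rank {rank}: {sum(blocks):,} cells in {len(blocks)} blocks")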
2015-09-01
Data-dictionary fields recoverable from this record: Mac8, the Medium Access Control (MAC, Ethernet) address observed as the destination for outgoing packets; subsessionid8, a zero-based index of ...; cts_deid, the device (instrument) identifier where the observation took place; cts_collpt, the collection point or logical observation point on ... Subject terms: tactical networks, data reduction, high-performance computing, data analysis, big data.
Diversity in computing technologies and strategies for dynamic resource allocation
Garzoglio, G.; Gutsche, O.
2015-12-23
High Energy Physics (HEP) is a very data-intensive and trivially parallelizable science discipline. HEP is probing nature at increasingly finer detail, requiring ever-increasing computational resources to process and analyze experimental data. In this paper, we discuss how HEP has provisioned resources so far using Grid technologies, how HEP is starting to include new resource providers like commercial Clouds and HPC installations, and how HEP is transparently provisioning resources at these diverse providers.
Programming for 1.6 Million cores: Early experiences with IBM's BG/Q SMP architecture
NASA Astrophysics Data System (ADS)
Glosli, James
2013-03-01
With the stall in clock frequency improvements a decade ago, the drive for computational performance has continued along a path of increasing core counts per processor. The multi-core evolution has been expressed both in symmetric multiprocessor (SMP) architectures and in CPU/GPU architectures. Debates rage in the high performance computing (HPC) community over which architecture best serves HPC. In this talk I will not attempt to resolve that debate but perhaps fuel it. I will discuss the experience of exploiting Sequoia, a 98304-node IBM Blue Gene/Q SMP at Lawrence Livermore National Laboratory. The advantages and challenges of leveraging the computational power of BG/Q will be detailed through the discussion of two applications. The first application is a Molecular Dynamics code called ddcMD. This is a code developed over the last decade at LLNL and ported to BG/Q. The second application is a cardiac modeling code called Cardioid. This is a code that was recently designed and developed at LLNL to exploit the fine-scale parallelism of BG/Q's SMP architecture. Through the lenses of these efforts I'll illustrate the need to rethink how we express and implement our computational approaches. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Malaquias, Lorena F B; Schulte, Heidi L; Chaker, Juliano A; Karan, Kapish; Durig, Thomas; Marreto, Ricardo N; Gratieri, Tais; Gelfuso, Guilherme M; Cunha-Filho, Marcilio
2018-01-01
This work aimed at obtaining an optimized itraconazole (ITZ) solid oral formulation in terms of palatability and dissolution rate by combining different polymers using hot melt extrusion (HME), according to a simplex centroid mixture design. For this, the polymers Plasdone ® (poly(1-vinylpyrrolidone-co-vinyl acetate) [PVP/VA]), Klucel ® ELF (2-hydroxypropyl ether cellulose [HPC]), and Soluplus ® (SOL, polyvinyl caprolactam-polyvinyl acetate-polyethylene glycol) were processed using laboratory HME equipment operating without recirculation at constant temperature. Samples were characterized by physicochemical assays, as well as by dissolution rate and palatability using an e-tongue. All materials became homogeneous and dense after HME processing. Thermal and structural analyses demonstrated drug amorphization, whereas IR spectroscopy evidenced drug stability and drug-excipient interactions in the HME systems. Extrudates presented a significant increase in dissolution rate compared to the ITZ raw material, mainly with formulations containing PVP/VA and HPC. A pronounced improvement in taste masking was also identified for HME systems, especially those containing higher amounts of SOL and HPC. Data showed the polymers act synergistically, favoring the formulation's functional properties. The predicted best formulation should contain ITZ 25.0%, SOL 33.2%, HPC 28.9%, and PVP/VA 12.9% (w/w). The optimized response considering dissolution rate and palatability reinforces the benefit of polymer combinations. Copyright © 2018 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
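For readers unfamiliar with the design, the candidate blends in a three-component simplex centroid design are the pure components, the 1:1 binary blends, and the ternary centroid. The sketch below generates those proportions generically; it is not the authors' exact run sheet, which also fixes the drug loading at ITZ 25% w/w on top of the polymer ratios.

    # Proportions of the three polymers in a simplex centroid mixture design:
    # pure components, 1:1 binary blends, and the ternary centroid (7 points in total).
    from itertools import combinations

    components = ["PVP/VA", "HPC", "SOL"]

    def simplex_centroid(names):
        points = []
        for k in range(1, len(names) + 1):
            for subset in combinations(names, k):
                frac = 1.0 / k
                points.append({n: (frac if n in subset else 0.0) for n in names})
        return points

    for point in simplex_centroid(components):
        print({n: round(f, 3) for n, f in point.items()})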
Yang, Dewei; Jing, Huijuan; Wang, Zhaowu; Li, Jiaheng; Hu, Mingxiang; Lv, Ruitao; Zhang, Rui; Chen, Deliang
2018-05-19
Activated carbon (AC) based supercapacitors exhibit intrinsic advantages in energy storage. Traditional two-step synthesis (carbonization and activation) of AC faces difficulties in precisely regulating its pore-size distribution and thoroughly removing residual impurities such as silicon oxide. This paper reports a novel coupled ultrasonication-milling (CUM) process for the preparation of hierarchically porous carbon (HPC) using corn cobs as the carbon resource. The as-obtained HPC has a large surface area (2288 m² g⁻¹) with a high mesopore ratio of ∼44.6%. When tested in a three-electrode system, the HPC exhibits a high specific capacitance of 465 F g⁻¹ at 0.5 A g⁻¹, 2.7 times that (170 F g⁻¹) of the commercial AC (YP-50F). In the two-electrode test system, the HPC device exhibits a specific capacitance of 135 F g⁻¹ at 1 A g⁻¹, about twice that (68 F g⁻¹) of YP-50F. These excellent energy-storage properties result from the CUM process, which efficiently removes the impurities and modulates the mesopore/micropore structure of the AC samples derived from the agricultural residues of corn cobs. The CUM process is an efficient method for preparing high-performance biomass-derived AC materials. Copyright © 2018 Elsevier Inc. All rights reserved.
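The reported values follow the standard galvanostatic relation C = I·Δt/(m·ΔV). The consistency sketch below uses only the capacitance and current density quoted above; the 1.0 V window and the derived discharge time are assumptions for illustration, not values from the paper.

    # Specific capacitance from a galvanostatic charge-discharge measurement,
    # C = I * dt / (m * dV), and the discharge time implied by the reported value.
    # The 1.0 V window is an assumed illustration; only C and I are from the text.
    current_density = 0.5        # A per gram of active material
    voltage_window = 1.0         # V (assumed)
    reported_capacitance = 465.0 # F per gram, three-electrode result quoted above

    # Discharge time implied by C = I*dt/(m*dV)  =>  dt = C*dV/I (per-gram basis)
    discharge_time = reported_capacitance * voltage_window / current_density
    print(f"implied discharge time: {discharge_time:.0f} s")   # ~930 s

    # Forward calculation back to F/g as a check
    capacitance = current_density * discharge_time / voltage_window
    print(f"specific capacitance:   {capacitance:.0f} F/g")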
Heterotrophic bacteria in drinking water distribution system: a review.
Chowdhury, Shakhawat
2012-10-01
The microbiological quality of drinking water in municipal water distribution systems (WDS) depends on several factors. Free residual chlorine and/or chloramines are typically used to minimize bacterial recontamination and/or regrowth in WDS. Despite such preventive measures, regrowth of heterotrophic (HPC) and opportunistic bacteria in bulk water and biofilms has yet to be controlled completely. No approach has shown complete success in eliminating biofilms or HPC bacteria from bulk water and pipe surfaces. Biofilms can provide shelter for pathogenic bacteria and protect these bacteria from disinfectants. Some HPC bacteria may be associated with aesthetic and non-life threatening diseases. Research to date has achieved important success in understanding occurrence and regrowth of bacteria in bulk water and biofilms in WDS. To achieve comprehensive understanding and to provide efficient control against bacteria regrowth, future research on bacteria regrowth dynamics and their implications is warranted. In this study, a review was performed on the literature published in this area. The findings and limitations of these papers are summarized. Occurrences of bacteria in WDS, factors affecting bacteria regrowth in bulk water and biofilms, bacteria control strategies, sources of nutrients, human health risks from bacterial exposure, modelling of bacteria regrowth and methods of bacteria sampling and detection and quantification are investigated. Advances to date are noted, and future research needs are identified. Finally, research directions are proposed to effectively control HPC and opportunistic bacteria in bulk water and biofilms in WDS.
GW Calculations of Materials on the Intel Xeon-Phi Architecture
NASA Astrophysics Data System (ADS)
Deslippe, Jack; da Jornada, Felipe H.; Vigil-Fowler, Derek; Biller, Ariel; Chelikowsky, James R.; Louie, Steven G.
Intel Xeon-Phi processors are expected to power a large number of High-Performance Computing (HPC) systems around the United States and the world in the near future. We evaluate the ability of GW and prerequisite Density Functional Theory (DFT) calculations for materials to utilize the Xeon-Phi architecture. We describe the optimization process and the performance improvements achieved. We find that the GW method, like other higher-level many-body methods beyond standard local/semilocal approximations to Kohn-Sham DFT, is particularly well suited for many-core architectures due to the ability to exploit a large amount of parallelism over plane-waves, band-pairs and frequencies. Support provided by the SCIDAC program, Department of Energy, Office of Science, Advanced Scientific Computing Research and Basic Energy Sciences. Grant Numbers DE-SC0008877 (Austin) and DE-AC02-05CH11231 (LBNL).
Drug repurposing: translational pharmacology, chemistry, computers and the clinic.
Issa, Naiem T; Byers, Stephen W; Dakshanamurthy, Sivanesan
2013-01-01
The process of discovering a pharmacological compound that elicits a desired clinical effect with minimal side effects is a challenge. Prior to the advent of high-performance computing and large-scale screening technologies, drug discovery was largely a serendipitous endeavor, as in the case of thalidomide for erythema nodosum leprosum or cancer drugs in general derived from flora located in far-reaching geographic locations. More recently, de novo drug discovery has become a more rationalized process where drug-target-effect hypotheses are formulated on the basis of already known compounds/protein targets and their structures. Although this approach is hypothesis-driven, the actual success has been very low, contributing to the soaring costs of research and development as well as the diminished pharmaceutical pipeline in the United States. In this review, we discuss the evolution in computational pharmacology as the next generation of successful drug discovery and implementation in the clinic where high-performance computing (HPC) is used to generate and validate drug-target-effect hypotheses completely in silico. The use of HPC would decrease development time and errors while increasing productivity prior to in vitro, animal and human testing. We highlight approaches in chemoinformatics, bioinformatics as well as network biopharmacology to illustrate potential avenues from which to design clinically efficacious drugs. We further discuss the implications of combining these approaches into an integrative methodology for high-accuracy computational predictions within the context of drug repositioning for the efficient streamlining of currently approved drugs back into clinical trials for possible new indications.
Sacro-anterior haemangiopericytoma: a case report
Ge, Xiu-Hong; Liu, Shuai-Shuai; Shan, Hu-Sheng; Wang, Zhi-Min; Li, Qian-Wen
2014-01-01
Haemangiopericytoma (HPC) is a rare vascular tumor with borderline malignancy, considerable histological variability, and unpredictable clinical and biological behavior. HPC can present a diagnostic challenge because of its indeterminate clinical, radiological, and pathological features. HPC generally presents in adulthood and is equally frequent in both sexes. HPC can arise in any site in the body as a slowly growing and painless mass. The precise cell type origin of HPC is uncertain. One third of HPCs occur in the head and neck areas. Exceptional cases of hemangioblastoma arising outside the head and neck areas have been reported, but little is known about their clinicopathologic and immunohistochemical features. This study reports on a case of a large sacro-anterior HPC in a 65-year-old male. PMID:25009757